Technology brings new tools to those with low vision caused by AMD
Although my practice is primarily an anterior segment practice, the care of patients with age-related macular degeneration (AMD) is something I have always followed closely. AMD is the leading cause of blindness in people aged 50 years and older.1 Twenty million Americans and 200 million people around the world have some form of AMD, with the global prevalence expected to reach 288 million by 2040.1 My early training included a vitreoretinal research fellowship, and I have several family members who have AMD, so I am well aware of the damage this disease can wreak on individuals who still want to be independent and productive.
We, as a field, are doing much better at preventing vision loss in retinal disease with regular anti-VEGF injections, but even with these incredibly effective therapies, we still see scarring and visual field deficits from both neovascular AMD and geographic atrophy in dry AMD. At least 4 million people in the US struggle with low vision, defined as best-corrected visual acuity worse than 20/40, and that total will increase dramatically in the coming decades as the population ages.2 I continue to be optimistic about the potential for new pharmaceutical therapies in development to slow the progression of AMD or enable even more effective treatments.
In the meantime, however, patients with low vision due to AMD or other central vision-destroying retinal disease can certainly benefit from the wave of high-tech advancements that are now available or about to hit the market. New, wearable, low-vision assistive devices go far beyond magnification to improve patients’ quality of life. They incorporate many technologies that we hear about in the consumer electronics world and in the workplace, including virtual or augmented reality (VR, AR), artificial intelligence (AI) and machine learning, voice activation, and text-to-speech. Additionally, improvements in battery technology have made sophisticated wearable devices more compact and comfortable for patients to use. Increasing the proportion of people with low vision who use these assistive or adaptive devices (currently <15%) is one of the public health goals in the federal government’s Healthy People 2030 initiative.3
The Table describes some of the latest devices available (or coming soon), along with their high-tech features. The International Academy of Low Vision Specialists (www.ialvs.com) is another source for information about assistive and adaptive technologies.
I think we are all familiar with VR headsets, like Oculus Quest. These devices are truly immersive and amazing when used for gaming or entertainment. In the low-vision context, specialized VR devices can magnify text and images from a front camera on the device—and sometimes from a smartphone camera, as well—by as much as ×24. Some can stream video content directly from a cable or streaming service. The Acesight VR headset (Zoomax Technology), for example, provides a high-resolution image with up to ×16 magnification, adjusted with a handheld remote control. Users can watch television (albeit through the device camera) or read a magnified image through the VR glasses. The headset can be set up to display images in the periphery. A downside of enclosed VR headsets is that they may make users feel dizzy or nauseated and are not safe to use while walking around.
AR allows the patient to see, and remain part of, the real world through and around the glasses or headset while also viewing the virtual or projected image inside the lenses. These headsets are more open, making them safer for ambulation. Some, like versions of the Eyedaptic glasses, require a wired connection to an external device.
One new AR device that I am very familiar with is the OcuLenz headset being developed by Ocutrx Technologies. Before its initial use, patients go through an automated visual field test to map the visual field defects in each eye. Using software algorithms, the device positions high-contrast, pixel-manipulated images from its cameras so they appear outside of the scotoma, within the patient’s functional peripheral field of vision. This effectively eliminates the central “hole” in vision that patients with AMD typically experience. With the OcuLenz headset, patients can read, watch movies, and walk around; eye trackers move the image as the patient moves their eyes (Figure). They can enjoy a very wide field of vision and excellent image quality, with seamless integration of the virtual and real-world elements. Because the OcuLenz headset is also Wi-Fi and cellular-enabled, it becomes the low-vision patient’s connectivity device, making phone calls and downloading movies and other content from the internet.
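For readers curious about the general idea behind scotoma-aware remapping, the sketch below illustrates the concept in simplified form: take the portion of the camera frame that would fall behind the mapped scotoma and redraw it, shrunk, just outside the scotoma boundary. This is a minimal conceptual example only; the function, parameters, and approach are hypothetical and are not Ocutrx’s actual algorithm.

```python
import numpy as np

def remap_outside_scotoma(frame, scotoma_mask, gaze_xy, scale=0.5):
    """Illustrative sketch only (not the OcuLenz implementation).

    Relocates the content hidden by a gaze-centered scotoma to a spot just
    outside it, so the content lands in the intact peripheral field.

    frame        -- H x W x 3 camera image (numpy array)
    scotoma_mask -- H x W boolean map (True = non-seeing area) derived from
                    an automated visual field test, registered to the display
    gaze_xy      -- (x, y) gaze position in display pixels from the eye tracker
    """
    h, w = scotoma_mask.shape
    ys, xs = np.nonzero(scotoma_mask)
    if xs.size == 0:
        return frame  # no scotoma mapped; show the frame unchanged

    # Bounding box of the scotoma and the image content hidden behind it
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    hidden = frame[y0:y1 + 1, x0:x1 + 1]

    # Downscale the hidden content (nearest-neighbour keeps this dependency-free)
    sh = max(1, int(hidden.shape[0] * scale))
    sw = max(1, int(hidden.shape[1] * scale))
    rows = (np.arange(sh) / scale).astype(int)
    cols = (np.arange(sw) / scale).astype(int)
    inset = hidden[rows][:, cols]

    # Redraw it just below the scotoma, clipped to the display bounds
    out = frame.copy()
    ty = min(max(0, y1 + 1), h - sh)
    tx = min(max(0, gaze_xy[0] - sw // 2), w - sw)
    out[ty:ty + sh, tx:tx + sw] = inset
    return out
```

In a real headset, this kind of remapping would run per frame, driven by the eye tracker, with the scotoma map built from the patient’s automated visual field test.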
Just as Siri or Alexa can play your favorite music, call your contacts, or tell you the weather on demand, similar technology has been integrated into wearable devices like the OrCam MyEye Pro, NuEyes e3+, and Envision Glasses. OcuLenz’s version of connected AI is Simone. Several of these devices are based on a Google Glass model, making them more discreet than VR/AR headsets; from some angles they look just like regular spectacles. Some rely on a wired or wireless connection to a phone or other handheld device, while others are activated purely by voice or by a touchpad on the device. Turning text—from a book, storefront sign, or product label—into speech helps users navigate their world more easily.
Rapid image and text processing using AI is also changing the world of low-vision devices. For example, OrCam MyEye Pro uses AI to summarize text from a screen or a printed page, describe who is in the room, facilitate facial recognition, and answer questions. Envision Glasses have recently integrated GPT-4 by OpenAI to describe a scene. You can interact with the feature and ask follow-up questions, much as you might refine a ChatGPT response. The OcuLenz headset uses both optical character recognition (OCR) and AI technology to help users see and read complex texts, such as a magazine article with call-out boxes, images, and advertisements, more naturally. The content is shown to the patient through the OcuLenz pixel manipulation system one line or one picture at a time, with or without the addition of artificial speech. The device can also alert patients to steps or obstacles in their paths.
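The basic OCR-to-speech pattern these devices build on can be sketched in a few lines. The example below is illustrative only; the open-source libraries chosen here (pytesseract for OCR, pyttsx3 for offline speech) are stand-ins, not what any of these vendors actually ships, and the function name is hypothetical.

```python
import pytesseract            # wrapper around the Tesseract OCR engine
import pyttsx3                # offline text-to-speech
from PIL import Image

def read_page_aloud(image_path, speak=True):
    """Extract text from a camera capture and present it one line at a time,
    optionally with synthesized speech, in the spirit of the line-by-line
    presentation described above."""
    text = pytesseract.image_to_string(Image.open(image_path))
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]

    engine = pyttsx3.init() if speak else None
    for line in lines:
        print(line)           # stand-in for rendering the line in the display
        if engine is not None:
            engine.say(line)
            engine.runAndWait()
    return lines
```

Commercial devices add layers on top of this skeleton, such as document layout analysis, scene description, and conversational follow-up, but the capture-recognize-present-speak loop is the common core.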
I think all of these are exciting developments—and ones that I know would have enriched my father’s life if they had been available to him. I watched him grow increasingly frustrated with his inability to perform visual tasks that once came easily, like reading stock prices or recognizing family and friends’ faces—and I do not want my patients or my other family members to have to experience what my father went through with AMD. I’ll never forget seeing a 97-year-old woman test out the OcuLenz headset recently and be able to see her son’s face for the first time in years. The first thing she said was, “Oh, I like your goatee!” Already, most of my older patients have smartphones; this familiarity will help them accept and adapt to new AI, AR, and other technologies in low-vision assistive devices. By guiding them to these advanced tools, we can have a huge impact on their quality of life.
Sheri Rowen, MD, FACS
e: srowen10@gmail.com
Rowen is a medical director at NVISION Eye Centers and maintains a clinical practice in Newport Beach, California. She has an equity interest in Ocutrx and serves as a medical adviser to the company.