Artificial Vision

Artificial Vision Support System (AVS^2) for Visual Prostheses

The human retina is not a mere receptor array for photonic information; it performs significant image processing within its layered neural network structure. Current state-of-the-art and near-future artificial vision implants provide tens of electrodes, allowing only limited-resolution (pixelated) visual perception. Real-time image processing and enhancement improve the limited vision afforded by camera-driven implants, such as the Artificial Retina, ultimately benefiting the subject. Preserving and enhancing contrast differences and transitions, such as edges, is especially important compared to picture details such as object texture. The Artificial Vision Support System (AVS2; Fink and Tarbell, 2005), devised and implemented under the direction of Dr. Wolfgang Fink at the Visual and Autonomous Exploration Systems Research Laboratory at Caltech, performs real-time image processing and enhancement of the miniature camera image stream before it is fed into the Artificial Retina. This research was part of the collaborative NSF-funded BMES ERC and U.S. Department of Energy-funded Artificial Retina Project designed to restore sight to the blind. Since it is difficult to predict exactly what blind subjects with camera-driven visual prostheses, such as the Artificial Retina, may be able to perceive, AVS2 provides the unique capability for current and future retinal implant carriers to choose from a wide variety of image processing filters to optimize the individual visual perception provided by their visual prostheses. AVS2 interfaces with a wide variety of digital cameras and is thus directly and immediately applicable to artificial vision prostheses that use an external or internal video-camera system as the first step in the vision stimulation/processing cascade. AVS2 presents the captured camera video stream in a user-defined pixelation that matches, e.g., the dimensions of the implanted electrode array of the Artificial Retina, subsequently processes the video data through user-selected image filters, and then issues the processed data to the Artificial Retina.
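
For illustration, the following sketch (Python, not the actual AVS2 implementation) mirrors the processing chain just described: a camera frame is block-averaged down to an assumed 16x16 electrode-array grid and then passed through a user-selected chain of filters, here a contrast stretch followed by a simple edge-emphasis step.

# Illustrative sketch only; grid size and filters are assumptions, not AVS2 internals.
import numpy as np

def pixelate(frame, grid=(16, 16)):
    """Block-average a grayscale frame (2-D array) down to the grid size,
    e.g. the dimensions of an implanted electrode array (assumed 16x16 here)."""
    gh, gw = grid
    h, w = frame.shape
    frame = frame[: h - h % gh, : w - w % gw]          # trim so blocks divide evenly
    blocks = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
    return blocks.mean(axis=(1, 3))

def stretch_contrast(img):
    """Linearly rescale intensities to the full 0-255 range."""
    lo, hi = img.min(), img.max()
    return (img - lo) * 255.0 / (hi - lo) if hi > lo else img

def emphasize_edges(img, weight=1.0):
    """Add a gradient-magnitude term to emphasize contrast transitions (edges)."""
    gy, gx = np.gradient(img)
    return np.clip(img + weight * np.hypot(gx, gy), 0, 255)

def process_frame(frame, filters, grid=(16, 16)):
    """Pixelate the frame, then apply the user-selected filter chain in order."""
    out = pixelate(frame.astype(float), grid)
    for f in filters:
        out = f(out)
    return out

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640))       # stand-in for a camera frame
    stimulus = process_frame(frame, [stretch_contrast, emphasize_edges])
    print(stimulus.shape)                                # (16, 16) grid for the implant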

 


DORA: Digital Object-Recognition Audio-Assistant for the Visually Impaired

Severely visually impaired patients, blind patients, and blind patients with retinal implants alike can benefit from a system such as the object-recognition assistant presented here. With the basic infrastructure (image capture, image analysis, and verbal announcement of image content) in place, this system can be expanded to report attributes such as object size and distance, or, by means of an IR-sensitive camera, to provide basic “sight” in poor visibility (e.g., foggy weather) or even at night.
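
A minimal sketch of that basic infrastructure, with the recognizer and speech output reduced to placeholders (the actual DORA components are not described here), could look as follows:

# Placeholder pipeline sketch: image capture -> image analysis -> verbal announcement.
import numpy as np

def capture_frame():
    """Stand-in for a camera grab; returns a synthetic grayscale frame."""
    return np.random.randint(0, 256, (480, 640))

def recognize_objects(frame):
    """Hypothetical analysis step returning recognized object labels;
    a real system would run a trained object-recognition model here."""
    return ["cup", "door"]

def announce(labels):
    """Stand-in for the verbal announcement; a real system would hand this
    string to a text-to-speech engine."""
    message = ("I can see: " + ", ".join(labels)) if labels else "Nothing recognized"
    print(message)

if __name__ == "__main__":
    announce(recognize_objects(capture_frame()))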

 


Optimization of Electrical Stimulation Patterns for Artificial Vision Implants

A great challenge exists in predicting how to electrically stimulate the retina of a blind subject via the Artificial Retina, a visual prosthesis, to elicit a desired visual perception. In epi-retinal stimulation, as is the case with the Artificial Retina, the electrical stimulus is delivered at the ganglion cell layer, i.e., at the end of the retinal neural network structure and processing cascade, feeding directly into the optic nerve. Consequently, it is difficult to predict how to electrically stimulate the retina of a blind subject with a visual prosthesis to elicit a visual perception that matches an object or scene as captured by the camera system driving the prosthesis. The electronic system must substitute for the information processing normally performed by retinal circuitry. This research was part of the collaborative U.S. Department of Energy-funded Artificial Retina Project designed to restore sight to the blind. Improving the visual experience provided by a visual prosthesis such as the Artificial Retina requires the efficient translation of the camera image stream, pre-processed by the Artificial Vision Support System (AVS2), into spatiotemporal patterns of electrical stimulation of retinal tissue delivered by the implanted electrode array. The Visual and Autonomous Exploration Systems Research Laboratory at Caltech, under the direction of Dr. Wolfgang Fink, directly addresses this challenge by developing and testing multivariate optimization algorithms. These algorithms directly involve the blind subjects in evaluating their visual perception. The optimization process helps define and refine the electrical stimulation patterns administered by the Artificial Retina so as to instill in blind subjects useful visual perceptions of objects or scenes viewed with the external camera system and pre-processed by AVS2. This novel approach to optimizing visual perception does not rely on assumptions regarding the residual processing capability or architecture of the affected retina, let alone on modeling the retina.

[Issued patent(s): U.S. 7,321,796 and U.S. 8,260,428]
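
The specific multivariate optimization algorithms are not detailed here; the following generic (1+1) evolutionary sketch merely illustrates the subject-in-the-loop principle: propose a stimulation pattern, obtain the subject's rating of the resulting percept, and keep whichever pattern was rated higher. The 60-electrode array size and the synthetic rating function are assumptions for illustration only.

# Generic subject-in-the-loop optimization sketch; not the project's actual algorithms.
import numpy as np

rng = np.random.default_rng(0)
N_ELECTRODES = 60                        # assumed array size, for illustration only

def subject_rating(pattern):
    """Placeholder for the blind subject's feedback (e.g., a 0-10 rating of how
    well the percept matches the target scene); here a synthetic stand-in."""
    target = np.linspace(0.0, 1.0, N_ELECTRODES)
    return 10.0 - 10.0 * np.abs(pattern - target).mean()

def optimize(iterations=200, step=0.05):
    pattern = rng.random(N_ELECTRODES)   # initial stimulation amplitudes in [0, 1]
    best_score = subject_rating(pattern)
    for _ in range(iterations):
        candidate = np.clip(pattern + rng.normal(0.0, step, N_ELECTRODES), 0.0, 1.0)
        score = subject_rating(candidate)
        if score >= best_score:          # keep the pattern the subject rated higher
            pattern, best_score = candidate, score
    return pattern, best_score

if __name__ == "__main__":
    _, score = optimize()
    print(f"best subject rating: {score:.2f}")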

 


μAVS^2: Portable Artificial Vision Support System

It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) may be able to perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly, in any user-defined order, to enhance their visual perception. To attain true portability, we employed a commercial off-the-shelf, battery-powered, general-purpose Linux microprocessor platform to create the Microcomputer-based Artificial Vision Support System, μAVS2, for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232, and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined, linear, sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound TCP/IP (Internet) or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated by their prostheses. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems.
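
As a sketch of the processing loop just described (not the actual μAVS2 firmware), the fragment below grabs frames, runs a user-defined sequence of filters, and pushes the pixelated result over TCP; the endpoint address, grid size, and example filter are illustrative assumptions.

# Sketch of a pixelate -> filter-loop -> TCP-out cycle; all parameters are assumptions.
import socket
import numpy as np

GRID = (16, 16)                                  # assumed electrode-array dimensions
PROSTHESIS_ADDR = ("192.168.0.10", 5000)         # hypothetical prosthesis endpoint

def grab_frame():
    """Stand-in for a USB or IP camera grab."""
    return np.random.randint(0, 256, (480, 640)).astype(float)

def pixelate(frame, grid=GRID):
    """Block-average the frame down to the electrode-array grid."""
    gh, gw = grid
    h, w = frame.shape
    frame = frame[: h - h % gh, : w - w % gw]
    return frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw).mean(axis=(1, 3))

def invert(img):
    """Example filter: invert intensities."""
    return 255.0 - img

def run(filters, n_frames=100):
    # Requires a listener at PROSTHESIS_ADDR (e.g., the prosthesis-side receiver).
    with socket.create_connection(PROSTHESIS_ADDR) as sock:
        for _ in range(n_frames):
            out = pixelate(grab_frame())
            for f in filters:                    # linear, sequential filter loop
                out = f(out)
            sock.sendall(out.astype(np.uint8).tobytes())

if __name__ == "__main__":
    run([invert])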

 


CYCLOPS: A Mobile Robotic Platform for Testing and Validating Image Processing and Autonomous Navigation Algorithms in Support of Artificial Vision Prostheses

While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject: instead of a normally sighted subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an all-wheel-drive (AWD), remotely controllable mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded to visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with a wireless video link, CYCLOPS supports both interactive telecommanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing with additional sensors. Its Internet connectivity renders CYCLOPS a worldwide-accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to their utility and efficiency in supporting and enhancing visual prostheses, while minimizing the need for valuable testing time with actual visual prosthesis carriers.
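
As a toy illustration of navigating from a pixelated view alone (not the actual CYCLOPS navigation code), the routine below steers toward the third of a low-resolution frame containing the fewest dark “obstacle” pixels:

# Toy steering rule from a pixelated view; threshold and grid size are assumptions.
import numpy as np

def steering_command(pixelated, obstacle_threshold=128):
    """pixelated: 2-D array of intensities (e.g., 16x16); dark cells are treated
    as obstacles. Returns 'left', 'straight', or 'right'."""
    obstacles = pixelated < obstacle_threshold
    thirds = np.array_split(obstacles, 3, axis=1)     # left, center, right of view
    load = [t.sum() for t in thirds]                  # obstacle count per third
    return ("left", "straight", "right")[int(np.argmin(load))]

if __name__ == "__main__":
    view = np.random.randint(0, 256, (16, 16))        # stand-in pixelated camera view
    print(steering_command(view))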

 


Tactile Feedback Devices for Artificial Vision Implant Carriers

Tactile Feedback Devices, developed at Caltech, present ground-truth perception to the blind subject with an artificial vision implant (e.g., the DOE's Artificial Retina) in the form of a pin pattern. This approach of presenting the ground truth to the blind subject takes advantage of the well-developed tactile sense of blind subjects and, together with simultaneous electrical stimulation of the retina, provides a promising path toward optimizing the visual perception of artificial vision implant carriers.

[Issued patent(s): U.S. 7,321,796 and U.S. 8,260,428]
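
As an illustration of the basic mapping involved (not the actual device driver), the sketch below thresholds a pixelated ground-truth image into a binary raised/lowered pin pattern, assuming an 8x8 pin array:

# Illustrative pixel-to-pin mapping; pin-array size and threshold are assumptions.
import numpy as np

def pin_pattern(pixelated, threshold=128):
    """Return a boolean grid: True = pin raised, False = pin lowered."""
    return np.asarray(pixelated) >= threshold

def render_as_text(pins):
    """Text rendering of the pin pattern for debugging ('#' = raised, '.' = lowered)."""
    return "\n".join("".join("#" if p else "." for p in row) for row in pins)

if __name__ == "__main__":
    pixelated = np.random.randint(0, 256, (8, 8))     # assumed 8x8 pin array
    print(render_as_text(pin_pattern(pixelated)))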

 
