David Albert Harrell
2010-11-21 01:06:19 UTC
-----------preface----------------------------------------
All attempts thus far to evaluate and develop ‘Tactile Image
Projection’ have dealt with a stationary subject. This approach does
not offer the kind of real-time environmental feedback necessary for
the subject to begin adapting to the device, nor for the brain to
naturally discover and correlate the relevance and inherent value of
the area being stimulated.
In other words, the most essential prerequisite of this device has
been overlooked entirely; that is, ‘Tactile Image Projection’ simply
will not function [to any degree even approaching the usefulness of
natural sight] on a stationary subject. Mobility is essential for
human adaptation and application, because it allows an immediate
environmental feedback interaction cycle to develop.
Putting such a device on a stationary subject would be like inventing
a parachute and then attempting to test it from the deck of a
submarine. Or, in a perhaps even more enlightening analogy, attempting
to evaluate this device with a stationary subject would be like trying
to determine the practical value of ‘a new invention known as the
automobile’ without taking the vehicle out of park.
------------end preface-----------------------------------
[Note: the earliest references to the concept of ‘Tactile Image
Projection’ appear to have been made by Paul Bach-y-Rita and Carter C.
Collins at the National Symposium for Information Display in 1967,
followed by Paul Bach-y-Rita, Carter C. Collins, Frank A. Saunders,
Benjamin White, and Lawrence Scadden at The Smith-Kettlewell Institute
of Visual Sciences in San Francisco, CA, in 1969.]
-------------------------------------------------------------
Did you ever put your hand on a TV screen to see if you can feel
anything? You can't. But if you could, you would feel thousands of
dots being electronically selected and lighted to create an image over
the entire two-dimensional field.
If such a field were delivered to the ‘sea of nerve endings’ contained
in a large area of skin, would a human being be able to make use of
this two-dimensionally ordered image?
The device is in three main parts:
1. A video camera.
2. A central processing unit (computer).
3. A flexible pad worn snugly against the skin (or a scanning emitter)
that stimulates the nerve endings of the dermal area.
Briefly, the pictorial image from a video camera is received and
processed by a computer, and then delivered to an ‘x,y
grid’ (optimally perhaps as high as 100,000 pixels or more, depending
on the density of the neural receptors being targeted), in the form of
dermal stimulating impulses. The specific type of stimulation is a
variable at this point. Electric shock, vibration, heat, and laser
(of low but perceivable intensity) have been considered, but there are
other possibilities, including the type of electric current which the
brain is already accustomed to receiving. The question of which kind
of stimulation would be most effective can only be answered through
experimentation.
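To make the camera-to-grid step concrete, here is a minimal sketch in
Python; the grid dimensions and function name are hypothetical choices
for illustration only, and the frame is assumed to be at least as
large as the electrode grid.
------------------------------------------------------------
import numpy as np

def frame_to_stimulation_grid(frame, grid_rows=250, grid_cols=400):
    """Downsample a grayscale camera frame (2-D array of 0-255 pixel
    values) to an x,y grid of stimulation intensities in 0.0-1.0.

    grid_rows x grid_cols = 250 x 400 = 100,000 points, matching the
    resolution suggested above; the numbers are purely illustrative.
    """
    h, w = frame.shape
    # Partition the frame into blocks, one block per electrode.
    row_edges = np.linspace(0, h, grid_rows + 1, dtype=int)
    col_edges = np.linspace(0, w, grid_cols + 1, dtype=int)
    grid = np.empty((grid_rows, grid_cols))
    for i in range(grid_rows):
        for j in range(grid_cols):
            # Each electrode receives the mean brightness of the image
            # region it represents.
            block = frame[row_edges[i]:row_edges[i + 1],
                          col_edges[j]:col_edges[j + 1]]
            grid[i, j] = block.mean() / 255.0
    return grid
------------------------------------------------------------
A real implementation would of course need to run this at video frame
rates; the nested loop is written for readability, not speed.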
The neurological system, pattern recognition capabilities, and natural
adaptive powers of the human mind accomplish the remainder of this
unorthodox direct image perception. It is the natural business of the
brain to respond to a specific overture of patterned stimuli. A
mobile blind subject, wearing such a device, would have an opportunity
to create a real-time feedback relationship with the physical world
environment.
One of the reasons I am convinced that such a tactile-conveyed image
can be usefully perceived by the brain is that I have subjectively
proven it. I have repeatedly conducted sessions in which I sat
quietly, blindfolded or with my eyes closed, while another person drew
simple pictures on my back. At first I was only able to deduce the
images by reconstructing them in my mind. Eventually, however, during
many of the more focused sessions, the touch of the finger on my skin
began to ‘light up’ in the darkness of my mind’s eye, leaving a trail
that lingered long enough, in many cases, for me to perceive the
entire image as a coherent, complete picture. This kind of exercise,
however, only demonstrates the conveyance of the two-dimensional plane
to the brain; to realize the full potential, you must get the subject
moving, navigating obstacles and negotiating objects.
What is essentially being suggested is that the normal two-dimensional
image that falls upon the cones and rods at the rear of the inner eye
(the retina) can be effectively replaced, in its role with the visual
cortex of the brain, by a larger dermal area (such as the back,
stomach, or scalp, for instance) undergoing a different, but also
two-dimensional, stimulation, creating a parallel system of input that
the brain would have an opportunity to recognize in a somewhat
familiar manner.
The overall objective of the device is to produce some form of
detectable stimulation corresponding with the lighted areas in the
video picture [with polarity reversible]. This stimulation might be
delivered by some form of hovering scan emitter, or a snugly worn pad
embedded with an electrode grid array.
The prototype should be designed to emit as many different types of
stimulation as possible, since we don’t yet know what will work most
effectively, and the stimulation may actually need to be changed
during eventual practical use, considering the propensity of specific
neural receptors to become over-stimulated. Given the current state
of electronic technology, I am certain that such stimulation could be
delivered to a targeted dermal area by a variety of methods and at a
range of intensities.
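As a hedged illustration of the ‘polarity reversible’ option and of
rotating among stimulation types to rest over-stimulated receptors,
the following sketch converts a grid of intensities (such as the one
produced in the sketch above) into per-electrode drive levels. The
modality names, gain, ceiling, and rotation schedule are assumptions
made for illustration, not proposals from the original text.
------------------------------------------------------------
import numpy as np

MODALITIES = ["vibration", "electrotactile", "thermal"]  # illustrative only

def grid_to_drive_levels(grid, invert=False, gain=1.0, ceiling=0.8):
    """Convert a 0.0-1.0 stimulation grid (NumPy array) into
    per-electrode drive levels.

    invert  -- if True, stimulate the dark areas instead of the lighted
               ones (the 'polarity reversible' option mentioned above)
    gain    -- overall intensity scaling
    ceiling -- hard upper bound, intended to limit over-stimulation
    """
    levels = (1.0 - grid) if invert else grid
    return np.clip(levels * gain, 0.0, ceiling)

def pick_modality(minutes_into_session, rotate_every=20):
    """Rotate among stimulation types on a fixed schedule, one crude
    way of resting receptors that habituate; which type works best is,
    as noted above, an open experimental question."""
    return MODALITIES[(minutes_into_session // rotate_every) % len(MODALITIES)]
------------------------------------------------------------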
Various versions of this device have been described before, including
discussion on the implantation of a transmitter into the visual
cortex. I consider an implant to the visual cortex to be a clumsy,
unnecessarily invasive, and less effective conduit to the brain than
natural tactile perception.
Notice that all versions of this device thus far appear to offer
relatively low, and therefore ineffectual, resolution. The most
essential prerequisite of ‘Tactile Image Projection’, however, has
been overlooked entirely; that is, this device simply will not
function [to any degree
even approaching the usefulness of natural sight] on a stationary
subject. Mobility and the immediate environmental feedback
interaction cycle are essential for human adaptation and application.
Applying such a device to a stationary subject would be like
inventing a parachute and then attempting to test it from the deck of
a submarine. Or, in a perhaps even more enlightening analogy,
attempting to evaluate this device with a stationary subject would be
like trying to determine the practical value of ‘a new invention known
as the automobile’ without taking the vehicle out of park.
The Institute of Medical Sciences (San Francisco, CA) [Carter C.
Collins], in a 1969-71 publication, mentions ‘immobility’ in passing
(along with weight, bulk, expense, and power consumption) as one of
the problems that have arisen in prior projects, but fails to point
out that such immobility entirely negates any attempt to develop, or
even evaluate the effectiveness of, Tactile Image Projection.
Assume that we have already built such a device, i.e., a moving
picture is being delivered to the subject in a perceptible format; if
the subject does not proceed to interact with a real-time environment,
one cannot expect a relevant learning cycle, or even useful and
sustained cerebral discovery of the stimulated area, to occur.
All attempts thus far have dealt with far too low a resolution and a
stationary subject. This scenario does not offer the kind of real-time
feedback necessary for the subject to begin adapting to the device,
nor for the brain to discover and correlate the relevance and inherent
value of the area being stimulated.
Remember, adaptability is perhaps the strongest single resource of the
brain. If a useful orderly image is made available, the brain will
‘tune into it’ out of need. The only other ingredient necessary to
achieve effective results with this device is a resolve to make use of
the newly introduced image.
One of the problems with past, and apparently current, thinking is
that the x,y grid is being applied to small areas such as the hands,
tongue, and fingertips. I realize receptor density is greater in
these areas, but the resolution needed is simply not possible using
hundreds of electrodes as opposed to thousands. The focus should be on
covering as large an area as possible (perhaps even wrapping around
from the back to the chest and stomach for greater total resolution).
A bodysuit covering all potentially useful dermal areas may even prove
most effective. [Notice that the resolution
of the ‘x,y grid’ should approach, or surpass to some degree, the
density of the dermal neural receptors being targeted.]
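A rough back-of-the-envelope check of this point follows; the body
areas and electrode spacings are illustrative assumptions, not
measured anatomical values.
------------------------------------------------------------
def electrodes_needed(area_cm2, spacing_mm):
    """Approximate electrode count for a dermal area at a given
    center-to-center spacing, assuming a simple square grid."""
    area_mm2 = area_cm2 * 100.0          # 1 cm^2 = 100 mm^2
    return round(area_mm2 / spacing_mm ** 2)

# Illustrative figures only: a back-plus-torso 'wrap' of roughly
# 3,000 cm^2 at 2 mm spacing versus a fingertip pad of roughly
# 4 cm^2 at 1 mm spacing.
print(electrodes_needed(3000, 2))   # 75000
print(electrodes_needed(4, 1))      # 400
------------------------------------------------------------
Even granting the fingertip its finer spacing, the larger area wins by
orders of magnitude in total electrode count, which is the substance
of the argument above.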
As for interfacing the subject with a real-time physical environment,
imagine for instance that a mobile subject were fitted with a working
portable device delivering a picture from a camera (cap- or
eyeglass-mounted, for instance) to an emitter pad or scanning emitter
array. Now place this subject in an environment devoid of light,
except for a line-guided path which the subject would begin to walk
along. Imagine this path at some point has a low-hanging, lighted
‘bright white’ limb (I don't want to appear cruel or flippant here,
but this is necessary to make my point). The first time the subject
encountered the limb, the Tactile Image Projection system would
register a ‘white stripe’ passing across the ‘emitter field’ just
before the subject was impeded by the limb.
My point is that eventually the subject will come to recognize the
passing of the ‘white stripe’ as a cognitive precursor to being struck
by the limb, and will duck. The rest is merely a matter of real-time
experience, learning to distinguish shapes and details. But the key
is to create conditions that offer instant feedback to a subject who
is mobile within a real-time physical space. This is the kind of
endeavor in which the human mind invariably excels to astonishing
heights. For optimum and expedient adaptation, stark black-and-white
training facilities would need to be developed, with large
checkerboard floors, for instance, and maximum contrast between walls
and objects.
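The ‘white stripe’ scenario can be mocked up in a few lines to show
the timing relationship described above: the stripe sweeps across the
emitter field before the collision, and that interval is exactly where
the association can be learned. Everything in this sketch (grid size,
camera range, walking speed) is a hypothetical stand-in.
------------------------------------------------------------
def stripe_row(distance_m, grid_rows=100, max_range_m=5.0):
    """Return the grid row the lighted limb would occupy at a given
    distance: far objects register near the top of the field and sweep
    toward the bottom as the subject approaches (a crude stand-in for
    head-mounted camera geometry)."""
    frac = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return int(frac * (grid_rows - 1))

# Walking toward the limb at roughly 1 m/s, sampled once per second.
for distance in range(5, -1, -1):
    if distance == 0:
        print("distance 0 m -> impact (or, after training, a duck)")
    else:
        print(f"distance {distance} m -> stripe at row {stripe_row(distance)}")
------------------------------------------------------------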
Finally, I am suggesting that if such an image is made available to
the brain, it is the natural business of the brain to recognize such
an area of two-dimensional data within a given feedback loop of
real-time information, this being completely analogous to the normal
relationship between the visual cortex and the rear surface of the
inner eye (in effect offering the visual cortex an ‘alternate
retina’). It is the correlation between the real-time world and this
new area of stimulation that will achieve the inevitable communication
of a useful moving ‘picture’.
Furthermore, I am convinced that the only reason a Tactile Image
Projection system has not already been developed and adapted into
practical use for the blind is that the problem of low resolution, and
this most critical prerequisite of ‘real-time mobile interaction with
the environment,’ are being overlooked.
David Albert Harrell