Discussion:
“Tactile Image Projection” Get this device ‘Moving!’ (and with resolution this time)
David Albert Harrell
2010-11-21 01:06:19 UTC
-----------preface----------------------------------------
All attempts thus far to evaluate and develop ‘Tactile Image
Projection’ have dealt with a stationary subject. This approach does
not offer the kind of real-time environmental feedback necessary for
the subject to begin adapting to the device, nor for the brain to
naturally discover and correlate the relevance and inherent value of
the area being stimulated.

In other words, the most essential prerequisite of this device has
been overlooked entirely; that is, ‘Tactile Image Projection’ simply
will not function [to any degree even approaching the usefulness of
natural sight] on a stationary subject. Mobility is essential for
human adaptation and application, thus allowing for an immediate
environmental feedback interaction cycle to develop.

Putting such a device on a stationary subject would be like inventing
a parachute and then attempting to test it from the deck of a
submarine. Or, to take a perhaps even more enlightening analogy,
attempting to evaluate this device with a stationary subject would be
like trying to determine the practical value of ‘a new invention known
as the automobile’ without taking the vehicle out of park.
------------end preface-----------------------------------
[Note the earliest references to the concept of ‘Tactile Image
Projection’ appear to have been made by Paul Bach-Y-Rita and Carter C.
Collins at the National Symposium for Information Display in 1967;
followed up by Paul Bach-Y-Rita, Carter C. Collins, Frank A. Saunders,
Benjamin White, and Lawrence Scadden at The Smith-Kettlewell Institute
of Visual Sciences in San Francisco, CA in 1969.]
-------------------------------------------------------------

Did you ever put your hand on a TV screen to see if you can feel
anything? You can't. But if you could, you would feel thousands of
dots being electronically selected and lighted to create an image over
the entire two-dimensional field.

If such a field were delivered to the ‘sea of nerve endings’ contained
in a large area of skin, would a human being be able to make use of
this two-dimensionally ordered image?

The device is in three main parts:
1. A video camera.
2. A central processing unit (computer).
3. A flexible pad worn snugly to the skin (or scanning emitter) that
stimulates the nerve endings of the dermal area.

Briefly, the pictorial image from a video camera is received and
processed by a computer, and then delivered to an ‘x,y
grid’ (optimally perhaps as high as 100,000 pixels or more, depending
on the density of the neural receptors being targeted), in the form of
dermal stimulating impulses. The specific type of stimulation is a
variable at this point. Electric shock, vibration, heat, and laser
(of low but perceivable intensity) have been considered, but there are
other possibilities, including the type of electric current which the
brain is already accustomed to receiving. This question of ‘What kind
of stimulation would be most effective?’ can only be answered through
experimentation.
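The camera-to-grid pipeline described above can be sketched in
miniature. The following is a minimal illustration only, not any
published implementation: it block-averages a grayscale camera frame
down to an on/off ‘x,y grid’ of stimulation commands. The function
name, the 0-255 pixel convention, and the threshold value are all
assumptions made for the sketch.

```python
def frame_to_tactile_grid(frame, grid_rows, grid_cols, threshold=128):
    """Downsample a grayscale frame (a list of rows of 0-255 ints) to
    an x,y grid of on/off stimulation commands by block averaging.
    Any remainder pixels that do not fill a whole block are ignored."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid_rows, w // grid_cols  # pixels per grid cell
    grid = []
    for r in range(grid_rows):
        row = []
        for c in range(grid_cols):
            block = [frame[r * bh + i][c * bw + j]
                     for i in range(bh) for j in range(bw)]
            avg = sum(block) / len(block)
            row.append(1 if avg >= threshold else 0)  # 1 = stimulate
        grid.append(row)
    return grid
```

For example, a 4x4 frame whose left half is bright and right half dark
reduces, at 2x2 resolution, to a grid whose left column fires and
whose right column does not.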

The neurological system, pattern recognition capabilities, and natural
adaptive powers of the human mind accomplish the remainder of this
unorthodox direct image perception. It is the natural business of the
brain to respond to a specific overture of patterned stimuli. A
mobile blind subject, wearing such a device, would have an opportunity
to create a real-time feedback relationship with the physical world
environment.

One of the reasons I am convinced that such a tactile-conveyed image
can be usefully perceived by the brain is that I have subjectively
proven it. I have repeatedly conducted sessions in which I sat
quietly, blindfolded or with my eyes closed, while another person drew
simple pictures on my back. At first I was only able to deduce the
images by reconstructing them in my mind; eventually, however, during
many of the more focused sessions, the touch of the finger on my skin
began to ‘light up’ in the darkness of my mind’s eye, leaving a trail
that lingered long enough in many cases for me to perceive the entire
image as a coherent, complete picture. This kind of exercise, however,
only demonstrates the conveyance of the two dimensional plane to the
brain; in order to realize full potential you must get the subject
moving, navigating obstacles and negotiating objects.

What is essentially being suggested is that the normal two-dimensional
image that falls upon the cones and rods on the rear portion of the
inner eye (retina), can be effectively replaced (in its role with the
visual cortex of the brain), by a larger dermal area (such as the
back, stomach, or scalp for instance) undergoing a different (but also
two dimensional) stimulation; creating a parallel system of input that
the brain would have an opportunity to recognize in a somewhat
familiar manner.

The overall objective of the device is to produce some form of
detectable stimulation corresponding with the lighted areas in the
video picture [with polarity reversible]. This stimulation might be
delivered by some form of hovering scan emitter, or a snugly worn pad
embedded with an electrode grid array.
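The bracketed ‘polarity reversible’ option above amounts to a single
switch in software: stimulate the dark regions of the frame instead of
the lighted ones. A minimal sketch, assuming the simple 0/1 grid
format used throughout this discussion:

```python
def apply_polarity(grid, invert=False):
    """Return a copy of an on/off stimulation grid; with invert=True,
    dark regions fire instead of lighted ones (reversed polarity)."""
    if not invert:
        return [row[:] for row in grid]
    return [[1 - cell for cell in row] for row in grid]
```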

The prototype should be designed to emit as many different types of
stimulation as possible since we don’t yet know what will work most
effectively, and such stimulation may actually need to be changed
during eventual practical usage, considering the propensity of specific
neural receptors to become over-stimulated. Given the current state
of electronic technology, I am certain that such stimulation could be
delivered to a targeted dermal area in a variety of different methods
and intensities.

Various versions of this device have been described before, including
discussion on the implantation of a transmitter into the visual
cortex. I consider an implant to the visual cortex to be a clumsy,
unnecessarily invasive, and less effective conduit to the brain than
natural tactile perception.

Notice that all versions of this device thus far appear to be offering
relatively low, therefore ineffectual, resolution. The most essential
prerequisite of ‘Tactile Image Projection’ however has been overlooked
entirely; that is, this device simply will not function [to any degree
even approaching the usefulness of natural sight] on a stationary
subject. Mobility and the immediate environmental feedback
interaction cycle are essential for human adaptation and application.

Applying such a device to a stationary subject would be like
inventing a parachute and then attempting to test it from the deck of
a submarine. Or, to take a perhaps even more enlightening analogy,
attempting to evaluate this device with a stationary subject would be
like trying to determine the practical value of ‘a new invention known
as the automobile’ without taking the vehicle out of park.

The Institute of Medical Sciences (San Francisco, CA) [Carter C.
Collins] in a 1969-71 publication mentions ‘immobility’ in passing
(along with weight, bulk, expense, and power consumption) as one of
the problems that have arisen in prior projects, but fails to point out
that such immobility entirely negates any attempt to develop or even
evaluate the effectiveness of Tactile Image Projection.

Assume that we have already built such a device, i.e., a moving picture
is being delivered to the subject in perceptible format; if the
subject does not proceed to interface with a real-time environment,
one cannot expect a relevant learning cycle, or even useful and
sustained cerebral discovery of the area, to occur.

All attempts thus far have dealt with far too low a resolution, and a
stationary subject. This scenario does not offer the kind of real-
time feedback necessary for the subject to begin adapting to the
device, nor for the brain to discover and correlate the relevance and
inherent value of the area being stimulated.

Remember, adaptability is perhaps the strongest single resource of the
brain. If a useful orderly image is made available, the brain will
‘tune into it’ out of need. The only other ingredient necessary to
achieve effective results with this device is a resolve to make use of
the newly introduced image.

One of the problems with past, and apparently current, thinking is
that the x,y grid is being applied to small areas such as the hands,
tongue, and fingertips. I realize receptor density is greater in
these areas, but the resolution needed is simply not possible using
hundreds of electrodes as opposed to thousands. The focus should be on
covering as large an area as possible (perhaps even wrapping around to
the chest and stomach from the back for greater resolution). A
bodysuit providing coverage of all available potentially useful dermal
areas may even be proven most effective. [Notice that the resolution
of the ‘x,y grid’ should approach, or surpass to some degree, the
density of the dermal neural receptors being targeted.]
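One way to sanity-check the grid-sizing question is a back-of-envelope
estimate of how many independently perceivable points an area could
support, assuming one electrode per square of side equal to the
two-point discrimination distance there. The areas and discrimination
distances below are approximate textbook values, not measurements from
any of the projects mentioned:

```python
# name: (approx. skin area in cm^2, two-point discrimination in cm)
# Both columns are rough, assumed figures for illustration only.
AREAS = {
    "back":      (3500, 4.0),
    "abdomen":   (1500, 3.5),
    "fingertip": (  10, 0.3),
}

def useful_electrodes(area_cm2, two_point_cm):
    """Upper bound on independently perceivable points: one electrode
    per square whose side is the two-point discrimination distance."""
    return int(area_cm2 / (two_point_cm ** 2))

for name, (area, tp) in AREAS.items():
    print(name, useful_electrodes(area, tp))
```

Under these assumed figures, a large low-acuity area and a small
high-acuity one end up within the same order of magnitude, which is
exactly the trade-off at issue between the large-area approach argued
here and the fingertip/tongue devices criticized above.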

As for interfacing the subject with a real-time physical environment,
imagine for instance that a mobile subject was fitted with a working
portable device delivering a picture from a camera (cap or eyeglasses
mounted for instance) to an emitter pad or scanning emitter array.

Now place this subject in an environment void of light, except for a
line guided path which the subject would begin to walk upon. Imagine
this path at some point has a low hanging lighted ‘bright white’ limb
(I don't want to appear cruel or flippant here, but this is necessary
to make my point). The first time the subject encountered the limb,
there would be a registration by the Tactile Image Projection system
of a ‘white stripe’ that would pass across the ‘emitter field’ just
before the subject was impeded by the limb.

My point being that eventually the subject will associate the passing
of the ‘white stripe’ as a cognitive precursor to being struck by the
limb, and will duck. The rest is merely a matter of real-time
experience, learning to distinguish shapes and details. But the key
is to create conditions that offer instant feedback, mobile within a
real-time physical space. This is the kind of endeavor in which the
human mind invariably excels to astonishing heights. For optimum and
expedient adaptation opportunities, stark black-and-white training
facilities would need to be developed: large checkerboard floors, for
instance, with walls and objects contrasting maximally against one
another.
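The limb scenario can be mocked up as a toy simulation. Every number
in it (the field size, the per-step walking distance, the row at which
the subject has learned to duck) is an illustrative assumption, not a
parameter of any real device:

```python
def stripe_row(distance_to_limb, field_rows=10, max_range=10):
    """Row of the emitter field where the bright limb registers: at or
    beyond max_range it sits at the top row (0), then sweeps downward
    across the field as the subject closes the distance."""
    if distance_to_limb >= max_range:
        return 0
    return int((1 - distance_to_limb / max_range) * (field_rows - 1))

def walk_until_duck(start_distance=10, duck_row=7):
    """Step toward the limb one unit at a time; 'duck' once the stripe
    crosses the learned duck_row. Returns the distance remaining when
    the subject ducks (0 means the subject was struck)."""
    d = start_distance
    while d > 0:
        if stripe_row(d) >= duck_row:
            return d
        d -= 1
    return 0
```

In this toy version the subject who has learned the stripe-to-limb
association ducks with distance still to spare, which is the whole
point of the precursor cue.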

Finally, I am suggesting that if such an image is made available to the
brain, it is the natural business of the brain to recognize such
an area of two-dimensional data within a given feedback loop of real-
time information, this being completely analogous to the normal
relationship between the visual cortex and the rear portion of the
inner eye surface (in effect offering the visual cortex an ‘alternate
retina’). It is the correlation between the real-time world and this
new area of stimulation that will achieve the inevitable
communication of a useful moving ‘picture’.

Furthermore, I am convinced the only reason a Tactile Image Projection
system has not already been developed and adapted into practical use
for the blind, is because the inadequacies of low resolution, and this
most critical prerequisite of ‘real-time mobile interaction with the
environment,’ are being overlooked.

David Albert Harrell
J. P. Gilliver (John)
2010-11-21 03:05:59 UTC
I can only comment for the newsgroup a.c.b-u; for we readers of that
'group, this article has sprung fully-formed out of the blue. I can't
speak for the several other 'groups it has been posted to.

It does seem that the author of the article feels some resentment
toward somebody about things he thinks have not been, or are not
being, done.


In message
Post by David Albert Harrell
-----------preface----------------------------------------
All attempts thus far to evaluate and develop ‘Tactile Image
Projection’ have dealt with a stationary subject. This approach does
not offer the kind of real-time environmental feedback necessary for
the subject to begin adapting to the device, nor for the brain to
naturally discover and correlate the relevance and inherent value of
the area being stimulated.
I agree that the perception of a changing environment might well make
learning to use such equipment easier - though I don't think I'd go as
far as to state categorically that that is the case. It may vary with
different technologies, and with different subjects: some subjects may
find it easier to use some technologies with a fixed environment
initially.
Post by David Albert Harrell
In other words, the most essential prerequisite of this device has
been overlooked entirely; that is, ‘Tactile Image Projection’ simply
will not function [to any degree even approaching the usefulness of
natural sight] on a stationary subject. Mobility is essential for
human adaptation and application, thus allowing for an immediate
environmental feedback interaction cycle to develop.
See above. Also, I'm sure it has not been overlooked as you put it: it
may not have been implemented, due to the bulk of the equipment and its
supporting equipment, in the past, which may no longer be the case
(though prototypes are still likely to be bigger than production
models).
Post by David Albert Harrell
Putting such a device on a stationary subject, would be like inventing
a parachute and then attempting to test it from the deck of a
submarine. Or a perhaps even more enlightening analogy, attempting to
Submarines don't have decks really, but I know what you mean.
Post by David Albert Harrell
evaluate this device with a stationary subject would be like trying to
determine the practical values of ‘a new invention known as the
automobile’ without taking the vehicle out of park.
Neither analogy is completely valid: it is certainly possible that a
test subject might be able to perceive a stationary environment, i. e.
get some benefit from it.
Post by David Albert Harrell
------------end preface-----------------------------------
[Note the earliest references to the concept of ‘Tactile Image
Projection’ appear to have been made by Paul Bach-Y-Rita and Carter C.
followed up by Paul Bach-Y-Rita, Carter C. Collins, Frank A. Saunders,
Benjamin White, and Lawrence Scadden at The Smith-Kettlewell Institute
of Visual Sciences in San Francisco, CA in 1969.]
I remember seeing some TV prog. (I am sighted) about a system which
worked by a grid of vibrating needles pressed against the back of the
subject, connected to a camera. Although this was a decade or two or
three ago, I think the limitation of the grid being, if I remember,
only a few tens of points square was more because of the lack of
discrimination in the back (i. e. there aren't enough nerves there)
than limitations of the equipment.
[]
Post by David Albert Harrell
If such a field were delivered to the ‘sea of nerve endings’ contained
in a large area of skin, would a human being be able to make use of
this two-dimensionally ordered image?
There are few areas of the human body that have anything like the number
of nerve endings you mention below.
Post by David Albert Harrell
1. A video camera.
2. A central processing unit (computer).
3. A flexible pad worn snugly to the skin (or scanning emitter) that
stimulates the nerve endings of the dermal area.
Briefly, the pictorial image from a video camera is received and
processed by a computer, and then delivered to an ‘x,y
grid’ (optimally perhaps as high as 100,000 pixels or more, depending
on the density of the neural receptors being targeted), in the form of
dermal stimulating impulses. The specific type of stimulation is a
Making a grid of that size - even if there were enough nerves to detect
it, which there aren't - would be a pretty major technical challenge.
Post by David Albert Harrell
variable at this point. Electric shock, vibration, heat, and laser
(of low but perceivable intensity) have been considered, but there are
other possibilities, including the type of electric current which the
brain is already accustom to receiving. This question of ‘What kind
of stimulation would be most effective?’ can only be answered through
experimentation.
That (i. e. experimentation) does sound like the way to go. I don't
think the laser is actually a separate type of stimulation - for the
purposes of this discussion, laser would just be heat.
Post by David Albert Harrell
The neurological system, pattern recognition capabilities, and natural
adaptive powers of the human mind accomplish the remainder of this
unorthodox direct image perception. It is the natural business of the
brain to respond to a specific overture of patterned stimuli. A
mobile blind subject, wearing such a device, would have an opportunity
to create a real-time feedback relationship with the physical world
environment.
The nearest we have got to this so far has been the Optacon;
unfortunately, the company that made them folded some years ago.
Post by David Albert Harrell
One of the reasons I am convinced, that such a tactile-conveyed image
can be usefully perceived by the brain, is that I have subjectively
proven it. I have repeatedly conducted sessions in which I sat
quietly, blindfolded or with my eyes closed, while another person drew
simple pictures on my back. At first I was only able to deduce the
images by reconstructing them in my mind, however eventually, during
many of the more focused sessions, the touch of the finger on my skin
began to ‘light up’ in the darkness of my mind’s eye, leaving a trail
that lingered long enough in many cases for me to perceive the entire
image as a coherent complete picture. This kind of exercise however
The grid of vibrating needles I mentioned earlier did also achieve some
success, i. e. the subjects were able eventually to see shapes and so
on.
Post by David Albert Harrell
only demonstrates the conveyance of the two dimensional plane to the
brain; in order to realize full potential you must get the subject
moving, navigating obstacles and negotiating objects.
Again, I tend to agree that that is likely to help.
Post by David Albert Harrell
What is essentially being suggested is that the normal two-dimensional
Are you suggesting it? Writing in the third person isn't really a good
idea for newsgroups; in my opinion it isn't for scientific papers
either, as it makes them sound cold and dead, but many scientists seem
to feel more secure if things are written that way.
Post by David Albert Harrell
image that falls upon the cones and rods on the rear portion of the
inner eye (retina), can be effectively replace (in its role with the
visual cortex of the brain), by a larger dermal area (such as the
back, stomach, or scalp for instance) undergoing a different (but also
two dimensional) stimulation; creating a parallel system of input that
the brain would have an opportunity to recognize in a somewhat
familiar manner.
It seems that the brain does indeed, after a while, start to process the
information as pictorial information, however it comes in.
Post by David Albert Harrell
The overall objective of the device is to produce some form of
detectable stimulation corresponding with the lighted areas in the
video picture [with polarity reversible]. This stimulation might be
delivered by some form of hovering scan emitter, or a snugly worn pad
embedded with an electrode grid array.
The prototype should be designed to emit as many different types of
stimulation as possible since we don’t yet know what will work most
effectively, and such stimulation may actually need to be changed
during eventual practical usage considering the propensity of specific
neural receptors to become over-stimulated. Given the current state
of electronic technology, I am certain that such stimulation could be
delivered to a targeted dermal area in a variety of different methods
and intensities.
It's not the electronic technology, which can certainly do it: high
resolution video processing is everyday these days. What is not anything
like so commonplace is the transducer, especially if it is going to
produce several different types of stimulation. Apart from video
displays, where the only stimulation is light, there is no similarly
dense hardware: the closest I can think of is print heads, but they
cover a fairly small area with not that huge a number of transducers -
they cover the A4 page by scanning the head over it.
Post by David Albert Harrell
Various versions of this device have been described before, including
discussion on the implantation of a transmitter into the visual
cortex. I consider an implant to the visual cortex to be a clumsy,
unnecessarily invasive, and less effective conduit to the brain than
natural tactile perception.
It's not really on because of the amount of invasion required, which is
intrinsically dangerous, and the number of connections required.
Post by David Albert Harrell
Notice that all versions of this device thus far appear to be offering
relatively low, therefore ineffectual, resolution. The most essential
prerequisite of ‘Tactile Image Projection’ however has been overlooked
entirely; that is, this device simply will not function [to any degree
even approaching the usefulness of natural sight] on a stationary
That is your view. It may be true, but you need more evidence than just
saying it.
Post by David Albert Harrell
subject. Mobility and the immediate environmental feedback
interaction cycle are essential for human adaptation and application.
Desirable, certainly. Whether essential is still to be determined.
Post by David Albert Harrell
Applying such a device to a stationary subject, would be like
inventing a parachute and then attempting to test it from the deck of
a submarine. Or a perhaps even more enlightening analogy, attempting
to evaluate this device with a stationary subject would be like trying
to determine the practical values of ‘a new invention known as the
automobile’ without taking the vehicle out of park.
We've already had that paragraph.
Post by David Albert Harrell
The Institute of Medical Sciences (San Francisco, CA) [Carter C.
Collins] in a 1969-71 publication mentions ‘immobility’ in passing
(along with weight, bulk, expense, and power consumption) as one of
the problems that has arisen in prior projects, but fails to point out
that such immobility entirely negates any attempt to develop or even
evaluate the effectiveness of Tactile Image Projection.
And you fail to explain how you would overcome these problems.
Certainly, technology in terms of the electronics required has moved on
almost out of all recognition in the time since that was published, but
the transducer has not developed to anything like the same extent - and
the lack of suitable nerves in the skin, of course, has not changed.
Post by David Albert Harrell
Assume that we have already built such a device, ie a moving picture
is being delivered to the subject in perceptible format; if the
subject does not proceed to interface with a real-time environment,
one cannot expect relevant learning cycle, or even useful and
sustained cerebral discovery of the area, to occur.
The initial assumption is a big one!
Post by David Albert Harrell
All attempts thus far have dealt with far too low resolution, and a
stationary subject. This scenario does not offer the kind of real-
time feedback necessary for the subject to begin adapting to the
device, nor for the brain to discover and correlate the relevance and
inherent value of the area being stimulated.
You may well be right. But stating the problem does not solve it.
[]
Post by David Albert Harrell
One of the problems with past, and apparently current, thinking is
that the x,y grid is being applied to small areas such as the hands,
tongue, and finger tips. I realize receptor density is greater in
these areas, but the resolution needed is simply not possible using
hundreds of electrodes as oppose to thousands. The focus should be on
covering as large an area as possible (perhaps even wrapping around to
the chest and stomach from the back for greater resolution). A
You're still not going to reach the 100,000 pixels you want.
Post by David Albert Harrell
bodysuit providing coverage of all available potentially useful dermal
areas may even be proven most effective. [Notice that the resolution
of the ‘x,y grid’ should approach, or surpass to some degree, the
density of the dermal neural receptors being targeted.]
I suspect the brain would not really adapt to signals received from such
a large area, though I may be wrong.
Post by David Albert Harrell
As for interfacing the subject with a real-time physical environment,
imagine for instance that a mobile subject was fitted with a working
portable device delivering a picture from a camera (cap or eyeglasses
mounted for instance) to an emitter pad or scanning emitter array.
Now place this subject in an environment void of light, except for a
line guided path which the subject would begin to walk upon. Imagine
this path at some point has a low hanging lighted ‘bright white’ limb
(I don't want to appear cruel of flippant here, but this is necessary
to make my point). The first time the subject encountered the limb,
there would be a registration by the Tactile Image Projection system
of a ‘white stripe’ that would pass across the ‘emitter field’ just
before the subject was impeded by the limb.
My point being that eventually the subject will associate the passing
of the ‘white stripe’ as a cognitive precursor to being struck by the
limb, and will duck. The rest is merely a matter of real-time
I think a learned response to an artificial situation is of limited
value. If, as you seem to be, you are talking about avoiding obstacles,
then obstacles will come in all shapes and sizes.
Post by David Albert Harrell
experience, learning to distinguish shapes and details. But the key
is to create conditions that offer instant feedback, mobile within a
real-time physical space. This is the kind of endeavor in which the
human mind invariably excels to astonishing heights. For optimum and
Agreed.
Post by David Albert Harrell
expedient adaptation opportunities, stark black and white training
facilities would need to be developed, large checkerboard floors for
instance, with walls compared to objects offering maximum contrast.
That _may_ be the way to go. But remember that, eventually, for the
system to be of practical use, the subject will have to leave the
laboratory, into a world where things have not been prepared.
Post by David Albert Harrell
Finally I am suggesting that if such an image is made available to the
brain, that it is the natural business of the brain to recognize such
an area of two-dimensional data within a given feedback loop of real-
time information, this being completely analogous to the normal
relationship between the visual cortex and the rear portion of the
inner eye surface (in effect offering the visual cortex an ‘alternate
retina’). It is the correlation between the real-time world, and this
new area of stimulation, that will achieved the inevitable
communication of a useful moving ‘picture’.
Furthermore, I am convinced the only reason a Tactile Image Projection
system has not already been developed and adapted into practical use
for the blind, is because the inadequacies of low resolution, and this
most critical prerequisite of ‘real-time mobile interaction with the
environment,’ are being overlooked.
You write as if something is not being done because no-one has realised
it is a problem. I am sure this is not the case.
Post by David Albert Harrell
David Albert Harrell
John Paul Gilliver (electronic generalist, with no medical training as
such)
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G.5AL-IS-P--Ch++(p)***@T0H+Sh0!:`)DNAf

Perhaps it's worth remembering that Albert Einstein defined common sense as a
'set of prejudices acquired by age 18'. (Quoted by Gordon Dennis on letters page
of computing, 5 February 2004.)
Brian Gaff
2010-11-21 10:18:28 UTC
I've not heard of it, but from the standpoint I have, what is almost
certainly needed for stationary images is dithering of outlines, to
allow nerves the stimulus they need to work. After all, wherever you
touch the body the effect is transient, and continued touching with no
break is not registered; this is why we are not aware of clothing etc.
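A minimal sketch of the dithering idea, assuming the simple 0/1
emitter-grid format discussed above: on each refresh, randomly drop a
fraction of the active cells, so no receptor receives the unbroken
stimulus that leads to adaptation.

```python
import random

def dither_frame(grid, duty=0.5, rng=random):
    """Return a dithered copy of an on/off stimulation grid: each
    active cell stays on with probability `duty` this refresh, so a
    static outline still produces transient, perceivable stimulation.
    Cells that are off in the input are never switched on."""
    return [[cell if (cell and rng.random() < duty) else 0
             for cell in row] for row in grid]
```

Calling this once per refresh with the same static input grid yields a
different firing pattern each time, which is exactly the break in
continuity the nerves need.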

Brian
--
Brian Gaff - ***@blueyonder.co.uk
Note:- In order to reduce spam, any email without 'Brian Gaff'
in the display name may be lost.
Blind user, so no pictures please!
Post by J. P. Gilliver (John)
I can only comment for the newsgroup a.c.b-u; for we readers of that
'group, this article has sprung fully-formed out of the blue. I can't speak
for the several other 'groups it has been posted to.
It does seem that the author of the article feels some resentment to
somebody about things he things have not or are not being done.
In message
Post by David Albert Harrell
-----------preface----------------------------------------
All attempts thus far to evaluate and develop 'Tactile Image
Projection' have dealt with a stationary subject. This approach does
not offer the kind of real-time environmental feedback necessary for
the subject to begin adapting to the device, nor for the brain to
naturally discover and correlate the relevance and inherent value of
the area being stimulated.
I agree that the perception of a changing environment might well make
learning to use such equipment easier - though I don't think I'd go as far
as to state categorically that that is the case. It may vary with
different technologies, and with different subjects: some subjects may
find it easier to use some technologies with a fixed environment
initially.
Post by David Albert Harrell
In other words, the most essential prerequisite of this device has
been overlooked entirely; that is, 'Tactile Image Projection' simply
will not function [to any degree even approaching the usefulness of
natural sight] on a stationary subject. Mobility is essential for
human adaptation and application, thus allowing for an immediate
environmental feedback interaction cycle to develop.
See above. Also, I'm sure it has not been overlooked as you put it: it may
not have been implemented, due to the bulk of the equipment and its
supporting equipment, in the past, which may no longer be the case (though
prototypes are still likely to be bigger than production models).
Post by David Albert Harrell
Putting such a device on a stationary subject, would be like inventing
a parachute and then attempting to test it from the deck of a
submarine. Or a perhaps even more enlightening analogy, attempting to
Submarines don't have decks really, but I know what you mean.
Post by David Albert Harrell
evaluate this device with a stationary subject would be like trying to
determine the practical values of 'a new invention known as the
automobile' without taking the vehicle out of park.
Neither analogy is completely valid: it is certainly possible that a test
subject might be able to perceive a stationary environment, i. e. get some
benefit from it.
Post by David Albert Harrell
------------end preface-----------------------------------
[Note the earliest references to the concept of 'Tactile Image
Projection' appear to have been made by Paul Bach-Y-Rita and Carter C.
followed up by Paul Bach-Y-Rita, Carter C. Collins, Frank A. Saunders,
Benjamin White, and Lawrence Scadden at The Smith-Kettlewell Institute
of Visual Sciences in San Francisco, CA in 1969.]
I remember seeing some TV prog. (I am sighted) about a system which worked
by a grid of vibrating needles pressed against the back of the subject,
connected to a camera. Although this was a decade or two or three ago, I
think the limitation - the grid, if I remember rightly, being only a few
tens of points square - was more because of the lack of discrimination in
the back (i. e. there aren't enough nerves there) than limitations of the
equipment.
[]
Post by David Albert Harrell
If such a field were delivered to the 'sea of nerve endings' contained
in a large area of skin, would a human being be able to make use of
this two-dimensionally ordered image?
There are few areas of the human body that have anything like the number
of nerve endings you mention below.
Post by David Albert Harrell
1. A video camera.
2. A central processing unit (computer).
3. A flexible pad worn snugly to the skin (or scanning emitter) that
stimulates the nerve endings of the dermal area.
Briefly, the pictorial image from a video camera is received and
processed by a computer, and then delivered to an 'x,y
grid' (optimally perhaps as high as 100,000 pixels or more, depending
on the density of the neural receptors being targeted), in the form of
dermal stimulating impulses. The specific type of stimulation is a
Making a grid of that size - even if there were enough nerves to detect
it, which there aren't - would be a pretty major technical challenge.
Post by David Albert Harrell
variable at this point. Electric shock, vibration, heat, and laser
(of low but perceivable intensity) have been considered, but there are
other possibilities, including the type of electric current which the
brain is already accustomed to receiving. This question of 'What kind
of stimulation would be most effective?' can only be answered through
experimentation.
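To make the pipeline concrete, here is a rough sketch (my own illustration,
in Python with NumPy; nothing here comes from the original posts) of how a
camera frame might be reduced to the x,y stimulation grid - a 250 x 400
grid giving the ~100,000 points mentioned:

```python
import numpy as np

def frame_to_grid(frame, grid_shape=(250, 400)):
    """Downsample a grayscale camera frame (2-D array, values 0-255)
    to the x,y stimulation grid by block averaging.
    A 250 x 400 grid gives the ~100,000 points discussed."""
    gh, gw = grid_shape
    fh, fw = frame.shape
    # Crop so the frame divides evenly into grid cells.
    frame = frame[:fh - fh % gh, :fw - fw % gw]
    bh, bw = frame.shape[0] // gh, frame.shape[1] // gw
    # Group pixels into gh x gw blocks and average each block.
    blocks = frame.reshape(gh, bh, gw, bw)
    grid = blocks.mean(axis=(1, 3))
    # Normalise to 0.0-1.0 stimulation intensity.
    return grid / 255.0
```

The grid values could then drive whichever stimulation type experimentation
settles on; the downsampling step is the same regardless.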
That (i. e. experimentation) does sound like the way to go. I don't think
the laser is actually a separate type of stimulation - for the purposes of
this discussion, laser would just be heat.
Post by David Albert Harrell
The neurological system, pattern recognition capabilities, and natural
adaptive powers of the human mind accomplish the remainder of this
unorthodox direct image perception. It is the natural business of the
brain to respond to a specific overture of patterned stimuli. A
mobile blind subject, wearing such a device, would have an opportunity
to create a real-time feedback relationship with the physical world
environment.
The nearest we have got to this so far has been the Optacon;
unfortunately, the company that made them folded some years ago.
Post by David Albert Harrell
One of the reasons I am convinced, that such a tactile-conveyed image
can be usefully perceived by the brain, is that I have subjectively
proven it. I have repeatedly conducted sessions in which I sat
quietly, blindfolded or with my eyes closed, while another person drew
simple pictures on my back. At first I was only able to deduce the
images by reconstructing them in my mind, however eventually, during
many of the more focused sessions, the touch of the finger on my skin
began to 'light up' in the darkness of my mind's eye, leaving a trail
that lingered long enough in many cases for me to perceive the entire
image as a coherent complete picture. This kind of exercise however
The grid of vibrating needles I mentioned earlier did also achieve some
success, i. e. the subjects were able eventually to see shapes and so on.
Post by David Albert Harrell
only demonstrates the conveyance of the two dimensional plane to the
brain; in order to realize full potential you must get the subject
moving, navigating obstacles and negotiating objects.
Again, I tend to agree that that is likely to help.
Post by David Albert Harrell
What is essentially being suggested is that the normal two-dimensional
Are you suggesting it? Writing in the third person isn't really a good
idea for newsgroups; in my opinion it isn't for scientific papers either,
as it makes them sound cold and dead, but many scientists seem to feel
more secure if things are written that way.
Post by David Albert Harrell
image that falls upon the cones and rods on the rear portion of the
inner eye (retina), can be effectively replaced (in its role with the
visual cortex of the brain), by a larger dermal area (such as the
back, stomach, or scalp for instance) undergoing a different (but also
two dimensional) stimulation; creating a parallel system of input that
the brain would have an opportunity to recognize in a somewhat
familiar manner.
It seems that the brain does indeed, after a while, start to process the
information as pictorial information, however it comes in.
Post by David Albert Harrell
The overall objective of the device is to produce some form of
detectable stimulation corresponding with the lighted areas in the
video picture [with polarity reversible]. This stimulation might be
delivered by some form of hovering scan emitter, or a snugly worn pad
embedded with an electrode grid array.
The prototype should be designed to emit as many different types of
stimulation as possible since we don't yet know what will work most
effectively, and such stimulation may actually need to be changed
during eventual practical usage considering the propensity of specific
neural receptors to become over-stimulated. Given the current state
of electronic technology, I am certain that such stimulation could be
delivered to a targeted dermal area in a variety of different methods
and intensities.
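As a sketch of the 'variety of different methods and intensities' idea,
including the [polarity reversible] option (hypothetical Python; the mode
names and gain figures are placeholders, not measured values):

```python
def grid_to_drive(grid, mode="vibration", invert=False, max_level=1.0):
    """Convert normalised grid intensities (0.0-1.0) into drive levels
    for the chosen stimulation mode. 'invert' implements polarity
    reversal: stimulate dark areas instead of light ones.
    Mode names and gains are illustrative placeholders."""
    gains = {"vibration": 1.0, "electric": 0.6, "heat": 0.4}
    if mode not in gains:
        raise ValueError("unknown stimulation mode: " + mode)
    # Apply optional polarity reversal, then scale per mode.
    drive = [[(1.0 - v if invert else v) * gains[mode] * max_level
              for v in row] for row in grid]
    return drive
```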
It's not the electronic technology, that can certainly do it: high
resolution video processing is everyday these days. What is not anything
like so commonplace is the transducer, especially if it is going to
produce several different types of stimulation. Apart from video displays,
the closest I can think of is print heads, but they cover a fairly small
area with not that huge a number of transducers - they cover the A4 page
by scanning the head over it.
Post by David Albert Harrell
Various versions of this device have been described before, including
discussion on the implantation of a transmitter into the visual
cortex. I consider an implant to the visual cortex to be a clumsy,
unnecessarily invasive, and less effective conduit to the brain than
natural tactile perception.
It's not really on because of the amount of invasion required, which is
intrinsically dangerous, and the number of connections required.
Post by David Albert Harrell
Notice that all versions of this device thus far appear to be offering
relatively low, therefore ineffectual, resolution. The most essential
prerequisite of 'Tactile Image Projection' however has been overlooked
entirely; that is, this device simply will not function [to any degree
even approaching the usefulness of natural sight] on a stationary
That is your view. It may be true, but you need more evidence than just
saying it.
Post by David Albert Harrell
subject. Mobility and the immediate environmental feedback
interaction cycle are essential for human adaptation and application.
Desirable, certainly. Whether essential is still to be determined.
Post by David Albert Harrell
Applying such a device to a stationary subject, would be like
inventing a parachute and then attempting to test it from the deck of
a submarine. Or a perhaps even more enlightening analogy, attempting
to evaluate this device with a stationary subject would be like trying
to determine the practical values of 'a new invention known as the
automobile' without taking the vehicle out of park.
We've already had that paragraph.
Post by David Albert Harrell
The Institute of Medical Sciences (San Francisco, CA) [Carter C.
Collins] in a 1969-71 publication mentions 'immobility' in passing
(along with weight, bulk, expense, and power consumption) as one of
the problems that has arisen in prior projects, but fails to point out
that such immobility entirely negates any attempt to develop or even
evaluate the effectiveness of Tactile Image Projection.
And you fail to explain how you would overcome these problems. Certainly,
technology in terms of the electronics required has moved on almost out of
all recognition in the time since that was published, but the transducer
has not developed to anything like the same extent - and the lack of
suitable nerves in the skin, of course, has not changed.
Post by David Albert Harrell
Assume that we have already built such a device, ie a moving picture
is being delivered to the subject in perceptible format; if the
subject does not proceed to interface with a real-time environment,
one cannot expect a relevant learning cycle, or even useful and
sustained cerebral discovery of the area, to occur.
The initial assumption is a big one!
Post by David Albert Harrell
All attempts thus far have dealt with far too low resolution, and a
stationary subject. This scenario does not offer the kind of real-
time feedback necessary for the subject to begin adapting to the
device, nor for the brain to discover and correlate the relevance and
inherent value of the area being stimulated.
You may well be right. But stating the problem does not solve it.
[]
Post by David Albert Harrell
One of the problems with past, and apparently current, thinking is
that the x,y grid is being applied to small areas such as the hands,
tongue, and finger tips. I realize receptor density is greater in
these areas, but the resolution needed is simply not possible using
hundreds of electrodes as opposed to thousands. The focus should be on
covering as large an area as possible (perhaps even wrapping around to
the chest and stomach from the back for greater resolution). A
You're still not going to reach the 100,000 pixels you want.
Post by David Albert Harrell
bodysuit providing coverage of all available potentially useful dermal
areas may even be proven most effective. [Notice that the resolution
of the 'x,y grid' should approach, or surpass to some degree, the
density of the dermal neural receptors being targeted.]
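Some rough arithmetic on that bracketed note, using approximate textbook
two-point-discrimination figures (the areas and thresholds below are
illustrative estimates only, not measurements):

```python
def max_useful_points(area_cm2, two_point_mm):
    """Rough ceiling on the number of useful stimulation points for a
    skin region: one point per two-point-discrimination-limited cell.
    Inputs are approximate textbook-style values."""
    cell_cm2 = (two_point_mm / 10.0) ** 2  # mm threshold -> cm cell side
    return int(area_cm2 / cell_cm2)

# Illustrative regions: (area in cm^2, two-point threshold in mm).
regions = {
    "fingertip": (2, 3.0),    # small area, fine discrimination
    "back":      (600, 40.0), # large area, coarse discrimination
}
```

By these rough figures a large low-acuity area buys surprisingly few
distinguishable points, which bears directly on whether the 100,000-pixel
target is reachable with skin receptors alone.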
I suspect the brain would not really adapt to signals received from such a
large area, though I may be wrong.
Post by David Albert Harrell
As for interfacing the subject with a real-time physical environment,
imagine for instance that a mobile subject was fitted with a working
portable device delivering a picture from a camera (cap or eyeglasses
mounted for instance) to an emitter pad or scanning emitter array.
Now place this subject in an environment void of light, except for a
line guided path which the subject would begin to walk upon. Imagine
this path at some point has a low hanging lighted 'bright white' limb
(I don't want to appear cruel or flippant here, but this is necessary
to make my point). The first time the subject encountered the limb,
there would be a registration by the Tactile Image Projection system
of a 'white stripe' that would pass across the 'emitter field' just
before the subject was impeded by the limb.
My point being that eventually the subject will associate the passing
of the 'white stripe' as a cognitive precursor to being struck by the
limb, and will duck. The rest is merely a matter of real-time
I think a learned response to an artificial situation is of limited value.
If, as you seem to be, you are talking about avoiding obstacles, then
obstacles will come in all shapes and sizes.
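The white-stripe encounter above could be reduced to something like this
(an illustrative Python sketch of my own; the thresholds are arbitrary
guesses, not values from the original posts):

```python
def stripe_warning(grid, min_fraction=0.2, level=0.5):
    """Return True when a bright band covers enough of any grid row to
    suggest an approaching obstacle, like the lighted limb crossing the
    emitter field. Thresholds are illustrative guesses."""
    for row in grid:
        bright = sum(1 for v in row if v > level)
        if bright >= min_fraction * len(row):
            return True
    return False
```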
Post by David Albert Harrell
experience, learning to distinguish shapes and details. But the key
is to create conditions that offer instant feedback, mobile within a
real-time physical space. This is the kind of endeavor in which the
human mind invariably excels to astonishing heights. For optimum and
Agreed.
Post by David Albert Harrell
expedient adaptation opportunities, stark black and white training
facilities would need to be developed, large checkerboard floors for
instance, with maximum contrast between walls and objects.
That _may_ be the way to go. But remember that, eventually, for the system
to be of practical use, the subject will have to leave the laboratory,
into a world where things have not been prepared.
Post by David Albert Harrell
Finally I am suggesting that if such an image is made available to the
brain, that it is the natural business of the brain to recognize such
an area of two-dimensional data within a given feedback loop of real-
time information, this being completely analogous to the normal
relationship between the visual cortex and the rear portion of the
inner eye surface (in effect offering the visual cortex an 'alternate
retina'). It is the correlation between the real-time world, and this
new area of stimulation, that will achieve the inevitable
communication of a useful moving 'picture'.
Furthermore, I am convinced the only reason a Tactile Image Projection
system has not already been developed and adapted into practical use
for the blind, is because the inadequacies of low resolution, and this
most critical prerequisite of 'real-time mobile interaction with the
environment,' are being overlooked.
You write as if something is not being done because no-one has realised it
is a problem. I am sure this is not the case.
Post by David Albert Harrell
David Albert Harrell
John Paul Gilliver (electronic generalist, with no medical training as
such)
--
Perhaps it's worth remembering that Albert Einstein defined common sense as a
'set of prejudices acquired by age 18'. (Quoted by Gordon Dennis on letters page
of computing, 5 February 2004.)
J. P. Gilliver (John)
2010-11-21 10:46:28 UTC
Permalink
Post by Brian Gaff
I've not heard of it, but from the standpoint I have, then what is almost
needed for stationary images is dithering of outlines to allow nerves the
stimulus they need to work. after all wherever you touch the body the effect
is transient, and continued touching with no break is not registered, this
is why we are not aware of clothing etc.
Brian
Good point. That's probably why the subjects found they began to "see"
using the grid of vibrating points I remember seeing the prog. about all
those years ago; it was the vibration that varied to indicate changes -
simple pressure would have as you say "worn off".
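Brian's dithering point could be sketched like this (illustrative Python;
the change threshold is a guess): drive stimulation from frame-to-frame
change, so receptors are not left under constant, soon-unnoticed pressure.

```python
import numpy as np

def change_driven_drive(prev_grid, grid, threshold=0.05):
    """Stimulate only where the image has changed since the last frame,
    so receptors are not left under constant (and soon unnoticed)
    stimulation. Static scenes could additionally be refreshed by
    jittering the camera, much as saccades refresh the retina."""
    delta = np.abs(grid - prev_grid)
    # Keep intensity where change exceeds the threshold; zero elsewhere.
    return np.where(delta > threshold, grid, 0.0)
```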
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G.5AL-IS-P--Ch++(p)***@T0H+Sh0!:`)DNAf
Brian Gaff
2010-11-22 09:16:32 UTC
Permalink
Also, it's beginning to look like the eye itself uses similar techniques to
'see' shapes. I seem to recall the fact that the brain moves the eye
without you noticing to tell if things have moved and presumably also where
edges are.
I suspect this could be why some of us with later sight loss complain of
tired eyes and others see wobbling eyes, because nobody has told the brain
software to stop this wobbling and the wobbling gets bigger and bigger but
no edges are seen.
Brian
--
Brian Gaff....Note, this account does not accept Bcc: email.
graphics are great, but the blind can't hear them
Email: ***@blueyonder.co.uk
______________________________________________________________________________________________________________
Post by J. P. Gilliver (John)
Post by Brian Gaff
I've not heard of it, but from the standpoint I have, then what is almost
needed for stationary images is dithering of outlines to allow nerves the
stimulus they need to work. after all wherever you touch the body the effect
is transient, and continued touching with no break is not registered, this
is why we are not aware of clothing etc.
Brian
Good point. That's probably why the subjects found they began to "see"
using the grid of vibrating points I remember seeing the prog. about all
those years ago; it was the vibration that varied to indicate changes -
simple pressure would have as you say "worn off".
k***@kymhorsell.com
2010-11-22 09:35:39 UTC
Permalink
Post by Brian Gaff
Also, its beginning to look like the eye itself uses similar techniques to
'see' shapes. I seem to recall the fact that the brain moves the eye
without you noticing to tell if things have moved and presumably also where
edges are.
I suspect this could be why some of us with later sight loss complain of
tired eyes and others see wobbling eyes, because nobody has told the brain
software to stop this wobbling and the wobbling gets bigger and bigger but
no edges are seen.
There's been some interesting research into saccade motion.

It seems that not only does the optic centre not notice these
jerky motions as the eye follows various points of interest
at high speed, but parts of the brain switch off. During the gap,
a kind of virtual reality playback system kicks in, making
the world appear continuous to upper functions. But the VR is only an
approximation to what is happening during the blackout.

What is interesting about the research is that certain kinds of
motions can "attract the eye" and maximize the switch-off period
to something like 200 ms.

Certain stage magicians have learned independently about this.
By making certain motions they can "hide in plain sight" the mechanics of
various tricks. Most people will watch the trick and just not see what
to the performer appears to be obvious manipulation that should not
really fool anyone but amazingly does.

Mandrake fans will be happy at the convergence of fantasy and fact.
--
R Kym Horsell <***@kymhorsell.com>
$>This seems to be saying "in logic or philosphy an inverted if or circular
$>argument are no good -- but in science we have different standards".
$You're right. That's exactly what I'm saying.
-- Mike Franklin <***@msn.com>, 20 Nov 2010
David Albert Harrell
2010-11-22 05:43:20 UTC
Permalink
On Nov 20, 7:05 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
I can only comment for the newsgroup a.c.b-u; for we readers of that
'group, this article has sprung fully-formed out of the blue. I can't
speak for the several other 'groups it has been posted to.
It does seem that the author of the article feels some resentment towards
somebody about things he thinks have not or are not being done.
Even if I knew who to resent, I wouldn't have the time. I am merely
'bewildered by the obvious' when focusing on the history of this
project.
Post by J. P. Gilliver (John)
In message
Post by David Albert Harrell
-----------preface----------------------------------------
All attempts thus far to evaluate and develop 'Tactile Image
Projection' have dealt with a stationary subject. This approach does
not offer the kind of real-time environmental feedback necessary for
the subject to begin adapting to the device, nor for the brain to
naturally discover and correlate the relevance and inherent value of
the area being stimulated.
I agree that the perception of a changing environment might well make
learning to use such equipment easier - though I don't think I'd go as
far as to state categorically that that is the case.
I am going this far, and further. Idem.
Post by J. P. Gilliver (John)
It may vary with
different technologies, and with different subjects: some subjects may
find it easier to use some technologies with a fixed environment
initially.
Some subjects will no doubt find initial discovery of this alien image
to be quicker and easier while at rest. They will not however be able
to take full advantage of this revised mobile/high-resolution version
of 'Tactile Image Projection' while sitting in a chair.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
In other words, the most essential prerequisite of this device has
been overlooked entirely; that is, 'Tactile Image Projection' simply
will not function [to any degree even approaching the usefulness of
natural sight] on a stationary subject. Mobility is essential for
human adaptation and application, thus allowing for an immediate
environmental feedback interaction cycle to develop.
See above. Also, I'm sure it has not been overlooked as you put it: it
may not have been implemented in the past, due to the bulk of the
equipment and its supporting equipment, which may no longer be the case
(though prototypes are still likely to be bigger than production
models).
This device has been possible to build for decades. And what I am
precisely saying is, the absolute necessity of these two aspects,
mobility and resolution, has clearly been overlooked; otherwise these
aspects would have been addressed by now, as prerequisites to success,
and integrated into its design [at some point] over the last forty
years.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Putting such a device on a stationary subject, would be like inventing
a parachute and then attempting to test it from the deck of a
submarine. Or a perhaps even more enlightening analogy, attempting to
Submarines don't have decks really, but I know what you mean.
Post by David Albert Harrell
evaluate this device with a stationary subject would be like trying to
determine the practical values of 'a new invention known as the
automobile' without taking the vehicle out of park.
Neither analogy is completely valid: it is certainly possible that a
test subject might be able to perceive a stationary environment, i. e.
get some benefit from it.
I think both analogies are useful, and the second is definitively
accurate.
A stationary subject could only make relatively insignificant use of
the mobile/high-resolution device I'm suggesting.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
------------end preface-----------------------------------
[Note the earliest references to the concept of 'Tactile Image
Projection' appear to have been made by Paul Bach-Y-Rita and Carter C.
Collins at National Symposium for Information Display in 1967:
followed up by Paul Bach-Y-Rita, Carter C. Collins, Frank A. Saunders,
Benjamin White, and Lawrence Scadden at The Smith-Kettlewell Institute
of Visual Sciences in San Francisco, CA in 1969.]
I remember seeing some TV prog. (I am sighted) about a system which
worked by a grid of vibrating needles pressed against the back of the
subject, connected to a camera. Although this was a decade or two or
three ago, I think the limitation - the grid, if I remember rightly,
being only a few tens of points square - was more because of the lack of
discrimination in the back (i. e. there aren't enough nerves there) than
limitations of the equipment.
See closing paragraphs below.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
If such a field were delivered to the 'sea of nerve endings' contained
in a large area of skin, would a human being be able to make use of
this two-dimensionally ordered image?
There are few areas of the human body that have anything like the number
of nerve endings you mention below.
See closing paragraphs below.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
1. A video camera.
2. A central processing unit (computer).
3. A flexible pad worn snugly to the skin (or scanning emitter) that
stimulates the nerve endings of the dermal area.
Briefly, the pictorial image from a video camera is received and
processed by a computer, and then delivered to an 'x,y
grid' (optimally perhaps as high as 100,000 pixels or more, depending
on the density of the neural receptors being targeted), in the form of
dermal stimulating impulses. The specific type of stimulation is a
Making a grid of that size - even if there were enough nerves to detect
it, which there aren't - would be a pretty major technical challenge.
I disagree. Idem
Post by J. P. Gilliver (John)
Post by David Albert Harrell
variable at this point. Electric shock, vibration, heat, and laser
(of low but perceivable intensity) have been considered, but there are
other possibilities, including the type of electric current which the
brain is already accustomed to receiving. This question of 'What kind
of stimulation would be most effective?' can only be answered through
experimentation.
That (i. e. experimentation) does sound like the way to go. I don't
think the laser is actually a separate type of stimulation - for the
purposes of this discussion, laser would just be heat.
The laser would be an alternate, presumably more efficient and fluent,
way of delivering heat stimulation.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
The neurological system, pattern recognition capabilities, and natural
adaptive powers of the human mind accomplish the remainder of this
unorthodox direct image perception. It is the natural business of the
brain to respond to a specific overture of patterned stimuli. A
mobile blind subject, wearing such a device, would have an opportunity
to create a real-time feedback relationship with the physical world
environment.
The nearest we have got to this so far has been the Optacon;
unfortunately, the company that made them folded some years ago.
I suspect their resolution was too low, and their subject immobile,
otherwise they would still be around and flourishing.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
One of the reasons I am convinced, that such a tactile-conveyed image
can be usefully perceived by the brain, is that I have subjectively
proven it. I have repeatedly conducted sessions in which I sat
quietly, blindfolded or with my eyes closed, while another person drew
simple pictures on my back. At first I was only able to deduce the
images by reconstructing them in my mind, however eventually, during
many of the more focused sessions, the touch of the finger on my skin
began to 'light up' in the darkness of my mind's eye, leaving a trail
that lingered long enough in many cases for me to perceive the entire
image as a coherent complete picture. This kind of exercise however
The grid of vibrating needles I mentioned earlier did also achieve some
success, i. e. the subjects were able eventually to see shapes and so
on.
No doubt. See last paragraph below.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
only demonstrates the conveyance of the two dimensional plane to the
brain; in order to realize full potential you must get the subject
moving, navigating obstacles and negotiating objects.
Again, I tend to agree that that is likely to help.
Post by David Albert Harrell
What is essentially being suggested is that the normal two-dimensional
Are you suggesting it?
This suggestion was apparently first published in 1967, idem.
Post by J. P. Gilliver (John)
Writing in the third person isn't really a good
idea for newsgroups; in my opinion it isn't for scientific papers
either, as it makes them sound cold and dead, but many scientists seem
to feel more secure if things are written that way.
A third person perspective here is unavoidable but fortunately
irrelevant. And I don't think this text is cold or dead.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
image that falls upon the cones and rods on the rear portion of the
inner eye (retina), can be effectively replaced (in its role with the
visual cortex of the brain), by a larger dermal area (such as the
back, stomach, or scalp for instance) undergoing a different (but also
two dimensional) stimulation; creating a parallel system of input that
the brain would have an opportunity to recognize in a somewhat
familiar manner.
It seems that the brain does indeed, after a while, start to process the
information as pictorial information, however it comes in.
Post by David Albert Harrell
The overall objective of the device is to produce some form of
detectable stimulation corresponding with the lighted areas in the
video picture [with polarity reversible]. This stimulation might be
delivered by some form of hovering scan emitter, or a snugly worn pad
embedded with an electrode grid array.
The prototype should be designed to emit as many different types of
stimulation as possible since we don't yet know what will work most
effectively, and such stimulation may actually need to be changed
during eventual practical usage considering the propensity of specific
neural receptors to become over-stimulated. Given the current state
of electronic technology, I am certain that such stimulation could be
delivered to a targeted dermal area in a variety of different methods
and intensities.
It's not the electronic technology, that can certainly do it: high
resolution video processing is everyday these days. What is not anything
like so commonplace is the transducer, especially if it is going to
produce several different types of stimulation. Apart from video
displays, where the only stimulation is light, there is no similarly
dense hardware: the closest I can think of is print heads, but they
cover a fairly small area with not that huge a number of transducers -
they cover the A4 page by scanning the head over it.
I don't think many people doubt the parameters I'm proposing are
technically possible [ie maximum attainable resolution and functional
mobility]; these are only a question of time and resources. So we
know we can get there; the current issue is 'do we have the vision'
that produces the enthusiasm we'll need to make the journey.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Various versions of this device have been described before, including
discussion on the implantation of a transmitter into the visual
cortex. I consider an implant to the visual cortex to be a clumsy,
unnecessarily invasive, and less effective conduit to the brain than
natural tactile perception.
It's not really on because of the amount of invasion required, which is
intrinsically dangerous, and the number of connections required.
I agree. Visual cortex implants appear not to be viable for multiple
reasons.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Notice that all versions of this device thus far appear to be offering
relatively low, therefore ineffectual, resolution. The most essential
prerequisite of 'Tactile Image Projection' however has been overlooked
entirely; that is, this device simply will not function [to any degree
even approaching the usefulness of natural sight] on a stationary
That is your view. It may be true, but you need more evidence than just
saying it.
Current and past low-resolution immobile designs are a matter of
record. The potential of this device has not yet been demonstrated,
and cannot currently be demonstrated by any means known to me.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
subject. Mobility and the immediate environmental feedback
interaction cycle are essential for human adaptation and application.
Desirable, certainly. Whether essential is still to be determined.
I obviously disagree. Idem.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Applying such a device to a stationary subject, would be like
inventing a parachute and then attempting to test it from the deck of
a submarine. Or a perhaps even more enlightening analogy, attempting
to evaluate this device with a stationary subject would be like trying
to determine the practical values of 'a new invention known as the
automobile' without taking the vehicle out of park.
We've already had that paragraph.
Post by David Albert Harrell
The Institute of Medical Sciences (San Francisco, CA) [Carter C.
Collins] in a 1969-71 publication mentions 'immobility' in passing
(along with weight, bulk, expense, and power consumption) as one of
the problems that has arisen in prior projects, but fails to point out
that such immobility entirely negates any attempt to develop or even
evaluate the effectiveness of Tactile Image Projection.
And you fail to explain how you would overcome these problems.
Certainly, technology in terms of the electronics required has moved on
almost out of all recognition in the time since that was published, but
the transducer has not developed to anything like the same extent - and
the lack of suitable nerves in the skin, of course, has not changed.
I don't see the transducer, or any of the tech problems, as
insurmountable. Idem.
Receptor density is addressed in idem.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Assume that we have already built such a device, ie a moving picture
is being delivered to the subject in perceptible format; if the
subject does not proceed to interface with a real-time environment,
one cannot expect a relevant learning cycle, or even useful and
sustained cerebral discovery of the area, to occur.
The initial assumption is a big one!
I see this as more apparent, than assumptive.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
All attempts thus far have dealt with far too low resolution, and a
stationary subject. This scenario does not offer the kind of real-
time feedback necessary for the subject to begin adapting to the
device, nor for the brain to discover and correlate the relevance and
inherent value of the area being stimulated.
You may well be right. But stating the problem does not solve it.
I’m suggesting specific changes in the device and essential exercises.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
One of the problems with past, and apparently current, thinking is
that the x,y grid is being applied to small areas such as the hands,
tongue, and finger tips. I realize receptor density is greater in
these areas, but the resolution needed is simply not possible using
hundreds of electrodes as opposed to thousands. The focus should be on
covering as large an area as possible (perhaps even wrapping around to
the chest and stomach from the back for greater resolution). A
You're still not going to reach the 100,000 pixels you want.
Possibly not, but such limits clearly need to be quantified to
discover optimums. Idem.
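The limits being argued over here (hundreds of electrodes on a small dense area versus thousands spread over a large sparse one) can at least be roughed out numerically. A minimal sketch, assuming one usable "taxel" per two-point-discrimination cell of skin; every area and spacing figure below is purely illustrative and comes from neither poster:

```python
# Back-of-envelope taxel-count estimate per skin region, assuming the
# usable emitter spacing roughly equals the two-point discrimination
# threshold there. All figures are illustrative placeholders.

REGIONS = {
    # name: (approx. usable area in cm^2, approx. spacing in cm)
    "fingertip pads": (20, 0.3),
    "palms":          (200, 1.0),
    "back":           (3000, 4.0),
    "chest/abdomen":  (2500, 3.5),
}

def taxel_count(area_cm2, spacing_cm):
    """One taxel per spacing x spacing cell of skin."""
    return int(area_cm2 / (spacing_cm ** 2))

total = 0
for name, (area, spacing) in REGIONS.items():
    n = taxel_count(area, spacing)
    total += n
    print(f"{name:15s} ~{n:6d} taxels")
print(f"{'total':15s} ~{total:6d} taxels")
```

On figures like these, the dense fingertips and the sparse back contribute taxel counts of the same order, which is why quantifying the optimum, rather than assuming it, matters.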
Post by J. P. Gilliver (John)
Post by David Albert Harrell
bodysuit providing coverage of all available potentially useful dermal
areas may even be proven most effective. [Notice that the resolution
of the ‘x,y grid’ should approach, or surpass to some degree, the
density of the dermal neural receptors being targeted.]
I suspect the brain would not really adapt to signals received from such
a large area, though I may be wrong.
In contrast, I suspect the larger the area being stimulated, the more
readily it will be adopted and assimilated.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
As for interfacing the subject with a real-time physical environment,
imagine for instance that a mobile subject was fitted with a working
portable device delivering a picture from a camera (cap or eyeglasses
mounted for instance) to an emitter pad or scanning emitter array.
Now place this subject in an environment void of light, except for a
line guided path which the subject would begin to walk upon. Imagine
this path at some point has a low hanging lighted ‘bright white’ limb
(I don't want to appear cruel or flippant here, but this is necessary
to make my point). The first time the subject encountered the limb,
there would be a registration by the Tactile Image Projection system
of a ‘white stripe’ that would pass across the ‘emitter field’ just
before the subject was impeded by the limb.
My point being that eventually the subject will associate the passing
of the ‘white stripe’ as a cognitive precursor to being struck by the
limb, and will duck. The rest is merely a matter of real-time
I think a learned response to an artificial situation is of limited
value. If, as you seem to be, you are talking about avoiding obstacles,
then obstacles will come in all shapes and sizes.
I don’t see your point here. I’m not suggesting any training limits
on exploring the details of obstacles, objects, or any other
environmental aspect.
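The camera-to-emitter mapping behind the "white stripe" scenario can be sketched concretely: each camera frame is block-averaged down to the resolution of the emitter grid, so a bright low-hanging limb registers as a band of strongly driven emitters sweeping across the field as the subject walks. The grid dimensions and 0-255 drive range here are assumptions for illustration, not anything specified in the thread:

```python
# Down-sample one grayscale camera frame (2-D list of 0-255 pixel
# intensities) to a hypothetical emitter grid by averaging each block
# of pixels into a single emitter drive level.

def frame_to_emitter_grid(frame, grid_rows, grid_cols):
    """Return a grid_rows x grid_cols grid of 0-255 drive levels."""
    rows, cols = len(frame), len(frame[0])
    bh, bw = rows // grid_rows, cols // grid_cols  # pixels per block
    grid = []
    for gr in range(grid_rows):
        row = []
        for gc in range(grid_cols):
            block = [frame[r][c]
                     for r in range(gr * bh, (gr + 1) * bh)
                     for c in range(gc * bw, (gc + 1) * bw)]
            row.append(sum(block) // len(block))
        grid.append(row)
    return grid

# A frame whose top half is bright (the lighted limb) drives only the
# top emitter rows hard; as the camera tilts, that band moves.
frame = [[255] * 4, [255] * 4, [0] * 4, [0] * 4]
print(frame_to_emitter_grid(frame, 2, 2))  # -> [[255, 255], [0, 0]]
```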
Post by J. P. Gilliver (John)
Post by David Albert Harrell
experience, learning to distinguish shapes and details. But the key
is to create conditions that offer instant feedback, mobile within a
real-time physical space. This is the kind of endeavor in which the
human mind invariably excels to astonishing heights. For optimum and
Agreed.
Post by David Albert Harrell
expedient adaptation opportunities, stark black and white training
facilities would need to be developed, large checkerboard floors for
instance, with walls compared to objects offering maximum contrast.
That _may_ be the way to go. But remember that, eventually, for the
system to be of practical use, the subject will have to leave the
laboratory, into a world where things have not been prepared.
Yes eventually the subject must venture into the real world
(presumably initially assisted). But again I don’t see your point.
Training facilities will obviously acclimate subjects as thoroughly as
possible before graduating to the general environment.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Finally I am suggesting that if such an image is made available to the
brain, that it is the natural business of the brain to recognize such
an area of two-dimensional data within a given feedback loop of real-
time information, this being completely analogous to the normal
relationship between the visual cortex and the rear portion of the
inner eye surface (in effect offering the visual cortex an ‘alternate
retina’). It is the correlation between the real-time world, and this
new area of stimulation, that will achieve the inevitable
communication of a useful moving ‘picture’.
Furthermore, I am convinced the only reason a Tactile Image Projection
system has not already been developed and adapted into practical use
for the blind, is because the inadequacies of low resolution, and this
most critical prerequisite of ‘real-time mobile interaction with the
environment,’ are being overlooked.
You write as if something is not being done because no-one has realised
it is a problem. I am sure this is not the case.
I’m certain this is precisely the case, and the reason “something is
not being done.” And I’m not claiming no one realizes this is ‘a’
problem, I’m declaring no one realizes this is ‘the’ problem.
The only way to prove or disprove what I’m saying is to build a higher
resolution prototype intended for portable use. Notice it is not
possible to prove such a ‘mysterious cognitively dependent’ device
will work, without having such a device?

If developers understood that a static device cannot approach full
functionality, they would by now [after 43 years] have focused on, and
solved, immobility; even if this required a harness of extended
wires. As for resolution, designs verify they clearly don’t see this
necessity either, whereas I suggest proceeding at once to discover and
enlist whatever ‘tactile neural receptors’ are available, since
‘generally low density’ appears indeed to be a problem.

A bodysuit may be necessary, however I am confident the brain can
flatten this signal out and even piece it together for practical use
(providing symmetry is maintained). (And no, I cannot prove this; and
yes, I know better than to try.) Resolution will never be sharp by HD
standards, but I am positive one can far surpass the postage-stamp
dimensions currently being ventured, and that one will certainly never
discover top resolution limits while addressing isolated small areas,
hands, tongues, and finger tips.

Finally notice this post is not about ‘how tactile image projection
works,’ it is about why it has not worked, and why it is not working,
ie the current design is failing to achieve even close to the
potential of a mobile/high-resolution version. And when I refer to
the ‘potential’ of this device, I’m suggesting that some of the more
adaptable and tenacious blind subjects will be able to read street
signs, watch a movie, or even play tennis, allowing that depth
perception problems will be a limiter in any 3D endeavor.

Realistically all that has been demonstrated thus far, after over 40
years of ‘off and on’ research and development, is that ‘tactile-grid
to brain reception’ is possible. Ok, the engine starts, great! Now
turn on the headlights and shift into gear.

Thank you for your polite and obviously sincere and concerned comments
and questions.

David Albert Harrell
J. P. Gilliver (John)
2010-11-22 20:38:47 UTC
Permalink
In message
Post by David Albert Harrell
On Nov 20, 7:05 pm, "J. P. Gilliver (John)"
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
It does seem that the author of the article feels some resentment to
somebody about things he thinks have not been or are not being done.
Even if I knew who to resent, I wouldn’t have the time. I am merely
‘bewildered by the obvious’ when focusing on the history of this
project.
I suppose my response must be, if it's so obvious, why do _you_ think
nobody but you has seen it?
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
I agree that the perception of a changing environment might well make
learning to use such equipment easier - though I don't think I'd go as
far as to state categorically that that is the case.
I am going this far, and further. Idem.
That is your right.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
It may vary with
different technologies, and with different subjects: some subjects may
find it easier to use some technologies with a fixed environment
initially.
Some subjects will no doubt find initial discovery of this alien image
to be quicker and easier while at rest. They will not however be able
to take full advantage of this revised mobile/high-resolution version
of ‘Tactile Image Projection’ while sitting in a chair.
I'd say it is fairly obvious that learning to use a new system is
probably best with a fixed scene, with the variability of motion one
less variable to be handled initially. The fact that any such system is
of little use if immobile is also obvious. Perhaps slight motion, to
keep the sensor nerves from going to sleep, is a good idea - what in
other areas would be called dithering.

To give an analogy, since you like them: I'm not sure if you are
yourself sighted, so this may or may not help. Consider learning to
focus a camera (or the eye): this is easier to do on a static scene than
a moving one, especially if the camera itself is moving. It is quite a
good analogy in fact - having the brain learn to process input from such
a system is quite similar to the task of focusing.
[]
Post by David Albert Harrell
This device has been possible to build for decades. And what I am
precisely saying is, the absolute necessity of these two aspects,
mobility and resolution, has clearly been overlooked; otherwise these
aspects would have been addressed by now, as prerequisites to success,
and integrated into its design [at some point] over the last forty
years.
We'll just have to agree to differ on that.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Putting such a device on a stationary subject, would be like inventing
a parachute and then attempting to test it from the deck of a
Actually, parachute training often starts from a fixed platform -
usually, even, without the parachute!
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
evaluate this device with a stationary subject would be like trying to
determine the practical values of ‘a new invention known as the
automobile’ without taking the vehicle out of park.
Neither analogy is completely valid: it is certainly possible that a
test subject might be able to perceive a stationary environment, i. e.
get some benefit from it.
I think both analogies are useful, and the second is definitively
accurate.
A stationary subject could only make relatively insignificant use of
the mobile/high-resolution device I’m suggesting.
When learning to drive, I was initially taken to a quiet area. I think
your analogy - assuming you are suggesting that the subject be given the
device with complete mobility - would be like dumping a new driver in
motorway traffic. However, you later do mention the concept of training
areas, which suggests you are considering starting in a "safe area". I
would say, then, that starting with a stationary image is not a bad
idea. What might be good, though, is to have some small part of the
image moving, right from the start; I certainly wouldn't have the whole
image move.

There is some definite crossover with computer vision here. One of the
major difficulties in programming the ability to recognise environment
into computer systems is when the camera moves: the entire frame of
reference then moves. Even moving objects in a stationary image are
difficult, because they not only move, they cause other objects to
appear and disappear (as they come out from behind of, or are obscured
by, the moving object). Programming machine vision has a lot in common
with the current subject: I am assuming you have in mind the completely
blind, rather than those with some limited sight; the latter could
certainly benefit from higher resolution of course, but at least already
have a brain which knows how to handle images, and would just have to
learn to use the new input source, rather than learn both at once.
[]
Post by David Albert Harrell
I disagree. Idem
You over-use that word - and assume your audience is familiar with it!
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
That (i. e. experimentation) does sound like the way to go. I don't
think the laser is actually a separate type of stimulation - for the
purposes of this discussion, laser would just be heat.
The laser would be an alternate, presumably more efficient and fluent,
way of delivering heat stimulation.
Ah. If you're suggesting a single laser, but mechanically (probably by
mirrors) scanned, rather than a grid of 100,000 such lasers, then I
agree, it shows good promise. The scanning mechanism implies a certain
volume over the selected area of skin, mind, analogous to the volume
occupied by the tube of a TV set or monitor, but this should not be a
great problem.
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
The nearest we have got to this so far has been the Optacon;
unfortunately, the company that made them folded some years ago.
I suspect their resolution was too low, and their subject immobile,
otherwise they would still be around and flourishing.
I assumed you had encountered, or would have researched, it, but since
you didn't: it was a reading device. It had a small camera, which was
placed over the object to be read, and the image was translated into a
small grid of vibrating needles (blunt, obviously), which went under a
fingertip. As such, it wasn't intended to be mobile in the sense of an
eye on the world, purely a reading device; it was however portable. The
camera was moved over the object to be read, like a small handheld
scanner I suppose. Those folk I know who have one are fond of them and
would not give them up. I suspect the disappearance of the company was
due to the improvements in OCR software for scanners making the task for
which they were designed less required; nevertheless, they did give
vision, of layout, font, and non-text items on the subject material, in
a way nothing else did - I remember showing my friend, for example, that
the artist on a record sleeve was wearing a striped shirt, something she
could not have perceived in any other way.

I think, if the company had survived a little longer, they might be into
eye-type vision by now - but we will never know, sadly.
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Writing in the third person isn't really a good
idea for newsgroups; in my opinion it isn't for scientific papers
either, as it makes them sound cold and dead, but many scientists seem
to feel more secure if things are written that way.
A third person perspective here is unavoidable but fortunately
irrelevant. And I don’t think this text is cold or dead.
It isn't unavoidable, just generally easy to fall into. Certainly, in
discussions of the newsgroup sort, I'd say try to get into using first
person more.
[]
Post by David Albert Harrell
I don’t think many people doubt the parameters I’m proposing are
technically possible [ie maximum attainable resolution and functional
mobility]; these are only a question of time and resources. So we
know we can get there, the current issue is ‘do we have the vision’
that produces the enthusiasm we’ll need to make the journey.
I agree. I guess my main point of disagreement with you is your claim
that something has been overlooked - possibly the combination of high
resolution and mobility; I would contend that it is only limited
development resources that have prevented implementation. I'm sure there
are plenty of people who have the vision (an unfortunate word in this
context, but you know what I mean), and are bursting with ideas they'd
like to try, but are constrained by lack of funds and time. (Time, in
the sense that however keen the individual, they eventually need to
leave the research institute and actually make a living for themselves;
what is needed is for the institution to take on the continuation of the
project. And getting people to continue someone else's work is always
harder.)
[]
Post by David Albert Harrell
I agree. Visual cortex implants appear not to be viable for multiple
reasons.
An evil side-thought: there are certain areas of the body with high
nerve-densities, but I think the social acceptability of placing the
transducer onto those areas might take some overcoming!
[]
Post by David Albert Harrell
Current and past low-resolution immobile designs are a matter of
record. The potential of this device has not yet been demonstrated,
and cannot currently be demonstrated by any means known to me.
But you have said it is well within the capabilities to make it.
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Assume that we have already built such a device, ie a moving picture
is being delivered to the subject in perceptible format; if the
subject does not proceed to interface with a real-time environment,
one cannot expect a relevant learning cycle, or even useful and
sustained cerebral discovery of the area, to occur.
The initial assumption is a big one!
I see this as more apparent, than assumptive.
It was you who started the paragraph with the word "Assume"!
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
You're still not going to reach the 100,000 pixels you want.
Possibly not, but such limits clearly need to be quantified to
discover optimums. Idem.
Agreed, apart from the repetition of Idem which I am beginning to find
wearing, sorry.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
bodysuit providing coverage of all available potentially useful dermal
areas may even be proven most effective. [Notice that the resolution
of the ‘x,y grid’ should approach, or surpass to some degree, the
density of the dermal neural receptors being targeted.]
I suspect the brain would not really adapt to signals received from such
a large area, though I may be wrong.
In contrast, I suspect the larger the area being stimulated, the more
readily it will be adopted and assimilated.
I did not express myself clearly - it was not the size of the area being
used so much as the disparate nature about which I have doubts. I think
an area wrapped around the body - so that it is no longer flat - might
cause problems for the brain. However, I'd be happy to be proved wrong.
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Furthermore, I am convinced the only reason a Tactile Image Projection
system has not already been developed and adapted into practical use
for the blind, is because the inadequacies of low resolution, and this
most critical prerequisite of ‘real-time mobile interaction with the
environment,’ are being overlooked.
You write as if something is not being done because no-one has realised
it is a problem. I am sure this is not the case.
I’m certain this is precisely the case, and the reason “something is
not being done.” And I’m not claiming no one realizes this is ‘a’
problem, I’m declaring no one realizes this is ‘the’ problem.
OK, you go ahead and declare it. We may not agree on various aspects of
the problem and the solution, but the subject needs forceful individuals
like you to actually get things done.
Post by David Albert Harrell
The only way to prove or disprove what I’m saying is to build a higher
resolution prototype intended for portable use. Notice it is not
possible to prove such a ‘mysterious cognitively dependent’ device
will work, without having such a device?
I think there might be some mileage in a low-resolution but portable
device. You may be right that it won't work, but you may be wrong.
Similarly a high-resolution but not very mobile device.
Post by David Albert Harrell
If developers understood that a static device cannot approach full
functionality, they would by now [after 43 years] have focused on, and
solved, immobility; even if this required a harness of extended
wires. As for resolution, designs verify they clearly don’t see this
necessity either, whereas I suggest proceeding at once to discover and
Not necessarily: designs reflect what is thought to be achievable.
Doesn't mean the designer wouldn't like to go further. There's no point
in designing something you can't make.
Post by David Albert Harrell
enlist whatever ‘tactile neural receptors’ are available, since
‘generally low density’ appears indeed to be a problem.
You continue to speak as if no-one has done any such work.
Post by David Albert Harrell
A bodysuit may be necessary, however I am confident the brain can
flatten this signal out and even piece it together for practical use
(providing symmetry is maintained). (And no, I cannot prove this; and
If you think the brain can flatten the sensor surface (about which I
have doubts but never mind), why do you think symmetry is important?
Post by David Albert Harrell
yes, I know better than to try.) Resolution will never be sharp by HD
Not sure why you don't want to try.
Post by David Albert Harrell
standards, but I am positive one can far surpass the postage-stamp
dimensions currently being ventured, and that one will certainly never
discover top resolution limits while addressing isolated small areas,
hands, tongues, and finger tips.
There's little need for HD, even for the sighted!

But resolution and size of interface area are separate matters, though
are connected to some extent. But just because an area is small doesn't
mean it has to be low resolution, and a large area doesn't necessarily
mean high resolution either.
Post by David Albert Harrell
Finally notice this post is not about ‘how tactile image projection
works,’ it is about why it has not worked, and why it is not working,
ie the current design is failing to achieve even close to the
potential of a mobile/high-resolution version. And when I refer to
You seem to be dismissing any gain to be made by the intermediate
stages. You may be right that there is a level of resolution and
mobility at which things will suddenly start to work, but it is by no
means proven. And dismissing such work will alienate those who might
help you.
Post by David Albert Harrell
the ‘potential’ of this device, I’m suggesting that some of the more
adaptable and tenacious blind subjects will be able to read street
signs, watch a movie, or even play tennis, allowing that depth
perception problems will be a limiter in any 3D endeavor.
You'd be surprised how little that actually matters: I don't have
stereoscopic vision, which is the basis for all artificial 3D systems,
and yet I have little trouble with depth. (I am a little clumsy with
very fine/close work, but at the distances involved in, for example,
driving, I have no trouble.)
Post by David Albert Harrell
Realistically all that has been demonstrated thus far, after over 40
years of ‘off and on’ research and development, is that ‘tactile-grid
to brain reception’ is possible. Ok, the engine starts, great! Now
turn on the headlights and shift into gear.
But don't dismiss the possibility that you can get somewhere without
headlights, during the day for example.
Post by David Albert Harrell
Thank you for your polite and obviously sincere and concerned comments
and questions.
David Albert Harrell
It is a subject I would certainly like to see develop; I just feel that
your stance may antagonize. In fact, it will do so; whether this helps
(by jarring people into action) or hinders (by making the limited
resources - people and funding - go to someone other than you) we will
have to wait and see.
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G.5AL-IS-P--Ch++(p)***@T0H+Sh0!:`)DNAf

If it's pretentious, then at least it's not the sort that wears a horned helmet
and shrieks about trolls. - Stuart Maconie in Radio Times, 14-20 November 2009.
David Albert Harrell
2010-11-24 06:01:38 UTC
Permalink
On Nov 22, 12:38 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
In message
Post by David Albert Harrell
On Nov 20, 7:05 pm, "J. P. Gilliver (John)"
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
It does seem that the author of the article feels some resentment to
somebody about things he thinks have not been or are not being done.
Even if I knew who to resent, I wouldn’t have the time. I am merely
‘bewildered by the obvious’ when focusing on the history of this
project.
I suppose my response must be, if it's so obvious, why do _you_ think
nobody but you has seen it?
[]
All I know about the persons involved with developing this concept is
how they spell their names.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
I agree that the perception of a changing environment might well make
learning to use such equipment easier - though I don't think I'd go as
far as to state categorically that that is the case.
I am going this far, and further. Idem.
That is your right.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
It may vary with
different technologies, and with different subjects: some subjects may
find it easier to use some technologies with a fixed environment
initially.
Some subjects will no doubt find initial discovery of this alien image
to be quicker and easier while at rest. They will not however be able
to take full advantage of this revised mobile/high-resolution version
of ‘Tactile Image Projection’ while sitting in a chair.
I'd say it is fairly obvious that learning to use a new system is
probably best with a fixed scene, with the variability of motion one
less variable to be handled initially. The fact that any such system is
of little use if immobile is also obvious. Perhaps slight motion, to
keep the sensor nerves from going to sleep, is a good idea - what in
other areas would be called dithering.
To give an analogy, since you like them: I'm not sure if you are
yourself sighted, so this may or may not help. Consider learning to
focus a camera (or the eye): this is easier to do on a static scene than
a moving one, especially if the camera itself is moving. It is quite a
good analogy in fact - having the brain learn to process input from such
a system is quite similar to the task of focusing.
[]
I’m sighted. I’m not suggesting flailing the head camera from side to
side, but instead, as normally done by sighted people, focusing on one
image at a time. If the image moves, fine, you gain experience with a
moving field. If you didn’t understand the movement, try touching the
object with your hands.

Once immersed in this kind of instant feedback environment, learning
opportunities start to expand exponentially. This is perhaps the most
enlightening, yet most elusive, concept affecting the future of this
device.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
This device has been possible to build for decades. And what I am
precisely saying is, the absolute necessity of these two aspects,
mobility and resolution, has clearly been overlooked; otherwise these
aspects would have been addressed by now, as prerequisites to success,
and integrated into its design [at some point] over the last forty
years.
We'll just have to agree to differ on that.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Putting such a device on a stationary subject, would be like inventing
a parachute and then attempting to test it from the deck of a
Actually, parachute training often starts from a fixed platform -
usually, even, without the parachute!
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
evaluate this device with a stationary subject would be like trying to
determine the practical values of ‘a new invention known as the
automobile’ without taking the vehicle out of park.
Neither analogy is completely valid: it is certainly possible that a
test subject might be able to perceive a stationary environment, i. e.
get some benefit from it.
I think both analogies are useful, and the second is definitively
accurate.
A stationary subject could only make relatively insignificant use of
the mobile/high-resolution device I’m suggesting.
When learning to drive, I was initially taken to a quiet area. I think
your analogy - assuming you are suggesting that the subject be given the
device with complete mobility - would be like dumping a new driver in
motorway traffic.
No I only claim that the vehicle must be shifted into gear, at some
point during the training. The program I'm suggesting is intended to
increment variables with ascending complexity.
Post by J. P. Gilliver (John)
However, you later do mention the concept of training
areas, which suggests you are considering starting in a "safe area". I
would say, then, that starting with a stationary image is not a bad
idea. What might be good, though, is to have some small part of the
image moving, right from the start; I certainly wouldn't have the whole
image move.
By default, I suspect most basic indoctrinating activities would
inherently have minimal motion in the field of ‘vision.’ Whatever the
action however, I would be adamantly opposed to any distortion of the
‘video truth.’ Always “Tell it like it is.” And keep the camera
moving with the head [possibly with the eye muscles in future
designs]. Otherwise the subject might very quickly develop a
‘confidence problem’ with the image, which could seriously impede
progress.
Post by J. P. Gilliver (John)
There is some definite crossover with computer vision here. One of the
major difficulties in programming the ability to recognise environment
into computer systems is when the camera moves: the entire frame of
reference then moves. Even moving objects in a stationary image are
difficult, because they not only move, they cause other objects to
appear and disappear (as they come out from behind of, or are obscured
by, the moving object). Programming machine vision has a lot in common
with the current subject: I am assuming you have in mind the completely
blind, rather than those with some limited sight; the latter could
certainly benefit from higher resolution of course, but at least already
have a brain which knows how to handle images, and would just have to
learn to use the new input source, rather than learn both at once.
[]
Coincidentally I wrote a cognitive science paper 15 years ago this
month ‘Topographic Vision 'totally alien formatting'’ in which I make
the case that topographic, or a radar/sonar type representation of
one’s ‘field of vision,’ would be far easier for a computer hosted
entity (or a blind subject) to resolve and assimilate than a
conventional 2-D image. I have however been reluctant to bring up
this perhaps obfuscating complication (concerning blind subjects) for
reasons which by now must be all too obvious.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
I disagree. Idem
You over-use that word - and assume your audience is familiar with it!
[]
I sincerely apologize for the assumption.
Idem [definition] the same, especially a book, article, or chapter
previously referred to
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
That (i. e. experimentation) does sound like the way to go. I don't
think the laser is actually a separate type of stimulation - for the
purposes of this discussion, laser would just be heat.
The laser would be an alternate, presumably more efficient and fluent,
way of delivering heat stimulation.
Ah. If you're suggesting a single laser, but mechanically (probably by
mirrors) scanned, rather than a grid of 100,000 such lasers, then I
agree, it shows good promise. The scanning mechanism implies a certain
volume over the selected area of skin, mind, analogous to the volume
occupied by the tube of a TV set or monitor, but this should not be a
great problem.
[]
I’m suggesting a scanning laser, though perhaps more than one will be
required within a single emitter array.
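A mirror-scanned laser would address the skin much as a CRT beam addresses its phosphor: sweep row by row, dwelling longer (or firing harder) where the image is brighter. A minimal sketch of that mapping follows; the grid size, dwell ceiling, and frame contents are all invented for illustration:

```python
# Sketch: turn a small grayscale frame into a raster-scan schedule for a
# single mirror-steered laser: one (x, y, dwell_us) entry per lit pixel,
# with dwell time proportional to pixel brightness (0-255).

def raster_schedule(frame, max_dwell_us=50):
    """Return (x, y, dwell_us) tuples in TV-style raster order, skipping black pixels."""
    schedule = []
    for y, row in enumerate(frame):
        for x, level in enumerate(row):
            if level > 0:
                schedule.append((x, y, round(max_dwell_us * level / 255)))
    return schedule

frame = [
    [0,   0, 255,   0],
    [0, 128, 255, 128],
    [0,   0, 255,   0],
]
for step in raster_schedule(frame):
    print(step)   # e.g. (2, 0, 50), (1, 1, 25), ...
```

Skipping unlit pixels is one reason a scanned beam could be more "fluent" than a fixed grid of emitters: scan time is spent only where there is something to deliver.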
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
The nearest we have got to this so far has been the Optacon;
unfortunately, the company that made them folded some years ago.
I suspect their resolution was too low, and their subject immobile,
otherwise they would still be around and flourishing.
I assumed you had encountered, or would have researched, it, but since
you didn't: it was a reading device. It had a small camera, which was
placed over the object to be read, and the image was translated into a
small grid of vibrating needles (blunt, obviously), which went under a
fingertip. As such, it wasn't intended to be mobile in the sense of an
eye on the world, purely a reading device; it was however portable. The
camera was moved over the object to be read, like a small handheld
scanner I suppose. Those folk I know who have one are fond of them and
would not give them up. I suspect the disappearance of the company was
due to the improvements in OCR software for scanners making the task for
which they were designed less required; nevertheless, they did give
vision, of layout, font, and non-text items on the subject material, in
a way nothing else did - I remember showing my friend, for example, that
the artist on a record sleeve was wearing a striped shirt, something she
could not have perceived in any other way.
I think, if the company had survived a little longer, they might be into
eye-type vision by now - but we will never know, sadly.
[]
They did sort of a peephole adaptation of the device, but they needed
to enlarge the emitter peephole to a widescreen, and then point the
camera at the interactive moving world, instead of at text and
isolated portions of still flat images.

The part about the striped shirt is particularly interesting; now
consider that this much higher resolution pressure sensation is
being delivered to a dermal field many times larger, and that the
image on the record sleeve is moving.

Notice that when you ‘see’ something with this device, you can
frequently reach out and touch it at the same moment. I cannot
emphasize enough the importance of this short-term feedback loop, yet
I cannot prove its importance.

Imagine for instance a fully equipped subject sitting in a low swivel-
chair in the center of a brightly lit small [7 foot diameter]
hexagonal room, with the six walls alternately painted black and
white. Now hand the subject a checkerboarded volleyball and close the
seamless door. All I’m claiming is, the average subject would begin
progressively adapting to the device from this position. I would
further expect such adaptation, in many individuals, to be timely and
prolific; adaptation being the elemental ‘strong force’ of human
continuity.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Writing in the third person isn't really a good
idea for newsgroups; in my opinion it isn't for scientific papers
either, as it makes them sound cold and dead, but many scientists seem
to feel more secure if things are written that way.
A third person perspective here is unavoidable but fortunately
irrelevant. And I don’t think this text is cold or dead.
It isn't unavoidable, just generally easy to fall into. Certainly, in
discussions of the newsgroup sort, I'd say try to get into using first
person more.
[]
Post by David Albert Harrell
I don’t think many people doubt the parameters I’m proposing are
technically possible [ie maximum attainable resolution and functional
mobility]; these are only a question of time and resources. So we
know we can get there, the current issue is ‘do we have the vision’
that produces the enthusiasm we’ll need to make the journey.
I agree. I guess my main point of disagreement with you is your claim
that something has been overlooked - possibly the combination of high
resolution and mobility; I would contend that it is only limited
development resources that have prevented implementation.
Resources are clearly plentiful, it’s ‘vision’ that is blurred and
limited.
Post by J. P. Gilliver (John)
I'm sure there
are plenty of people who have the vision (an unfortunate word in this
context, but you know what I mean), and are bursting with ideas they'd
like to try, but are constrained by lack of funds and time. (Time, in
the sense that however keen the individual, they eventually need to
leave the research institute and actually make a living for themselves;
what is needed is for the institution to take on the continuation of the
project. And getting people to continue someone else's work is always
harder.)
[]
Post by David Albert Harrell
I agree. Visual cortex implants appear not to be viable for multiple
reasons.
An evil side-thought: there are certain areas of the body with high
nerve-densities, but I think the social acceptability of placing the
transducer onto those areas might take some overcoming!
[]
I suggest coloring outside of such incredible lines if a significant
advantage is discovered.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Current and past low-resolution immobile designs are a matter of
record. The potential of this device has not yet been demonstrated,
and cannot currently be demonstrated by any means known to me.
But you have said it is well within the capabilities to make it.
[]
Technically true, but it does not yet exist; hence no demonstration is
currently possible.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Assume that we have already built such a device, ie a moving picture
is being delivered to the subject in perceptible format; if the
subject does not proceed to interface with a real-time environment,
one cannot expect a relevant learning cycle, or even useful and
sustained cerebral discovery of the area, to occur.
The initial assumption is a big one!
I see this as more apparent, than assumptive.
It was you who started the paragraph with the word "Assume"!
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
You're still not going to reach the 100,000 pixels you want.
Possibly not, but such limits clearly need to be quantified to
discover optima. Idem.
Agreed, apart from the repetition of Idem which I am beginning to find
wearing, sorry.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
bodysuit providing coverage of all available potentially useful dermal
areas may even be proven most effective. [Notice that the resolution
of the ‘x,y grid’ should approach, or surpass to some degree, the
density of the dermal neural receptors being targeted.]
I suspect the brain would not really adapt to signals received from such
a large area, though I may be wrong.
In contrast, I suspect the larger the area being stimulated, the more
readily it will be adopted and assimilated.
I did not express myself clearly - it was not the size of the area being
used so much as the disparate nature about which I have doubts. I think
an area wrapped around the body - so that it is no longer flat - might
cause problems for the brain. However, I'd be happy to be proved wrong.
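For a rough feel of the numbers behind this disagreement, a region's "pixel budget" can be estimated as its area divided by the square of its two-point discrimination distance. The discrimination distances below are rounded textbook values and the areas are loose estimates of my own, so treat this strictly as back-of-envelope:

```python
# Back-of-envelope pixel budget per skin region: a region can carry roughly
# (area) / (two-point discrimination distance)^2 independently resolvable points.
# Discrimination distances are rounded textbook figures; areas are rough guesses.

regions = {
    # name: (area in cm^2, two-point discrimination in mm)
    "fingertip": (4, 3),
    "palm": (80, 10),
    "back": (2500, 40),
    "abdomen": (1500, 30),
}

for name, (area_cm2, tpd_mm) in regions.items():
    pixels = (area_cm2 * 100) / (tpd_mm ** 2)   # cm^2 -> mm^2 before dividing
    print(f"{name:10s} ~{pixels:6.0f} resolvable points")
```

On these rough figures a large low-acuity field like the back still carries more total resolvable points than a fingertip, which is the nub of the bodysuit argument, though every region falls far short of 100,000.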
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Furthermore, I am convinced the only reason a Tactile Image Projection
Post by David Albert Harrell
system has not already been developed and adapted into practical use
for the blind, is because the inadequacies of low resolution, and this
most critical prerequisite of ‘real-time mobile interaction with the
environment,’ are being overlooked.
You write as if something is not being done because no-one has realised
it is a problem. I am sure this is not the case.
I’m certain this is precisely the case, and the reason “something is
not being done.” And I’m not claiming no one realizes this is ‘a’
problem, I’m declaring no one realizes this is ‘the’ problem.
OK, you go ahead and declare it. We may not agree on various aspects of
the problem and the solution, but the subject needs forceful individuals
like you to actually get things done.
Post by David Albert Harrell
The only way to prove or disprove what I’m saying is to build a higher
resolution prototype intended for portable use. Notice it is not
possible to prove such a ‘mysterious cognitively dependent’ device
will work, without having such a device?
I think there might be some mileage in a low-resolution but portable
device. You may be right that it won't work, but you may be wrong.
Similarly a high-resolution but not very mobile device.
I would resist compromise on these points, since this would only slow
down development and increase costs. Too much time and money has
already been wasted firing far too low to actually hit the target.
And notice I’m suggesting an entirely different target; I’m not aiming
to make standard text, symbols, or still images more available to the
blind, but instead to open a ‘video picture window’ through which they
may view the live world.

This ‘mobile/high-resolution version of Tactile Image Projection’
should perhaps simply be called ‘Tactile Video Projection.’ The
former handle can be, and apparently has been, interpreted as some
form of fingertip text and slideshow.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
If developers understood that a static device cannot approach full
functionality, they would by now [after 43 years] have focused on, and
solved, immobility; even if this required a harness of extended
wires. As for resolution, designs verify they clearly don’t see this
necessity either, whereas I suggest proceeding at once to discover and
Not necessarily: designs reflect what is thought to be achievable.
Doesn't mean the designer wouldn't like to go further. There's no point
in designing something you can't make.
I think few are disputing that the technology is currently available.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
enlist whatever ‘tactile neural receptors’ are available, since
‘generally low density’ appears indeed to be a problem.
You continue to speak as if no-one has done any such work.
Yes I’m suggesting thousands as opposed to a few hundred sensations;
ie this crate won’t fly until we give it [at least] ten times the
current wingspan.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
A bodysuit may be necessary, however I am confident the brain can
flatten this signal out and even piece it together for practical use
(providing symmetry is maintained). (And no, I cannot prove this; and
If you think the brain can flatten the sensor surface (about which I
have doubts but never mind), why do you think symmetry is important?
Interesting question. Generally symmetry would seem to be part of
nature’s harmony; and I don’t anticipate any advantage to ignoring
this key signature when placing emitters.

More specifically:
1) Symmetry has inherent order, tandem equity with an implied
center.
2) The brain is already adept at receiving orderly sensations from
symmetrical fields such as the eyes, ears, nose, and skin.
3) The retina is symmetrical.

Abstractly and psychologically, the brain ‘equates symmetry with
beauty,’ perhaps enhancing any given appetite to embrace the receptor
field. Notice also that you can arrange for such ‘seductive symmetry’
during adaptation training, provided the emitter array is symmetrical.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
yes, I know better than to try.) Resolution will never be sharp by HD
Not sure why you don't want to try.
Post by David Albert Harrell
standards, but I am positive one can far surpass the postage-stamp
dimensions currently being ventured, and that one will certainly never
discover top resolution limits while addressing isolated small areas,
hands, tongues, and finger tips.
There's little need for HD, even for the sighted!
Resolution and size of interface area are separate matters, though
connected to some extent. Just because an area is small doesn't
mean it has to be low resolution, and a large area doesn't necessarily
mean high resolution either.
Apparently, even grids being placed in areas of renowned high density
are designed with comparatively low hardware resolution, ie
recognizing and addressing these high density areas, but not actually
taking advantage of them.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Finally notice this post is not about ‘how tactile image projection
works,’ it is about why it has not worked, and why it is not working,
ie the current design is failing to achieve even close to the
potential of a mobile/high-resolution version. And when I refer to
You seem to be dismissing any gain to be made by the intermediate
stages. You may be right that there is a level of resolution and
mobility at which things will suddenly start to work, but it is by no
means proven. And dismissing such work will alienate those who might
help you.
If pointing out crippling design flaws, within a foundering endeavor,
did in fact ‘dismiss someone’s work,’ the whistle would still need to
be blown, and without delay.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
the ‘potential’ of this device, I’m suggesting that some of the more
adaptable and tenacious blind subjects will be able to read street
signs, watch a movie, or even play tennis, allowing that depth
perception problems will be a limiter in any 3D endeavor.
You'd be surprised how little that actually matters: I don't have
stereoscopic vision, which is the basis for all artificial 3D systems,
and yet I have little trouble with depth. (I am a little clumsy with
very fine/close work, but at the distances involved in, for example,
driving, I have no trouble.)
Very encouraging.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Realistically all that has been demonstrated thus far, after over 40
years of ‘off and on’ research and development, is that ‘tactile-grid
to brain reception’ is possible. Ok, the engine starts, great! Now
turn on the headlights and shift into gear.
But don't dismiss the possibility that you can get somewhere without
headlights, during the day for example.
Not unless you shift into gear. And if you start shifting gears at
night, without headlight resolution, you’re going to bump into many
objects you should have recognized from afar. Ie the solution is in
two parts; I wouldn’t expect to uncover the potential of this device
without implementing both.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Thank you for your polite and obviously sincere and concerned comments
and questions.
David Albert Harrell
It is a subject I would certainly like to see develop; I just feel that
your stance may antagonize. In fact, it will do so; whether this helps
(by jarring people into action) or hinders (by making the limited
resources - people and funding - go to someone other than you) we will
have to wait and see.
--
J. P. Gilliver
I try to be as diplomatic as possible without compromising the
facts.

Thanks once again for your insightful comments, questions and
encouragement.

David Albert Harrell
J. P. Gilliver (John)
2010-11-26 00:48:29 UTC
Permalink
[Note to other readers: this post is out of sequence in the thread.]

In message
Post by David Albert Harrell
On Nov 22, 12:38 pm, "J. P. Gilliver (John)"
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
I suppose my response must be, if it's so obvious, why do _you_ think
nobody but you has seen it?
[]
All I know about the persons involved with developing this concept is
how they spell their names.
Ah, so your assumption that something has been overlooked - basically
the combination of mobility and high resolution, if I understand you -
is just that, an assumption, on your part. Granted, it could equally be
said that the suggestion that this is not the case is also an assumption
on my part, out of touch with the latest developments in the field as I
slightly am. It would be nice if others joined our debate - I hope
we've not driven them out! - especially those with personal experience
of the matter, i. e. VH/VI people.

Do you actually know and live or work with any blind people? I do,
though I admit this particular subject doesn't come up much in
conversation.

(As a brief aside: I believe some work has been done with auditory
input, and in that field, mobility has indeed been a fairly major
consideration, the input to the subject being I think something like a
pair of goggles; I don't know details though. I know that one of my
friends does use sound extensively when moving - he can "hear" a
lamp-post, bus shelter, or similar object, when walking, by how it
affects the ambient sound field; the fact that he has excellent hearing,
and also very minimal vision - he is vaguely aware of shapes on a good
day [which seems to have nothing to do with light level, in his case] -
may be relevant. The white cane is used - in fact this is part of the
training - in this way, not just as an indication to others of the
user's condition: tapping with it provides an impulse, not too
dissimilar to a radar or sonar ping - for when there is insufficient, or
the wrong kind, of ambient noise: or so I have been told/read - neither
of my friends tap with theirs, in his case because his hearing is
sufficient that he doesn't need to, and I fear in her case because her
hearing is too poor. But I've certainly heard that this is part of the
training in use of the cane.)
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
would say, then, that starting with a stationary image is not a bad
idea. What might be good, though, is to have some small part of the
image moving, right from the start; I certainly wouldn't have the whole
image move.
By default, I suspect most basic indoctrinating activities would
inherently have minimal motion in the field of ‘vision.’ Whatever the
action however, I would be adamantly opposed to any distortion of the
‘video truth.’ Always “Tell it like it is.” And keep the camera
moving with the head [possibly with the eye muscles in future
I'd certainly agree with maintaining some sort of feedback/link between
the motion of the camera and that of part of the subject, though not
necessarily head - that probably depends on whether the subject has good
binaural hearing or not. If they do, and I suppose it's reasonable to
start with that assumption, then they would indeed already be familiar
with the concept that moving the head changes the perceived environment;
for those with poorer hearing a hand- or shoulder- or chest-mounted
camera might help.

(I don't think there's anything to be gained in incorporating a link to
the eye muscles, unless the subject has some residual vision anyway and
is thus familiar with what those muscles are for. But as you say, that
consideration would be for future designs anyway.)
[]
Post by David Albert Harrell
Coincidentally I wrote a cognitive science paper 15 years ago this
month ‘Topographic Vision 'totally alien formatting'’ in which I make
the case that topographic, or a radar/sonar type representation of
one’s ‘field of vision,’ would be far easier for a computer hosted
entity (or a blind subject) to resolve and assimilate than a
conventional 2-D image. I have however been reluctant to bring up
this perhaps obfuscating complication (concerning blind subjects) for
reasons which by now must be all too obvious.
Understood. (Can't quite see it myself, but I haven't read your paper.
What journal or conference was it in? [Just out of curiosity - I'm
afraid I'm not going to go and read it!])
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
I disagree. Idem
You over-use that word - and assume your audience is familiar with it!
[]
I sincerely apologize for the assumption.
Idem [definition] the same, especially a book, article, or chapter
previously referred to
Ah. I think the word is "ibidem", and the usual abbreviation "ibid.".
(P. S.: my spell checker agrees!)
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Ah. If you're suggesting a single laser, but mechanically (probably by
mirrors) scanned, rather than a grid of 100,000 such lasers, then I
agree, it shows good promise. The scanning mechanism implies a certain
volume over the selected area of skin, mind, analogous to the volume
occupied by the tube of a TV set or monitor, but this should not be a
great problem.
[]
I’m suggesting a scanning laser, though perhaps more than one will be
required within a single emitter array.
Right. Several small scanners would reduce the thickness of the
equipment, at the cost of increased complexity.
[]
Post by David Albert Harrell
Notice that when you ‘see’ something with this device, you can
frequently reach out and touch it at the same moment. I cannot
emphasize enough the importance of this short-term feedback loop, yet
I cannot prove its importance.
Therefore, you are pushing hard something which may not be correct. For
what it's worth, I think you are right, though.
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
I don’t think many people doubt the parameters I’m proposing are
technically possible [ie maximum attainable resolution and functional
mobility]; these are only a question of time and resources. So we
know we can get there, the current issue is ‘do we have the vision’
that produces the enthusiasm we’ll need to make the journey.
I agree. I guess my main point of disagreement with you is your claim
that something has been overlooked - possibly the combination of high
resolution and mobility; I would contend that it is only limited
development resources that have prevented implementation.
Resources are clearly plentiful, it’s ‘vision’ that is blurred and
limited.
If resources were so plentiful, there are plenty of other areas that
would have been developed too (not least, for example, a Braille cell of
a reasonable price; given the mechanical complexity of [ordinary
ink-based] print heads, and the cost they are made for, and other pieces
of common technology, I find the cost of Braille displays criminal,
and that of assorted other VH equipment).
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
An evil side-thought: there are certain areas of the body with high
nerve-densities, but I think the social acceptability of placing the
transducer onto those areas might take some overcoming!
[]
I suggest coloring outside of such incredible lines if a significant
advantage is discovered.
I was thinking of the head of the penis, and possibly the breasts (I
know those are sensitive, but whether they have similar _resolution_ I
don't know). I suspect equipment connected there - and work in the
development thereof - would not be that acceptable (especially in some
cultures).
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
I think there might be some mileage in a low-resolution but portable
device. You may be right that it won't work, but you may be wrong.
Similarly a high-resolution but not very mobile device.
I would resist compromise on these points, since this would only slow
down development and increase costs. Too much time and money has
already been wasted firing far too low to actually hit the target.
And notice I’m suggesting an entirely different target; I’m not aiming
to make standard text, symbols, or still images more available to the
blind, but instead to open a ‘video picture window’ through which they
may view the live world.
Yes, I know that's what you are envisaging; so am I. However, dismissing
the transitional stages could alienate you from some researchers, and/or
mean you'll have to wait longer (for a more generous benefactor): I feel
that the in between stage - especially mobility-with-limited-resolution
rather than high(er)-resolution-but-fixed, because of the feedback
aspect - is worth exploring, and AFAIK hasn't been.
Post by David Albert Harrell
This ‘mobile/high-resolution version of Tactile Image Projection’
should perhaps simply be called ‘Tactile Video Projection.’ The
former handle can be, and apparently has been, interpreted as some
form of fingertip text and slideshow.
Yes, you do need to make clear you are talking about seeing (the world),
not just reading (letters or diagram shapes). Maybe "non-light video
perception" - "tactile", though it doesn't strictly mean so, means
fingertip to a lot of people. (I'd avoid "projection" until you're sure
that is the way to go.)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
If developers understood that a static device cannot approach full
functionality, they would by now [after 43 years] have focused on, and
solved, immobility; even if this required a harness of extended
wires. As for resolution, designs verify they clearly don’t see this
necessity either, whereas I suggest proceeding at once to discover and
Not necessarily: designs reflect what is thought to be achievable.
Doesn't mean the designer wouldn't like to go further. There's no point
in designing something you can't make.
I think few are disputing that the technology is currently available.
Theoretically, yes. I'm just being realistic - as an engineer rather
than a scientist, if you like.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
enlist whatever ‘tactile neural receptors’ are available, since
‘generally low density’ appears indeed to be a problem.
You continue to speak as if no-one has done any such work.
Yes I’m suggesting thousands as opposed to a few hundred sensations;
ie this crate won’t fly until we give it [at least] ten times the
current wingspan.
That may be the case - but if someone wants to try with three times,
especially if that is all they have the resources (time as well as
money) for, I'd not dissuade them. I hear what you say about wasting
effort, but "saving it up" assumes that the funding (etc.) available
comes from sources who are willing to join forces, which may well not be
the case, for a variety of reasons.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
A bodysuit may be necessary, however I am confident the brain can
flatten this signal out and even piece it together for practical use
(providing symmetry is maintained). (And no, I cannot prove this; and
If you think the brain can flatten the sensor surface (about which I
have doubts but never mind), why do you think symmetry is important?
Interesting question. Generally symmetry would seem to be part of
nature’s harmony; and I don’t anticipate any advantage to ignoring
this key signature when placing emitters.
Interesting. When you were talking about a body suit, I thought you just
meant wrapping round more of the body to get more sensors (nerve
endings).
Post by David Albert Harrell
1) Symmetry has inherent order, tandem equity with an implied
center.
2) The brain is already adept at receiving orderly sensations from
symmetrical fields such as the eyes, ears, nose, and skin.
I'd say of those, the only one where the symmetry is important in
perceiving the world - basically, at a distance - is hearing, where the
relative levels (and slight delays between) sounds reaching the two ears
give some indication of the direction of the source; I don't think we
smell directionally, for example (at least not using inter-nostril
differences, though we might move the head). As for vision, as I've said
I manage without stereoscopic, and I suspect it is something the
learning of which would be unnecessary initially. [Both my eyes work, by
the way - I (possibly due to being cross-eyed when very small and being
operated on at about 5) just don't have the brain wiring that allows me
to use them together to get the extra dimension.]
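The directional-hearing cue described here can be put in rough numbers: for a distant source, the extra path to the far ear is approximately head width × sin(azimuth), yielding delays of a few hundred microseconds. The head width and the straight-path model below are my own illustrative assumptions (real head models, e.g. Woodworth's, add a term for the path around the skull):

```python
import math

# Crude interaural time difference (ITD): extra path to the far ear
# approximated as head_width * sin(azimuth), divided by the speed of sound.
# Numbers are illustrative, not a measured head model.

HEAD_WIDTH_M = 0.18        # ~18 cm between the ears (assumed)
SPEED_OF_SOUND = 343.0     # m/s in air at ~20 C

def itd_us(azimuth_deg):
    """Approximate interaural delay in microseconds for a distant source."""
    return 1e6 * HEAD_WIDTH_M * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

for az in (0, 30, 90):
    print(f"{az:3d} deg -> {itd_us(az):5.0f} us")   # 0, ~262, ~525 microseconds
```

Delays this small are below conscious perception yet reliably localizable, which is some evidence that the brain can exploit very fine inter-channel timing once it learns what the channel means.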
Post by David Albert Harrell
3) The retina is symmetrical.
If you mean there are two eyes, then see above (plus I think using twice
as many cameras/transducers would be needless expense).
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
You seem to be dismissing any gain to be made by the intermediate
stages. You may be right that there is a level of resolution and
mobility at which things will suddenly start to work, but it is by no
means proven. And dismissing such work will alienate those who might
help you.
If pointing out crippling design flaws, within a foundering endeavor,
did in fact ‘dismiss someone’s work,’ the whistle would still need to
be blown, and without delay.
However, just _claiming_ that too low a resolution, for example, is a
"crippling design flaw" isn't going to make potential collaborators love
you. As I've said, there may be a level below which it doesn't "just
work", but work to find what that level is may (in fact, if that
particular goal is being addressed, _will_) involve intermediate
resolutions. (There _may_ well also be a resolution beyond which it's
not worth going - or at least beyond which the returns fall off sharply.)
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
But don't dismiss the possibility that you can get somewhere without
headlights, during the day for example.
Not unless you shift into gear. And if you start shifting gears at
night, without headlight resolution, you’re going to bump into many
objects you should have recognized from afar. Ie the solution is in
two parts; I wouldn’t expect to uncover the potential of this device
without implementing both.
But not travelling in the daytime because you haven't got headlights yet
is also a bit odd!
[]
Post by David Albert Harrell
I try to be as diplomatic as possible without compromising the
facts.
Thanks once again for your insightful comments, questions and
encouragement.
David Albert Harrell
I hope others join in (on the subject, rather than just correcting my
grammar - though I'm quite happy for him to try to do that, even if, as a
non-English-speaker, some of his "corrections" are rather odd!).
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G.5AL-IS-P--Ch++(p)***@T0H+Sh0!:`)DNAf

The fool doth think he is wise, but the wise man knows himself to be a fool.
David Albert Harrell
2010-11-28 07:54:28 UTC
Permalink
On Nov 25, 4:48 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
[Note to other readers: this post is out of sequence in the thread.]
In message
Post by David Albert Harrell
On Nov 22, 12:38 pm, "J. P. Gilliver (John)"
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
I suppose my response must be, if it's so obvious, why do _you_ think
nobody but you has seen it?
[]
All I know about the persons involved with developing this concept is
how they spell their names.
Ah, so your assumption that something has been overlooked - basically
the combination of mobility and high resolution, if I understand you -
is just that, an assumption, on your part.
Not only overlooked, but tripped over, ie there apparently have been
unheeded wake-up calls from sources other than myself. In 1998 a
thread entitled ‘Rickert's paradigm’ ran in comp.ai.philosophy telling
of a serendipitous event during the early 70’s ‘Bach Y Rita
experiments’ with their device.

An operator administering the device to a subject, thinking the
machine was turned off, suddenly picked up and moved the camera. The
subject, still hooked up to the device, ducked, “as if an object were
coming at him.”

This event however had no apparent effect on subsequent experiments or
designs.
Post by J. P. Gilliver (John)
Granted, it could equally be
said that the suggestion that this is not the case is also an assumption
on my part, out of touch with the latest developments in the field as I
slightly am. It would be nice if others joined our debate - I hope
we've not driven them out! - especially those with personal experience
of the matter, i. e. VH/VI people.
All you have to do, to fall behind in the latest technologies, is take
a 30 minute nap.

Hopefully it’s merely my candid high level of confidence, in the
mobile high-resolution ‘Tactile Video Reception’ [TVR] design, that is
perhaps mistaken for intolerance. I have the highest regard for prior
work done with low-resolution static ‘text and symbol’ knothole
versions; these trials have proven ‘tactile 2D field reception’ is
possible. This removes a tremendous obstacle which was preventing
many from even considering the TVR design. In any case however, I’m
not looking to drive this project, only to jump start it.
Post by J. P. Gilliver (John)
Do you actually know and live or work with any blind people? I do,
though I admit this particular subject doesn't come up much in
conversation.
I have no experience with blind people. My primary relevant field of
interest is cognitive science.
Post by J. P. Gilliver (John)
(As a brief aside: I believe some work has been done with auditory
input, and in that field, mobility has indeed been a fairly major
consideration, the input to the subject being I think something like a
pair of goggles; I don't know details though. I know that one of my
friends does use sound extensively when moving - he can "hear" a
lamp-post, bus shelter, or similar object, when walking, by how it
affects the ambient sound field; the fact that he has excellent hearing,
and also very minimal vision - he is vaguely aware of shapes on a good
day [which seems to have nothing to do with light level, in his case] -
may be relevant. The white cane is used - in fact this is part of the
training - in this way, not just as an indication to others of the
user's condition: tapping with it provides an impulse, not too
dissimilar to a radar or sonar ping - for when there is insufficient, or
the wrong kind, of ambient noise: or so I have been told/read - neither
of my friends tap with theirs, in his case because his hearing is
sufficient that he doesn't need to, and I fear in her case because her
hearing is too poor. But I've certainly heard that this is part of the
training in use of the cane.)
[]
Whether a topo format, 2D video, or an overlain combination of both is
eventually developed, this form of ‘single ping focusing’ sonar is a
promising optional supplementary channel, to gain an additional
perspective, on the subject’s environment.
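As a side note on that supplementary channel: a single sonar ping reduces
to a simple time-of-flight calculation. The sketch below is an
illustrative back-of-envelope example, not anything proposed in this
thread; the 343 m/s figure is the approximate speed of sound in air.

```python
# Illustrative single-ping range estimate: distance from the round-trip
# time of an echo. The echo travels out and back, hence the divide-by-two.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C (assumed figure)

def echo_distance(round_trip_seconds):
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_SOUND * round_trip_seconds / 2

# A 20 ms round trip places the object about 3.43 m away:
print(echo_distance(0.02))
```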
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
would say, then, that starting with a stationary image is not a bad
idea. What might be good, though, is to have some small part of the
image moving, right from the start; I certainly wouldn't have the whole
image move.
By default, I suspect most basic indoctrinating activities would
inherently have minimal motion in the field of ‘vision.’ Whatever the
action however, I would be adamantly opposed to any distortion of the
‘video truth.’ Always “Tell it like it is.” And keep the camera
moving with the head [possibly with the eye muscles in future
I'd certainly agree with maintaining some sort of feedback/link between
the motion of the camera and that of part of the subject, though not
necessarily head - that probably depends on whether the subject has good
binaural hearing or not. If they do, and I suppose it's reasonable to
start with that assumption, then they would indeed already be familiar
with the concept that moving the head changes the perceived environment;
for those with poorer hearing a hand- or shoulder- or chest-mounted
camera might help.
The visual cortex, which will presumably be involved with reception,
may be anticipating [perhaps even when blind from birth] consistent
neck/eye-restricted forward-looking simplicity in the ‘field of
vision.’
Post by J. P. Gilliver (John)
(I don't think there's anything to be gained in incorporating a link to
the eye muscles, unless the subject has some residual vision anyway and
is thus familiar with what those muscles are for. But as you say, that
consideration would be for future designs anyway.)
[]
Yes this presupposes an ocular camera implant and useful eye muscle
coordination; but it seems a logical progression for eventual
experimentation, perhaps seeking more fluent and natural field
selection.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Coincidentally I wrote a cognitive science paper 15 years ago this
month ‘Topographic Vision 'totally alien formatting'’ in which I make
the case that topographic, or a radar/sonar type representation of
one’s ‘field of vision,’ would be far easier for a computer hosted
entity (or a blind subject) to resolve and assimilate than a
conventional 2-D image. I have however been reluctant to bring up
this perhaps obfuscating complication (concerning blind subjects) for
reasons which by now must be all too obvious.
Understood. (Can't quite see it myself, but I haven't read your paper.
What journal or conference was it in? [Just out of curiosity - I'm
afraid I'm not going to go and read it!])
I don’t recall whether this paper was ever published beyond Usenet. I
reviewed it a few days ago and it now appears a little long, yet
incomplete, perhaps needing a rewrite. The message however was that
topo is many times simpler than 2D; which may be very important if
resolution does turn out to be a significant limiting factor, since
topo would appear to pose fewer deciphering problems, with more
information, while requiring fewer pixels to represent.

If you wish to read it try running a ‘Search Groups’ on the string,
Topographic Vision "totally alien formatting"
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
I disagree. Idem
You over-use that word - and assume your audience is familiar with it!
[]
I sincerely apologized for the assumption.
Idem [definition] the same, especially a book, article, or chapter
previously referred to
Ah. I think the word is "ibidem", and the usual abbreviation "ibid.".
(P. S.: my spell checker agrees!)
[]
According to the Encarta dictionary all three are correct, having
about the same meaning.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Ah. If you're suggesting a single laser, but mechanically (probably by
mirrors) scanned, rather than a grid of 100,000 such lasers, then I
agree, it shows good promise. The scanning mechanism implies a certain
volume over the selected area of skin, mind, analogous to the volume
occupied by the tube of a TV set or monitor, but this should not be a
great problem.
[]
I’m suggesting a scanning laser, though perhaps more than one will be
required within a single emitter array.
Right. Several small scanners would reduce the thickness of the
equipment, at the cost of increased complexity.
[]
If the bodysuit indeed covers most of the body as planned, different
areas will require individual laser scan emitters.
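For a rough sense of why a bodysuit might need several emitters, here is
a hypothetical back-of-envelope sketch. Apart from the 100,000-pixel
target discussed in this thread, every figure below (points addressable
per second, refresh rate) is an illustrative assumption, not a measured
spec of any real scanner.

```python
import math

# How many scanning emitters are needed to refresh a given pixel count
# at a given frame rate, if each scanner can address a fixed number of
# points per second? Purely illustrative arithmetic.

def scanners_needed(total_pixels, points_per_second, frame_rate):
    """Each scanner must revisit its share of pixels frame_rate times a second."""
    pixels_per_scanner = points_per_second // frame_rate
    return math.ceil(total_pixels / pixels_per_scanner)

# 100,000 tactile pixels, 500,000 addressable points/s, 25 Hz refresh:
print(scanners_needed(100_000, 500_000, 25))  # -> 5 emitters
```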
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Notice that when you ‘see’ something with this device, you can
frequently reach out and touch it at the same moment. I cannot
emphasize enough the importance of this short-term feedback loop, yet
I cannot prove its importance.
Therefore, you are pushing hard something which may not be correct. For
what it's worth, I think you are right, though.
[]
Well that’s two.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
I don’t think many people doubt the parameters I’m proposing are
technically possible [ie maximum attainable resolution and functional
mobility]; these are only a question of time and resources. So we
know we can get there, the current issue is ‘do we have the vision’
that produces the enthusiasm we’ll need to make the journey.
I agree. I guess my main point of disagreement with you is your claim
that something has been overlooked - possibly the combination of high
resolution and mobility; I would contend that it is only limited
development resources that have prevented implementation.
Resources are clearly plentiful, it’s ‘vision’ that is blurred and
limited.
If resources were so plentiful, there are plenty of other areas that
would have been developed too (not least, for example, a Braille cell of
a reasonable price; given the mechanical complexity of [ordinary
ink-based] print heads, and the cost they are made for, and other pieces
of common technology, I find it criminal the cost of Braille displays.
And assorted other VH equipment).
[]
Post by David Albert Harrell
Post by J. P. Gilliver (John)
An evil side-thought: there are certain areas of the body with high
nerve-densities, but I think the social acceptability of placing the
transducer onto those areas might take some overcoming!
[]
I suggest coloring outside of such incredible lines if a significant
advantage is discovered.
I was thinking of the head of the penis, and possibly the breasts (I
know those are sensitive, but whether they have similar _resolution_ I
don't know). I suspect equipment connected there - and work in the
development thereof - would not be that acceptable (especially in some
cultures).
[]
I wouldn’t rule out any area that was indicated by results. However
smaller areas of high density appear to have insufficient maximums;
which in any case forces exploration and cultivation of these larger
lower density areas. Such larger areas inherently add ‘space’ to the
engineering equation; this space providing opportunities for creative
technical and biological innovations.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
I think there might be some mileage in a low-resolution but portable
device. You may be right that it won't work, but you may be wrong.
Similarly a high-resolution but not very mobile device.
I would resist compromise on these points, since this would only slow
down development and increase costs. Too much time and money has
already been wasted firing far too low to actually hit the target.
And notice I’m suggesting an entirely different target; I’m not aiming
to make standard text, symbols, or still images more available to the
blind, but instead to open a ‘video picture window’ through which they
may view the live world.
Yes, I know that's what you are envisaging; so am I. However, dismissing
the transitional stages could alienate you from some researchers, and/or
mean you'll have to wait longer (for a more generous benefactor): I feel
that the in between stage - especially mobility-with-limited-resolution
rather than high(er)-resolution-but-fixed, because of the feedback
aspect - is worth exploring, and AFAIK hasn't been.
I appreciate the risk you are pointing out, however I don’t expect an
aggressive resolution target would be relatively ‘a great deal more
expensive’ than the limited venture you are proposing; and consider
how costly it would be in the long run if ‘production motivating
potential’ was not realized due to a design cutback. Resolution is
probably too basic a graphic ingredient to leave out of the picture.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
This ‘mobile/high-resolution version of Tactile Image Projection’
should perhaps simply be called ‘Tactile Video Projection.’ The
former handle can be, and apparently has been, interpreted as some
form of fingertip text and slideshow.
Yes, you do need to make clear you are talking about seeing (the world),
not just reading (letters or diagramme shapes). Maybe "non-light video
perception" - "tactile", though it doesn't strictly mean so, means
fingertip to a lot of people. (I'd avoid "projection" until you're sure
that is the way to go.)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
If developers understood that a static device cannot approach full
functionality, they would by now [after 43 years] have focused on, and
solved, immobility; even if this required a harness of extended
wires. As for resolution, designs verify they clearly don’t see this
necessity either, whereas I suggest proceeding at once to discover and
Not necessarily: designs reflect what is thought to be achievable.
Doesn't mean the designer wouldn't like to go further. There's no point
in designing something you can't make.
I think few are disputing that the technology is currently available.
Theoretically, yes. I'm just being realistic - as an engineer rather
than a scientist, if you like.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
enlist whatever ‘tactile neural receptors’ are available, since
‘generally low density’ appears indeed to be a problem.
You continue to speak as if no-one has done any such work.
Yes I’m suggesting thousands as opposed to a few hundred sensations;
ie this crate won’t fly until we give it [at least] ten times the
current wingspan.
That may be the case - but if someone wants to try with three times,
especially if that is all they have the resources (time as well as
money) for, I'd not dissuade them. I hear what you say about wasting
effort, but "saving it up" assumes that the funding (etc.) available
comes from sources who are willing to join forces, which may well not be
the case, for a variety of reasons.
Post by David Albert Harrell
Post by J. P. Gilliver (John)
Post by David Albert Harrell
A bodysuit may be necessary, however I am confident the brain can
flatten this signal out and even piece it together for practical use
(providing symmetry is maintained). (And no, I cannot prove this; and
If you think the brain can flatten the sensor surface (about which I
have doubts but never mind), why do you think symmetry is important?
Interesting question. Generally symmetry would seem to be part of
nature’s harmony; and I don’t anticipate any advantage to ignoring
this key signature when placing emitters.
Interesting. When you were talking about a body suit, I thought you just
meant wrapping round more of the body to get more sensors (nerve
endings).
Any part or portion that proves useful can contribute to overall
resolution.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
1) Symmetry has inherent order, tandem equity with an implied
center.
2) The brain is already adept at receiving orderly sensations from
symmetrical fields such as the eyes, ears, nose, and skin.
I'd say of those, the only one where the symmetry is important in
perceiving the world - basically, at a distance - is hearing, where the
relative levels (and slight delays between) sounds reaching the two ears
give some indication of the direction of the source; I don't think we
smell directionally, for example (at least not using inter-nostril
differences, though we might move the head). As for vision, as I've said
I manage without stereoscopic, and I suspect it is something the
learning of which would be unnecessary initially. [Both my eyes work, by
the way - I (possibly due to being cross-eyed when very small and being
operated on at about 5) just don't have the brain wiring that allows me
to use them together to get the extra dimension.]
The olfactory receptor neurons, and the eyes, gather sensory
information from symmetrical fields within the body, though in the
former I don’t think the symmetry is intended by nature to be a
distinguishing factor in perception.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
3) The retina is symmetrical.
If you mean there are two eyes, then see above (plus I think using twice
as many cameras/transducers would be needless expense).
[]
Both eyes would not necessarily get implants, even though this would
probably make coordination impossible.
And a single retina is a symmetrical field within the eyeball. The
‘pair of eyes’ is another level of symmetry referred to above.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
You seem to be dismissing any gain to be made by the intermediate
stages. You may be right that there is a level of resolution and
mobility at which things will suddenly start to work, but it is by no
means proven. And dismissing such work will alienate those who might
help you.
If pointing out crippling design flaws, within a foundering endeavor,
did in fact ‘dismiss someone’s work,’ the whistle would still need to
be blown, and without delay.
However, just _claiming_ that too low a resolution, for example, is a
"crippling design flaw" isn't going to make potential collaborators love
you.
Perhaps, but it explains why past results have been uninspiring; and
suggests success is attainable, requiring only the will. This changes
the climate and outlook for the device; which, in spite of
technological spikes, does not appear to have substantially progressed
in four decades.
Post by J. P. Gilliver (John)
As I've said, there may be a level below which it doesn't "just
work", but work to find what that level is may (in fact, if that
particular goal is being addressed, _will_) involve intermediate
resolutions. (There _may_ well also be a resolution beyond which it's
not worth going - or at least beyond which the returns fall off sharply.
[]
Reception will no doubt get progressively better with increases in
resolution; and specific areas will eventually max out for various
reasons. Still, I don’t believe high resolution goals will drive the
prototype costs up prohibitively. Also I suspect we are going to need
all the resolution we can discover.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
But don't dismiss the possibility that you can get somewhere without
headlights, during the day for example.
Not unless you shift into gear. And if you start shifting gears at
night, without headlight resolution, you’re going to bump into many
objects you should have recognized from afar. Ie the solution is in
two parts; I wouldn’t expect to uncover the potential of this device
without implementing both.
But not travelling in the daytime because you haven't got headlights yet
is also a bit odd!
[]
Just a slight incongruence within an illuminating analogy.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
I try to be as diplomatic as possible without compromising the
facts.
Thanks once again for your insightful comments, questions and
encouragement.
David Albert Harrell
Autymn D. C.
2010-11-29 15:06:19 UTC
Permalink
On Nov 25, 4:48 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
on my part, out of touch with the latest developments in the field as I
slightly am. It  would be nice if others joined our debate - I hope
we've not driven them out! - especially those with personal experience
of the matter, i. e. VH/VI people.
How would it be http://wiktionary.org/wiki/nice?
Post by J. P. Gilliver (John)
I'd certainly agree with maintaining some sort of feedback/link between
the motion of the camera and that of part of the subject, though not
necessarily head - that probably depends on whether the subject has good
binaural hearing or not. If they do, and I suppose it's reasonable to
start with that assumption, then they would indeed already be familiar
Who are they?
Post by J. P. Gilliver (John)
Post by David Albert Harrell
Post by J. P. Gilliver (John)
You over-use that word - and assume your audience is familiar with it!
[]
I sincerely apologized for the assumption.
Idem [definition] the same, especially a book, article, or chapter
previously referred to
Ah. I think the word is "ibidem", and the usual abbreviation "ibid.".
(P. S.: my spell checker agrees!)
[]
Wrong:
http://etymonline.com/index.php?search=identical&searchmode=term
http://etymonline.com/index.php?search=ibid&searchmode=term
Post by J. P. Gilliver (John)
Post by David Albert Harrell
I’m suggesting a scanning laser, though perhaps more than one will be
required within a single emitter array.
Right. several small scanners would reduce the thickness of the
equipment, at the cost of increased complexity.
[]
Nothing to do with thickness.
thickness -> depth, heihth
Post by J. P. Gilliver (John)
Post by David Albert Harrell
This ‘mobile/high-resolution version of Tactile Image Projection’
should perhaps simply be called ‘Tactile Video Projection.’  The
former handle can be, and apparently has been, interpreted as some
form of fingertip text and slideshow.
Yes, you do need to make clear you are talking about seeing (the world),
not just reading (letters or diagramme shapes). Maybe "non-light video
perception" - "tactile", though it doesn't strictly mean so, means
fingertip to a lot of people. (I'd avoid "projection" until you're sure
that is the way to go.)
superjection
Post by J. P. Gilliver (John)
That may be the case - but if someone wants to try with three times,
especially if that is all they have the resources (time as well as
money) for, I'd not dissuade them. I hear what you say about wasting
effort, but "saving it up" assumes that the funding (etc.) available
comes from sources who are willing to join forces, which may well not be
the case, for a variety of reasons.
1 ≠ 2
Post by J. P. Gilliver (John)
I hope others join in (on the subject, rather than just correcting my
grammar, though I'm quite happy for him to try to do that - though as a
non-English-speaker, some of his "corrections" are rather odd!).
what him, retard?

-Aut
Autymn D. C.
2010-11-23 20:44:15 UTC
Permalink
On Nov 20, 7:05 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
I can only comment for the newsgroup a.c.b-u; for we readers of that
for us
Post by J. P. Gilliver (John)
somebody about things he things have not or are not being done.
thinks
not been
Post by J. P. Gilliver (John)
Post by David Albert Harrell
One of the problems with past, and apparently current, thinking is
that the x,y grid is being applied to small areas such as the hands,
tongue, and finger tips.  I realize receptor density is greater in
these areas, but the resolution needed is simply not possible using
hundreds of electrodes as oppose to thousands.  The focus should be on
covering as large an area as possible (perhaps even wrapping around to
large := broad
Post by J. P. Gilliver (John)
Post by David Albert Harrell
the chest and stomach from the back for greater resolution).  A
greater -> more
Post by J. P. Gilliver (John)
You're still not going to reach the 100,000 pixels you want.
4^8 = 65,536. (Eh, forget squares--hecsagòns are better: 1+3·144·145
= 62,641.)

-Aut
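[For readers puzzling over the arithmetic in the post above: 4^8 =
65,536 is simply the largest power of 4 under the 100,000-pixel target,
and 1+3·144·145 = 62,641 appears to be a centered hexagonal count, i.e.
the number of cells in a hexagonal grid with 144 rings around a centre
cell — presumably what "hecsagòns" refers to. A minimal check:]

```python
# Verifying the pixel-count figures: the largest power of 4 below
# 100,000, and the centered-hexagonal cell count 1 + 3*n*(n+1) for
# n concentric rings around one centre cell.

def centered_hexagonal(n):
    """One centre cell plus n concentric hexagonal rings of cells."""
    return 1 + 3 * n * (n + 1)

print(4 ** 8)                   # -> 65536 (4^9 = 262,144 overshoots)
print(centered_hexagonal(144))  # -> 62641, i.e. 1 + 3*144*145
```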
J. P. Gilliver (John)
2010-11-24 00:51:07 UTC
Permalink
In message
Post by Autymn D. C.
On Nov 20, 7:05 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
I can only comment for the newsgroup a.c.b-u; for we readers of that
for us
Correct (I think)!
Post by Autymn D. C.
Post by J. P. Gilliver (John)
somebody about things he things have not or are not being done.
thinks
Correct - a typo.
Post by Autymn D. C.
not been
Half correct. "Have not been" yes, but "are not been" no.
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
One of the problems with past, and apparently current, thinking is
that the x,y grid is being applied to small areas such as the hands,
tongue, and finger tips.  I realize receptor density is greater in
these areas, but the resolution needed is simply not possible using
hundreds of electrodes as oppose to thousands.  The focus should be on
covering as large an area as possible (perhaps even wrapping around to
large := broad
Did you mean "!="? (Actually I think both might be valid, in different
programming languages. And depending on whether you meant "is not the
same as" or "leads to/implies", though your next example suggests you
didn't mean the latter.)
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
the chest and stomach from the back for greater resolution).  A
greater -> more
I see what you're getting at, but he wasn't entirely wrong: if you
replace greater with more, you should probably replace resolution with
pixels.
Post by Autymn D. C.
Post by J. P. Gilliver (John)
You're still not going to reach the 100,000 pixels you want.
4^8 = 65,536. (Eh, forget squares--hecsagòns are better: 1+3·144·145
= 62,641.)
-Aut
Where does this power of eight come from? (And your bit in brackets has
at the very least lost something in translation through parts of the
internet.)
Autymn D. C.
2010-11-24 15:44:38 UTC
Permalink
On Nov 23, 4:51 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
In message
Post by Autymn D. C.
Post by J. P. Gilliver (John)
somebody about things he things have not or are not being done.
thinks
Correct - a typo.
Everything one types (strokes) is a typo; you mean dýstýpe
(misstroke).
Post by J. P. Gilliver (John)
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
One of the problems with past, and apparently current, thinking is
that the x,y grid is being applied to small areas such as the hands,
tongue, and finger tips. I realize receptor density is greater in
these areas, but the resolution needed is simply not possible using
hundreds of electrodes as oppose to thousands. The focus should be on
covering as large an area as possible (perhaps even wrapping around to
large := broad
Did you mean "!="? (Actually I think both might be valid, in different
programming languages. And depending on whether you meant "is not the
same as" or "leads to/implies", though your next example suggests you
didn't mean the latter.)
Nes: = means likens; := (or ≡) means is.

at large := at broad

http://google.com/groups?q=%22Comparisons+for+the+illiterate%22
http://google.com/groups?q=Autymn+-autumn+%22length+is+time%22
http://google.com/groups?q=%22big+is+not+a+size%22
http://google.com/groups?q=%22motes+are+fleet%22

A belly is wide; shoulders are broad; a gap or stream is wide or
narrow but still broad or far; a tunnel is wide but slim.
Post by J. P. Gilliver (John)
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
the chest and stomach from the back for greater resolution). A
greater -> more
I see what you're getting at, but he wasn't entirely wrong: if you
replace greater with more, you should probably replace resolution with
pixels.
I'd say manier pixels. Greater resolution is when pixels or pitch is
greater, thus not sharper.
Post by J. P. Gilliver (John)
Post by Autymn D. C.
Post by J. P. Gilliver (John)
You're still not going to reach the 100,000 pixels you want.
4^8 = 65,536. (Eh, forget squares--hecsagòns are better: 1+3·144·145
= 62,641.)
-Aut
Where does this power of eight come from? (And your bit in brackets has
at the very least lost something in translation through parts of the
internet.)
4^8 comes in next under 100,000. The internet forgot a o-grave, and I
forgot a h (hecsaghòns): ἑξαγωνs.
Post by J. P. Gilliver (John)
--
The fool doth think he is wise, but the wise man knows himself to be a fool.
Where do you get this? The wise man doesn't know himself what he's
not.

-Aut
J. P. Gilliver (John)
2010-11-24 22:23:53 UTC
Permalink
In message
<43016b45-21bc-4740-a0bf-***@22g2000prx.googlegroups.com>,
Autymn D. C. <***@sbcglobal.net> writes:
[]
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Correct - a typo.
Everything one types (strokes) is a typo; you mean dýstýpe
(misstroke).
I'm English; in modern English (on both sides of the Atlantic), typo is
a commonly-understood word for "typographical error", or rather "typing
mistake". That word you used is not English. Although there's no
absolute rule, this newsgroup tends to be in English.
[]
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
hundreds of electrodes as oppose to thousands. The focus should be on
covering as large an area as possible (perhaps even wrapping around to
large := broad
Did you mean "!="? (Actually I think both might be valid, in different
programming languages. And depending on whether you meant "is not the
same as" or "leads to/implies", though your next example suggests you
didn't mean the latter.)
Nes: = means likens; := (or ≡) means is.
at large := at broad
"at broad" isn't English. (Do you mean "abroad"? That is synonymous with
"at large", but isn't that common.)
Post by Autymn D. C.
http://google.com/groups?q=%22Comparisons+for+the+illiterate%22
http://google.com/groups?q=Autymn+-autumn+%22length+is+time%22
http://google.com/groups?q=%22big+is+not+a+size%22
http://google.com/groups?q=%22motes+are+fleet%22
I know how to use Google, thank you.
Post by Autymn D. C.
A belly is wide; shoulders are broad; a gap or stream is wide or
narrow but still broad or far; a tunnel is wide but slim.
In English, "far gap" and "far stream" aren't commonly used (except
perhaps poetically to mean one a long way away). "Slim tunnel" wouldn't
be either - we'd use narrow.
[]
Post by Autymn D. C.
I'd say manier pixels. Greater resolution is when pixels or pitch is
greater, thus not sharper.
I'd never say manier pixels - manier is not a word. (More would be the
usual English, I think.) And greater resolution (the normal term would
be higher resolution) is when pixels are closer together, finer pitch.
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Post by Autymn D. C.
Post by J. P. Gilliver (John)
You're still not going to reach the 100,000 pixels you want.
4^8 = 65,536. (Eh, forget squares--hecsagòns are better: 1+3·144·145
= 62,641.)
-Aut
Where does this power of eight come from? (And your bit in brackets has
at the very least lost something in translation through parts of the
internet.)
4^8 comes in next under 100,000. The internet forgot a o-grave, and I
forgot a h (hecsaghòns): ἑξαγωνs.
I know my powers of two, but I meant why are you using "4^8" - what does
something to the eighth power have to do with what we are talking about,
synthetic vision?
[]
Post by Autymn D. C.
Post by J. P. Gilliver (John)
The fool doth think he is wise, but the wise man knows himself to be a fool.
Where do you get this? The wise man doesn't know himself what he's
not.
[]
I forget. It is wise to know that there are things you do not know.
--

Everything you've learned in school as `obvious' becomes less and less obvious
as you begin to study the universe. For example, there are no solids in the
universe. There's not even a suggestion of a solid. There are no absolute
continuums. There are no surfaces. There are no straight lines.
-R. Buckminster Fuller, engineer, designer, and architect (1895-1983)
Autymn D. C.
2010-11-29 15:33:52 UTC
Permalink
On Nov 24, 2:23 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
In message
[]
Post by J. P. Gilliver (John)
Correct - a typo.
Everything one types (strokes) is a typo; you mean dýstýpe
(misstroke).
I'm English; in modern English (on both sides of the Atlantic), typo is
a commonly-understood word for "typographical error", or rather "typing
mistake". That word you used is not English. Although there's no
absolute rule, this newsgroup tends to be in English.
[]
There's no such thing as modern English: http://google.com/groups?q=%22Benj+asks+about+history%22.
Misstroke isn't English? If you would write in English, you would
write sheer instead of absolute, stroke instead of type, wries instead
of tends, and so on.
Post by J. P. Gilliver (John)
Post by J. P. Gilliver (John)
Post by Autymn D. C.
Post by J. P. Gilliver (John)
Post by David Albert Harrell
hundreds of electrodes as oppose to thousands. The focus should be on
covering as large an area as possible (perhaps even wrapping around to
large := broad
Did you mean "!="? (Actually I think both might be valid, in different
programming languages. And depending on whether you meant "is not the
same as" or "leads to/implies", though your next example suggests you
didn't mean the latter.)
Nes: = means likens; := (or 0 >
at large := at broad
"at broad" isn't English. (Do you mean "abroad"? That is synonymous with
"at large", but isn't that common.)
"at broad" is fine for English.
Post by J. P. Gilliver (John)
A belly is wide; shoulders are broad; a gap or stream is wide or
narrow but still broad or far; a tunnel is wide but slim.
In English, "far gap" and "far stream" aren't commonly used (except
perhaps poetically to mean one a long way away). "Slim tunnel" wouldn't
be either - we'd use narrow.
[]
away? What way? You mean off. Narrow is not wide, and wide is
besides broad, so narrow is wrong if not broad.
Post by J. P. Gilliver (John)
I'd say manier pixels.  Greater resolution is when pixels or pitch is
greater, thus not sharper.
I'd never say manier pixels - manier is not a word. (More would be the
usual English, I think.) And greater resolution (the normal term would
be higher resolution) is when pixels are closer together, finer pitch.
Manier ouht be a word, and greater and hihher mean none of those at
all, lest one forward (pervert) the English.
Post by J. P. Gilliver (John)
Post by J. P. Gilliver (John)
Post by Autymn D. C.
Post by J. P. Gilliver (John)
You're still not going to reach the 100,000 pixels you want.
4^8 = 65,536. (Eh, forget squares--hecsagòns are better: 1+3·144·145
= 62,641.)
-Aut
Where does this power of eight come from? (And your bit in brackets has
at the very least lost something in translation through parts of the
internet.)
4^8 comes in next under 100,000.  The internet forgot an o-grave, and I
forgot a h (hecsaghòns): 0
I know my powers of two, but I meant why are you using "4^8" - what does
something to the eighth power have to do with what we are talking about,
synthetic vision?
Dumbass, 4^8 comes in next under 100,000.
Post by J. P. Gilliver (John)
Everything you've learned in school as `obvious' becomes less and less obvious
as you begin to study the universe. For example, there are no solids in the
universe. There's not even a suggestion of a solid. There are no absolute
continuums. There are no surfaces. There are no straight lines.
-R. Buckminster Fuller, engineer, designer, and architect (1895-1983)
Planèts are solid. All bodies, motes or clods, bear surfaces if not
supersurfaces.

-Aut
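[Editor's note: the arithmetic in this subthread does check out. 4^8 = 65,536 is the largest power of 4 below 100,000, and 62,641 = 1 + 3·144·145 is the 144th centered hexagonal number. A minimal sketch confirming both claims (the function names here are my own, not from the thread):]

```python
def largest_power_below(base, limit):
    """Return the largest power of `base` strictly below `limit`."""
    p = base
    while p * base < limit:
        p *= base
    return p

def centered_hexagonal(n):
    """n-th centered hexagonal number: 1 + 3*n*(n+1)."""
    return 1 + 3 * n * (n + 1)

print(largest_power_below(4, 100_000))  # 65536, i.e. 4**8
print(centered_hexagonal(144))          # 62641, i.e. 1 + 3*144*145
```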
Benj
2010-12-03 22:45:21 UTC
Permalink
Post by Autymn D. C.
On Nov 24, 2:23 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
Everything you've learned in school as `obvious' becomes less and less obvious
as you begin to study the universe. For example, there are no solids in the
universe. There's not even a suggestion of a solid. There are no absolute
continuums. There are no surfaces. There are no straight lines.
-R. Buckminster Fuller, engineer, designer, and architect (1895-1983)
Planèts are solid.  All bodies, motes or clods, bear surfaces if not
supersurfaces.
Dear Autymn Womyn,

Planets are mostly empty space. Your body is a clod, your brain a mote
and as for bear (or bare) surfaces on you, I've seen none of that
thank God. T'would simply make me sick.

Gentle ripples form as Autymn Womyn slowly slides her bait into the
internet waters...
Bill Miller
2010-12-04 02:29:44 UTC
Permalink
On Nov 24, 2:23 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
Everything you've learned in school as `obvious' becomes less and less obvious
as you begin to study the universe. For example, there are no solids in the
universe. There's not even a suggestion of a solid. There are no absolute
continuums. There are no surfaces. There are no straight lines.
-R. Buckminster Fuller, engineer, designer, and architect (1895-1983)
Planèts are solid. All bodies, motes or clods, bear surfaces if not
supersurfaces.
Dear Autymn Womyn,

Planets are mostly empty space. Your body is a clod, your brain a mote
and as for bear (or bare) surfaces on you, I've seen none of that
thank God. T'would simply make me sick.

Gentle ripples form as Autymn Womyn slowly slides her bait into the
internet waters...

And, if Aut were only a man, she would be a master at the above.
Autymn D. C.
2010-12-07 15:33:24 UTC
Permalink
Post by Benj
On Nov 24, 2:23 pm, "J. P. Gilliver (John)"
Post by J. P. Gilliver (John)
Everything you've learned in school as `obvious' becomes less and less obvious
as you begin to study the universe. For example, there are no solids in the
universe. There's not even a suggestion of a solid. There are no absolute
continuums. There are no surfaces. There are no straight lines.
-R. Buckminster Fuller, engineer, designer, and architect (1895-1983)
Planèts are solid. All bodies, motes or clods, bear surfaces if not
supersurfaces.
Dear Autymn Womyn,
Planets are mostly empty space. Your body is a clod, your brain a mote
not mostly moot roomhead?
http://google.com/groups?q=Autymn+-autumn+inner+emmer
http://google.com/groups?q=%22quadruple-fork+check%22
http://google.com/groups?q=%2266+stage-states%22
Post by Benj
and as for bear (or bare) surfaces on you, I've seen none of that
thank God. T'would simply make me sick.
You and your God are already very sick, some cretin troll fag twit.
Post by Benj
Gentle ripples form as Autymn Womyn slowly slides her bait into the
internet waters...
And, if Aut were only a man, she would be a master at the above.
a master at what, fish'ing? a man, as in not a dog or ape? You must
mean wapman or werman.

-Aut
