Computer telepathy is here!!! :: 05-06-2004, 11:38 PM. On the NASA TV video feed recently, a scientist working on the MER rover had a major breakthrough!! He placed two ultra-sensitive microphones and two electrical impulse detectors on his neck over his larynx, then put the info on an oscilloscope...well, they're computerized now, what they call squiggly-line displays!! They knew that each word should have its own distinct pattern; even in different languages, every word of course has its own pattern!!...So he says 'left'...squiggle...'right'...squiggle...But wait...'left'...big squiggle, then .3 seconds later an identical little squiggle!!! 'Right'...normal big squiggle, .3 seconds later again an identical little squiggle...!!! An echo!!! Again, LEFT, big squig, little squig, WOW!!! Oh, recalibrate...No!!! It's not the machinery...check, recheck!!!!
Evolving Towards Telepathy :: I recently read with great interest of researcher Chuck Jorgensen's work at NASA's Ames Research Center. It was the kind of news item that made the rounds among the cognoscenti that day, only to be forgotten the next. But it stuck with me for days afterwards. Jorgensen and his team developed a system that captures nerve signals in the vocal cords and converts them into computerized speech. It is hoped that the technology will help those who have lost the ability to speak, as well as improve interface communications for people working in spacesuits and noisy environments.
NASA Develops System to Computerize Silent, "Subvocal Speech" :: A second demonstration will be to control a mechanical device using a simple set of commands, according to Jorgensen. His team is planning tests with a simulated Mars rover. "We can have the model rover go left or right using silently 'spoken' words," Jorgensen said. People in noisy conditions could use the system when privacy is needed, such as during telephone conversations on buses or trains, according to scientists.
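The rover demo described above boils down to mapping a small vocabulary of recognized command words onto rover actions. Here is a minimal sketch of that idea; the function names, heading numbers, and command set are all invented for illustration, not taken from Jorgensen's system.

```python
# Hypothetical sketch: recognized subvocal command words steer a model
# rover by adjusting its heading. All names and values are made up.

HEADING_CHANGE = {"left": -90, "right": +90}  # degrees per command

def steer(heading, word):
    """Apply one recognized command word to the rover's heading."""
    return (heading + HEADING_CHANGE.get(word, 0)) % 360

heading = 0  # facing "north"
for word in ["left", "right", "right"]:
    heading = steer(heading, word)
print(heading)  # 0 -> 270 -> 0 -> 90
```

A real controller would of course have to handle recognition errors and unknown words; here unrecognized words simply leave the heading unchanged.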
ScienCentralNews: Secret Speech Aid :: Now NASA researchers are taking a leap in the direction of deciphering speech. Neuroengineer Chuck Jorgensen told Discover Magazine that he's bypassing the body's normal speech machinery by delivering words via machine using subvocal speech. "When you're reading material…sometimes you find that your tongue or your lips are quietly moving but you're not making an audible sound," he explains. "And it's doing that because there's this electronic signal that's being sent to produce that speech but you're intercepting it so it doesn't really say it out loud. That's subvocal speech." In a lab at NASA's Ames Research Center, electrodes similar to those used in a doctor's office cling below Jorgensen's chin and flank his Adam's apple, picking up the electrical signals that the body sends to the vocal cords. Jorgensen amplifies the signals and uses neural network software to decipher word patterns.
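The pattern-recognition step at the end of that pipeline can be pictured with a toy example. Jorgensen's team used neural network software on amplified nerve signals; the sketch below substitutes a simple nearest-template classifier and entirely made-up numbers standing in for per-word signal patterns, just to show the shape of the problem.

```python
# Hypothetical sketch: classify a (fake) signal window by finding the
# closest stored word template. The real system uses neural networks on
# amplified EMG signals; these templates and numbers are invented.

def distance(a, b):
    """Euclidean distance between two equal-length signal windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Invented templates standing in for recorded subvocal word patterns.
TEMPLATES = {
    "left":  [0.1, 0.9, 0.4, 0.2],
    "right": [0.8, 0.2, 0.7, 0.1],
    "stop":  [0.0, 0.1, 0.1, 0.0],
}

def classify(signal):
    """Return the word whose template is nearest the incoming signal."""
    return min(TEMPLATES, key=lambda word: distance(signal, TEMPLATES[word]))

print(classify([0.15, 0.85, 0.35, 0.25]))  # resembles the "left" template
```

The interesting part of the research is exactly what this sketch waves away: real subvocal signals are noisy and vary between utterances, which is why trained pattern-recognition software is needed rather than a fixed lookup.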
Jorgensen is part of the Extension of the Human Senses Group[+] at Ames Research Center, located at Moffett Field, California. Ames provides extensive support to the space program, with research areas covering Nanotechnology, Traffic Management, Autonomous Systems and Robotics, & the largest immersive theatre on the west coast. Ironically, the apartment I left in San Francisco used to be an Air Force barracks for Moffett Field. Loads more links posted later.