December 13, 2009

Stanford researchers develop the next generation of retinal implants

A team of Stanford researchers has developed a new generation of retinal implants that aims to provide higher resolution and make artificial vision more natural.

This could be a boon to the several million people in the United States who are blind or visually impaired as a result of retinal degeneration. Every year, 50,000 people in the United States become blind, according to the National Federation of the Blind. But only a couple of dozen Americans have retinal implants.

The team, consisting of ophthalmology Associate Professor Daniel Palanker, electrical engineering Assistant Professor Peter Peumans and neurobiology Assistant Professor Stephen Baccus of Stanford, and biophysics Assistant Professor Alexander Sher of the University of California-Santa Cruz, presented their research Dec. 9 at the International Electron Devices Meeting in Baltimore.

Retinal implants are arrays of electrodes, placed at the back of the eye, which partially restore vision to people with diseases that cause their light-sensing photoreceptors to die. Typically, a camera embedded in glasses collects images and sends them to a computer that converts the images to electrical signals, which are then transmitted to the implant and interpreted by the brain. Several private companies and universities are working on different versions, but most people with implants can only make out fuzzy borders between light and dark areas.

Analogous to high-definition TV

The Stanford implant would allow patients to make out the shape of objects and see meaningful images. "A good analogy is high-def TV," Baccus said. "If you only have a few pixels of stimulation, you're not going to see much. One clear advantage of our implant is high resolution." The Stanford implant has approximately 1,000 electrodes, compared to 60 electrodes commonly found in fully implantable systems.

What's more, patients would not have to move their heads to see, as they do with older implants. Although we don't notice it, images fade when we do not move our eyes, and we make several tiny eye movements each second to prevent fading. With older retinal implants, the camera moves when the head moves, but not when the eyes move.

The Stanford implant, on the other hand, retains the natural link between eye movements and vision, Palanker said. A patient would wear a video camera that transmits images to a processor, which displays the images on an LCD screen on the inside of the patient's goggles. The LCD display transmits infrared light pulses that project the image onto photovoltaic cells implanted underneath the retina. The photovoltaic cells convert the light signals into electrical impulses that in turn stimulate retinal neurons above them.
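
As a rough sketch of that chain, camera frame to infrared pattern to stimulation currents, the logic might look like the following Python. Every name, grid size and gain here is invented for illustration; none of it comes from the Stanford system.

import numpy as np

def capture_frame(camera):
    """Grab one grayscale frame from the head-mounted video camera (stub)."""
    return camera.read()  # H x W array of pixel intensities, 0-255

def to_infrared_pattern(frame, grid=(32, 32)):
    """Downsample the frame to the implant's pixel grid and normalize it
    into per-pixel infrared intensities for the goggle-mounted LCD."""
    rows = np.linspace(0, frame.shape[0] - 1, grid[0]).astype(int)
    cols = np.linspace(0, frame.shape[1] - 1, grid[1]).astype(int)
    return frame[np.ix_(rows, cols)] / 255.0

def stimulation_currents(ir_pattern, gain_uA=20.0):
    """Each photovoltaic pixel converts the infrared light falling on it
    into a stimulation current for the retinal neurons directly above
    (a toy linear model)."""
    return gain_uA * ir_pattern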

As patients move their eyes, the light falls on a different part of the implant, just as visible light falls on different parts of the retina. "The Palanker group has developed a device that actually allows patients to see infrared light on the implant and visible light through the normal optics of the eye," Baccus said.

"It's a sophisticated approach," said Shelley Fried, a research scientist working on the Boston Project. "It should definitely be helpful."

This is also the first flexible implant, and it makes use of a material commonly used in computer chips and solar cells. Peumans and his team at the Stanford Nanofabrication Facility engineered a silicon implant with tiny bridges that allow it to fold over the shape of the eye. "The advantage of having it flexible is that relatively large implants can be placed under the retina without being deformed, and the whole image would stay in focus," Palanker said. A set of flexible implants can cover an even larger portion of the retina, allowing patients to see the entire visual field presented on the display.

"It's really a very interesting idea," Fried said. "The ability to get all the electrodes to sit perfectly on the retina would be a very nice advantage." He said that a spring technology allows their device to conform to the contour of the eye, maintaining close contact between electrodes and neurons.

The tiny crevices between the bridges serve a useful function. Distant retinal cells migrate to the implant and fill in the spaces between the electrodes. Previously, one major challenge was to get cells close enough to the device to receive signals, Fried said. "If we can find a way to bring the retinal neurons closer to the electrode, that would have a huge advantage," he said.

Implanted under the retina

The Stanford device is implanted under the retina, at the earliest possible stage in the visual pathway. "In many degenerative diseases where the photoreceptors are lost, you lose the first and second cells in the pathway," Baccus said. "Ideally you want to talk to the next cell that's still there." The goal is to preserve the complex circuitry of the retina so that images appear more natural.

"With most of the current devices, we are replicating only very few elements of normal retinal signaling," Fried said.

To further enhance the naturalness of restored vision, Baccus and Palanker are developing software that performs functions that the retina normally performs. For example, cells in the retina tend to enhance the appearance of edges, or boundaries between objects. What's more, objects that we focus on are seen in better detail than objects that appear at the corners of our eyes.

The researchers hope to incorporate these features in the next generation of retinal implants. Baccus envisions a day when patients will be able to adjust their implants to see objects better, just like an optometrist adjusts the lens while we read a letter chart.

Palanker and his team will test the ability of animals with retinal diseases similar to those in humans to use the implant to discriminate visual patterns.

One of the major challenges is to understand how the retina works, especially after it is damaged. "We operate on the assumption that the photoreceptors are gone, but otherwise it's a normal retina," Baccus said. "This is almost certainly not true."

Future devices should learn, patient by patient, the new language needed to communicate with the altered circuitry of the damaged retina, he said. Even if the retinal circuitry were unaltered, the brain would still have to learn how to interpret the signals. By mimicking normal vision, retinal implants may overcome these obstacles and bring enhanced vision to blind patients.

Provided by Stanford University

October 14, 2009

One step closer to an artificial nerve cell

Scientists at Karolinska Institutet and Linköping University (Sweden) are well on the way to creating the first artificial nerve cell that can communicate specifically with nerve cells in the body using neurotransmitters. The technology has been published in an article in Nature Materials.

The methods that are currently used to stimulate nerve signals in the nervous system are based on electrical stimulation. Examples of this are cochlear implants, which are surgically inserted into the cochlea in the inner ear, and electrodes that are used directly in the brain. One problem with this method is that all cell types in the vicinity of the electrode are activated, which gives undesired effects.

Scientists have now used an electrically conducting plastic to create a new type of "delivery electrode" that instead releases the neurotransmitters that brain cells use to communicate naturally. The advantage of this is that only neighbouring cells that have receptors for the specific neurotransmitter, and that are thus sensitive to this substance, will be activated.

The scientists demonstrate in the article in Nature Materials that the delivery electrode can be used to control the hearing function in the brains of guinea pigs.

"The ability to deliver exact doses of neurotransmitters opens completely new possibilities for correcting the signalling systems that are faulty in a number of neurological disease conditions", says Professor Agneta Richter-Dahlfors who has led the work, together with Professor Barbara Canlon.

The scientists intend to continue with the development of a small unit that can be implanted into the body. It will be possible to program the unit such that the release of neurotransmitters takes place as often or as seldom as required in order to treat the individual patient. Research projects that are already under way are targeted towards hearing, epilepsy and Parkinson's disease.

The research is being carried out in collaboration between the research groups of Professor Agneta Richter-Dahlfors and Professor Barbara Canlon, together with Professor Magnus Berggren's group at Linköping University. The work falls under the auspices of the Center of Excellence in Organic Bioelectronics, financed by the Swedish Foundation for Strategic Research and led by Magnus Berggren and Agneta Richter-Dahlfors.

More information:

Daniel T. Simon, Sindhulakshmi Kurup, Karin C. Larsson, Ryusuke Hori, Klas Tybrandt, Michel Goiny, Edwin W. H. Jager, Magnus Berggren, Barbara Canlon and Agneta Richter-Dahlfors
Organic electronics for precise delivery of neurotransmitters to modulate mammalian sensory function
Nature Materials, Advance Online Publication, 5 June 2009.

Provided by Karolinska Institutet

October 13, 2009

Magnetic Brain Stimulation Improves Skill Learning, Study Finds

ScienceDaily (July 7, 2009) — The use of magnetic pulses to stimulate the dorsal premotor cortex (PMd) region of the brain results in an improved ability to learn a skilled motor task. Researchers show that skilled movements can be stored as memories in the PMd and that magnetic stimulation of this area can facilitate this learning process.

Lara Boyd and Meghan Linsdell, from the University of British Columbia, studied the effect of transcranial magnetic stimulation of the PMd on the ability of 30 volunteers to track a target on a computer screen using a joystick. During the task, the target would move randomly, then enter a programmed pattern and finally return to moving randomly. The participants were not aware of the repeated section, believing that movements were random throughout.

The volunteers received four days of training, during which they were either given excitatory stimulation, inhibitory stimulation or sham stimulation immediately before practicing the motor task. The volunteers were not aware which group they were in. On the fifth day, they were tested to see how well they had learned the task. By comparing the improvements between the random and repeated sections of the task, the researchers were able to separate the general improvement due to practice from the learned motor memory of the repeated section.
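
The subtraction at the heart of that analysis is simple to state in code. A minimal Python sketch under invented data structures; the article does not give the study's actual computation.

import numpy as np

def tracking_error(cursor, target):
    """Root-mean-square distance between the joystick cursor and the target."""
    return np.sqrt(np.mean(np.sum((cursor - target) ** 2, axis=1)))

def sequence_specific_learning(day1, day5):
    """day1 and day5 map segment type ('random' or 'repeated') to
    (cursor, target) trajectory arrays. Improvement on the random segments
    reflects general practice; any extra improvement on the repeated
    segment is credited to the learned motor memory."""
    random_gain = tracking_error(*day1["random"]) - tracking_error(*day5["random"])
    repeated_gain = tracking_error(*day1["repeated"]) - tracking_error(*day5["repeated"])
    return repeated_gain - random_gain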

Those participants who had received the excitatory stimulation were significantly better than the other groups at tracking the target during the repeated section of the test. They showed no significant difference in improvement during the random sections. The researchers conclude, "Our data support the hypothesis that the PMd is important for continuous motor learning, specifically via off-line consolidation of learned motor behaviors".

Journal reference:

1. Lara A Boyd and Meghan A Linsdell. Excitatory repetitive transcranial magnetic stimulation to left dorsal premotor cortex enhances motor consolidation of new skills. BMC Neuroscience (in press).

Adapted from materials provided by BioMed Central, via EurekAlert!, a service of AAAS.

October 10, 2009

Brain-to-brain communication demonstrated

Brain-to-brain ("B2B") communication has been achieved for the first time by Dr. Christopher James of the University of Southampton.



While attached to an EEG amplifier, the first person generated and transmitted a series of binary digits by imagining moving their left arm for zero and their right arm for one. That data was sent via the Internet to another PC. The second person was also attached to an EEG amplifier, and their PC flashed an LED lamp at two different frequencies, one frequency for zero and the other for one.

The pattern of the flashing LEDs was too subtle to be detected consciously by the second person, but was picked up by electrodes detecting visual cortex activity. The PC deciphered whether a zero or a one was transmitted, with an end-to-end bandwidth of about 0.14 bits per second.
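
End to end, the scheme is a two-frequency protocol: motor imagery encodes each bit, the Internet carries it, and a flickering LED re-encodes it for the receiver's visual cortex. A minimal Python sketch; the flicker frequencies, host name and helper objects are invented, and the real system's signal processing was certainly more elaborate.

import socket
import numpy as np

F_ZERO, F_ONE = 10.0, 15.0  # hypothetical LED flicker frequencies (Hz)

def send_bits(bits, host="receiver-pc", port=5000):
    """Sender side: each bit has already been decoded from motor-imagery
    EEG (imagined left arm -> 0, right arm -> 1); ship it over the net."""
    with socket.create_connection((host, port)) as s:
        s.sendall(bytes(bits))

def present_bit(bit, led):
    """Receiver side: flash the LED at the frequency that encodes the bit,
    driving a matching oscillation in the viewer's visual cortex."""
    led.flash(F_ONE if bit else F_ZERO)

def decode_bit(eeg, fs=256):
    """Recover the bit from visual-cortex EEG by comparing spectral power
    at the two candidate flicker frequencies."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    p_zero = spectrum[np.argmin(np.abs(freqs - F_ZERO))]
    p_one = spectrum[np.argmin(np.abs(freqs - F_ONE))]
    return int(p_one > p_zero)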

"B2B could be of benefit such as helping people with severe debilitating muscle wasting diseases, or with the so-called 'locked-in' syndrome, to communicate and it also has applications for gaming," said James.

Possible extensions of the research include two-way and multiuser B2B communication with faster, broader-bandwidth transmission by using more complex signal generation and pattern recognition. - Ed.

Source: University of Southampton news release


October 9, 2009

Military Robots to Get a Virtual Touch

A modified game controller will give military bomb-disposal experts remote touch.

iRobot, the company that makes military robots as well as the Roomba vacuuming bot, announced last Friday that it will receive funding for several endeavors from the Robotics Technology Consortium (RTC).
One project will see the company develop controllers that give remote robot operators sensory feedback. The US military currently uses iRobot's wheeled PackBot in Iraq and Afghanistan for tasks such as bomb disposal, detecting hazardous materials and carrying equipment.

The company says that adding force sensing to a PackBot arm could give operators the ability to "feel" the weight of an object or whether it is hard or soft, via the robot's arms.

iRobot plans to use an enhanced version of the Novint Falcon haptic controller--a device designed for computer games that provides a remote sense of touch to the user.
According to Joe Dyer, president of iRobot's Government and Industrial Robots division:
"[This] would greatly improve warfighters' ability to examine and manipulate improvised explosive devices (IEDs) and reduce their time on task, ultimately keeping them safer."
The RTC funds will also go toward developing better sniper detection and a sensing robotic head for unmanned ground vehicles (UGVs).


Original article by Kristina Grifantini for MIT Technology Review

October 7, 2009

Nissan's robot cars mimic fish to avoid crashing


Nissan has developed a mini robotic car that can move autonomously in groups while avoiding crashing into obstacles (including other cars).

The Eporo, Nissan says, is the first robot car designed to move in a group by sharing its position and other information. The aim is to incorporate the technology into passenger cars to reduce accidents and traffic jams.

Although a group of Eporos may look like a gang of cybernetic Jawas, Nissan says the cars' design was inspired by the way fish move in schools.

An evolution of the bumblebee-inspired BR23C robot car unveiled last year, the Eporo uses Nissan's collision avoidance technology to travel in groups. Check out BR23C trying to get away from a Japanese lady in this video.

Eporo can dodge obstacles just like fish.

The automaker studied how large schools of fish can move without colliding. It says Eporo imitates three rules of fish movement: avoiding crashes, traveling side by side, and keeping close to other members of the school.

The robots use laser range finders and ultra-wideband radio to determine distance to obstacles. They also communicate with each other to form the most efficient group formation to maneuver through tight spots.
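
Those three rules are the classic flocking rules of swarm robotics, and they are compact enough to sketch. A toy Python update step with made-up radii and gains; Nissan has not published Eporo's actual control law.

import numpy as np

def eporo_step(pos, vel, r_avoid=1.0, r_align=3.0, r_cohere=6.0, dt=0.1):
    """One update of the three schooling rules for N robots.
    pos and vel are (N, 2) arrays shared (here, notionally) over radio."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        dist[i] = np.inf
        steer = np.zeros(2)
        near = dist < r_avoid                        # rule 1: avoid crashes
        if near.any():
            steer -= offsets[near].sum(axis=0)
        mid = (dist >= r_avoid) & (dist < r_align)   # rule 2: travel side by side
        if mid.any():
            steer += vel[mid].mean(axis=0) - vel[i]
        far = (dist >= r_align) & (dist < r_cohere)  # rule 3: keep close to the school
        if far.any():
            steer += offsets[far].mean(axis=0)
        new_vel[i] = vel[i] + 0.5 * steer
    return pos + new_vel * dt, new_vel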

Eporo stands for "Episode 0 (Zero) Robot." That zinger of a mouthful means zero episodes, as in zero accidents and zero emissions.

Nissan intends to show off Eporo at the Ceatec trade show next week in Tokyo.

Original article by Tim Hornyak for Crave

October 6, 2009

Understanding A Cell's Split Personality Aids Synthetic Circuits


In this colony, the bacteria lighting up in green are those being "turned on," while those in red remain "off."
As scientists work toward making genetically altered bacteria create living "circuits" to produce a myriad of useful proteins and chemicals, they have logically assumed that the single-celled organisms would always respond to an external command in the same way.

Alas, some bacteria apparently have an individualistic streak that makes them zig when the others zag.

A new set of experiments by Duke University bioengineers has uncovered the existence of "bistability," in which an individual cell has the potential to live in either of two states, depending on which state it was in when stimulated.

Taking into account the effects of this phenomenon should greatly enhance the future efficiency of synthetic circuits, said biomedical engineer Lingchong You of Duke's Pratt School of Engineering and the Duke Institute for Genome Sciences & Policy.

In principle, re-programmed bacteria in a synthetic circuit can be useful for producing proteins, enzymes or chemicals in a coordinated way, or even delivering different types of drugs or selectively killing cancer cells, the scientists said.

Researchers in this new field of synthetic biology "program" populations of genetically altered bacteria to direct their actions in much the same way that a computer program directs a computer. In this analogy, the genetic alteration is the software, the cell the computer. The Duke researchers found that not only does the software drive the computer's actions, but the computer in turn influences the running of the software.

"In the past, synthetic biologists have often assumed that the components of the circuit would act in a predictable fashion every time and that the cells carrying the circuit would just serve as a passive reactor," You said. "In essence, they have taken a circuit-centric view for the design and optimization process. This notion is helpful in making the design process more convenient."

But it's not that simple, say You and his graduate student Cheemeng Tan, who published the results of their latest experiments early online in the journal Nature Chemical Biology.

"We found that there can be unintended consequences that haven't been appreciated before," said You. "In a population of identical cells, some can act one way while others act in another. However, this process appears to occur in a predictable manner, which allows us to take into account this effect when we design circuits."

Bistability is not unique to biology. In electrical engineering, for example, bistability describes the functioning of a toggle switch, a hinged switch that can assume either one of two positions – on or off.

"The prevailing wisdom underestimated the complexity of these synthetic circuits by assuming that the genetic changes would not affect the operation of the cell itself, as if the cell were a passive chassis," said Tan. "The expression of the genetic alteration can drastically impact the cell, and therefore the circuit.

"We now know that when the circuit is activated, it affects the cell, which in turn acts as an additional feedback loop influencing the circuit," Tan said. "The consequences of this interplay have been theorized but not demonstrated experimentally."

The scientists conducted their experiments using a genetically altered colony of the bacteria Escherichia coli (E.coli) in a simple synthetic circuit. When the colony of bacteria was stimulated by external cues, some of the cells went to the "on" position and grew more slowly, while the rest went to the "off" position and grew faster.

"It is as if the colony received the command not to expand too fast when the circuit is on," Tan explained. "Now that we know that this occurs, we used computer modeling to predict how many of the cells will go to the 'on' or 'off' state, which turns out to be consistent with experimental measurements"

The experiments were supported by the National Science Foundation, the National Institutes of Health and a David and Lucille Packard Fellowship. Duke's Philippe Marguet was also a member of the research team.


Adapted from materials provided by Duke University, via EurekAlert!, a service of AAAS.


October 4, 2009

Burst of Technology Helps Blind to See


Barbara Campbell is part of a worldwide experiment testing whether electrodes implanted in the eye can restore sight.

Blindness first began creeping up on Barbara Campbell when she was a teenager, and by her late 30s, her eye disease had stolen what was left of her sight.

Reliant on a talking computer for reading and a cane for navigating New York City, where she lives and works, Ms. Campbell, now 56, would have been thrilled to see something. Anything.

Now, as part of a striking experiment, she can. So far, she can detect the burners on her stove when making a grilled cheese, find her mirror frame, and tell whether her computer monitor is on.

She is beginning an intensive three-year research project involving electrodes surgically implanted in her eye, a camera on the bridge of her nose and a video processor strapped to her waist.

The project, involving patients in the United States, Mexico and Europe, is part of a burst of recent research aimed at one of science’s most-sought-after holy grails: making the blind see.

Some of the 37 other participants further along in the project can differentiate plates from cups, tell grass from sidewalk, sort white socks from dark, distinguish doors and windows, identify large letters of the alphabet, and see where people are, albeit not details about them.

Linda Morfoot, 65, of Long Beach, Calif., blind for 12 years, says she can now toss a ball into a basketball hoop, follow her nine grandchildren as they run around her living room and “see where the preacher is” in church.

“For someone who’s been totally blind, this is really remarkable,” said Andrew P. Mariani, a program director at the National Eye Institute. “They’re able to get some sort of vision.”

Scientists involved in the project, the artificial retina, say they have plans to develop the technology to allow people to read, write and recognize faces.

Advances in technology, genetics, brain science and biology are making a goal that long seemed out of reach — restoring sight — more feasible.

“For a long time, scientists and clinicians were very conservative, but you have to at some point get out of the laboratory and focus on getting clinical trials in actual humans,” said Timothy J. Schoen, director of science and preclinical development for the Foundation Fighting Blindness. Now “there’s a real push,” he said, because “we’ve got a lot of blind people walking around, and we’ve got to try to help them.”

More than 3.3 million Americans 40 and over, or about one in 28, are blind or have vision so poor that even with glasses, medicine or surgery, everyday tasks are difficult, according to the National Eye Institute, a federal agency. That number is expected to double in the next 30 years. Worldwide, about 160 million people are similarly affected.

“With an aging population, it’s obviously going to be an increasing problem,” said Michael D. Oberdorfer, who runs the visual neuroscience program for the National Eye Institute, which finances several sight-restoration projects, including the artificial retina. Wide-ranging research is important, he said, because different methods could help different causes of blindness.

The approaches include gene therapy, which has produced improved vision in people who are blind from one rare congenital disease. Stem cell research is considered promising, although far from producing results, and other studies involve a light-responding protein and retinal transplants.

Others are implanting electrodes in monkeys’ brains to see if directly stimulating visual areas might allow even people with no eye function to see.

And recently, Sharron Kay Thornton, 60, from Smithdale, Miss., blinded by a skin condition, regained sight in one eye after doctors at the University of Miami Miller School of Medicine extracted a tooth (her eyetooth, actually), shaved it down and used it as a base for a plastic lens replacing her cornea.

It was the first time the procedure, modified osteo-odonto-keratoprosthesis, was performed in this country. The surgeon, Dr. Victor L. Perez, said it could help people with severely scarred corneas from chemical or combat injuries.

Other techniques focus on delaying blindness, including one involving a capsule implanted in the eye to release proteins that slow the decay of light-responding cells. And with BrainPort, a camera worn by a blind person captures images and transmits signals to electrodes slipped onto the tongue, causing tingling sensations that a person can learn to decipher as the location and movement of objects.

Ms. Campbell’s artificial retina works similarly, except it produces the sensation of sight, not tingling on the tongue. Developed by Dr. Mark S. Humayun, a retinal surgeon at the University of Southern California, it drew on cochlear implants for the deaf and is partly financed by a cochlear implant maker.

It is so far being used in people with retinitis pigmentosa, in which photoreceptor cells, which take in light, deteriorate.

Gerald J. Chader, chief scientific officer at the University of Southern California’s Doheny Retinal Institute, where Dr. Humayun works, said it should also work for severe cases of age-related macular degeneration, the major cause of vision loss in older people.

Read the rest of the original article from The New York Times.

October 3, 2009

It's tempting to call them lords of the flies. For the first time, researchers have controlled the movements of free-flying insects from afar, as if they were tiny remote-controlled aircraft.

Green beetles

The Berkeley team implanted electrodes into the brain and muscles of two species: green June beetles called Cotinus texana from the southern US, and the much larger African species Mecynorrhina torquata. Both responded to stimulation in much the same way, but the weight of the electronics and their battery meant that only Mecynorrhina – which can grow to the size of a human palm – was strong enough to fly freely under radio control.

A particular series of electrical pulses to the brain causes the beetle to take off. No further stimulation is needed to maintain the flight. Though the average length of flights during trials was just 45 seconds, one lasted for more than 30 minutes. A single pulse causes a beetle to land again.

The insects' flight can also be directed. Pulses sent to the brain trigger a descent, on average by 60 centimetres. The beetles can be steered by stimulating the wing muscle on the opposite side from the direction they are required to turn, though this works only three-quarters of the time. After each manoeuvre, the beetles quickly right themselves and continue flying parallel to the ground.
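
The protocol just described, a pulse train to launch, a single pulse to land, and opposite-side wing-muscle stimulation to turn, maps naturally onto a small command interface. A hypothetical Python sketch; the pulse counts and rates are invented, not the published stimulation parameters.

class CyborgBeetle:
    """Hypothetical radio interface for the stimulation protocol above."""

    def __init__(self, radio):
        self.radio = radio

    def take_off(self):
        # a series of pulses to the brain starts flight;
        # no further stimulation is needed to sustain it
        self.radio.send(target="brain", pulses=10, rate_hz=100)

    def land(self):
        # a single pulse to the brain triggers landing
        self.radio.send(target="brain", pulses=1)

    def turn(self, direction):
        # stimulate the wing muscle opposite the desired turn direction
        opposite = "right" if direction == "left" else "left"
        self.radio.send(target=opposite + "_wing_muscle", pulses=5, rate_hz=50)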

Brain insights

Tyson Hedrick, a biomechanist at the University of North Carolina, Chapel Hill, who was not involved in the research, says he is surprised at the level of control achieved, because the controlling impulses were delivered to comparatively large regions of the insect brain.

Precisely stimulating individual neurons or circuits may harness the beetles more precisely, he told New Scientist, but don't expect aerial acrobatics. "It's not entirely clear how much control a beetle has over its own flight," Hedrick says. "If you've ever seen a beetle flying in the wild, they're not the most graceful insects."

The research may be more successful in revealing just how the brain, nerves and muscles of insects coordinate flight and other behaviours than at bringing six-legged cyborg spies into service, Hedrick adds. "It may end up helping biologists more than it will help DARPA."

Brain-recording backpacks

It's a view echoed by Reid Harrison, an electrical engineer at the University of Utah, Salt Lake City, who has designed brain-recording backpacks for insects. "I'm sceptical about their ability to do surveillance for the following reason: no one has solved the power issue."

Batteries, solar cells and piezoelectrics that harvest energy from movement cannot provide enough power to run electrodes and radio transmitters for very long, Harrison says. "Maybe we'll have some advances in those technologies in the near future, but based on what you can get off the shelf now it's not even close."

Journal reference: Frontiers in Integrative Neuroscience, DOI: 10.3389/neuro.07.024.2009

Original article by Ewen Callaway for New Scientist

A Startup That Builds Biological Parts

Ginkgo BioWorks aims to push synthetic biology to the factory level.

In a warehouse building in Boston, wedged between a cruise-ship drydock and Au Bon Pain's corporate headquarters, sits Ginkgo BioWorks, a new synthetic-biology startup that aims to make biological engineering easier than baking bread. Founded by five MIT scientists, the company offers to assemble biological parts--such as strings of specific genes--for industry and academic scientists.

Biological parts: Ginkgo BioWorks, a synthetic-biology startup, is automating the process of building biological machines. Shown here is a liquid-handling robot that can prepare hundreds of reactions.
Credit: Ginkgo BioWorks

"Think of it as rapid prototyping in biology--we make the part, test it, and then expand on it," says Reshma Shetty, one of the company's cofounders. "You can spend more time thinking about the design, rather than doing the grunt work of making DNA." A very simple project, such as assembling two pieces of DNA, might cost $100, with prices increasing from there.

Synthetic biology is the quest to systematically design and build novel organisms that perform useful functions, such as producing chemicals, using genetic-engineering tools. The field is often considered the next step beyond metabolic engineering because it aims to completely overhaul existing systems to create new functionality rather than improve an existing process with a number of genetic tweaks.

Scientists have so far created microbes that can produce drugs and biofuels, and interest among industrial chemical makers is growing. While companies already exist to synthesize pieces of DNA, Ginkgo assembles synthesized pieces of DNA to create functional genetic pathways. (Assembling specific genes into long pieces of DNA is much cheaper than synthesizing that long piece from scratch.)

Ginkgo will build on technology developed by Tom Knight, a research scientist at MIT and one of the company's cofounders, who started out his scientific career as an engineer. "I'm interested in transitioning biology from being sort of a craft, where every time you do something it's done slightly differently, often in ad hoc ways, to an engineering discipline with standardized methods of arranging information and standardized sets of parts that you can assemble to do things," says Knight.

Scientists generally create biological parts by stitching together genes with specific functions, using specialized enzymes to cut and sew the DNA. The finished part is then inserted into bacteria, where it can perform its designated task. Currently, this process is mostly done by a lab technician or graduate student; consequently, the process is slow, and the resulting construct isn't optimized for use in other projects. Knight developed a standardized way of putting together pieces of DNA, called the BioBricks standard, in which each piece of DNA is tagged on both sides with DNA connectors that allow pieces to be easily interchanged.

"If your part obeys those rules, we can use identical reactions every time to assemble those fragments into larger constructs," says Knight. "That allows us to standardize and automate the process of assembly. If we want to put 100 different versions of a system together, we can do that straightforwardly, whereas it would be a tedious job to do with manual techniques." The most complicated part that Ginkgo has built to date is a piece of DNA with 15 genes and a total of 30,000 DNA letters. The part was made for a private partner, and its function has not been divulged.

Assembling parts is only part of the challenge in building biological machines. Different genes can have unanticipated effects on each other, interfering with the ultimate function. "One of the things we'll be able to do is to assemble hundreds or thousands of versions of a specific pathway with slight variations," says Knight. Scientists can then determine which version works best.
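
The assembly idea is easy to capture in code: every compliant part carries the same connectors, so one identical operation joins any two parts and yields another compliant part, and combinatorial variants fall out for free. A simplified Python sketch; the junction here is a stand-in for the scar the real BioBricks chemistry leaves behind.

from itertools import product

PREFIX = "GAATTCGCGGCCGCTTCTAGAG"  # standard BioBrick prefix
SUFFIX = "TACTAGTAGCGGCCGCTGCAG"   # standard BioBrick suffix

def is_biobrick(part):
    """A compliant part carries the standard connectors on both ends."""
    return part.startswith(PREFIX) and part.endswith(SUFFIX)

def assemble(a, b):
    """Join two compliant parts with one identical operation; the product
    is itself compliant, so assembly composes into larger constructs."""
    assert is_biobrick(a) and is_biobrick(b)
    return a[:-len(SUFFIX)] + "TACTAG" + b[len(PREFIX):]  # scar joins inserts

def variants(slots):
    """Assemble every combination of candidate parts, one per slot: how
    hundreds of slightly different versions of a pathway can be built."""
    pathways = []
    for combo in product(*slots):
        pathway = combo[0]
        for part in combo[1:]:
            pathway = assemble(pathway, part)
        pathways.append(pathway)
    return pathways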

So far, Knight says, the greatest interest has come from manufacturing companies making chemicals for cosmetics, perfumes, and flavorings. "Many of them are trying to replace a dirty chemical process with an environmentally friendly, biologically based process," he says.

Ginkgo is one of just a handful of synthetic-biology companies. Codon Devices, a well-funded startup that synthesized DNA, ceased operations earlier this year. "The challenge now is not to synthesize genes; there are a few companies that do that," says Shetty. "It's to build pathways that can make specific chemicals, such as fuels." And unlike Codon, Ginkgo is starting small. The company is funded by seed money and a $150,000 loan from Lifetech Boston, a program to attract biotech to Boston. Its lab space is populated with banks of PCR machines, which amplify DNA, and liquid-handling robots, mostly bought on eBay or from other biotech firms that have gone out of business. And the company already has a commercial product--a kit sold through New England Biolabs that allows scientists to put together parts on their own.

"If successful, they will be providing a very important service for synthetic biology," says Chris Voigt, a synthetic biologist at the University of California, San Francisco. "There isn't anybody else who would be characterizing and providing parts to the community. I think that this type of research needs to occur outside of the academic community--at either a company or a nonprofit institute."

Original article by Emily Singer for MIT Technology Review

October 2, 2009

Locust flight simulator helps robot insects evolve


Right: Smoke signals help robots fly better (Image: Simon Walker, Animal Flight Group, Oxford University)

A LOCUST flight simulator could be the key to perfecting the ultimate surveillance machine: an artificial flying insect. The simulator can model the way wings of varying shapes and surface features beat, as well as how they change their shape during flight.

The device was created using extremely high-speed flash photography to track the way smoke particles flow over a locust's wings in a wind tunnel - a technique called particle flow velocimetry. This allowed researchers at the University of Oxford to build a computer model of the insect's wing motion. They then built software that mimicked not only this motion, but also how wing surface features, such as structural veins and corrugations, and the wings' deformation as they flap, change aerodynamic performance.

The work has shown that wings' surface structures are crucial to efficient lift generation, says lead researcher Adrian Thomas (Science, DOI: 10.1126/science.1175928).

The simulator could be a big step forward for the many teams around the world who are designing robotic insects, mainly for military purposes, though Thomas expects them to have a massive role as toys, too. "Imagine sitting in your living room doing aerial combat with radio-controlled dragonflies. Everybody would love that," he says.


Until now, modelling insect wings involved building physical replicas from rigid materials and estimating how they might move from observations of insect flight. Thomas hopes the simulator will take the guesswork out of the process, especially as every flying insect has uniquely shaped wings and wing beat patterns.

Building miniature aircraft is of great interest to the armed forces. In the UK, for example, the Ministry of Defence wants to create a device that can fly in front of a convoy and detect explosives on the road ahead. In the US, the Pentagon's research arm DARPA is funding development of a "nano air vehicle" (NAV) for surveillance that it states must weigh no more than 10 grams and have only a 7.5-centimetre wingspan.

Last month, DARPA contractor AeroVironment of Monrovia, California, demonstrated the first two-winged robot capable of hovering flight (see video at http://bit.ly/18LR8U). It achieved a stable take-off and hovered for 20 seconds. Other DARPA-funded projects by Micropropulsion and Daedalus Flight Systems are also thought to have achieved hovering robotic flight this year.

"Getting stable hover at the 10-gram size scale with beating wings is an engineering breakthrough, requiring much new understanding and invention," says Ronald Fearing, a micromechanics and flight researcher at the University of California, Berkeley. "The next step will be to get the flight efficiency up so hover can work for several minutes."

But how can such machines be made more efficient? Better batteries and lighter materials will help, but most important will be improving wing structure so the aircraft more accurately imitate - or even improve upon - the way insects fly.

So how do insects fly? For a long time no one really knew. In 1919, German aeronautical engineer Wilhelm Hoff calculated that a pollen-laden bumblebee should not have enough lift to get airborne according to the rules of aerodynamics as understood at the time.

It wasn't until 1981 that Tony Maxworthy of the University of Southern California hit on a possible reason: his working model of a fly's wings, immersed in oil, showed large vortices were spinning off the leading edge of the wing as it beat (Annual Review of Fluid Mechanics, vol 13, p 329). Within the vortices air is moving at high velocity, and is therefore at low pressure, hinting at a lift-creating mechanism unlike that of conventional aircraft, in which an angled wing travelling forward deflects air downwards, creating an opposing upward force.
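
For reference, the "conventional aircraft" mechanism contrasted here is captured by the textbook steady-lift relation (standard aerodynamics, not from the article):

L = \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_L

where \rho is air density, v airspeed, S wing area and C_L the lift coefficient. The leading-edge vortex in effect drives C_L far beyond what steady, attached flow over a fixed wing can sustain, which is how insect wings generate their outsized lift.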

In 1996 Thomas was a member of Charles Ellington's team at the University of Cambridge, which identified the mechanism by which bugs created high lift forces - using a model of a hawkmoth. "We found a leading-edge vortex that was stable over the whole of the downstroke," says Thomas.

The nature of the leading-edge vortex is dependent on the size of the wings, their number, the pattern described by the beating wing and the wing structure.

This work has laid the foundations for researchers such as Robert Wood and his team at Harvard University, who are investigating ways to make insect wings (Bioinspiration and Biomimetics, DOI: 10.1088/1748-3182/4/3/036002). They have developed a new way to build flexible wings from moulds using microchip manufacturing techniques. Using elastic polymers and elegant, vein-like supporting structures, the researchers can build wings with variable camber, and with different corrugations embossed in them, in an attempt to mimic the in-flight aerodynamics and deformation of real insect wings.

Thomas is also focusing on the way insect wings deform in flight. "If we use a wing model with all the complex curves, twists and corrugations of the real insect it is 50 per cent more efficient than a model with rigid flat-plate wings, for the same lift generation. That would be a huge saving in power for a micro air vehicle," he says.

Although the Oxford team's simulator is geared for locust wings at present, the researchers are adjusting the software to model the hoverfly - with other insect types to follow.

"What we've shown is that modern aerodynamics really can accurately model insect flight," Thomas says. "That old myth about aerodynamics not being able to model bumblebee flight really is dead now."

Original article by Paul Marks for New Scientist

Nanotech researchers develop artificial pore


CINCINNATI—Using an RNA-powered nanomotor, University of Cincinnati (UC) biomedical engineering researchers have successfully developed an artificial pore able to transmit nanoscale material through a membrane.

In a study led by UC biomedical engineering professor Peixuan Guo, PhD, members of the UC team inserted the modified core of a nanomotor, a microscopic biological machine, into a lipid membrane. The resulting channel enabled them to move both single- and double-stranded DNA through the membrane.

Their paper, “Translocation of double-stranded DNA through membrane-adapted phi29 motor protein nanopores,” will appear in the journal Nature Nanotechnology, Sept. 27, 2009. "The engineered channel could have applications in nano-sensing, gene delivery, drug loading and DNA sequencing," says Guo.

Guo derived the nanomotor used in the study from the biological motor of bacteriophage phi29, a virus that infects bacteria. Previously, Guo discovered that the bacteriophage phi29 DNA-packaging motor uses six molecules of the genetic material RNA to power its DNA genome through its protein core, much like a screw through a bolt.

"The re-engineered motor core itself has shown to associate with lipid membranes, but we needed to show that it could punch a hole in the lipid membrane," says David Wendell, PhD, co-first author of the paper and a research assistant professor in UC’s biomedical engineering department. "That was one of the first challenges, moving it from its native enclosure into this engineered environment."

In this study, UC researchers embedded the re-engineered nanomotor core into a lipid sheet, creating a channel large enough to allow the passage of double-stranded DNA through the channel.

Guo says past work with biological channels has been focused on channels large enough to move only single-stranded genetic material.

"Since the genomic DNA of human, animals, plants, fungus and bacteria are double stranded, the development of single pore system that can sequence double-stranded DNA is very important," he says.

By being placed into a lipid sheet, the artificial membrane channel can be used to load double-stranded DNA, drugs or other therapeutic material into the liposome, other compartments, or potentially into a cell through the membrane.

Guo also says the process by which the DNA travels through the membrane can have larger applications.

"The idea that a DNA molecule travels through the nanopore, advancing nucleotide by nucleotide, could lead to the development of a single pore DNA sequencing apparatus, an area of strong national interest," he says.

Wendell says researchers can use stochastic sensing, a new analytical technique used in nanopore work, to characterize and identify material, like DNA, moving through the membrane.
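
In stochastic sensing, each molecule passing through the pore transiently blocks the ionic current, and the depth and duration of that blockade fingerprint the molecule. A generic Python sketch of the event detection; the thresholds and units are illustrative, not from the UC study.

import numpy as np

def find_blockades(current_pA, fs_hz, open_level_pA, frac=0.7, min_ms=0.1):
    """Flag translocation events as intervals where the pore current drops
    below a fraction of the open-pore level; report each event's dwell time
    and blockade depth, the two features used to identify the molecule."""
    blocked = current_pA < frac * open_level_pA
    events, start = [], None
    for i, b in enumerate(blocked):
        if b and start is None:
            start = i
        elif not b and start is not None:
            dwell_ms = (i - start) / fs_hz * 1e3
            if dwell_ms >= min_ms:
                depth = 1.0 - current_pA[start:i].mean() / open_level_pA
                events.append((dwell_ms, depth))
            start = None
    return events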

Co-first author and UC postdoctoral fellow Peng Jing, PhD, says that, compared with traditional research methods, the successful embedding of the nanomotor into the membrane may also provide researchers with a new way to study the DNA packaging mechanisms of the viral nanomotor.

"Specifically, we are able to investigate the details concerning how double-stranded DNA translocates through the protein channel," he says.

The study is the next step in research on using nanomotors to package and deliver therapeutic agents directly to infected cells. Eventually, the team's work could enable use of nanoscale medical devices to diagnose and treat diseases.

"This motor is one of the strongest bio motors discovered to date," says Wendell, "If you can use that force to move a nanoscale rotor or a nanoscale machine … you're converting the force of the motor into a machine that might do something useful."

Funding for this study comes from the National Institutes of Health's Nanomedicine Development Center. Guo is the director of one of eight NIH Nanomedicine Development Centers and an endowed chair in biomedical engineering at UC.

Coauthors of the study include UC research assistant professor David Wendell, PhD, postdoctoral fellow Peng Jing, PhD, graduate students Jia Geng and Tae Jin Lee and former postdoctoral fellow Varuni Subramaniam from Guo’s previous lab at Purdue University. Carlo Montemagno, dean of the College of Engineering and College of Applied Science, also contributed to the study.

October 1, 2009

A step toward better brain implants using conducting polymer nanotubes


This illustration depicts neurons firing (green structures in the foreground) and communicating with nanotubes in the background. Credit: Illustration courtesy of Mohammad Reza Abidian

ANN ARBOR, Mich.---Brain implants that can more clearly record signals from surrounding neurons in rats have been created at the University of Michigan. The findings could eventually lead to more effective treatment of neurological disorders such as Parkinson's disease and paralysis.

Neural electrodes must work for time periods ranging from hours to years. When the electrodes are implanted, the brain first reacts to the acute injury with an inflammatory response. Then the brain settles into a wound-healing, or chronic, response.

It's during this secondary response that brain tissue starts to encapsulate the electrode, cutting it off from communication with surrounding neurons.

The new brain implants developed at U-M are coated with nanotubes made of poly(3,4-ethylenedioxythiophene) (PEDOT), a biocompatible and electrically conductive polymer that has been shown to record neural signals better than conventional metal electrodes.

U-M researchers found that PEDOT nanotubes enhanced high-quality unit activity (signal-to-noise ratio >4) about 30 percent more than the uncoated sites. They also found that, based on in vivo impedance data, PEDOT nanotubes might be used as a novel method of biosensing to indicate the transition between acute and chronic responses in brain tissue.
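
Signal-to-noise ratio here is the standard unit-quality metric for extracellular recordings. One common way of computing it, sketched in Python; the paper's exact definition may differ.

import numpy as np

def unit_snr(trace, spike_times, fs_hz, win_ms=1.0):
    """SNR of recorded unit activity: mean peak-to-peak spike amplitude
    divided by twice the standard deviation of the spike-free noise."""
    half = int(win_ms * 1e-3 * fs_hz / 2)
    windows = np.array([trace[t - half:t + half] for t in spike_times
                        if half <= t < len(trace) - half])
    p2p = (windows.max(axis=1) - windows.min(axis=1)).mean()
    noise = np.asarray(trace, dtype=float).copy()
    for t in spike_times:              # blank spikes out of the noise estimate
        noise[max(0, t - half):t + half] = np.nan
    return p2p / (2 * np.nanstd(noise))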

The results are featured in the cover article of the Oct. 5 issue of the journal Advanced Materials. The paper is titled "Interfacing Conducting Polymer Nanotubes with the Central Nervous System: Chronic Neural Recording using Poly(3,4-ethylenedioxythiophene) Nanotubes."

"Microelectrodes implanted in the brain are increasingly being used to treat neurological disorders," said Mohammad Reza Abidian, a post-doctoral researcher working with Professor Daryl Kipke in the Neural Engineering Laboratory at the U-M Department of Biomedical Engineering.

"Moreover, these electrodes enable neuroprosthetic devices, which hold the promise to return functionality to individuals with spinal cord injuries and neurodegenerative diseases. However, robust and reliable chronic application of neural electrodes remains a challenge."

In the experiment, the researchers implanted two neural microelectrodes in the brains of three rats. PEDOT nanotubes were fabricated on the surface of every other recording site by using a nanofiber templating method. Over the course of seven weeks, researchers monitored the electrical impedance of the recording sites and measured the quality of recording signals.

PEDOT nanotubes in the coating enable the electrodes to operate with less electrical resistance than current sites, which means they can communicate more clearly with individual neurons.

"Conducting polymers are biocompatible and have both electronic and ionic conductivity," Abidian said. "Therefore, these materials are good candidates for biomedical applications such as neural interfaces, biosensors and drug delivery systems."

In the experiments, the Michigan researchers applied PEDOT nanotubes to microelectrodes provided by the U-M Center for Neural Communication Technology. The PEDOT nanotube coatings were developed in the laboratory of David C. Martin, now an adjunct professor of materials science and engineering, macromolecular science and engineering, and biomedical engineering. Martin is currently the Karl W. Böer Professor and Chair of the Materials Science and Engineering Department at the University of Delaware.

Martin is also co-founder and chief scientific officer of Biotectix, a U-M spinoff company located in Ann Arbor. The company is working to commercialize conducting polymer-based coatings for a variety of biomedical devices.

In previous experiments, Abidian and his colleagues showed that PEDOT nanotubes can carry drugs with them to prevent encapsulation.

"This study paves the way for smart recording electrodes that can deliver drugs to alleviate the immune response of encapsulation," Abidian said.

More information: Scientific article: http://www3.interscience.wiley.com/cgi-bin/fulltext/122525755/PDFSTART

Source: University of Michigan

'2B - The Era of Flesh is Over' film to premiere at Woodstock Film Festival Friday

"2B - The Era of Flesh is Over," a science-fiction film set in the near future, will have its world premiere at the 10th anniversary Woodstock Film Festival in Woodstock, NY on Friday, Oct. 2, 2009.

A panel discussion, "Redesigning Humanity -- The New Frontier," moderated by bioethicist James J. Hughes and featuring Ray Kurzweil, 2B executive producer Martine Rothblatt, and author Wendell Wallach, will be streamed live. It will explore how AI, nanotech, genetic engineering and other technologies will allow human beings to transcend the limitations of the body and fundamentally change the world over the coming 50 years.

2B portrays a decaying world on the cusp of great transformation. When the world's first transhuman is created by a renegade corporate CEO and bioscientist, the foundations of society's beliefs are threatened in a transhuman world where man merges with technology.


KurzweilAI.net, Oct. 1, 2009

September 28, 2009

The Reality of Robot Surrogates


How far are we from sending robots into the world in our stead?

Imagine a world where you're stronger, younger, better looking, and don't age. Well, you do, but your robot surrogate—which you control with your mind from a recliner at home while it does your bidding in the world—doesn't.

It's a bit like The Matrix, but instead of a computer-generated avatar in a graphics-based illusion, in Surrogates—which opens Friday and stars Bruce Willis—you have a real titanium-and-fluid copy impersonating your flesh and blood and running around under your mental control. Other recent films have used similar concepts to ponder issues like outsourced virtual labor (Sleep Dealer) and incarceration (Gamer).

The real technology behind such fantastical fiction is grounded both in far-out research and practical robotics. So how far away is a world of mind-controlled personal automatons?

"We're getting there, but it will be quite a while before we have anything that looks like Bruce Willis," says Trevor Blackwell, the founder and CEO of Anybots, a robotics company in Mountain View, Calif., that builds "telepresence" robots controlled remotely like the ones in Surrogates.

Telepresence is action at a distance, or the projection of presence where you physically aren't. Technically, phoning in to your weekly staff meeting is a form of telepresence. So is joysticking a robot up to a suspected IED in Iraq so a soldier can investigate the scene while sitting in the (relative) safety of an armored vehicle.

Researchers are testing brain-machine interfaces on rats and monkeys that would let the animals directly control a robot, but so far the telepresence interfaces at work in the real world are physical. Through wireless Internet connections, video cameras, joysticks, and sometimes audio, humans move robots around at the office, in the operating room, underwater, on the battlefield, and on Mars.

A recent study by NextGen Research, a market research firm, projects that in the next five years, telepresence will become a significant feature of the US $1.16 billion personal robotics market, meaning robots for you or your home.

According to the study's project manager, Larry Fisher, telepresence "makes the most sense" for security and surveillance robots that would be used to check up on pets or family members from far away. Such robots could also allow health-care professionals to monitor elderly people taking medication at home to ensure the dosage and routine are correct.

Right now, most commercial teleoperated robots are just mobile webcams with speakers, according to NextGen. They can be programmed to roam a set path, or they can be controlled over the Internet by an operator. iRobot, the maker of the Roomba floor cleaner, canceled its telepresence robot, ConnectR, in January, choosing to wait until such a robot would be easier to use. But plenty of companies, such as Meccano/Erector and WowWee, are marketing personal telepresence bots.

Blackwell's Anybots, for example, has developed an office stand-in called QA. It's a Wi-Fi-enabled, vaguely body-shaped wheeled robot with an ET-looking head that has cameras for eyes and a display in its chest that shows an image of the person it's standing in for. You can slap on virtual-reality goggles, sensor gloves, and a backpack of electronics to link to it over the Internet for an immersive telepresence experience. Or you can just connect to the robot through your laptop's browser.

For the rest of the article, go to IEEE Spectrum.

Original article posted by Anne-Marie Corley // September 2009

September 26, 2009

Honda's U3-X Personal Mobility Device is the Segway of unicycles

Yeah, we've seen a self-balancing unicycle before, but the brand new U3-X from Honda takes it to another level. A creepy-sterile, awesomely futuristic Honda level, to be precise. What makes the U3-X particularly interesting is that its regular large unicycle wheel is actually made up of a series of small wheels that can rotate independently, meaning the device can go forward, backward, side to side and diagonally, all controlled with a simple lean. Honda credits its ASIMO research for this multi-directional capability, though we're not sure we see it -- ASIMO is a biped, after all -- but far be it from us to discredit an excuse to keep up the good work on the ASIMO front. Right now the "experimental model" of the U3-X gets a single hour of battery life and weighs under 22 pounds, with a seat and foot rests that fold into the device for extra portability. No word, of course, on when the thing might make it to market, but Honda plans to show it off next month at the Tokyo Motor Show. A devastatingly short video of the U3-X in action is after the break.
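
For the tinkerers: a rough sketch of how a lean-driven controller like the U3-X's might map rider tilt onto its two drive axes -- the large wheel for forward and back, the independently rotating small wheels for sideways -- so that a diagonal lean produces diagonal motion. The gain is invented and Honda's actual control scheme isn't public, so take it as illustration only.

import math

def lean_to_motion(pitch_deg, roll_deg, gain=0.05):
    # Hypothetical mapping from rider lean to the U3-X's two drive axes:
    # forward/back lean (pitch) drives the large wheel, side lean (roll)
    # drives the small rollers that make up its rim. The gain is invented.
    forward = gain * pitch_deg    # m/s along the direction of travel
    sideways = gain * roll_deg    # m/s across it
    speed = math.hypot(forward, sideways)
    heading = math.degrees(math.atan2(sideways, forward))
    return forward, sideways, speed, heading

# Leaning 10 degrees forward and 10 degrees to the right moves diagonally:
print(lean_to_motion(10, 10))  # equal axis speeds, 45-degree heading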

September 24, 2009

Stimulating Sight: Retinal Implant Could Help Restore Useful Level Of Vision To Certain Groups Of Blind People


The retinal implant receives visual data from a camera mounted on a pair of glasses. The coil sends the images to a chip attached to the side of the eyeball, which processes the data and sends it to electrodes implanted below the retina. (Credit: Courtesy of Shawn Kelly)
Inspired by the success of cochlear implants that can restore hearing to some deaf people, researchers at MIT are working on a retinal implant that could one day help blind people regain a useful level of vision.

The eye implant is designed for people who have lost their vision from retinitis pigmentosa or age-related macular degeneration, two of the leading causes of blindness. The retinal prosthesis would take over the function of lost retinal cells by electrically stimulating the nerve cells that normally carry visual input from the retina to the brain.

Such a chip would not restore normal vision, but it could help blind people more easily navigate a room or walk down a sidewalk.

"Anything that could help them see a little better and let them identify objects and move around a room would be an enormous help," says Shawn Kelly, a researcher in MIT's Research Laboratory for Electronics and member of the Boston Retinal Implant Project.

The research team, which includes scientists, engineers and ophthalmologists from Massachusetts Eye and Ear Infirmary, the Boston VA Medical Center and Cornell as well as MIT, has been working on the retinal implant for 20 years. The research is funded by the VA Center for Innovative Visual Rehabilitation, the National Institutes of Health, the National Science Foundation, the Catalyst Foundation and the MOSIS microchip fabrication service.

Led by John Wyatt, MIT professor of electrical engineering, the team recently reported a new prototype that they hope to start testing in blind patients within the next three years.

Electrical stimulation

Patients who received the implant would wear a pair of glasses with a camera that sends images to a microchip attached to the eyeball. The glasses also contain a coil that wirelessly transmits power to receiving coils surrounding the eyeball.

When the microchip receives visual information, it activates electrodes that stimulate nerve cells in the areas of the retina corresponding to the features of the visual scene. The electrodes directly activate the nerve cells that carry signals to the brain, bypassing the damaged layers of the retina.
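
As a rough illustration of that signal chain (and only an illustration -- the chip's real encoding is far more sophisticated), the sketch below reduces a camera frame to a small grid of per-electrode stimulation levels, one level per patch of the image. The grid size and current scale are invented.

import numpy as np

def frame_to_stimulation(frame, grid=(4, 4), max_current_ua=100.0):
    # Toy reduction of a camera frame to per-electrode stimulation levels:
    # each electrode gets a current proportional to the mean brightness of
    # its patch of the image. Grid size and current scale are invented.
    h, w = frame.shape
    gh, gw = grid
    levels = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            patch = frame[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            levels[i, j] = patch.mean() / 255.0 * max_current_ua
    return levels  # microamps per electrode

# A bright vertical bar on a dark background becomes a matching stripe
# of strongly driven electrodes:
frame = np.zeros((64, 64), dtype=np.uint8)
frame[:, 16:32] = 255
print(frame_to_stimulation(frame))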

One question that remains is what kind of vision this direct electrical stimulation actually produces. About 10 years ago, the research team started to answer that by attaching electrodes to the retinas of six blind patients for several hours.

When the electrodes were activated, patients reported seeing a small number of "clouds" or "drops of blood" in their field of vision, and the number of clouds or blood drops they reported corresponded to the number of electrodes that were stimulated. When there was no stimulus, patients accurately reported seeing nothing. Those tests confirmed that retinal stimulation can produce some kind of organized vision in blind patients, though further testing is needed to determine how useful that vision can be.

After those initial tests, with grants from the Boston Veterans Administration Medical Center and the National Institutes of Health, the researchers started to build an implantable chip, which would allow them to do more long-term tests. Their goal is to produce a chip that can be implanted for at least 10 years.

One of the biggest challenges the researchers face is designing a surgical procedure and implant that won't damage the eye. In their initial prototypes, the electrodes were attached directly atop the retina from inside the eye, which carries more risk of damaging the delicate retina. In the latest version, described in the October issue of IEEE Transactions on Biomedical Engineering, the implant is attached to the outside of the eye, and the electrodes are implanted behind the retina.

That subretinal location, which reduces the risk of tearing the retina and requires a less invasive surgical procedure, is one of the key differences between the MIT implant and retinal prostheses being developed by other research groups.

Another feature of the new MIT prototype is that the chip is now contained in a hermetically sealed titanium case. Previous versions were encased in silicone, which would eventually allow water to seep in and damage the circuitry.

While they have not yet begun any long-term tests on humans, the researchers have tested the device in Yucatan miniature pigs, which have roughly the same size eyeballs as humans. Those tests are only meant to determine whether the implants remain functional and safe and are not designed to observe whether the pigs respond to stimuli to their optic nerves.

So far, the prototypes have been successfully implanted in pigs for up to 10 months, but further safety refinements need to be made before clinical trials in humans can begin.

Wyatt and Kelly say they hope that once human trials begin and blind patients can offer feedback on what they're seeing, they will learn much more about how to configure the algorithm implemented by the chip to produce useful vision.

Patients have told them that what they would like most is the ability to recognize faces. "If they can recognize faces of people in a room, that brings them into the social environment as opposed to sitting there waiting for someone to talk to them," says Kelly.


Journal reference:

  1. Shire, D. B.; Kelly, S. K.; Chen, J.; Doyle, P.; Gingerich, M. D.; Cogan, S. F.; Drohan, W. A.; Mendoza, O.; Theogarajan, L.; Wyatt, J. L.; Rizzo, J. F. Development and Implantation of a Minimally Invasive Wireless Subretinal Neurostimulator. IEEE Transactions on Biomedical Engineering, October 2009. DOI: 10.1109/TBME.2009.2021401
Adapted from materials provided by Massachusetts Institute of Technology. Original article written by Anne Trafton, MIT News Office.

September 23, 2009

Video surveillance system that reasons like a human brain

BRS Labs announced a video-surveillance technology called Behavioral Analytics, which leverages cognitive reasoning and processes visual data on a level similar to the human brain.

It is impossible for humans to monitor the tens of millions of cameras deployed throughout the world, a fact long recognized by the international security community. Security video is either used for forensic analysis after an incident has occurred, or it relies on a limited-capability technology known as Video Analytics – video-motion and object-classification-based software that attempts to watch video streams and then sends an alarm on specific pre-programmed events. The problem is that this legacy solution generates so many false alarms that it is effectively useless in the real world.

BRS Labs has created a technology it calls Behavioral Analytics. It uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terrorist activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup or programming.

The system learns autonomously, and builds cognitive “memories” while continuously monitoring a scene through the “eyes” of a CCTV security camera. It sees and then registers the context of what constitutes normal behavior, and the software distinguishes and alerts on abnormal behavior without requiring any special programming, definition of rules or virtual trip lines.
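
As a toy illustration of that learn-normal, flag-abnormal idea -- BRS Labs' actual engines are proprietary, so this is in no way their algorithm -- the sketch below models a single tracked feature, an object's speed, as a Gaussian learned from past observations and alerts when a new observation is a statistical outlier.

import numpy as np

class BehaviorModel:
    # Toy analogue of learning "normal" behavior: treat one tracked feature
    # (here, an object's speed) as a Gaussian and flag statistical outliers.
    # BRS Labs' engines are proprietary; this is purely illustrative.
    def __init__(self, threshold_sigma=4.0):
        self.samples = []
        self.threshold = threshold_sigma

    def observe(self, value):
        self.samples.append(value)

    def is_abnormal(self, value):
        data = np.array(self.samples)
        mean, std = data.mean(), data.std() + 1e-9
        return abs(value - mean) / std > self.threshold

model = BehaviorModel()
for _ in range(1000):                          # pedestrians passing the camera
    model.observe(np.random.normal(1.4, 0.2))  # walking speeds in m/s
print(model.is_abnormal(1.5))  # False: an ordinary walking pace
print(model.is_abnormal(8.0))  # True: someone sprinting through the scene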

AISight is currently fielded across a wide variety of global critical infrastructure assets, protecting major international hotels, banking institutions, seaports, nuclear facilities, airports and dense urban areas plagued by criminal activity.

Original article by Help Net Security

September 21, 2009

Robots get smarter by asking for help


To the right: Robot recharging itself

ASKING someone for help is second nature for humans, and now it could help robots overcome one of the thorniest problems in artificial intelligence.

That's the thinking behind a project at Willow Garage, a robotics company in Palo Alto, California. Researchers there are training a robot to ask humans to identify objects it doesn't recognise. If successful, it could be an important step in developing machines capable of operating with consistent autonomy.

Object recognition has long troubled AI researchers. While computers can be taught to recognise simple objects, such as pens or mugs, they often make mistakes when the lighting conditions or viewing angle change. This makes it difficult to create robots that can navigate safely around buildings and interact with objects, a problem Willow Garage encountered when building its Personal Robot 2 (PR2).

Where AI struggles, humans excel, finding this sort of recognition task almost effortless. So Alex Sorokin, a computer scientist at the University of Illinois at Urbana-Champaign, who collaborates with Willow Garage, decided to take advantage of this by building a system that allows PR2 to ask humans for help.

The system uses Amazon's Mechanical Turk, an online marketplace which pairs up workers with employers that have simple tasks they need completing. The robot takes a photo of the object it doesn't recognise and sends it to Mechanical Turk. Workers can then use Sorokin's software to draw an outline around an object in the image and attach a name to it, getting paid between 3 and 15 cents for each image they process.

In initial tests, the robot moved through Willow Garage's offices, sending images to be processed every few seconds. Labelled images started coming back a few minutes later. The accuracy rate was only 80 per cent, but Sorokin says this can be improved by paying other workers to verify that the responses are valid.
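
The loop is easy to picture in code. The sketch below is entirely hypothetical -- the helper functions stand in for a robot camera driver and a Mechanical Turk client, and none of it is Sorokin's actual software -- but it shows the idea: post a labelling task to several workers, wait for their answers, and take a majority vote so one worker's mistake doesn't mislead the robot.

import time

# Everything here is hypothetical: the helpers stand in for a robot camera
# driver and a Mechanical Turk client; none of this is Sorokin's software.

def post_labeling_hit(image, reward, assignments):
    # Pretend to post a human intelligence task and return its handle.
    return {"image": image, "reward": reward, "assignments": assignments}

def fetch_labels(hit):
    # Stub: pretend all workers answered at once and one mislabeled the object.
    return ["mug"] * (hit["assignments"] - 1) + ["stapler"]

def label_unknown_object(image, reward_cents=5, workers=3):
    hit = post_labeling_hit(image, reward_cents / 100.0, workers)
    labels = fetch_labels(hit)
    while len(labels) < workers:  # in reality, answers trickle in over minutes
        time.sleep(30)
        labels = fetch_labels(hit)
    # A majority vote across workers absorbs individual mistakes.
    return max(set(labels), key=labels.count)

print(label_unknown_object("snapshot.jpg"))  # -> "mug"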

Sorokin believes his system will help robots learn about new environments. A cleaning robot, for example, could spend its first week in a new building taking pictures and having people label them, helping it to build up a model of the space and the objects it contained. If it got stuck, it could always ask for help again.

"This is a fantastic idea," says John Leonard, a roboticist at the Massachusetts Institute of Technology. Potentially this could allow robots to operate for long periods without direct intervention from a human operator, he adds.

The next step for the programmers is to enable PR2 to make sense of the human responses and then act upon them, Sorokin says.

September 17, 2009

The Eyeborg Project (Eye Socket Camera)

(Not The Movie Eyeborgs)

Eyeborg Phase II from eyeborg on Vimeo.



The Eyeborg Project is a venture by Rob Spence, a filmmaker, and Kosta Grammatis, a former SpaceX avionics systems engineer, to embed a video camera and transmitter in a prosthetic eye that will record the world from a perspective never seen before. The only thing I'd be concerned about is it getting hacked, since it has a wireless transmitter.

Check it out at Eyeborgproject.com

Check out their blog -->here<--

If the video loads too slowly check it out at youtube -->here<--

September 16, 2009

Cyborg crickets could chirp at the smell of survivors


To the right: Could modified insects be joining rescue workers in the search for survivors in the future? (Image: KPA/Zuma/Rex Features)

IF you're trapped under rubble after an earthquake, wondering if you'll see daylight again, the last thing you need is an insect buzzing around your face. But that insect could save your life, if a scheme funded by the Pentagon comes off.

The project aims to co-opt the way some insects communicate to give early warning of chemical attacks on the battlefield - the equivalent of the "canary in a coal mine". The researchers behind it say the technology could be put to good use in civilian life, from locating disaster victims to monitoring for pollution and gas leaks, or acting as smoke detectors.

Pentagon-backed researchers have already created insect cyborgs by implanting them with electrodes to control their wing muscles. The latest plan is to create living communication networks by implanting a package of electronics in crickets, cicadas or katydids - all of which communicate via wing-beats. The implants will cause the insects in these OrthopterNets to modulate their calls in the presence of certain chemicals.

"We could do this by adjusting the muscle tension or some other parameter that affects the sound-producing movements. The insect itself might not even notice the modulation," says Ben Epstein of OpCoast, who came up with the idea during a visit to China, where he heard cicadas changing calls in response to each other. The firm, which is based in Point Pleasant Beach, New Jersey, has been awarded a six-month contract to develop a mobile communications network for insects.

As well as a biochemical sensor and a device for modulating the wing muscles, the electronics package would contain an acoustic sensor designed to respond to the altered calls of other insects. This should ensure the "alarm" signal is passed quickly across the network and is ultimately picked up by ground-based transceivers.
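
A back-of-the-envelope simulation shows how such a relay could blanket an area. In the sketch below, an alarm hops from insect to insect as long as each is within earshot of an already-alarmed neighbour; the positions and the 1-kilometre hearing range are invented for illustration.

import math
from collections import deque

def alarm_spread(positions, source, hearing_range_m=1000.0):
    # Toy relay: an insect that detects the target chemical modulates its
    # call; any insect within earshot hears the altered call and modulates
    # its own, passing the alarm along hop by hop.
    alarmed, queue = {source}, deque([source])
    while queue:
        i = queue.popleft()
        for j, pos in enumerate(positions):
            if j not in alarmed and math.dist(positions[i], pos) <= hearing_range_m:
                alarmed.add(j)
                queue.append(j)
    return alarmed

# A chain of insects spaced 800 m apart relays the alarm end to end, even
# though the source itself can only be heard 1 km away:
chain = [(k * 800.0, 0.0) for k in range(10)]
print(sorted(alarm_spread(chain, source=0)))  # all ten insects alarmed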

The Pentagon's priority is for the insects to detect chemical and biological agents on the battlefield, but Epstein says they could be modified to respond to the scent of humans and thus be used to find survivors of earthquakes and other disasters.

The real challenge will be to miniaturise the electronics. "Given a big enough insect it wouldn't be a problem," says Epstein. But the company is looking at ubiquitous species such as crickets, which tend to be smaller. Each network is likely to use hundreds or thousands of insects, though they could be spread far apart: some katydids can be heard a kilometre away.

Are OrthopterNets feasible? "I don't see why not," says Peter Barnard, director of science at the Royal Entomological Society in London. "Although insects might appear to be limited by the anatomy of their sound-producing organs, we know that they can produce different signals for different purposes." Since there is already evidence of modulation within quite broad bandwidths of frequencies for communication, it might be possible to modify and exploit these abilities, he says.

Originally posted in New Scientist