June 23, 2012

Aging is recorded in our genes


Researchers at the Bellvitge Biomedical Research Institute in Barcelona, Spain, have found significantly more cytosine methylation in a newborn than in a centenarian: 80.5% of all cytosine nucleotides, compared with 73%.

Recent research suggests that changes in DNA methylation patterns as a person gets older may contribute to human diseases for which risk increases with age, including cancer.

Background:

DNA is made up of four basic building blocks — adenine, thymine, guanine, and cytosine—and the sequence of these nucleotides within a gene determines what protein it makes. Genes can be switched on and off as needed, and the regulation of genes often involves what are called epigenetic mechanisms in which chemical alterations are made to the DNA. One of the most common of these epigenetic changes involves a methyl group — one carbon atom and three hydrogen atoms — binding to a nucleotide, usually cytosine. In general, this binding, called methylation, turns off the gene in question.

The researchers looked at two extreme cases: a newborn baby boy and a 103-year-old man.

The team extracted DNA from white blood cells taken from the blood of the elderly man and from the umbilical cord blood of the baby and determined its methylation pattern using a fairly new technique called whole-genome bisulfite sequencing (WGBS).

With WGBS, DNA is exposed to the chemical sodium bisulfite, which has no effect on cytosines with methyl groups bound to them but turns nonmethylated cytosines into another nucleotide called uracil. The result is an epigenetic map that shows exactly which DNA sites are methylated and which are not.
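To make the readout concrete, here is a toy sketch (not the authors' pipeline; the sequence and methylation positions are invented) of how bisulfite conversion exposes methylation: a methylated cytosine survives the treatment and still reads as C, while an unmethylated cytosine is converted to uracil and reads as T, so comparing converted reads against the reference yields per-site methylation calls and, aggregated genome-wide, figures like the 80.5% and 73% above.

```python
# Toy illustration of bisulfite readout: unmethylated C reads as T after
# conversion and sequencing; methylated C is protected and still reads as C.

reference = "ACGTCCGTACGTTCGA"     # hypothetical genomic sequence
methylated_sites = {4, 13}         # assumed 0-based positions of methylated Cs

def bisulfite_convert(seq, methylated):
    """Simulate bisulfite treatment followed by sequencing (C -> T unless methylated)."""
    return "".join(
        "C" if (base == "C" and i in methylated)
        else ("T" if base == "C" else base)
        for i, base in enumerate(seq)
    )

read = bisulfite_convert(reference, methylated_sites)

# A reference C that still reads as C after conversion was methylated.
calls = [read[i] == "C" for i, base in enumerate(reference) if base == "C"]
level = sum(calls) / len(calls)

print(read)                                      # ATGTCTGTATGTTCGA
print(f"global methylation level: {level:.1%}")  # 40.0% in this toy example
```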

To look at an intermediate case, the team also performed WGBS on the DNA of a 26-year-old male subject; the methylation level was also intermediate, about 78%.

Differences between newborn and centenarian

They then took a closer look at the differences between the DNA of the newborn and of the centenarian, but restricted the comparison to regions of the genome where the DNA nucleotide sequences were identical so that only the epigenetic differences would stand out.

The team identified nearly 18,000 “differentially methylated regions” (DMRs) of the genome, covering many types of genes. More than a third of the DMRs occurred in genes that have already been linked with cancer risk. Moreover, in the centenarian, 87% of the DMRs involved the loss of the methyl group, while only 13% involved the gain of one.
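The comparison logic can be sketched as follows, assuming hypothetical per-region methylation fractions and an arbitrary cutoff (the study's own statistical criteria are not reproduced here): sequence-identical regions are compared between the two genomes, large differences are flagged as DMRs, and each DMR is labeled as a loss or gain of methylation in the centenarian.

```python
# Hedged sketch (not the published pipeline): flag differentially methylated
# regions (DMRs) by comparing per-region methylation fractions between the
# newborn and centenarian genomes, restricted to sequence-identical regions.

# Hypothetical methylation fractions per region (region_id -> fraction methylated)
newborn     = {"chr1:1000-2000": 0.92, "chr1:5000-6000": 0.88, "chr2:100-900": 0.40}
centenarian = {"chr1:1000-2000": 0.55, "chr1:5000-6000": 0.86, "chr2:100-900": 0.78}

THRESHOLD = 0.25   # assumed minimum absolute difference to call a DMR

dmrs = []
for region, nb_level in newborn.items():
    diff = centenarian[region] - nb_level
    if abs(diff) >= THRESHOLD:
        direction = "gain in centenarian" if diff > 0 else "loss in centenarian"
        dmrs.append((region, round(diff, 2), direction))

for region, diff, direction in dmrs:
    print(region, diff, direction)
# In the study, ~87% of the ~18,000 DMRs showed a loss of methylation in the
# centenarian and ~13% showed a gain.
```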

Finally, to expand the study, the team looked at the methylation patterns of 19 newborns and 19 people aged between 89 and 100 years old. This analysis confirmed that older people have a lower amount of cytosine methylation than newborns.

Increased risk of infection and diabetes

The authors conclude that the degree of methylation decreases in a cumulative fashion over time. Moreover, in the centenarian, the loss of methyl groups, which turns genes back on, often occurred in genes that increase the risk of infection and diabetes when they are turned on during adulthood. In contrast, the small number of genes in the centenarian that had greater methylation levels were often those that needed to be kept turned on to protect against cancer.

The new work is the first to compare the complete, genome-wide DNA methylation patterns of these two diverse age groups, says Martin Widschwendter, an oncologist at University College London in the United Kingdom who has studied the link between methylation and cancer.

Widschwendter, who likens the DNA sequence to the genome’s “hardware” and epigenetic changes to its “software,” says that the team’s study supports earlier research suggesting that “as a function of age and environmental exposure, this software accumulates defects” that can cause “age-related cancer and degenerative diseases.”

Ref.: Holger Heyn et al., Distinct DNA methylomes of newborns and centenarians, Proceedings of the National Academy of Sciences, 2012, DOI: 10.1073/pnas.1120658109 (open access)

[ Science Now ]

June 18, 2012

New energy source for future medical implants: brain glucose


The Matrix was right: humans will act as batteries

Brain power: harvesting power from the cerebrospinal fluid within the subarachnoid space. Inset at right: a micrograph of a prototype, showing the metal layers of the anode (central electrode) and cathode contact (outer ring) patterned on a silicon wafer. (credit: Karolinska Institutet/Stanford University)

MIT engineers have developed a fuel cell that runs on glucose to power highly efficient brain implants of the future that could help paralyzed patients move their arms and legs again — batteries included.

The fuel cell strips electrons from glucose molecules to create a small electric current.

The researchers, led by Rahul Sarpeshkar, an associate professor of electrical engineering and computer science at MIT, fabricated the fuel cell on a silicon chip, allowing it to be integrated with other circuits that would be needed for a brain implant.

In the 1970s, scientists showed they could power a pacemaker with a glucose fuel cell, but the idea was abandoned in favor of lithium-ion batteries, which could provide significantly more power per unit area than glucose fuel cells.

These glucose fuel cells also used enzymes that proved to be impractical for long-term implantation in the body, since they eventually ceased to function efficiently.

How to generate hundreds of microwatts from sugar

A silicon wafer with glucose fuel cells of varying sizes; the largest is 64 by 64 mm. (credit: Sarpeshkar Lab)

The new fuel cell is fabricated from silicon, using the same technology used to make semiconductor electronic chips, with no biological components.

A platinum catalyst strips electrons from glucose, mimicking the activity of cellular enzymes that break down glucose to generate ATP, the cell’s energy currency. (Platinum has a proven record of long-term biocompatibility within the body.)

So far, the fuel cell can generate up to hundreds of microwatts — enough to power an ultra-low-power and clinically useful neural implant.

Benjamin Rapoport, a former graduate student in the Sarpeshkar lab and the first author on the new MIT study, calculated that in theory, the glucose fuel cell could get all the sugar it needs from the cerebrospinal fluid (CSF) that bathes the brain and protects it from banging into the skull.
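A back-of-the-envelope estimate in the same spirit is sketched below; every numeric value is an illustrative assumption rather than a figure from the paper. Given an assumed glucose flux reaching the anode, the two-electron oxidation typical of platinum-catalyzed glucose fuel cells fixes the current via Faraday's constant, and multiplying by an assumed cell voltage and electrode area gives the output power.

```python
# Back-of-the-envelope estimate, in the spirit of the calculation described
# above. All numeric values are illustrative assumptions, not the paper's.

FARADAY = 96485.0            # coulombs per mole of electrons
ELECTRONS_PER_GLUCOSE = 2    # Pt anodes typically oxidize glucose by 2 e- (to gluconic acid)

glucose_flux = 1e-9          # mol/(cm^2 s); assumed mass-transport-limited supply
cell_voltage = 0.3           # volts; assumed operating voltage of the cell
electrode_area = 1.0         # cm^2; assumed implant electrode area

current_density = glucose_flux * ELECTRONS_PER_GLUCOSE * FARADAY   # A/cm^2
power = current_density * cell_voltage * electrode_area            # watts

print(f"current density: {current_density*1e6:.0f} uA/cm^2")   # ~193 uA/cm^2
print(f"power: {power*1e6:.0f} uW")                             # ~58 uW for these assumptions
```

With these (assumed) numbers the output lands in the tens of microwatts, the same order of magnitude as the "hundreds of microwatts" the researchers report.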

There are very few cells in the CSF, so it’s highly unlikely that an implant located there would provoke an immune response, the researchers say.

Structure of the glucose fuel cell and the oxygen and glucose concentration gradients crucially associated with its cathode and anode half-cell reactions (credit: Benjamin I. Rapoport, Jakub T. Kedzierski, Rahul Sarpeshkar/PLoS One)

There is also a significant amount of glucose in the CSF that the body does not generally draw on. Since the fuel cell uses only a small fraction of the power available from this glucose, its impact on the brain’s function would likely be small.

Implantable medical devices

“It will be a few more years into the future before you see people with spinal-cord injuries receive such implantable systems in the context of standard medical care, but those are the sorts of devices you could envision powering from a glucose-based fuel cell,” says Rapoport.

Karim Oweiss, an associate professor of electrical engineering, computer science and neuroscience at Michigan State University, says the work is a good step toward developing implantable medical devices that don’t require external power sources.

“It’s a proof of concept that they can generate enough power to meet the requirements,” says Oweiss, adding that the next step will be to demonstrate that it can work in a living animal.

A team of researchers at Brown University, Massachusetts General Hospital and other institutions recently demonstrated that paralyzed patients could use a brain-machine interface to move a robotic arm; those implants have to be plugged into a wall outlet.

Ultra-low-power bioelectronics

Sarpeshkar’s group is a leader in the field of ultra-low-power electronics, having pioneered such designs for cochlear implants and brain implants. “The glucose fuel cell, when combined with such ultra-low-power electronics, can enable brain implants or other implants to be completely self-powered,” says Sarpeshkar, author of the book Ultra Low Power Bioelectronics.

The book discusses how the combination of ultra-low-power and energy-harvesting design can enable self-powered devices for medical, bio-inspired and portable applications.

Sarpeshkar’s group has worked on all aspects of implantable brain-machine interfaces and neural prosthetics, including recording from nerves, stimulating nerves, decoding nerve signals and communicating wirelessly with implants.

One such neural prosthetic is designed to record electrical activity from hundreds of neurons in the brain’s motor cortex, which is responsible for controlling movement. That data is amplified and converted into a digital signal so that computers — or in the Sarpeshkar team’s work, brain-implanted microchips — can analyze it and determine which patterns of brain activity produce movement.
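As a rough illustration of that decoding step (not the Sarpeshkar group's actual design; the weights and firing rates below are made up), a simple linear decoder maps a vector of per-channel firing rates to a two-dimensional movement command:

```python
import numpy as np

# Minimal sketch of neural decoding: firing rates from many motor-cortex
# channels are mapped to a 2-D movement command by a linear decoder.
# Weights would normally be fit from training data; random here.

rng = np.random.default_rng(0)

n_channels = 100                                                    # assumed electrode count
firing_rates = rng.poisson(lam=20, size=n_channels).astype(float)   # spikes/s (simulated)

weights = rng.normal(scale=0.01, size=(2, n_channels))   # maps rates -> (vx, vy)
baseline = firing_rates.mean()

velocity = weights @ (firing_rates - baseline)            # decoded 2-D velocity command
print(f"decoded velocity command: vx={velocity[0]:.3f}, vy={velocity[1]:.3f}")
```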

The fabrication of the glucose fuel cell was done in collaboration with Jakub Kedzierski at MIT’s Lincoln Laboratory. “This collaboration with Lincoln Lab helped make a long-term goal of mine — to create glucose-powered bioelectronics — a reality,” Sarpeshkar says.

Although he has begun working on bringing ultra-low-power and medical technology to market, he cautions that glucose-powered implantable medical devices are still many years away.

Ref.: Benjamin I. Rapoport, Jakub T. Kedzierski, Rahul Sarpeshkar, A Glucose Fuel Cell for Implantable Brain-Machine Interfaces, PLoS ONE, 2012, DOI: 10.1371/journal.pone.0038436 (open access)

June 1, 2012

Research shows cue-giving robots help students learn


It is well known that humans can teach robots; the newer development in educational circles is robots teaching humans. These upgraded robots are “animated” and “adaptive” agents that communicate effectively with humans by using subtle, human-like cues to engage their listeners. Two researchers from the University of Wisconsin-Madison have demonstrated that such robots can improve how much students remember from their lessons.

The researchers believe that “embodied agents hold great promise as educational assistants, exercise coaches, and team members in collaborative work. These roles require agents to closely monitor the behavioral, emotional, and mental states of their users and provide appropriate, effective responses.”

Bilge Mutlu and Dan Szafir of the Department of Computer Sciences at the University of Wisconsin-Madison developed a robotic teacher that could tell when students were losing focus and help them re-engage with the lesson. They programmed a Wakamaru humanoid robot to tell students a story and then tested the students to see how much of the story they retained. Engagement was tracked in real time with electroencephalography (EEG), which measures the brain activity associated with learning and concentration.

Human teachers have strategies for “reviving” students’ waning focus, such as changing tone of voice or gesturing. When a significant decrease in certain brain signals indicated that a student’s attention had fallen, the system sent a signal to the robot to trigger such human-like cues.
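One plausible way such a trigger could work is sketched below. The engagement index used here (beta power divided by alpha plus theta power) is a measure commonly used in EEG attention studies, but whether it matches the exact signal, threshold, and smoothing used by Szafir and Mutlu is an assumption.

```python
# Hedged sketch of an EEG-driven cue trigger; index, threshold, and window
# size are assumptions for illustration, not the study's actual parameters.

from collections import deque

WINDOW = 10          # number of recent index values to average (assumed)
THRESHOLD = 0.7      # smoothed index below this triggers a cue (assumed)

recent = deque(maxlen=WINDOW)

def engagement_index(alpha_power, beta_power, theta_power):
    """A common EEG engagement index: beta / (alpha + theta)."""
    return beta_power / (alpha_power + theta_power)

def update(alpha_power, beta_power, theta_power, send_cue):
    """Call once per EEG epoch; trigger a robot cue when engagement sags."""
    recent.append(engagement_index(alpha_power, beta_power, theta_power))
    smoothed = sum(recent) / len(recent)
    if len(recent) == WINDOW and smoothed < THRESHOLD:
        send_cue()           # e.g. tell the robot to gesture or raise its voice
        recent.clear()       # avoid re-triggering on the same dip

# Example: update(alpha_power=5.2, beta_power=3.1, theta_power=4.0,
#                 send_cue=lambda: print("cue!"))
```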

During the reading of a long Japanese folk tale, “My Lord Bag of Rice,” the robot used such cues: it raised its voice, pointed at itself or toward the listeners, and raised its arms to indicate a high mountain.

Students who were given cues by the robot when their attention was fading recalled the story better than the other groups. The cued group answered an average of nine out of 14 questions correctly; those who sat with a robot giving no human-like cues got 6.3 right — roughly a 43% improvement.

The researchers are pursuing a line of investigation considered important as education expands to incorporate digital learning, both within live classrooms and in online courses. “Virtual” teachers can be designed to interact with students in more human-like ways through such nonverbal cues, which may lead to a better learning experience and better results. That kind of focus has "significant implications for the field of education," according to Andrew Ng, director of Stanford University's Artificial Intelligence Lab. "The vision of automatically measuring student engagement so as to build a more interactive teacher is very exciting."

Earlier this month, Mutlu and Szafir presented a paper on the design of “adaptive agents” at the Conference on Human Factors in Computing Systems in Austin, Texas. Their paper was titled “Pay Attention! Designing Adaptive Agents that Monitor and Improve User Engagement.”

More information: “Pay Attention! Designing Adaptive Agents that Monitor and Improve User Engagement” (paper)

Abstract
Embodied agents hold great promise as educational assistants, exercise coaches, and team members in collaborative work. These roles require agents to closely monitor the behavioral, emotional, and mental states of their users and provide appropriate, effective responses. Educational agents, for example, will have to monitor student attention and seek to improve it when student engagement decreases. In this paper, we draw on techniques from brain-computer interfaces (BCI) and knowledge from educational psychology to design adaptive agents that monitor student attention in real time using measurements from electroencephalography (EEG) and recapture diminishing attention levels using verbal and nonverbal cues. An experimental evaluation of our approach showed that an adaptive robotic agent employing behavioral techniques to regain attention during drops in engagement improved student recall abilities 43% over the baseline regardless of student gender and significantly improved female motivation and rapport. Our findings offer guidelines for developing effective adaptive agents, particularly for educational settings.


[ New Scientist ]