Category Archives: Carnegie Mellon University

How card-playing A.I. beat top poker pros

An artificial intelligence called Libratus beat four top professional poker players in No-Limit Texas Hold’em by breaking the game into smaller, more manageable parts and adjusting its strategy as play progressed during the competition, researchers report.

In a new paper in Science, Tuomas Sandholm, professor of computer science at Carnegie Mellon University, and Noam Brown, a PhD student in the computer science department, detail how their AI achieved superhuman performance in a game with more decision points than atoms in the universe.

AI programs have defeated top humans in checkers, chess, and Go—all challenging games, but ones in which both players know the exact state of the game at all times. Poker players, by contrast, contend with hidden information: what cards their opponents hold and whether an opponent is bluffing.

Imperfect information

In a 20-day competition involving 120,000 hands this past January at Pittsburgh’s Rivers Casino, Libratus became the first AI to defeat top human players at Heads-Up, No-Limit Texas Hold’em—the primary benchmark and longstanding challenge problem for imperfect-information game-solving by AIs.

Libratus beat each of the players individually in the two-player game and collectively amassed more than $1.8 million in chips. Measured in milli-big blinds per hand (mbb/hand), a standard used by imperfect-information game AI researchers, Libratus decisively defeated the humans by 147 mbb/hand. In poker lingo, this is 14.7 big blinds per 100 hands.
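As a back-of-envelope check, the chip total and hand count above convert directly into this metric. The $100 big blind below is an assumption about the stakes, and the $1.8 million figure is rounded:

```python
# Convert total chip winnings to milli-big-blinds per hand (mbb/hand).
# Assumed: a $100 big blind; winnings and hand count are from the article.

def mbb_per_hand(total_winnings, big_blind, num_hands):
    """One mbb is a thousandth of a big blind."""
    return total_winnings * 1000 / (big_blind * num_hands)

print(mbb_per_hand(1_800_000, 100, 120_000))  # 150.0
```

The result, 150 mbb/hand, is in the same ballpark as the reported 147; the small gap comes from rounding the winnings to $1.8 million.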

“The techniques in Libratus do not use expert domain knowledge or human data and are not specific to poker,” Sandholm and Brown write in the paper. “Thus, they apply to a host of imperfect-information games.”

Such hidden information is ubiquitous in real-world strategic interactions, they note, including business negotiation, cybersecurity, finance, strategic pricing, and military applications.

Three modules

Libratus includes three main modules. The first computes an abstraction of the game that is smaller and easier to solve than the full game, which has 10^161 (a 1 followed by 161 zeroes) possible decision points. It then creates its own detailed strategy for the early rounds of Texas Hold’em and a coarse strategy for the later rounds. This strategy is called the blueprint strategy.

One example of these abstractions in poker is grouping similar hands together and treating them identically.

“Intuitively, there is little difference between a king-high flush and a queen-high flush,” Brown says. “Treating those hands as identical reduces the complexity of the game and, thus, makes it computationally easier.” In the same vein, similar bet sizes also can be grouped together.
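Both kinds of grouping can be sketched in a few lines. The labels and bet sizes below are invented for illustration; real abstraction algorithms compute these buckets from game theory, not hand-written rules:

```python
# Toy illustration of game abstraction: collapse strategically similar
# hands into one bucket, and snap bet sizes to a small abstract menu.
# The labels and sizes below are invented for illustration.

HAND_BUCKETS = {
    "king-high flush": "high flush",
    "queen-high flush": "high flush",  # treated identically to king-high
}

def hand_bucket(hand):
    return HAND_BUCKETS.get(hand, hand)

ABSTRACT_BETS = [100, 200, 400, 800]  # the only bet sizes the blueprint considers

def nearest_abstract_bet(bet):
    return min(ABSTRACT_BETS, key=lambda b: abs(b - bet))

print(hand_bucket("queen-high flush"))  # high flush
print(nearest_abstract_bet(350))        # 400
```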

In the final rounds of the game, however, a second module constructs a new, finer-grained abstraction based on the state of play. It also computes a strategy for this subgame in real-time that balances strategies across different subgames using the blueprint strategy for guidance—something that needs to be done to achieve safe subgame solving. During the January competition, Libratus performed this computation using the Pittsburgh Supercomputing Center’s Bridges computer.

When an opponent makes a move that is not in the abstraction, the module computes a solution to this subgame that includes the opponent’s move. Sandholm and Brown call this “nested subgame solving.” DeepStack, an AI created by the University of Alberta to play Heads-Up, No-Limit Texas Hold’em, also includes a similar algorithm, called continual re-solving. DeepStack has yet to be tested against top professional players, however.


The third module is designed to improve the blueprint strategy as competition proceeds. Typically, Sandholm says, AIs use machine learning to find mistakes in the opponent’s strategy and exploit them. But that also opens the AI to exploitation if the opponent shifts strategy. Instead, Libratus’ self-improver module analyzes opponents’ bet sizes to detect potential holes in Libratus’ blueprint strategy. Libratus then adds these missing decision branches, computes strategies for them, and adds them to the blueprint.

AI vs. AI

In addition to beating the human pros, researchers evaluated Libratus against the best prior poker AIs. These included Baby Tartanian8, a bot developed by Sandholm and Brown that won the 2016 Annual Computer Poker Competition held in conjunction with the Association for the Advancement of Artificial Intelligence Annual Conference.

Whereas Baby Tartanian8 beat the next two strongest AIs in the competition by 12 (plus/minus 10) mbb/hand and 24 (plus/minus 20) mbb/hand, Libratus bested Baby Tartanian8 by 63 (plus/minus 28) mbb/hand. DeepStack has not been tested against other AIs, the authors note.

“The techniques that we developed are largely domain independent and can thus be applied to other strategic imperfect-information interactions, including nonrecreational applications,” Sandholm and Brown conclude. “Due to the ubiquity of hidden information in real-world strategic interactions, we believe the paradigm introduced in Libratus will be critical to the future growth and widespread application of AI.”


The technology has been exclusively licensed to Strategic Machine Inc., a company Sandholm founded to apply strategic reasoning technologies to many different applications.

The National Science Foundation and the Army Research Office supported this research.

Source: Carnegie Mellon University

The post How card-playing A.I. beat top poker pros appeared first on Futurity.


How your brain singles out 1 sound among many

Researchers have developed a new way to find out how the brain singles out specific sounds in distracting settings, non-invasively mapping sustained auditory selective attention in the human brain.

The study lays crucial groundwork to track deficits in auditory attention due to aging, disease, or brain trauma and to create clinical interventions, like behavioral training, to potentially correct or prevent hearing issues.

“Deficits in auditory selective attention can happen for many reasons—concussion, stroke, autism or even healthy aging. They are also associated with social isolation, depression, cognitive dysfunction and lower work force participation. Now, we have a clearer understanding of the cognitive and neural mechanisms responsible for how the brain can select what to listen to,” says Lori Holt, professor of psychology in the Dietrich College of Humanities and Social Sciences and a faculty member of the Center for the Neural Basis of Cognition (CNBC) at Carnegie Mellon University.

The image above shows auditory cortical maps of sound frequency and attention. (Credit: Carnegie Mellon)

To determine how the brain listens for important information in different acoustic frequency ranges (similar to paying attention to the treble or bass in a music recording), eight adults listened to one series of short tone melodies while ignoring a second, distracting series, responding when they heard a melody repeat.

To understand how paying attention to the melodies changed brain activation, the researchers took advantage of a key way that sound information is laid out across the surface, or cortex, of the brain.

The cortex contains many “tonotopic” maps of auditory frequency, where each map represents frequency a little like an old radio display, with low frequencies on one end, going to high on the other. These maps are put together like pieces of a puzzle in the top part of the brain’s temporal lobes.
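As a toy picture of such a map, one can lay preferred frequencies along a strip of cortex. The logarithmic spacing below is an assumption for illustration, echoing the radio-dial analogy:

```python
# Toy tonotopic axis: positions along a cortical strip, each assigned a
# preferred frequency from low to high. Log spacing is assumed here.

def tonotopic_axis(f_low, f_high, n_positions):
    ratio = f_high / f_low
    return [f_low * ratio ** (i / (n_positions - 1)) for i in range(n_positions)]

for position, freq in enumerate(tonotopic_axis(200.0, 6400.0, 6)):
    print(position, round(freq))  # 0 200 ... 5 6400, doubling each step
```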

When people in the MRI scanner listened to the melodies at different frequencies, the parts of the maps tuned to these frequencies were activated. What was surprising was that just paying attention to these frequencies activated the brain in a very similar way—not only in a few core areas, but also over much of the cortex where sound information is known to arrive and be processed.

The researchers then used a new high-resolution brain imaging technique called multiparameter mapping to see how the activation from hearing, or just paying attention to, different frequencies related to another key brain feature: myelination. Myelin is the “electrical insulation” of the brain, and brain regions differ a lot in how much myelin insulation is wrapped around the parts of neurons that transmit information.

In comparing the frequency and myelin maps, the researchers found that they were very related in specific areas: if there was an increase in the amount of myelin across a small patch of cortex, there was also an increase in how strong a preference neurons had for particular frequencies.


“This was an exciting finding because it potentially revealed some shared ‘fault lines’ in the auditory brain,” says Frederic Dick, professor of auditory cognitive neuroscience at Birkbeck College and University College London.

“Like earth scientists who try to understand what combination of soil, water, and air conditions makes some land better for growing a certain crop, as neuroscientists we can start to understand how subtle differences in the brain’s functional and structural architecture might make some regions more ‘fertile ground’ for learning new information like language or music,” Dick says.

The researchers report their findings in the Journal of Neuroscience.

Carnegie Mellon’s Matt Lehet and Tim Keller, University College London’s Martina F. Callaghan, and Martin Sereno of San Diego State University also participated in the research.

Carnegie Mellon alumnus Jonathan Rothberg funded this work.

Source: Carnegie Mellon University


This method cuts toxic side effects of nanodrug chemo

Administering an FDA-approved nutrition source called Intralipid before nanodrug chemotherapy can reduce the amount of the toxic drugs that settle in the spleen, liver, and kidneys, researchers report.

“This methodology could have a major impact in the delivery of nanodrugs…”

Nanodrugs, drugs attached to tiny biocompatible particles, show great promise in treatment of a number of diseases, including cancer. Delivery of these drugs, however, is not very efficient—only about 0.7 percent of chemotherapy nanodrugs reach their target tumor cells. Cells, including those in the liver, spleen, and kidneys, absorb the remainder.

When the drugs build up in these organs, they cause toxicity and side effects that negatively affect a patient’s quality of life.

Chien Ho, a professor of biological sciences at Carnegie Mellon University, and his colleagues found that administering Intralipid temporarily blunts the reticuloendothelial system—a network of cells and tissues found throughout the body, including in the blood, lymph nodes, spleen, and liver, that plays an important role in the immune system.

Ho and colleagues tested their technique in a rat model of cancer using three FDA-approved chemotherapy nanodrugs, Abraxane, Marqibo, and Onivyde, and one experimental platinum-based anti-cancer nanodrug.

In the study, they administered Intralipid one hour before giving the animal a chemotherapy nanodrug. They found that their method reduced the amount of the drug that was found in the liver, spleen, and kidneys and reduced the drugs’ toxic side-effects. They also found that more of the drug was available to attack tumor cells. Additionally, the Intralipid treatment had no harmful impact on tumor growth or drug efficacy.


The researchers believe that they can apply their drug delivery methodology to a variety of nanodrugs without any modifications to the drugs.

“This methodology could have a major impact in the delivery of nanodrugs not only for patients undergoing chemotherapy for cancer treatment but also to those being treated with nanodrugs for other conditions,” says Ho.

The researchers report their findings in the journal Scientific Reports.

Source: Carnegie Mellon University


Updated EEG offers high-res view into brain

A new high-density EEG captures the brain’s neural activity at a higher spatial resolution than ever before, researchers report.

The next-generation brain-interface technology—the first non-invasive, high-resolution system of its kind—offers higher density and coverage than any existing system and has the potential to revolutionize future clinical and neuroscience research as well as brain-computer interfaces, scientists say.

To test the system, researchers had 16 participants view pattern-reversing black and white checkerboards while wearing the new “super-Nyquist density” EEG. They compared the results from all electrodes to results when using only a subset of the electrodes, which is an accepted standard for EEG density.

The results, which appear in Scientific Reports, show that the system captured more information from the visual cortex than any of the four standard versions tested.

“These results are crucial in showing that EEG has enormous potential for future research,” says lead author Amanda K. Robinson, a postdoctoral fellow in the psychology department at Carnegie Mellon University and the Center for the Neural Basis of Cognition at the time of the study.

“Ultimately, capturing more neural information with EEG means we can make better inferences about what is happening inside the brain. This has the potential to improve source detection, for example in localizing the source of seizures in epilepsy.”

To create the new tool, researchers modified an EEG head cap from a 128-electrode system, increasing its sensor density two- to threefold over occipitotemporal brain regions. They designed the experiments to use visual stimuli with low, medium, and high spatial frequency content.


They then used a visual paradigm designed to elicit neural responses with differing spatial frequencies in the brain and examined how the new EEG performed. The subtle patterns of neural activity uncovered by the new system were closely related to a model of primary visual cortex.

This “opens doors for utilizing higher-density EEG systems for clinical and neuroscientific applications,” says Pulkit Grover, assistant professor of electrical and computer engineering. “It also validates some of our fundamental information-theoretic studies in the past few years.”

Additional researchers from Carnegie Mellon and the University of Pittsburgh participated in the study. Early financial support to modify and test the new EEG came from Carnegie Mellon’s BrainHub initiative and ProSEED program. Instrumentation of the novel cap was in part funded by the SONIC center of the Semiconductor Research Corporation.

Source: Carnegie Mellon University


Mindfulness apps with acceptance training can reduce stress

Mindfulness meditation apps can reduce the body’s response to biological stress, new research suggests.

“…this study shows that it’s possible to learn skills that improve the way our bodies respond to stress…”

Acceptance, or learning how to be open and accepting of the way things are in each moment, is particularly important for impacting stress biology and benefitting from the training’s stress reduction effects, the researchers found.

The research offers the first scientific evidence that a brief mindfulness meditation mobile app that incorporates acceptance training reduces cortisol and systolic blood pressure in response to stress.

“We have known that mindfulness training programs can buffer stress, but we haven’t figured out how they work,” says David Creswell, an associate professor of psychology in Carnegie Mellon University’s Dietrich College of Humanities and Social Sciences.

“This study, led by Emily Lindsay in my lab, provides initial evidence that the acceptance training component is critical for driving the stress reduction benefits of mindfulness training programs,” Creswell says.

For the study, 144 stressed adults participated in one of three randomly assigned smartphone-based interventions: training in monitoring the present moment with acceptance, training in monitoring the present moment only, or active control training.

Each participant completed one 20-minute daily lesson for 14 days. Then, they were placed in a stressful situation while their cortisol levels and blood pressure were measured.


The results showed that the participants in the combined monitoring and acceptance program had reduced cortisol and systolic blood pressure reactivity. Their blood pressure responses were approximately 20 percent lower than those in the two interventions that did not include acceptance training. Their cortisol responses were also more than 50 percent lower.

“Not only were we able to show that acceptance is a critical part of mindfulness training, but we’ve demonstrated for the first time that a short, systematic smartphone mindfulness program helps to reduce the impact of stress on the body,” says Lindsay, who received her PhD in psychology and is now a postdoctoral research fellow at the University of Pittsburgh.

“We all experience stress in our lives, but this study shows that it’s possible to learn skills that improve the way our bodies respond to stress with as little as two weeks of dedicated practice. Rather than fighting to get rid of unpleasant feelings, welcoming and accepting these feelings during stressful moments is key,” Lindsay says.

Shinzen Young and 01 Expert Systems developed the app used for the study.


The researchers report their findings in the journal Psychoneuroendocrinology.

Additional researchers contributing to the work are from Penn State and Virginia Commonwealth University.

The Yoga Science Foundation, the Mind & Life Institute, and the American Psychological Association funded this research.

Source: Carnegie Mellon University


To spur innovation, teach A.I. to find analogies

A method for teaching artificial intelligence analogies through crowdsourcing could allow a computer to search data for comparisons between disparate problems and solutions, highlighting important—but potentially unrecognized—underlying similarities.

“Once you can search for analogies, you can really crank up the speed of innovation…”

The method could enable A.I. to search through databases of patents, inventions, and research papers, identifying ideas that can be repurposed to solve new problems or create new products.

As anyone who enjoyed watching TV’s MacGyver disarm a missile with a paperclip or staunch a sulfuric acid leak with a chocolate bar could tell you, analogies can provide critical insights and inspiration for problem-solving. Tapping huge databases of inventions could spur innovation, but doing so without the help of analogies is, well, like finding a needle in a haystack.

Computer scientists solved the analogy problem by combining crowdsourcing and a type of artificial intelligence known as deep learning. By observing how people found analogies, they obtained insights they used to train computer software to find even more analogies.

“After decades of attempts, this is the first time that anyone has gained traction computationally on the analogy problem at scale,” says Aniket Kittur, associate professor in Carnegie Mellon University’s Human-Computer Interaction Institute.

“Once you can search for analogies, you can really crank up the speed of innovation,” says Dafna Shahaf, a computer scientist at Hebrew University. “If you can accelerate the rate of innovation, that solves a lot of other problems downstream.”

The research team will present its findings in a paper at KDD 2017, the Conference on Knowledge Discovery and Data Mining, in Halifax, Nova Scotia.

Analogies have played a role in any number of discoveries. Italian microbiologist Salvador Luria conceived an experiment on bacterial mutation—which later earned him a Nobel Prize—while watching a slot machine. The Wright Brothers used insights about balance and weight acquired while building bicycles to help them achieve powered flight. A trick for removing a loose cork from a wine bottle inspired an Argentinian car mechanic to invent a device to ease difficult childbirths.

Finding analogies is not always easy, particularly for computers, which do not understand things on a deep semantic level like humans do.

Researchers have tried handcrafting data structures, but this approach is time consuming and expensive—not scalable for databases that can include 9 million US patents or 70 million scientific research papers. Others have tried inferring this structure from large amounts of text, but this approach identifies primarily surface similarities, not the deep understanding that is useful for problem-solving.

To pursue a new approach, Kittur, who has spent years studying crowdsourcing as a means of finding analogies, joined forces with Shahaf, who has specialized in computational analogies.


Along with Shahaf’s doctoral student Tom Hope and postdoctoral researcher Joel Chan, they devised a scheme in which crowd workers hired through Amazon Mechanical Turk would look for analogous products in the Quirky.com product innovation website. Based on the product descriptions, they would look for those that had similar purposes or employed similar mechanisms.

“We were able to look inside these people’s brains because we forced them to show their work,” Chan explains.

A description for a yogurt maker, for instance, might yield words such as “concentrate,” “food,” and “reduce,” associated with its purpose and words such as “liquid,” “pump,” and “heating” associated with its mechanism.

“In terms of analogies, this isn’t about yogurt, but about concentrating things,” he notes.

Based on these insights, the computer could learn to analyze additional product descriptions and identify its own analogies, many of which reflected similarities between seemingly disparate products, not simply surface similarities.
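The contrast between surface similarity and purpose-level similarity can be sketched in a few lines. The second product and all word lists here are invented for illustration:

```python
# Toy sketch: two products that look nothing alike on the surface can be
# near-identical in purpose. Word lists are invented for illustration.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

yogurt_maker = {
    "surface": ["yogurt", "milk", "liquid", "pump", "heating", "concentrate"],
    "purpose": ["concentrate", "food", "reduce"],
}
sap_evaporator = {  # hypothetical analogous product
    "surface": ["maple", "sap", "syrup", "pan", "boil", "concentrate"],
    "purpose": ["concentrate", "food", "reduce"],
}

print(jaccard(yogurt_maker["purpose"], sap_evaporator["purpose"]))  # 1.0
print(jaccard(yogurt_maker["surface"], sap_evaporator["surface"]))  # ~0.09
```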

When crowd workers subsequently used the analogies to suggest new products, these “distant” analogies yielded the most innovative ideas, Hope says.

The same approach could be used to tailor computer programs to find analogies in patent applications or scientific research papers.


The National Science Foundation supported this research, as did Bosch, Google, and Carnegie Mellon University’s Web 2020 initiative.

Source: Carnegie Mellon University


Catalyst clears 99 percent of BPA from water

Scientists have developed a method for removing more than 99 percent of bisphenol A (also known as BPA) from water quickly and cheaply.

BPA, a ubiquitous and dangerous chemical used in the manufacturing of many plastics, is found in water sources around the world.

…BPA can be found in products from DVDs and eyeglass lenses to cash register receipts—and people and wildlife are regularly exposed.

In a new paper, which appears in Green Chemistry, chemist Terrence J. Collins and his research team also compiled evidence of BPA’s presence in a multitude of products and water sources, as well as the chemical’s toxicity.

The research team builds a strong case for the need to effectively remediate BPA-contaminated water, especially industrial waste streams and landfill runoff, and they offer a simple solution.

BPA is a chemical used primarily in the production of polycarbonate plastic and epoxy resins. Its use is widespread—BPA can be found in products from DVDs and eyeglass lenses to cash register receipts—and people and wildlife are regularly exposed.

BPA is dangerous because it mimics estrogen, a naturally occurring hormone, and can affect the body’s endocrine system. Studies in fish, mammals, and human cells have shown that BPA adversely affects brain and nervous system development, growth, and metabolism, and the reproductive system.

Concerns over BPA’s health effects prompted manufacturers to begin making BPA-free products, such as baby bottles and water bottles, in 2010. However, many BPA replacements have similar toxicity to BPA itself.

“BPA replacements have often not been adequately tested despite the fact that testing is easy to do,” says Collins, a professor of green chemistry at Carnegie Mellon University. Collins says environmental health scientists and green chemists developed a methodology for identifying endocrine disruptors to the highest standards of contemporary science, called the Tiered Protocol for Endocrine Disruption (TiPED), which was published in Green Chemistry in 2013.

With more than 15 billion pounds of BPA being produced annually, BPA contamination and cleanup present a significant challenge.

“There is no escape from BPA—for any living creature,” Collins says. “The massive global use of BPA burdens an already overstrained water treatment infrastructure and most BPA water releases simply never reach a water treatment facility. Our approach has high potential to be a much better remediation strategy for BPA-contaminated waste streams.”

BPA-contaminated water such as industrial waste or landfill runoff may or may not be treated before being released into the environment or to wastewater treatment plants.

Collins’ team offers a simple, effective, and cheap cleanup solution. Their system involves a group of catalysts called TAML activators, small molecules that mimic oxidizing enzymes. When combined with hydrogen peroxide, TAML activators very effectively break down harmful chemicals in water.

In the paper, the researchers demonstrate the efficacy and safety of TAML activators in breaking down BPA. Adding TAMLs and hydrogen peroxide to water heavily contaminated with BPA resulted in a 99 percent reduction of BPA within 30 minutes at near neutral pH, which is the pH norm for wastewater treatment.


TAML treatment at this pH caused BPA to assemble into larger units called oligomers, which clump together and precipitate out of the water. According to Collins, the oligomers could be filtered out and disposed of at a water treatment facility.

Most importantly, extensive studies by Collins and his collaborators found the oligomers are themselves not harmful. The nature of the bonds that stick the BPA molecules together doesn’t allow the oligomers to revert to BPA.

To ensure the safety of the decontaminated water, including the oligomers, the researchers tested it with TiPED assays. They found the TAML-treated BPA water did not show estrogen activity or cause abnormalities in yeast and developing zebrafish embryos.

The researchers also tested the efficacy of TAML treatment on BPA-laden water at a pH of 11. At this higher pH, BPA was reduced by more than 99.9 percent within 15 minutes. Unlike treatment at the near-neutral pH of 8.5, the BPA molecules were destroyed outright, and no oligomers were detected.
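Assuming simple pseudo-first-order kinetics (an assumption; the paper's measured kinetics may differ), the two reported removal figures imply roughly a threefold rate difference:

```python
# Back-of-envelope rate constants from the two reported removal figures,
# assuming pseudo-first-order decay of BPA.
import math

def rate_constant(fraction_removed, minutes):
    return -math.log(1 - fraction_removed) / minutes

k_neutral = rate_constant(0.99, 30)   # 99% removed in 30 min at pH ~8.5
k_high = rate_constant(0.999, 15)     # 99.9% removed in 15 min at pH 11
print(round(k_neutral, 3))            # 0.154 per minute
print(round(k_high / k_neutral, 1))   # 3.0, i.e. roughly 3x faster at pH 11
```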

“Because TAML/hydrogen peroxide treatment eliminates BPA from water so easily at concentrations that are similar to a variety of waste streams including paper plant processing solutions and landfill leachate, assuming the lab studies transfer to the real world, we can now offer a new and simple procedure for reducing BPA exposures worldwide,” Collins says.

Additional authors of the study are from Carnegie Mellon; Oregon State University; and the University of Auckland.


Carnegie Mellon, the University of Auckland, the Alexander von Humboldt Foundation, Carnegie Mellon’s Steinbrenner Institute for Environmental Education and Research, the Heinz Endowments, and the National Science Foundation supported the research and the researchers.

Source: Carnegie Mellon University



Telescoping design would make awesome robots

Researchers have created a way to design telescoping structures that can twist and bend, which could allow the creation of robots that collapse themselves to make transport easier or stretch out to reach over large obstacles.

The researchers devised algorithms that can take a suggested shape that includes curves or twists and design a telescoping structure to match. They also created a design tool that enables even a novice to create complex, collapsible assemblies, outlined in a new paper on the research.

The design possibilities range from something as practical as a rapidly deployable shelter to fanciful creations, such as a telescoping lizard with legs, head, and tail that readily retract.

The researchers explored a number of designs in simulation, including shapes mimicking lizards and other animals. (Credit: Carnegie Mellon)

“Telescoping mechanisms are very useful for designing deployable structures,” says Keenan Crane, assistant professor of computer science at Carnegie Mellon University. “They can collapse down into really small volumes and, when you need them, are easily expanded.”


But most telescoping devices are similar to a pirate’s telescope—a set of straight, nested cylinders. In this study, Crane, along with Stelian Coros, assistant professor of robotics, and Christopher Yu, a doctoral student in computer science, set out to find out what kinds of telescoping shapes are possible and to develop computational methods for designing and fabricating those shapes.


They found that spherical, ring-shaped, and helical telescopes are possible. Once a designer selects the desired curve for a structure, their algorithms can devise a telescoping structure that can extend or contract without bumping into itself and that includes no wasted space between the nested pieces. They also devised connectors that would combine several such telescopes into a larger assembly.
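One of the simplest constraints such an algorithm must respect can be sketched directly: each nested segment must be smaller than the one around it by at least a wall thickness plus sliding clearance. All dimensions below are invented:

```python
# Minimal sketch of one telescoping constraint: nested segments shrink by
# wall thickness plus clearance so they slide without binding. Dimensions
# here are invented for illustration.

def nested_radii(outer_radius, wall, clearance, n_segments):
    radii = [outer_radius]
    for _ in range(n_segments - 1):
        nxt = radii[-1] - (wall + clearance)
        if nxt <= 0:
            raise ValueError("too many segments for this outer radius")
        radii.append(nxt)
    return radii

print(nested_radii(10.0, 1.0, 0.5, 4))  # [10.0, 8.5, 7.0, 5.5]
```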



Though the nested sections can have a variety of cross-sections, the researchers focused on those with circular cross-sections, just like the pirate’s spyglass. Once extended, they noted, the circular cross-sections make it possible for each of the curved segments to rotate, adding 3D twists to what otherwise would be 2D shapes.

One of the designs was a robotic arm and claw that could emerge from a compact cylinder and reach up and over obstacles.

The simulations also enabled the researchers to analyze how the telescoping devices might move if they were actuated.


“We found that characters with telescoping parts are capable of surprisingly organic movements,” Coros says.

The National Science Foundation supported this research. The researchers will present their findings at the SIGGRAPH Conference on Computer Graphics and Interactive Techniques.

Source: Carnegie Mellon University

The post Telescoping design would make awesome robots appeared first on Futurity.

Body feedback could make assisted walking easier

Researchers are using feedback from the human body to develop designs for exoskeletons and prosthetic limbs.

The approach, called human-in-the-loop optimization, reduces the energy needed to walk with exoskeleton assistance or prosthetic limbs.


“Existing exoskeleton devices, despite their potential, have not improved walking performance as much as we think they should,” says Steven Collins, a professor of mechanical engineering at Carnegie Mellon University.

“We’ve seen improvements related to computing, hardware, and sensors, but the biggest challenge has remained the human element—we just haven’t been able to guess how they will respond to new devices,” he says.

The algorithm that enables this optimization represents a step forward in biomechatronics. Combined with versatile emulator hardware, it automatically identifies the optimal assistance strategy for each individual.


During experiments, each user received a unique pattern of assistance from an exoskeleton worn on one ankle. The algorithm tested responses to 32 patterns over the course of an hour, making adjustments based on measurements of the user’s energy use with each pattern.
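A minimal sketch of such a measure-and-adjust loop, assuming a simple evolution-strategy optimizer: a hypothetical noisy quadratic stands in for the real metabolic measurement, and the parameter names (peak torque and peak timing) and all numbers are illustrative, not taken from the study. Eight generations of four candidates give the 32 tested patterns mentioned above.

```python
import random

random.seed(0)

def measure_energy(pattern):
    # Hypothetical stand-in for a metabolic-cost measurement: a noisy
    # quadratic minimized at peak_torque=0.6, peak_time=0.5.
    peak_torque, peak_time = pattern
    return ((peak_torque - 0.6) ** 2 + (peak_time - 0.5) ** 2
            + random.gauss(0.0, 0.01))

def optimize_assistance(generations=8, candidates_per_gen=4):
    mean, sigma = [0.5, 0.5], 0.2           # initial guess and search spread
    for _ in range(generations):
        batch = [[random.gauss(m, sigma) for m in mean]
                 for _ in range(candidates_per_gen)]
        batch.sort(key=measure_energy)      # "try" each pattern on the user
        elite = batch[:2]                   # keep the two cheapest patterns
        mean = [sum(xs) / 2 for xs in zip(*elite)]
        sigma *= 0.85                       # narrow the search over time
    return mean
```

The essential point is that the human stays inside the loop: every candidate assistance pattern is scored by an actual measurement of the wearer's energy use, not by a model of the wearer.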

The optimized assistance pattern produced larger benefits than any exoskeleton to date, including devices acting at all joints on both legs.

“When we walk, we naturally optimize coordination patterns for energy efficiency,” Collins says. “Human-in-the-loop optimization acts in a similar way to optimize the assistance provided by wearable devices.

“We are really excited about this approach because we think it will dramatically improve energy economy, speed, and balance for millions of people, especially those with disabilities,” Collins adds.


A paper describing the research appears in the journal Science.

Source: Carnegie Mellon University


Algorithms decode complex thoughts from brain scans

Scientists can now use brain activation patterns to identify complex thoughts like “The witness shouted during the trial.”

The research uses machine-learning algorithms and brain-imaging technology to “mind read.”

The findings indicate that the mind’s building blocks for constructing complex thoughts are formed by the brain’s various sub-systems and are not word-based. Published in Human Brain Mapping, the study offers new evidence that the neural dimensions of concept representation are universal across people and languages.

“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,’” says Marcel Just, professor of psychology in Carnegie Mellon University’s Dietrich College of Humanities and Social Sciences.

“We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”

Previous work by Just and his team showed that thoughts of familiar objects, like bananas or hammers, evoke activation patterns that involve the neural systems that we use to deal with those objects. For example, how you interact with a banana involves how you hold it, how you bite it, and what it looks like.

The new study demonstrates that the brain’s coding of 240 complex events (sentences like the witness-shouting scenario) uses an alphabet of 42 meaning components, or neurally plausible semantic features, such as person, setting, size, social interaction, and physical action. Each type of information is processed in a different brain system, which is also how the brain processes information about objects. By measuring the activation in each brain system, the program can tell what types of thoughts are being contemplated.

For seven adult participants, the researchers used a computational model to assess how the brain activation patterns for 239 sentences corresponded to the neurally plausible semantic features that characterized each sentence. The program was then able to decode the features of the 240th, left-out sentence. The researchers repeated this process, leaving out each of the 240 sentences in turn, in a procedure called leave-one-out cross-validation.
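The leave-one-out procedure can be sketched with synthetic data standing in for the fMRI recordings: fit a linear map from 239 activation patterns to their semantic features, then decode the held-out sentence. Only the dimensions (240 sentences, 42 features) follow the study; the data, voxel count, and least-squares decoder below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sentences, n_voxels, n_features = 240, 60, 42

# Synthetic stand-in for real data: activations are a linear function
# of 42 semantic features, plus measurement noise.
true_map = rng.normal(size=(n_features, n_voxels))
features = rng.normal(size=(n_sentences, n_features))
activations = features @ true_map + 0.1 * rng.normal(size=(n_sentences, n_voxels))

def decode_left_out(i):
    """Leave-one-out step: train on 239 sentences, decode sentence i."""
    train = np.delete(np.arange(n_sentences), i)
    # Least-squares map from activation patterns back to semantic features
    M, *_ = np.linalg.lstsq(activations[train], features[train], rcond=None)
    return activations[i] @ M

predicted = decode_left_out(0)
# Correlate predicted and true features of the held-out sentence
score = np.corrcoef(predicted, features[0])[0, 1]
```

Because the decoder never sees the held-out sentence during training, a high score indicates genuine generalization rather than memorization, which is the point of the cross-validation design.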


The model was able to predict the features of the left-out sentence with 87 percent accuracy, despite never having seen its activation pattern before. It was also able to work in the other direction, predicting the activation pattern of a previously unseen sentence from its semantic features alone.

“Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” Just says. “This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of.”

He adds, “A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding. We are on the way to making a map of all the types of knowledge in the brain.”

Funding for the work came from the Intelligence Advanced Research Projects Activity (IARPA).

Source: Carnegie Mellon University
