Category Archives: Carnegie Mellon University

3D-printed plastic folds itself into amazing shapes under heat

Researchers have taken advantage of a common defect of the least-expensive kind of 3D printer to produce flat plastic items that, when heated, fold themselves into predetermined shapes, such as a rose, a boat, and even a bunny.

The objects are a first step toward products such as flat-pack furniture that assume their final shapes with the help of a heat gun, says Lining Yao, assistant professor in the Human-Computer Interaction Institute and director of the Morphing Matter Lab at Carnegie Mellon University.

The technology could also lead to emergency shelters that ship flat and fold into shape under the warmth of the sun.

Self-folding materials are quicker and cheaper to produce than solid 3D objects, making it possible to replace noncritical parts or produce prototypes using structures that approximate the solid objects. The materials could be useful for creating molds for boat hulls and other fiberglass products inexpensively.

Other researchers have explored self-folding materials, but have typically used exotic materials or depended on sophisticated processing techniques not widely available.

Yao and colleagues created their self-folding structures by using the least expensive type of 3D printer—an FDM (fused deposition modeling) printer—and by taking advantage of warpage, a common problem with these machines.

“We wanted to see how self-assembly could be made more democratic—accessible to many users,” Yao says.

FDM printers work by laying down a continuous filament of melted thermoplastic. The deposited material contains residual stress; as it cools and the stress is relieved, the thermoplastic tends to contract. This can result in warped edges and surfaces.

“People hate warpage,” Yao says. “But we’ve taken this disadvantage and turned it to our advantage.”

To create self-folding objects, the researchers precisely control the process by varying the speed at which the printer deposits thermoplastic material and by combining warp-prone materials with rubber-like materials that resist contraction.
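
A toy calibration can make the speed-to-angle idea concrete. This is purely illustrative, with hypothetical numbers and an assumed linear model; the actual Thermorph software derives speeds and patterns from a curve-folding theory, and real calibrations depend on material and printer settings:

```python
# Illustrative only: assumes fold angle grows linearly with deposition
# speed between a calibrated minimum and maximum. Real calibrations
# depend on material, layer height, and nozzle temperature.

def speed_for_angle(target_deg, min_speed=10.0, max_speed=90.0, max_angle=180.0):
    """Return a print speed (mm/s) for a desired fold angle in degrees."""
    if not 0.0 <= target_deg <= max_angle:
        raise ValueError("fold angle outside calibrated range")
    frac = target_deg / max_angle          # fraction of maximum warp needed
    return min_speed + frac * (max_speed - min_speed)
```

A flat print would then assign a faster, higher-stress pass to each region that must fold sharply, and slow, low-stress passes to regions meant to stay flat.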


The objects emerge from the 3D printer as flat, hard plastic. When the plastic is placed in water hot enough to turn it soft and rubbery—but not hot enough to melt it—the folding process is triggered.

Though they used a 3D printer with standard hardware, the researchers replaced the machine’s open source software with their own code that automatically calculates the print speed and patterns necessary to achieve particular folding angles.

“The software is based on a new curve-folding theory representing bending motions of curved surfaces. Software based on this theory can compile an arbitrary 3D mesh shape into an associated thermoplastic sheet in a few seconds without human intervention,” says Byoungkwon An, a research affiliate in the HCII.

“It’s hard to imagine this being done manually,” Yao says.

Though these early examples are at a desktop scale, making larger self-folding objects appears feasible.

“We believe the general algorithm and existing material systems should enable us to eventually make large, strong self-folding objects, such as chairs, boats, or even satellites,” says Jianzhe Gu, an HCII research intern.

Yao will present her research, called Thermorph, at the Conference on Human Factors in Computing Systems (CHI 2018).


An, Gu, and Ye Tao are lead authors of the paper. Other coauthors are from Carnegie Mellon, Zhejiang University, Syracuse University, the University of Aizu, and TU Wien.

Source: Carnegie Mellon University

The post 3D-printed plastic folds itself into amazing shapes under heat appeared first on Futurity.


Paint transforms walls into interactive touchpads

With a few applications of conductive paint and some electronics, researchers can create walls that sense human touch, and detect things like gestures and when appliances are in use.

The researchers found that they could transform dumb walls into smart walls at relatively low cost—about $20 per square meter—using simple tools and techniques, such as a paint roller.

These new capabilities might enable users to place or move light switches or other controls anywhere on a wall that’s most convenient, or to control videogames by using gestures. By monitoring activity in the room, this system could adjust light levels when a TV is turned on or alert a user in another location when a laundry machine or electric kettle turns off.

Researchers at CMU and Disney Research used simple tools and techniques to transform dumb walls into smart ones. (Credit: Carnegie Mellon)

“Walls are usually the largest surface area in a room, yet we don’t make much use of them other than to separate spaces, and perhaps hold up pictures and shelves,” says Chris Harrison, assistant professor in Carnegie Mellon University’s Human-Computer Interaction Institute (HCII). “As the internet of things and ubiquitous computing become reality, it is tempting to think that walls can become active parts of our living and work environments.”

Yang Zhang, a PhD student in the HCII, will present a research paper on this sensing approach, called Wall++, at CHI 2018, the Conference on Human Factors in Computing Systems.

The researchers found that they could use conductive paint to create electrodes across the surface of a wall, enabling it to act both as a touchpad that tracks users’ touch and as an electromagnetic sensor that detects and tracks electrical devices and appliances.

“Walls are large, so we knew that whatever technique we invented for smart walls would have to be low cost,” Zhang says. He and his colleagues thus dispensed with expensive paints, such as those containing silver, and picked a water-based paint containing nickel.

They also wanted to make it easy to apply the special coating with simple tools and without special skills. Using painter’s tape, they found they could create a cross-hatched pattern on a wall to create a grid of diamonds, which testing showed was the most effective electrode pattern. After applying two coats of conductive paint with a roller, they removed the tape and connected the electrodes. They then finished the wall with a top coat of standard latex paint to improve durability and hide the electrodes.


The electrode wall can operate in two modes—capacitive sensing and electromagnetic (EM) sensing. In capacitive sensing, the wall functions like any other capacitive touchpad: when a person touches the wall, the touch distorts the wall’s electrostatic field at that point. In EM sensing mode, the electrode can detect the distinctive electromagnetic signatures of electrical or electronic devices, enabling the system to identify the devices and their locations.
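
The touch-sensing mode can be sketched as a grid scan: compare each electrode crossing’s reading against a no-touch baseline and report the crossing with the largest drop. This is a hypothetical illustration of capacitive grid sensing in general, not the Wall++ implementation:

```python
# Hypothetical sketch of capacitive grid scanning (not the Wall++ code).
# Rows and columns of painted electrodes form a grid of crossings; a
# touch lowers the measured capacitance at the crossing nearest the finger.

def locate_touch(baseline, reading, threshold=5.0):
    """Return (row, col) of the strongest touch, or None if no touch.

    baseline, reading: 2D lists of capacitance values (arbitrary units),
    one value per electrode crossing.
    """
    best, best_delta = None, threshold
    for r, (base_row, read_row) in enumerate(zip(baseline, reading)):
        for c, (b, v) in enumerate(zip(base_row, read_row)):
            delta = b - v  # a touch reduces mutual capacitance
            if delta > best_delta:
                best, best_delta = (r, c), delta
    return best
```

Repeating the scan many times per second yields a touch track, much like a laptop touchpad.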

Similarly, if a person is wearing a device that emits an EM signature, the system can track the location of that person, Zhang says.

Wall++ hasn’t been optimized for energy consumption, Zhang says, but he estimates the wall-sized electrodes consume about as much power as a standard touch screen.

Additional researchers contributing to the work are from Carnegie Mellon University and Disney Research.

Source: Carnegie Mellon University


New speech in French Revolution paved way for change

Different ways of speaking may have played a significant role in winning acceptance for the new principles of governance during the French Revolution, a new study suggests.

The French Revolution was one of the most important political transformations in history. Even more than 200 years later, it is held up as a model of democratic nation building.

For years, historians and political scientists have wondered how the democratic trailblazers of the French Revolution managed to pull off the creation of an entirely new kind of governance.


Researchers, including Simon DeDeo from Carnegie Mellon University, used machine learning techniques to comb through transcripts of nearly 40,000 speeches from the deliberations of the makeshift assembly formed during the revolution’s early days to hash out the laws and institutions of the new government.

The researchers analyzed speech patterns to determine how novel they were and whether they persisted or disappeared over time. They also categorized speech patterns by political affiliation and context—whether the speech occurred during the assembly’s public deliberations or in a committee held behind closed doors.
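
As a rough illustration of scoring novelty and persistence, one can ask what fraction of a speech’s word patterns are absent from earlier speeches. The study itself used more sophisticated information-theoretic measures over speech content; this bigram version is only a toy proxy:

```python
def novel_fraction(speech, earlier):
    """Fraction of the speech's distinct bigrams unseen in earlier text.

    speech, earlier: lists of word tokens. Applying the same function
    against *later* text measures persistence: patterns that recur
    downstream after being introduced.
    """
    bigrams = set(zip(speech, speech[1:]))
    seen = set(zip(earlier, earlier[1:]))
    return len(bigrams - seen) / len(bigrams) if bigrams else 0.0
```

A speaker whose phrases score high on novelty against the past and low on novelty against the future is, in this toy sense, an innovator whose language stuck.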

In general, assembly members who broke from convention and made their case in new ways were more effective in getting their proposals adopted. For example, revolutionary leader Maximilien Robespierre used new turns of phrase to communicate fresh ideas that then became new principles of the nascent government.

But not every new idea was well accepted. More conservative members of the assembly, who tended to use more traditional language, may not have been as influential as their more inventive counterparts on the left, but they often played an important role in keeping debates focused and infusing them with a dose of practicality.

“You see crucial players on the left wing who are sources of new ideas and new patterns of speaking,” says DeDeo, assistant professor of social and decision sciences in Carnegie Mellon University’s Dietrich College of Humanities and Social Sciences, who also holds an appointment at the Santa Fe Institute. “When they introduce these patterns they stick around. These are people who are bringing new ideas to the table, ideas that persist downstream.”

On how conservatives played a different role, DeDeo says, “They tend not to introduce new things—they’re following the course of the discussion, keeping the conversation on track. So you can see Robespierre introduce something—human rights, say—and the right doesn’t dismiss it, it discusses: ‘Let’s pause here and take that up.’”

Rebecca Spang, professor of history at Indiana University, notes how “a lot of the novelty doesn’t stick.”

“There are a lot of new turns of phrase that people were offering in the political lobby and the audience didn’t go for it. On the other hand, there were other things that did stick. And that’s what we call the revolution,” Spang says.

An unexpected insight from the analysis was that some of the most important work of the revolution was done in the committees, which were formed to work out particularly difficult issues and then present a recommendation to the full assembly. The small group dynamic allowed assembly members who may not have been powerful orators to exert influence.

“One thing we didn’t realize until doing this study was how many lesser-known revolutionaries were also working assiduously in the assembly, doing it through the assembly’s committees,” says Spang.


DeDeo believes the analysis shows how important individual members of a government body can be in shaping policy and that the lessons apply to today’s lawmakers as well.

“Some of the lessons we get out of it is that individuals do matter,” he says. “So you might say, who’s the Robespierre of Congress in 2018? And who’s keeping the conversation on track? Today, you might find that it’s the right wing introducing new things, with the left acting as a brake on what the right wants to do.”

The group will release the software it developed for the project so that other researchers can use it to conduct analyses of other government bodies around the world.

The researchers report their findings in the Proceedings of the National Academy of Sciences.

Source: Carnegie Mellon University


Software makes knitting machines more like 3D printers

A new software system can translate a wide variety of 3D shapes into stitch-by-stitch instructions that allow a computer-controlled knitting machine to automatically produce those shapes.


The ability to generate knitting instructions without human expertise could make on-demand machine knitting possible, computer scientists say.

The developers’ vision is to use the same machines that routinely crank out thousands of knitted hats, gloves, and other apparel to produce customized pieces one at a time or in small quantities. Gloves, for instance, might be designed to precisely fit a customer’s hands. Athletic shoe uppers, sweaters, and hats might have unique color patterns or ornamentation.

“Knitting machines could become as easy to use as 3D printers,” says James McCann, assistant professor in the Robotics Institute at Carnegie Mellon University and leader of its Textiles Lab.

That’s in stark contrast to the world of knitting today.

“Now, if you run a floor of knitting machines, you also have a department of engineers,” says McCann, who notes that garment designers rarely have the specialized expertise necessary to program the machines. “It’s not a sustainable way of doing one-off customized pieces.”

McCann and colleagues developed a method for transforming 3D meshes—a common method for modeling 3D shapes—into instructions for V-bed knitting machines.

The widely used machines manipulate loops of yarn with hook-shaped needles, which lie in parallel needle beds angled toward each other in an inverted V shape. The machines are highly capable, but are limited in comparison with hand knitting, says Vidya Narayanan, a computer science PhD student.

The new algorithm takes these constraints into account, producing instructions for patterns that work within the limits of the machine and reduce the risk of yarn breaks or jams.

A front-end design system such as this is common in 3D printing and in computer-driven machine shops, but not in the knitting world, McCann says.

Likewise, 3D printing and machine shops use common languages and file formats to run their equipment, while each brand of knitting machine uses its own languages and tools.

McCann led an earlier effort to create a common knitting format, called Knitout, which can work with any brand of knitting machine.
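
A compiler targeting a format like Knitout ultimately emits a sequence of low-level machine operations. The sketch below emits simplified knitout-style lines for plain flat fabric; the real format includes headers, yarn-carrier management, transfers, and shaping (see the Knitout specification), so treat the syntax here as schematic:

```python
def stockinette(rows, width, carrier="1"):
    """Emit simplified knitout-style lines for a flat rectangle of fabric.

    Schematic only: real Knitout files carry headers and machine-specific
    extensions. Front-bed needles are f0..f(width-1); the carriage
    alternates direction each row, as on a V-bed machine.
    """
    ops = []
    for row in range(rows):
        if row % 2 == 0:  # left-to-right pass on the front bed
            needles, direction = range(width), "+"
        else:             # return pass, right-to-left
            needles, direction = range(width - 1, -1, -1), "-"
        for n in needles:
            ops.append(f"knit {direction} f{n} {carrier}")
    return ops
```

A mesh-to-knitting compiler does far more than this, of course: it must schedule loops onto needles so the growing fabric never violates the machine’s transfer and collision constraints.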


Further work is needed to make on-demand knitting a reality. For instance, the system now only produces smooth knitted cloth, without the patterned stitching that can make knitted garments distinctive. The knitting ecosystem also needs to be expanded, with design tools that will work with any machine. But progress could speed up at this point, McCann says.

“The knitting hardware is already really good. It’s the software that needs a little push. And software can improve rapidly because we can iterate so much faster.”

McCann and colleagues will present the work this summer at SIGGRAPH 2018, the Conference on Computer Graphics and Interactive Techniques in Vancouver, Canada. Additional collaborators are from Carnegie Mellon and ETH Zurich.

Source: Carnegie Mellon University


Algorithms decode complex thoughts from brain scans

Scientists can now use brain activation patterns to identify complex thoughts like “The witness shouted during the trial.”

The research uses machine-learning algorithms and brain-imaging technology to “mind read.”

The findings indicate that the mind’s building blocks for constructing complex thoughts are formed by the brain’s various sub-systems and are not word-based. Published in Human Brain Mapping, the study offers new evidence that the neural dimensions of concept representation are universal across people and languages.

“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,’” says Marcel Just, professor of psychology in Carnegie Mellon University’s Dietrich College of Humanities and Social Sciences.

“We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”

Previous work by Just and his team showed that thoughts of familiar objects, like bananas or hammers, evoke activation patterns that involve the neural systems that we use to deal with those objects. For example, how you interact with a banana involves how you hold it, how you bite it, and what it looks like.

The new study demonstrates that the brain’s coding of 240 complex events (sentences like the shouting-during-the-trial scenario) uses an alphabet of 42 meaning components, or neurally plausible semantic features, such as person, setting, size, social interaction, and physical action. Each type of information is processed in a different brain system, which is also how the brain processes information for objects. By measuring the activation in each brain system, the program can tell what types of thoughts are being contemplated.

For seven adult participants, the researchers used a computational model to assess how the brain activation patterns for 239 of the sentences corresponded to the neurally plausible semantic features that characterized each sentence. The program was then able to decode the features of the 240th, left-out sentence. The researchers repeated this, leaving out each of the 240 sentences in turn, in what is called leave-one-out cross-validation.
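
The leave-one-out logic can be illustrated with a toy decoder. Here a nearest-neighbor rule stands in for the study’s regression-style computational model, and the tiny made-up dataset stands in for fMRI data:

```python
# Toy leave-one-out cross-validation (not the study's model): each
# "sentence" has an activation vector and a semantic-feature label; we
# decode the held-out sentence from its nearest neighbor among the rest.

def loo_accuracy(activations, features):
    """Fraction of sentences whose held-out features are decoded correctly."""
    correct = 0
    for i in range(len(activations)):
        best_j, best_d = None, float("inf")
        for j in range(len(activations)):
            if j == i:
                continue  # hold out sentence i entirely
            d = sum((a - b) ** 2 for a, b in zip(activations[i], activations[j]))
            if d < best_d:
                best_j, best_d = j, d
        if features[best_j] == features[i]:
            correct += 1
    return correct / len(activations)
```

The key property, mirrored in the study, is that the decoded item contributes nothing to the model that decodes it.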


The model was able to predict the features of the left-out sentence with 87 percent accuracy, despite never having been exposed to its activation pattern before. It was also able to work in the other direction: to predict the activation pattern of a previously unseen sentence, knowing only its semantic features.

“Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” Just says. “This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of.”

He adds, “A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding. We are on the way to making a map of all the types of knowledge in the brain.”

Funding for the work came from the Intelligence Advanced Research Projects Activity (IARPA).

Source: Carnegie Mellon University


Body feedback could make assisted walking easier

Researchers are using feedback from the human body to develop designs for exoskeletons and prosthetic limbs.

The work, called human-in-the-loop optimization, lessens the amount of energy needed for walking with exoskeleton assistance or prosthetic limbs.


“Existing exoskeleton devices, despite their potential, have not improved walking performance as much as we think they should,” says Steven Collins, a professor of mechanical engineering at Carnegie Mellon University.

“We’ve seen improvements related to computing, hardware, and sensors, but the biggest challenge has remained the human element—we just haven’t been able to guess how they will respond to new devices,” he says.

The algorithm that enables this optimization represents a step forward for the field of biomechatronics. Combined with versatile emulator hardware, the software automatically identifies optimal assistance strategies for individual users.


During experiments, each user received a unique pattern of assistance from an exoskeleton worn on one ankle. The algorithm tested responses to 32 patterns over the course of an hour, making adjustments based on measurements of the user’s energy use with each pattern.
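
The loop can be sketched as propose, measure, keep. This is a deliberately simplified random-perturbation version of human-in-the-loop optimization; the actual study used a more sophisticated evolutionary strategy, and `measure_cost` here stands in for an hour of real metabolic measurements:

```python
import random

def optimize_assistance(measure_cost, init, step=0.1, trials=32, seed=0):
    """Toy human-in-the-loop optimizer (illustrative, not the CMU method).

    measure_cost: callable taking a list of assistance parameters (e.g.
    peak torque, timing) and returning the wearer's measured energy cost.
    Proposes a perturbed pattern each trial and keeps it if cost drops.
    """
    rng = random.Random(seed)
    best, best_cost = list(init), measure_cost(init)
    for _ in range(trials):
        cand = [p + rng.uniform(-step, step) for p in best]
        cost = measure_cost(cand)
        if cost < best_cost:  # keep only improvements
            best, best_cost = cand, cost
    return best, best_cost
```

Because the wearer adapts to each pattern while it is being measured, the loop optimizes the combined human-plus-device system rather than the hardware alone.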

The optimized assistance pattern produced larger benefits than any exoskeleton to date, including devices acting at all joints on both legs.

“When we walk, we naturally optimize coordination patterns for energy efficiency,” Collins says. “Human-in-the-loop optimization acts in a similar way to optimize the assistance provided by wearable devices.

“We are really excited about this approach because we think it will dramatically improve energy economy, speed, and balance for millions of people, especially those with disabilities,” Collins adds.


A paper describing the research appears in the journal Science.

Source: Carnegie Mellon University



Telescoping design would make awesome robots

Researchers have created a way to design telescoping structures that can twist and bend, which could allow the creation of robots that collapse themselves to make transport easier or stretch out to reach over large obstacles.

The researchers devised algorithms that can take a suggested shape that includes curves or twists and design a telescoping structure to match. They also created a design tool, described in a new paper, that enables even a novice to create complex, collapsible assemblies.

The design possibilities range from something as practical as a rapidly deployable shelter to fanciful creations, such as a telescoping lizard with legs, head, and tail that readily retract.

The researchers explored a number of designs in simulation, including shapes mimicking lizards and other animals. (Credit: Carnegie Mellon)

“Telescoping mechanisms are very useful for designing deployable structures,” says Keenan Crane, assistant professor of computer science at Carnegie Mellon University. “They can collapse down into really small volumes and, when you need them, are easily expanded.”


But most telescoping devices are similar to a pirate’s telescope—a set of straight, nested cylinders. In this study, Crane, along with Stelian Coros, assistant professor of robotics, and Christopher Yu, a doctoral student in computer science, set out to find out what kinds of telescoping shapes are possible and to develop computational methods for designing and fabricating those shapes.


They found that spherical, ring-shaped, and helical telescopes are possible. Once a designer selects the desired curve for a structure, their algorithms can devise a telescoping structure that can extend or contract without bumping into itself and that includes no wasted space between the nested pieces. They also devised connectors that would combine several such telescopes into a larger assembly.
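
For the classic straight case with circular cross-sections, the no-wasted-space constraint reduces to choosing radii that step down by exactly one wall thickness plus a sliding clearance. A minimal sketch, assuming uniform walls (the paper’s algorithms handle the much harder curved and twisting cases):

```python
def nesting_radii(outer_radius, wall=0.2, clearance=0.05, segments=5):
    """Radii for nested circular shells that telescope without interference.

    Illustrative only: assumes uniform wall thickness and a fixed sliding
    clearance between adjacent shells, with no gap left over when collapsed.
    """
    radii = [outer_radius]
    for _ in range(segments - 1):
        nxt = radii[-1] - wall - clearance
        if nxt <= 0:
            raise ValueError("too many segments for this outer radius")
        radii.append(nxt)
    return radii
```

The design problem the researchers solve is the generalization of this check to shells that sweep along curves and twist, where “does not bump into itself” is no longer a one-line subtraction.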



Though the nested sections can have a variety of cross-sections, the researchers focused on circular cross-sections, just like the pirate’s spyglass. Once extended, they noted, the circular cross-sections make it possible for each of the curved segments to rotate, adding 3D twists to what otherwise would be 2D shapes.

One design was a robotic arm and claw that could emerge from a compact cylinder and reach up and over obstacles.

The simulations also enabled the researchers to analyze how the telescoping devices might move if they were actuated.


“We found that characters with telescoping parts are capable of surprisingly organic movements,” Coros says.

The National Science Foundation supported this research. The researchers will present their findings at the SIGGRAPH Conference on Computer Graphics and Interactive Techniques.

Source: Carnegie Mellon University


Catalyst clears 99 percent of BPA from water

Scientists have developed a method for removing more than 99 percent of bisphenol A (also known as BPA) from water quickly and cheaply.

BPA, a ubiquitous and dangerous chemical used in the manufacturing of many plastics, is found in water sources around the world.


In a new paper, which appears in Green Chemistry, chemist Terrence J. Collins and his research team also compiled evidence of BPA’s presence in a multitude of products and water sources, as well as the chemical’s toxicity.

The research team builds a strong case for the need to effectively remediate BPA-contaminated water, especially industrial waste streams and landfill runoff, and they offer a simple solution.

BPA is a chemical used primarily in the production of polycarbonate plastic and epoxy resins. Its use is widespread—BPA can be found in products from DVDs and eyeglass lenses to cash register receipts—and people and wildlife are regularly exposed.

BPA is dangerous because it mimics estrogen, a naturally occurring hormone, and can affect the body’s endocrine system. Studies in fish, mammals, and human cells have shown that BPA adversely affects brain and nervous system development, growth, and metabolism, and the reproductive system.

Concerns over BPA’s health effects prompted manufacturers to begin making BPA-free products, such as baby bottles and water bottles, in 2010. However, many BPA replacements have toxicity similar to that of BPA itself.

“BPA replacements have often not been adequately tested, despite the fact that testing is easy to do,” says Collins, a professor of green chemistry at Carnegie Mellon University. Collins says environmental health scientists and green chemists developed a methodology, called the Tiered Protocol for Endocrine Disruption (TiPED) and published in Green Chemistry in 2013, for identifying endocrine disruptors to the highest standards of contemporary science.

With more than 15 billion pounds of BPA being produced annually, BPA contamination and cleanup present a significant challenge.

“There is no escape from BPA—for any living creature,” Collins says. “The massive global use of BPA burdens an already overstrained water treatment infrastructure and most BPA water releases simply never reach a water treatment facility. Our approach has high potential to be a much better remediation strategy for BPA-contaminated waste streams.”

BPA-contaminated water such as industrial waste or landfill runoff may or may not be treated before being released into the environment or to wastewater treatment plants.

Collins’ team offers a simple, effective, and cheap cleanup solution. Their system involves a group of catalysts called TAML activators, small molecules that mimic oxidizing enzymes. When combined with hydrogen peroxide, TAML activators very effectively break down harmful chemicals in water.

In the paper, the researchers demonstrate the efficacy and safety of TAML activators in breaking down BPA. Adding TAMLs and hydrogen peroxide to water heavily contaminated with BPA resulted in a 99 percent reduction of BPA within 30 minutes at near neutral pH, which is the pH norm for wastewater treatment.
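
As a back-of-the-envelope check on how fast that is: if removal followed simple first-order kinetics (an assumption made here purely for illustration; the catalytic TAML mechanism is more complex), a 99 percent reduction in 30 minutes implies a half-life of roughly 4.5 minutes:

```python
import math

# 99% removed means 1% remains: exp(-k * 30) = 0.01, so k = ln(100) / 30.
k = math.log(100) / 30          # effective rate constant, per minute
half_life = math.log(2) / k     # minutes for the BPA level to halve (~4.5)
```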


TAML treatment at this pH caused BPA to assemble into larger units called oligomers, which clump together and precipitate out of the water. According to Collins, the oligomers could be filtered and disposed of in a BPA water treatment facility.

Most importantly, extensive studies by Collins and his collaborators found the oligomers are themselves not harmful. The nature of the bonds that stick the BPA molecules together doesn’t allow the oligomers to revert to BPA.

To ensure the safety of the decontaminated water, including the oligomers, the researchers tested it with TiPED assays. They found the TAML-treated BPA water did not show estrogen activity or cause abnormalities in yeast and developing zebrafish embryos.

The researchers also tested the efficacy of TAML treatment on BPA-laden water at a pH of 11. At this higher pH, there was a greater than 99.9 percent reduction in BPA within 15 minutes. In contrast with the near-neutral (pH 8.5) treatment, the BPA molecules were destroyed outright, and no oligomers were detected.

“Because TAML/hydrogen peroxide treatment eliminates BPA from water so easily at concentrations that are similar to a variety of waste streams including paper plant processing solutions and landfill leachate, assuming the lab studies transfer to the real world, we can now offer a new and simple procedure for reducing BPA exposures worldwide,” Collins says.

Additional authors of the study are from Carnegie Mellon, Oregon State University, and the University of Auckland.


Carnegie Mellon, the University of Auckland, the Alexander von Humboldt Foundation, Carnegie Mellon’s Steinbrenner Institute for Environmental Education and Research, the Heinz Endowments, and the National Science Foundation supported the research and the researchers.

Source: Carnegie Mellon University


To spur innovation, teach A.I. to find analogies

A method for teaching artificial intelligence analogies through crowdsourcing could allow a computer to search data for comparisons between disparate problems and solutions, highlighting important—but potentially unrecognized—underlying similarities.


The method could enable A.I. to search through databases of patents, inventions, and research papers, identifying ideas that can be repurposed to solve new problems or create new products.

As anyone who enjoyed watching TV’s MacGyver disarm a missile with a paperclip or staunch a sulfuric acid leak with a chocolate bar could tell you, analogies can provide critical insights and inspiration for problem-solving. Tapping huge databases of inventions could spur innovation, but doing so without the help of analogies is, well, like finding a needle in a haystack.

Computer scientists solved the analogy problem by combining crowdsourcing and a type of artificial intelligence known as deep learning. By observing how people found analogies, they obtained insights they used to train computer software to find even more analogies.

“After decades of attempts, this is the first time that anyone has gained traction computationally on the analogy problem at scale,” says Aniket Kittur, associate professor in Carnegie Mellon University’s Human-Computer Interaction Institute.

“Once you can search for analogies, you can really crank up the speed of innovation,” says Dafna Shahaf, a computer scientist at Hebrew University. “If you can accelerate the rate of innovation, that solves a lot of other problems downstream.”

The research team will present its findings in a paper at KDD 2017, the Conference on Knowledge Discovery and Data Mining, in Halifax, Nova Scotia.

Analogies have played a role in any number of discoveries. Italian microbiologist Salvador Luria conceived an experiment on bacterial mutation—which later earned him a Nobel Prize—while watching a slot machine. The Wright Brothers used insights about balance and weight acquired while building bicycles to help them achieve powered flight. A trick for removing a loose cork from a wine bottle inspired an Argentinian car mechanic to invent a device to ease difficult childbirths.

Finding analogies is not always easy, particularly for computers, which do not understand things on a deep semantic level like humans do.

Researchers have tried handcrafting data structures, but this approach is time consuming and expensive—not scalable for databases that can include 9 million US patents or 70 million scientific research papers. Others have tried inferring this structure from large amounts of text, but this approach identifies primarily surface similarities, not the deep understanding that is useful for problem-solving.

To pursue a new approach, Kittur, who has spent years studying crowdsourcing as a means of finding analogies, joined forces with Shahaf, who has specialized in computational analogies.

Kittur and Shahaf, along with Shahaf’s doctoral student Tom Hope and postdoctoral researcher Joel Chan, devised a scheme in which crowd workers hired through Amazon Mechanical Turk looked for analogous products on the Quirky.com product innovation website. Based on the product descriptions, the workers searched for products that had similar purposes or employed similar mechanisms.

“We were able to look inside these people’s brains because we forced them to show their work,” Chan explains.

A description for a yogurt maker, for instance, might yield words such as “concentrate,” “food,” and “reduce,” associated with its purpose and words such as “liquid,” “pump,” and “heating” associated with its mechanism.

“In terms of analogies, this isn’t about yogurt, but about concentrating things,” he notes.

Based on these insights, the computer could learn to analyze additional product descriptions and identify its own analogies, many of which reflected similarities between seemingly disparate products, not simply surface similarities.
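The idea of scoring analogies by shared purpose rather than shared mechanism can be illustrated with a minimal sketch. This is not the authors' actual deep-learning model; it is a toy comparison using word-set overlap, and the product data (a hypothetical dehydrator alongside the yogurt maker from the article) is invented for illustration:

```python
# Illustrative sketch, NOT the published model: represent each product by
# crowd-annotated "purpose" and "mechanism" word sets, then favor pairs
# that share a purpose but differ in mechanism -- the "distant" analogies
# the article says yielded the most innovative ideas.

def jaccard(a, b):
    """Overlap between two word sets (0 = disjoint, 1 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def analogy_score(p1, p2):
    """Return (purpose similarity, distant-analogy score).

    The distant-analogy score is high only when purposes overlap
    while mechanisms do not.
    """
    purpose_sim = jaccard(p1["purpose"], p2["purpose"])
    mechanism_sim = jaccard(p1["mechanism"], p2["mechanism"])
    return purpose_sim, purpose_sim * (1.0 - mechanism_sim)

# Hypothetical annotations in the spirit of the yogurt-maker example.
yogurt_maker = {"purpose": {"concentrate", "food", "reduce"},
                "mechanism": {"liquid", "pump", "heating"}}
dehydrator = {"purpose": {"concentrate", "food", "preserve"},
              "mechanism": {"air", "fan", "drying"}}

purpose_sim, distant = analogy_score(yogurt_maker, dehydrator)
print(purpose_sim, distant)  # shared purpose, no shared mechanism
```

The published system learned these representations from crowd annotations with deep learning rather than using raw word overlap, but the scoring intuition is the same: match on what a product is *for*, not on how it looks or works.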

When crowd workers subsequently used the analogies to suggest new products, these “distant” analogies yielded the most innovative ideas, Hope says.

The same approach could be used to tailor computer programs to find analogies in patent applications or scientific research papers.

The National Science Foundation supported this research, as did Bosch, Google, and Carnegie Mellon University’s Web 2020 initiative.

Source: Carnegie Mellon University

The post To spur innovation, teach A.I. to find analogies appeared first on Futurity.

Mindfulness apps with acceptance training can reduce stress

Mindfulness meditation apps can reduce the body’s response to biological stress, new research suggests.

“…this study shows that it’s possible to learn skills that improve the way our bodies respond to stress…”

Acceptance, or learning to be open and accepting of the way things are in each moment, is particularly important for changing the body’s stress response and benefiting from the training’s stress reduction effects, the researchers found.

The research offers the first scientific evidence that a brief mindfulness meditation mobile app that incorporates acceptance training reduces cortisol and systolic blood pressure in response to stress.

“We have known that mindfulness training programs can buffer stress, but we haven’t figured out how they work,” says David Creswell, an associate professor of psychology in Carnegie Mellon University’s Dietrich College of Humanities and Social Sciences.

“This study, led by Emily Lindsay in my lab, provides initial evidence that the acceptance training component is critical for driving the stress reduction benefits of mindfulness training programs,” Creswell says.

For the study, 144 stressed adults participated in one of three randomly assigned smartphone-based interventions: training in monitoring the present moment with acceptance, training in monitoring the present moment only, or active control training.

Each participant completed one 20-minute daily lesson for 14 days. Then, they were placed in a stressful situation while their cortisol levels and blood pressure were measured.

The results showed that the participants in the combined monitoring and acceptance program had reduced cortisol and systolic blood pressure reactivity. Their blood pressure responses were approximately 20 percent lower than those in the two interventions that did not include acceptance training. Their cortisol responses were also more than 50 percent lower.

“Not only were we able to show that acceptance is a critical part of mindfulness training, but we’ve demonstrated for the first time that a short, systematic smartphone mindfulness program helps to reduce the impact of stress on the body,” says Lindsay, who received her PhD in psychology and is now a postdoctoral research fellow at the University of Pittsburgh.

“We all experience stress in our lives, but this study shows that it’s possible to learn skills that improve the way our bodies respond to stress with as little as two weeks of dedicated practice. Rather than fighting to get rid of unpleasant feelings, welcoming and accepting these feelings during stressful moments is key,” Lindsay says.

Shinzen Young and 01 Expert Systems developed the app used for the study.

The researchers report their findings in the journal Psychoneuroendocrinology.

Additional researchers contributing to the work are from Penn State and Virginia Commonwealth University.

The Yoga Science Foundation, the Mind & Life Institute, and the American Psychological Association funded this research.

Source: Carnegie Mellon University
