Category Archives: Georgia Institute of Technology

Sodium-based batteries could be great alternative to lithium

New evidence suggests batteries based on sodium and potassium hold promise as a potential alternative to lithium-based batteries.

The growth in battery technology has led to concerns that the world’s supply of lithium, the metal at the heart of many of the new rechargeable batteries, may eventually be depleted.

“One of the biggest obstacles for sodium- and potassium-ion batteries has been that they tend to decay and degrade faster and hold less energy than alternatives,” says Matthew McDowell, an assistant professor in the George W. Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering at Georgia Tech.

“But we’ve found that’s not always the case,” he adds.

For the study, which appears in the journal Joule, the research team looked at how three different ions—lithium, sodium, and potassium—reacted with particles of iron sulfide, also called pyrite and fool’s gold.

As batteries charge and discharge, ions are constantly reacting with and penetrating the particles that make up the battery electrode. This reaction process causes large volume changes in the electrode’s particles, often breaking them up into small pieces. Because sodium and potassium ions are larger than lithium, it’s traditionally been thought that they cause more significant degradation when reacting with particles.

In their experiments, the researchers directly observed the reactions that occur inside a battery using an electron microscope, with iron sulfide particles playing the role of a battery electrode. They found that iron sulfide was more stable during reaction with sodium and potassium than with lithium, indicating that a battery based on sodium or potassium could have a much longer life than expected.

The difference in how the ions reacted was visually stark. When exposed to lithium, iron sulfide particles appeared to almost explode under the electron microscope. By contrast, the iron sulfide expanded like a balloon when exposed to sodium and potassium.

“We saw a very robust reaction with no fracture—something that suggests that this material and other materials like it could be used in these novel batteries with greater stability over time,” says graduate student Matthew Boebinger.

The study also casts doubt on the notion that large volume changes that occur during the electrochemical reaction are always a precursor to particle fracture, which causes electrode failure leading to battery degradation.

The researchers suggest one possible reason for the difference: lithium was more likely to concentrate its reaction along the particle’s sharp, cube-like edges, whereas the reaction with sodium and potassium spread more diffusely across the entire surface of the iron sulfide particle.

As a result, iron sulfide particles reacting with sodium or potassium developed a more oval shape with rounded edges.

While there’s still more work to be done, the new research findings could help scientists design battery systems that use these types of novel materials.

“Lithium batteries are still the most attractive right now because they have the most energy density—you can pack a lot of energy in that space,” McDowell says.

“Sodium and potassium batteries at this point don’t have more density, but they are based on elements a thousand times more abundant in the earth’s crust than lithium. So they could be much cheaper in the future, which is important for large scale energy storage—backup power for homes or the energy grid of the future.”

The National Science Foundation and the US Department of Energy funded the research. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

Source: Georgia Tech

5 big questions about the science of ‘Star Wars’

As Star Wars: The Force Awakens cleaned up at the box office, researchers from Georgia Tech took a closer look at the science of the films. They answered five big questions about the worlds depicted in the movies and what’s possible in reality. We’re revisiting their responses to celebrate the release of the 2018 installment in the series, Solo: A Star Wars Story.

1. Is light speed even possible?

Han Solo isn’t a bashful hero. So it’s no surprise that it took him only a few moments after we first met him to brag that his Millennium Falcon was the “fastest ship in the galaxy.” But how fast is fast? Solo said his ship can go .5 past light speed.

Deirdre Shoemaker, associate professor in the Georgia Tech School of Physics, explains in this video how fast light speed really is, why it’s not fast enough, and what needs to happen for something to actually travel 186,000 miles per second:
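For a rough sense of the scale involved (a quick back-of-the-envelope illustration, not material from the video), the snippet below uses the 186,000-miles-per-second figure above and the roughly 100,000-light-year width of the Milky Way cited later in this story:

```python
# Back-of-the-envelope arithmetic on light speed, using only figures
# quoted in this article: 186,000 miles per second and a Milky Way
# roughly 100,000 light-years across.

MILES_PER_SECOND = 186_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600

miles_per_light_year = MILES_PER_SECOND * SECONDS_PER_YEAR  # ~5.9 trillion miles

print(f"One light-year is about {miles_per_light_year:.2e} miles")
print("Crossing the Milky Way at light speed takes about 100,000 years")
print("Reaching Proxima Centauri (~4.24 light-years away) takes over 4 years")
```

Even at full light speed, in other words, a trip across the galaxy is a 100,000-year journey, which is why the Millennium Falcon would need something far faster than light.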

2. Could these new worlds exist in our universe?

The Star Wars universe depicts a diverse set of worlds containing a variety of inhabitants. John Wise, assistant professor in the School of Physics, studies early galaxies and distant objects in the universe. He wonders if there are planets somewhere out there that resemble the ones imagined by George Lucas:

“Until 1991, the only planets known to humans were in our Solar System. In that same year, astronomers discovered the first extrasolar planet, a class of worlds now dubbed exoplanets, by measuring the Doppler shift of stellar spectral lines, effectively witnessing the planet play gravitational tug-of-war with its parent star as it orbits. Over the next decade or so, astronomers refined their planet-hunting skills and found more than 30 exoplanets.

“This all changed with the launch of NASA’s Kepler Mission, which continually monitored a patch of sky for brightness variations in 150,000 stars. Any dip in brightness can be caused by a planet passing in front of its star, blocking a small fraction of its light. In its four-year run, Kepler detected and confirmed nearly 2,000 planetary systems, ranging from “Hot Jupiters” to frozen, rocky worlds. Intriguingly, a select few lie within the Goldilocks zone where liquid water could exist because the planet isn’t too hot or too cold.

“This planetary diversity is also seen in Star Wars—Endor, the home of the Ewoks, that orbits a gaseous giant planet; Hoth, where Luke Skywalker almost froze to death; Alderaan, a blue-green orb not unlike our Earth until it was destroyed by the Death Star; and Tatooine, Luke and Anakin Skywalker’s home planet. One of the most vivid scenes of Episode IV happens when Luke gazes toward the horizon at a binary sunset. When the original was released in 1977, such a scene was restricted to the sci-fi realm, but this is no longer the case. Kepler has now discovered 10 planets that orbit binary star systems, whose possible inhabitants see a similar sight every day.

“The Kepler Mission was just the first step in humankind’s discovery of planetary systems in the Milky Way. It only observed 1/400th of the sky. It could only detect planets out to 3,000 light years, which is tiny compared to the Milky Way’s size of 100,000 light years. Using Kepler’s detections, astronomers have estimated that there could be as many as 40 billion planets in our galaxy. But that is only one galaxy! Imagine how many planets are littered among the 100 billion galaxies in the observable universe. Perhaps planets from a long time ago in a galaxy far, far away?”
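To make the transit idea above concrete, here is a small illustration (not part of the Kepler analysis itself): the fractional dip in a star’s brightness during a transit is roughly the square of the planet-to-star radius ratio, so an Earth-sized planet crossing a Sun-like star dims it by only about 0.008 percent, while a Jupiter-sized planet dims it by about 1 percent.

```python
# Rough illustration of the transit method described above: a planet
# blocks a fraction of starlight roughly equal to the ratio of the
# cross-sectional areas, (R_planet / R_star)^2. Radii are standard
# approximate values, not figures from the article.

R_SUN_KM = 695_700
R_EARTH_KM = 6_371
R_JUPITER_KM = 69_911

def transit_depth(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fractional dip in brightness while the planet crosses the star."""
    return (r_planet_km / r_star_km) ** 2

print(f"Earth-like planet:   {transit_depth(R_EARTH_KM):.6f}  (~0.008% dip)")
print(f"Jupiter-like planet: {transit_depth(R_JUPITER_KM):.4f}   (~1% dip)")
```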

R2-D2 and C-3PO watch a sunset. (Credit: Michael Li/Flickr)

3. Are C-3PO and R2-D2 coming soon?

Even though C-3PO and R2-D2 lived (in a galaxy) a long time ago, today’s roboticists still haven’t found a way to create their current-day cousins. The College of Computing’s Sonia Chernova is one of many on campus trying to bring robots out of the lab and into the world so that people can have their own droids. She says:

“Robots tend to be on one extreme or the other these days. One kind is found on Mars, battlefields, and in operating rooms. These robots are extensions of humans—they’re rarely autonomous because a human is always in the loop.

“Others are autonomous. We see this mostly on manufacturing floors, where machines are programmed to do the same repetitive task with extreme precision. Not only are they limited by what they can do, but they’re also often separated from people for safety reasons.

“I’m focused on something in the middle. Full autonomy for personal robots would be great, but it’s not yet practical given today’s technology. Humans are too unpredictable and environments are ever changing. Rather than setting 100 percent autonomy as the goal for getting robots into our lives, we should deploy them when they’re simply “good enough.” Once they’re with us, they can learn the rest.

“Here’s an example: in hospitals, a delivery robot could pass out towels and medication. If it were to get stuck leaving a room, the machine could call a command center where a human technician would figure out the problem and free the robot. Here’s the key: every time a person made a fix, the robot would keep that new information and use it to perform differently the next time it leaves the room. With humans in the mix, this robot could learn from its mistakes and continually push toward 100 percent autonomy.

“As for R2-D2 and his friends, we’re not that far from personal robots. I don’t think we’ll have to clean our houses in 20 years because we’ll have robot helpers. I’m not sure what they’ll cost or if people will psychologically be ready to give up that part of their lives, but we’ll have the software and hardware in place to make it happen.”

4. What would it be like to master the Force?

Imagine lifting a spaceship with the tip of your finger like Yoda in The Empire Strikes Back. Nepomuk Otte of the School of Physics says there are a few things you might want to consider: 

“Didn’t we learn from physics classes about Newton’s third law? For every action, there is an equal and opposite reaction. If true, it would mean that when Yoda exerts a force on the X-wing, Luke Skywalker’s spaceship should also exert the same amount of force on Yoda. So why doesn’t the little fella get squished like a mosquito?

“Violating action and reaction would shatter one of the most sacred laws in physics—momentum conservation. But Yoda moves the spacecraft with ease and shuffles away unscathed. The Jedi Master must be surrounded by some sort of shield that absorbs the reaction part of the force. When you attempt to use the Force, make sure you have one of those shields, too, or you might suffer the consequences.”

5. Can the Force be a new interaction that we haven’t discovered yet?

Flavio Fenton of the School of Physics responds—and offers a few questions of his own:

“When the Death Star’s superlaser destroyed Princess Leia’s home planet of Alderaan, Obi-Wan Kenobi delivered one of the saga’s most famous quotes: ‘I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror and were suddenly silenced. I fear something terrible has happened.’

“The death of the entire planet sent shock waves through the Force, weakening those who were able to feel them. That included Obi-Wan, who briefly became faint. This action at a distance is explained in physics by what is called a field. For example, we are well aware of gravitational and electromagnetic fields. Objects that are affected by a field carry “something” that allows them to interact. For gravity, it is mass. For electricity, it is charge.

“Because there is a Light and a Dark Side of the Force, a field would require that we assume two types of charges, similar to positive and negative charges in the electromagnetic force. Here’s an example: Darth Vader can strangle people by using the Force without physical contact. That means his victims would have to carry both types of charges in equal amounts, and the effects of the two types cancel each other. How does it happen?

“One explanation is that the dark force Vader unleashes attracts the light charge of his victim, leaving the person unbalanced with an excess of dark charge. In this case, all the dark charges then try to come together along the neck, squeezing and nearly choking the person to death. This means that, unlike electric charges, particles with equal force charges attract, and particles with different force charges repel. This could explain why a neutral force charge is common to all objects. It could also explain why the Dark Side has an addictive aspect: when a Jedi turns to the Dark Side, it’s a slippery slope filled with continuous evil.

“Going just a bit deeper for my fellow physics fanatics—if we were to study the Force from a subatomic level, we should consider that, like any other interaction we know in nature, there exist force carriers. These are particles that give rise to forces between other particles. For example, the electromagnetic force between two electrons can be explained by the exchange of virtual photons and gravitation by the exchange of virtual gravitons. Therefore the two Force charges should have a carrier. Should we call them Jedi-nos? Should the Large Hadron Collider search for these new particles now that it has found the Higgs particle?”

Source: Georgia Tech (Originally published December 30, 2015)

Just how gross are airplane cabins really?

The bacterial communities accompanying airline passengers at 30,000 feet have a lot in common with the bacterial communities surrounding people in their homes and offices, according to a new study.

Using advanced sequencing technology, researchers studied the bacteria found on three components of an airliner cabin that are commonly touched by passengers: tray tables, seat belt buckles, and the handles of lavatory doors. They swabbed those items before and after ten transcontinental flights and also sampled air in the rear of the cabin during flight.

What they found was surprisingly unexciting.

“Airline passengers should not be frightened by sensational stories about germs on a plane,” says Vicki Stover Hertzberg, a professor in Emory University’s Nell Hodgson Woodruff School of Nursing and a coauthor of the study in Microbial Ecology. “They should recognize that microbes are everywhere and that an airplane is no better and no worse than an office building, a subway car, home, or a classroom. These environments all have microbiomes that look like places occupied by people.”

Given the unusual nature of an aircraft cabin, the researchers hadn’t known what to expect from their microbiome study. On transcontinental flights, passengers spend four or five hours in close proximity breathing a very dry mix of outdoor air and recycled cabin air that passes through special filters, similar to those found in operating rooms.

“There were reasons to believe that the communities of bacteria in an aircraft cabin might be different from those in other parts of the built environment, so it surprised me that what we found was very similar to what other researchers have found in homes and offices,” says Howard Weiss, a professor in Georgia Institute of Technology’s School of Mathematics and the study’s corresponding author. “What we found was bacterial communities that were mostly derived from human skin, the human mouth—and some environmental bacteria.”

The sampling found significant variations from flight to flight, which is consistent with the differences other researchers have found among the cars of passenger trains, Weiss notes. Each aircraft seemed to have its own microbiome, but the researchers did not detect statistically significant differences between preflight and post-flight conditions on the flights they studied.

“We identified a core airplane microbiome—the genera that were present in every sample we studied,” Weiss adds. The core microbiome included the genera Propionibacterium, Burkholderia, Staphylococcus, and Streptococcus (oralis).
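As a rough illustration of what “core microbiome” means here, the sketch below computes the genera shared by every sample; the sample contents are invented for illustration, and only the four core genera named above come from the study.

```python
# Toy example of finding a "core microbiome": the set of genera present
# in every sample. The samples below are made up; only the four core
# genera listed in the article are taken from the study.

samples = {
    "tray_table": {"Propionibacterium", "Burkholderia", "Staphylococcus",
                   "Streptococcus", "Micrococcus"},
    "seat_buckle": {"Propionibacterium", "Burkholderia", "Staphylococcus",
                    "Streptococcus", "Bacillus"},
    "lavatory_handle": {"Propionibacterium", "Burkholderia", "Staphylococcus",
                        "Streptococcus"},
}

core = set.intersection(*samples.values())  # genera found in every sample
print("Core genera:", sorted(core))
```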

Though the study revealed bacteria common to other parts of the built environment, Weiss still suggests travelers exercise reasonable caution.

“I carry a bottle of hand sanitizer in my computer bag whenever I travel,” says Weiss. “It’s a good practice to wash or sanitize your hands, avoid touching your face, and get a flu shot every year.”

This new information on the aircraft microbiome provides a baseline for further study, and could lead to improved techniques for maintaining healthy aircraft.

“The finding that airplanes have their own unique microbiome should not be totally surprising since we have been exploring the unique microbiome of everything from humans to spacecraft to salt ponds in Australia. The study does have important implications for industrial cleaning and sterilization standards for airplanes,” says Christopher Dupont, another coauthor and an associate professor in the microbial and environmental genomics department at the J. Craig Venter Institute, which provided bioinformatics analysis of the study’s data.

The 229 samples researchers obtained from the aircraft cabin testing were subjected to 16S rRNA sequencing, which was done at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. Because the swabs and air sampling captured only small amounts of genetic material, the testing could identify bacteria only down to the genus level, Weiss says.

In March, in the journal Proceedings of the National Academy of Sciences, the researchers reported on the results of another component of the FlyHealthy study that looked at potential transmission of respiratory viruses on aircraft. They found that an infectious passenger with influenza or other droplet-transmitted respiratory infection will most likely not transmit infection to passengers seated farther away than two seats laterally and one row in front or back on an aircraft.

That portion of the study was designed to assess rates and routes of possible infectious disease transmission during flights, using a model that combines estimated infectivity and patterns of contact among aircraft passengers and crew members to determine likelihood of infection. FlyHealthy team members monitored specific areas of the passenger cabin, developing information about contacts between passengers as they moved around.

Among next steps, the researchers would like to study the microbiome of airport areas, especially the departure lounges where passengers congregate before boarding. They would also like to study long-haul international flights in which passengers spend more time together—and are more likely to move about the cabin.

Additional coauthors are from the HudsonAlpha Institute for Biotechnology and the Boeing Company. A contract between the Georgia Institute of Technology and the Boeing Company supported the work.

Source: Georgia Tech

Why ‘2001: A Space Odyssey’ still matters in 2018

2001: A Space Odyssey (1968) is arguably the world’s most influential science fiction film. Stanley Kubrick’s space epic inspired a generation of filmmakers, including George Lucas, Steven Spielberg, and Christopher Nolan, who likened his film Interstellar (2014) to 2001.

Fifty years after its initial release, the film is getting renewed attention, including the debut of a new 70mm print at the Cannes Film Festival and a limited theatrical release beginning May 18.

Jay Telotte, professor of film studies in the School of Literature, Media, and Communication at Georgia Tech, explains why the legacy of the film endures:

Ring, wristband combo could make texting really subtle

A new way to control texting and other mobile apps uses acoustic chirps that travel from a ring to a wristband, such as a smartwatch.

The system can recognize 22 different micro finger gestures that could be programmed to various commands—including a T9 keyboard interface, a set of numbers, or application commands like playing or stopping music.

A video demonstration of the technology shows how, with a high rate of accuracy, the system can recognize hand poses involving the 12 bones of the fingers, as well as the digits 1 through 10 in American Sign Language (ASL).

“Some interaction is not socially appropriate,” says Cheng Zhang, the PhD student in the Georgia Tech School of Interactive Computing who led the effort. “A wearable is always on you, so you should have the ability to interact through that wearable at any time in an appropriate and discreet fashion. When we’re talking, I can still make some quick reply that doesn’t interrupt our interaction.”

The system is also a preliminary step toward being able to recognize ASL as a translator in the future, Zhang says. Other techniques use cameras to recognize sign language, but a camera can be obtrusive and is unlikely to be carried everywhere.

“If my wearable can translate it for me, that’s the long-term goal,” Zhang says.

Unlike other technology that requires a glove or a more obtrusive wearable, this technique, called “FingerPing,” uses just a thumb ring and a watch. The ring produces acoustic chirps that travel through the hand, which receivers on the watch pick up. Sound waves travel through structures, including the hand, in specific patterns that hand poses can alter. Utilizing those poses, the wearer can trigger up to 22 pre-programmed commands.

The gestures are small and non-invasive, as simple as tapping the tip of a finger or posing your hand in classic “1,” “2,” or “3” gestures.

“The receiver recognizes these tiny differences,” Zhang says. “The injected sound from the thumb will travel at different paths inside the body with different hand postures. For instance, when your hand is open there is only one direct path from the thumb to the wrist. Any time you do a gesture where you close a loop, the sound will take a different path and that will form a unique signature.”
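As a rough sketch of how such acoustic signatures could be mapped to gestures (an illustration only, not the researchers’ actual signal-processing pipeline), one could compare a measured frequency response at the watch’s receivers against stored per-gesture templates:

```python
# Hypothetical nearest-neighbor matching of acoustic signatures to hand
# poses. Template vectors stand in for per-gesture frequency responses;
# all values here are invented for illustration.

import numpy as np

def classify_pose(measured, templates):
    """Return the gesture whose template response is closest (Euclidean)."""
    return min(templates, key=lambda g: np.linalg.norm(measured - templates[g]))

rng = np.random.default_rng(0)
templates = {
    "open_hand": rng.random(8),   # 8 made-up frequency bins per gesture
    "thumb_to_index_tip": rng.random(8),
    "asl_three": rng.random(8),
}

measurement = templates["asl_three"] + rng.normal(0, 0.02, 8)  # noisy reading
print(classify_pose(measurement, templates))  # -> "asl_three"
```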

Zhang says that the research is a proof of concept for a technique that could expand and improve in the future.

A paper on the research was part of the 2018 ACM Conference on Human Factors in Computing Systems (CHI).

Source: Georgia Institute of Technology

A robot is teaching itself to dress hospital patients

A robot can successfully slide hospital gowns onto people’s arms, a potential first step toward machines that can fully dress people.

The machine doesn’t use its eyes as it pulls the cloth. Instead, it relies on the forces it feels as it guides the garment onto a person’s hand, around the elbow, and onto the shoulder.

More than 1 million Americans require daily physical assistance to get dressed because of injury, disease, and advanced age. Robots could potentially help, but cloth and the human body are complex.

The new machine, a PR2, taught itself in one day, by analyzing nearly 11,000 simulated examples of a robot putting a gown onto a human arm. Some of those attempts were flawless. Others were spectacular failures—the simulated robot applied dangerous forces to the arm when the cloth would catch on the person’s hand or elbow.

From these examples, the PR2’s neural network learned to estimate the forces applied to the human. In a sense, the simulations allowed the robot to learn what it feels like to be the human receiving assistance.

“People learn new skills using trial and error. We gave the PR2 the same opportunity,” says Zackory Erickson, the lead PhD student on the research team. “Doing thousands of trials on a human would have been dangerous, let alone impossibly tedious. But in just one day, using simulations, the robot learned what a person may physically feel while getting dressed.”

The robot also learned to predict the consequences of moving the gown in different ways. Some motions made the gown taut, pulling hard against the person’s body. Other movements slid the gown smoothly along the person’s arm. The robot uses these predictions to select motions that comfortably dress the arm.
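A minimal sketch of that predict-then-select idea follows; the candidate motions, progress values, and force numbers are invented stand-ins for the robot’s learned force estimator, not the team’s actual code.

```python
# Toy "predict, then pick the gentlest useful motion" loop. The force
# estimates here are hard-coded placeholders for a learned model.

import random

CANDIDATE_MOTIONS = ["pull_along_arm", "lift_slightly", "pull_taut", "pause"]
PROGRESS = {"pull_along_arm": 1.0, "lift_slightly": 0.3, "pull_taut": 0.8, "pause": 0.0}

def predict_force(motion: str) -> float:
    """Hypothetical stand-in for the learned force estimator (newtons)."""
    base = {"pull_along_arm": 1.5, "lift_slightly": 2.0, "pull_taut": 8.0, "pause": 0.1}
    return base[motion] + random.uniform(0.0, 0.2)

def choose_motion() -> str:
    """Favor motions that make progress while keeping predicted force low."""
    def score(motion: str) -> float:
        return PROGRESS[motion] - 0.5 * predict_force(motion)  # arbitrary trade-off
    return max(CANDIDATE_MOTIONS, key=score)

print(choose_motion())  # typically "pull_along_arm"
```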

After success in simulation, the PR2 attempted to dress people. Participants sat in front of the robot and watched as it held a gown and slid it onto their arms. Rather than vision, the robot used its sense of touch to perform the task based on what it learned about forces during the simulations.

“The key is that the robot is always thinking ahead,” says Charlie Kemp, an associate professor in the biomedical engineering department at Georgia Tech and Emory University and the lead faculty member on the project. “It asks itself, ‘if I pull the gown this way, will it cause more or less force on the person’s arm? What would happen if I go that way instead?’”

The researchers varied the robot’s timing and allowed it to think as much as a fifth of a second into the future while strategizing about its next move. Planning less far ahead caused the robot to fail more often.

“The more robots can understand about us, the more they’ll be able to help us,” Kemp says. “By predicting the physical implications of their actions, robots can provide assistance that is safer, more comfortable, and more effective.”

The robot currently puts the gown on only one arm, a process that takes about 10 seconds. The team says fully dressing a person remains many steps beyond this work.

The researchers will present a paper on the robot in Australia during the International Conference on Robotics and Automation (ICRA). The work is part of a larger effort on robot-assisted dressing that the National Science Foundation funds and that Georgia Tech professor Karen Liu leads.

An NSF award, AWS Cloud Credits for Research, and the NSF NRT Traineeship funded the research in part. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

Kemp is a cofounder, a board member, an equity holder, and the CTO of Hello Robot Inc., which is developing products related to this research. This research could affect his personal financial status. Georgia Tech has reviewed and approved the terms of this arrangement in accordance with its conflict of interest policies.

Source: Georgia Institute of Technology

Can ‘HoneyBot’ keep factories safe from hackers?

It’s small enough to fit inside a shoebox, yet this robot on four wheels has a big mission: protecting factories and other large facilities from hackers. It’s the HoneyBot.

The diminutive device lures in digital troublemakers who have set their sights on industrial facilities and then tricks them into giving up valuable information to cybersecurity professionals.

The decoy robot arrives as more and more devices—never designed to operate on the internet—are showing up online in homes and factories alike, opening up a new range of possibilities for hackers hoping to wreak havoc in both the digital and physical world.

Attack the attackers

“Robots do more now than they ever have, and some companies are moving forward with, not just the assembly line robots, but free-standing robots that can actually drive around factory floors,” says Raheem Beyah, professor and interim chair in Georgia Tech’s School of Electrical and Computer Engineering.

“In that type of setting, you can imagine how dangerous this could be if a hacker gains access to those machines. At a minimum, they could cause harm to whatever products are being produced. If it’s a large enough robot, it could destroy parts or the assembly line. In a worst-case scenario, it could injure or cause death to the humans in the vicinity.”

Internet security professionals have long employed decoy computer systems known as “honeypots” as a way to throw cyber attackers off the trail. Researchers applied the same concept to the HoneyBot. Once hackers gain access to the decoy, they leave behind valuable information that can help companies further secure their networks.

“A lot of cyber attacks go unanswered or unpunished because there’s this level of anonymity afforded to malicious actors on the internet, and it’s hard for companies to say who is responsible,” says graduate student Celine Irvene, who worked with Beyah to devise the new robot.

“Honeypots give security professionals the ability to study the attackers, determine what methods they are using, and figure out where they are or potentially even who they are.”

Tricking hackers

Operators can monitor and control the gadget through the internet. But unlike other remote-controlled robots, the HoneyBot’s special ability is tricking its operators into thinking it is performing one task, when in reality it’s doing something completely different.

“The idea behind a honeypot is that you don’t want the attackers to know they’re in a honeypot,” Beyah says. “If the attacker is smart and is looking out for the potential of a honeypot, maybe they’d look at different sensors on the robot, like an accelerometer or speedometer, to verify the robot is doing what it had been instructed. That’s where we would be spoofing that information as well. The hacker would see from looking at the sensors that acceleration occurred from point A to point B.”
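Here is a bare-bones sketch of that spoofing idea, assuming a hypothetical decoy that logs every command and answers sensor queries as though it had obeyed; it is an illustration, not the HoneyBot’s actual software.

```python
# Toy decoy robot: record every command an intruder sends, never move,
# but report sensor readings consistent with having obeyed.

import time

class DecoyRobot:
    def __init__(self):
        self.fake_position_m = 0.0   # odometry in the spoofed world
        self.attack_log = []         # evidence for security staff

    def handle_command(self, source_ip: str, command: str, distance_m: float):
        self.attack_log.append((time.time(), source_ip, command, distance_m))
        if command == "drive_forward":
            self.fake_position_m += distance_m   # pretend to move

    def read_sensors(self) -> dict:
        # Report values that match the commanded motion, not reality.
        return {"odometer_m": self.fake_position_m, "accelerometer_ok": True}

bot = DecoyRobot()
bot.handle_command("203.0.113.7", "drive_forward", 2.0)
print(bot.read_sensors())   # looks as if the robot really drove 2 meters
print(bot.attack_log)       # what the security team actually cares about
```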

In a factory setting, such a HoneyBot robot could sit motionless in a corner, springing to life when a hacker gains access—a visual indicator that a malicious actor is targeting the facility.

Rather than allowing the hacker to then run amok in the physical world, researchers could design the robot to follow certain commands deemed harmless—such as meandering slowly about or picking up objects—but stopping short of actually doing anything dangerous.

So far, their technique seems to be working.

In experiments designed to test how convincing the false sensor data would be, volunteers in December 2017 used a virtual interface to steer the robot through a maze; they could not see what was happening to the device in real life.

To entice the volunteers to break the rules, the researchers placed forbidden “shortcuts” at specific spots within the maze that would allow participants to finish faster.

In the real maze back in the lab, no shortcut existed, and if participants opted to take it, the robot simply remained still. Meanwhile, researchers fed the volunteers—who had unwittingly become hackers for the purposes of the experiment—simulated sensor data indicating that they had passed through the shortcut and continued along.

“We wanted to make sure they felt that this robot was doing this real thing,” Beyah says.

In surveys after the experiment, participants who had actually controlled the device the whole time and those who had been fed simulated data about the fake shortcut rated the data as believable at similar rates.

“This is a good sign because it indicates that we’re on the right track,” Irvene says.

The National Science Foundation supported the work. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Source: Georgia Tech

Here’s whose germs can infect you on a plane

A new study assesses rates and routes of possible infectious disease transmission during flights.

An infectious passenger with influenza or other droplet-transmitted respiratory infection will most likely not transmit infection to passengers seated farther away than two seats laterally and one row in front or back on an aircraft, the new research indicates.

Vicki Hertzberg, a professor at Emory University’s Nell Hodgson Woodruff School of Nursing, and Howard Weiss, a professor in the School of Mathematics at the Georgia Institute of Technology, led tracking efforts in their FlyHealthy study, developing a model that combines estimated infectivity and patterns of contact among aircraft passengers and crew members to determine the likelihood of infection.

This chart shows the number of passenger contacts by row for aisle, middle, and window seats on one flight the researchers studied. (Credit: Georgia Tech)

FlyHealthy team members monitored specific areas of the passenger cabin, and made five round trips from the East to West Coast recording movements of passengers and crew. In addition, they collected air samples and obtained surface samples from areas most likely to harbor microbes. They leveraged the movement data to create thousands of simulated flight scenarios and possibilities for direct exposure to droplet-transmitted respiratory diseases.

“Respiratory diseases are often spread within populations through close contact,” explains Hertzberg. “We wanted to determine the number and duration of social contacts between passengers and crew, but we could not use our regular tracking technology on an aircraft. With our trained observers, we were able to observe where and when contacts occurred on flights. This allows us to model how direct transmission might occur.”

“We now know a lot about how passengers move around on flights. For instance, around 40 percent of passengers never leave their seats, another 40 percent get up once during the flight, and 20 percent get up two or more times. Proximity to the aisle was also associated with movement. About 80 percent of passengers in aisle seats got up during flights, in comparison to 60 percent of passengers in middle seats and 40 percent in window seats. Passengers who leave their seats are up for an average of five minutes,” Hertzberg says.
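A toy simulation in the spirit of those flight scenarios might sample passenger movement from the proportions Hertzberg describes; the sketch below is illustrative only and leaves out the seat-by-seat contact tracking the real FlyHealthy model uses.

```python
# Sample in-flight movement per passenger using the reported proportions:
# 40% never get up, 40% get up once, 20% get up two or more times,
# roughly five minutes out of the seat per excursion.

import random

def simulate_passenger():
    r = random.random()
    if r < 0.40:
        times_up = 0
    elif r < 0.80:
        times_up = 1
    else:
        times_up = random.choice([2, 3])  # "two or more times"
    minutes_up = sum(max(random.gauss(5.0, 1.0), 0.0) for _ in range(times_up))
    return times_up, minutes_up

flight = [simulate_passenger() for _ in range(150)]  # a ~150-passenger cabin
print("Never left their seat:", sum(1 for t, _ in flight if t == 0))
print("Total passenger-minutes in the aisle:", round(sum(m for _, m in flight)))
```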

The researchers also point to fomite transmission—exposure to viruses that remain on certain surfaces such as tray tables, seat belts, and lavatory handles—as additional likely contributors to disease transmission. They provide public health recommendations to help prevent the spread of infectious disease.

“We found that direct disease transmission outside of the one-meter area of an infected passenger is unlikely,” explains Weiss. Respiratory infections can also be transmitted indirectly through contact with an infected surface. This could happen if a sick passenger coughs into their hand, and later touches a lavatory surface or overhead bin handle.

“Passengers and flight crews can eliminate this risk of indirect transmission by exercising hand hygiene and keeping their hands away from their nose and eyes.”

The study only evaluated the potential spread of infectious agents on an aircraft. Transmission could also occur at other points in a passenger’s journey, underscoring the need to maintain healthy habits, Weiss adds.

Complete findings of the study appear in the Proceedings of the National Academy of Sciences. Funding for the research came from Boeing.

Source: Georgia Tech

A.I. may spot heart failure signs early

A new method that uses deep learning to analyze vast amounts of personal health record data could identify early signs of heart failure, researchers say.

A paper, which appears in the Journal of the American Medical Informatics Association (JAMIA), describes how the method addresses temporality in the data—something previously ignored by conventional machine learning models in health care applications.

The research uses a deep learning model to allow earlier detection of the incidents and stages that often lead to heart failure within 6-18 months. To achieve this, researchers use a recurrent neural network (RNN) to model temporal relations among events in electronic health records.

Temporal relationships communicate the ordering of events or states in time. This type of relation is traditionally used in natural language processing. However, researchers saw a new opportunity to leverage the power of RNNs.
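A minimal sketch of that general approach, with invented layer sizes and a made-up vocabulary of medical event codes rather than the authors’ actual architecture, might look like this:

```python
# Toy recurrent model: embed a patient's time-ordered medical event codes,
# run a GRU over the sequence, and output a heart-failure risk score.

import torch
import torch.nn as nn

class EventSequenceRisk(nn.Module):
    def __init__(self, n_codes: int = 1000, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(n_codes, emb_dim)  # one vector per event code
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, codes: torch.Tensor) -> torch.Tensor:
        # codes: (batch, sequence_length) integers, ordered in time
        x = self.embed(codes)
        _, h = self.rnn(x)                    # final hidden state summarizes history
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)  # risk in [0, 1]

model = EventSequenceRisk()
fake_records = torch.randint(0, 1000, (4, 30))  # 4 synthetic patients, 30 events each
print(model(fake_records))                       # 4 risk scores
```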

“I studied deep learning and I was wondering if RNNs could be introduced into health care. It is a very popular model for processing sequences and is traditionally used for translation,” says Edward Choi, a PhD student at Georgia Tech working with Jimeng Sun, an associate professor in the School of Computational Science and Engineering.

By using an RNN, the algorithm can anticipate early stages of heart failure, which could ultimately lead to better preventive care for patients at risk of heart disease.

“Machine learning is being used in every aspect of health care. From diagnosis and treatments to recommendations for patient care after surgeries. This particular model is focused on deep learning, which has had great success in many industries. However, in health care, we are on the front of pioneering deep learning and Edward is one of the first ones to apply it,” Sun says.

According to the Centers for Disease Control and Prevention, heart failure affects 5.7 million adults in the United States, and half of those who develop heart failure die within 5 years of diagnosis, costing the nation an estimated $30.7 billion each year.

The new findings could provide relief to millions of Americans each year by allowing doctors to offer patients early intervention.

“This is a preliminary work, it showed potential that it can do better than classical models—it makes a good promise for how deep learning can make a positive impact in the health care industry,” says Choi.

The National Institutes of Health in collaboration with Sutter Health funded the work.

Source: Georgia Tech

Try out your code on a swarm of real robots

Researchers from around the globe can write their own computer programs, upload them, and then get the results as machines at Georgia Tech carry out the commands. The Georgia Tech team will even send back video evidence of the experiment.

The Robotarium, a 725-square-foot facility that houses nearly 100 rolling and flying swarm robots, opens this month.

The concept is simple, says Magnus Egerstedt: robots for everyone.

“Building and maintaining a world-class, multi-robot lab is too expensive for a large number of current and budding roboticists. It creates a steep barrier for entry into our field,” says Egerstedt, a professor of electrical and computer engineering.

“We need to provide more access in order to continue creating the next generation of robots and robot-assisted technologies. The Robotarium will allow that at an unprecedented scale.”

In the facility, motion-capture cameras hang from the ceiling and peer down at the lab’s centerpiece: a white, bowl-shaped arena that looks like a 12-by-14-foot hockey rink. That’s where up to 80 palm-sized, rolling robots scoot around the surface.

They automatically activate when given a program by someone in the room or by a remote coder in a different state or country. Once an experiment finishes, the swarm autonomously returns to wireless charging slots on the edge of the rink and waits to be activated for its next mission.

The lab is currently set up for the 3D-printed rolling machines. In a few weeks, autonomous quadcopters the size of small dinner plates will whiz through the air for remote flying experiments (a retractable net will keep them from slamming into walls or people if things unexpectedly get out of control). A large window allows curious onlookers to watch the organized chaos.

“The Robotarium is a terrarium for robots,” Egerstedt says. “We wanted to create a space where anyone, at any time of the day or night, can walk past the lab and see robots in action. Too many robot labs are hidden away behind closed doors.”

That’s exactly how Egerstedt’s team worked for the last year and a half. They experimented using a tabletop version of the Robotarium. The mini surface allowed them to iron out kinks and identify potential problems with open-access robotics. For instance, what if someone purposely uploaded code that would cause the bots to collide and demolish each other?

“That’s why we created algorithms that wrap a virtual barrier around each machine to prevent collisions,” says Siddharth Mayya, a PhD student in the lab. “We also had to worry about hackers.”
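As a simplified illustration of that virtual-barrier idea (the Robotarium’s real safety layer is more sophisticated), a filter could check each commanded step and halt any robots that would otherwise end up too close together:

```python
# Toy collision filter: before applying user velocity commands, stop any
# pair of robots whose next positions would fall within a safety radius.
# The radius and time step are illustrative values.

import numpy as np

SAFETY_RADIUS = 0.12  # meters
TIME_STEP = 0.05      # seconds per control step

def filter_velocities(positions: np.ndarray, velocities: np.ndarray) -> np.ndarray:
    """positions, velocities: (n_robots, 2) arrays; returns safe velocities."""
    safe = velocities.copy()
    next_pos = positions + safe * TIME_STEP
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(next_pos[i] - next_pos[j]) < SAFETY_RADIUS:
                safe[i] = 0.0  # halt both robots rather than let them collide
                safe[j] = 0.0
    return safe

positions = np.array([[0.0, 0.0], [0.1, 0.0]])
commands = np.array([[0.5, 0.0], [-0.5, 0.0]])  # a commanded head-on collision
print(filter_velocities(positions, commands))    # both commands zeroed
```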

Part of the work included developing processes to protect the system from cyber threats.

Not everything always went smoothly. PhD student Li Wang once hit a button that sent his swarm of quadcopters shooting toward the ceiling. “It rained robots that day,” he recalls.

Another time, a rolling swarm descended on the same charging station at the same time. The robots literally fought for a spot until they reached the metal rail, which shorted them out and sprayed sparks across the room.

That’s why the Robotarium’s charging stations are now wireless.

To date, more than 100 research groups have logged on and used the mini-version.

Most are roboticists without access to swarm technology. Others are biologists. One team chose to use robots, instead of computer simulations, to better understand how ants interact with each other when choosing a new queen.

Egerstedt thinks the new facility will foster more collaboration within the robotics community, allowing scientists and engineers to share their findings more widely and build on successes. The open-access setting will counter the lack of resources that sometimes stands in the way of research.

“I want to do for robotics what MOOCs (massive open online courses) have done for education—now anyone who knows how to code can work with robots,” he says.

He already has a new recruit. This past April, a group of fifth graders stopped in for a tour. Egerstedt saw one of the 10-year-olds stuffing one of the robots into his pocket while turning to leave.

“I asked him why he took it,” Egerstedt remembers. “He said he wanted to make it better.”

How?

“By adding a flamethrower.”

The National Science Foundation and Office of Naval Research funded the lab.

Source: Georgia Tech
