Saturn’s moon Titan is one of the most interesting bodies in the solar system – but not in the visible spectrum. Under normal lighting conditions, the moon just looks like a dull, featureless yellow ball, thanks to a soupy atmosphere. Now, using 13 years’ worth of infrared data from Cassini, astronomers have stitched together the clearest images yet of Titan’s surface.
Scientists have previously overlooked the astonishing physical strength of the thin outer membrane that clings to E. coli’s stout cell wall, according to a new study.
For over a century, scientists have studied E. coli, one of the bacteria that cause food poisoning, as a model for fighting infections. Such research has led to a variety of antibiotics that penetrate the protective cell walls of bacteria to kill them.
The new research, however, reveals that E. coli has managed to keep a big secret about its defenses.
Scientists had long known that many bacteria have outer membranes, but until now researchers thought of the membrane as a layer of shrink wrap that simply made it tougher to get antibiotics into cells. As the new study shows, however, the outer membrane physically protects the cell and could be a good target for a new class of antibacterial drugs.
“We’ve discovered that the outer membrane can act as a suit of armor that is actually stronger than the cell wall,” says K. C. Huang, an associate professor of bioengineering and of microbiology and immunology at Stanford University. “It’s humbling to think that this function had been hiding in plain sight for all these years.”
Huang says the findings suggest new infection-fighting strategies for the roughly half of all bacterial species that, like E. coli, have outer membranes.
“If we can attack the outer membrane, infectious bacteria will be pre-weakened for targeting with antibiotic treatments that disrupt cells in other ways,” he says.
Behind the shield
All bacteria have a cell wall that surrounds and protects the cell’s inner workings. Many decades ago, scientists discovered that E. coli and many other bacteria have an additional layer, called an outer membrane, that surrounds their cell walls.
Since its discovery, this outer membrane has been used to classify bacteria by whether they react to a common staining technique, called a Gram stain. Bacteria with outer membranes do not react to the chemical stain and are called Gram-negative; bacteria with naked cell walls react to the stain and are classified as Gram-positive.
Both kinds of bacteria can become infectious and, when this occurs, the presence or absence of an outer membrane can also help determine how responsive they will be to antibiotics. Gram-negative bacteria—which have outer membranes—tend to be more resistant to antibiotics.
“Scientists knew that outer membranes were chemical shields,” Huang says. “Thus, it was easy to relegate this third layer to an annoyance when dosing the cell with antibiotics.”
Tests of strength
In recent years, however, researchers have had clues that the outer membrane is more important than they’d thought. In one study, Huang’s lab removed E. coli’s cell wall but left its outer membrane intact. Unsurprisingly, the bacteria lost their cucumber shape and became blobs. But a large fraction of these blobs survived, multiplied and ultimately regenerated new cucumber-shaped E. coli.
Enrique Rojas, a former postdoctoral scholar in Huang’s lab and first author of the new paper, says that study was a clue that the outer membrane must play important structural and protective roles.
“We just listened to the data. Science is about data, not dogma,” says Rojas, now an assistant professor of biology at New York University.
Over the last four years, the group members tested the outer membrane’s structural powers.
They suddenly collapsed the pressure inside the bacteria, but instead of causing the cell wall to massively shrink, as prevailing assumptions would have predicted, they found that the outer membrane was strong enough to almost entirely maintain E. coli’s cucumber shape.
In other experiments, they put E. coli cells through two hours of rapid increases and decreases in pressure. E. coli cells normally shrug off these repeated insults and grow as if no changes at all had occurred. However, when the researchers weakened the outer membrane, cells died quickly.
“The presence or absence of a strong outer membrane is the difference between life and death,” Huang says.
The experiments identified a handful of components that give the outer membrane its surprising strength. Drugs that destabilize the deceptively thin outer layer could help destroy infectious bacteria, Huang says.
Huang adds that the findings are part of an emerging field of study called mechanobiology. Whereas scientists once viewed cells as sacks of chemicals to study by chemical means, today a confluence of tools reveals the immensely complex structural properties that make cells and organs tick.
“It’s a very exciting time to be studying biology,” Huang says. “We are approaching the point at which our tools and techniques are becoming precise enough to discern, sometimes at almost the atomic level, the physical rules that give rise to life.”
Additional coauthors are from Stanford; the University of California, San Francisco; and the University of Wisconsin-Madison.
Funding for the research came from the National Institutes of Health; the National Science Foundation; the Stanford Systems Biology Center and Simbios Center for Physics-Based Computation at Stanford; the Howard Hughes Medical Institute; the Swiss National Science Foundation; and the Allen Discovery Center program through the Paul G. Allen Frontiers Group.
Poor air quality in national parks may put a damper on visitation, according to a new study.
As reported in Science Advances, the researchers studied ozone levels in 33 of the largest national parks in the US. The researchers found that from 1990 to 2014 average ozone concentrations in national parks were statistically indistinguishable from those of the 20 largest US metropolitan areas—conditions that previously sparked federal legislation. To protect parks, the Clean Air Act (CAA) Amendments of 1977 and 1990 designated national parks as Federal Class I Areas.
“The US has spent billions of dollars over the last three decades to improve air quality,” says David Keiser, assistant professor of economics at Iowa State University. “Given the popularity of national parks, as well as the fact that people go to parks to be outside, we believed it was worth better understanding air quality trends in these areas and whether people, through their actions, respond to changes in air quality in parks.”
The study found that ozone levels in metropolitan areas began improving in 1990; in national parks, however, improvements have only been apparent since the early 2000s, corresponding to the passage of the Regional Haze Rule, a 1999 EPA regulation that strengthened air quality protections for national parks and wilderness areas.
The authors first compiled data from extensive ozone monitoring efforts led by the National Park Service and the EPA. Data show that since 1990, national parks have seen only modest reductions in days with ozone concentrations exceeding 70 parts per billion, levels deemed unhealthy by the EPA.
The researchers then matched the pollution data to monthly park visitation statistics at 33 of the most heavily visited national parks and found that visitation responds most to ozone during months with poor air quality. Unsurprisingly, this response is largest in summer and fall, the seasons when park visitation is highest.
They also explored two potential causes for this result: air quality index (AQI) warnings issued by parks and poor visibility. They found that the visitation response is more strongly associated with potential health warnings than with visibility.
A recent survey found that nearly 90 percent of respondents had visited a national park area in their lifetime, with one-third of respondents anticipating visiting a park in the coming year. Despite improvements over the last two decades, air quality in many national parks remains unhealthy for sensitive groups on average for two-and-one-half to three weeks per year.
Indeed, despite the decrease in visitation that the authors found during months with poor air quality, an estimated 35 percent of all visitor days occurred when ozone exceeded the 55 ppb “moderate” AQI threshold, and nearly 9 percent of visitor days when ozone levels exceeded 70 ppb. Exposure to these elevated ozone levels has important health implications—visitors have an increased chance of adverse health outcomes, including hospitalization, respiratory symptoms, and mortality for sensitive individuals.
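The threshold logic behind these figures can be sketched in a few lines. This is an illustrative simplification of the two cutoffs the article cites (55 ppb for "moderate" and 70 ppb for levels the EPA deems unhealthy for sensitive groups); the category names and the idea of a daily-readings helper are assumptions for the example, not the study's actual code.

```python
# Illustrative sketch of the ozone cutoffs cited above: the EPA's
# 70 ppb standard and the 55 ppb "moderate" AQI threshold.
# Category labels are simplified for the example.

def ozone_category(ppb: float) -> str:
    """Classify an average ozone concentration (parts per billion)."""
    if ppb > 70:
        return "unhealthy for sensitive groups"
    if ppb > 55:
        return "moderate"
    return "good"

def share_of_days(readings, threshold):
    """Fraction of daily readings that exceed a given ppb threshold."""
    return sum(1 for r in readings if r > threshold) / len(readings)

# Hypothetical week of peak-season readings at a park monitor
week = [48, 62, 71, 66, 53, 74, 58]
print(ozone_category(71))                 # exceeds the EPA standard
print(round(share_of_days(week, 55), 2))  # share of days above "moderate"
```

Applied to real monitoring data, the second helper is essentially how a statistic like "35 percent of visitor days exceeded 55 ppb" is computed.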
The number of park visits suggests potentially large human health benefits to further air quality improvements at national parks and elsewhere.
Coauthors of the study are from Iowa State and Cornell University.
Just two hours of vigorous yard work in the summer sun without drinking fluids could be enough to blunt concentration, according to a new study.
After statistically analyzing data from multiple peer-reviewed research papers on dehydration and cognitive ability, researchers found that cognitive functions often wilt as water departs the body. The data point to functions like attention, coordination, and complex problem solving suffering the most, while activities like reacting quickly when prompted diminish far less.
“The simplest reaction time tasks were least impacted, even as dehydration got worse, but tasks that require attention were quite impacted,” says Mindy Millard-Stafford, a professor in the School of Biological Sciences at Georgia Tech.
No fluid, no focus
As the bodies of test subjects in various studies lost water, the majority of participants increasingly made errors during attention-related tasks that were mostly repetitive and unexciting, such as punching a button in varying patterns for quite a few minutes. There are situations in life that challenge attentiveness in a similar manner, and when it lapses, snafus can happen.
“Maintaining focus in a long meeting, driving a car, a monotonous job in a hot factory that requires you to stay alert are some of them,” says Millard-Stafford, the study’s principal investigator. “Higher-order functions like doing math or applying logic also dropped off.”
The researchers have been concerned that dehydration could raise the risk of an accident, particularly in scenarios that combine heavy sweating and dangerous machinery or military hardware.
Millard-Stafford and first author Matthew Wittbrodt, a former graduate research assistant at Georgia Tech and now a postdoctoral researcher at Emory University, report their work in the journal Medicine & Science in Sports & Exercise.
There’s no hard and fast rule about when exactly such lapses can pop up, but the researchers examined studies with one to six percent loss of body mass due to dehydration and found more severe impairments started at two percent. That level has been a significant benchmark in related studies.
“There’s already a lot of quantitative documentation that if you lose two percent in water it affects physical abilities like muscle endurance or sports tasks and your ability to regulate your body temperature,” says Millard-Stafford. “We wanted to see if that was similar for cognitive function.”
The researchers looked at 6,591 relevant studies for their comparison, then narrowed them down to 33 papers whose criteria and data were comparable enough for meta-analysis. They focused on acute dehydration, which anyone could experience during exertion, heat exposure, or restricted drinking, as opposed to chronic dehydration, which can result from a disease or disorder.
How much is too much?
How much fluid loss adds up to two percent body mass loss?
“If you weigh 200 pounds and you go work out for a few hours, you drop four pounds, and that’s two percent body mass,” Millard-Stafford says. And it can happen fast. “With an hour of moderately intense activity, with a temperature in the mid-80s, and moderate humidity, it’s not uncommon to lose a little over two pounds of water.”
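The arithmetic in that quote is simple enough to write out. This is just a worked version of the article's own example; the function name is invented for illustration.

```python
# Worked version of the article's arithmetic: fluid loss expressed
# as a percentage of starting body mass, against the ~2% benchmark
# where impairment was found to worsen.

def dehydration_percent(weight_lb: float, fluid_loss_lb: float) -> float:
    """Fluid loss as a percent of starting body mass."""
    return 100.0 * fluid_loss_lb / weight_lb

# The example from the article: a 200-pound person losing 4 pounds of water
print(dehydration_percent(200, 4))   # 2.0 -> right at the impairment benchmark
```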
“If you do 12-hour fluid restriction, nothing by mouth, for medical tests, you’ll go down about 1.5 percent,” she says. “Twenty-four hours fluid restriction takes most people about three percent down.”
And that begins to affect more than cognition and athletic ability.
“If you drop four or five percent, you’re going to feel really crummy,” Millard-Stafford says. “Water is the most important nutrient.”
She warns that older people can dry out more easily because they often lose their sensation of thirst, and their kidneys are less able to concentrate urine, which makes them retain less fluid. People with high body fat content also have lower relative water reserves than lean folks.
A warning about water
Hydration is important, but so is moderation.
“You can have too much water, something called hyponatremia,” Millard-Stafford says. “Some people overly aggressively, out of a fear of dehydration, drink so much water that they dilute their blood and their brain swells.”
Rising seas threaten more than 4,000 miles of buried fiber optic cables in densely populated US coastal regions, report researchers. Seattle is one of three cities at most risk of internet disruptions.
In a talk to internet network researchers, Ramakrishnan Durairajan, an assistant professor in the computer and information science department at the University of Oregon, warned that most of the damage could come in the next 15 years. Strategies to reduce potential problems should be under consideration sooner rather than later, he says.
The Durairajan-led study is the first assessment of climate change’s risk to the internet.
“Our analysis is conservative in that we only looked at the static dataset of sea level rise and then overlapped that over the infrastructure to get an idea of risk,” Durairajan says. “Sea level rise can have other factors—a tsunami, a hurricane, coastal subduction zone earthquakes—all of which could provide additional stresses that could be catastrophic to infrastructure already at risk.”
The study also found that, by 2033, more than 1,100 internet traffic hubs will be surrounded by water. New York City and Miami are the other two most susceptible cities, but the impacts could ripple out and potentially disrupt global communications.
“Most of the damage that’s going to be done in the next 100 years will be done sooner than later,” says the study’s senior author Paul Barford, a computer scientist at the University of Wisconsin-Madison who was Durairajan’s academic adviser while he completed the study as part of his doctoral work. “That surprised us. The expectation was that we’d have 50 years to plan for it. We don’t have 50 years.”
Barford is a leading expert on the “physical internet,” the buried fiber optic cables, data centers, traffic exchanges, and termination points that are the nerve centers, arteries, and hubs of the global information network.
The study, which only considered US infrastructure, combined data from the Internet Atlas, a comprehensive global map of the internet’s physical structure, and projections of sea level incursion from the National Oceanic and Atmospheric Administration.
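In essence, that overlay is an intersection of two datasets: mapped cable segments and a projected water level. Here is a toy sketch of the tally under stated assumptions; the segment lengths and elevations are invented, and the real study works with geospatial incursion maps rather than simple elevations.

```python
# Toy sketch of the overlay analysis described above: intersect a
# projected sea level rise with mapped cable segments and tally the
# miles that would sit below the waterline. All data are invented.

def miles_at_risk(segments, sea_level_rise_ft):
    """Sum the length of segments whose elevation above current sea
    level falls at or below the projected rise."""
    return sum(length for length, elevation_ft in segments
               if elevation_ft <= sea_level_rise_ft)

# (length in miles, elevation above current sea level in feet)
coastal_conduits = [(120, 0.5), (85, 2.0), (300, 1.0), (40, 6.0)]
print(miles_at_risk(coastal_conduits, 1.0))   # 420 miles under a 1-foot rise
```

The study's "conservative" framing maps directly onto this: a static rise is the minimum water level, and storm surge or subsidence would push the effective threshold higher.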
The roots of the danger emerged inadvertently during the internet’s rapid growth in the 1980s, says Durairajan. Neither a vision of a global grid nor planning for climate change was considered during the technology explosion.
“When commercialization of the internet happened, everybody wanted to make money,” he says. “Companies started their own infrastructure deployments. Everyone had their own policies and deployed everything that they wanted in ways that were good for them.”
Over time, layers of infrastructure were placed on top of each other. Despite advances in the technology, he says, those fiber lines remain in place and face the greatest risk. Buried fiber optic cables are designed to be water resistant, but unlike the marine cables that ferry data under the ocean, they are not waterproof.
Conduits at most risk are already close to sea level. Only a slight rise in ocean levels due to melting polar ice and thermal expansion will be needed to expose buried fiber optic cables to seawater, the study found. Service disruptions during catastrophic storm surges and flooding that accompanied hurricanes Sandy and Katrina hinted at the problems to come, Barford and Durairajan note.
Mitigation strategies are needed to strengthen the coastal infrastructure so that failures there do not become cascading failures that take out inland stations, Durairajan says. The effects of building seawalls, according to the study, are difficult to predict.
“The first instinct will be to harden the infrastructure,” Barford says. “But keeping the sea at bay is hard. We can probably buy a little time, but in the long run it’s just not going to be effective.”
The study also examined risks to buried assets of individual internet service providers, finding that Century Link, Inteliquent, and AT&T are at highest risk.
Durairajan shared the findings with academic and industry researchers at the Applied Networking Workshop in Montreal on July 16, a meeting of the Association for Computing Machinery, the Internet Society, and the Institute of Electrical and Electronics Engineers.
Researchers have discovered an ancient and dramatic head-on collision between the Milky Way and a smaller object, dubbed the “Sausage” galaxy.
The cosmic crash was a defining event in the early history of the Milky Way and reshaped the structure of our galaxy, fashioning both its inner bulge and its outer halo, astronomers report in a series of new papers.
The astronomers propose that around 8 billion to 10 billion years ago, an unknown dwarf galaxy smashed into our own Milky Way. The dwarf did not survive the impact: It quickly fell apart, and the wreckage is now all around us.
“The collision ripped the dwarf to shreds, leaving its stars moving in very radial orbits” that are long and narrow like needles, says Vasily Belokurov of the University of Cambridge and the Center for Computational Astrophysics at the Flatiron Institute in New York City. The stars’ paths take them “very close to the center of our galaxy. This is a telltale sign that the dwarf galaxy came in on a really eccentric orbit and its fate was sealed.”
The researchers used data from the European Space Agency’s Gaia satellite. This spacecraft has been mapping the stellar content of our galaxy, recording the journeys of stars as they travel through the Milky Way. Thanks to Gaia, astronomers now know the positions and trajectories of our celestial neighbors with unprecedented accuracy.
The paths of the stars from the galactic merger earned them the moniker “the Gaia Sausage,” explains Wyn Evans of Cambridge. “We plotted the velocities of the stars, and the sausage shape just jumped out at us. As the smaller galaxy broke up, its stars were thrown onto very radial orbits. These Sausage stars are what’s left of the last major merger of the Milky Way.”
When looking at the distribution of star velocities in the Milky Way, the stars of the Sausage galaxy form a characteristic sausage-like shape. This unique shape is caused by the strong radial motions of the stars. As the sun lies in the center of this enormous cloud of stars, the distribution does not include the slowed-down stars currently making a U-turn back toward the galaxy’s center.
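Why radial orbits make a "sausage" in velocity space can be shown with synthetic numbers: a large spread in radial velocity paired with a small spread in azimuthal velocity stretches the cloud of points into an elongated shape. The dispersions below are invented for illustration, not fitted to Gaia data.

```python
# Synthetic illustration of the velocity-space "sausage": merger debris
# on radial orbits is "hot" in radial velocity (v_r) but "cold" in
# azimuthal velocity (v_phi). Dispersion values here are made up.
import random
import statistics

random.seed(42)
N = 5000
v_r = [random.gauss(0, 180) for _ in range(N)]    # large radial spread (km/s)
v_phi = [random.gauss(0, 40) for _ in range(N)]   # small azimuthal spread (km/s)

# Ratio of the velocity-space axes: >> 1 means elongated ("sausage"),
# ~1 would mean a round, well-mixed distribution.
axis_ratio = statistics.pstdev(v_r) / statistics.pstdev(v_phi)
print(axis_ratio > 3)   # the cloud is strongly elongated
```

A scatter plot of `v_r` against `v_phi` is precisely the kind of diagram in which, as Evans says, "the sausage shape just jumped out."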
Sergey Koposov, a member of Carnegie Mellon University’s McWilliams Center for Cosmology, studied the kinematics of the Sausage stars and globular clusters in detail.
The Milky Way continues to collide with other galaxies, such as the puny Sagittarius dwarf galaxy. However, the Sausage galaxy was much more massive. Its total mass in gas, stars, and dark matter was more than 10 billion times the mass of our sun.
When the Sausage crashed into the young Milky Way, its piercing trajectory caused a lot of mayhem. The Milky Way’s disk was probably puffed up or even fractured following the impact and would have needed to regrow. And Sausage debris was scattered all around the inner parts of the Milky Way, creating the “bulge” at the galaxy’s center and the surrounding “stellar halo.”
Numerical simulations of the galactic mashup can reproduce these features, says Denis Erkal of the University of Surrey. In simulations run by Erkal and colleagues, stars from the Sausage galaxy enter stretched-out orbits. The orbits are further elongated by the growing Milky Way disk, which swells and becomes thicker following the collision.
Evidence of this galactic remodeling is seen in the paths of stars inherited from the dwarf galaxy, says Alis Deason of Durham University. “The Sausage stars are all turning around at about the same distance from the center of the galaxy.” These U-turns cause the density in the Milky Way’s stellar halo to decrease dramatically where the stars flip directions.
This discovery was especially pleasing for Deason, who predicted this orbital pileup almost five years ago. The new work explains how the stars fell into such narrow orbits in the first place.
The new research also identified at least eight large, spherical clumps of stars called globular clusters that were brought into the Milky Way by the Sausage galaxy. Small galaxies generally do not have globular clusters of their own, so the Sausage galaxy must have been big enough to host a collection of clusters.
“While there have been many dwarf satellites falling onto the Milky Way over its life, this was the largest of them all,” Koposov says.
Can a simple graph communicate the complexity of a city plan? That was the question University of California, Berkeley professor Geoff Boeing posed during his urban planning dissertation. The result is an elegant visualization method that shows how simple, or how crazily complicated, a city’s grid is, using just a single image.
To create the viz, Boeing developed a Python software application that analyzes the orientation (for instance, north/south, east/west, or more complex variations in between) of every street in a given city. The software, which you can download from his page, records the orientation of every street it finds and plots the frequency of each orientation on a polar histogram.
“Each bar’s direction represents the compass bearings of the streets and its length represents the relative frequency of streets with those bearings,” Boeing explains on his research page. If you look at the Manhattan diagram below, you’ll see that almost half of its streets are oriented north to south and nearly all the rest east to west, while the remainder (mostly the old streets in the southern tip of the island) are distributed across various compass orientations. Those are represented by the small bars at the center of the diagram, which clearly conveys the idea that Manhattan is a relatively orderly city.
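The bearing-and-binning step behind those diagrams can be sketched in a few lines. The street segments below are invented; Boeing's actual tool pulls real street geometry from OpenStreetMap, and the helper names here are illustrative.

```python
# Minimal sketch of the binning behind the polar histograms described
# above: compute each segment's compass bearing, then count segments
# per orientation bin. Toy segments stand in for real street data.
import math
from collections import Counter

def bearing(x1, y1, x2, y2):
    """Compass bearing (degrees clockwise from north) of a segment,
    with x pointing east and y pointing north."""
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360

def bin_bearings(segments, bins=36):
    """Count segments per orientation bin (10 degrees wide by default)."""
    width = 360 / bins
    counts = Counter()
    for seg in segments:
        b = bearing(*seg)
        counts[int(b // width) % bins] += 1
        # A street runs both ways, so also count the reverse bearing
        counts[int(((b + 180) % 360) // width) % bins] += 1
    return counts

# A toy grid: two north-south streets and one east-west street
grid = [(0, 0, 0, 1), (1, 0, 1, 1), (0, 0, 1, 0)]
hist = bin_bearings(grid)
print(hist[0], hist[9])   # bins for due north (0 deg) and due east (90 deg)
```

Plotting those counts on a polar axis yields exactly the bar-per-bearing diagrams the article describes.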
Boston, by comparison, has a wildly disorganized street grid, with a hugely varied distribution of orientations. “We can learn something about the spatial ‘logic’ of the city,” Boeing said via email. “While these visualizations don’t tell us about the scale or grain of the urban fabric, they do tell us about how the circulation network is oriented. These street networks organize all the human activity and circulation in the city . . . Some are very carefully and deliberately planned with certain design paradigms, development goals, transportation technologies in mind,” he continues, while “others are very organic.”
In the end, Boeing analyzed 25 cities in total, creating a comprehensive visualization of American city planning paradigms. While understanding the complexities of the urban fabric in total can be nearly impossible, his images synthesize thousands of data points into a single, easily digestible graphic. For him, “these visuals can help make otherwise dry or technical city planning concepts more salient and approachable for laypersons. You can easily see and comprehend your own city and how it relates to others’ patterns.”
New research reveals a physical link between the speed and location of the jet stream and the strength of the polar vortex, a swirl of air that usually hovers over the Arctic.
If you can predict the path of the jet stream, the upper atmosphere’s undulating river of wind, then you can predict weather—not just for a week or two, but for an entire season. The new study moves toward that level of foresight.
“The jet stream sets everything,” says Aditi Sheshadri, lead author and assistant professor of Earth system science in the School of Earth, Energy, & Environmental Sciences at Stanford University. “Storms ride along it. They interact with it. If the jet stream shifts, the place where the storms are strongest will also shift.”
The new study identifies two distinct modes in how air flows within the jet stream and the layers of atmosphere that sandwich it.
In one mode, changes in wind speed and direction start close to the equator in the troposphere, the wet, stormy layer of atmosphere below the jet stream and closest to Earth’s surface. Shifts of wind in this mode quickly propagate up through the jet stream and into the polar vortex in the dry, upper layer of atmosphere known as the stratosphere.
In the other mode, the strength of the stratosphere’s polar vortex influences the path and strength of the jet stream—and how it interacts with storms in the troposphere. In this mode, the polar vortex sends a signal all the way down to the surface like a pulse. A weaker vortex produces a weak jet stream that slips toward the equator; a stronger vortex intensifies the jet stream while drawing it poleward.
“These deep vertical structures haven’t been shown before,” Sheshadri says. “It’s something fundamental about the system itself.”
Her analysis could help explain the surface weather impacts of an event that occurred in early 2018, when the vortex weakened so much that it ripped in two—a phenomenon that scientists know can blast up to two months of extreme weather into western Europe. Until now, understanding of these interactions has been based on observations and statistical modeling rather than knowledge of their physical foundation.
These modes could be key to predicting the long-term effects of certain environmental changes on Earth’s surface. While air is thought to flow relatively independently within the troposphere and stratosphere in normal winters, depleted ozone, high levels of greenhouse gases, ocean warming, reduced snow cover, and other disturbances can rattle this independence, affecting both the vortex and jet stream in complex ways.
Greenhouse gas emissions, for example, can strengthen the vortex while simultaneously boosting waves that propagate up from the troposphere and weaken the vortex as they break.
“We don’t know which of these two effects of increasing greenhouse gases will win out,” Sheshadri says.
Making better climate models
To help find answers, Sheshadri’s team set out to understand the climate as a system that responds in a predictable way to known forces, despite internal dynamics that are a mix of random and systematic fluctuations. They took a mathematical theorem used for nearly a century to predict seemingly random behavior in quantum mechanical systems and applied it to data representing Earth’s atmosphere in wintertime.
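The article does not name the theorem, but the description matches the fluctuation-dissipation idea: a system's unforced fluctuations predict its mean response to a steady push. The toy below illustrates that idea on an Ornstein-Uhlenbeck process standing in for a wind field; the process and every parameter are invented for illustration, and the paper's actual method is far more involved.

```python
# Toy "response from fluctuations" demonstration: estimate a noisy
# relaxing system's memory timescale from an unforced run, then use it
# to predict the mean response to a constant forcing. All parameters
# are arbitrary; this only illustrates the general principle.
import math
import random

random.seed(0)
theta, sigma, dt, n = 0.5, 1.0, 0.01, 200_000

def simulate(forcing=0.0):
    """Euler-Maruyama integration of dx = (-theta*x + forcing) dt + sigma dW."""
    x, path = 0.0, []
    for _ in range(n):
        x += (-theta * x + forcing) * dt + sigma * random.gauss(0.0, dt ** 0.5)
        path.append(x)
    return path

free = simulate()                        # unforced run: fluctuations only
mean = sum(free) / n
var = sum((x - mean) ** 2 for x in free) / n
lag1 = sum((free[i] - mean) * (free[i + 1] - mean) for i in range(n - 1)) / (n - 1)
tau = -dt / math.log(lag1 / var)         # memory timescale, ideally 1/theta = 2

forcing = 0.3
predicted = forcing * tau                # response predicted from fluctuations alone
forced = simulate(forcing)
observed = sum(forced[n // 2:]) / (n - n // 2)   # late-time mean of the forced run
print(round(predicted, 2), round(observed, 2))   # both should land near 0.6
```

The punchline mirrors the study's ambition: the unforced run alone, with no knowledge of the forcing experiment, predicts how the system shifts when pushed.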
“We have 35 years of wind data,” Sheshadri says. “Can we say something just from those observations about how the winds will change if, for instance, you increase carbon dioxide? That’s what got this whole thing started.”
Current climate models excel at showing temperature changes throughout the atmosphere’s layers over time and with varying levels of substances like ozone or carbon dioxide. “We’re pretty certain about how the temperature structure of the atmosphere is going to change,” Sheshadri says. “However, if you look at changes in things like wind or rain or snow—anything that’s a dynamical quantity—we really have very little idea of what’s going on.”
And yet, these are some of the most vivid metrics for a changing climate. “No one feels the global mean temperature,” Sheshadri says. “How many times over the next 10 years are we going to have to deal with floods or cold snaps in a particular region? That’s the sort of question this might help answer.”
By revealing the physical processes that underpin some of these dynamic variables, the method developed in this study could also help weed out flaws in climate models.
“The way that we currently do this is that you take a model and you run it forward,” checking the model’s predictions against observed data, Sheshadri explains. But many models built upon the same historic data produce different predictions for the future, in part because they make different assumptions about how the troposphere and stratosphere interact and how the jet stream fluctuates. Until now there has not been a way to check those assumptions against the atmosphere’s actual variability.
“We need to be sure the models are right, and for the right reasons,” Sheshadri says. The new work provides a way to resolve that uncertainty—and to anticipate storms months into the future.
Additional coauthors are from the Massachusetts Institute of Technology and ETH Zurich. The Simons Foundation, the National Science Foundation, and the Swiss National Science Foundation funded the work.
Scientists have found the first evidence of a source of high-energy cosmic neutrinos, ghostly subatomic particles that can travel unhindered for billions of light years from the most extreme environments in the universe to Earth.
The observations, from the IceCube Neutrino Observatory at the Amundsen–Scott South Pole Station in coordination with telescopes around the globe and in Earth’s orbit, help resolve a more than century-old riddle about what sends subatomic particles such as neutrinos and cosmic rays speeding through the universe.
Since their first detection over one hundred years ago, cosmic rays—highly energetic particles that continuously rain down on Earth from space—have posed an enduring mystery: What creates and launches these particles across such vast distances? Where do they come from?
Because cosmic rays are charged particles, their paths cannot be traced directly back to their sources due to the magnetic fields that fill space and warp their trajectories. But the powerful cosmic accelerators that produce them will also produce neutrinos. Neutrinos are uncharged particles, unaffected by even the most powerful magnetic field. Because they rarely interact with matter and have almost no mass—hence their nickname “ghost particle”—neutrinos travel nearly undisturbed from their accelerators, giving scientists an almost direct pointer to their source.
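A back-of-envelope calculation shows why the charged particles scramble while neutrinos point home: a cosmic-ray proton's gyroradius in a typical galactic magnetic field is far smaller than the distances it travels, so its path bends repeatedly. The ~3 microgauss field strength is an assumed typical value for illustration.

```python
# Rough gyroradius of a 300 TeV proton (the energy of the IceCube
# event discussed below) in a typical ~3 microgauss galactic magnetic
# field. An ultra-relativistic charged particle bends with r = E/(qcB);
# a neutrino at the same energy travels in a straight line.

E_eV = 300e12             # 300 TeV
EV_TO_J = 1.602e-19       # joules per electron volt
q = 1.602e-19             # proton charge (coulombs)
c = 2.998e8               # speed of light (m/s)
B = 3e-10                 # ~3 microgauss, in teslas
PARSEC = 3.086e16         # meters per parsec

gyroradius_pc = (E_eV * EV_TO_J) / (q * c * B) / PARSEC
print(round(gyroradius_pc, 2))   # ~0.1 pc: tiny next to kiloparsec path lengths
```

With a turning radius of a fraction of a parsec over journeys of thousands to billions of parsecs, the proton's arrival direction carries no memory of its source, which is exactly why the neutrino is the useful messenger.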
Two new papers in the journal Science for the first time provide evidence for a known blazar as a source of high-energy neutrinos detected by the IceCube observatory. This blazar, designated TXS 0506+056, was first singled out following a neutrino alert sent by IceCube on September 22, 2017.
“IceCube-170922A—a high-energy neutrino detected by IceCube on September 22, 2017—had an energy of 300 trillion electron volts and a trajectory pointing back to a small patch of sky in the constellation Orion,” says coauthor Azadeh Keivani, a postdoctoral scholar at Penn State.
“The era of multi-messenger astrophysics is here. Each messenger gives us a more complete understanding of the universe and important new insights into the most powerful objects and events in the sky,” says NSF director France Córdova. “Such breakthroughs are only possible through a long-term commitment to fundamental research and investment in superb research facilities.”
A blazar is a galaxy with a supermassive black hole at its core. A signature feature of blazars is twin jets of light and elementary particles emitted from the poles along the axis of the black hole’s rotation. In this blazar, one of the jets points toward Earth. This blazar is situated in the night sky just off the left shoulder of the constellation Orion and is about four billion light years from Earth.
“Scientifically, this is very good news,” says Ignacio Taboada, an associate professor in Georgia Tech’s School of Physics and member of the Center for Relativistic Astrophysics, also at Georgia Tech. As leader of the “Transients Science Working Group” within IceCube, he oversaw all the studies that investigated the correlation between TXS 0506+056’s gamma-ray flare and the neutrino alert of September 22, 2017. “For years, we’ve had a long list of potential sources for high-energy neutrinos. Now we have a specific source—blazars—that we can look at very carefully.”
Georgia Tech PhD student Chun Fai (Chris) Tung contributed to the publications by searching archival IceCube data for very-high-energy neutrinos that might be correlated with blazars other than TXS 0506+056.
“At the highest energies, the universe is essentially opaque to very high energy gamma rays, and the farther away you are, the more opaque the universe is,” Taboada says. “If the blazar had been closer we likely would have seen it with HAWC,” the High-Altitude Water Cherenkov gamma-ray observatory in central Mexico.
One in a million
Equipped with a nearly real-time alert system—triggered when a very high-energy neutrino collides with an atomic nucleus in the Antarctic ice in or near the IceCube detector—the observatory broadcast coordinates of the September 22 neutrino alert to telescopes worldwide for follow-up observations.
Two gamma-ray observatories, NASA’s orbiting Fermi Gamma-ray Space Telescope and the Major Atmospheric Gamma Imaging Cherenkov Telescope, or MAGIC, in the Canary Islands, detected a flare of high-energy gamma rays associated with TXS 0506+056, a convergence of observations that convincingly implicated the blazar as the most likely source.
Fermi was the first telescope to identify enhanced gamma-ray activity from TXS 0506+056 within 0.06 degrees of the IceCube neutrino direction. In a decade of Fermi observations of this source, this was the strongest flare in gamma rays. A later follow-up by MAGIC detected gamma rays of even higher energies.
These observations show that TXS 0506+056 is one of the most luminous sources in the known universe and thus strengthen the multimessenger case for a cosmic engine powerful enough to accelerate high-energy cosmic rays and produce the associated neutrinos. Because neutrinos interact so weakly with matter, IceCube detected only one out of the many millions that sailed through Antarctica’s ice on September 22.
Bolstering these observations are coincident measurements from other instruments, including optical, radio, and X-ray telescopes. “The ability to globally marshal telescopes to make a discovery using a variety of wavelengths in cooperation with a neutrino detector like IceCube marks a milestone in what scientists call multi-messenger astronomy,” says Halzen.
A mystery since 1912
Austrian physicist Victor Hess showed, in 1912, that ionizing particles detected in the atmosphere arrive from space. These cosmic rays are the highest energy particles ever observed, with energies up to a hundred million times the energies of particles in the Large Hadron Collider at CERN in Switzerland, the most powerful human-made particle accelerator.
These extremely high-energy cosmic rays can only be created outside our galaxy and their sources have remained a mystery until now. Scientists had speculated that the most violent objects in the cosmos, like the mysterious gamma ray bursts, colliding galaxies, and the energetic black hole cores of galaxies known as active galactic nuclei, such as blazars, could be the sources.
“Fermi has been monitoring some 2,000 blazars for a decade, which is how we were able to identify this blazar as the neutrino source,” says Regina Caputo, the analysis coordinator for the Fermi Large Area Telescope collaboration. “High-energy gamma rays can be produced either by accelerated electrons or protons. The observation of a neutrino, which is a hallmark of proton interactions, is the first definitive evidence of proton acceleration by black holes.”
“Now, we have identified at least one source of cosmic rays because it produces cosmic neutrinos. Neutrinos are the decay products of pions. In order to produce them, you need a proton accelerator,” says Halzen.
Cosmic rays are mostly protons and are sent speeding across the universe because the places where they are created act in the same way as particle accelerators on Earth, only they are far more powerful. “Theories predict that the emission of neutrinos will be accompanied by the release of gamma rays,” explains Razmik Mirzoyan, the spokesperson of the MAGIC Collaboration. But there are still a lot of questions on how blazars could accelerate particles to the highest energies. “Gamma rays provide information on how the ‘power plants’ in supermassive black holes work,” adds Mirzoyan.
Neutrinos ‘hardly ever stop to interact’
As the latest astrophysical messenger to enter the game, neutrinos bring crucial new information to uncovering the inner workings of these cosmic ray accelerators. In particular, measurements of neutrinos can reveal the mechanisms for particle acceleration of the proton beam in the densest environments that even high-energy gamma rays may not escape.
“For the most part, neutrinos go through everything and hardly ever stop to interact.”
Following the September 22 detection, the IceCube team quickly scoured the detector’s archival data and discovered a flare of over a dozen astrophysical neutrinos detected in late 2014 and early 2015, coincident with the same blazar, TXS 0506+056. This independent observation greatly strengthens the initial detection of a single high-energy neutrino and adds to a growing body of data that indicates TXS 0506+056 is the first known accelerator of the highest energy neutrinos and cosmic rays.
Detecting high-energy astrophysical neutrinos—particles from outside our galaxy—is no easy task. These particles pass through the Earth as if it were glass and are only detectable when they interact with atomic protons and neutrons that are massive enough to stop them. “For the most part, neutrinos go through everything and hardly ever stop to interact,” says Taboada.
A team at the University of Wisconsin-Madison operates the IceCube Neutrino Observatory, which the National Science Foundation primarily funds.
About 20 observatories on Earth and in space have participated in the identification of what scientists deem to be a source of very high-energy neutrinos and, thus, of cosmic rays. Several follow-up observations are detailed in a few other papers that are also being published.
Nearly every parent who’s working to support a family feels constrained by their career choices: Providing financial security for your children usually takes precedence over fulfilling your own dreams and aspirations. If you’re especially fortunate, you don’t have to choose or compromise. But many of us do, even though most of us never start out thinking that way about our working lives. As kids, we aspire to be doctors or astronauts or pop stars, and only as adults–and particularly as parents–do we begin to adjust our career decisions, first to the realities of the workforce and later to the needs and demands of other people (partners, spouses, parents, children).
My dad is an accountant. When I was very young, he served as a comptroller and then started working in small accounting firms, eventually moving out on his own. The older I got, the clearer it became that accounting wasn’t my father’s passion, even though it paid the bills. When I got to college, it struck me that he spent an awful lot of time doing things he didn’t particularly like. And this observation motivated me to think differently, and pursue career paths I’d find more fulfilling than he seemed to find his.
There’s been a lot of debate in recent years about “passion careers” and “dream jobs,” including whether they’re reasonable things to pursue in the first place. But those conversations typically focus narrowly on individual (and implicitly unattached) job seekers, with seemingly little to say to working parents whose career choices are always influenced by concern for their kids. In reality, though, opting to find purpose and fulfillment in the work you do can benefit your children in unexpected ways. It just takes a shift in perspective to understand how.
Vocation versus calling
The past 25 years have witnessed a boom in positive psychology research, a field pioneered by psychologist Martin Seligman, who pointed out that researchers were mainly focusing on mental illness and neglecting to understand what good mental health consists of. A number of researchers, like Ed Diener, for example, have extended into the workplace the focus that Seligman argued for, and there’s now data to help us distinguish conceptually between “vocations” and “callings.” A calling is a motivation to engage in activities (at work) that serve a broader purpose, often to the benefit of other people or society at large. A vocation is a job that satisfies that calling.
Research suggests that people who have a calling and view their work as a vocation are both more dedicated to that work and happier at work than those who don’t. Crucially, it’s all a matter of mind-set; the ability to see your professional life this way is independent of the specific tasks your job entails. For example, people working at animal shelters who clean dirty kennels may still feel that their work benefits abandoned animals.
If you have children, chances are they’re sensitive to the ways you talk about work and react to its pressures. They can see when you truly love the things you do, and when you’re just punching the clock. And even if you never make your feelings toward your work explicit, you’re nonetheless teaching them a lot about how they should view their work lives as they get older.
There’s nothing wrong with simply using your job to pay the bills–your kids rely on you to do that. When there’s something else that you’d much rather be doing for a living, you might hesitate over how changing course might impact your household. That instinct is worth considering carefully, but it’s not a watertight rationale for always playing things safe.
The eyes of beholders
Indeed, taking measured, purposeful risks can benefit your kids as well as you personally. Perhaps all that means for you is enrolling in an evening class toward an advanced degree. Many of the graduate students I teach are parents who tell me how wonderful it is to study alongside their children. Yes, that may mean pulling back on some family activities in order to make time, but you’re giving something back in the process: Your kids gain a model of lifelong learning in action, all in the interests of pursuing meaningful work.
Plus, building a work life that’s rich in desirable challenges can feed back into your home life in surprising and positive ways. Research on motivation suggests that our overall attitudes toward our own lives depend on what we choose to notice about them. When you’re pursuing positive outcomes at work, you’re more likely to take note of the good things elsewhere in your life. The reverse is true, too: When you spend your working hours grinding your teeth and weathering daily catastrophes, you’ll zero in on all the downsides to the rest of your life, too.
The point here isn’t that you should strike out and take career risks unthinkingly or selfishly. It’s that you should think more intentionally about the risks you do and don’t take, because your children internalize all of that anyway. So start just by paying more attention to the ways (good, bad, and middling) that your professional life influences your family life. You may come to conclude that pursuing your calling is one of the best parenting decisions you can make.