Category Archives: Stanford University

Will Trump’s executive order change things at Mexican border?

US Attorney General Jeff Sessions recently announced changes to asylum requirements, leading to thousands of asylum seekers being charged with federal crimes and imprisoned—their children detained separately.

Here, Jayashri Srikantiah, a professor of law and director of the Immigrants’ Rights Clinic at Stanford University, and Lisa Weismann-Ward, clinical supervising attorney and lecturer in law at the university, discuss the evolving policies and President Trump’s new executive order.

Give up on ‘finding your passion’ and try this instead

The advice to “find your passion” might undermine how interests actually develop, according to new research.

In a series of laboratory studies, researchers examined beliefs that may lead people to succeed or fail at developing their interests.

Mantras like “find your passion” carry hidden implications, the researchers say. They imply that once an interest resonates, pursuing it will be easy. But the researchers found that when people encounter inevitable challenges, that mindset makes it more likely they will abandon their newfound interest.

And the idea that passions are found fully formed implies that the number of interests a person has is limited. That can cause people to narrow their focus and neglect other areas.

Fixed mindsets

To better understand how people approach their talents and abilities, the researchers began with prior research from Carol Dweck, a professor of psychology at Stanford University who also contributed to the new work, on fixed versus growth mindsets about intelligence. When children and adults believe that intelligence is fixed—you either have it or you don’t—they can be less resilient to challenges in school.

Here, the researchers looked at mindsets about interests: Are interests fixed qualities that are inherently there, just waiting to be discovered? Or are interests qualities that take time and effort to develop?

To test how these different belief systems influence the way people hone their interests, the researchers conducted a series of five experiments involving 470 participants.

In the first set of experiments, the researchers recruited a group of students who identified either as a “techie” or a “fuzzy”—Stanford vernacular for students interested in STEM topics (techie) versus the arts and humanities (fuzzy). The researchers had both groups of students read two articles, one tech-related and the other related to the humanities.

They found that students who held a fixed mindset about interests were less open to an article that was outside their interest area.

A fixed view may be problematic, says Gregory Walton, an associate professor of psychology at the School of Humanities and Sciences at Stanford. Being narrowly focused on one area could prevent individuals from developing knowledge in other areas that could be important to their field at a later time, he says.

“Many advances in sciences and business happen when people bring different fields together, when people see novel connections between fields that maybe hadn’t been seen before,” he says.

“In an increasingly interdisciplinary world, a growth mindset can potentially lead to this type of innovation, such as seeing how the arts and sciences can be fused,” adds Paul O’Keefe, who was a postdoctoral fellow at Stanford, and is now an assistant professor of psychology at Yale-National University of Singapore College.

“If you are overly narrow and committed to one area, that could prevent you from developing interests and expertise that you need to do that bridging work,” Walton says.

Not interested

The research also found that a fixed mindset can even discourage people from developing in their own interest area.

In another experiment, the researchers piqued students’ interest by showing them an engaging video about black holes and the origin of the universe. The video fascinated most students.

But then, after reading a challenging scientific article on the same topic, the students’ excitement dissipated within minutes. The researchers found that the drop was greatest for students with a fixed mindset about interests.

This can lead people to discount an interest when it becomes too challenging.

“Difficulty may have signaled that it was not their interest after all,” the researchers write. “Taken together, those endorsing a growth theory may have more realistic beliefs about the pursuit of interests, which may help them sustain engagement as material becomes more complex and challenging.”

Developing passions

The authors suggest that “develop your passion” is more fitting advice.

“If you look at something and think, ‘that seems interesting, that could be an area I could make a contribution in,’ you then invest yourself in it,” says Walton. “You take some time to do it, you encounter challenges, over time you build that commitment.”

Dweck notes: “My undergraduates, at first, get all starry-eyed about the idea of finding their passion, but over time they get far more excited about developing their passion and seeing it through. They come to understand that that’s how they and their futures will be shaped and how they will ultimately make their contributions.”

Source: Stanford University

Little nectar ‘worlds’ show how species live together

New research unravels the relative importance of two theories about how species coexist.

Picture, for example, a sticky drop of nectar clinging to the tip of a hummingbird’s beak that drips into the next flower the bird visits. With that subtle change, the microbes within that drop are now in a new environment, teeming with other microbes. This is a small example of species forced to live together in the real world.

It turns out that a less popular theory, one having to do with the way organisms respond and contribute to environmental fluctuations, likely plays a bigger role than ecologists had thought, according to a study of the nectar-dwelling yeasts of Stanford University’s Jasper Ridge Biological Preserve. The work, published in the Proceedings of the National Academy of Sciences, could influence how scientists model the effects of climate change on organisms.

“This particular experiment was motivated by basic curiosity about how species coexist,” says Tadashi Fukami, associate professor of biology. “We experimented with nectar-colonizing yeasts because we had gathered data about them in the wild, such as hummingbird visits, interactions with flowers, and effects of resources. This way we can design lab experiments that have a clear natural context.”

Two theories

Scientists have proposed two mechanisms to explain how species coexist in variable environments, called the storage effect and relative nonlinearity. The storage effect holds that species can coexist if they can store gains for lean times and their lean times don’t overlap, which means they are mostly competing with individuals belonging to their own species for resources during favorable times.

The concept of relative nonlinearity maintains that coexistence can occur when one species thrives off fluctuation in resources, the other thrives off stability in resources, and each species’ use of resources contributes to the state—fluctuation or stability—that benefits the other.

Andrew Letten, senior author of the paper, led the study as a postdoctoral fellow in the Fukami Lab. The goal was to understand each mechanism’s relative importance to coexistence. He found inspiration in a paper led by a theoretical ecologist at Cornell University, which outlined a new method for quantifying the storage effect through statistical simulations.

“Up until that paper, there was no realistic means of quantifying the relative contribution of the two mechanisms,” says Letten, who is now a postdoctoral fellow at the University of Canterbury in New Zealand. “When I read it, I literally felt giddy because it was so serendipitously tailored to what we were already doing, but enabled us to take it so much further.”

Relative nonlinearity wins out

By creating thousands of microcosms, each growing one species of nectar yeasts, the researchers gathered high-resolution data about the complex ways in which the yeasts respond to environmental conditions. Next, they used those data to create scenarios where the yeasts grew in pairs and applied the new method to disentangle the influence of the storage effect from that of relative nonlinearity on the yeasts’ coexistence.

“The idea is, you can mathematically model these coexistence mechanisms, knock them out in the simulations, and then that shows you how those species grow without that mechanism,” explains Po-Ju Ke, graduate student in biology and coauthor of the paper. “For example, relative nonlinearity relies on fluctuations in amino acids in nectar, a primary resource for yeast growth, so we simulated a stable level of amino acids to remove the influence of that mechanism.”
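The key ingredient of relative nonlinearity is that the two species’ growth rates curve differently with the resource, so a fluctuating supply and a stable supply with the same average can favor different species (Jensen’s inequality). Below is a minimal sketch of that ingredient, and of what holding the resource at a constant level removes; the growth curves and numbers are invented for illustration and are not the study’s yeast model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical growth responses to a resource (say, an amino acid concentration):
# species A responds roughly linearly, species B has a saturating (more curved) response.
def growth_a(resource):
    return 0.28 * resource

def growth_b(resource):
    return 2.0 * resource / (2.0 + resource)

mean_resource = 5.0
fluctuating = rng.exponential(mean_resource, size=100_000)  # same mean, but variable

# Average per-capita growth under a stable resource vs. a fluctuating one.
stable_a, stable_b = growth_a(mean_resource), growth_b(mean_resource)
fluct_a, fluct_b = growth_a(fluctuating).mean(), growth_b(fluctuating).mean()

print(f"stable resource:      A={stable_a:.2f}  B={stable_b:.2f}")  # B slightly ahead
print(f"fluctuating resource: A={fluct_a:.2f}  B={fluct_b:.2f}")    # A pulls ahead
```

With these made-up curves, the saturating species does slightly better when the resource is steady, while the more linear responder does better when the resource fluctuates. That is the kind of asymmetry the knock-out simulations erase by fixing amino acids at a stable level.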

Lastly, the researchers compared their simulated results with the results of experiments in which two species were grown together. This work is the first to experimentally tease apart the two mechanisms in real organisms, and the experimental results agreed with the simulations 83 percent of the time.

The big surprise in the findings was that there were instances where a lack of relative nonlinearity led to one species dying out. This contradicts a common assumption among ecologists.

“Storage effect, maybe because it’s an older concept and more intuitive and easier to get data on, has always been assumed to be the main mechanism,” says Fukami. “We found they both can be important, but the main finding is that relative nonlinearity is much more important than most ecologists assumed.”

From micro to macro

Cell and molecular biology often concentrates on studying a particular pathway in intricate detail, whereas ecology tends to focus on larger systems, studying them holistically. In their current work, the Fukami Lab is pursuing research that can apply to both levels.

“As ecologists, we are working to understand holistically how these yeasts are interacting—in the world, with pollinators, with each other, in the nectar—but we can also use the tools that cell biologists have developed to study baker’s yeast to study nectar yeasts in order to gain more precise ecological understanding,” says Callie Chappell, a graduate student in the Fukami Lab and lead author of a different paper in the journal Yeast about developing a general ecological theory using nectar yeasts as new model organisms.

In addition to the fundamental insights into how species can survive together with few resources, the research team hopes that the experiments detailed in the current work will cause ecologists to reconsider how climate change may affect species. Climate change will lead to increased fluctuation in the environment, such as severe weather events, and this work shows that multiple mechanisms of species coexistence in fluctuating environments must be considered simultaneously to predict the fate of species under climate change.

Manpreet Dhami, a former postdoctoral fellow at Stanford, now at Landcare Research, New Zealand, is also a coauthor of the PNAS paper. That work had funding from sources at Stanford and from the National Science Foundation.

Source: Stanford University

Turning blood samples into neurons takes just 4 proteins

Researchers have discovered how to convert human immune cells in blood directly into functional neurons in the laboratory in about three weeks with the addition of just four proteins.

The dramatic transformation doesn’t require the cells to first enter a state called pluripotency but instead occurs through a more direct process called transdifferentiation.

The conversion occurs with relatively high efficiency—generating as many as 50,000 neurons from 1 milliliter of blood—and can be achieved with fresh or previously frozen and stored blood samples, which vastly enhances opportunities for the study of neurological disorders such as schizophrenia and autism.

“Blood is one of the easiest biological samples to obtain,” says Marius Wernig, associate professor of pathology at Stanford University and a member of Stanford’s Institute for Stem Cell Biology and Regenerative Medicine, and senior author of the new paper that appears in the Proceedings of the National Academy of Sciences.

“Nearly every patient who walks into a hospital leaves a blood sample, and often these samples are frozen and stored for future study. This technique is a breakthrough that opens the possibility to learn about complex disease processes by studying large numbers of patients.”

The transdifferentiation technique was first developed in Wernig’s laboratory in 2010 when he and his colleagues showed that they could convert mouse skin cells into mouse neurons without first inducing the cells to become pluripotent—a developmentally flexible stage from which the cells can become nearly any type of tissue. They went on to show that they could also use the technique on human skin and liver cells.

But each approach has been dogged by challenges, particularly for researchers wishing to study genetically complex mental disorders, such as autism or schizophrenia, for which researchers require many hundreds of individual, patient-specific samples in order to suss out the relative contributions of dozens or more disease-associated mutations.

“Generating induced pluripotent stem cells from large numbers of patients is expensive and laborious. Moreover, obtaining skin cells involves an invasive and painful procedure,” Wernig says. “The prospect of generating iPS cells from hundreds of patients is daunting and would require automation of the complex reprogramming process.”

Although it’s possible to directly convert skin cells to neurons, the biopsied skin cells first have to grow in the laboratory for a period of time until their numbers increase—a process likely to introduce genetic mutations not found in the person from whom the cells were obtained.

The researchers wondered if there was an easier, more efficient way to generate patient-specific neurons.

In the new study, Wernig and his colleagues focused on highly specialized immune cells called T cells that circulate in the blood. T cells protect us from disease by recognizing and killing infected or cancerous cells. In contrast, neurons are long and skinny cells capable of conducting electrical impulses along their length and passing them from cell to cell. But despite the cells’ vastly different shapes, locations, and biological missions, the researchers found it unexpectedly easy to complete their quest.

“It’s kind of shocking how simple it is to convert T cells into functional neurons in just a few days,” Wernig says. “T cells are very specialized immune cells with a simple round shape, so the rapid transformation is somewhat mind-boggling.”

The resulting human neurons aren’t perfect. They lack the ability to form mature synapses, or connections, with one another. But they are able to carry out the fundamental functions of neurons, and Wernig and his colleagues are hopeful they will be able to further optimize the technique in the future. In the meantime, they’ve started to collect blood samples from children with autism.

“We now have a way to directly study the neuronal function of, in principle, hundreds of people with schizophrenia and autism,” Wernig says. “For decades we’ve had very few clues about the origins of these disorders or how to treat them. Now we can start to answer so many questions.”

Former postdoctoral scholar Koji Tanabe and graduate student Cheen Ang are the study’s lead authors. The National Institutes of Health, the California Institute for Regenerative Medicine, the New York Stem Cell Foundation, the Howard Hughes Medical Institute, the Siebel Foundation, the Stanford Schizophrenia Genetics Research Fund, and the Stanford pathology department supported the work.

Source: Stanford University

Tiny tremors may not warn of big earthquakes to come

Tiny underground tremors called foreshocks were thought to warn that a big earthquake might be on the way. Now, a new study suggests they may be indistinguishable from ordinary earthquakes.

The previous evidence came from a 7.6 magnitude earthquake in 1999 near Izmit, Turkey, that killed more than 17,000 people. A 2011 study in the journal Science found that the deadly quake was preceded by a series of small foreshocks—potential warning signs that a big seismic event was imminent.

“We’ve gone back to the Izmit earthquake and applied new techniques looking at seismic data that weren’t available in 2011,” says William Ellsworth, a professor (research) of geophysics at Stanford University’s School of Earth, Energy & Environmental Sciences and lead author of the new paper, which appears in Nature Geoscience.

“We found that the foreshocks were just like other small earthquakes. There was nothing diagnostic in their occurrence that would suggest that a major earthquake was about to happen.”

“We’d all like to find a scientifically valid way to warn the public before an earthquake begins,” says coauthor Fatih Bulut, an assistant professor of geodesy at Boğaziçi University’s Kandilli Observatory and Earthquake Research Institute. “Unfortunately, our study doesn’t lead to new optimism about the science of earthquake prediction.”

Scientists have proposed two ideas for how major earthquakes form; under one of them, foreshocks could serve as a warning of a larger quake, if scientists can detect and recognize them.

“About half of all major earthquakes are preceded by smaller foreshocks,” Ellsworth says. “But foreshocks only have predictive value if they can be distinguished from ordinary earthquakes.”

One idea, known as the cascade model, suggests that foreshocks are ordinary earthquakes that travel along a fault, one quake triggering another one nearby. A series of smaller cascading quakes could randomly trigger a major earthquake, but could just as easily peter out. In this model, a series of small earthquakes wouldn’t necessarily predict a major quake.

“It’s a bit like dominos,” Bulut says. “If you put dominos on a table at random and knock one over, it might trigger a second or third one to fall down, but the chain may stop. Sometimes you hit that magic one that causes the whole row to fall.”

Another theory suggests that foreshocks are not ordinary seismic events but distinct signals of a pending earthquake driven by slow slip of the fault. In this model, foreshocks repeatedly rupture the same part of the fault, causing it to slowly slip and eventually trigger a large earthquake.

In the slow-slip model, repeating foreshocks emanating from the same location could be early warnings that a big quake is coming. The question had been whether scientists could detect a slow slip when it is happening and distinguish it from any other series of small earthquakes.

In 2011, a team argued in Science that the foreshocks preceding the 1999 quake in Izmit were driven by slow slip, and could have been detected with the right equipment—the first evidence that foreshocks would be useful for predicting a major earthquake.

“That result has had a large influence in thinking about the question of whether foreshocks can be predictive,” Ellsworth says.

The city of Izmit is located on the North Anatolian Fault, which stretches about 900 miles (1,500 kilometers) across Turkey. For the 2011 study, a team analyzed data from a single seismic station several miles from the earthquake epicenter, which ultimately recorded seismograms of 18 foreshocks occurring about 9 miles (15 kilometers) below the surface—very close to where the larger earthquake began—and each with similar waveforms.

Those similarities led the authors to conclude that all of the foreshocks repeatedly broke the same spot on the fault, driven by slow slip that ultimately triggered the major earthquake. They concluded that monitoring similar events could provide timely warning that a big quake is imminent.
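A standard way to judge whether small quakes repeatedly rupture the same patch of fault is to measure how similar their recorded waveforms are, typically with a correlation coefficient; near-identical records point to a repeating, co-located source. Here is a minimal sketch of that kind of similarity check, assuming the seismograms are already aligned and equal in length; the synthetic records below stand in for real Izmit data, which this sketch does not use.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-lag normalized correlation between two equal-length seismograms."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b))

# Illustrative only: three synthetic "foreshock" records built from a common template.
rng = np.random.default_rng(0)
template = rng.normal(size=500)
records = [template + 0.1 * rng.normal(size=500) for _ in range(3)]

# Pairwise similarity; values near 1 would be consistent with a repeating source.
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        print(f"records {i} and {j}: {similarity(records[i], records[j]):.3f}")
```

In practice the records would first be filtered and aligned on a common phase arrival; the point here is only the similarity measure, which is what the 2011 interpretation leaned on and what the additional stations allowed the new study to re-examine.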

“The Science paper concluded that there was a lot of slow slip, and had we been there with the right instruments we might have seen it,” Ellsworth says. “We decided to test the idea that the foreshocks were co-located.”

Instead of relying on data from one seismic station, Ellsworth and Bulut analyzed seismograms recorded at nine additional stations during the 1999 earthquake.

With more stations, Ellsworth and Bulut identified a total of 26 foreshocks. None were identical, and the largest ones progressively moved from west to east along the fault. This finding is consistent with the cascade model, where an ordinary earthquake triggers another quake on a neighboring part of the fault, but doesn’t necessarily predict a major quake.

Bulut and Ellsworth found no evidence that slow slip played a role in triggering the Izmit earthquake.

“The authors of the Science paper were quite optimistic, but what they proposed had happened did not happen,” Ellsworth says.

What the researchers don’t know is why this series of cascading foreshocks triggered a massive earthquake when so many others don’t. Without better seismic instrumentation, important challenges like predicting earthquakes will remain unsolved.

“We’re not giving up on foreshocks just because we currently can’t tell them apart from other earthquakes,” Ellsworth says. “We want to understand if they have predictive value and if not, why not.

“Answering that question will require observations made close to the action, deep in the heart of the earthquake machine, not as we currently do from the surface where we’re blind to changes deep underground.”

Source: Stanford University

How to use stats to fight racial inequality, not support it

Using statistics to inform the public about racial disparities can backfire. Worse yet, it can cause some people to be more supportive of the policies that create those inequalities, according to new research.

“One of the barriers of reducing inequality is how some people justify and rationalize it,” says Rebecca Hetey, a psychology researcher at Stanford University. “A lot of people doing social justice work wonder why attitudes are so immune to change. Our research shows that simply presenting the numbers is not enough.”

If raw numbers don’t always work, what might?

In a new research paper published in Current Directions in Psychological Science, Hetey and psychology professor Jennifer Eberhardt propose strategies anyone could use to talk about racial disparities that exist across society, from education to health care and criminal justice systems.

Facts should come along with context that challenges stereotypes, the researchers say, noting that discussions should emphasize the importance of policies in shaping racial inequalities.

Misunderstood findings

The new paper builds on research Eberhardt, Hetey, and colleagues have conducted over several years about the role of race in policing and in the criminal justice system more broadly. In a 2017 study, the researchers worked with the Oakland Police Department and found that, although Oakland officers are professional overall, they spoke less respectfully to black residents than to their white counterparts.

“We are working hard to better understand the sources and consequences of this racial disparity in language use,” Eberhardt says.

In 2014, the researchers also found that white Americans did not show support for criminal justice reform after being informed of statistics about racial disparities in prisons: African Americans make up nearly 40 percent of the prison population but only 13 percent of the US population. Instead, the study participants became more supportive of punitive policies like California’s Three Strikes law and New York City’s stop-and-frisk policy. As the researchers pointed out, these laws disproportionately affected people of color and contributed to the United States having the largest per-capita prison population in the world.

When that research was first published, Hetey and Eberhardt noticed how their findings sometimes were misunderstood.

“Some people concluded that we should stop talking about race and inequality at all,” Hetey says. “And that is not the answer here. The fact is that race matters, and stereotypes can be very powerful.”

Ways to improve

Hetey and Eberhardt encourage providing context alongside statistics.

For example, they say it might backfire to report only that 60 percent of traffic stops made in Oakland, California, were of African Americans. They suggest providing other background information, like the fact that African Americans make up 28 percent of the city’s population or that African Americans are stopped for less severe traffic offenses than whites are.

“Stripped of context, standalone statistics may simply be used as ‘evidence’ of the stereotype that blacks are prone to criminality,” the researchers write.

It is important to offer information about the history of these disparities in the US and how they came about, which might help convey that racial inequality is not natural or due to fixed stereotypical traits, the researchers say.

Another strategy is to talk about the role policy plays—especially policy change—in perpetuating or preventing inequality.

For example, research has shown racial disparities in certain types of searches that police conduct. Blacks are disproportionately subjected to consent searches compared to whites.

In response, officials have enacted policy changes that mandate officers get written consent or explicitly tell those they stop that they have the right to deny an officer’s search request. In Oakland, this policy change led to a huge reduction in the number of consent searches and an overall reduction of the racial disparity, the researchers say.

“We know that persistent inequality has a lot to do with institutions and their practices,” Hetey says. “If we ignore this, we become blind to the way institutions contribute to producing and continuing inequality.”

Source: Stanford University

Will ACA dropouts unravel the insurance market?

People who take advantage of the Affordable Care Act when they need health care and then drop out when that need is no longer a factor could threaten the insurance market, a new working paper warns.

“If you have too many people who drop out after a few months of coverage, you might end up in a situation where insurers don’t want to offer any insurance at all in the market,” says Petra Persson, an assistant professor of economics in the Stanford University School of Humanities and Sciences.

The ACA, also known as Obamacare, passed in March 2010 with the goal of making health insurance more accessible. It established a competitive marketplace where individuals could shop for federal and state-level health care plans. Over 2014 and 2015—the first two years of the program—the share of Americans covered by individually purchased health insurance rose by 50 and 75 percent, respectively.

Little risk

Health care consumption surged, especially in low-income households and families with young children. But so did attrition, researchers say. Dropout was sharpest after just one month of coverage, and only half of all new enrollees committed a full year to an insurance program.

To analyze enrollment and attrition, researchers studied 104,233 households that purchased health insurance in California either before or after the ACA came into effect.

The researchers examined spending habits and income sources for possible explanations of why people might have discontinued health care coverage. For example, did they drop out because they could no longer afford it, perhaps after a job loss or another large expense?

The researchers found that this was the case before the ACA came into effect. Pre-ACA, people often dropped out early because they experienced a loss of income, like unemployment. But post-ACA, the loss of income was much less important in explaining early dropout.

“These findings indicate that the ACA limited the risk of being forced to drop insurance coverage due to unexpected liquidity shocks,” Persson says.

If not income shock, then what?

The researchers found that some people strategically drop coverage after they have used the health care services they need.

“Our analysis shows that many consumers are strategically signing up for insurance to help defray the costs of non-chronic, potentially discretionary, health care needs and then dropping coverage once they have satisfied these needs,” says Rebecca Diamond, an assistant professor in the Graduate School of Business.

“The regulatory structure of the ACA law potentially incentivizes exactly this behavior,” the researchers write, noting that because the ACA prevents insurers from discriminating against applicants, they cannot legally reject applicants who strategically dropped coverage the previous year.

This behavior makes it difficult for insurers to set prices, Persson says.

Unexpected response

When people consume a year’s worth of health care in only a three-month period—and only pay a portion of the annual premium—it can be incredibly expensive for insurers. They can only guess what fraction of policyholders will end up dropping out midyear.
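A purely hypothetical back-of-the-envelope example (the premium and claims figures below are invented, not taken from the study) shows the insurer’s problem:

```python
# Hypothetical annual figures for one enrollee (illustrative only, not study data).
monthly_premium = 400           # dollars per month
expected_annual_claims = 4_800  # what the plan was priced to cover over 12 months

# A committed enrollee roughly breaks even for the insurer.
full_year_margin = 12 * monthly_premium - expected_annual_claims
print("full-year enrollee margin:", full_year_margin)   # 0

# A strategic enrollee who packs a year's worth of care into 3 months, then drops out.
dropout_margin = 3 * monthly_premium - expected_annual_claims
print("3-month dropout margin:   ", dropout_margin)      # -3600
```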

The researchers discovered a counterintuitive response from insurers: Health care plans that experienced more dropouts reduced their premium prices the following year.

“Insurers are trying to increase the demand from the pool of consumers who don’t drop out,” says Diamond, observing that these are the people who are more price sensitive to the cost of an annual plan.

“People who drop out are going to be less sensitive to the price set by the plan. They are always going to be willing to pay a higher monthly premium because they know they are not going to pay the full annual amount.”

While lowered annual premiums may seem like a beneficial result for committed health care consumers, the presence of dropouts undermines the stability of the market, the researchers say, adding that as a result, insurers may be unwilling to offer plans in the individual market.

The ACA has been especially effective in providing lower-income households with health care coverage through a market that previously had largely served more affluent households, Persson says. But for ACA to continue being effective, enrollees must stay enrolled, she says.

While the ACA originally came with penalties for ceasing coverage early, they were not enough, researchers say. The cost analysis shows it was still cheaper for new enrollees to pay the fine for dropping out midyear than to pay a full year of premiums.

The recent removal of the individual mandate will likely increase the midyear dropout rate, Diamond says. “More dropout will raise financial pressure on insurers, increasing the possibility that the market unravels completely.”

Timothy McQuade of the Graduate School of Business and Michael J. Dickstein of New York University are coauthors of the paper.

Source: Stanford University

Too much pumping gets arsenic into drinking water

New research suggests that, as intensive groundwater pumping makes the ground sink, it also allows arsenic to move into groundwater aquifers that supply drinking water for 1 million people.

For decades, intensive groundwater pumping has caused ground beneath California’s San Joaquin Valley to sink, damaging infrastructure.

In their new work, researchers found that satellite-derived measurements of ground sinking could predict arsenic concentrations in groundwater. This technique could be an early warning system to prevent dangerous levels of arsenic contamination in aquifers with certain characteristics worldwide.

“Arsenic in groundwater has been a problem for a really long time,” says lead author Ryan Smith, a doctoral candidate in geophysics at Stanford University’s School of Earth, Energy & Environmental Sciences.

It’s naturally present in Earth’s crust and a frequent concern in groundwater management because of its ubiquity and links to heart disease, diabetes, cancer, and other illnesses. “But the idea that overpumping for irrigation could increase arsenic concentrations is new,” Smith says.

Importantly, the group found signs that aquifers contaminated as a result of overpumping can recover if withdrawals stop. Areas that showed slower sinking compared to 15 years earlier also had lower arsenic levels.

“Groundwater must have been largely turned over,” says coauthor Scott Fendorf, a professor of Earth system science and a senior fellow at the Stanford Woods Institute for the Environment.

A sinking feeling

The research team analyzed arsenic data for hundreds of wells in two different drought periods alongside centimeter-level estimates of land subsidence, or sinking, captured by satellites. They found that when land in the San Joaquin Valley’s Tulare basin sinks faster than 3 inches per year, the risk of finding hazardous arsenic levels in groundwater as much as triples.
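That relationship suggests a simple screening rule for the early warning idea described above: flag locations where satellite-derived subsidence exceeds roughly 3 inches per year for priority arsenic sampling. Here is a minimal sketch of such a flag, assuming subsidence rates have already been estimated from the satellite data; the data structure, well names, and rates are hypothetical.

```python
# Sketch of a satellite-based early-warning flag. The ~3 in/yr threshold comes from the
# study's finding; the wells, field names, and rates below are made up for illustration.
SUBSIDENCE_THRESHOLD_IN_PER_YR = 3.0

wells = [
    {"id": "well-A", "subsidence_in_per_yr": 1.2},
    {"id": "well-B", "subsidence_in_per_yr": 3.8},
]

for well in wells:
    if well["subsidence_in_per_yr"] > SUBSIDENCE_THRESHOLD_IN_PER_YR:
        print(f"{well['id']}: rapid sinking; prioritize arsenic sampling")
    else:
        print(f"{well['id']}: below threshold; routine monitoring")
```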

Aquifers in the Tulare basin are made up of sand and gravel zones separated by thin layers of clay. The clay acts like a sponge, holding tight to water as well as arsenic soaked up from ancient river sediments. Unlike the sand and gravel layers, these clays contain relatively little oxygen, which creates conditions for arsenic to be in a form that dissolves easily in water.

When pumping draws too much water from the sand and gravel areas, the aquifer compresses and land sinks.

“Sands and gravels that were being propped apart by water pressure are now starting to squeeze down on that sponge,” Fendorf explains. Arsenic-rich water then starts to seep out and mix with water in the main aquifer.

When water pumping slows enough to put the brakes on subsidence—and relieve the squeeze on trapped arsenic—clean water soaking in from streams, rain, and natural runoff at the surface can gradually flush the system clean.

However, coauthor Rosemary Knight, a professor of geophysics and affiliated faculty at the Woods Institute, warns against banking too much on a predictable recovery from overpumping.

“How long it takes to recover is going to be highly variable and dependent upon so many factors,” she says.

The researchers say overpumping in other aquifers could produce the same contamination issues seen in the San Joaquin Valley if they have three attributes: alternating layers of clay and sand; a source of arsenic; and relatively low oxygen content, which is common in aquifers located beneath thick clays.

The threat may be more widespread than once thought. Only in the last few years have scientists discovered that otherwise well-aerated aquifers considered largely immune to arsenic problems can in fact be laced with clays that have the low oxygen levels necessary for arsenic to move into groundwater.

“We’re just starting to recognize that this is a danger,” says Fendorf.

Warning system

The revelation that remote sensing can raise an alarm before contamination threatens human health offers hope for better water monitoring.

“Instead of having to drill wells and take water samples back to the lab, we have a satellite getting the data we need,” says Knight.

While well data is important to validate and calibrate satellite data, she explains, on-the-ground monitoring can never match the breadth and speed of remote sensing.

“You’re never sampling a well frequently enough to catch that arsenic the moment it’s in the well,” says Knight. “So how fantastic to have this remote sensing early warning system to let people realize that they’re approaching a critical point in terms of water quality.”

The study builds on research led in 2013 by Laura Erban, then a doctoral student working in Vietnam’s Mekong Delta. “That’s where we started saying, ‘Oh no,’” says Fendorf, who was a coauthor of that paper.

As in the San Joaquin Valley, areas of the Mekong Delta where land was sinking more showed higher arsenic concentrations.

“Now we have two sites in totally different geographic regions where the same mechanisms appear to be operating,” says Fendorf. “That sends a trigger that we need to be thinking about managing groundwater and making sure that we’re not overdrafting the aquifers.”

The National Science Foundation and the US Department of Energy funded the work.

Source: Josie Garthwaite for Stanford University

11 million may be taking the wrong drugs for heart health

More than 11 million Americans may have incorrect prescriptions for aspirin, statins, and blood pressure medications, according to a new study.

Researchers based their findings on an updated set of calculations—known as pooled cohort equations, or PCEs—used to determine the risk of a heart attack or stroke.

The PCEs are the foundation for cardiovascular-disease-prevention guidelines in the United States. They help physicians decide whether to prescribe aspirin, blood pressure, or statin medications, or some combination of these, by estimating the risk a patient may have for a heart attack or stroke.

Most physicians calculate a patient’s risk using a PCE web calculator or a smartphone app; the equations are also built into many electronic health records so that a patient’s risk is automatically calculated during an office visit.
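Under the hood, the PCEs take a standard survival-model form: a weighted sum of (mostly log-transformed) risk factors is compared with a population mean, and a baseline 10-year survival probability is raised to the exponential of the difference. Here is a minimal sketch of that general form; the coefficients, mean, and baseline survival are invented for illustration and are not the published PCE values.

```python
import math

# Illustrative coefficients and baseline survival; NOT the published PCE values.
COEFFS = {"ln_age": 12.0, "ln_total_chol": 11.0, "ln_hdl": -7.0, "ln_sbp": 2.0,
          "smoker": 0.7, "diabetes": 0.6}
MEAN_SUM = 89.0           # population mean of the weighted sum (illustrative)
BASELINE_SURVIVAL = 0.95  # baseline 10-year event-free survival (illustrative)

def ten_year_risk(age, total_chol, hdl, sbp, smoker, diabetes):
    """Estimate 10-year cardiovascular risk in the general PCE functional form."""
    weighted_sum = (COEFFS["ln_age"] * math.log(age)
                    + COEFFS["ln_total_chol"] * math.log(total_chol)
                    + COEFFS["ln_hdl"] * math.log(hdl)
                    + COEFFS["ln_sbp"] * math.log(sbp)
                    + COEFFS["smoker"] * smoker
                    + COEFFS["diabetes"] * diabetes)
    return 1.0 - BASELINE_SURVIVAL ** math.exp(weighted_sum - MEAN_SUM)

# Roughly 6 percent with these made-up numbers.
print(round(ten_year_risk(age=55, total_chol=213, hdl=50, sbp=120, smoker=0, diabetes=0), 3))
```

Updating the equations, as the study did, amounts to re-estimating those coefficients and baselines from newer, more representative cohort data and with better statistical methods.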

But there has been debate over whether the PCEs are based on outdated data and are therefore putting some patients at risk of over- or under-medication.

“We found that there are probably at least two major ways to improve the 2013 equations,” says Sanjay Basu, assistant professor of primary care outcomes research at the School of Medicine at Stanford University and a core faculty member at Stanford Health Policy. “The first was well-known: that the data used to derive the equations could be updated.”

For example, he says, one of the main data sets used to derive the original equations had information from people who were 30-62 years old in 1948, and who would therefore be 100 to 132 years old in 2018—that is, likely dead. The older equations were often estimating people’s risk as too high, possibly by an average of 20 percent across risk groups.

“A lot has changed in terms of diets, environments, and medical treatment since the 1940s,” says Basu, lead author of the study in Annals of Internal Medicine. “So, relying on our grandparents’ data to make our treatment choices is probably not the best idea.”

Furthermore, the researchers found that the old data may not have had a sufficient sample of African Americans. For many African Americans, physicians may have been estimating the risks of heart attacks or strokes as too low.

“So while many Americans were being recommended aggressive treatments that they may not have needed according to current guidelines, some Americans—particularly African Americans—may have been given false reassurance and probably need to start treatment given our findings,” Basu says.

For their study, the researchers updated the PCEs with newer data in an effort to substantially improve the accuracy of the cardiovascular risk estimates.

A second improvement to the equations, the authors found, was to update the statistical methods used to derive the equations.

“We found that by revising the PCEs with new data and statistical methods, we could substantially improve the accuracy of cardiovascular disease risk estimates,” the authors write.

Researchers from the University of Michigan, the University of Washington, and the University of Mississippi also contributed to the study.

Grants from the National Institutes of Health and a Stanford graduate fellowship supported the research, as did Stanford’s medicine department.

Source: Stanford University

Why the welfare backlash? Fear of lost racial status

Fear of losing their socioeconomic standing in the face of demographic change may be driving white Americans’ opposition to welfare programs, even though whites are major beneficiaries of government poverty assistance, according to new research.

While social scientists have long posited that racial resentment fuels opposition to such anti-poverty programs as food stamps, Medicaid, and Temporary Assistance for Needy Families, this is the first study to demonstrate the relationship experimentally, showing a causal link between threatened racial status and attitudes toward welfare.

“With policymakers proposing cuts to the social safety net, it’s important to understand the dynamics that drive the welfare backlash,” says lead author Rachel Wetts, a PhD student in sociology at the University of California, Berkeley. “This research suggests that when whites fear their status is on the decline, they increase opposition to programs intended to benefit poorer members of all racial groups.”

The findings, published in the journal Social Forces, highlight a welfare backlash that swelled around the 2008 Great Recession and election of Barack Obama.

Notably, the study found anti-welfare sentiment to be selective: threats to whites’ standing led whites to oppose government assistance programs they believed largely benefit minorities, while leaving unchanged their views of programs they thought were more likely to benefit whites.

“Our findings suggest that these threats lead whites to oppose programs they perceive as primarily benefiting racial minorities,” says senior author Robb Willer, a professor of sociology and social psychology at Stanford University.

Welfare is ‘race-blind’

The work is particularly timely in the face of conservative Republican lawmakers’ efforts to cut federal spending by putting social safety net programs like Medicaid and the Supplemental Nutrition Assistance Program, formerly known as food stamps, on the chopping block.

In the study, researchers tracked a shift in attitudes to welfare around 2008 when America elected Obama, the nation’s first black president, and the country was suffering from a major recession whose reverberations continue to affect tens of millions of whites and non-whites alike.

According to census figures, 43 million Americans lived in poverty in 2016. Whites comprised 43 percent of Medicaid recipients, 36 percent of food stamp recipients, and 27 percent of the beneficiaries of Temporary Assistance for Needy Families.

“Welfare programs are race-blind in that all low-income Americans are eligible to receive them,” Willer says. “So opposition to them, especially during tough economic times, threatens the same safety net that helps whites, as well as minorities, endure economic hardship.”

Turning point in 2008

In three separate studies, researchers analyzed nationally representative survey data of over 7,000 adult American men and women. In addition, they conducted two experiments with 400 participants via Amazon’s Mechanical Turk, an online marketplace.

First, an examination of attitudes to race and welfare in a nationally representative survey found that whites’ racial resentment rose in 2008, the year of the Great Recession and of Barack Obama’s election, suggesting that perceptions of increased political power among minorities were leading whites to sense a threat to their group’s status. At the same time, researchers discovered, whites’ opposition to welfare increased relative to that of minorities.

Next, researchers conducted an experiment in which participants saw one of two graphs highlighting different aspects of US population trends: One emphasized a stable white majority, and the other emphasized the declining white population in the US. White participants who saw information highlighting a decline in the white population reported heightened racial resentment and opposition to welfare programs. And, when asked how they would trim the federal budget, they recommended larger cuts to welfare.

In the third experiment, researchers found that when whites saw a threat to their economic advantage over minorities, they were more likely to want to cut social safety net programs, but only if those programs were portrayed as primarily benefiting minorities, not if they were portrayed as benefiting whites.

“Overall, these results suggest whites’ perceptions of rising minority power and influence lead them to oppose welfare programs,” Wetts says.

Source: UC Berkeley
