Stress in utero harms cognitive skills of poor children

Exposure to an acute stress in utero can have long-term consequences extending into childhood—but only among children in poor households, according to a new study.

The study, which took place in Chile, did not find the same effect among children in upper- or middle-class families.

“These children performed worse on a diverse set of skills critical for educational success, including arithmetic reasoning, verbal fluency, spatial analysis, logical thinking and problem-solving skills,” says study leader Florencia Torche, sociology professor in the Stanford University School of Humanities and Sciences.

Torche also found that while middle- and upper-class families have the resources to mitigate the effects of the event, disadvantaged children without extra help can fall up to half a year behind, according to the research in Demography.

The ability to catch up depends on the family’s socioeconomic resources, she finds.

“This is a troubling finding because it shows that acute stress exacerbates disadvantages that poor children already face,” Torche says.

Stress doesn’t occur on its own

While previous research has examined the effects of chronic stress, little is known about the long-term consequences of an acutely stressful event during pregnancy, says Torche. Acute stresses a pregnant woman could experience include witnessing a violent event, falling victim to a crime, narrowly escaping a serious injury, or losing a job.

But because stress is often correlated with other challenging situations—like family turmoil, relationship difficulties, or financial problems—it can be difficult to study, says Torche. That’s why she used a disaster event to create a natural experiment: a 7.9 magnitude earthquake that occurred June 13, 2005, in Tarapaca, Chile.

“If we want to disentangle the effect of stress from these other common correlates, we need to isolate it,” Torche says.

Unlike most natural disasters with devastating consequences—such as property damage, long-term displacement, or public health emergencies—the losses from the Tarapaca earthquake were relatively small: 11 people died, 130 were injured, and 180 homes were destroyed. With limited spillover effects that could have influenced health outcomes of a mother and her unborn child, Torche was able to more clearly isolate the direct impact of an acute stress on pregnant women.

Torche then combined birth records with a random sample of 591 children whose mothers experienced the earthquake during their pregnancy and compared that data with a control group of 558 randomly selected children born in the same time period in Chilean counties the earthquake didn’t affect.

Torche has closely studied these children since birth. Her 2011 study found that exposure to an acute stress during pregnancy increased the number of preterm births.

“Given that preterm birth is associated with health and developmental problems during childhood, this finding provided initial evidence that prenatal exposure to acute stress could have negative consequences for children,” she says.

Half a year behind

In the new study, Torche checked in with these children, who were now 7 years old and starting school.

With a team of trained field researchers, Torche conducted a series of cognitive tests with each child in the treatment and control groups.

They assessed abilities such as verbal comprehension, spatial reasoning, memory, and how quickly children processed information needed to perform a task.

At first, Torche found no statistically significant effects when she looked at the results for the entire sample. But as she dug deeper into the data, she made a striking discovery: only the children from poor households showed negative effects. There was no effect on children from middle- and upper-class families.

“It was only when I broke the results down by socioeconomic status that I found a very strong negative effect among the most disadvantaged families,” she says.

Torche then broke the results down even further. Because poor children already face a range of educational disparities, she asked how disadvantaged children who experienced the earthquake compared to poor children in the control group who did not.

Torche found a difference that amounted to more than half a year of cognitive development. In other words, a low-income child in the second grade who experienced stress in utero was performing closer to a first-grade level.

Access to resources

After establishing an unequal effect of stress, Torche conducted a set of qualitative interviews to understand why children from middle- and upper-class families were unaffected. At the time of these interviews, the children were mostly 9 years old and in fourth grade.

In their interviews, upper- and middle-class parents shared that they constantly assessed their children’s strengths and weaknesses. If a child showed signs of struggling in any way, they mobilized resources to intervene. This included hiring tutors, signing up for structured activities, and interacting more with teachers and the school to help their child inside and outside of the classroom.

“While some disadvantaged families have also resorted to the assistance of experts and educators, and have requested institutional support, they face substantial barriers in terms of time, economic resources, and, equally important, access to social networks and mastery of cultural resources to effectively negotiate with institutions for advantages for their children,” Torche writes in the paper.

Torche notes that these class-based parental responses, by minimizing the effects of prenatal stress for better-off children, could further exacerbate social class disparities.

This research is yet another piece of evidence that shows the importance of supporting disadvantaged women and their children, Torche says.

“The effect of prenatal exposure to an acute stressor emerged only among the most disadvantaged members of society. Given that these women are particularly vulnerable, and less likely to have access to health care, increasing access to health care and sources of support for this population is an important task,” she says.

Source: Stanford University

The post Stress in utero harms cognitive skills of poor children appeared first on Futurity.

Frequent skin cancer may be a huge warning sign

People who develop abnormally frequent cases of a skin cancer known as basal cell carcinoma appear to be at significantly increased risk for the development of other cancers, including blood, breast, colon, and prostate cancers, according to a new, preliminary study.

Mutations in a panel of proteins responsible for repairing DNA damage likely cause the increased susceptibility, researchers say.

“We discovered that people who develop six or more basal cell carcinomas during a 10-year period are about three times more likely than the general population to develop other, unrelated cancers,” says senior author Kavita Sarin, assistant professor of dermatology at Stanford University.

“We’re hopeful that this finding could be a way to identify people at an increased risk for a life-threatening malignancy before those cancers develop.”

The research appears in JCI Insight.

Canary in the coal mine

The skin is the largest organ of the body and the most vulnerable to DNA damage caused by the sun’s ultraviolet rays. Try as one might, it’s just not possible to completely avoid sun exposure, which is why proteins that repair DNA damage are important to prevent skin cancers like basal cell carcinoma.

Most of the time this system works well. But sometimes the repair team can’t keep up. Basal cell carcinomas are common—more than 3 million cases a year are diagnosed in the United States alone—and usually highly treatable.

Sarin and lead author Hyunje Cho, a medical student, wondered whether the skin could serve as a kind of canary in the coal mine to reveal an individual’s overall cancer susceptibility. “The skin is basically a walking mutagenesis experiment,” Sarin says. “It’s the best organ to detect genetic problems that could lead to cancers.”

Sarin and Cho studied 61 people treated for unusually frequent basal cell carcinomas—an average of 11 per patient over a 10-year period. They investigated whether these people may have mutations in 29 genes that code for DNA-damage-repair proteins.

“We found that about 20 percent of the people with frequent basal cell carcinomas have a mutation in one of the genes responsible for repairing DNA damage, versus about 3 percent of the general population. That’s shockingly high,” Sarin says.

Furthermore, 21 of the 61 people reported a history of additional cancers, including blood cancer, melanoma, prostate cancer, breast cancer, and colon cancer—a prevalence that suggests the frequent basal cell carcinoma patients are three times more likely than the general population to develop cancers.
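
As a back-of-envelope check, the reported figures hang together: 21 of 61 is roughly a third of the cohort, and a threefold elevation implies a general-population comparison rate of about 11 percent (the baseline here is inferred from the numbers above, not quoted from the paper):

```python
# Quick sanity check of the prevalence figures reported above.
cases_with_other_cancers = 21
cohort_size = 61

cohort_prevalence = cases_with_other_cancers / cohort_size
print(round(cohort_prevalence, 3))  # 0.344: about a third of the cohort

# "Three times more likely than the general population" implies a baseline of:
implied_baseline = cohort_prevalence / 3
print(round(implied_baseline, 3))  # roughly 0.115
```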

‘A strong correlation’

To confirm the findings, the researchers applied a similar analysis to a large medical insurance claims database. Over 13,000 people in the database had six or more basal cell carcinomas; these people also were over three times more likely to have developed other cancers, including colon, melanoma, and blood cancers.

Finally, the researchers identified an upward trend: the more basal cell carcinomas an individual reported, the more likely that person was to have had other cancers as well.

“I was surprised to see such a strong correlation,” Sarin says. “But it’s also very gratifying. Now we can ask patients with repeated basal cell carcinomas whether they have family members with other types of cancers, and perhaps suggest that they consider genetic testing and increased screening.”

The researchers are continuing to enroll patients in the ongoing study to learn whether particular mutations in genes responsible for repairing DNA damage are linked to the development of specific malignancies. They’d also like to conduct a similar study in patients with frequent melanomas. But they emphasize that there’s no reason for people with occasional basal cell carcinomas to worry.

“About 1 in 3 Caucasians will develop basal cell carcinoma at some point in their lifetime,” Sarin says. “That doesn’t mean that you have an increased risk of other cancers. If, however, you’ve been diagnosed with several basal cell carcinomas within a few years, you may want to speak with your doctor about whether you should undergo increased or more intensive cancer screening.”

The Dermatology Foundation, the National Institutes of Health, the Stanford Society of Physician Scholars, the American Skin Association, and Pellepharm Inc. supported the research. Stanford’s dermatology department also supported the work.

Two of the coauthors are cofounders, directors, and officers of Pellepharm, a biotechnology company focused on rare dermatological conditions.

Source: Stanford University

Moral outrage online can backfire big time

When outcry against offensive behavior on social media goes viral, people may see those challenging the behavior less as noble heroes doing the right thing and more as bullies doling out excessive punishment, according to a new study.

Through a series of laboratory studies, Benoît Monin, a professor of ethics, psychology, and leadership at the Graduate School of Business and professor of psychology at Stanford University, and PhD candidate Takuya Sawaoka found that while comments against offensive behavior are seen as legitimate and even admirable as individual remarks, they may lead to greater sympathy for the offender when they multiply.

Viral anger

“One of the features of the digital age is that anyone’s words or actions can go viral, whether they intend to or not,” says Sawaoka.

“In many cases, the social media posts that are met with viral outrage were never intended to be seen by people outside of the poster’s social circle. Someone doesn’t even need to be on social media in order for their actions to go viral.”

Because of social media, responses to questionable behavior reach further than ever before.

“We’ve all either been in one of those maelstroms of outrage or just one step away from one as bystanders on our social media news feeds,” says Monin, noting how frequent these public outcries have become on social media.

For example, in 2013 there was public outcry over a young woman who tweeted that she couldn’t get AIDS while traveling to Africa because she was white. Her post, which she says she intended as a joke, went viral across social media and quickly made its way into the news. It led to her losing her job.

“On the one hand, speaking out against injustice is vital for social progress, and it’s admirable that people feel empowered to call out words and actions they believe are wrong,” says Sawaoka. “On the other hand, it’s hard not to feel somewhat sympathetic for people who are belittled by thousands of strangers online, and who even lose friends and careers as a result of a poorly thought-out joke.”

‘Outrage at the outrage’

Sawaoka and Monin put their observations to the test. They conducted six experiments with a total of 3,377 participants to examine how people perceived public outcry to an offensive or controversial post on social media. The researchers set up a variety of scenarios, including asking people how they felt when there were only one or two comments versus a mass of replies.

In one study, the researchers showed participants a post taken from a real story of a charity worker who posted a photograph of herself making an obscene gesture and pretending to shout next to a sign that read “Silence and Respect” at Arlington National Cemetery.

They asked participants how offensive they found the photograph, as well as what they thought about the responses to the post.

The researchers found that when participants saw the post with just a single comment condemning it, they found the reaction applaudable.

When they saw that reply echoed by many others, they viewed the original reply—which had been praiseworthy in isolation—more negatively. Early commenters were de facto penalized for later, independent responses, they say.

“There is a balance between sympathy and outrage,” says Monin about their findings. “The outrage goes up and up but at some point sympathy kicks in. Once a comment becomes part of a group, it can appear problematic. People start to think, ‘This is too much—that’s enough.’ We see outrage at the outrage.”

What about a white supremacist?

The researchers were curious to know whether people would feel less sympathetic depending on the status of the offender. Would they feel differently if something offensive was said by a well-known person, or by someone many people regard as abhorrent, like a white supremacist?

In one study, participants were shown a social media post taken from a real story where a comedian ridiculed overweight women. The researchers set up two conditions: one where they referred to him as an average social media user, and another where they said he was an up-and-coming comedy actor.

Mirroring their earlier findings, the researchers found that a high-profile persona elicited no less sympathy than an average user, even though participants believed a prominent poster could cause more harm. And as in their previous results, people viewed individual commenters less favorably after outrage went viral.

When Sawaoka and Monin tested for affiliation to a white supremacist organization, they found similar results. Although participants were less sympathetic toward a white supremacist making a racist comment, they did not view the individuals who participated in the outrage any differently. They still perceived the display of viral outrage as bullying.

“These results suggest that our findings are even more broadly applicable than we had originally anticipated, with viral outrage leading to more negative impressions of individual commenters even when the outrage is directed toward someone as widely despised as a white supremacist,” Sawaoka and Monin write.

No quick fix

The question about how to respond to injustice in the digital age is complex, Sawaoka and Monin conclude in the paper.

“Our findings illustrate a challenging moral dilemma: A collection of individually praiseworthy actions may cumulatively result in an unjust outcome,” Sawaoka says.

“Obviously, the implication is not that people should simply stay silent about others’ wrongdoing,” he clarifies. “But I think it is worth reconsidering whether the mass shaming of specific individuals is really the best way to achieve social progress.”

Source: Stanford University

Fishing bans protect wildlife without harming fishers

Fishing bans don’t have to hurt fishing communities in favor of environmental and wildlife protection, according to a new study.

Researchers tracked vessels during a short-lived trawling moratorium in the Adriatic Sea and found that fishers maintained their catch levels by fishing elsewhere. The findings suggest that such bans can protect overfished regions without hurting people’s livelihoods and could influence efforts to protect other sensitive regions.

“Our findings demonstrate how even in areas where there’s intense and complex use, it is possible for different parties to achieve success,” says study coauthor Fiorenza Micheli, professor of marine science at Stanford University. The findings informed a European Union decision to extend the protection.

The study tracked fishing vessels through the Automatic Identification System (AIS), a common onboard technology that regularly transmits a vessel’s position as a way of preventing collisions. By tracking the boats, researchers found that fishing vessels that complied with a one-year fishing ban maintained their catch levels by moving to other areas.
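
The core spatial question behind this kind of AIS tracking is whether a transmitted position falls inside the closed area. A minimal sketch, assuming a simple polygon boundary (the coordinates below are hypothetical, not the real Jabuka-Pomo boundary, and the study’s actual pipeline is more involved than this):

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is (lon, lat) inside a polygon given as (lon, lat) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge straddle the point's latitude?
        if (y1 > lat) != (y2 > lat):
            # Longitude where the edge crosses that latitude.
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical box standing in for a closed fishing area in the Adriatic.
closed_area = [(15.0, 43.0), (16.5, 43.0), (16.5, 43.6), (15.0, 43.6)]

print(point_in_polygon(15.7, 43.3, closed_area))  # True: position inside the ban area
print(point_in_polygon(14.2, 43.3, closed_area))  # False: vessel fishing elsewhere
```

Checking each decoded AIS position report against the boundary this way is how compliance, and displacement to other grounds, can be measured from vessel tracks.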

In addition to supporting more permanent protection in the Adriatic, the results hold promise for other highly exploited areas around the world where enforcement is challenging.

The Adriatic Sea hosts a large fraction of all recorded Mediterranean marine species. Because of its richness, the sea has long been exploited and suffers from habitat damage.

In particular, the intensely exploited Jabuka-Pomo area in the central Adriatic serves as an important breeding ground for a number of commercially valuable species like European hake and Norway lobster.

To protect these resources, Italy and Croatia enacted a one-year ban in the area against trawling the sea floor with large nets. The ban, enacted in 2015, came after decades of scientific investigation. At the time, the fishing industry voiced concerns about potential lost income from not being able to access their traditional fishing grounds.

Based on the group’s findings that the ban did not hurt yields, the General Fisheries Commission for the Mediterranean and the European Union extended the closure for an additional three years. Pending evaluation of the ban’s benefits, it may be extended further.

“The decision to extend protection was a significant step forward, but the challenge remains where conservation and socioeconomic goals conflict,” says lead author Robin Elahi, a postdoctoral fellow. The study found that some of the displaced fishing affected other sensitive habitats—a downside that Elahi and Micheli plan to examine more closely in upcoming research.

The group says its approach could be used to observe fishing behavior around other marine protected areas and to track whether the bans force fishers into other sensitive ecosystems. It could even serve as a deterrent to violating fishing restrictions.

“If we want to continue to catch fish, we have to create places where there is no fishing,” says Micheli, a senior fellow at the Stanford Woods Institute for the Environment and co-director of the Stanford Center for Ocean Solutions.

“We’re protecting the sea’s ability to heal itself and ensuring our own economic health in the process—both in the short and long term.”

The research will appear in Frontiers in Ecology and the Environment. Additional coauthors are from Stanford, the Polytechnic University of Marche, the Consiglio Nazionale delle Ricerche, and Navama Technology for Nature.

The Stanford Woods Institute for the Environment’s Environmental Venture Projects program and Oceans 5 supported the work.

Source: Nicole Kravec for Stanford University

After cereal, even healthy people’s blood sugar spikes

The level of sugar in an individual’s blood—especially in individuals who are considered healthy—fluctuates more than traditional means of monitoring, like the one-and-done finger-prick method, would have us believe, according to a new study.

Often, these fluctuations come in the form of “spikes,” or a rapid increase in the amount of sugar in the blood, after eating specific foods—most commonly, carbohydrates. Using a device that keeps extra-close tabs on the ups and downs of blood glucose levels, the new research reveals that most people see only a partial picture of the sugar circulating in their blood.

“There are lots of folks running around with their glucose levels spiking, and they don’t even know it,” says Michael Snyder, professor and chair of genetics at Stanford University and senior author of the study, which appears in PLOS Biology.

The covert spikes are a problem because high blood sugar levels, especially when prolonged, can contribute to cardiovascular disease risk and a person’s tendencies to develop insulin resistance, which is a common precursor to diabetes, he says.

“We saw that some folks who think they’re healthy actually are misregulating glucose—sometimes at the same severity of people with diabetes—and they have no idea,” Snyder says.

The insight came to Snyder after he and his collaborators gave study participants a continuous glucose-monitoring device, which superficially pokes into the surface layer of the skin and takes constant readings of sugar concentrations in the blood as it circulates. With the constant readouts providing more detailed data, Snyder’s group saw not only that glucose dysregulation is more common than previously thought, but they also used the data to start building a machine-learning model to predict the specific foods to which people spike.

The goal is to one day use the framework to compile data from an individual and, based on their continuous glucose readout, direct them away from particularly “spikey” foods.

3 different ‘glucotypes’

Most people who periodically check their blood sugar levels do so with a quick lance to the finger and a device that reads out the blood glucose concentration. The problem with this method is that it captures only a snapshot in time.

The amount of sugar in a person’s blood is not a constant; it ebbs and flows depending on what the person has eaten that day, down to the specific kind of carbohydrate. (For instance, rice, breads, and potatoes are all different kinds of carbohydrates, yet people often digest them differently.)

To get a better read on glucose levels, Snyder fitted 57 people with a device that continuously took blood glucose readings over about two weeks. Most of the participants were healthy or showing signs of prediabetes, and five had type 2 diabetes. Data sent back to the lab showed that there were multiple types of spikers, which were classified into three overarching “glucotypes.” The glucotype categories—low, moderate, and severe—are basically rankings of spike intensity.
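
The paper derives glucotypes from patterns across the full two weeks of readings; as an illustrative sketch only, a classifier keyed to peak spike intensity might look like the following (the `classify_glucotype` helper and its mg/dL thresholds are hypothetical, not the study’s actual method):

```python
# Illustrative only: thresholds and helper are hypothetical, chosen to show
# the idea of ranking continuous glucose traces by spike intensity.

def classify_glucotype(readings_mg_dl, low_max=140, moderate_max=200):
    """Rank a series of continuous glucose readings (mg/dL) by peak spike."""
    peak = max(readings_mg_dl)
    if peak <= low_max:
        return "low"
    elif peak <= moderate_max:
        return "moderate"
    return "severe"

# Two weeks of CGM data would be thousands of points; short series suffice here.
print(classify_glucotype([92, 105, 138, 120]))   # "low": stays under 140 mg/dL
print(classify_glucotype([95, 150, 210, 130]))   # "severe": spikes past 200 mg/dL
```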

“We’re very interested in what it means to be ‘healthy’ and finding deviations from that,” says Snyder. These glucotypes, he says, are subject to change based on diet. The researchers ultimately have two goals for their work: to catch spikes early, and to understand what makes a person spike so that diet adjustments can bring the glucotype into the “low” range.

Often people who are prediabetic have no idea they’re prediabetic. In fact, this is the case about 90 percent of the time. It’s a big deal, Snyder says, as about 70 percent of people who are prediabetic will eventually develop the disease.

“We think that these continuous glucose monitors will be important in providing the right information earlier on so that people can make changes to their diet should they need to,” he says.

Breakfast and blood sugar

In getting at the subtleties of spiking, Snyder conducted a sub-study in which 30 participants using the continuous glucose monitor alternated between three breakfasts: a bowl of cornflakes with milk, a peanut butter sandwich, and a protein bar.

The trio of tests yielded some fairly startling results: After eating one or more of the meals, more than half of the group—whose prior blood sugar tests showed that they were “healthy”—spiked at the same levels as those of people who were prediabetic or diabetic.

What’s more, nearly everyone spiked after eating the cereal.

“We saw that 80 percent of our participants spiked after eating a bowl of cornflakes and milk,” Snyder says. “Make of that what you will, but my own personal belief is it’s probably not such a great thing for everyone to be eating.”

Still, the variables that elicit spikes in an individual—genetics; the population of microbes that live in our bodies; and epigenetics, or changes to gene expression—are critical to understanding glucose dysregulation and the foods that cause glucose spikes. Those parameters are not set in stone, which is why Snyder encourages everyone—including those who think of themselves as healthy—to check their blood sugar with continuous glucose monitoring about once a year.

“Right now we have information about people who do and don’t spike, or are super-spikers, but we need to get smart about why it’s happening,” Snyder says. “I think understanding the microbiome and manipulating it is going to be a big part of this, and that’s where our research is headed next.”

The National Institutes of Health and the National Science Foundation supported the study. Stanford’s genetics department also supported the work.

Source: Stanford University

These non-medical talks elicit better end-of-life plans

New research suggests better results when people with serious diseases discuss their end-of-life decisions with a non-clinical worker.

The findings suggest that patients with a serious illness are more at ease with decisions about their care when they discuss their care preferences with someone outside the medical context, say the researchers.

Patients with advanced cancer who spoke with a trained nonclinical worker about personal goals for care were more likely to talk with doctors about their preferences, report higher satisfaction with their care, and incur lower health costs in their final month of life, researchers from the Stanford University School of Medicine report.

Lead author Manali Patel, assistant professor of medicine, and her colleagues employed a lay health worker to conduct conversations with 213 patients about their personal desires for care and to encourage them to share this information with providers.

The intervention, which appears in JAMA Oncology, was based on prior research that Patel conducted when she was a fellow at Stanford’s Clinical Excellence Research Center, in which patients expressed a preference for having these discussions with nonclinical workers.

“A goals-of-care conversation is not about prognosis. It’s a holistic approach to understanding the patient’s wishes and how they want to experience their life,” Patel says. “You don’t need higher-level training to have that conversation. You just need a very supportive ear.”

Questions about end-of-life

Patel and her fellow researchers followed patients at the Veterans Affairs Palo Alto Health Care System for 15 months after they received a diagnosis of stage 3 or stage 4 cancer, or of recurrent cancer. Half were randomly assigned to speak with a lay health worker about goals of care over a six-month period.

The lay health worker had participated in a training curriculum that Patel created that included an 80-hour online seminar, as well as four weeks of observational training with the hospital’s palliative care team. During several telephone and in-person conversations, the worker led patients through a structured program that addressed questions, such as:

  • “What is your understanding of your cancer?”
  • “What is important to you?”
  • “Have you thought about a time when you could be sicker?”
  • “How would you want to spend your time in that situation?”

Together, they also established care preferences, identified a surrogate decision-maker, and filed an advance directive.

“We trained the worker to address these questions over multiple time periods and to revisit the conversation when unexpected events occurred, such as an emergency department visit or bad scan results,” Patel says.

“How a patient feels and what they express as their desires today may be different from how they may feel a week from now, if they had a really horrible side effect from the chemotherapy that they’re receiving and they’re finding themselves in the hospital for two weeks rather than spending the time with their family.”

Higher satisfaction

Patients in the study who participated in conversations with the lay health worker were more likely to have documentation of end-of-life care preferences in their electronic health records within six months of those conversations starting (92 percent compared with 18 percent in the control group). Researchers used this documentation to gauge whether patients had discussed the topic with their doctors.

Patients in the intervention group also rated their oncology care higher, giving it an average satisfaction score of 9.16 out of 10, compared with the average of 7.83 from the control group. They also posted higher satisfaction scores when queried about their care-related decision-making.

Questions game makes end-of-life planning easier

“This indicates that patients in the intervention were having a better experience with their providers despite having been prompted and activated to discuss really difficult topics,” Patel says. “This is consistent with what other studies have shown indicating that patients value honest and open communication regarding their prognosis.”

Lower costs of care

The researchers also monitored health-care costs and use among patients in the two groups.

They found few significant differences over 15 months; however, for patients who died during the study, the final 30 days diverged markedly. Those who discussed goals of care with the lay health worker were one-sixth as likely to visit the emergency department or be hospitalized as members of the control group, and twice as likely to use hospice services. Their median health-care cost within 30 days of death was $1,048, compared with $23,482 for the control group.

Overall, patients who participated in conversations with the lay health worker used hospice at higher rates than the control group—a finding that tracks with other research, Patel says.

Talk it out to ease tough end-of-life decisions

“Consistently, patients who understand that they have an incurable cancer are more likely to choose less aggressive care, and we see that same result here,” she says. “Communication and listening to patients seem to be the common theme because when providers listen to patients and they’re receiving care that’s concordant with their goals, they seem to have better outcomes, especially at the end of life.”

Support for the work came from the California Health Care Foundation, Veterans Affairs Office of Healthcare Transformation Specialty Care, and the National Institutes of Health, as well as Stanford’s departments of Medicine and of Health Research & Policy.

Source: Stanford University

The post These non-medical talks elicit better end-of-life plans appeared first on Futurity.

Test could give people decades to prevent osteoporosis

A new genetic screen may predict a person’s future risk of osteoporosis and bone fracture, according to new research.

Specifically, the study, one of the largest of its kind, identified 899 regions in the human genome associated with low bone-mineral density, 613 of which have never before been identified.

People deemed to be at high risk—about 2 percent of those tested—were about 17 times more likely than others to develop osteoporosis and about twice as likely to experience a bone fracture in their lifetimes. In comparison, about 0.2 percent of women tested will have a cancer-associated mutation in the BRCA2 gene, which increases their risk of breast cancer to about six times that of a woman without a BRCA2 mutation.

Early identification of people with an increased genetic risk for osteoporosis could be an important way to prevent or reduce the incidence of bone fracture, which according to the National Osteoporosis Foundation affects 2 million people each year and accounts for $19 billion in annual health care costs.

“There are lots of ways to reduce the risk of a stress fracture, including vitamin D, calcium, and weight-bearing exercise,” says Stuart Kim, an emeritus professor of developmental biology at Stanford University School of Medicine and author of the paper in PLOS ONE. “But currently there is no protocol to predict in one’s 20s or 30s who is likely to be at higher risk, and who should pursue these interventions before any sign of bone weakening. A test like this could be an important clinical tool.”

Bone loss

Kim originally approached his investigation as a way to help elite athletes or members of the military learn if they are at risk of bone injury during strenuous training. Once he had compiled the results, however, he saw a strong correlation between people predicted to have the highest risk of low bone-mineral density and the development of osteoporosis and fracture.

Osteoporosis is responsible for as many as 1 in 2 fractures in women and 1 in 4 in men over the age of 50.

Osteoporosis, or porous bone, is a disease that results in a reduction in bone mass due to bone loss or defects in bone production, or both. It’s correlated with a high incidence of bone fracture because the weakened bone is less able to withstand the stress of slips and falls, or sometimes even normal daily activity. It affects millions of Americans and is responsible for as many as 1 in 2 fractures in women and 1 in 4 in men over the age of 50.

Two previous studies have shown that there is a genetic component to osteoporosis; you’re more likely to develop it if you have a family history of the condition. In addition to genetics, your behaviors, including the frequency and type of exercise you prefer and your diet, as well as your weight and gender, also play a large role in bone health.

Recently, genetic studies on large groups of individuals have shown that hundreds of genes are likely involved, each making its own small contribution to either increased or decreased risk of the disease.

An osteoporosis diagnosis often results from a bone-mineral density test that uses X-rays to measure the amount of minerals, such as calcium, in a person’s hip, spine, or heel. But bone-mineral density tests are usually only performed on people with a family history of osteoporosis or those who have experienced a recent fracture from a simple fall.

“The most common clinical algorithm used to detect or predict osteoporosis is called FRAX,” Kim says. “But the catch is that the two largest components of the FRAX algorithm are bone-mineral density and prior fracture. So it’s kind of a circular argument.”

400,000 people

Kim analyzed the genetic data and health information of nearly 400,000 people in the UK Biobank—a vast compendium of de-identified information freely available to public health researchers around the world. For each participant, Kim collected data on bone-mineral density, age, height, weight, and sex, as well as that participant’s genome sequence. He then developed a computer algorithm to identify naturally occurring genetic differences among people with low bone-mineral density.

Using the algorithm, Kim was able to identify 1,362 independent differences, or single-nucleotide polymorphisms, that correlated with low bone-mineral density readings. He then used a machine-learning method called LASSO to further hone the data.

The resulting algorithm assigned a score to each of the nearly 400,000 participants to indicate their risk of low bone-mineral density; subsequent analyses showed that those in the bottom 2.2 percent of these scores were 17 times more likely than their peers to have been diagnosed with osteoporosis and nearly twice as likely to have experienced a bone fracture.
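The workflow described above—fit a sparse model that keeps only informative genetic markers, score every participant, then flag the lowest-scoring tail—can be sketched in a few lines. This is an illustrative toy, not the study's actual pipeline: the genotypes and effect sizes below are simulated, and only the LASSO step and the 2.2 percent cutoff come from the article.

```python
# Illustrative sketch (simulated data, not the study's pipeline):
# fit a LASSO model to genotype data to produce a per-person
# bone-mineral-density score, then flag the lowest-scoring tail.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_people, n_snps = 1000, 200
# Genotypes coded as 0/1/2 copies of the minor allele.
genotypes = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)

# Simulate a phenotype (bone-mineral density) driven by a sparse
# subset of SNPs plus noise.
true_effects = np.zeros(n_snps)
true_effects[:20] = rng.normal(0, 0.5, 20)
bmd = genotypes @ true_effects + rng.normal(0, 1.0, n_people)

# LASSO shrinks most SNP coefficients to exactly zero, keeping only
# the informative markers -- the "honing" step described above.
model = Lasso(alpha=0.1).fit(genotypes, bmd)
kept = np.flatnonzero(model.coef_)

# Each person's predicted BMD serves as a genetic risk score; the
# bottom 2.2 percent are flagged as high risk, mirroring the study.
scores = model.predict(genotypes)
cutoff = np.quantile(scores, 0.022)
high_risk = scores <= cutoff

print(f"SNPs retained by LASSO: {len(kept)} of {n_snps}")
print(f"People flagged high-risk: {high_risk.sum()} of {n_people}")
```

In the real study the phenotype was measured bone-mineral density for nearly 400,000 UK Biobank participants and the candidate markers were the 1,362 single-nucleotide polymorphisms identified earlier.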

Working out may keep young women from shrinking later

“The analysis worked really well,” Kim says. “This is one of the largest genome-wide association studies ever completed for osteoporosis, and it clearly shows the genetic architecture that underlies this important public health problem.”

Kim is now planning to arrange a clinical trial to investigate whether elite athletes and select members of the military identified by the algorithm as being at high risk for osteoporosis and potential fracture can increase their bone-mineral density with simple preventive measures. He’s also interested in conducting a similar study among younger people with no obvious clinical symptoms of bone weakening.

“Fifteen million people in this country have already accessed their genome sequences using direct-to-consumer testing services,” Kim says. “I think this analysis has the potential to become the standard of care in the coming years. It would be a relatively simple measure to identify those who should have their bone-mineral density tested and perhaps take steps at an early age to ensure their future bone health.”

The National Institute on Aging supported the work, as did Stanford’s developmental biology department.

Source: Stanford University

The post Test could give people decades to prevent osteoporosis appeared first on Futurity.

People with depression have low blood levels of this stuff

People with depression have low blood levels of a substance called acetyl-L-carnitine, according to a new study.

“[Depression is] the No. 1 reason for absenteeism at work, and one of the leading causes of suicide…”

Naturally produced in the body, acetyl-L-carnitine is also widely available in drugstores, supermarkets, and health food catalogs as a nutritional supplement. People with severe or treatment-resistant depression, or whose bouts of depression began earlier in life, have particularly low blood levels of the substance.

The findings, which appear in the Proceedings of the National Academy of Sciences, build on extensive animal research. They mark the first rigorous indication that the link between acetyl-L-carnitine levels and depression may apply to people, too.

They also point the way to a new class of antidepressants that could be freer of side effects and faster-acting than those in use today, and that may help patients for whom existing treatments don’t work or have stopped working.

Trying to understand depression

Natalie Rasgon, professor of psychiatry and behavioral sciences at Stanford University, describes the findings as “an exciting addition to our understanding of the mechanisms of depressive illness.”

“As a clinical psychiatrist, I have treated many people with this disorder in my practice,” she says.

Depression, also called major depressive disorder or clinical depression, is the most prevalent mood disorder in the United States and the world, affecting 8-10 percent of the general population at any given time, with every fourth person likely to experience the condition over the course of a lifetime.

“It’s the No. 1 reason for absenteeism at work, and one of the leading causes of suicide,” Rasgon says. “Worse, current pharmacological treatments are effective for only about 50 percent of the people for whom they’re prescribed. And they have numerous side effects, often decreasing long-term compliance.”

“In rodent experiments… a deficiency of acetyl-L-carnitine was associated with depression-like behavior,” says Bruce McEwen, head of the Rockefeller University laboratory where the animal research was conducted. Oral or intravenous administration of acetyl-L-carnitine reversed the animals’ symptoms and restored their normal behavior, he says.

In those studies, the animals responded to acetyl-L-carnitine supplementation within a few days. Current antidepressants, in contrast, typically take two to four weeks to kick in—in animal experiments as well as among patients.

Animal studies by Carla Nasca, a postdoctoral scholar in McEwen’s lab, suggest that acetyl-L-carnitine, a crucial mediator of fat metabolism and energy production throughout the body, plays a special role in the brain, where it works at least in part by preventing the excessive firing of excitatory nerve cells in brain regions called the hippocampus and frontal cortex.

Rasgon cautions against rushing to the store to pick up a bottle of acetyl-L-carnitine and self-medicating for depression.

The new study, which Nasca also initiated, recruited 20- to 70-year-old men and women who had been diagnosed with depression and, amid episodes of acute depression, had been admitted to either Weill Cornell Medicine or Mount Sinai School of Medicine, both in New York City, for treatment.

These participants went through screening via a detailed questionnaire and clinical assessment, plus blood samples and medical histories. Twenty-eight of them were judged to have moderate depression, and 43 had severe depression.

In comparing their blood samples with those of 45 demographically matched healthy people, the depressed patients’ acetyl-L-carnitine blood levels were found to be substantially lower. These findings held true for both men and women, regardless of age.

A word of caution

Further analysis showed that the lowest levels occurred among participants whose symptoms were most severe, whose medical histories indicated they were resistant to previous treatments, or whose onset of the disorder occurred early in life.

New clues to genetics of depression are ‘game-changing’

Acetyl-L-carnitine levels were also lower among those patients reporting a childhood history of abuse, neglect, poverty, or exposure to violence.

These patients, who collectively account for 25-30 percent of all people with major depressive disorder, are precisely the ones most in need of effective pharmacological interventions, says Rasgon, who performed the bulk of the advanced data analysis for the study.

But she cautions against rushing to the store to pick up a bottle of acetyl-L-carnitine and self-medicating for depression.

Carnitine before pregnancy may lower autism risk

“We have many previous examples of how nutritional supplements widely available over the counter and unregulated by the Food and Drug Administration—for example, omega-3 fatty acids or various herbal substances—are touted as panaceas for you-name-it, and then don’t pan out,” she says.

Big questions remain, she adds. “We’ve identified an important new biomarker of major depressive disorder. We didn’t test whether supplementing with that substance could actually improve patients’ symptoms. What’s the appropriate dose, frequency, duration? We need to answer many questions before proceeding with recommendations. This is the first step toward developing that knowledge, which will require large-scale, carefully controlled clinical trials.”

‘Bad thoughts’ connect lousy sleep and depression

Additional researchers from Rockefeller University; Weill Cornell Medical College; the Icahn School of Medicine at Mount Sinai; Duke University; and the Karolinska Institute in Stockholm, Sweden also contributed to the work.

Stanford University shares in a multi-institutional agreement concerning intellectual property resulting from this research. The Hope for Depression Research Foundation, the Pritzker Neuropsychiatric Disorders Research Consortium, and the Robertson Foundation funded the study. Stanford’s psychiatry and behavioral sciences department also supported the work.

Source: Stanford University

The post People with depression have low blood levels of this stuff appeared first on Futurity.

Gut bacteria byproduct protects against Salmonella

Researchers have identified a molecule that serves as natural protection against one of the most common intestinal pathogens.

Salmonella causes about 1.2 million illnesses, 23,000 hospitalizations, and 450 deaths nationwide each year.

Propionate, a byproduct of metabolism by a group of bacteria called the Bacteroides, inhibits the growth of Salmonella in the intestinal tract of mice, according to the researchers. The finding may help to explain why some people are better able to fight infection by Salmonella and other intestinal pathogens and lead to the development of better treatment strategies.

The researchers determined that propionate doesn’t trigger the immune response to thwart the pathogen. Instead, the molecule prolongs the time it takes the pathogen to start dividing by increasing its internal acidity.

Salmonella infections often cause diarrhea, fever, and abdominal cramps. Most people recover within four to seven days. However, the illness may be severe enough to require hospitalization for some patients. Salmonella causes about 1.2 million illnesses, 23,000 hospitalizations, and 450 deaths nationwide each year, according to the Centers for Disease Control and Prevention. Contaminated food causes most cases.

A complex ecosystem

“Humans differ in their response to exposure to bacterial infections. Some people get infected and some don’t, some get sick and others stay healthy, and some spread the infection while others clear it,” says Denise Monack, professor of microbiology and immunology at Stanford University and the senior author of the paper.

“It has been a real mystery to understand why we see these differences among people. Our finding may shed some light on this phenomenon,” Monack says.

“Trillions of bacteria, viruses, and fungi form complex interactions with the host and each other…”

For years, scientists have been using different strains of mice to determine how various genes might influence susceptibility to infection by intestinal pathogens. But this is the first time that researchers have looked at how the variability of gut bacteria in these mice might contribute to their different responses to pathogens.

“The gut microbiota is an incredibly complex ecosystem. Trillions of bacteria, viruses, and fungi form complex interactions with the host and each other in a densely packed, heterogeneous environment,” says Amanda Jacobson, the paper’s lead author and a graduate student in microbiology and immunology. “Because of this, it is very difficult to identify the unique molecules from specific bacteria in the gut that are responsible for specific characteristics like resistance to pathogens.”

Looking at unique reactions

The scientists started with an observation that has been recognized in the field for years: Two inbred strains of mice harbor different levels of Salmonella in their guts after being infected with the pathogen.

“The biggest challenge was to determine why this was happening,” Jacobson says.

First, they determined that the differences in Salmonella growth could be attributed to the natural composition of bacteria in the intestines of each mouse strain. They did this by performing fecal transplants: giving mice antibiotics to kill off their usual gut bacteria and then replacing the microbial community with the feces of other mice, some of which were resistant to Salmonella infection.

Then, the researchers determined which microbes were responsible for increased resistance to Salmonella infection by using machine-learning tools to identify which groups of bacteria were different between the strains.
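The step just described—using machine learning to find which bacterial groups differ between resistant and susceptible mice—can be illustrated with a small sketch. This is an assumed workflow, not the paper's actual code: the abundance data are simulated, with "Bacteroides" planted as the discriminating group, and a random-forest classifier's feature importances stand in for whatever tool the researchers used.

```python
# Illustrative sketch (simulated data, assumed method): rank bacterial
# groups by how well they separate two mouse microbiotas, using a
# random forest's feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

taxa = ["Bacteroides", "Clostridia", "Lactobacillus", "Proteobacteria"]
n_mice = 60

# Resistant mice (label 1) carry markedly more Bacteroides; all other
# groups have the same mean abundance in both microbiotas.
resistant = rng.normal([30, 20, 10, 5], 3, size=(n_mice // 2, 4))
susceptible = rng.normal([10, 20, 10, 5], 3, size=(n_mice // 2, 4))
X = np.vstack([resistant, susceptible])
y = np.array([1] * (n_mice // 2) + [0] * (n_mice // 2))

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank taxa by how much they contribute to telling the groups apart.
ranked = sorted(zip(taxa, forest.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
top_taxon = ranked[0][0]
print(f"Most discriminating taxon: {top_taxon}")
```

With the signal planted in the Bacteroides column, the importance ranking singles it out—paralleling how the real analysis pointed the researchers toward that group and, from there, to propionate.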

“Reducing the use of antibiotics is an added benefit because overuse of antibiotics leads to increased incidence of antibiotic-resistant microbes…”

They identified a specific group of bacteria, the Bacteroides, which was more abundant in mice transplanted with the microbiota that was protective against Salmonella. Bacteroides produce short-chain fatty acids such as formate, acetate, butyrate, and propionate during metabolism, and levels of propionate were threefold higher in mice that were protected against Salmonella growth.

Then, the researchers sought to figure out whether propionate protected against Salmonella by boosting the immune system like other short-chain fatty acids do.

Oops! Our bodies can make Salmonella more toxic

The scientists examined their Salmonella model for the potential impact of propionate on the immune system but found that the molecule had a more direct effect on the growth of Salmonella. Propionate acts on Salmonella by dramatically decreasing its intracellular pH and thus increasing the time it takes for the bacterium to start dividing and growing, the study found.

“Collectively, our results show that when concentrations of propionate, which is produced by Bacteroides, in the gut are high, Salmonella are unable to raise their internal pH to facilitate cellular functions required for growth,” Jacobson says. “Of course, we would want to know how translatable this is to humans.”

Going forward

“The next steps will include determining the basic biology of the small molecule propionate and how it works on a molecular level,” Jacobson says.

In addition, the researchers will work to identify additional molecules made by intestinal microbes that affect the ability of bacterial pathogens like Salmonella to infect and “bloom” in the gut. They are also trying to determine how various diets affect the ability of these bacterial pathogens to infect and grow in the gut and then shed into the environment.

“These findings will have a big impact on controlling disease transmission,” Monack says.

Test finds dangerous Salmonella in cows much faster

The findings could also influence treatment strategies. Treating Salmonella infections sometimes requires antibiotics, which may make Salmonella-induced illness or food poisoning worse since they also kill off the “good” bacteria that keep the intestine healthy, according to Monack. Using propionate to treat these infections could overcome this limitation.

“Reducing the use of antibiotics is an added benefit because overuse of antibiotics leads to increased incidence of antibiotic-resistant microbes,” Monack says.

The research appears in Cell Host and Microbe. The National Institutes of Health, the Paul Allen Stanford Discovery Center on Systems Modeling of Infection, and the National Science Foundation funded the study. Stanford’s microbiology and immunology department also supported the work.

Source: Kimber Price for Stanford University

The post Gut bacteria byproduct protects against Salmonella appeared first on Futurity.

Why some working women prefer ‘intentional invisibility’

Professional women have strong reasons to ignore recommendations that urge them to have a more visible presence at work, according to a new study.

While research has shown that visibility in the workplace is critical for professional advancement, the reality is that for some women, it’s easier said than done.

For two years, three sociologists from Stanford University immersed themselves in a women’s professional development program at a large nonprofit organization in the United States. They conducted interviews with 86 program participants and observed 36 discussion groups and 15 program-wide meetings where many of the women shared the barriers and biases they encountered at their organization, as well as the strategies they used to overcome them. The research appears in Sociological Perspectives.

They found that for many of the women they studied, competing expectations keep them from following common career tips like “take a seat at the table,” “speak with authority,” and “interject at meetings.”

A double bind

Many of the women participating in the study told researchers that they felt a double bind: If they worked on the sidelines, they could be overshadowed by their colleagues and overlooked for job promotions. But having a more assertive presence in the office, many women thought, could also backfire.

Instead, these women adopted a strategy that the researchers called “intentional invisibility,” a risk-averse, conflict-avoidant approach to navigating unequal workplaces.

“To craft careers that felt rewarding, women sought to reduce the chances for interpersonal conflict and to increase opportunities for friendly relationships within their work teams…”

While women in the study recognized that being less visible in the office could hurt their odds of a promotion or other career opportunities, they acknowledged that violating feminine norms—like being assertive or authoritative when they are expected to be nice, collaborative, and communal—could have the same effect.

One woman in the study shared how she was worried that conflict at work could disrupt her relationships with colleagues. She told the researchers that at meetings, men would mistake her for a secretary, when in fact she was a software engineer. Rather than confront the stereotype, she chose to shrug it off. In order to minimize exposure to conflict, she opted to keep a low profile and incrementally advance in her career without backlash.

“To craft careers that felt rewarding, women sought to reduce the chances for interpersonal conflict and to increase opportunities for friendly relationships within their work teams,” the researchers write.

‘I’m never going to be big’

Working behind the scenes also resonated with many women in the study who equated a visible presence with attention-seeking behaviors like being aggressive or self-promoting. This felt at odds with their own character, they reported.

In a discussion group the researchers observed, one woman says to her peers, “I mean I’m never going to be big, I just never am.” She says that while there were men in her office with large personalities, that approach did not resonate with her own style.

“…I was very uncomfortable with the word ‘leadership’ until I was able to redefine it for myself.”

These women questioned the norm that effective employees need to call attention to themselves. “Real leaders don’t really have to say what their title is, or have to brag about their accolades or whatever,” says one woman. “Your work should speak for itself.”

Rather than emulate behaviors they viewed as inauthentic and masculine, many women chose to quietly challenge conventional definitions of professional success by embracing a different work style, the researchers say.

As one woman says in an interview, “Not that there is anything wrong with people who want to promote themselves and make money and have great titles—it’s just that I was very uncomfortable with the word ‘leadership’ until I was able to redefine it for myself.”

Balancing act

In line with previous research that shows that women generally shoulder a disproportionate share of familial responsibilities, the researchers found that remaining behind the scenes was a particularly common strategy for women caring for children at home. Staying out of the spotlight at work helped these women maintain both professional and personal stability.

Minimizing visibility in order to create work/life balance, though, came at the cost of making big career moves for some women.

“Women in our study chose this strategy from a limited set of options…”

For example, one woman said she scaled back her ambitions at work when one of her children was diagnosed with a medical condition that required more adult supervision. She changed from an upper-level role to a less stressful and less visible job.

Many women in the study, the researchers write, “find that they can only pursue their ambitions to a point to ensure stability.” Women adjusting to evolving family needs often concluded that embracing a behind-the-scenes approach allowed them to be effective while staying out of the spotlight and avoiding negative backlash.

Progress toward workplace gender equality has ‘stalled out’

“Women in our study chose this strategy from a limited set of options,” says coauthor Priya Fielding-Singh. “Because there was no clear path to having it all, many chose to prioritize authenticity and conflict reduction at work and home.”

Rethinking visibility

In the end, the authors say, it is organizations—not the women embedded within them—that need to adapt to create gender equality.

“Organizations should realize that asking women to be visible without recognizing the toll that such visibility takes is not really leveling the playing field,” coauthor Swethaa Ballakrishnen says. “To be truly equal workplaces, organizations need to rethink the ways in which they assign and reward visibility.”

Although their study did not track the effects of the strategies women took, the authors suspect that working behind the scenes may disadvantage women aiming for top positions in their organizations. Until organizations become level playing fields, there will be incentives for women to continue adopting this strategy.

5 wage gap myths about women at work

Looking ahead, they say, organizations need to ensure that women will not face backlash from their managers and peers when they do take on visible roles.

“In the meantime, it is important to understand how structural barriers impact women’s choices and, ultimately, their career outcomes,” Fielding-Singh says.

Stanford’s Clayman Institute for Gender Research supported the research.

Source: Stanford University

The post Why some working women prefer ‘intentional invisibility’ appeared first on Futurity.