WHO launches health review after microplastics found in 90% of bottled water

Researchers find levels of plastic fibres in popular bottled water brands could be twice as high as those found in tap water

The World Health Organisation (WHO) has announced a review into the potential risks of plastic in drinking water after a new analysis of some of the world's most popular bottled water brands found that more than 90% contained tiny pieces of plastic. A previous study also found high levels of microplastics in tap water.

In the new study, analysis of 259 bottles from 19 locations in nine countries across 11 different brands found an average of 325 plastic particles for every litre of water being sold.

In one bottle of Nestlé Pure Life, concentrations were as high as 10,000 plastic pieces per litre of water. Of the 259 bottles tested, only 17 were free of plastics, according to the study.

Scientists based at the State University of New York in Fredonia were commissioned by journalism project Orb Media to analyse the bottled water.

The scientists wrote they had found roughly twice as many plastic particles within bottled water compared with their previous study of tap water, which was reported by the Guardian.


According to the new study, the most common type of plastic fragment found was polypropylene, the same type of plastic used to make bottle caps. The bottles analysed were bought in the US, China, Brazil, India, Indonesia, Mexico, Lebanon, Kenya and Thailand.

Scientists used Nile red dye to fluoresce particles in the water; the dye tends to stick to the surface of plastics but not to most natural materials.

The study has not been published in a journal and has not been through scientific peer review. Dr Andrew Mayes, a University of East Anglia scientist who developed the Nile red technique, told Orb Media he was satisfied that it had been "applied carefully and appropriately, in a way that I would have done it in my lab".

The brands Orb Media said it had tested were: Aqua (Danone), Aquafina (PepsiCo), Bisleri (Bisleri International), Dasani (Coca-Cola), Epura (PepsiCo), Evian (Danone), Gerolsteiner (Gerolsteiner Brunnen), Minalba (Grupo Edson Queiroz), Nestlé Pure Life (Nestlé), San Pellegrino (Nestlé) and Wahaha (Hangzhou Wahaha Group).

A World Health Organisation spokesman told the Guardian that although there was not yet any evidence of impacts on human health, it was aware it was an emerging area of concern. The spokesman said the WHO would "review the very scarce available evidence with the objective of identifying evidence gaps, and establishing a research agenda to inform a more thorough risk assessment".

A second unrelated analysis, also just released, was commissioned by campaign group Story of Stuff and examined 19 consumer bottled water brands in the US. It also found plastic microfibres were widespread.

The brand Boxed Water contained an average of 58.6 plastic fibres per litre. Ozarka and Ice Mountain, both owned by Nestlé, had concentrations at 15 and 11 pieces per litre, respectively. Fiji Water had 12 plastic fibres per litre.

Abigail Barrows, who carried out the research for Story of Stuff in her laboratory in Maine, said there were several possible routes for the plastics to be entering the bottles.

"Plastic microfibres are easily airborne. Clearly that's occurring not just outside but inside factories. It could come in from fans or the clothing being worn," she said.

Stiv Wilson, campaign coordinator at Story of Stuff, said finding plastic contamination in bottled water was problematic "because people are paying a premium for these products".

Jacqueline Savitz, of campaign group Oceana, said: "We know plastics are building up in marine animals and this means we too are being exposed, some of us every day. Between the microplastics in water, the toxic chemicals in plastics and the end-of-life exposure to marine animals, it's a triple whammy."

Nestlé criticised the methodology of the Orb Media study, claiming in a statement to CBC that the technique using Nile red dye could generate false positives.

Coca-Cola told the BBC it had strict filtration methods, but acknowledged the ubiquity of plastics in the environment meant plastic fibres may be found at minute levels even in highly treated products.

A Gerolsteiner spokesperson said the company, too, could not rule out plastics getting into bottled water from airborne sources or from packaging processes. The spokesperson said concentrations of plastics in water from their own analyses were lower than those allowed in pharmaceutical products.

Danone claimed the Orb Media study used a methodology that was unclear. The American Beverage Association said it stood by the safety of its bottled water, adding that the science around microplastics was only just emerging.

The Guardian contacted Nestlé and Boxed Water for comment on the Story of Stuff study, but had not received a response at the time of publication.

Read more: https://www.theguardian.com/environment/2018/mar/15/microplastics-found-in-more-than-90-of-bottled-water-study-says

What Are Screens Doing to Our Eyes, and Our Ability to See?

The eyes are unwell. Their childhood suppleness is lost. The lenses, as we log hours on this earth, thicken, stiffen, even calcify. The eyes are no longer windows on souls. They’re closer to teeth.

To see if your own eyes are hardening, look no further than your phone, which should require no exertion; you’re probably already there. Keep peering at your screen, reading and staring, snubbing life’s third dimension and natural hues. The first sign of the eyes’ becoming teeth is the squinting at phones. Next comes the reflexive extending of the arm, the impulse to resize letters into the preschool range. And at last the buying of drugstore readers.

Virginia Heffernan (@page88) is an Ideas contributor at WIRED. She is the author of Magic and Loss: The Internet as Art. She is also a cohost of Trumpcast, an op-ed columnist at the Los Angeles Times, and a frequent contributor to Politico. Before coming to WIRED she was a staff writer at the New York Times—first a TV critic, then a magazine columnist, and then an opinion writer. She has a bachelor’s degree from the University of Virginia and a master’s degree and PhD in English from Harvard. In 1979 she stumbled onto the internet, when it was the back office of weird clerics, and she’s been in the thunderdome ever since.

Modern medicine offers little apart from magnifying glasses to treat presbyopia (from the Greek presbus, meaning “old man”). But those $3.99 specs will get you on your feet just fine, which is to say, you can once again relish your phone without squinting or arm-stretching. A remedy for farsightedness evidently succeeds to the degree that it restores a woman or man to the comfortable consumption of texts, email, ecommerce, and social media on a glazed rectangle of aluminum alloys held at a standard reading distance of 16 inches. With reading glasses we live again.

Doesn’t this seem like an unwholesome loop? The eyes may be unwell, but the primary object of our eyesight seems corrosive. We measure our vision against the phone, all the while suspecting the phone itself is compromising our ability to see it.

Even if we don’t say out loud that failing vision has something to do with our vastly narrowed visual field, our bodies seem to know what’s up. How convenient, for example, that you can turn up a phone’s contrast and brightness with a few taps. If perception can’t be improved, objects can be made more perceivable, right? But then the brightness seems, like morphine, to produce a need for more brightness, and you find yourself topping out, hitting the button in vain for more light only to realize that’s it. You’ve blinded yourself to the light that was already there.

Having recently, in my forties, gotten reading glasses, I now find myself having to choose between reading and being, since I can’t read without them and I can’t see the world with them. The glasses date from a time when reading was much rarer a pastime than being; you’d grope for them to see a book, while relying on your naked eyes for driving, talking, walking.

But of course now so many of us read all day long. And I opt to flood my field of vision with the merry play of pixels and emoji rather than the less scintillating, brown-gray “real world.” This means wearing the reading glasses, even on the street, and affecting blindness to everything but my phone.

What might modern vision be today without the phone as its reason for being? If you were a nomadic goatherd in the Mongolian grasslands, you might not even consider presbyopia a pathology. Many nomads carry cell phones for calls and music, but, except to play games, they rarely gaze at them. Instead, they rest their eyes on the ever-moving flock, alert to vagaries in the animals’ collective configuration and inclinations; but simultaneously they soften the vision to wide angle, so as to detect peripheral anomalies and threats. On camelback in the wide-open grasslands, the eyes line easily with the horizon, which means their eyes take in distance, proximity, an unpixelated spectrum, and unsimulated movement. A panoramic view of the horizon line roots the beholder in the geometer’s simplest concepts of perspective: foreshortening, a vanishing point, linearity, and the changeable shadows cast by the movement of the sun over and under the horizon line. That third dimension—depth—is never, ever forgotten by the nomads. The sun rises and sets on depth.


Depending on your after-hours curriculum in Mongolia (cooking, talking, playing the fiddle), you might rarely even need to do what digital moderns never stop doing: recruit the eye’s ciliary muscle and contract it, releasing tension in the ligaments that suspend the lens to acutely curve it and train it to a pixelated 1.4-millimeter letter x on, for instance, a mobile news app. If you explained to a nomad the failures of her aging eyes, she might shrug: Who needs anxious ciliary muscles?

Indeed. And the use of those muscles by digital moderns gets even more complicated when we encounter our x’s not on paper—carbon-black ink, like liquid soot, inscribed on bleached pulpwood—but on screens. That’s where we come across the quivering and uncertain symbols that play across the—surface, is it? Where are they exactly? Somewhere on or in our devices. No wonder the eyes are unwell.

Every vocation has consequences for eyesight. Ice fishermen can go snowblind. Welders suffer arc eye. Ships’ lookouts hallucinate. Academics develop myopia. And texters—call it an avocation—have blurred vision.

There are at least two recorded cases of something called smartphone blindness. The New England Journal of Medicine notes that both patients had been reading their phones in bed, on their sides, faces half-hidden, in the dark. “We hypothesized that the symptoms were due to differential bleaching of photopigment, with the viewing eye becoming light-adapted.” Differential bleaching of the eyes! Fortunately, smartphone blindness of this kind is transient.

The blanket term for screen-borne eyesight problems is computer vision syndrome, an unsatisfactory name given to the blurring, dry eyes, and headaches suffered by the people of the screen. The name is unsatisfactory because, like many syndromes, it describes a set of phenomena without situating them in a coherent narrative—medical or otherwise. For contrast, arc eye is a burn: Welders get it from their exposure to bright ultraviolet light. Snowblindness is caused when corneas are sunburned by light reflecting off snow. Hallucinations afflict lookouts because, as Ishmael explains in Moby-Dick, they’re up at odd hours and alone, parsing the “blending cadence of waves with thoughts” for danger, whales, or other vessels; the brain and eyes are inclined to make meaning and mirages of undifferentiated land- and seascapes where none exist.

Computer vision syndrome is not nearly as romantic. The American Optometric Association uses it to describe the discomfort that people report feeling after looking at screens for a “prolonged” period of time. When screens pervade the field of vision all day, what counts as prolonged? (Moreover, reports of discomfort seem like not much to predicate a whole syndrome on.) But the AOA’s treatment of the syndrome is intriguing. This is the so-called 20-20-20 rule, which asks that screen people take a 20-second break to look at something 20 feet away every 20 minutes.

The remedy helps us reverse-engineer the syndrome. This suffering is thought to be a function not of blue light or intrusive ads or bullying and other scourges. It’s thought to be a function of unbroken concentration on a screen 8 inches to 2 feet from the eyes. The person suffering eyestrain is taught to look 20 feet away, but she might presumably look at a painting or a wall. Twenty feet, though, suggests it’s depth she may be thirsty for.

The naming of a syndrome discharges the latest anxiety about screens, which have always been a source of social suspicion. People who are glued to screens to the exclusion of other people are regarded with disdain: narcissistic, withholding, deceitful, sneaky. This was true even with the panels that prefigured electronic screens, including shoji, as well as mirrors and newspaper broadsheets. The mirror-gazer may have been the first selfie fanatic, and in the heyday of mirrors the truly vain had handheld mirrors they toted around the way we carry phones. And hand fans and shoji—forget it. The concealing and revealing of faces allowed by fans and translucent partitions suggest the masquerade and deceptions of social media. An infatuation with screens can easily slide into a moral failing.

Not long ago a science writer named Gabriel Popkin began leading tree walks for city dwellers in Washington, DC, whose monomaniacal attention to screens had left them tree-blind. That’s right, tree blindness—and the broader concept of blindness to the natural world—might actually be the real danger screens pose to vision. In 2012, Popkin had learned about trees to cure this blindness in himself and went from a naif who could barely pick out an oak tree to an amateur arboriculturist who can distinguish hundreds of trees. The biggest living beings in his city suddenly seemed like friends to him, with features he could recognize and relish.


Once he could see trees, they became objects of intense interest to him—more exhilarating than apps, if you can believe it. “Take a moment to watch and listen to a flowering redbud tree full of pollen-drunk bumblebees,” he has written. “I promise you won’t be bored.”

If computer vision syndrome has been invented as a catch-all to express a whole range of fears, those fears may not be confined to what blue light or too much close-range texting are doing to the eyesight. Maybe the syndrome is a broader blindness—eyes that don’t know how to see and minds that increasingly don’t know how to recognize nondigital artifacts, especially nature.

Lately, when I pull away from the screen to stare into the middle distance for a spell, I take off my glasses. I try to find a tree. If I’m inside, I open a window; if I’m outside, I will even approach a tree. I don’t want mediation or glass. The trees are still strangers; I hardly know their names yet, but I’m testing myself on leaf shapes and shades of green. All I know so far is that trees are very unlike screens. They’re a prodigious interface. Very buggy. When my eyes settle after a minute or two, I—what’s that expression, “the scales fell from my eyes”? It’s almost, at times, like that.



This article appears in the April 2018 issue of WIRED.

Read more: https://www.wired.com/story/failing-vision-screens-blindness/

The human microbiome: why our microbes could be key to our health

Studies suggest the microbes inside us could hold the key to treating a plethora of conditions. Nicola Davis explains why

What are microbiomes?

Both inside and out, our bodies harbour a huge array of micro-organisms. While bacteria are the biggest players, we also host single-celled organisms known as archaea, as well as fungi, viruses and other microbes, including viruses that attack bacteria. Together these are dubbed the human microbiota. Your body's microbiome is all the genes your microbiota contains, although colloquially the two terms are often used interchangeably.


Hang on, aren't microbes supposed to be dangerous?

It's a bit of a spectrum: some are pathogens, but others only become harmful if they get in the wrong place or boom in number, and some are very useful to the body, such as by helping to break down the array of sugars found in human breast milk. "These sugars are not broken down by the infant," said Prof John Cryan, a neuropharmacologist and microbiome expert from University College Cork. Instead, microbes in the baby's gut do the job.

Other key roles of our microbes include programming the immune system, providing nutrients for our cells and preventing colonisation by harmful bacteria and viruses.

Where do my gut microbes come from? Do I just pick them up from my surroundings?

Partly. But it is more complicated than that. "It is still a little bit controversial but for the most part it is thought that we are sterile when we are in utero, and as we are being born, as we emerge through the birth canal from our mums, we get this handover of bacteria," said Cryan. "It is like a gulp at birth. Those bacteria are really important for starting the whole process."

Cryan notes that during pregnancy a mother's microbiome shifts, apparently to an optimum mix for offspring. "If you are not born by vaginal delivery, but are born by [caesarean] section, things start off being different," he said. Indeed, studies have suggested that these differences could be one of the reasons why babies born by caesarean section have a higher risk of conditions including asthma and type 1 diabetes. That said, doctors have cautioned parents against attempting to seed babies born by caesarean section with vaginal bacteria.

Our gut microbiome changes quickly over our first year or two, shaped by microbes in breast milk, the environment and other factors, and stabilises by the time we are about three years old. But our environment, our long-term diet, stress and the drugs we take, such as antibiotics, continue to play a role as we age, meaning our microbiome can change throughout our life.


It seems like microbes are everywhere. How many are we talking about?

The figure that has been bandied about since the 1970s is that microbes outnumber our own cells by about 10 to one. But a study from 2016 suggests that in fact microbial cells and human cells coexist in somewhere around a 1.3 to one ratio, suggesting they only slightly outnumber our own cells, although that doesn't count viruses and viral particles.

Does this mean I am not human?

Some say we should be seen as a holobiont, a term that reflects the intimate, co-dependent relationship humans have with microbes. "I tell this joke that the next time someone goes to the bathroom and they get rid of some of their microbes they are becoming more human," said Cryan.

But Ellen Clarke, a philosopher of biology at the University of Leeds, is not convinced. "It all depends on what you mean by human in the first place," she said. "If you think that a human is a collection of cells that all share copies of the same chromosomes, then it is shocking to be told that our bodies contain cells with bacterial DNA."

But as Clarke points out, human cells don't just contain chromosomes, but also carry DNA within our cellular powerhouses, mitochondria, which are evolutionary descendants of bacteria. Our genome also contains stretches of genetic material called transposons that, at least in some cases, are thought to have been introduced long ago by viruses. "I prefer to define a human in evolutionary terms, and if we do this then mitochondria are parts of a human, and so are transposons, but gut microbes are not, and neither are prosthetic limbs nor unborn foetuses," said Clarke, pointing out that microbes can escape the body and live without us.

Are microbes the same in my gut as on my skin?

No. Different parts of the body (the skin, vagina and gut) all have very different, distinct communities of microbes. While gut microbes have gained a lot of attention, microbes elsewhere are also important: in recent studies, scientists have found that bacteria commonly found on the skin might help to protect against skin cancer.

Microbiomes also differ from person to person. "When you look at the overall active microbiomes between two healthy people, even if they are living in the same city, you see a tremendous amount of disagreement in their microbiome," said Rob Knight, professor of paediatrics, computer science and engineering at the University of California San Diego and an expert on the human microbiome.

Variability in the gut microbiome, Knight notes, helps to explain why people respond differently to the same foods. "Whether tomatoes are good or bad for you, whether rice is good for you or worse for you than ice cream and so on is explained by your microbiome," he said.

Why has the microbiome become such a hot topic for research?

Over recent years the gut microbiome in particular has been linked to a plethora of diseases and conditions, from diabetes to autism and anxiety to obesity.

The gut microbiome has also been linked to how individuals respond to certain drugs, including how cancer patients respond to chemotherapy, and it has even, tentatively, been suggested that it could be linked with how well we sleep.

Meanwhile, a range of studies have raised the importance of other aspects of our microbiome, including that the vaginal microbiome is important in whether an HIV-prevention drug applied to the vagina is effective.

Why do we think the microbiome is linked to all these conditions?

While some links have come from comparing the microbiomes of different groups of people, such as those with a particular disease compared with healthy individuals, a big player in microbiome research is the germ-free mouse.

This organism is raised in a sterile environment and can then be exposed to particular microbes, or groups of microbes, to explore their impact. Such studies have been key in raising possible links between the gut microbiome and numerous aspects of our health, including mood and obesity.

Is it that particular microbes are important, or is it about the microbial community as a whole?

This is the knotty issue. In some experiments, particular strains of bacteria have been linked to particular effects or conditions, while others have shown that the diversity of the microbiome, or relative abundances of species, is important.

"It is a bit like a rainforest: you might have a very nice fern that is very happy but if that is the only thing in your rainforest and you don't have a diversity it is not going to be good [for the] soil," said Tim Spector, professor of genetic epidemiology at King's College London and author of The Diet Myth. "When it comes to the microbiome, it's having the right community of bacteria that are working together and together producing the right chemicals for your body."


So might microbes be affecting our weight, or even our brains? That sounds a bit sci-fi.

When it comes to obesity, there are several ways gut microbes might influence matters, including through appetite, production of gases, efficiency of using food, and impact on the immune system and inflammation.

When it comes to affecting mood, there are also several mechanisms. One is via the vagus nerve, a two-way highway that runs from our brain to various organs in the body, including the gut.

With the microbiome linked to so many conditions, does tinkering with it promise a whole range of new treatments?

It is worth being cautious: many studies show associations rather than cause and effect, and some are based only on studies in germ-free mice and have not been explored in humans. Even in mice things aren't straightforward: effects are not always the same for both sexes and can differ for different strains of mice.

And there are other factors to consider. "For obesity what it looks like is in different human populations, different kinds of microbes are involved in the differences between lean and obese humans," said Knight.

Spector said: "I think everyone is right to be sceptical, and a lot of the links may just be that [microbes] are not necessarily the cause of [a disease], but they might be a secondary effect of it."

Others say it isn't surprising that our microbiome might be closely linked to our health. "All of human development and all the systems in the body have all evolved, or co-evolved, with our microbes," said Cryan. "As humans we are very much human-focused and we feel that human cells and genes have primacy, but the microbes were there first."


Does any of this actually affect patients?

Up to a point. The field has already led to advances in the treatment of C difficile, an infection that causes serious diarrhoea and can prove deadly. Patients can now receive faecal transplants from a donor with a healthy microbiome to reset their inner community, a procedure that has been shown to rapidly cure the condition.

Some researchers, including Cryan, believe microbiome research could lead to the development of new mental health therapies. "We have coined the term psychobiotic, [by which we mean] a targeted intervention of the microbiome for brain health," he said.

While that may be some way off, Cryan believes it will become routine for doctors to keep an eye on the makeup of patients' microbiomes. "I think personally that bacteria- or microbiome-derived medicine is the future of precision medicine," he said.

Let's cut to the chase: what can I do to keep my microbiome in good shape?

This is where prebiotics and probiotics come in: the former are substances, such as the fibre inulin, on which useful microbes can thrive, while the latter are microbes themselves that are thought to be beneficial for health, such as the Lactobacillus and Bifidobacterium species.

While both prebiotics and probiotics can be taken as supplements, whether you should shell out for them is another matter: there is little advice on which prebiotics or probiotics people should consume for a particular situation, and when it comes to probiotics it isn't a dead cert that the microbes will colonise your gut when they get there, or that they will offer benefits to already healthy people, such as preventing diseases. That said, if you are taking antibiotics or have IBS, there is some evidence probiotics might be a good idea.

"It is not clear yet whether you're better off just having lots of yoghurt and other fermented foods or actually taking these formulations," said Spector, adding that in general he recommends tweaking your diet to get a dose of probiotics, since it isn't clear which strains individuals should take. The same goes for prebiotics: "There is more variety in food in terms of the fibre, therefore more variety in the microbes," he said. "Ideally you combine a prebiotic and a probiotic: something like sauerkraut or kimchi."

What next?

The spotlight is on unpicking the mechanisms by which microbes are linked to human health. Among the conundrums is how and why the different strains of bacteria have different effects, while researchers are also developing studies to explore how the microbiome influences our response to food, and how different diets can tweak the microbiome. There is also a need to take more of the exciting findings from mouse studies and probe them in humans, preferably through randomised control trials.

Further reading:

I Contain Multitudes, by Ed Yong

Gut: The Inside Story of Our Body's Most Underrated Organ, by Giulia Enders

The Psychobiotic Revolution, by Scott C Anderson with John Cryan and Ted Dinan

Follow Your Gut: How the Ecosystem in Your Gut Determines Your Health, Mood, and More, by Rob Knight


Read more: https://www.theguardian.com/news/2018/mar/26/the-human-microbiome-why-our-microbes-could-be-key-to-our-health

The Struggle to Predict, and Prevent, Toxic Masculinity

Terrie Moffitt has been trying to figure out why men are terrible for more than 25 years. Or, to calibrate: Why some men are really terrible—violent, criminal, dangerous—but most men are not. And, while she’s at it, how to tell which man is going to become which.

A small number of people are responsible for the vast majority of crimes. Many of those people display textbook “antisocial behavior”—technically, a serious disregard for other people’s rights—as adolescents. The shape of the problem is called the age-crime curve, arrests plotted against the age of the offender. It looks like a shark’s dorsal fin, spiking in the teenage years and then long-tailing off to the right.

In 1993, Moffitt, now a psychologist at Duke University, pitched an explanation for that shape: The curve covers two separate groups. Most people don’t do bad things. Some people only do them as teenagers. And a very small number start doing them as toddlers and keep doing them until they go to prison or die. Her paper became a key hypothesis in psychology, criminology, and sociology, cited thousands of times.

In a review article in Nature Human Behaviour this week, Moffitt takes a ride through two decades of attempts to validate the taxonomy. Not for girls, Moffitt writes, because even though she studies both sexes, “findings have not reached consensus.” But for boys and men? Oh yeah.

To be clear, Moffitt isn’t trying to develop a toxicology of toxic masculinity here. As a researcher she’s interested in the interactions of genes and environment, and the reasons some delinquent children—but not all—turn into crime-committing adults. That’s a big enough project. But at this exact cultural moment, with women of the #MeToo movement calling sexual harassers and abusers to account just as mass shootings feel as if they’ve become a permanent recurring event—and when almost every mass shooter, up to and including the recent school shooting in Parkland, Florida, has been a man—I’m inclined to try to find explanations anywhere that seems plausible. US women are more likely to be killed by partners than anyone else. Men commit the vast majority of crimes in the US. So it’s worth querying Moffitt’s taxonomy to see if it offers any order to that chaos, even if it wasn’t built for it.

“Grown-ups who use aggression, intimidation, and force to get what they want have invariably been pushing other people around since their very early childhood,” Moffitt says via email from a rural vacation in New Zealand. “Their mothers report they were difficult babies, nursery day-care workers say they are difficult to control, and when all the other kids give up hitting and settle in as primary school pupils, teachers say they don’t. Their record of violating the rights of others begins surprisingly early, and goes forward from there.”

So if you could identify those kids then, maybe you could make things better later? Of course, things are way more complicated than that.

Since that 1993 paper, hundreds of studies have tested pieces of Moffitt’s idea. Moffitt herself has worked on a few prospective studies, following kids through life to see if they fall into her categories, and then trying to figure out why.

For example, she worked with the Dunedin Study, which followed health outcomes for more than 1,000 boys and girls in New Zealand starting in the early 1970s. Papers published from the data have included looks at marijuana use, physical and mental health, and psychological outcomes. Moffitt and her colleagues found that about a quarter of the males in the study fit the criteria she’d laid out for “adolescence limited” antisociability; they’re fine until they hit their teens, then they do all sorts of bad stuff, and then they stop. And 10 percent were “life-course persistent”—they have trouble as children, and it doesn’t stop. As adolescents, all had about the same rates of bad conduct.

But as children, the LCP boys scored much higher on a set of specific risks. Their mothers were younger. They tended to have been disciplined more harshly, and have experienced more family strife as kids. They scored lower on reading, vocabulary, and memory tests, and had a lower resting heart rate—some researchers think that people feel lower heart rates as discomfort and undertake riskier behaviors in pursuit of the adrenaline highs that’ll even them out. “LCP boys were impulsive, hostile, alienated, suspicious, cynical, and callous and cold toward others,” Moffitt writes of the Dunedin subjects in her Nature Human Behaviour article. As adults, “they self-reported excess violence toward partners and children.” They had worse physical and mental health in their 30s, were more likely to be incarcerated, and were more likely to attempt suicide.

Other studies have found much the same thing. A small number of identifiable boys turn into rotten, violent, unhappy men.

Could Moffitt’s taxonomy account for sexual harassers and abusers? In one sense, it seems unlikely: Her distinction explicitly says by adulthood there should only be a small number of bad actors, yet one of the lessons of #MeToo has been that every woman, it seems, has experienced some form of harassment.

Meta-analyses of the incidence of workplace sexual harassment vary in their outcomes, but a large-scale one from 2003 that covered 86,000 women reported that 56 percent experienced “potentially harassing” behaviors and 24 percent had definitely been harassed. Other studies get similar results.

But as pollsters say, check the cross-tabs. Harassment has sub-categories. Many—maybe most—women experience the gamut of harassing behaviors, but sub-categories like sexual coercion (being forced to have sex as a quid pro quo or to avoid negative consequences) or outright assault are rarer than basic institutional sexism and jerky, inappropriate comments. “What women are more likely to experience is everyday sexist behavior and hostility, the things we would describe as gender harassment,” says John Pryor, a psychologist at Illinois State University who studies harassment.

Obviously, any number greater than zero here is too high. And studies of prevalence can’t tell you if so many women are affected because all men harass at some low, constant ebb or a few men do it, like, all the time. Judging by reports of accusations, the same super-creepy men who plan out sexual coercion may also impulsively grope and assault women. Those kinds of behaviors, combined with the cases where many more accusers come forward after the first one, seem to me to jibe with the life-course persistent idea. “Sometimes people get caught for the first time as an adult, but if we delve into their history, the behavior has been there all along,” Moffitt says. “Violating the rights of others is virtually always a life-long lifestyle and an integral part of a person’s personality development.”

That means it’s worth digging into people’s histories. Whisper networks have been the de facto means of protecting women in the workplace; the taxonomy provides an intellectual framework for giving them a louder voice, because it suggests that men with a history of harassment and abuse probably also have a future of it.

Now, some writers have used the idea of toxic masculinity to draw a line between harassment, abuse, and mass shootings. They’re violent, and the perpetrators tend to be men. But here, Moffitt’s taxonomy may be less applicable.

Despite what the past few years have felt like, mass shootings are infrequent. And many mass shooters end up committing suicide or being killed themselves, so science on them is scant. “Mass shootings are such astonishingly rare, idiosyncratic, and multicausal events that it is impossible to explain why one individual decides to shoot his or her classmates, coworkers, or strangers and another does not,” write Benjamin Winegard and Christopher Ferguson in their chapter of The Wiley Handbook of the Psychology of Mass Shootings.

That said, researchers have found a few commonalities. The shooters are often suicidal, or more precisely have stopped caring whether they live or die, says Adam Lankford, a criminologist at the University of Alabama. Sometimes they’re seeking fame and attention. And they share a sense that they themselves are victims. “That’s how they justify attacking others,” Lankford says. “Sometimes the perceptions are based in reality—I was bullied, or whatever—but sometimes they can be exacerbated by mental health problems or personality characteristics.”

Though reports on mass shooters often say that more than half of them are also domestic abusers, that number needs some unpacking. People have lumped together mass shootings of families—domestic by definition—with public mass shootings like the one in Las Vegas, or school shootings. Disaggregate the public active shooters from the familicides and the number of shooters with histories of domestic abuse goes down. (Of course, that doesn’t change the preposterously high number of abused women murdered by their partners outside of mass shooting events.)

What may really tip the mass shooter profile away from Moffitt’s taxonomy, though, is that people in the life-course persistent cohort do uncontrolled, crazy stuff all the time. Yes, some mass shooters have a history of encounters with law enforcement, let’s say. But some don’t. Mass shootings are, characteristically, highly planned events. “I’m not saying it’s impossible to be a mass shooter and have poor impulse control, but if you have poor impulse control you won’t be able to go for 12 months of planning an attack without ending up in jail first,” Lankford says.

Moffitt isn’t trying to build a unified field theory of the deadly patriarchy. When I suggest that the societal structures that keep men in power relative to women, generally, might explain the behavior of her LCP cohort, she disagrees. “If sexual harassment and mass shootings were the result of cultural patriarchy and societal expectations for male behavior, all men would be doing it all the time,” Moffitt says. “Even though media attention creates the impression that these forms of aggression are highly prevalent and all around us, they are nevertheless still extremely rare. Most men are trustworthy, good, and sensible.”

She and her colleagues continue to look for hard markers for violence or lack of impulse control, genes or neurobiological anomalies. (A form of the gene that codes for an enzyme called monoamine oxidase A, which breaks down neurotransmitters, might give some kids protection against the lifelong effects of maltreatment, she and her team have found. By implication, not having that polymorphism could predispose a child raised under adverse circumstances to psychopathology as an adult.) Similarly, nobody yet knows what digital-native kids in either cohort will do when they move their bad behavior online. One might speculate that it looks a lot like GamerGate and 4chan, though that sociological and psychological work is still in early days.

But for now, Moffitt and her co-workers have identified risk factors and childhood conditions that seem to create these bad behaviors, or allow them to flourish. That’s the good news. “We know a lifestyle of aggression and intimidation toward others starts so young,” Moffitt says. “It could be preventable.”

Read more: https://www.wired.com/story/the-struggle-to-predictand-preventtoxic-masculinity/

Spread of breast cancer linked to compound in asparagus and other foods

Using drugs or diet to reduce levels of asparagine may benefit patients, say researchers

Read more: https://www.theguardian.com/science/2018/feb/07/cutting-asparagus-could-prevent-spread-of-breast-cancer-study-shows

You Don’t Need a Personal Genetics Test to Take Charge of Your Health

The online storefront for the consumer genetics company Orig3n features an image of a young woman facing toward a sepia horizon. Her tresses are wavy, her triceps enviably toned. Her determined stance complements the copy floating beside her: "Take charge of your future," it reads. "Orig3n DNA tests uncover the links between your genes and how you think, act, and feel. The more you know, the easier it is to reach your highest potential."

It's the promise of a growing number of services: Genetic insights you can act on. There are tests tailored to tell you about your diet, your fitness, your complexion—even your wine preference. Helix, another consumer genetics company, sells a Wine Explorer service that recommends wine "scientifically selected based on your DNA."

But researchers will tell you to approach lifestyle-tailored testing kits with extreme skepticism. "What you see in the consumer genetics market is that legitimate genetic findings, often from studies with very large sample sizes, are being turned around and marketed to people in a way that implies it's going to be actionable for individuals," says Harvard geneticist Robert Green, who's been researching direct-to-consumer genetic testing for close to 20 years. But in most cases, he says, the extent to which consumers can act upon their results "really remains to be proven."

Not that researchers aren't trying. On the contrary: This week, scientists led by Christopher Gardner, director of nutrition studies at the Stanford Prevention Research Center, published one of the most rigorous investigations to date on whether dieters can use personal DNA results to identify more effective weight-loss strategies. The researchers compared the effectiveness of low-fat and low-carbohydrate diets in a year-long randomized controlled trial involving more than 600 test subjects. And crucially, the researchers also looked at whether test subjects' genes impacted their results. Earlier studies, some led by Gardner, had suggested that a combination of mutations in PPARG, ADRB2, and FABP2, three genes linked to the metabolism of fat and carbohydrates, could predispose test subjects to lose more weight on one diet than the other.

But the results, which appear in this week's issue of the Journal of the American Medical Association (JAMA), found no association between test subjects' genetic profiles and their success with either program; test subjects lost the same amount of weight, regardless of which diet they were assigned. And study participants who were assigned diets that "matched" their genetic profile fared no better than those who weren't.

"When I saw the results, this wave of disappointment washed over me," Gardner says. "It was like, wait, it didn’t work? None of the genetic variants had an effect?”

Nope. The study was big, well-designed, and pricey (it received funding from the Nutrition Science Initiative, a non-profit devoted to funding rigorous nutrition research), yet it failed to replicate smaller, less carefully controlled studies. Such is science! It also illustrates why establishing the usefulness of home DNA kits will be so difficult and time consuming. Research into the link between genetic determinants and diet will probably continue; for the JAMA study, Gardner and his colleagues examined the predictive power of mutations in just three genes, but there are dozens to consider, in a staggering number of combinations. It's plausible—even likely, Gardner says—that some of these genetic signatures could lead people to more effective diets. "But nutrition is just so complex, it's not likely there's going to be an answer soon."

And that's for nutrition. The odds of somebody funding a rigorous, controlled investigation into the link between your DNA and your ideal exercise regimen are … well … let's just say that kind of research isn’t very high on researchers’ to do list.

"It's hard to make a case for studying anything in the lifestyle realm, because it's pretty low stakes," says geneticist Lawrence Brody, director of the Division of Genomics and Society at the National Human Genome Research Institute. Personalized cancer treatments, rare-disease diagnosis, reproductive health screening—you know, the urgent stuff—these are the types of genomic investigations that receive funding. Which is why, even in a field as large as nutrition research, Brody says there are few examples of studies examining the link between genetics and diet with the level of rigor you find in Gardner's JAMA study. "A lot of researchers don't think it's a high enough priority, or likely enough to show results, to conduct and fund a full randomized trial."

And just think: If it's that hard to conduct a solid study on actionable associations between DNA and diet, imagine how unlikely it is we'll see RCTs on personalized skin care plans, "what makes your child unique," or your "unique superhero traits".

You might expect consumer genetics companies to fund this kind of research themselves. Guess again. Most don't have the cash, and, even for those that do, it's risky to perform such studies in the first place, in the event they turn up results like Gardner's. “Most companies don’t feel they need those kinds of studies to sell a narrative that supports the purchase of their products,” Green says.

All of which should make consumers wary of lifestyle-oriented commercial DNA kits, which occupy a gray area somewhere between tests for ancestry and, say, cancer-associated mutations. The experts I spoke with were all optimistic about the long-term future of in-home DNA kits, and supportive of people's right to access their genetic information. But right now, for most tests, the evidence base just isn't there. Something to keep in mind the next time a personal genetics company's motivational ad implies their kit can point you toward a more effective diet or workout, or tell you "whether your genes have the raw potential football legend John Lynch looks for in a player."

On the upside, the participants in Gardner's JAMA study lost a combined 6,500 pounds, averaging 13 pounds of weight loss apiece, regardless of their genetic profile, and regardless of their assigned diet. A lot of people hear "genetics" and think "destiny," but the vast majority of the time, that's not how genes work. Which means that the vast majority of the time, you don't need a personal genetics test to take charge of your future.

Read more: https://www.wired.com/story/you-dont-need-a-personal-genetics-test-to-take-charge-of-your-health/

To Stop Climate Change, Educate Girls and Give them Birth Control

Climate change is a ubiquitous hydra, a many-headed beast that affects everyone and everything in some form. Solutions to climate change range from the effective and the practical to the potentially catastrophically dangerous—but, in this somewhat heated debate, a potent weapon in our arsenal is falling by the wayside: the empowerment of women.



Robin George Andrews (@squigglyvolcano and robingeorgeandrews.com) is a volcanologist and science writer whose work appears in IFLScience, Forbes, and elsewhere.

Last year a coalition of scientists, economists, policymakers, researchers, and business people published Project Drawdown, a compendium of ways to prevent carbon dioxide from escaping skywards. Drawing from a plethora of peer-reviewed research, the document ranks 80 practical, mitigating measures—along with 20 near-future concepts—that could push back the oncoming storm.

Ranked in order of carbon emissions locked down by 2050, the usual suspects made the list. A moderate expansion of solar farms (number 8), onshore wind turbines (number 2), and nuclear power (number 20) would all save tens of billions of tons of equivalent carbon dioxide emissions. Increasing the number of people on plant-rich diets (number 4) and using electric vehicles (number 26) are effective carbon-cutting measures often proposed by climate hawks, and rightly so. The top spot went to managing refrigerants like HFCs, which are incredibly effective at trapping heat within our atmosphere.

But two lesser-known solutions also made this most practical of lists: the education of girls (number 6) and family planning (number 7). This is a stunning revelation, one that couldn’t be more pertinent, and yet, for the most part, discussions of mitigation and de-carbonization focus heavily on other matters, from the perceived perils and bona fide benefits of nuclear power, to just how quickly solar power is proliferating.

The link between the education of girls and a smaller carbon footprint isn’t as intuitively obvious as, say, phasing out fossil fuels. But dig a little deeper, and the evidence is overwhelming. It’s clear that getting more girls into school, and giving them a quality education, has a series of profound, cascading effects: reduced incidence of disease, higher life expectancies, more economic prosperity, fewer forced marriages, and fewer children. Better educational access and attainment not only equips women with the skills to deal with the worsening effects of climate change, but it gives them influence over how their communities mitigate it.

Although the education of girls in a small number of countries is at, or approaching, parity with boys, for most of the planet, this remains distressingly elusive. Poverty, along with community traditions, tends to hold back girls as boys are prioritized.

Then there's family planning, something that’s indivisible from the education of girls. The planet is overpopulated, and the demands of its citizens greatly exceed the natural resources provided by our environment.

Contraception and prenatal care is denied to women across the world, from those in the United States, to communities in low-income nations. It’s either not available, not affordable, or social and/or religious motives ensure that it’s banned or heavily restricted. As a consequence, the world’s population will rise rapidly, consume ever more resources, and power its ambitions using fossil fuels. Carbon dioxide will continue to accumulate in the atmosphere.

The education of girls and family planning can be considered as a single issue involving the empowerment of women in communities across the world. Drawdown calculated that, by taking steps toward universal education and investing in family planning in developing nations, the world could nix 120 billion tons of emissions by 2050. That’s roughly 10 times China’s annual emissions as of 2014, and it’s all because the world's population won't rise quite so rapidly.

It's farcical that this isn’t forming a major part of the debate over climate change mitigation. It’s not entirely clear why this is the case, but I’d suspect that regressive societal attitudes, along with the tendency of commentators to focus on the battle between different energy sectors, play suppressive roles in this regard.

Project Drawdown isn't the only group that has recently tied population growth to climate change. A study published last summer also found that having just one fewer child is a far more effective way for individuals in the developed world to shrink their carbon footprint than, say, recycling or eating less meat. For women in wealthy countries, these decisions are often freely made, and fertility rates in those countries are already fairly low. In low-income countries, such individual agency—not to mention contraception—is frequently absent, and fertility rates remain high.

Just as policymakers, climate advocates, and science communicators should pay attention to Drawdown’s findings, individuals should also do what they can to make sure such a solution comes to pass. Non-government organizations, like Hand In Hand International, Girls Not Brides, and the Malala Fund aren’t just uplifting women, but they’re helping to save the planet too, and they deserve support.

It's a grim assessment of civilization that, in 2018, humans are still grappling with gender equality. The world would clearly benefit if women were on par with men in every sector of society. We shouldn’t need any more convincing, but the fact that the social ascension of women would deal a severe blow to anthropogenic warming should be shouted from the rooftops.

Incidentally, the solutions in Drawdown also had associated economic benefits or costs. If 10 percent of the world's electricity were generated using solar farms, for example, it’d save $5 trillion by 2050. No such value could be put on educating girls and family planning—two human rights with incalculable benefits.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.

Read more: https://www.wired.com/story/to-stop-climate-change-educate-girls-and-give-them-birth-control/

It’s Time For a Serious Talk About the Science of Tech “Addiction”

To hear Andrew Przybylski tell it, the American 2016 presidential election is what really inflamed the public's anxiety over the seductive power of screens. (A suspicion that big companies with opaque inner workings are influencing your thoughts and actions will do that.) "Psychologists and sociologists have obviously been studying and debating about screens and their effects for years," says Przybylski, who is himself a psychologist at the Oxford Internet Institute with more than a decade's experience studying the impact of technology. But society's present conversation—"chatter," he calls it—can be traced back to three events, beginning with the political race between Hillary Clinton and Donald Trump.

Then there were the books. Well-publicized. Scary-sounding. Several, really, but two in particular. The first, Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, by NYU psychologist Adam Alter, was released March 2, 2017. The second, iGen: Why Today's Super-Connected Kids are Growing Up Less Rebellious, More Tolerant, Less Happy – and Completely Unprepared for Adulthood – and What That Means for the Rest of Us, by San Diego State University psychologist Jean Twenge, hit stores five months later.

Last came the turncoats. Former employees and executives from companies like Facebook worried openly to the media about the monsters they helped create. Tristan Harris, a former product manager at Google and founder of the nonprofit "Time Well Spent," spoke with this publication's editor in chief about how Apple, Google, Facebook, Snapchat, Twitter, Instagram—you know, everyone—design products to steal our time and attention.

Bring these factors together, and Przybylski says you have all the ingredients necessary for alarmism and moral panic. What you're missing, he says, is the only thing that matters: direct evidence.

Which even Alter, the author of that first bellwether book, concedes. "There's far too little evidence for many of the assertions people make," he says. "I've become a lot more careful with what I say, because I felt the evidence was stronger when I first started speaking about it."

"People are letting themselves get played," says Przybylski. "It's a bandwagon." So I ask him: When WIRED says that technology is hijacking your brain, and the New York Times says it's time for Apple to design a less addictive iPhone, are we part of the problem? Are we all getting duped?

"Yeah, you are," he says. "You absolutely are."

Of course, we've been here before. Anxieties over technology's impact on society are as old as society itself; video games, television, radio, the telegraph, even the written word—they were all, at one time, scapegoats or harbingers of humanity's cognitive, creative, emotional, and cultural dissolution. But the apprehension over smartphones, apps, and seductive algorithms is different. So different, in fact, that our treatment of past technologies fails to be instructive.

A better analogy is our modern love-hate relationship with food. When grappling with the promises and pitfalls of our digital devices, it helps to understand the similarities between our technological diets and our literal ones.

Today's technology is always with you; a necessary condition, increasingly, of existence itself. These are some of the considerations that led MIT sociologist Sherry Turkle to suggest avoiding the metaphor of addiction, when discussing technology. "To combat addiction, you have to discard the addicting substance," Turkle wrote in her 2011 book Alone Together: Why We Expect More from Technology and Less from Each Other. "But we are not going to 'get rid' of the Internet. We will not go ‘cold turkey’ or forbid cell phones to our children. We are not going to stop the music or go back to the television as the family hearth."

Food addicts—who speak of having to take the "tiger of addiction" out of the cage for a walk three times a day—might take issue with Turkle's characterization of dependence. But her observation, and the food addict's plight, speak volumes about our complicated relationships with our devices and the current state of research.

People from all backgrounds use technology—and no two people use it exactly the same way. "What that means in practice is that it's really hard to do purely observational research into the effects of something like screen time, or social media use," says MIT social scientist Dean Eckles, who studies how interactive technologies impact society's thoughts and behaviors. You can't just divide participants into, say, those with phones and those without. Instead, researchers have to compare behaviors between participants while accounting for variables like income, race, and parental education.

Say, for example, you’re trying to understand the impact of social media on adolescents, as Jean Twenge, author of the iGen book, has. When Twenge and her colleagues analyzed data from two nationally representative surveys of hundreds of thousands of kids, they calculated that social media exposure could explain 0.36 percent of the covariance for depressive symptoms in girls.

But those results didn’t hold for the boys in the dataset. What's more, that 0.36 percent means that 99.64 percent of the group’s depressive symptoms had nothing to do with social media use. Przybylski puts it another way: "I have the data set they used open in front of me, and I submit to you that, based on that same data set, eating potatoes has the exact same negative effect on depression. That the negative impact of listening to music is 13 times larger than the effect of social media."
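That 0.36 percent figure is easier to grasp as arithmetic. Here is a minimal sketch using only the percentage reported above; the implied correlation is back-calculated from it, not a number taken from the surveys themselves:

```python
import math

# The analysis reports social media exposure explaining 0.36 percent of
# the variance in girls' depressive symptoms. For a simple Pearson
# correlation r, the share of variance explained is r**2, so we can
# back out the implied correlation. Illustrative arithmetic only; the
# actual survey analysis is more involved.

variance_explained = 0.0036                 # 0.36 percent, as reported
r_implied = math.sqrt(variance_explained)   # the implied correlation

print(f"implied correlation r ~= {r_implied:.2f}")                 # 0.06
print(f"variance left unexplained: {1 - variance_explained:.2%}")  # 99.64%
```

A correlation of about 0.06 is the kind of signal that only looks impressive until you remember what it leaves on the table.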

In datasets as large as these, it's easy for weak correlational signals to emerge from the noise. And a correlation tells us nothing about whether new-media screen time actually causes sadness or depression. Which are the same problems scientists confront in nutritional research, much of which is based on similarly large, observational work. If a population develops diabetes but surveys show they’re eating sugar, drinking alcohol, sipping out of BPA-laden straws, and consuming calories to excess, which dietary variable is to blame? It could just as easily be none or all of the above.
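Why large samples make weak signals look compelling can be shown with a short simulation. The sample size and the r = 0.06 below are illustrative stand-ins, not figures from the actual surveys:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two completely independent variables still yield a nonzero sample
# correlation -- just a very small one.
x = rng.normal(size=n)
y = rng.normal(size=n)
r_noise = float(np.corrcoef(x, y)[0, 1])

# Under the null hypothesis the standard error of r is roughly
# 1/sqrt(n), so at this sample size a correlation of 0.06 (0.36% of
# variance) sits about 19 standard errors from zero: statistically
# unmistakable, practically negligible.
se = 1 / np.sqrt(n)

print(f"r between independent variables: {r_noise:+.4f}")
print(f"standard error of r at n={n}:    {se:.4f}")
print(f"z-score of r = 0.06:             {0.06 / se:.1f}")
```

This is the sense in which weak correlations "emerge from the noise": significance testing rewards sample size, not effect size.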

Decades ago, those kinds of correlational nutrition findings led people to demonize fat, pinning it as the root cause of obesity and chronic illness in the US. Tens of millions of Americans abolished it from their diets. It's taken a generation for the research to boomerang back and rectify the whole baby-bathwater mistake. We risk similar consequences, as this new era of digital nutrition research gets underway.

Fortunately, lessons learned from the rehabilitation of nutrition research can point a way forward. In 2012, science journalist Gary Taubes and physician-researcher Peter Attia launched a multimillion-dollar undertaking to reinvent the field. They wanted to lay a new epistemological foundation for nutrition research, investing the time and money to conduct trials that could rigorously establish the root causes of obesity and its related diseases. They called their project the Nutrition Science Initiative.

Today, research on the link between technology and wellbeing, attention, and addiction finds itself in need of similar initiatives. It needs randomized controlled trials, to establish causal links between the architecture of our interfaces and their impacts; and funding for long-term, rigorously performed research. "What causes what? Is it that screen time leads to unhappiness or unhappiness leads to screen time?" says Twenge. "So that’s where longitudinal studies come in." Strategies from the nascent Open Science Framework—like the pre-registration of studies, and data sharing—could help, too.

But more than any of that, researchers will need buy-in from the companies that control that data. Ours is a time of intense informational asymmetry; the people best equipped to study what's happening—the people who very likely are studying what's happening—are behind closed doors. Achieving balance will require openness and objectivity from those who hold the data; clear-headed analysis from those who study it; and measured consideration by the rest of us.

"Don't get me wrong, I'm concerned about the effects of technology. That's why I spend so much of my time trying to do the science well," Przybylski says. He says he's working to develop a research proposal strategy by which scientists could apply to conduct specific, carefully designed studies with proprietary data from major platforms. Proposals would be assessed by independent reviewers outside the control of Facebook and its peers. If the investigation shows the potential to answer an important question in a discipline, or about a platform, the researchers outside the company get paired with the ones inside.

"If it’s team based, collaborative, and transparent, it’s got half a chance in hell of working," Przybylski says.

And if we can avoid the same mistakes that led us to banish fat from our food, we stand a decent chance of keeping our technological diets balanced and healthy.

Your Technology and You

  • WIRED's editor-in-chief Nick Thompson spoke with Tristan Harris, the prophet behind the "Time Well Spent" movement that argues our minds are being hijacked by the technology we use.

  • One writer takes us through his extreme digital detox, a retreat that took him offline for a whole month.

  • Technology is demonized for making us distractible, but the right tech can help us form new, better digital habits—like these ones.

Read more: https://www.wired.com/story/its-time-for-a-serious-talk-about-the-science-of-tech-addiction/

Why No Gadget Can Prove How Stoned You Are

If you’ve spent time with marijuana—any time at all, really—you know that the high can be rather unpredictable. It depends on the strain, its level of THC and hundreds of other compounds, and the interaction between all these elements. Oh, and how much you ate that day. And how you took the cannabis. And the position of the North Star at the moment of ingestion.

OK, maybe not that last one. But as medical and recreational marijuana use spreads across the United States, how on Earth can law enforcement tell if someone they’ve pulled over is too high to be driving, given all these factors? Marijuana is such a confounding drug that scientists and law enforcement are struggling to create an objective standard for marijuana intoxication. (Also, I’ll say this early and only once: For the love of Pete, do not under any circumstances drive stoned.)

Sure, the cops can take you back to the station and draw a blood sample and determine exactly how much THC is in your system. “It's not a problem of accurately measuring it,” says Marilyn Huestis, coauthor of a new review paper in Trends in Molecular Medicine about cannabis intoxication. “We can accurately measure cannabinoids in blood and urine and sweat and oral fluid. It's interpretation that is the more difficult problem.”

You see, different people handle marijuana differently. It depends on your genetics, for one. And how often you consume cannabis, because if you take it enough, you can develop a tolerance to it. A dose of cannabis that may knock amateurs on their butts could have zero effect on seasoned users—patients who use marijuana consistently to treat pain, for instance.

The issue is that THC—what’s thought to be the primary psychoactive compound in marijuana—interacts with the human body in a fundamentally different way than alcohol. “Alcohol is a water-loving, hydrophilic compound,” says Huestis, who sits on the advisory board for Cannabix, a company developing a THC breathalyzer.1 “Whereas THC is a very fat-loving compound. It's a hydrophobic compound. It goes and stays in the tissues.” The molecule can linger for up to a month, while alcohol clears out right quick.

But while THC may hang around in tissues, it starts diminishing in the blood quickly—really quickly. “It's 74 percent in the first 30 minutes, and 90 percent by 1.4 hours,” says Huestis. “And the reason that's important is because in the US, the average time to get blood drawn [after arrest] is between 1.4 and 4 hours.” By the time you get to the station to get your blood taken, there may not be much THC left to find. (THC tends to linger longer in the brain because it’s fatty in there. That’s why the effects of marijuana can last longer than THC is detectable in breath or blood.)

So law enforcement can measure THC, sure enough, but not always immediately. And they’re fully aware that marijuana intoxication is an entirely different beast than drunk driving. “How a drug affects someone might depend on the person, how they used the drug, the type of drug (e.g., for cannabis, you can have varying levels of THC between different products), and how often they use the drug,” California Highway Patrol spokesperson Mike Martis writes in an email to WIRED.

Accordingly, in California, where recreational marijuana just became legal, the CHP relies on other observable measurements of intoxication. If an officer does field sobriety tests like the classic walk-and-turn maneuver, and suspects someone may be under the influence of drugs, they can request a specialist called a drug recognition evaluator. The DRE administers additional field sobriety tests—analyzing the suspect’s eyes and blood pressure to try to figure out what drug may be in play.

The CHP says it’s also evaluating the use of oral fluid screening gadgets to assist in these drug investigations. (Which devices exactly, the CHP declines to say.) “However, we want to ensure any technology we use is reliable and accurate before using it out in the field and as evidence in a criminal proceeding,” says Martis.

Another option would be to test a suspect’s breath with a breathalyzer for THC, which startups like Hound Labs are chasing. While THC sticks around in tissues, it’s no longer present in your breath after about two or three hours. So if a breathalyzer picks up THC, that would suggest the stuff isn’t lingering from a joint smoked last night, but one smoked before the driver got in a car.

This could be an objective measurement of the presence of THC, but not much more. “We are not measuring impairment, and I want to be really clear about that,” says Mike Lynn, CEO of Hound Labs. “Our breathalyzer is going to provide objective data that potentially confirms what the officer already thinks.” That is, if the driver was doing 25 in a 40 zone and they blow positive for THC, evidence points to them being stoned.

But you might argue that even using THC to confirm inebriation goes too far. The root of the problem isn’t really about measuring THC, it’s about understanding the galaxy of active compounds in cannabis and their effects on the human body. “If you want to gauge intoxication, pull the driver out and have him drive a simulator on an iPad,” says Kevin McKernan, chief scientific officer at Medicinal Genomics, which does genetic testing of cannabis. “That'll tell ya. The chemistry is too fraught with problems in terms of people's individual genetics and their tolerance levels.”

Scientists are just beginning to understand the dozens of other compounds in cannabis. CBD, for instance, may dampen the psychoactive effects of THC. So what happens if you get dragged into court after testing positive for THC, but the marijuana you consumed was also a high-CBD strain?

“It significantly compounds your argument in court with that one,” says Jeff Raber, CEO of the Werc Shop, a cannabis lab. “I saw this much THC, you're intoxicated. Really, well I also had twice as much CBD, doesn't that cancel it out? I don't know, when did you take that CBD? Did you take it afterwards, did you take it before?

“If you go through all this effort and spend all the time and money and drag people through court and spend taxpayer dollars, we shouldn't be in there with tons of question marks,” Raber says.

But maybe one day marijuana roadside testing won’t really matter. “I really think we're probably going to see automated cars before we're going to see this problem solved in a scientific sense,” says Raber. Don’t hold your breath, then, for a magical device that tells you you’re stoned.

1 UPDATE: 1/29/18, 2:15 pm ET: This story has been updated to disclose Huestis' affiliation with Cannabix.

Read more: https://www.wired.com/story/why-no-gadget-can-prove-how-stoned-you-are/

Real Heroes Have the Guts to Admit They’re Wrong

What do you do when you discover you’re wrong? That’s a conundrum Daniel Bolnick recently faced. He’s an evolutionary biologist, and in 2009 he published a paper with a cool finding: Fish with different diets have quite different body types. Biologists had suspected this for years, but Bolnick offered strong confirmation by collecting tons of data and plotting it on a chart for all to see. Science for the win!

The problem was, he’d made a huge blunder. When a colleague tried to replicate Bolnick’s analysis in 2016, he couldn’t. Bolnick investigated his original work and, in a horrified instant, recognized his mistake: a single miswritten line of computer code. “I’d totally messed up,” he realized.

But here’s the thing: Bolnick immediately owned up to it. He contacted the publisher, which on November 16, 2016, retracted the paper. Bolnick was mortified. But, he tells me, it was the right thing to do.

Why do I recount this story? Because I think society ought to give Bolnick some sort of a prize. We need moral examples of people who can admit when they’re wrong. We need more Heroes of Retraction.

Right now society has an epidemic of the opposite: too many people with a bulldog unwillingness to admit when they’re factually wrong. Politicians are shown evidence that climate change is caused by human activity but still deny our role. Trump fans are confronted with near-daily examples of his lies but continue to believe him. Minnesotans have plenty of proof that vaccines don’t cause autism but forgo shots and end up sparking a measles outbreak.

“Never underestimate the power of confirmation bias,” says Carol Tavris, a social psychologist and coauthor of Mistakes Were Made (but Not by Me). As Tavris notes, one reason we can’t admit we have the facts wrong is that it’s too painful to our self-conception as smart, right-thinking people—or to our political tribal identity. So when we get information that belies this image, we simply ignore it. It’s incredibly hard, she writes, to “break out of the cocoon of self-justification.”

That’s why we need moral exemplars. If we want to fight the power of self-delusion, we need tales of honesty. We should find and loudly laud the awesome folks who have done the painful work of admitting error. In other words, we need more Bolnicks.

Science, it turns out, is an excellent place to find such people. After all, the scientific method requires you to recognize when you’re wrong—to do so happily, in fact.

Granted, I don’t want to be too starry-eyed about science. The “replication crisis” still rages. There are plenty of academics who, when their experimental results are cast into doubt, dig in their heels and insist all is well. (And cases of outright fakery and fraud can make scholars less likely to admit their sin, as Ivan Oransky, the cofounder of the Retraction Watch blog, notes.) Professional vanity is powerful, and a hot paper gets a TED talk.

Still, the scientific lodestar still shines. Bolnick isn’t alone in his Boy Scout–like rectitude. In the past year alone, mathematicians have pulled papers when they’ve learned their proofs don’t hold and economists have retracted work after finding they’d misclassified their data. The Harvard stem-cell biologist Douglas Melton had a hit 2013 paper that got cited hundreds of times—but when colleagues couldn’t replicate the finding, he yanked it.

Fear of humiliation is a strong deterrent to facing error. But admitting you’re mistaken can actually bolster your cred. “I got such a positive response,” Bolnick told me. “On Twitter and on blog posts, people were saying, ‘Yeah, you outed yourself, and that’s fine!’” There’s a lesson there for all of us.

This article appears in the February issue. Subscribe now.

Read more: https://www.wired.com/story/real-heroes-have-the-guts-to-admit-theyre-wrong/