Meet the tech evangelist who now fears for our mental health

Belinda Parmar was a passionate advocate of the digital revolution but has started keeping her family's smartphones and laptops locked away to protect her loved ones. Is she right to be so worried?

In Belinda Parmar's bedroom there is a wardrobe, and inside that wardrobe there is a safe. Inside that safe is not jewellery or cash or personal documents, but devices: mobile phones, a laptop, an iPod, chargers and remote controls. Seven years ago, Parmar was the high priestess of tech empowerment. Founder of the consultancy Lady Geek, she saw it as her mission both to make tech work better for girls and women and to get more girls and women working for tech. Now she wants to talk about the damage it can cause to our mental health, to family life and to children, including her son Jedd, 11, and daughter Rocca, 10.

Parmar made her living and lived her life through these devices, so what happened to make her lock them up? Why did this tech evangelist lose her faith?

Strong women run in Parmar's family. She tells me her mother raised her and her sister alone after separating from their father when Parmar was two (she's now 44 and recently separated herself), while her grandmother, who had four children, ran her own business, a recruitment firm in Mile End, east London. She grew up believing anything was possible, which is why she felt driven to start Lady Geek when she was 35, after a man in a phone shop tried to sell her a pink, sparkly phone. "That was the way technology was sold and I thought: this is ridiculous. I was so angry that I went home and started a blog," she says.

The blog was called Lady Geek, and it launched a national conversation about sexism in the tech industry. Parmar left her job in advertising to turn it into a business, advising tech companies how to make their products better for women, and going into schools to encourage girls to go into the industry, for which she was awarded an OBE. "For me, tech was a leveller," she says. "You didn't need money, you didn't need status; it was an enabler of a more equal and more diverse society. This tiny bubble that most of us lived in had been popped and that was wonderful. That still is wonderful."

But certain aspects of her relationship with technology were not so wonderful. "I'd wake up and look at Twitter," she says. "I had two small children, and the first thing I should have been doing was going to see the kids, but I'd be looking at Twitter." She realised she was using social media for validation, to feed her ego. She began to think: if technology is an enabler, why am I just using it for things I don't like about myself?

As her children grew up, she started to be disturbed by her son's apparent compulsion to play video games. "Technology takes parents out of control. I can't compete with an amazing monster, that level of dopamine. He doesn't want to eat with us, to be with us, because it's not as exciting," she says. She bought a Circle, a device that allows you to manage the whole family's internet access, controlling which devices are online at which times and what they can view. "My son hid it," she says. She tried to turn the wifi off, but he stood guarding it, blocking her way. She still does not know where the Circle is. "In theory," she says, "if you've got compliant children, this would be perfect." Perhaps that is why her combination to the safe, with his devices and hers, is 12 digits long.

Safe keeping … Parmar locks her devices in for the night. Photograph: Teri Pengilley for the Guardian

She has reason to worry. When a friend's 12-year-old son showed signs of being addicted to video games, Parmar at first shrugged it off. Then he refused to go to school because he wanted to play all day, and then he spent eight weeks in a psychiatric institution. "He's 15 now. Nothing's changed. He still won't go to school," she says.

Professor Mark Griffiths, a psychologist and director of the International Gaming Research Unit at Nottingham Trent University, has spent 30 years studying technological addictions; he was the first to use that phrase, in 1995, to describe excessive person-machine relationships. "All behaviour is on a continuum from absolutely no problems at all," he says, "through to recreationally enjoying something, to excessively enjoying something, to problematic and then addictive and pathological at the far end." For someone to be genuinely addicted to technology, that technology has to be the single most important thing in their life: they do it to the neglect of everything else, and very few people fulfil that.

He is prolific (helped, he says, by having given up his mobile phone), publishing more than 100 papers last year alone; his most recent was on Instagram addiction. But he has his doubters. "There are academics who'll say this is complete nonsense, that if it doesn't involve ingestion of a psychoactive substance it can't possibly be an addiction." To that he retorts: what about gambling? "What is good for me is the established bodies are catching up," he says. This year, the World Health Organization added gaming disorder to its list of mental health conditions in ICD-11, the International Classification of Diseases.

Griffiths is careful to articulate the difference between believing that technological addictions are real, and believing that they are ubiquitous. Addiction is defined not by the amount of time spent doing the activity, but by the context in which you do it. "Parents tend to pathologise behaviour that isn't pathological; it's the technological generation gap," he says. Every week, concerned parents email him to say their daughter or son is addicted to social media, and when he asks if their children do their homework and chores, take exercise and have a wide network of friends, nearly always the answer is yes. But, they say, the kids are wasting three hours a day online. "What were you doing when you were their age? Because I was watching TV for three hours a day when there were only three channels." And then there are the parents who use social media just as much as their kids, and who shouldn't be surprised when kids end up copying exactly what they are doing.

While it may be reassuring that few of us would qualify as addicts by Griffiths' definition, the fashion for tech detoxes, and a recent survey that found that 75% of those aged 25 to 34 feel they use their phone too much, suggest many of us remain disturbed by our increasingly entwined relationship with technology. Richard Graham, a child and adolescent psychiatrist who runs the Tech Addiction Service at London's private Nightingale hospital, tells me: "We're psychologically cyborgs now, whether we like it or not. We're integrating these devices into our mental functioning, into our social and emotional lives." He quotes Chief Justice Roberts of the US supreme court: "The proverbial visitor from Mars might conclude they were an important feature of human anatomy."

While Graham feels the addiction model has its uses, he also draws on other ways of thinking about what is going on when we can't look away from a screen. He tells me about the student who decided to wind down one evening by playing a game of League of Legends, which would take about 40 minutes; the next time he looked at the clock, it was 5.30am. To explore this, Graham turned to flow psychology, a way of understanding the process of getting into the zone around a piece of work, which can be positive but can also make you lose track of space and time. This is not escapism: "A lot of gamers are thinking strategically, in a very deep way." He is also interested in the idea of hyperfocus, which some people with ADHD experience, as not so much a problem of not being able to concentrate, but of not being able to shift concentration.

He was influenced, too, by the work of Sherry Turkle, a social psychologist who has been researching the relationship between people and technology for three decades. Some of the participants in her studies, he says, were so attached to their consoles that they even found winning upsetting because it disrupts the connection with the machine. "There's a sense that they keep going because they don't want that connection to be lost." A psychoanalyst might compare this to the unconscious desire to be back in the womb, in a state of absolute connection.

For young people on the brink of or enduring the horrors of adolescence, like Belinda's son, Graham feels there could be something else going on: an identity crisis, trying to find a place in the world of near-adults. For these young people, games and social media aren't just fun, they're business. Whether they monetise their YouTube channel or not, this is a way to succeed, to harness digital capital and turn it into self-esteem. Griffiths suggests that screens might even be one of the reasons for the drop in youth crime over the past 25 years: "More youth are spending more time in front of technology, so they haven't got time to go out and commit acquisitive crime." Being very engrossing isn't necessarily bad.

These experts agree that abstinence is not the way forward: instead, we need to build what they call digital resilience, and learn to use technology in a measured, controlled way. "If someone goes diving and is deeply immersed in the ocean," Graham says, "you can't just bring them up quickly without significant effect. So rather than talking about digital detox, we need to think about digital decompression."

He recommends the American Academy of Pediatrics family media plan, which tells you how much sleep you need, and schedules a period of no-screen time an hour before bed, as well as clean periods in your day and clean zones in your home. "I think it can really help if everyone does it together. But adults can be more slippery than young people. They'll say: I need my phone for work, for my alarm. Unfortunately, with adolescents, anything like that smacks of hypocrisy and is incredibly damaging."

Young people can be responsive when adults change their own behaviour, he says. "I had quite a nice discussion with a young man and his mother. She told me she only has a Kindle, and I replied that the later models will disrupt your sleep as much as anything else. This absolutely thrilled the adolescent, who was much more willing to change his behaviour because I'd caught his mum out. And she was up for changing, too."

Parmar realises she has to set an example. "I love technology, but my own behaviour has changed because I'm more self-aware," she says. Hence her devices being in the safe, along with her son's. But looking around her sunlit bedroom, I see a laptop on the desk, a tablet next to her pillow. So your bedroom isn't screen-free, then, I say. She looks reflective, perhaps a little sheepish, and acknowledges that she likes to watch things on her tablet once the kids have gone to bed. She's still figuring things out, still coming to terms with the tough decisions we all need to make if we want to be more in control of our relationship with technology.

These are the conversations Parmar wants us to have, which is why she is launching a campaign and website, TheTruthAboutTech.com (no relation to a similarly named American campaign), that will offer practical tips and a space for people to share their stories. "This is my new mission. And I tell you what: dealing with my son every day, it reminds me, this is personal. This is really personal."

She also wants to hold to account the tech giants who are profiting from our over-engagement. She raises her voice: "I want to say, you've got to be more responsible. You can still make billions, but you should be thinking about how you can bring all the human values we want as a society into your products." She is furious with Reed Hastings, the CEO of Netflix, who last year said the company's main competitor was not Amazon Video or YouTube, but sleep. "That is disgraceful. He should be saying: my No 1 mission is to unite families in their living room around great content."

These companies, she says, are the most powerful brands in the world, more powerful than governments. "Imagine if a government had said that. They're digital dictators, and part of this campaign is getting them to stand up and be accountable." And what does that mean? It means rethinking Snapchat's streaks, which track how long users have been in daily communication, keeping them checking in for fear of losing out; it means rethinking YouTube's "Up next" queue, which automatically plays video after video; it means addiction ratings on video games. And that's barely scratching the surface.

How does she feel about her previous work, spreading the benefits of tech with no mention of its dangers? "I think I was naive," she says. "I didn't know enough. I feel good about the fact that I got more women into technology, but if I did it again, I would do it in a way that is more realistic, balancing the good and the bad."

I can't stop thinking about that safe. After all, a safe is built to protect our most precious possessions, or to lock up our most dangerous weapons. It feels extraordinary that something so everyday, so anodyne as a mobile phone could have such unnerving value, such threatening power. With their influence and wealth, why would the tech giants change from digital dictators to enlightened despots?

Parmar believes commercial pressures will compel them: two influential Apple shareholders are already threatening to sue the company for not limiting screen time. Graham proposes a darker alternative: "We could edge towards the equivalent of a parasite that drains its host so much that it kills itself, along with the host." He doesn't mean that these technology companies and their products will actually kill us, of course. "But if it's this relentless, the so-called attention economy will fall down, because we'll all be too exhausted."

Build your digital resilience

Four tips from addiction expert Richard Graham.

1 Be united as a family. Use the American Academy of Pediatrics family media plan, but remember: "The whole family needs to buy into this."

2 Plan activities outside the home. Go to the cinema, for example. "It's a shared experience, and there's a narrative to stoke the imagination."

3 Vary your digital diet. "People get stuck in very simple diets of media consumption, using the same platforms, games and messaging apps. Using different platforms is important; it's about moving between them and having a sense of ease of being able to do something, then stop and move on."

4 Live healthily. Sleep enough, eat well, drink enough water and do some physical activity every day.

Read more: https://www.theguardian.com/technology/2018/mar/15/meet-the-tech-evangelist-who-now-fears-for-our-mental-health

I made Steve Bannon's psychological warfare tool: meet the data war whistleblower

Christopher Wylie goes on the record to discuss his role in hijacking the profiles of millions of Facebook users in order to target the US electorate

The first time I met Christopher Wylie, he didn't yet have pink hair. That comes later. As does his mission to rewind time. To put the genie back in the bottle.

By the time I met him in person, I'd already been talking to him on a daily basis for hours at a time. On the phone, he was clever, funny, bitchy, profound, intellectually ravenous, compelling. A master storyteller. A politicker. A data science nerd.

Cambridge Analytica whistleblower: ‘We spent $1m harvesting millions of Facebook profiles’ video

Two months later, when he arrived in London from Canada, he was all those things in the flesh. And yet the flesh was impossibly young. He was 27 then (he's 28 now), a fact that has always seemed glaringly at odds with what he has done. He may have played a pivotal role in the momentous political upheavals of 2016. At the very least, he played a consequential role. At 24, he came up with an idea that led to the foundation of a company called Cambridge Analytica, a data analytics firm that went on to claim a major role in the Leave campaign for Britain's EU membership referendum, and later became a key figure in digital operations during Donald Trump's election campaign.

Or, as Wylie describes it, he was the gay Canadian vegan who somehow ended up creating "Steve Bannon's psychological warfare mindfuck tool".

In 2014, Steve Bannon, then executive chairman of the alt-right news network Breitbart, was Wylie's boss. And Robert Mercer, the secretive US hedge-fund billionaire and Republican donor, was Cambridge Analytica's investor. And the idea they bought into was to bring big data and social media to an established military methodology, information operations, then turn it on the US electorate.

It was Wylie who came up with that idea and oversaw its realisation. And it was Wylie who, last spring, became my source. In May 2017, I wrote an article headlined "The great British Brexit robbery", which set out a skein of threads that linked Brexit to Trump to Russia. Wylie was one of a handful of individuals who provided the evidence behind it. I found him, via another Cambridge Analytica ex-employee, lying low in Canada: guilty, brooding, indignant, confused. "I haven't talked about this to anyone," he said at the time. And then he couldn't stop talking.

By that time, Steve Bannon had become Trump's chief strategist. Cambridge Analytica's parent company, SCL, had won contracts with the US State Department and was pitching to the Pentagon, and Wylie was genuinely freaked out. "It's insane," he told me one night. "The company has created psychological profiles of 230 million Americans. And now they want to work with the Pentagon? It's like Nixon on steroids."

He ended up showing me a tranche of documents that laid out the secret workings behind Cambridge Analytica. And in the months following publication of my article in May, it was revealed that the company had reached out to WikiLeaks to help distribute Hillary Clinton's stolen emails in 2016. And then we watched as it became a subject of special counsel Robert Mueller's investigation into possible Russian collusion in the US election.

The Observer also received the first of three letters from Cambridge Analytica threatening to sue Guardian News and Media for defamation. We are still only just starting to understand the maelstrom of forces that came together to create the conditions for what Mueller confirmed last month was information warfare. But Wylie offers a unique, worm's-eye view of the events of 2016. Of how Facebook was hijacked, repurposed to become a theatre of war: how it became a launchpad for what seems to be an extraordinary attack on the US's democratic process.

Wylie oversaw what may have been the first critical breach. Aged 24, while studying for a PhD in fashion trend forecasting, he came up with a plan to harvest the Facebook profiles of millions of people in the US, and to use their private and personal information to create sophisticated psychological and political profiles. And then target them with political ads designed to work on their particular psychological makeup.

"We broke Facebook," he says.

And he did it on behalf of his new boss, Steve Bannon.

"Is it fair to say you hacked Facebook?" I ask him one night.

He hesitates. "I'll point out that I assumed it was entirely legal and above board."

Last month, Facebook's UK director of policy, Simon Milner, told British MPs on a select committee inquiry into fake news, chaired by Conservative MP Damian Collins, that Cambridge Analytica did not have Facebook data. The official Hansard extract reads:

Christian Matheson (MP for Chester): Have you ever passed any user information over to Cambridge Analytica or any of its associated companies?

Simon Milner: No.

Matheson: But they do hold a large chunk of Facebook's user data, don't they?

Milner: No. They may have lots of data, but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.

Alexander Nix, Cambridge Analytica CEO. Photograph: The Washington Post/Getty Images

Two weeks later, on 27 February, as part of the same parliamentary inquiry, Rebecca Pow, MP for Taunton Deane, asked Cambridge Analytica's CEO, Alexander Nix: "Does any of the data come from Facebook?" Nix replied: "We do not work with Facebook data and we do not have Facebook data."

And through it all, Wylie and I, plus a handful of editors and a small, international group of academics and researchers, have known that, at least in 2014, that certainly wasn't the case, because Wylie has the paper trail. In our first phone call, he told me he had the receipts, invoices, emails, legal letters: records that showed how, between June and August 2014, the profiles of more than 50 million Facebook users had been harvested. Most damning of all, he had a letter from Facebook's own lawyers admitting that Cambridge Analytica had acquired the data illegitimately.

Going public involves an enormous amount of risk. Wylie is breaking a non-disclosure agreement and risks being sued. He is breaking the confidence of Steve Bannon and Robert Mercer.

It's taken a rollercoaster of a year to help get Wylie to a place where it's possible for him to finally come forward. A year in which Cambridge Analytica has been the subject of investigations on both sides of the Atlantic: Robert Mueller's in the US, and separate inquiries by the Electoral Commission and the Information Commissioner's Office in the UK, both triggered in February 2017, after the Observer's first article in this investigation.

It has been a year, too, in which Wylie has been trying his best to rewind, to undo events that he set in motion. Earlier this month, he submitted a dossier of evidence to the Information Commissioner's Office and the National Crime Agency's cybercrime unit. He is now in a position to go on the record: the data nerd who came in from the cold.

There are many points where this story could begin. One is in 2012, when Wylie was 21 and working for the Liberal Democrats in the UK, then in government as junior coalition partners. His career trajectory has been, like most aspects of his life so far, extraordinary, preposterous, implausible.

Cambridge Analytica: the key players

Alexander Nix, CEO

An Old Etonian with a degree from Manchester University, Nix, 42, worked as a financial analyst in Mexico and the UK before joining SCL, a strategic communications firm, in 2003. From 2007 he took over the company's elections division, and claims to have worked on 260 campaigns globally. He set up Cambridge Analytica to work in America, with investment from Robert Mercer.

Aleksandr Kogan, data miner

Aleksandr Kogan was born in Moldova and lived in Moscow until the age of seven, then moved with his family to the US, where he became a naturalised citizen. He studied at the University of California, Berkeley, and got his PhD at the University of Hong Kong before joining Cambridge as a lecturer in psychology and expert in social media psychometrics. He set up Global Science Research (GSR) to carry out CA's data research. While at Cambridge he accepted a position at St Petersburg State University, and also took Russian government grants for research. He changed his name to Spectre when he married, but later reverted to Kogan.

Steve Bannon, former board member

A former investment banker turned alt-right media svengali, Steve Bannon was boss at website Breitbart when he met Christopher Wylie and Nix and advised Robert Mercer to invest in political data research by setting up CA. In August 2016 he became Donald Trump's campaign CEO. Bannon encouraged the reality TV star to embrace the populist, economic nationalist agenda that would carry him into the White House. That earned Bannon the post of chief strategist to the president and for a while he was arguably the second most powerful man in America. By August 2017 his relationship with Trump had soured and he was out.

Robert Mercer, investor

Robert Mercer, 71, is a computer scientist and hedge fund billionaire, who used his fortune to become one of the most influential men in US politics as a top Republican donor. An AI expert, he made a fortune with quantitative trading pioneers Renaissance Technologies, then built a $60m war chest to back conservative causes by using an offshore investment vehicle to avoid US tax.

Rebekah Mercer, investor

Rebekah Mercer has a maths degree from Stanford, and worked as a trader, but her influence comes primarily from her father's billions. The fortysomething, the second of Mercer's three daughters, heads up the family foundation which channels money to rightwing groups. The conservative megadonors backed Breitbart, Bannon and, most influentially, poured millions into Trump's presidential campaign.

Wylie grew up in British Columbia and as a teenager he was diagnosed with ADHD and dyslexia. He left school at 16 without a single qualification. Yet at 17, he was working in the office of the leader of the Canadian opposition; at 18, he went to learn all things data from Obama's national director of targeting, which he then introduced to Canada for the Liberal party. At 19, he taught himself to code, and in 2010, aged 20, he came to London to study law at the London School of Economics.

"Politics is like the mob, though," he says. "You never really leave. I got a call from the Lib Dems. They wanted to upgrade their databases and voter targeting. So, I combined working for them with studying for my degree."

Politics is also where he feels most comfortable. He hated school, but as an intern in the Canadian parliament he discovered a world where he could talk to adults and they would listen. He was the kid who did "the internet stuff" and within a year he was working for the leader of the opposition.

"He's one of the brightest people you will ever meet," a senior politician who's known Wylie since he was 20 told me. "Sometimes that's a blessing and sometimes a curse."

Meanwhile, at Cambridge University's Psychometrics Centre, two psychologists, Michal Kosinski and David Stillwell, were experimenting with a way of studying personality by quantifying it.

Starting in 2007, Stillwell, while a student, had devised various apps for Facebook, one of which, a personality quiz called myPersonality, had gone viral. Users were scored on the "big five" personality traits (Openness, Conscientiousness, Extroversion, Agreeableness and Neuroticism) and, in exchange, 40% of them consented to give him access to their Facebook profiles. Suddenly, there was a way of measuring personality traits across the population and correlating scores against Facebook likes across millions of people.
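
The mechanics are easy to illustrate. The sketch below is a minimal, hypothetical version of that kind of correlation exercise; the column names and data are invented for illustration and are not drawn from the myPersonality dataset itself.

```python
# Hypothetical sketch: correlating big-five quiz scores with Facebook likes.
# All names and data here are invented; the real myPersonality pipeline
# and dataset are not reproduced in the article.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_users = 1_000

# Quiz-derived trait scores (0-100 scale), one row per consenting user.
traits = pd.DataFrame({
    "openness": rng.integers(0, 101, n_users),
    "conscientiousness": rng.integers(0, 101, n_users),
})

# Binary matrix of page likes: 1 if the user liked the page, 0 otherwise.
likes = pd.DataFrame({
    "likes_page_a": rng.integers(0, 2, n_users),
    "likes_page_b": rng.integers(0, 2, n_users),
    "likes_page_c": rng.integers(0, 2, n_users),
})

# Correlate each like with a trait (point-biserial, since likes are binary).
# The strongest correlations are the kind of "odd patterns" described below.
correlations = likes.apply(lambda col: col.corr(traits["openness"]))
print(correlations.sort_values(ascending=False))
```

With real data, the same few lines scale to millions of rows, which is what made the approach so attractive.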

Examples, above and below, of the visual messages trialled by GSR's online profiling test. Respondents were asked: "How important should this message be to all Americans?"

The research was original, groundbreaking and had obvious possibilities. "They had a lot of approaches from the security services," a member of the centre told me. "There was one called You Are What You Like and it was demonstrated to the intelligence services. And it showed these odd patterns; that, for example, people who liked 'I hate Israel' on Facebook also tended to like Nike shoes and KitKats."

"There are agencies that fund research on behalf of the intelligence services. And they were all over this research. That one was nicknamed Operation KitKat."

The defence and military establishment were the first to see the potential of the research. Boeing, a major US defence contractor, funded Kosinski's PhD, and Darpa, the US government's secretive Defense Advanced Research Projects Agency, is cited in at least two academic papers supporting Kosinski's work.

But when, in 2013, the first major paper was published, others saw this potential too, including Wylie. He had finished his degree and had started his PhD in fashion forecasting, and was thinking about the Lib Dems. It is fair to say that he didn't have a clue what he was walking into.

"I wanted to know why the Lib Dems sucked at winning elections when they used to run the country up to the end of the 19th century," Wylie explains. "And I began looking at consumer and demographic data to see what united Lib Dem voters, because apart from bits of Wales and the Shetlands it's weird, disparate regions. And what I found is there were no strong correlations. There was no signal in the data."

"And then I came across a paper about how personality traits could be a precursor to political behaviour, and it suddenly made sense. Liberalism is correlated with high openness and low conscientiousness, and when you think of Lib Dems they're absent-minded professors and hippies. They're the early adopters; they're highly open to new ideas. And it just clicked all of a sudden."

Here was a way for the party to identify potential new voters. The only problem was that the Lib Dems werent interested.

"I did this presentation at which I told them they would lose half their 57 seats, and they were like: Why are you so pessimistic? They actually lost all but eight of their seats, FYI."

Another Lib Dem connection introduced Wylie to a company called SCL Group, one of whose subsidiaries, SCL Elections, would go on to create Cambridge Analytica (an incorporated venture between SCL Elections and Robert Mercer, funded by the latter). For all intents and purposes, SCL/Cambridge Analytica are one and the same.

Alexander Nix, then CEO of SCL Elections, made Wylie an offer he couldn't resist. "He said: We'll give you total freedom. Experiment. Come and test out all your crazy ideas."

Another example of the visual messages trialled by GSR's online profiling test.

In the history of bad ideas, this turned out to be one of the worst. The job was research director across the SCL group, a private contractor that has both defence and elections operations. Its defence arm was a contractor to the UK's Ministry of Defence and the US's Department of Defense, among others. Its expertise was in psychological operations, or psyops: changing people's minds not through persuasion but through "informational dominance", a set of techniques that includes rumour, disinformation and fake news.

SCL Elections had used a similar suite of tools in more than 200 elections around the world, mostly in undeveloped democracies that Wylie would come to realise were unequipped to defend themselves.

Wylie holds a British Tier 1 Exceptional Talent visa, a UK work visa given to just 200 people a year. He was working inside government (with the Lib Dems) as a political strategist with advanced data science skills. But no one, least of all him, could have predicted what came next. When he turned up at SCL's offices in Mayfair, he had no clue that he was walking into the middle of a nexus of defence and intelligence projects, private contractors and cutting-edge cyberweaponry.

"The thing I think about all the time is, what if I'd taken a job at Deloitte instead? They offered me one. I just think if I'd taken literally any other job, Cambridge Analytica wouldn't exist. You have no idea how much I brood on this."

A few months later, in autumn 2013, Wylie met Steve Bannon. At the time, he was editor-in-chief of Breitbart, which he had brought to Britain to support his friend Nigel Farage in his mission to take Britain out of the European Union.

What was he like?

"Smart," says Wylie. "Interesting. Really interested in ideas. He's the only straight man I've ever talked to about intersectional feminist theory. He saw its relevance straightaway to the oppressions that conservative, young white men feel."

Wylie meeting Bannon was the moment petrol was poured on a flickering flame. Wylie lives for ideas. He speaks 19 to the dozen for hours at a time. He had a theory to prove. And at the time, this was a purely intellectual problem. Politics was like fashion, he told Bannon.

"[Bannon] got it immediately. He believes in the whole Andrew Breitbart doctrine that politics is downstream from culture, so to change politics you need to change culture. And fashion trends are a useful proxy for that. Trump is like a pair of Uggs, or Crocs, basically. So how do you get from people thinking Ugh. Totally ugly to the moment when everyone is wearing them? That was the inflection point he was looking for."

But Wylie wasn't just talking about fashion. He had recently been exposed to a new discipline: information operations, which ranks alongside land, sea, air and space in the US military's doctrine of the five-dimensional battle space. His brief ranged across the SCL Group: the British government has paid SCL to conduct counter-extremism operations in the Middle East, and the US Department of Defense has contracted it to work in Afghanistan.

I tell him that another former employee described the firm as "MI6 for hire", and I'd never quite understood it.

"It's like dirty MI6 because you're not constrained. There's no having to go to a judge to apply for permission. It's normal for a market research company to amass data on domestic populations. And if you're working in some country and there's an auxiliary benefit to a current client with aligned interests, well, that's just a bonus."

When I ask how Bannon even found SCL, Wylie tells me what sounds like a tall tale, though it's one he can back up with an email about how Mark Block, a veteran Republican strategist, happened to sit next to a cyberwarfare expert for the US air force on a plane. "And the cyberwarfare guy is like, Oh, you should meet SCL. They do cyberwarfare for elections."

Steve Bannon: "He loved the gays," says Wylie. "He saw us as early adopters." Photograph: Tony Gentile/Reuters

It was Bannon who took this idea to the Mercers: Robert Mercer, the co-CEO of the hedge fund Renaissance Technologies, who used his billions to pursue a rightwing agenda, donating to Republican causes and supporting Republican candidates, and his daughter Rebekah.

Nix and Wylie flew to New York to meet the Mercers in Rebekah's Manhattan apartment.

"She loved me. She was like, Oh we need more of your type on our side!"

Your type?

"The gays. She loved the gays. So did Steve [Bannon]. He saw us as early adopters. He figured, if you can get the gays on board, everyone else will follow. It's why he was so into the whole Milo [Yiannopoulos] thing."

Robert Mercer was a pioneer in AI and machine translation. He helped invent algorithmic trading, which replaced hedge fund managers with computer programs, and he listened to Wylie's pitch. It was for a new kind of political message-targeting based on an influential and groundbreaking 2014 paper researched at Cambridge's Psychometrics Centre, called "Computer-based personality judgments are more accurate than those made by humans".

"In politics, the money man is usually the dumbest person in the room. Whereas it's the opposite way around with Mercer," says Wylie. "He said very little, but he really listened. He wanted to understand the science. And he wanted proof that it worked."

And to do that, Wylie needed data.

How Cambridge Analytica acquired the data has been the subject of internal reviews at Cambridge University, of many news articles and much speculation and rumour.

When Nix was interviewed by MPs last month, Damian Collins asked him:

Does any of your data come from Global Science Research company?

Nix: GSR?

Collins: Yes.

Nix: We had a relationship with GSR. They did some research for us back in 2014. That research proved to be fruitless and so the answer is no.

Collins: They have not supplied you with data or information?

Nix: No.

Collins: Your datasets are not based on information you have received from them?

Nix: No.

Collins: At all?

Nix: At all.

The problem with Nix's response to Collins is that Wylie has a copy of an executed contract, dated 4 June 2014, which confirms that SCL, the parent company of Cambridge Analytica, entered into a commercial arrangement with a company called Global Science Research (GSR), owned by Cambridge-based academic Aleksandr Kogan, specifically premised on the harvesting and processing of Facebook data, so that it could be matched to personality traits and voter rolls.

He has receipts showing that Cambridge Analytica spent $7m to amass this data, about $1m of it with GSR. He has the bank records and wire transfers. Emails reveal Wylie first negotiated with Michal Kosinski, one of the co-authors of the original myPersonality research paper, to use the myPersonality database. But when negotiations broke down, another psychologist, Aleksandr Kogan, offered a solution that many of his colleagues considered unethical. He offered to replicate Kosinski and Stillwell's research and cut them out of the deal. For Wylie it seemed a perfect solution. Kosinski was asking for $500,000 for the IP, but Kogan said he could replicate it and just harvest his own set of data. (Kosinski says the fee was to fund further research.)

An unethical solution? Dr Aleksandr Kogan. Photograph: Alex Kogan

Kogan then set up GSR to do the work, and proposed to Wylie they use the data to set up an interdisciplinary institute working across the social sciences. What happened to that idea, I ask Wylie. "It never happened. I don't know why. That's one of the things that upsets me the most."

It was Bannon's interest in culture as war that ignited Wylie's intellectual concept. But it was Robert Mercer's millions that created a firestorm. Kogan was able to throw money at the hard problem of acquiring personal data: he advertised for people who were willing to be paid to take a personality quiz on Amazon's Mechanical Turk and Qualtrics. At the end of which Kogan's app, called thisismydigitallife, gave him permission to access their Facebook profiles. And not just theirs, but their friends' too. On average, each seeder (the people who had taken the personality test, around 320,000 in total) unwittingly gave access to at least 160 other people's profiles, none of whom would have known or had reason to suspect.
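
The scale follows from simple arithmetic. Taking the article's figures at face value, a back-of-the-envelope calculation (sketched below, with the numbers as stated rather than independently verified) lands in the same territory as the more than 50 million profiles reported earlier.

```python
# Back-of-the-envelope arithmetic using the figures quoted in the article.
seeders = 320_000          # people paid to take the personality quiz
friends_per_seeder = 160   # "at least 160" friends' profiles exposed each

raw_total = seeders * friends_per_seeder
print(f"{raw_total:,}")    # 51,200,000

# Overlapping friend networks pull the real figure down, while the
# "at least" qualifier pushes it up; either way the order of magnitude
# matches the 50 million-plus profiles reported.
```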

What the email correspondence between Cambridge Analytica employees and Kogan shows is that Kogan had collected millions of profiles in a matter of weeks. But neither Wylie nor anyone else at Cambridge Analytica had checked that it was legal. It certainly wasn't authorised. Kogan did have permission to pull Facebook data, but for academic purposes only. What's more, under British data protection laws, it's illegal for personal data to be sold to a third party without consent.

"Facebook could see it was happening," says Wylie. "Their security protocols were triggered because Kogan's apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use. So they were like, Fine."

Kogan maintains that everything he did was legal and he had a close working relationship with Facebook, which had granted him permission for his apps.

Cambridge Analytica had its data. This was the foundation of everything it did next: how it extracted psychological insights from the seeders and then built an algorithm to profile millions more.
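
The article does not publish the model itself, but the two-step approach it describes, fitting a model on the seeders (who have both likes and quiz-derived trait scores) and then applying it to everyone whose likes were harvested, can be sketched in outline. Everything below, from the shapes of the data to the choice of ridge regression, is an illustrative assumption rather than Cambridge Analytica's actual pipeline.

```python
# Hypothetical sketch of the two-step profiling approach described above:
# 1) learn a mapping from like-vectors to trait scores on the "seeders";
# 2) predict trait scores for harvested profiles that have likes but no quiz.
# Shapes, data and the ridge regression estimator are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_seeders, n_harvested, n_pages = 5_000, 50_000, 300

# Binary like matrices: rows are people, columns are Facebook pages.
seeder_likes = rng.integers(0, 2, (n_seeders, n_pages)).astype(float)
harvested_likes = rng.integers(0, 2, (n_harvested, n_pages)).astype(float)

# Quiz-derived trait scores, available for the seeders only (e.g. openness).
seeder_openness = rng.uniform(0, 100, n_seeders)

# Step 1: fit likes -> trait on the people who took the quiz.
model = Ridge(alpha=1.0).fit(seeder_likes, seeder_openness)

# Step 2: score everyone else from their likes alone.
predicted_openness = model.predict(harvested_likes)
print(predicted_openness[:5])
```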

For more than a year, the reporting around what Cambridge Analytica did or didn't do for Trump has revolved around the question of psychographics, but Wylie points out: "Everything was built on the back of that data. The models, the algorithm. Everything. Why wouldn't you use it in your biggest campaign ever?"

In December 2015, the Guardian's Harry Davies published the first report about Cambridge Analytica acquiring Facebook data and using it to support Ted Cruz in his campaign to be the US Republican candidate. But it wasn't until many months later that Facebook took action. And then, all they did was write a letter. In August 2016, shortly before the US election, and two years after the breach took place, Facebook's lawyers wrote to Wylie, who had left Cambridge Analytica in 2014, and told him the data had been illicitly obtained and that GSR was not authorised to share or sell it. They said it must be deleted immediately.

Christopher Wylie: "It's like Nixon on steroids."

"I already had. But literally all I had to do was tick a box and sign it and send it back, and that was it," says Wylie. "Facebook made zero effort to get the data back."

There were multiple copies of it. It had been emailed in unencrypted files.

Cambridge Analytica rejected all allegations the Observer put to them.

Dr Kogan, who later changed his name to Dr Spectre but has subsequently changed it back to Dr Kogan, is still a faculty member at Cambridge University, a senior research associate. But what his fellow academics didn't know until Kogan revealed it in emails to the Observer (although Cambridge University says that Kogan told the head of the psychology department), is that he is also an associate professor at St Petersburg University. Further research revealed that he's received grants from the Russian government to research "Stress, health and psychological wellbeing in social networks". The opportunity came about on a trip to the city to visit friends and family, he said.

There are other dramatic documents in Wylie's stash, including a pitch made by Cambridge Analytica to Lukoil, Russia's second biggest oil producer. In an email dated 17 July 2014, about the US presidential primaries, Nix wrote to Wylie: "We have been asked to write a memo to Lukoil (the Russian oil and gas company) to explain to them how our services are going to apply to the petroleum business." Nix said that they "understand behavioural microtargeting in the context of elections" but that they were "failing to make the connection between voters and their consumers". The work, he said, would be shared with the CEO of the business, a former Soviet oil minister and associate of Putin, Vagit Alekperov.

"It didn't make any sense to me," says Wylie. "I didn't understand either the email or the pitch presentation we did. Why would a Russian oil company want to target information on American voters?"

Mueller's investigation traces the first stages of the Russian operation to disrupt the 2016 US election back to 2014, when the Russian state made what appears to be its first concerted efforts to harness the power of America's social media platforms, including Facebook. And it was in late summer of the same year that Cambridge Analytica presented the Russian oil company with an outline of its datasets, capabilities and methodology. The presentation had little to do with consumers. Instead, documents show it focused on election disruption techniques. The first slide illustrates how a rumour campaign spread fear in the 2007 Nigerian election, in which the company worked, by spreading the idea that the election would be rigged. The final slide, branded with Lukoil's logo and that of SCL Group and SCL Elections, headlines its deliverables: "psychographic messaging".

https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump

Facebook is more dangerous than ever. You can make it stop

Cambridge Analytica's chief executive officer Alexander Nix. His firm recently found itself in the spotlight for misrepresenting itself and harvesting data from millions of Facebook users to aid the Trump campaign.
Image: AFP/Getty Images

Remember the Marlboro Man? He was a sexy vision of the American west, created by a cigarette corporation to sell a fatal product. People knew this and used that product anyway, at great detriment to themselves and those around them who quietly inhaled toxic secondhand smoke, day into long night.

An agreement between states and tobacco companies banished the rugged cowboy at the end of the 1990s, but the symbol is useful even 20 years later as we contend with a less deadly but no less frightening corporate force. Social networks that many of us signed up for in simpler times — a proverbial first smoke — have become gargantuan archives of our personal data. Now, that data is collected and leveraged by bad actors in an attempt to manipulate you and your friends. 

The time for ignorance is over. We need social responsibility to counterbalance a bad product. The public learned in alarming detail this weekend how a Trump-aligned firm called Cambridge Analytica managed to collect data on 50 million people using Facebook. All, as the Guardian put it, to “predict and influence choices at the ballot box.” Individuals who opted into Cambridge Analytica’s service — which was disguised as a personality quiz on Facebook — made their friends vulnerable to this manipulation, as well.

There were better days on the social network. When you signed up for Facebook, it's likely because it was an alluring way for you to connect with old friends and share pictures. You hadn't ever imagined "Russian trolls" or "fake news" or, lord knows, "Cambridge Analytica." Chances are, you signed up before 2016, when Wired recently declared the social network had begun "two years of hell," thanks in no small part to reporting efforts from current Mashable staffer Michael Nuñez.

By then, the vast majority of Facebook’s 239 million monthly users in America had registered, had likely built an entire virtual life of friends and photos and status updates that were primed to be harvested by forces they couldn’t yet see or understand. Unlike those who continued smoking after the Marlboro Man arrived (two years after a seminal 1952 article in Reader’s Digest explained the dangers of cigarettes to the broad American public), these Facebook users lit up before they knew the cancer was coming.

Running with a health metaphor, Wired‘s “two years of hell” feature was promoted with a photo illustration by Jake Rowland that depicted a bloodied and bruised Mark Zuckerberg:

Image: photo illustration by jake rowland/esto. courtesy conde nast.

Zuckerberg may have been assaulted from all sides, but we — his users — took more of a licking than he did.

That's because Facebook's past two years have been all about ethical and technological crises that hurt users most of all. A favorite editor of mine hated that word, "users," because it made it sound as though we were talking about something other than people. I can agree with that, but also see now that "users" is the word of the moment: Facebook's problems extend forever out of the idea that we are all different clumps of data generation. Human life is incidental.

The photos you post are interpreted by Facebook’s programs to automatically recognize your face; the interests you communicate via text are collated and cross-examined by algorithms to serve you advertising. Our virtual social connections enrich this marketing web and make advertisers more powerful.

And many of us open the app to scroll without really knowing why. Facebook literally presents us with a “feed.” We are users the way drug addicts are users, and we’re used like a focus group is used to vet shades of red in a new can of Coca-Cola.

None of this has been secret for some time. Braver, more fed up, or perhaps more responsible users have deactivated their Facebook accounts before. But any change they made was on the basis of their experience as individuals. New revelations demand we think more in terms of our online societies.

I have exactly 1,000 Facebook friends, and about 10 actual, best friends I see on a regular basis. It wouldn’t have occurred to me to care much about those other 990 Facebook friends until revelations from the Cambridge Analytica scandal. We have to admit now that the choices we make on Facebook can directly impact others.

The social network’s policies have changed since Cambridge Analytica’s 2016 operation. But Facebook’s business model — gather data on people and profit from that data — hasn’t. We cannot expect it to. But a reasonable person would anticipate it’s only a matter of time until the next major ethical breach is revealed to the public.

We know from bad faith campaigns surrounding Brexit and the 2016 U.S. election that individual users are extremely susceptible to viral disinformation. But until now, it was less clear how Facebook’s own tools could be used by third parties to manipulate an entire network of friends in an attempt to manipulate voter behavior.

Your irresponsibility on Facebook can impact a lot of people. A link you share can catch on and influence minds even if it’s totally falsified; more to this immediate concern, a stupid quiz you take could have opened your friends’ information up in a way they’d never have expected.

You could throw the pack away and deactivate your Facebook account altogether. It will get harder the longer you wait — the more photos you post there, or apps you connect to it.

Or you could be judicious about what you post and share, and what apps you allow access to your account. There are easy ways to review this.

But just remember: There’s no precedent for a social network of this size. We can’t guess what catastrophe it sets off next. Will a policy change someday mean it’s open season on your data, even if that data has limited protections in the here and now? 

Be smart: It’s not just you, or me, out there alone.

Read more: https://mashable.com/2018/03/19/protect-yourself-and-your-friends-from-facebook/

The world is watching: How Florida shooting made U.S. gun control a global conversation

AR-15 "Sport" rifles on sale at deep discounts in an Arizona store.
Image: john moore/Getty Images

When you move to America from a country with more effective gun control laws, one of the first things you learn is how hard it is to talk to Americans — even on the sympathetic side of the political divide — about the gun issue. 

It was particularly difficult when I arrived on these shores in 1996, direct from living in Scotland during its (and Britain's) worst-ever school shooting. In the tiny town of Dunblane, a 43-year-old former shopkeeper and scoutmaster brought four handguns to a school gymnasium full of five-year-olds. He shot and killed 16 of them and their teacher, then turned his handgun on himself.

After Dunblane, the British plunged into a state of collective mourning that was at least as widespread as the better-known grieving process for Princess Diana the following year. (Americans don’t always believe that part, to which I usually say: the kids were five, for crying out loud. Five.)

In a country where nobody would dream of pulling public funding for studies into gun violence, the solution was amazingly rational and bipartisan. After a year, and an official inquiry into Dunblane, the Conservative government passed a sweeping piece of legislation restricting handguns. Then after Labour won the 1997 election, it passed another. Britain hasn’t seen a school shooting since. (Same with Australia, which also passed major gun control legislation in 1996). 

But trying to talk about all that in America over the last two decades, I’ve learned from experience, has been like touching the proverbial third rail: only tourists would be dumb enough to try it. Even gun control advocates now think they’re dealing with an intractable, generational problem. Many tell me that we need to tackle mental health services or gun fetishization in Hollywood movies first. The legislation route couldn’t possibly be that easy, they say.

But what if it is that easy? What if the rest of the world also loves Hollywood action movies and has mental health problems, but manages to have fewer shootings simply because it has fewer guns available? What if the rest of the world has been shouting at America for years that gun control is less intractable than you think — you just have to vote in large numbers for the politicians that favor it, and keep doing so at every election? 

If that’s the case, then perhaps some powerful, leveling international marketplace of ideas could help the U.S. see what everyone else has already seen. Something like social media. 

In one sense, Wednesday's massacre in Parkland, Florida — a school shooting as shocking and senseless as Dunblane — was evidence that America was further away from a gun control solution than ever. In 1996, buying an AR-15 assault rifle was illegal under federal law. Now, in Florida and many other states, a 19-year-old can walk into any gun store and walk out with this military-grade weapon of mass destruction.

Yet anecdotally, I have noticed one glimmer of hope. Since the last American gun massacre that got everyone talking, there has been a small shift in the online conversation. It has become a little more global. The students of Parkland have been broadcasting to the world via social media, and the world is taking notice. 

I’m not suggesting some kind of slam-dunk situation where every American on Twitter and Facebook and Snapchat has an epiphany about gun control because they’re more frequently interacting with people from other nations with different laws. 

But I am saying it’s noticeably harder for pro-gun accounts to spread lies about the situation in other countries without people from those countries chiming in. 

Meanwhile, there is a mountain of evidence that Russian bots and troll accounts are attempting to hijack the online conversation using the same playbook from the 2016 elections — manufacture conflict to destabilize American discourse. That means taking the most trollishly pro-NRA position they can think of, in a bid to counteract the large majority of Americans who want sensible gun control. 

So the voices from other countries are chiming in just in time. If anything, we need more of them to balance out cynical foreign influence in a pro-gun direction. 

How effective gun control can happen

Twenty years of trying to have this debate in the U.S. have worn me down. As you might expect, I’ve been on the receiving end of a lot of Second Amendment-splaining from the pro-gun lobby. (Yep, I’m very familiar with the two centuries of debate over the militia clause, thanks.) I’ve been told I didn’t understand the power of the NRA (which, again, I’m quite familiar with: the organization supported sensible gun restrictions until it was radicalized in 1977).

I’ve heard every argument you could imagine: the notion that British police must now be lording it over the poor defenseless population; the blinkered insistence that there must have been a rise in crime with illegal guns and legal knives now all the good people with guns have been taken out of the equation. (Violent crime is still too high in the UK, but it is a fraction of America’s total — and has declined significantly since 1996.) 

I no longer have the dream that a UK-Australia-style handgun ban would work here. There are as many as 300 million firearms in private hands, according to a 2012 Congressional estimate; even though most of them are concentrated in the hands of a small percentage of owners, it’s simply impractical to talk about removing a significant percentage of them from the equation. 

But if anything, I’m more aware of creative legal solutions: laws that require gun insurance the way we require car insurance, or tax ammunition, or hold manufacturers responsible for gun deaths. I’ve seen my adopted state of California implement some of the toughest gun laws in the nation, laws that just went into effect. The fight to prevent future massacres is just getting started.

And any time you want to talk about how it can happen, the rest of a shrinking world is listening — and ready to talk. 

Read more: https://mashable.com/2018/02/17/gun-control-social-media/

Inside the Two Years That Shook Facebook – and the World

One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.

“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.

All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”

So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.

A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.

Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.
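
The curation loop Fearnow’s team ran is easier to picture as a sketch: an algorithm proposes trending topics, and editors apply a handful of rules on top. The Python below is a toy illustration only; the rule logic, scores, and data structures are invented here, and Facebook’s real pipeline was far more elaborate.

```python
# Illustrative sketch of the curation pass described above: an algorithm proposes
# trending candidates, and human-style rules pick the best story, drop hoaxes,
# and inject major events the algorithm missed. Names and data are invented.
HOAX_SITES = {"theonion.com", "hoax-news.example"}

def curate(trending_candidates, missed_major_events):
    curated = []
    for topic, stories in trending_candidates.items():
        # Rule 2: keep spoofs and hoaxes out, even if they went viral.
        legit = [s for s in stories if s["source"] not in HOAX_SITES]
        if not legit:
            continue
        # Rule 1: pick the most newsworthy story for the topic (here: a score).
        curated.append(max(legit, key=lambda s: s["newsworthiness"]))
    # Rule 3: inject big stories the algorithm was slow to surface.
    curated.extend(missed_major_events)
    return curated

feed = curate(
    {"Trump": [
        {"source": "nytimes.com", "newsworthiness": 0.9},
        {"source": "theonion.com", "newsworthiness": 0.99},
    ]},
    missed_major_events=[{"source": "apnews.com", "newsworthiness": 1.0}],
)
print([s["source"] for s in feed])  # ['nytimes.com', 'apnews.com']
```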

Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.

The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.

That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.

The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.

The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.

This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)

The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.

In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.

II

By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.

The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.

But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.

Facebook’s Reckoning

Two years that forced the platform to change

by Blanca Myers

March 2016: Facebook suspends Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.

May 2016: Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.

July 2016: Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.

August 2016: Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.

November 2016: Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.

December 2016: Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.

September 2017: Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.

October 2017: Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.

November 2017: Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.

January 2018: Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”

In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.

So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”

It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”

This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.

In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.

III

In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”

But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.

Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.

The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.

Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.

According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.

The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.

Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”

Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.

The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.

IV

Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.

Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.

Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)

When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.

But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?

The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary."

V

While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.

In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.

Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.
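
Facebook’s actual Lookalike Audiences system is proprietary, but the general idea the campaign relied on, profiling a seed audience and then ranking everyone else by similarity to it, can be sketched in a few lines. Everything below (the trait vectors, the similarity measure, the names) is invented for illustration, not a description of Facebook’s method.

```python
# Toy illustration of the "lookalike audience" idea described above.
# This is NOT Facebook's algorithm; it just shows the general shape:
# profile a seed audience, then rank everyone else by similarity to it.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    # Illustrative, made-up traits encoded as numbers (e.g. age bucket,
    # engagement score, interest-category affinities).
    traits: tuple

def centroid(users):
    """Average trait vector of the seed audience."""
    dims = len(users[0].traits)
    return tuple(sum(u.traits[d] for u in users) / len(users) for d in range(dims))

def similarity(a, b):
    """Negative squared distance: higher means more alike."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def lookalike_audience(seed, pool, size):
    """Rank the pool by similarity to the seed's centroid and keep the top N."""
    profile = centroid(seed)
    ranked = sorted(pool, key=lambda u: similarity(u.traits, profile), reverse=True)
    return ranked[:size]

# Example: a seed of newsletter signups and a broader pool of users.
seed = [User("a", (1.0, 0.9)), User("b", (0.8, 1.0))]
pool = [User("c", (0.9, 0.95)), User("d", (0.1, 0.2)), User("e", (0.7, 0.8))]
print([u.user_id for u in lookalike_audience(seed, pool, size=2)])  # ['c', 'e']
```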

Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.

Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”

VI

It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.

Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.

Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”

A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”

At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.

Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.

Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.

In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.

Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”

Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”

VII

Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.

Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.

“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”

As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”

The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.

The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.

VIII

Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.

And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.

Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.

This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.

As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.

Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.

IX

One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”

That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.

During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.)

On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”

One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.

Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
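
The security team’s internal tooling has never been published, but the kind of filtering described here, flagging purchases paid in rubles or placed from Russian-language browsers and grouping the hits by account, looks roughly like the sketch below. The field names and records are made up for illustration.

```python
# Minimal sketch of the filtering described above: flag ad purchases that were
# paid in rubles or placed from browsers set to Russian, then group by account.
# Field names and records are invented; Facebook's internal tooling is not public.
from collections import Counter

transactions = [
    {"account": "heart_of_texas", "currency": "RUB", "browser_lang": "ru"},
    {"account": "blacktivist",    "currency": "USD", "browser_lang": "ru"},
    {"account": "local_bakery",   "currency": "USD", "browser_lang": "en"},
]

def looks_suspicious(tx):
    return tx["currency"] == "RUB" or tx["browser_lang"] == "ru"

# Count suspicious purchases per account; accounts with repeated hits form
# the kind of cluster the paragraph describes.
hits = Counter(tx["account"] for tx in transactions if looks_suspicious(tx))
print(hits.most_common())  # [('heart_of_texas', 1), ('blacktivist', 1)]
```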

Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.

When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”

Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.

This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.

X

To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”

McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.

And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”

After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said.

Read more: https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/

The Formula for Phone Addiction Might Double As a Cure

In September 2007, 75 students walked into a classroom at Stanford. Ten weeks later, they had collectively amassed 16 million users, $1 million in advertising revenue, and a formula that would captivate a generation.

The class—colloquially known as "The Facebook Class"—and its instructor, BJ Fogg, became Silicon Valley legends. Graduates went on to work and design products at Uber, Facebook, and Google. Some even started companies with their classmates. But a decade later, some of the class’ teachings are in the crosshairs of our society-wide conversation about phone addiction.

Fogg's research group, the Persuasive Technology Lab, looks at how technology can persuade users to take certain actions. Early experiments centered around questions like, “How can you get people to stop smoking using SMS?” But when Facebook, then a three-year-old startup, opened its platform to third-party developers, Fogg saw a perfect opportunity to test some of his theories in the wild.

After a few lectures on the basics of behavioral psychology, students began building Facebook apps of their own. They used psychological tools like reciprocity and suggestion to engineer apps that could, for example, send your friends a virtual hug or get your friends to join an online game of dodgeball. At the time, Facebook had just begun promoting third-party apps in its news feed. The iPhone launched in the summer of 2007; the App Store would follow a year later. Fogg’s teachings became a playbook on how to make apps stick just as apps were becoming a thing.

“Within the first month, there were already millions of people using these apps,” says Dan Greenberg, a teaching assistant for the class who later went on to found the ad-tech platform Sharethrough with some of his classmates. After some students decided to monetize their apps with banner ads, apps like Greenberg’s began bringing in as much as $100,000 a month in ad sales. Fogg had a secret sauce, and it was the ideal time to serve it.

A decade ago, Fogg’s lab was a toll booth for entrepreneurs and product designers on their way to Facebook and Google. Nir Eyal, the bestselling author of the book Hooked, sat in lectures next to Ed Baker, who would later become the Head of Growth at both Facebook and Uber. Kevin Systrom and Mike Krieger, the founders of Instagram, worked on projects alongside Tristan Harris, the former Google Design Ethicist who now leads the Time Well Spent movement. Together, in Fogg's lab, they studied and developed the techniques to make our apps and gadgets addictive.

Now, we are navigating the consequences. From Facebook's former president claiming that Silicon Valley’s tools are “ripping apart the social fabric of society” to France formally banning smartphones in public schools, we are starting to reexamine the sometimes toxic relationships we have with our devices. Looking at the source of product designers’ education may help us understand the downstream consequences of their creations—and how to reverse them.

Engineering Addiction

BJ Fogg is an unlikely leader for a Silicon Valley movement. He’s a trained psychologist and twice the age of the average entrepreneur with whom he works. His students describe him as energetic, quirky, and committed to using tech as a force for good: In the past, he's taught classes on making products to promote peace and using behavior design to connect with nature. But every class begins with his signature framework, Fogg’s Behavior Model. It suggests that we act when three forces—motivation, trigger, and ability—converge.

In Silicon Valley, the model answers one of product designers’ most enduring questions: How do you keep users coming back? Say you're a Facebook user, with the Facebook app on your phone. You're motivated to make sure photos of you posted online aren't ugly, you get triggered by a push notification from Facebook that you’ve been tagged, and your phone gives you the ability to check right away. You open the Facebook app.
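For readers who want that convergence idea made concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not anything from Fogg’s lab: the 0-to-1 scales, the threshold, and the names Moment and behavior_occurs are invented for this example.

```python
from dataclasses import dataclass

# Illustrative sketch of the Fogg Behavior Model: a behavior tends to occur
# when motivation, ability, and a trigger converge at the same moment.
# The 0-1 scales and the threshold are assumptions made for this example.

@dataclass
class Moment:
    motivation: float  # 0.0 (none) to 1.0 (very high), e.g. dread of an ugly tagged photo
    ability: float     # 0.0 (very hard) to 1.0 (effortless), e.g. the app is already on your phone
    triggered: bool    # a prompt arrived, e.g. a push notification

def behavior_occurs(m: Moment, threshold: float = 0.5) -> bool:
    """No trigger, no behavior; otherwise motivation and ability together
    must clear a threshold (harder actions demand more motivation)."""
    if not m.triggered:
        return False
    return m.motivation * m.ability >= threshold

# The tagged-photo scenario described above.
tagged_photo = Moment(motivation=0.9, ability=0.95, triggered=True)
print(behavior_occurs(tagged_photo))  # True: you open the Facebook app
```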

Proponents of the model, like Eyal, believe that the framework can be extremely powerful. “If you understand people’s internal triggers, you can try to satiate them," he says. "If you’re feeling lonely, we can help you connect. If you’re feeling bored, we can help entertain."

But critics say that companies like Facebook have taken advantage of these psychological principles to capture human attention. Especially in advertising-supported businesses, where more time spent in app equals more profit, designers can optimize for values that don’t always align with their users’ well-being.

Tristan Harris, one of the most vocal whistleblowers of tech’s manipulative design practices (and a graduate of Fogg's lab), has grappled with this idea. In 2012, while working at Google, he created a 144-slide presentation called “A Call to Minimize Distraction & Respect Users’ Attention.” The deck, which outlined ways in which small design elements like push notifications can become massive distractions at scale, went viral within the company. Over 5,000 Googlers viewed the presentation, which Harris parlayed into a job as Google’s first “design ethicist.”

Harris left Google in 2015 to expand the conversation around persuasive design outside of Mountain View. “Never before has a handful of people working at a handful of tech companies been able to steer the thoughts and feelings of a billion people,” he said in a recent talk at Stanford. “There are more users on Facebook than followers of Christianity. There are more people on YouTube than followers of Islam. I don’t know a more urgent problem than this.”

Harris has channeled his beliefs into his advocacy organization, Time Well Spent, which lobbies the tech industry to align with societal well-being. Three years later, his movement has begun to gain steam. Just look at Facebook, which recently restructured its news feed algorithm to prioritize the content that people find valuable (like posts from friends and family) over the stuff that people mindlessly consume (like viral videos). In a public Facebook post, Mark Zuckerberg wrote that one of Facebook’s main priorities in 2018, “is making sure the time we all spend on Facebook is time well spent.” Even, he said, if it's at the cost of how much time you spend on the platform.

Facebook's reckoning shows that companies can redesign their products to be less addictive—at the very least, they can try. Perhaps in studying the model that designers used to hook us to our phones, we can understand how those principles can be used to unhook us as well.

Finding the Cure

Fogg acknowledges that our society has become addicted to smartphones, but he believes consumers have the power to unhook themselves. “No one is forcing you to bring the phone into the bedroom and make it your alarm clock,” he says. “What people need is the motivation.”

Eyal’s next book, Indistractable, focuses on how to do that, using Fogg's model in reverse. It takes the same three ideas—motivation, trigger, and ability—and reorients them toward ungluing us from our phones. For example, you can remove triggers from certain apps by adjusting your notification settings. (Or better yet, turn off all your push notifications.) You can decrease your ability to access Facebook by simply deleting the app from your phone.
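In the same toy terms as the earlier sketch, the reversal amounts to pushing one of the three forces below the point where the behavior fires. The function and numbers below are illustrative assumptions, not material from the book; they simply restate the notification and app-deletion examples above.

```python
def behavior_occurs(motivation: float, ability: float, triggered: bool,
                    threshold: float = 0.5) -> bool:
    """Toy version of the model: no trigger means no behavior, and
    motivation times ability must clear an assumed threshold."""
    return triggered and motivation * ability >= threshold

# The habit as it stands: notifications on, app one tap away.
print(behavior_occurs(motivation=0.7, ability=0.9, triggered=True))   # True

# Remove the trigger: push notifications switched off.
print(behavior_occurs(motivation=0.7, ability=0.9, triggered=False))  # False

# Reduce ability: the app is deleted, so checking means a clumsy browser login.
print(behavior_occurs(motivation=0.7, ability=0.3, triggered=True))   # False
```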

“People have the power to put this stuff away and they always have,” says Eyal. “But when we preach powerlessness, people believe that.”

Others, like Harris and venture capitalist Roger McNamee, disagree. They believe corporations’ interests are so intertwined with advertisers’ demands that, until we change the system, companies will always find new ways to maximize consumers’ time spent with their apps. “If you want to fix this as quickly as possible, the best way would be for founders of these companies to change their business model away from advertising,” says McNamee, who was an early investor in Facebook and mentor to Zuckerberg. “We have to eliminate the economic incentive to create addiction in the first place.”

There is merit to both arguments. The same methods that addict people to Snapchat might keep them learning new languages on Duolingo. The line between persuasion and coercion can be thin, but a blanket dismissal of behavior design misses the point. The larger discussion around our relationship with our gadgets comes back to aligning use with intent—for product designers and users.

Where We Go Next

Harris and McNamee believe manipulative design has to be addressed on a systems level. The two are advocating for government regulation of internet platforms like Facebook, in part as a public health issue. Companies like Apple have also seen pressure from investors to rethink how gadget addiction is affecting kids. But ultimately, business models are hard to change overnight. As long as advertising is the primary monetization strategy for the web, there will always be those who use persuasive design to keep users around longer.

So in the meantime, there are tangible steps we can all take to break the loop of addiction. Changing your notification settings or turning your phone to grayscale might seem like low-hanging fruit, but it's a place to start.

“It’s going to take the companies way longer than it would take you to do something about it,” says Eyal. “If you hold your breath and wait, you’re going to suffocate.”


Read more: https://www.wired.com/story/phone-addiction-formula/

Apple is the only thing that can save kids from the app store

Not terribly far off.
Image: mashable/bob Al-Greene

It’s fine-tuned to be as engaging as possible, so that you don’t want to put it down once you pick it up. It lacks real substance and is engineered to be desirable. With a sufficiently random reward system giving you unpredictable dopamine hits, it’s designed to be addictive. You don’t need it, but you want it, and companies spend billions of dollars each year to convince you to consume it.

Once you tap, you can’t stop. 

Our addiction to smartphones has taken on many of the hallmarks of the junk food that has become ubiquitous in our culture. 

Adults have enough problems with this kind of self-control in the face of advertising. For kids, it’s an entirely unreasonable ask, and a growing body of research speaks to the profound negative effects smartphones are having on them. Excessive smartphone use is correlated with depression and other negative mental health outcomes.

The discussion around smartphones and children continued this week after two investor groups urged Apple in an open letter to create settings that would allow parents meaningful control over their children’s use of smartphones, prompting renewed scrutiny of this technology’s effects on an inherently vulnerable population. The company currently offers only a limited set of parental controls.

Of course you’re not going to get rid of your smartphone. You probably use it for work, the camera is great, and it makes all the times you have to wait during the day vastly more bearable. Even for kids they can be entertaining and helpful in age-appropriate ways. And it’s a great way to go out to dinner and have everyone live through the experience.

Image: DOMINICK REUTER/AFP/Getty Images

The problem is, as with junk food, it’s a constant battle to regulate how much you interact with your smartphone, which apps you put on it, and what attention you let it claim, with billions of dollars on the line for your attention and your kids’ attention.

Apple has responsibility here. It designed and produced the most revolutionary consumer tech product in the history of the world, and did so with seemingly little to no concern about the consequences. Consider in-app purchases, where kids were able to run up their parents’ credit card bills with little to no oversight.

“The developer and us have the same exact interest, which is to get as many apps out in front of as many iPhone users as possible,” Jobs said when he introduced the app store.

Apple’s app ecosystem worked out maybe better than Jobs could have ever imagined. Millions of apps are now locked in competition for a finite amount of attention, similar to food companies battling each other for “stomach share.” If you’re not checking Instagram, you’re on Snapchat. Either way, you – kid or adult – are being manipulated with an array of psychological tools so you just. keep. eating.

When Jobs first announced the app store, he was clear that certain types of apps, like porn or invasions of privacy, wouldn’t be allowed. Those standards are long overdue for updating. 

We’re starting to recognize that the problem of junk food and poor diets is not about people’s willpower, but about controlling the environment — especially when dealing with kids, who can’t reasonably be expected to understand that an ad for candy doesn’t have their best interests in mind. A similar dynamic is at play in the app store. 

Understanding what junk food and advertising does to children has led to some action. Industry groups in Europe and America have pledged to restrict or alter their food advertising to children. School districts continue to ban junk food from their vending machines and improve the quality of school lunches.  

The government could (and maybe should) step in here, but there’s a logical choke point to this system: Apple. 

Tony Fadell, who helped invent the iPod and iPhone, recently called on Apple to take action.

These stakeholders want Apple to build into iOS the ability to regulate screen time and content. It’s a compelling proposition, and probably one of the only viable ways to make a dent in the issue. You can’t eat the Oreos nearly as easily if they aren’t in the house.

Giving parents these tools “poses no threat to Apple, given that this is a software (not hardware) issue and that, unlike many other technology companies, Apple’s business model is not predicated on excessive use of your products,” the authors of the investor letter wrote.

Whether Apple makes meaningful changes to the way iOS is built is an open question, despite its pledge to do so. Major social media companies have also started to face similar pressure, though their business models, which rely on engagement, make it less likely they’ll make big tweaks. Facebook recently had an “it’s not us, it’s you” reaction to research showing that use of its service was correlated with poorer mental health outcomes. YouTube Kids, the supposedly child-friendly gated community, has been found to be full of violent and inappropriate content. Definitely don’t let your kids go to actual YouTube, or they might stumble on a dead body.

Apple isn’t YouTube or Facebook, but it’s encountering a version of those companies’ “platform problem.” It can no longer disavow responsibility for what its customers have access to as it reaps the benefits of providing that access. 

The tech industry may not yet be at such a serious inflection point – the science is less clear and the harms more diffuse than cancer or even diet. But if there is one group that society will take action to protect and regulate, it’s children. If Apple models its products on the healthy snack whose logo it puts on them, the company may find it inspires a new level of devotion and loyalty that would put its current fanbase to shame.

If it doesn’t, it may end up a digital version of a trashy vending machine. Not a place where you want to spend your money. 

Read more: http://mashable.com/2018/01/10/smartphones-junk-food-kids-addiction/

Facebook acquires anonymous teen compliment app tbh, will let it run

Facebook wants tbh to be its next Instagram. Today, Facebook announced it’s acquiring positivity-focused polling startup tbh and will allow it to operate somewhat independently with its own brand.

tbh had scored 5 million downloads and 2.5 million daily active users in the past nine weeks with its app that lets people anonymously answer kind-hearted multiple-choice questions about friends who then receive the poll results as compliments. You see questions like “Best to bring to a party?,” “Their perseverance is admirable?” and “Could see becoming a poet?” with your uploaded contacts on the app as answer choices.

tbh has racked up more than 1 billion poll answers since officially launching in limited states in August, mostly from teens and high school students, and spent weeks topping the free app charts. When we profiled tbh last month in the company’s first big interview, co-creator Nikita Bier told us, “If we’re improving the mental health of millions of teens, that’s a success to us.”

Financial terms of the deal weren’t disclosed, but TechCrunch has heard the price paid was less than $100 million and won’t require any regulatory approval. As part of the deal, tbh’s four co-creators — Bier, Erik Hazzard, Kyle Zaragoza and Nicolas Ducdodon — will join Facebook’s Menlo Park headquarters while continuing to grow their app with Facebook’s cash, engineering, anti-spam, moderation and localization resources.

However, the tbh founders will become formal Facebook employees, with Facebook email addresses, as opposed to running more independently like Instagram and WhatsApp, which have their own buildings and emails.

The tbh team wrote in an announcement post that “When we met with Facebook, we realized that we shared many of the same core values about connecting people through positive interactions. Most of all, we were compelled by the ways they could help us realize our vision and bring it to more people.”

In a statement to TechCrunch, Facebook wrote: “tbh and Facebook share a common goal — of building community and enabling people to share in ways that bring us closer together. We’re impressed by the way tbh is doing this by using polling and messaging, and with Facebook’s resources tbh can continue to expand and build positive experiences.”

It’s interesting that Facebook opted to acquire tbh rather than clone it, since it has been aggressively copying other hit teen apps like Houseparty recently. While Facebook’s Snapchat clone Instagram Stories has achieved massive popularity, other knock-offs it has made haven’t fared as well.

With tbh’s strong brand name, distinctive design and explosive early traction, Facebook seems to have decided it was better to team up than face off.

[Correction: tbh has 2.5 million daily active users, not 4 million as we originally published; Bier had implied the higher figure in a tweet he deleted after this article was published.]

From 14 failures to Facebook

Bier originally started tbh parent company Midnight Labs back in 2010. The app studio tried a slew of products, including a personal finance app, a college chat app and a personality test. Eventually the company took a small seed round in 2013 from investors, including Greylock via partner Josh Elman, Bee Partners and Indicator Ventures. But nothing took off, and Midnight Labs was running out of money.

With just 60 days of cash left, the company decided to build something at the intersection of the positivity it saw lacking in anonymous apps like Secret and Yik Yak, and the honesty teens craved as seen in the TBH trend where social network users request candid feedback from their friends: tbh was born. “We shipped it to one school in Georgia. Forty percent of the school downloaded it the first day,” said Bier.

The biggest problem quickly became how to keep users engaged with the app; tbh already limits you to answering a few questions at a time, so you beg for more. Its first big feature release came this week with the addition of direct messaging. This lets you message someone who chose you as an answer, and they have the option of revealing their identity to you.

But trying to simultaneously keep the servers online, write new questions and build the next sticky feature was a tall task for such a small team.

Now tbh will have Facebook to help it scale while keeping existing users entertained. The app will remain free to download on iOS and Android, and the brand will remain the same.

Having the backup from Facebook should let Bier and his team breathe a little easier. There are a lot of advantages to joining forces with Facebook:

  • Cash – Instead of raising another round in hopes of keeping its momentum alive, tbh will have deep pockets to draw from in order to chase its mission of making teens happier.
  • Engineering – Scaling to 2.5 million daily users with just four co-creators and some contractors is insanely hard work. If the app keeps crashing, users will disappear. Now tbh will have Facebook’s enormous, elite engineering team behind it.
  • Anti-spam – As tbh revs up its new direct messaging feature, it will have Facebook to assist it in weeding out spam with the technologies it’s been developing for over a decade.
  • Localization – tbh hasn’t even expanded to every state in the U.S. yet. With Facebook’s help, it can bring more locations on board, and, as it goes international, it will be able to adapt to new languages and cultures more quickly.
  • Content moderation – tbh has promised to only allow poll questions where the friend you pick will feel good about being chosen. That will be easier with more brains coming up with questions, more eyes reviewing them for misuse and a diverse team to write the right questions for different places.

These resources have helped Instagram grow well over 10X its size, to 800 million users, since Facebook bought it in 2012, while WhatsApp has grown from 450 million to 1.3 billion users since Facebook acquired it in 2014.

But rather than waiting and watching until tbh climbed to be worth nearly $1 billion like Instagram or $19 billion like WhatsApp, Facebook swooped in early. The last thing it needed was tbh ending up being bought by Snapchat.

“Nikita and his team have figured out a lot about how teens are using products. This is one of the few that’s gotten this kind of adoption, and that should be celebrated,” tbh investor Josh Elman says. “Hopefully this shows that there’s still room to get lots of people adopting new mobile experiences.”

Before tbh, most social media was about competing for likes or glorifying your offline life. But those little dopamine-inducing notifications had little true connection on the other side, and it’s easy to think you’re uncool when everyone else seems to be having so much fun. Indeed, tbh filled the gap between being “liked” and actually feeling appreciated.

“We think the next milestone is thinking about social platforms in terms of love and positivity,” Bier told me. “We think that’s what’s been missing from social products since the inception of the internet.”

For more on tbh, read our interview with its co-founder about its creation and mission.

Read more: https://techcrunch.com/2017/10/16/facebook-acquires-anonymous-teen-compliment-app-tbh-will-let-it-run/

Facebook drops no-vote stock plan, Zuck will sell shares to fund philanthropy

Mark Zuckerberg has gotten so rich that he can fund his philanthropic foundation and retain voting control without Facebook having to issue a proposed non-voting class of stock that faced shareholder resistance. Today Facebook announced that it’s withdrawn its plan to issue Class C no-vote stock and has resolved the shareholder lawsuit seeking to block the corporate governance overhaul.

Instead, Zuckerberg says that because Facebook has become so valuable, he can sell a smaller allotment of his stake in the company to deliver plenty of capital to his Chan Zuckerberg Initiative foundation that aims to help eradicate disease and deliver personalized education to all children.

“Over the past year and a half, Facebook’s business has performed well and the value of our stock has grown to the point that I can fully fund our philanthropy and retain voting control of Facebook for 20 years or more,” Zuckerberg writes. Facebook’s share price has increased roughly 45 percent, from $117 to $170, since the Class C stock plan was announced, with Facebook now valued at $495 billion.

Mark Zuckerberg, Priscilla Chan and their daughters Max and August

“We are gratified that Facebook and Mr. Zuckerberg have agreed not to proceed with the reclassification we were challenging,” writes Lee Rudy, the partner at Kessler Topaz Meltzer & Check LLP that was representing the plaintiffs in the lawsuit seeking to block the no-vote share creation. Zuckerberg was slated to testify in the suit later this month, but now won’t have to. “This result is a full victory for Facebook’s stockholders, and achieved everything we could have hoped to obtain by winning a permanent injunction at trial.”

“I want to be clear: this doesn’t change Priscilla and my plans to give away 99% of our Facebook shares during our lives. In fact, we now plan to accelerate our work and sell more of those shares sooner,” Zuckerberg wrote. “I anticipate selling 35-75 million Facebook shares in the next 18 months to fund our work in education, science, and advocacy.” That equates to $5.95 billion to $12.75 billion worth of Facebook shares Zuckerberg will liquidate.
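The arithmetic behind that range follows from the roughly $170 share price cited above. A quick back-of-the-envelope check (the figures are the article’s; the script is only illustrative):

```python
# Rough check of the share-sale figures quoted above,
# assuming the approximately $170 share price mentioned earlier.
share_price = 170  # USD, approximate

for shares in (35_000_000, 75_000_000):
    value_billions = shares * share_price / 1e9
    print(f"{shares:,} shares -> about ${value_billions:.2f} billion")

# 35,000,000 shares -> about $5.95 billion
# 75,000,000 shares -> about $12.75 billion
```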

When Zuckerberg announced the plan in April 2016, he wrote that being a founder-led company where he controls enough votes to always steer Facebook’s direction, rather than bowing to public shareholders, lets Facebook “resist the short term pressures that often hurt companies.” By issuing the non-voting shares, “I’ll be able to keep founder control of Facebook so we can continue to build for the long term, and Priscilla and I will be able to give our money to fund important work sooner.”

A spokesperson for the Chan Zuckerberg Initiative told TechCrunch that this outcome is very good for the foundation, because it provides more predictability to its funding. The plan will also allow Zuckerberg to deliver cash to the CZI sooner, which its new CFO Peggy Alford will be able to allocate between its health, education and advocacy projects.

With the new plan to sell shares, it’s unclear what might happen to Zuckerberg’s iron grip on Facebook’s future in “20 years or more.”

Dropping the Class C shares plan may be seen as a blow to Facebook board member Marc Andreessen, who Bloomberg revealed had coached Zuckerberg through pushing the proposed plan through the rest of the board. But given Zuckerberg’s power, Andreessen is unlikely to be ousted unless the Facebook CEO wants him gone.

Zuckerberg strolls through the developer conference of Oculus, the VR company he pushed Facebook to acquire

For the foreseeable future, though, Zuckerberg will have the power to shape Facebook’s decisions. His business instincts have proven wise over the years. Acquisitions he orchestrated that seemed pricey at first — like Instagram and WhatsApp — have been validated as their apps grow to multiples of their pre-buy size. And Zuckerberg’s relentless prioritization of the user experience over that of advertisers and outside developers has kept the Facebook community deeply engaged instead of pushed away with spam.

Zuckerberg’s ability to maintain power could allow him to continue to make bold or counter-intuitive decisions without shareholder interference. But the concentration of power also puts Facebook in a precarious position if Zuckerberg were to be tarnished by scandal or suddenly unable to continue his duties as CEO.

Zuckerberg warned investors when Facebook went public that “Facebook was not originally created to be a company. It was built to accomplish a social mission.” And yet Facebook has flourished into one of the world’s most successful businesses in part because shareholders weren’t allowed to sell its ambitions short.

Read more: https://techcrunch.com/2017/09/22/facebook-sharing/

Facebook’s original video is something publishers are actually excited for

“Virtually Dating” is a five-episode series produced by Condé Nast Entertainment.
Image: conde nast entertainment

For all of Facebook’s big talk about video, it was still just part of the almighty News Feed.

Publishers hoping to capture a moment of a user’s attention looked for thumb-stopping moments, which gave rise to a new and not-terribly compelling format of video that remains endemic to Facebook.

Watch is something different. Facebook’s new original video program features TV-like shows made by media companies. Perhaps most importantly, the shows are showcased in a brand new section of the social network.

That’s enough to convince publishers, who have spent years contorting to fit into Facebook’s plans, that Watch could be big.

“We are really excited,” said Dawn Ostroff, president of Condé Nast Entertainment, which is producing a dating show with a virtual reality twist for Watch. “This is a new opportunity, a new type of content. [Facebook’s] trying to open up a whole new area for content makers.”

Oren Katzeff, Tastemade’s head of programming, offered similar excitement. The food-focused media company has created six shows for Facebook Watch.

“We’re able to be a part of appointment viewing, and that’s huge,” Katzeff said.

That enthusiasm is quite unlike how publishers have previously behaved when asked about their work with and on Facebook. Typically, there’s a roll of the eyes, a sigh, and a list of grievances.

“The problem with Facebook’s entire ‘news team’ is that they’re glorified client services people,” the head of digital operations at a major news outlet told Mashable at F8, the company’s annual developer conference in April.

Now, there’s a new sense of hope among the media industry. Facebook’s massive scale has always tempted publishers, but revenue has been elusive. Facebook’s new program, with its emphasis on quality content and less on thumb-bait, seems ready-made for high-end ads. These original shows, in concept, also compete with what’s available live on TV and bingeable on Netflix and Hulu, platforms that most publishers haven’t cracked.

“I think it is where people will go to watch on-demand programming and live news, and I intend Cheddar to be the leading live news player on Watch,” Jon Steinberg, CEO of business news show Cheddar, wrote in a private Twitter message.

Facebook’s Watch platform

Image: facebook

Simultaneously, there’s little stress for publishers about potential revenue, for now. Facebook has guaranteed minimum earnings for each episode, according to an executive at a participating publisher who could not be named since financial discussions are private. Facebook not only pays a licensing fee to publishers but also will split revenue from mid-roll ads.

It’s not the first time Facebook has cut checks for publishers to support video efforts. Last year, Facebook paid publishers, including Mashable, to produce live videos, requiring a minimum number of minutes streamed per month. (Mashable is also a Watch partner.)

But Facebook’s live video effort was slow to start, and publishers didn’t reap rewards, especially when it came to the return on their investments, several participants told Mashable.

It wasn’t all their fault or Facebook’s. For one, Facebook users weren’t really used to going to the site or the app for live video. Since then, Facebook has released several products, including a redesigned version of the current video tab and a TV app, both of which better support the new ecosystem. Publishers’ series will be spotlighted on Facebook’s new tab for shows, for example. The experience is slowly being rolled out to users over the next month.

Participating publishers are going all in.

Tastemade produced six shows over the last few months and is still wrapping up a couple. Three are food focused: Kitchen Little, Struggle Meals, and Food To Die For. Two are more home and lifestyle: Move-In Day and Safe Deposit. The sixth is a late-night comedy show with celebrity interviews, hosted by an animated taco, called Let’s Taco Bout It.

“Tomas grew up as a Taco, and he had adopted parents, and his life goal has been to discover who his true parents are. He tries to relate with his guests,” Katzeff said.

Tomas Taco

Image: tastemade

What’s exciting here is not just an animated taco, but the fact that these publishers are well positioned to scale these tacos… err video series.

Maybe an animated taco won’t appeal to all 2 billion of Facebook’s users, but it doesn’t necessarily need to. Unlike TV, these shows aren’t locked into specific networks with a specific time-slot. Rather, they can be directed to actual people, based on their interests (Facebook likes) and demographic information.

“With Facebook Watch, the era of audience parting has truly arrived,” wrote Nick Cicero of Delmondo, a Facebook media solutions partner for video analytics.

Unlike TV, Facebook has a built-in platform for conversation. Ostroff of Condé Nast Entertainment said she believed Facebook greenlighted Virtually Dating, a show where blind dates take place in a virtual reality world, for the Watch platform because of the potential for online conversation.

“If it works, it was something that could go viral or a show that everyone could weigh in on,” Ostroff said. “We’re excited about learning, learning how the viewer and the consumer is going to use [Watch]. What’s going to succeed and what’s not.”

No one is saying it’s been easy. Several publishers told Mashable they have been careful to make sure they are staying in budget. They also noted that it is still a test, one that they will be closely monitoring. Now that the shows are near launch, publishers said they will need to focus on promotion.

Watch “is really great for those who were actually able to get into the program,” said Jarrett Moreno, cofounder of ATTN, which has created Health Hacks starring Jessica Alba and We Need to Talk with Nev Schulman and Laura Perlongo. “It’s a priority for Facebook. They’ve emphasized that.”

A priority, for now.

Read more: http://mashable.com/2017/08/12/facebook-watch-original-video-publishers-pitchfork/