Sociologists Examine Hackathons and See Exploitation

As the gospel of Silicon Valley-style disruption spreads to every sector of the economy, so too has the industry’s favorite competitive ritual: the hackathon. The contests, where small teams of “hackers” build tech products in marathon all-night coding sessions, are a hallmark of Silicon Valley culture. Recall Facebook’s most famous hackathon, thrown on the eve of its IPO to show the world that the demands of being a public company would not kill the “hacker way” at One Hacker Way.

Now, sponsors ranging from Fortune 500 conglomerates to conference organizers host them. Even New York Fashion Week and the Vatican have held hackathons. They’ve become part of a “toolkit” for large organizations seeking a veneer of innovation. Some organizers view them as recruiting opportunities, others as a chance to evangelize their company’s technology platforms, and still others simply want to be associated with something cool and techie. They’re so common that hackathon enthusiast Mike Swift started Major League Hacking, a company dedicated to organizing hackathons and building community around them. Last year the company provided services for more than 200 hackathons with more than 65,000 participants.

The phenomenon is attracting attention from academics. One pair of sociologists recently examined hackathons and emerged with troubling conclusions. Sharon Zukin, professor of sociology at Brooklyn College and CUNY Graduate Center, spent a year observing seven hackathons, mostly sponsored by corporations, in New York City, interviewing participants, organizers, and sponsors. In a study called “Hackathons As Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the ‘New’ Economy,” she and co-author Max Papadantonakis argue that hackathons create “fictional expectations of innovation that benefits all,” which Zukin writes is a “powerful strategy for manufacturing workers’ consent in the ‘new’ economy.” In other words, institutions use the allure of hackathons, with sponsors, prizes, snacks, and potential for career advancement, to get people to work for free.

To Zukin, this is a problem, because hackathons are making the “hacker subculture” they promote into the new work norm. That norm, which coincides with the labor market trend of less-secure employment, encourages professional workers to adopt an “entrepreneurial” career and market themselves for continually shifting jobs. The trend also includes motivating workers with Soviet-style slogans venerating the pleasures of work.

Zukin tells WIRED the unpaid labor of hackathons recalls sociological research on fashion models, who are also expected to spend time promoting themselves on social media, and party girls, who go to nightclubs with male VIPs in hopes of boosting acting or modeling aspirations. Participants are combining self-investment with self-exploitation, she says. It’s rational given the demands of the modern labor market. It’s just precarious work.

Zukin was surprised to find that hackathon participants almost universally view the events positively. Hackathons are often social, emotionally charged, and a way to learn. Swift says his company found that 86 percent of student participants say they learn skills they can’t get in the classroom, and a third of them believe skills they learned at a hackathon helped them get a job.

Zukin observed hackathon sponsors fueling the “romance of digital innovation by appealing to the hackers’ aspiration to be multi-dimensional agents of change,” she writes. The themes of exhaustion (participants often work for 24 or 36 hours straight), achievement, and the belief that this work could bring future financial reward were prevalent at the events she observed.

To the tech industry and its imitators, these are normal ideas. To a sociologist, they’re exploitative. “From my perspective, they’re doing unpaid work for corporations,” Zukin says. (Even hackathons thrown by schools, non-profits, publishers, and civic organizations tend to have corporate sponsors.)

Viewed through a sociologist’s framework, Zukin says the events’ aspirational messaging—typical Silicon Valley-style futurebabble about changing the world—feels dystopian. Hackathons show “the fault lines of an emerging production system” by embodying a set of “quasi-Orwellian” ideas that are prevalent in the current economic climate, she writes. Zukin encapsulates those ideas in slogans that could be at home on the walls of a WeWork lobby: “Work is Play,” “Exhaustion is Effervescent,” and “Precarity is Opportunity.”

Zukin only examined hackathons that were open to the public. But many companies, like Facebook, host internal hackathons over weekends. Zukin notes that such events, in which employees may feel obligated to participate, are a form of labor control. “They’re just trying to squeeze the innovation out of [their workers],” she says.

Hackathons reflect an asymmetry of power between corporate sponsors and participants, the study argues. Sponsors outsource work, crowdsource innovation, and burnish their reputations while concealing their business goals.

I noticed this phenomenon while reporting on a dozen hackathons between 2012 and 2014. At a 2013 college-sponsored hackathon, it seemed that everyone involved wanted something from the participants: Sponsors wanted to lay the groundwork for potential investments, hire the hackers, convince them to use particular software to build tools and apps, and boost their own reputations by offering cash, snacks and other prizes.

Swift, of Major League Hacking, doesn’t think sponsor involvement is bad for participants. “The corporate sponsors enable these amazing experiences that the students have at these hackathons,” he says. Their sponsorship “demonstrates that the companies understand developers, care about their interest and goals, and are investing in this community,” he says. He notes that because of sponsors, participants get to work with tools they might not have access to, like VR headsets or expensive software platforms.

The irony is that, regardless of whether hackathon participants are willingly exploiting themselves or simply having fun and learning, they rarely produce useful innovations that last beyond the event’s 36 hours. Startup lore has plenty of tales of successful companies that were created at hackathons—a popular example is GroupMe, the messaging app created at a TechCrunch hackathon, which sold to Skype for $85 million one year later. But such examples are rare. “Hacks are hacks, not startups,” Swift wrote in a blog post. “Most hackers don’t want to work on their hackathon project after the hackathon ends.”

Hackathons are not particularly effective as recruiting strategies for large companies, either, the study finds. But they sell the dream of self-improvement via technology, something companies want to be associated with regardless of any immediate benefit to their bottom line. As symbols of innovation, they’re not likely to go anywhere anytime soon.

Read more: https://www.wired.com/story/sociologists-examine-hackathons-and-see-exploitation/

For China’s Wealthy, Singapore Is the New Hong Kong

When more than 80 of China’s wealth managers gathered recently at the Shangri-La hotel on Singapore’s resort island of Sentosa, the chatter during tea breaks kept returning to one theme: Hong Kong is starting to be eclipsed by Singapore as the favorite destination for the wealth of China’s rich.

At stake for banks in both cities is a huge pile of money. China’s high-net-worth individuals control an estimated $5.8 trillion—almost half of it already offshore, according to consulting firm Capgemini SE. For some, the city-state of Singapore is preferable because it’s at a safer distance from any potential scrutiny from authorities in Beijing, according to interviews with several wealth managers. Multiple private banking sources in Singapore, who would not comment on the record because of the sensitivity of the subject, report seeing increased flows at the expense of Hong Kong.

The rich may be feeling exposed by changing banking practices. Hong Kong has signed tax transparency agreements that for the first time last year required all banks to report their account holders’ information to Hong Kong tax officials, in preparation for giving that information to 75 jurisdictions, including mainland China. Singapore will have similar agreements with 61 jurisdictions, but they don’t include either Hong Kong or Beijing, meaning accounts and account holders in Singapore aren’t visible to the Chinese government. “Many rich people from the mainland believe Hong Kong is still a part of China, after all,” says Xia Chun, chief research officer at Noah Holdings Ltd. of Hong Kong, an asset management service provider. “They think there’s no difference in putting money in Hong Kong, compared to Beijing.”

At the same time, more Chinese banks in Hong Kong are “trying to synchronize their internal systems with those on the mainland to improve service efficiency,” says Eva Law, the Hong Kong-based founder of the Association of Private Bankers in Greater China Region. “This also means the clients’ information will become more transparent and the mainland can identify fund flows more easily, or will have fuller and faster access to your asset holdings, thus enabling easier investigation and tracing.”

Overall, Hong Kong remains the primary destination for China’s offshore money, according to a Capgemini survey, followed by Singapore and New York. Yet the number of Chinese high-net-worth individuals who view Hong Kong as their preferred overseas place of investment is down to 53 percent, from 71 percent two years ago, according to a survey in July by Bain & Co. More than 20 percent favor Singapore, up from 15 percent two years ago. “Singapore is the Zurich of the East,” says Xiao Xiao, the Beijing-based chief operating officer of Chinese wealth manager Fortunes Capital.

“We see Singapore, not Hong Kong, as the bridgehead of China’s investment overseas,” says Li Qinghao, co-founder of NewBanker Tech Consulting, which organized the Sentosa conference last year. About 78 percent of S$2.7 trillion ($1.9 trillion) in assets under management in Singapore comes from overseas sources. Morgan Stanley, JPMorgan Chase & Co., and other firms with big private banking operations are building up their teams of China relationship managers in Singapore.

China has been tightening its grip on Hong Kong. A year ago, Chinese financier Xiao Jianhua was reported by local media to have been seized from a Hong Kong hotel by Chinese authorities and taken to the mainland. The incident followed the disappearance of several Hong Kong booksellers who sold books critical of China’s Communist Party and were reported to have been taken involuntarily across the border.

Then there are the increased restrictions on Hong Kong’s financial practices, such as a 2016 crackdown on sales of certain types of insurance products to mainland Chinese. The products pay dividends over a number of years and are essentially viewed as investments—and potentially a way to send money out of China and evade capital controls. “The Hong Kong market is now heavily affected by mainland China,” says Guan Huanyu, president of Beijing-based wealth manager Zhenghe Holdings, who attended the Sentosa event.

While Hong Kong’s Securities & Futures Commission doesn’t break down the origin of funds, its data show that growth in the city’s private banking business has been slowing. Hong Kong logged 10.7 percent growth in private banking assets under management in 2016, down from 18 percent in 2015.

Singapore has additional attractions for the wealthy of China. Mandarin is one of its four official languages, and it has world-class health facilities and international schools. Not far from the Shangri-La Hotel, Sentosa’s casinos are a popular draw for Chinese tourists. Mainland Chinese were the largest foreign buyers of luxury properties in Singapore during the first half of last year, according to consultancy Cushman & Wakefield. Real estate is far cheaper than in Hong Kong.

But mainly, the rich like to diversify—not only among asset classes, but among political regimes. “Most of our clients have undergone a shift from poor to rich,” says Kou Quan, vice president at Tianjin-based Xinmao S&T Investment Group. “And they’re all worried about becoming poor again.”

    BOTTOM LINE – Hong Kong’s financial sector is becoming more entwined with the mainland, prompting more and more of China’s rich to turn to Singapore.

    Read more: http://www.bloomberg.com/news/articles/2018-02-06/for-china-s-wealthy-singapore-is-the-new-hong-kong

    Inside the Two Years That Shook Facebook—and the World

    One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.

    “ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.

    All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”

    So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.

    A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.

    Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.

    Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.

    The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.

    That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.

    The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.

    The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.

    This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)

    The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.

    In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.

    II

    By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.

    The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.

    But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.

    Facebook’s Reckoning: Two years that forced the platform to change (by Blanca Myers)

    • March 2016: Facebook suspends Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.
    • May 2016: Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.
    • July 2016: Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.
    • August 2016: Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.
    • November 2016: Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.
    • December 2016: Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.
    • September 2017: Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.
    • October 2017: Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.
    • November 2017: Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.
    • January 2018: Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”

    In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.

    So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”

    It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”

    This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

    And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.

    In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.

    III

    In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”

    But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.

    Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.

    The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.

    Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.

    According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.

    The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.

    Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”

    Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.

    The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.

    IV

    Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.

    Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.

    Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)

    When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.

    But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?

    The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”

    V

    While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.

    In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.

    Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.
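    The passage above describes, at a high level, how “lookalike” targeting works: build a profile from a seed list, then rank other users by how closely they resemble it. The sketch below is a minimal, hypothetical illustration of that general idea only; it is not Facebook’s actual Lookalike Audiences system, and every user, trait, and number in it is invented.

```python
# Hypothetical sketch of "lookalike" audience expansion: score candidate
# users by how closely their traits match the average traits of a seed
# audience (e.g., newsletter sign-ups). Not Facebook's real algorithm.
from math import sqrt

# Each user is a vector of invented trait scores between 0 and 1.
seed_audience = [
    {"age_45_plus": 1.0, "rural": 0.8, "bought_merch": 1.0},
    {"age_45_plus": 0.9, "rural": 1.0, "bought_merch": 0.0},
]
candidates = {
    "user_a": {"age_45_plus": 1.0, "rural": 0.9, "bought_merch": 0.5},
    "user_b": {"age_45_plus": 0.1, "rural": 0.0, "bought_merch": 0.0},
}

def centroid(vectors):
    """Average trait profile of the seed audience."""
    keys = {k for v in vectors for k in v}
    return {k: sum(v.get(k, 0.0) for v in vectors) / len(vectors) for k in keys}

def cosine(a, b):
    """Cosine similarity between two sparse trait vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = sqrt(sum(x * x for x in a.values()))
    norm_b = sqrt(sum(x * x for x in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

profile = centroid(seed_audience)
# Rank candidates by similarity to the seed profile; the top of the list
# is the "lookalike" audience that would receive the ads.
ranked = sorted(candidates, key=lambda u: cosine(profile, candidates[u]), reverse=True)
print(ranked)  # ['user_a', 'user_b']
```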

    Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.

    Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

    Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”

    VI

    It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.

    Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.
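    The flaw in that kind of aggregate number can be seen with a toy calculation. All figures below are invented purely to illustrate the reasoning in this passage: a category of content can be a tiny share of everything posted platform-wide while still making up a large share of what one group of users actually sees.

```python
# Toy illustration (all numbers invented) of why a small aggregate share
# of fake news says little about its influence on specific audiences.
total_stories = 1_000_000   # all election-related stories on the platform
fake_stories = 10_000       # stories that are clearly fake
aggregate_share = fake_stories / total_stories
print(f"Aggregate share of fake stories: {aggregate_share:.1%}")  # 1.0%

# But exposure can be concentrated: suppose users in one cluster see a
# feed in which fake stories are heavily over-represented.
cluster_feed_size = 200     # stories a user in that cluster sees
fake_in_cluster_feed = 60   # fake stories among them
cluster_exposure = fake_in_cluster_feed / cluster_feed_size
print(f"Share of fake stories in that cluster's feeds: {cluster_exposure:.0%}")  # 30%
```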

    Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”

    A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”

    At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.

    Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.

    Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.

    In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.
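    The routing described above (accumulate enough “false” signals, then hand the story to outside fact-checkers) can be sketched in a few lines. The snippet below is an illustrative sketch only, with invented signal names and thresholds; it is not Facebook’s actual system.

```python
# Illustrative sketch: send a story to third-party fact-checkers once enough
# "false" signals (here, user flags) accumulate. The threshold is invented.
from dataclasses import dataclass

FLAG_THRESHOLD = 100  # hypothetical number of user flags before external review

@dataclass
class Story:
    story_id: str
    user_flags: int = 0
    sent_for_review: bool = False

def record_flag(story: Story, review_queue: list) -> None:
    """Count a user flag and enqueue the story for outside review if needed."""
    story.user_flags += 1
    if story.user_flags >= FLAG_THRESHOLD and not story.sent_for_review:
        story.sent_for_review = True
        review_queue.append(story.story_id)  # picked up by fact-checking partners

review_queue: list = []
story = Story("example-story")
for _ in range(FLAG_THRESHOLD):
    record_flag(story, review_queue)
print(review_queue)  # ['example-story']
```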

    Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”

    Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”

    VII

    Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.

    Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.

    “And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”

    As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”

    The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.

    The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.

    VIII

    Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.

    And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.

    Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.

    This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.

    As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.

    Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.

    IX

    One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”

    That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.

    During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.)

    On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”

    One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.

    Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
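    The sorting the security team did can be pictured as a simple set of filters run over ad-purchase records. The sketch below is only an illustration of that kind of heuristic query, not Facebook's actual tooling; the record structure, field names, and sample data are hypothetical.

    ```python
    # Illustrative sketch only: a toy filter over invented ad-purchase records,
    # mirroring the signals described above (ads paid for in rubles, purchases
    # made from browsers set to Russian). Fields and data are hypothetical.
    from collections import Counter

    transactions = [
        {"account_id": "a1", "currency": "RUB", "browser_language": "ru", "page": "Heart of Texas"},
        {"account_id": "a2", "currency": "USD", "browser_language": "en", "page": "Local Bake Sale"},
        {"account_id": "a3", "currency": "RUB", "browser_language": "ru", "page": "Blacktivist"},
    ]

    def flag_suspicious(records):
        """Count purchases per account that match both heuristic signals."""
        hits = [
            r["account_id"]
            for r in records
            if r["currency"] == "RUB" and r["browser_language"] == "ru"
        ]
        return Counter(hits)

    print(flag_suspicious(transactions))  # Counter({'a1': 1, 'a3': 1})
    ```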

    Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.

    When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”

    Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.

    This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.

    X

    To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”

    McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.

    And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”

    After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said.

    Read more: https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/

    Apple’s New Spaceship Campus Has One Flaw and It Hurts

    The centerpiece of Apple Inc.’s new headquarters is a massive, ring-shaped office overflowing with panes of glass, a testament to the company’s famed design-obsessed aesthetic. 

    There’s been one hiccup since it opened last year: Apple employees keep smacking into the glass.

    Surrounding the building, located in Cupertino, California, are 45-foot-tall curved panels of safety glass. Inside are work spaces, dubbed “pods,” also made with a lot of glass. Apple staff are often glued to the iPhones they helped popularize. That’s resulted in repeated cases of distracted employees walking into the panes, according to people familiar with the incidents.

    Some staff started to stick Post-It notes on the glass doors to mark their presence. However, the notes were removed because they detracted from the building’s design, the people said. They asked not to be identified discussing anything related to Apple. Another person familiar with the situation said there are other markings to identify the glass. 

    Apple’s latest campus has been lauded as an architectural marvel. The building, crafted by famed architect Norman Foster, immortalized a vision that Apple co-founder Steve Jobs had years earlier. In 2011, Jobs reportedly described the building as “a little like a spaceship landed.” Jobs has been credited with coming up with the glass pods, designed to mix solo office areas with more social spaces.

    The building is designed to house some 13,000 employees. Wired magazine, first to pay a visit at its opening last year, described the structure as a “statement of openness, of free movement,” in contrast to Apple’s typically insular culture. “While it is a technical marvel to make glass at this scale, that’s not the achievement,” Jony Ive, Apple’s design chief, told the magazine in May. “The achievement is to make a building where so many people can connect and collaborate and walk and talk.”

    An Apple spokeswoman declined to comment. It’s not clear how many incidents there have been. A Silicon Valley-based spokeswoman for the Occupational Safety and Health Administration referred questions about Apple’s workplace safety record to the government agency’s website. A search on the site based on Apple’s name in California found no reports of injuries at the company’s new campus. 

    It’s not the first time Apple’s penchant for glass in buildings has caused problems. In late 2011, 83-year-old Evelyn Paswall walked into the glass wall of an Apple store, breaking her nose. She sued the company, arguing it should have posted a warning on the glass. The suit was settled without any cost to Apple, according to a legal filing in early 2013. 

      Read more: http://www.bloomberg.com/news/articles/2018-02-16/apple-s-new-spaceship-campus-has-one-flaw-and-it-hurts

      Volkswagen Apologizes for Testing of Diesel Fumes on Monkeys

      The controversy over Volkswagen AG’s diesel-emissions cheating took another twist when the carmaker apologized for a test that exposed monkeys to engine fumes to study effects of the exhaust.

      The company said the study, conducted by a research and lobby group set up by VW, Daimler AG, BMW AG and Robert Bosch GmbH, was a mistake. The New York Times reported earlier about a 2014 trial in a U.S. laboratory in which 10 monkeys inhaled diesel emissions from a VW Beetle.

      “We apologize for the misconduct and the lack of judgment of individuals,” Wolfsburg, Germany-based VW said in a statement. “We’re convinced the scientific methods chosen then were wrong. It would have been better to do without such a study in the first place.”

      The revelations show the rocky road for Volkswagen as it emerges from its biggest crisis after the 2015 bombshell that the company installed emissions-cheating software in some 11 million diesel vehicles to dupe official tests. They also do little to help the poor public perception of diesel technology, which is under scrutiny for high pollution levels in many European cities. In an additional twist, the Beetle model used in the test was among the vehicles that were rigged to conform to test limits, The New York Times reported.

      Daimler said separately it would start an investigation into the study ordered by the European Scientific Study Group for the Environment, Health and Transport Sector. BMW too distanced itself from the trial, saying it had taken no part in its design and methods. Bosch said it left the group in 2013. The study group, financed equally by the three carmakers, ceased activities last year and the project wasn’t completed, VW said.

      “We believe the animal tests in this study were unnecessary and repulsive,” Daimler said in a statement. “We explicitly distance ourselves from the study.”

        Read more: http://www.bloomberg.com/news/articles/2018-01-28/volkswagen-apologizes-for-testing-of-diesel-fumes-on-monkeys

        Amazon, Berkshire, JPMorgan Link Up to Form New Health-Care Company

        It’s no secret Jeff Bezos has been looking to crack health care. But no one expected him to pull in Warren Buffett and Jamie Dimon, too.

        News Tuesday that Bezos’s Amazon.com Inc., Buffett’s Berkshire Hathaway Inc. and JPMorgan Chase & Co., led by Dimon, plan to join forces to change how health care is provided to their combined 1 million U.S. employees sent shock waves through the health-care industry.

        The plan, while in early stages and focused solely on the three giants’ staff for now, seems almost certain to set its sights on disrupting the broader industry. It’s the first big move by Amazon in the sector after months of speculation that the internet behemoth might make an entry. The Amazon-Berkshire-JPMorgan collaboration will likely pressure profits for middlemen in the health-care supply chain.

        Details were scant in a short joint statement on Tuesday. The three companies said they plan to set up a new independent company “that is free from profit-making incentives and constraints.”

        It was enough to sink health-care stocks. Express Scripts Holding Co. and CVS Health Corp., which manage pharmacy benefits, slumped 6.9 percent and 4.9 percent, respectively. Health insurers such as Cigna Corp. and Anthem Inc. and biotechnology companies also dropped.

        The group announced the news in the very early stages because it plans to hire a CEO and start partnering with other organizations, according to a person familiar with the matter. The effort would be focused internally first, and the companies would bring their data and bargaining power to bear on lowering health-care costs, the person said. Potential ways to bring down costs include providing more transparency over the prices for doctor visits and lab tests, as well as by enabling direct purchasing of some medical items, the person said.

        “I’m in favor of anything that helps move the markets a bit, incentivizes competition and puts pressure on the big insurance carriers,” said Ashraf Shehata, a partner in KPMG LLP’s health care and life sciences advisory practice in the U.S. “An employer coalition can do a lot of things. You can encourage reimbursement models and provide incentives for the use of technology.”

        “Hard as it might be, reducing health care’s burden on the economy while improving outcomes for employees and their families would be worth the effort,” Bezos said in the statement. “Success is going to require talented experts, a beginner’s mind, and a long-term orientation.”

        The initial focus of the new company will be on technology solutions that will provide U.S. employees and their families with simplified, high-quality and transparent health care at a reasonable cost. In the statement, JPMorgan CEO Dimon said the initiative could ultimately expand beyond the three companies.

        “Our goal is to create solutions that benefit our U.S. employees, their families and, potentially, all Americans,” he said.

        HTA Alliance

        Amazon, Berkshire and JPMorgan are among the largest private employers in the U.S. And they’re among the most valuable, with a combined market capitalization of $1.6 trillion, according to data compiled by Bloomberg.

        This isn’t the first time big companies have teamed up in an effort to tackle health-care costs. International Business Machines Corp., Berkshire’s BNSF Railway and American Express Co. were among the founding members of the Health Transformation Alliance, which now includes about 40 big companies that want to transform health care. The group ultimately partnered with existing industry players including CVS and UnitedHealth Group Inc.’s OptumRx.

        Top Team

        The latest effort is being spearheaded by Todd Combs, who helps oversee investments at Berkshire; Marvelle Sullivan Berchtold, a managing director of JPMorgan; and Beth Galetti, a senior vice president for human resources at Amazon.

        Buffett handpicked Combs in 2010 as one of his two key stockpickers. Combs, 47, has been taking on a larger role at Berkshire in recent years, and Buffett has said that Combs and Ted Weschler, who also helps oversee investments, will eventually manage the company’s whole portfolio. Combs also joined JPMorgan’s board in 2016.

        Sullivan Berchtold joined JPMorgan in August after eight years at the Swiss pharmaceutical company Novartis AG, where she was most recently the global head of mergers and acquisitions, according to her LinkedIn profile.

        One of the highest ranking women at Amazon, Galetti has worked in human resources at the e-commerce giant since mid-2013, becoming senior vice president almost two years ago, according to her LinkedIn profile. As of late 2017 she was the only woman on Amazon’s elite S-team, a group of just over a dozen senior executives who meet regularly with Bezos, according to published reports. Previously Galetti worked in planning, engineering and operations at FedEx Express, the cargo airline of FedEx Corp. She has a degree in electrical engineering from Lehigh University and an MBA from Colorado Technical University.

        The management team, location of the headquarters and other operational details will be announced later, the companies said.

        Health-care spending was estimated to account for about 18 percent of the U.S. economy last year, far more than in other developed nations. Buffett has long bemoaned the cost of U.S. health care. Last year, he came out in favor of drastic changes in the U.S. health system, telling PBS NewsHour that government-run health care is probably the best approach and would bring down costs.

        “The ballooning costs of health care act as a hungry tapeworm on the American economy,” Buffett said in Tuesday’s statement. “Our group does not come to this problem with answers. But we also do not accept it as inevitable.”

          Read more: http://www.bloomberg.com/news/articles/2018-01-30/amazon-berkshire-jpmorgan-to-set-up-a-health-company-for-staff

          It’s the (Democracy-Poisoning) Golden Age of Free Speech

          For most of modern history, the easiest way to block the spread of an idea was to keep it from being mechanically disseminated. Shutter the newspaper, pressure the broadcast chief, install an official censor at the publishing house. Or, if push came to shove, hold a loaded gun to the announcer’s head.

          This actually happened once in Turkey. It was the spring of 1960, and a group of military officers had just seized control of the government and the national media, imposing an information blackout to suppress the coordination of any threats to their coup. But inconveniently for the conspirators, a highly anticipated soccer game between Turkey and Scotland was scheduled to take place in the capital two weeks after their takeover. Matches like this were broadcast live on national radio, with an announcer calling the game, play by play. People all across Turkey would huddle around their sets, cheering on the national team.

          Canceling the match was too risky for the junta; doing so might incite a protest. But what if the announcer said something political on live radio? A single remark could tip the country into chaos. So the officers came up with the obvious solution: They kept several guns trained on the announcer for the entire 2 hours and 45 minutes of the live broadcast.

          It was still a risk, but a managed one. After all, there was only one announcer to threaten: a single bottleneck to control of the airwaves.

          Variations on this general playbook for censorship—find the right choke point, then squeeze—were once the norm all around the world. That’s because, until recently, broadcasting and publishing were difficult and expensive affairs, their infrastructures riddled with bottlenecks and concentrated in a few hands.

          But today that playbook is all but obsolete. Whose throat do you squeeze when anyone can set up a Twitter account in seconds, and when almost any event is recorded by smartphone-wielding members of the public? When protests broke out in Ferguson, Missouri, in August 2014, a single livestreamer named Mustafa Hussein reportedly garnered an audience comparable in size to CNN’s for a short while. If a Bosnian Croat war criminal drinks poison in a courtroom, all of Twitter knows about it in minutes.

          In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

          And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

          Or let’s say you were the one who posted that video. If so, is anyone even watching it? Or has it been lost in a sea of posts from hundreds of millions of content producers? Does it play well with Facebook’s algorithm? Is YouTube recommending it?

          Maybe you’re lucky and you’ve hit a jackpot in today’s algorithmic public sphere: an audience that either loves you or hates you. Is your post racking up the likes and shares? Or is it raking in a different kind of “engagement”: Have you received thousands of messages, mentions, notifications, and emails threatening and mocking you? Have you been doxed for your trouble? Have invisible, angry hordes ordered 100 pizzas to your house? Did they call in a SWAT team—men in black arriving, guns drawn, in the middle of dinner?

          Standing there, your hands over your head, you may feel like you’ve run afoul of the awesome power of the state for speaking your mind. But really you just pissed off 4chan. Or entertained them. Either way, congratulations: You’ve found an audience.

          Here’s how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

          These companies—which love to hold themselves up as monuments of free expression—have attained a scale unlike anything the world has ever seen; they’ve come to dominate media distribution, and they increasingly stand in for the public sphere itself. But at their core, their business is mundane: They’re ad brokers. To virtually anyone who wants to pay them, they sell the capacity to precisely target our eyeballs. They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed.

          So what does this algorithmic public sphere tend to feed us? In tech parlance, Facebook and YouTube are “optimized for engagement,” which their defenders will tell you means that they’re just giving us what we want. But there’s nothing natural or inevitable about the specific ways that Facebook and YouTube corral our attention. The patterns, by now, are well known. As BuzzFeed famously reported in November 2016, “top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.”

          Humans are a social species, equipped with few defenses against the natural world beyond our ability to acquire knowledge and stay in groups that work together. We are particularly susceptible to glimmers of novelty, messages of affirmation and belonging, and messages of outrage toward perceived enemies. These kinds of messages are to human community what salt, sugar, and fat are to the human appetite. And Facebook gorges us on them—in what the company’s first president, Sean Parker, recently called “a social-validation feedback loop.”

          There are, moreover, no nutritional labels in this cafeteria. For Facebook, YouTube, and Twitter, all speech—whether it’s a breaking news story, a saccharine animal video, an anti-Semitic meme, or a clever advertisement for razors—is but “content,” each post just another slice of pie on the carousel. A personal post looks almost the same as an ad, which looks very similar to a New York Times article, which has much the same visual feel as a fake newspaper created in an afternoon.

          What’s more, all this online speech is no longer public in any traditional sense. Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen. Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries. Yes, mass discourse has become far easier for everyone to participate in—but it has simultaneously become a set of private conversations happening behind your back. Behind everyone’s backs.

          Not to put too fine a point on it, but all of this invalidates much of what we think about free speech—conceptually, legally, and ethically.

          The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

          These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

          Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

          Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

          This is not a call for nostalgia. In the past, marginalized voices had a hard time reaching a mass audience at all. They often never made it past the gatekeepers who put out the evening news, who worked and lived within a few blocks of one another in Manhattan and Washington, DC. The best that dissidents could do, often, was to engineer self-sacrificing public spectacles that those gatekeepers would find hard to ignore—as US civil rights leaders did when they sent schoolchildren out to march on the streets of Birmingham, Alabama, drawing out the most naked forms of Southern police brutality for the cameras.

          But back then, every political actor could at least see more or less what everyone else was seeing. Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. During the 2016 presidential election, as Joshua Green and Sasha Issenberg reported for Bloomberg, the Trump campaign used so-called dark posts—nonpublic posts targeted at a specific audience—to discourage African Americans from voting in battleground states. The Clinton campaign could scarcely even monitor these messages, let alone directly counter them. Even if Hillary Clinton herself had taken to the evening news, that would not have been a way to reach the affected audience. Because only the Trump campaign and Facebook knew who the audience was.

          It’s important to realize that, in using these dark posts, the Trump campaign wasn’t deviantly weaponizing an innocent tool. It was simply using Facebook exactly as it was designed to be used. The campaign did it cheaply, with Facebook staffers assisting right there in the office, as the tech company does for most large advertisers and political campaigns. Who cares where the speech comes from or what it does, as long as people see the ads? The rest is not Facebook’s department.

          Mark Zuckerberg holds up Facebook’s mission to “connect the world” and “bring the world closer together” as proof of his company’s civic virtue. “In 2016, people had billions of interactions and open discussions on Facebook,” he said proudly in an online video, looking back at the US election. “Candidates had direct channels to communicate with tens of millions of citizens.”

          This idea that more speech—more participation, more connection—constitutes the highest, most unalloyed good is a common refrain in the tech industry. But a historian would recognize this belief as a fallacy on its face. Connectivity is not a pony. Facebook doesn’t just connect democracy-loving Egyptian dissidents and fans of the videogame Civilization; it brings together white supremacists, who can now assemble far more effectively. It helps connect the efforts of radical Buddhist monks in Myanmar, who now have much more potent tools for spreading incitement to ethnic cleansing—fueling the fastest-growing refugee crisis in the world.

          The freedom of speech is an important democratic value, but it’s not the only one. In the liberal tradition, free speech is usually understood as a vehicle—a necessary condition for achieving certain other societal ideals: for creating a knowledgeable public; for engendering healthy, rational, and informed debate; for holding powerful people and institutions accountable; for keeping communities lively and vibrant. What we are seeing now is that when free speech is treated as an end and not a means, it is all too possible to thwart and distort everything it is supposed to deliver.

          Creating a knowledgeable public requires at least some workable signals that distinguish truth from falsehood. Fostering a healthy, rational, and informed debate in a mass society requires mechanisms that elevate opposing viewpoints, preferably their best versions. To be clear, no public sphere has ever fully achieved these ideal conditions—but at least they were ideals to fail from. Today’s engagement algorithms, by contrast, espouse no ideals about a healthy public sphere.

          Some scientists predict that within the next few years, the number of children struggling with obesity will surpass the number struggling with hunger. Why? When the human condition was marked by hunger and famine, it made perfect sense to crave condensed calories and salt. Now we live in a food glut environment, and we have few genetic, cultural, or psychological defenses against this novel threat to our health. Similarly, we have few defenses against these novel and potent threats to the ideals of democratic speech, even as we drown in more speech than ever.

          The stakes here are not low. In the past, it has taken generations for humans to develop political, cultural, and institutional antibodies to the novelty and upheaval of previous information revolutions. If The Birth of a Nation and Triumph of the Will came out now, they’d flop; but both debuted when film was still in its infancy, and their innovative use of the medium helped fuel the mass revival of the Ku Klux Klan and the rise of Nazism.

          By this point, we’ve already seen enough to recognize that the core business model underlying the Big Tech platforms—harvesting attention with a massive surveillance infrastructure to allow for targeted, mostly automated advertising at very large scale—is far too compatible with authoritarianism, propaganda, misinformation, and polarization. The institutional antibodies that humanity has developed to protect against censorship and propaganda thus far—laws, journalistic codes of ethics, independent watchdogs, mass education—all evolved for a world in which choking a few gatekeepers and threatening a few individuals was an effective means to block speech. They are no longer sufficient.

          But we don’t have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change. But in fairness to Facebook and Google and Twitter, while there’s a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don’t involve huge trade-offs—and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-channeling, harassment, data collection, and algorithmic decision-making. We just need to start the discussion. Now.


          Zeynep Tufekci (@zeynep) is an associate professor at the University of North Carolina and an opinion writer for The New York Times.

          Read more: https://www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship/

          Sessions Ending Obama-Era Policy That Ushered In Legal Weed

          Attorney General Jeff Sessions is rescinding an Obama-era policy that helped states legalize recreational marijuana, throwing a wet blanket on the fledgling industry during what could have been a celebratory week.

          The Justice Department will reverse the so-called Cole and Ogden memos that set out guardrails for federal prosecution of cannabis and allowed legalized marijuana to flourish in states across the U.S., according to two senior agency officials. U.S. attorneys in states where pot is legal will now be able to prosecute cases where they see fit, according to the officials, who requested anonymity discussing internal policy.

          Shares of pot companies plunged as news of the policy change surfaced, though many began to rebound after investors weighed the potential impact.

          The change comes at a high point for the weed industry. California, the most populous U.S. state and the world’s sixth-largest economy, launched its legal marketplace on Jan. 1. Sales in California alone are expected to reach $3.7 billion in 2018, according to estimates from BDS Analytics.

          Seven other states and the District of Columbia have also legalized cannabis for adult use. Twenty-one additional states have voted to allow the plant to be used for medicinal purposes. The market is expected to skyrocket from $6 billion in 2016 to $50 billion by 2026, according to Cowen & Co.

          Sessions, a Republican from Alabama, has long been opposed to marijuana, equating it with heroin. But this is the first action he’s taken that deviates significantly from the Obama administration. Many in the industry said the news is unsurprising but disappointing.

          “While dismantling the industry will prove impossible, the move by Sessions will sow more seeds of uncertainty in an industry that already has its fair share of risks and unknowns,” said Chris Walsh, vice president of Marijuana Business Daily. “Businesses could be in for a bumpy ride amid this uncertainty, and we certainly could see some types of regional crackdowns or delays in upcoming medical or recreational cannabis markets.”

          Shares Plummet

          The Bloomberg Intelligence Global Cannabis Competitive Peers Index dropped as much as 24 percent after the Associated Press first reported the Justice Department plan. Most companies in that group are small. Still, there are a few big names that could be hit by the changing policy. 

          Constellation Brands Inc., which sells Corona beer and Svedka vodka in the U.S., got involved in the cannabis industry in October when it made a minority investment in Canopy Growth, a Canadian marijuana company. Scotts Miracle-Gro Co. has also made its way into the Green Rush. Its shares fell as much as 5.7 percent after the news, the biggest intraday drop since May.

          A tightening of enforcement also would be felt in Canada, where the cannabis industry has blossomed. Ontario’s Canopy Growth fell as much as 19 percent to C$29.06 in Toronto, while Aphria Inc. plunged as much as 23 percent to C$16.59. ETFMG Alternative Harvest ETF, the first pure-play pot ETF to be listed in the U.S., dropped as much as 9.7 percent, the biggest intraday decline since May.

          Fear and Doubt

          Sessions’s policy may cause investors to think twice before putting their money into the Green Rush, according to Adrian Sedlin, founder of Canndescent, a marijuana cultivation and branded-flower company.

          “Fear, uncertainty and doubt will rip through our industry like a California wildfire because of this,” he said. “Whatever happens long term, this will retard and limit capital flows into the industry for the foreseeable future.”

          The move is likely to sow confusion among consumers and state officials, and may spark a backlash if state-approved retailers are prosecuted. Sixty-four percent of the U.S. population now wants to make pot legal, according to a Gallup poll released in October.

          But it’s too late to stop the industry from growing, said Laura Bianchi, a partner and director of cannabis, business and corporate transactions and estate planning at Rose Law Group in Scottsdale, Arizona.

          “To undo this industry would be like closing Pandora’s box once it’s been opened,” she said. “It would be a Herculean effort that would undermine another Republican cornerstone, which is the importance of states’ rights.”

          Senators React

          Senator Cory Gardner, a Republican from Colorado, where marijuana is legal, said in a tweet that Sessions’s move contradicts what he told the senator before his confirmation.

          “I am prepared to take all steps necessary, including holding DOJ nominees, until the Attorney General lives up to the commitment he made to me,” Gardner said.

          Senator Kirsten Gillibrand, a New York Democrat, said Sessions’s actions are an affront to medical patients who need to use the plant as medicine. 

          “Parents should be able to give their sick kids the medicine they need without having to fear that they will be prosecuted,” she said in a statement. “This is about public health, and it’s about reforming our broken criminal justice system that throws too many minorities in prison for completely nonviolent offenses.”

          Still, the federal policy change may not actually hurt business much at all. Entrepreneurs starting marijuana businesses have already been working under risky circumstances. The plant has remained federally illegal, meaning most large companies — including banks — have shied away. Instead, the business has relied on state regulators, many of whom previously said they would defend the industry through any federal crackdown. 

          “We’re not overly concerned that a change in DOJ policy around cannabis will be meaningfully disruptive to legal adult use cannabis states, given the vocal support offered by these state-level AGs,” said Vivien Azer, a Cowen & Co. analyst who covers the industry.

            Read more: http://www.bloomberg.com/news/articles/2018-01-04/sessions-said-to-kill-obama-policy-that-ushered-in-legal-weed

            Target to Buy Shipt for $550 Million in Challenge to Amazon

            Target Corp. agreed to purchase grocery-delivery startup Shipt Inc. for $550 million, stepping up its challenge to Amazon.com Inc. by speeding the rollout of same-day shipping.

            The all-cash deal will let Target customers order groceries and other goods online, and then have the items sent directly to their doors from nearby Target stores.

            Buying Shipt further beefs up Target’s logistics operations after the retailer earlier this year acquired software company Grand Junction, which also manages local and same-day deliveries. Target now offers same-day delivery in New York City and can send orders from 1,400 of its stores. Competition in this space is growing fiercer, though, as rivals Wal-Mart Stores Inc. and Best Buy Co. also offer same-day service, keeping pace with Amazon.

            Target’s decision to buy Shipt, rather than partner with it, “shows how serious they are,” Kantar Retail analyst Robin Sherk said. “One-stop shopping was convenient in the 1990s but for today’s families you have to be able to do instant food delivery as well. It’s also a realization that Amazon, this big technology disruptor, has entered the consumer landscape.”

            Four out of five shoppers want same-day shipping, according to a survey by fulfillment software maker Temando, but only half of retailers offer it.

            “With Shipt’s network of local shoppers and their current market penetration, we will move from days to hours, dramatically accelerating our ability to bring affordable same-day delivery to guests across the country,” John Mulligan, Target’s chief operating officer, said in a statement.

            The deal will give Target same-day delivery at about half of its 1,834 stores by next summer, with the number growing to a majority of stores in time for next year’s holiday season. The service — costing $99 a year for unlimited deliveries — will initially encompass categories like groceries, household essentials and electronics before expanding to all major product groups by the end of 2019.

            Improved Position

            “While it will not affect Target’s capability this holiday season, the fact that Target will have this service in place during 2018 will significantly improve its online competitive position,” Charlie O’Shea, an analyst at Moody’s Corp., said in a note.

            Target rose 2.7 percent to close at $62.67 Wednesday, while the news caused a momentary dip for the shares of Shipt’s existing retail partners, Kroger Co. and Costco Wholesale Corp. Kroger ended the day up 1.4 percent, while Costco was little changed.

            Kroger said it’s still optimistic about the company’s prospects for home delivery after expanding its logistics operations in recent years via partnerships with Instacart Inc. and others.

            “We feel really good about the variety of partnerships Kroger has going,” corporate communications head Keith Dailey said. Costco Chief Financial Officer Richard Galanti declined to comment.

            Online Preference

            Consumers’ increasing preference for shopping online, along with Amazon’s purchase of upscale grocer Whole Foods and its encroachment into new arenas like apparel, have sent retailers scrambling to improve their online offerings. E-commerce sales are up about 17 percent this holiday season, according to Adobe Systems Inc., and online merchants racked up a record $6.59 billion on Cyber Monday alone, the company found.

            The question for traditional retailers is how to handle all those internet orders. They could build their own delivery network, but it’s an arduous and expensive process. That’s why many of them are seeking help from e-commerce startups like Shipt and Instacart.

            Founded in 2014, Shipt serves about 20,000 customers through partnerships with retailers including Publix Super Markets Inc., HEB Grocery Co., Kroger and Costco. It will continue to operate independently and plans to expand its business with other retailers, Chief Executive Officer Bill Smith said in an interview.

            ‘Scale Matters’

            “We’ve spoken to a number of our existing partners about this deal and all the conversations have been very positive,” Smith said. “Having multiple retailers allows us to grow our membership base and make it more attractive. In same-day delivery, scale matters.”

            For now, Target shoppers will need to pay Shipt’s $99 annual membership fee to gain access to the service. Once a customer places an order, Shipt sends a “shopper” into the store to pick up the groceries and deliver them. Target is working on how to integrate Shipt into its website and mobile shopping app, Mulligan said.

            The deal is expected to close before the end of the year and will be “modestly accretive” to Target’s profit in 2018, while boosting online sales, the company said. The retailer’s e-commerce sales already grew 24 percent in the third quarter.

            ‘Big Loser’

            Target has worked with Shipt’s rival Instacart for same-day service in cities like Minneapolis and Chicago since 2015, and Mulligan said he “will have conversations with them on where we go next.”

            “The big loser in this deal is Instacart,” said Cooper Smith, an analyst at business-intelligence firm L2.

            Following Target’s announcement, Instacart said it works with more than 165 retailers, including seven of the eight biggest grocers in North America.

            “As an independent company, Instacart doesn’t compete with any of our partners,” the company said. San Francisco-based Instacart has recently expanded its partnerships with retailers including Costco, Kroger, Albertsons Cos. and drugstore giant CVS Health Corp.

            Target and Shipt began discussing the deal in the middle of the summer, Mulligan said. They decided to pursue an acquisition rather than just a partnership in order to plow Target’s resources into expanding Shipt’s business, and to maintain its current level of customer experience.

            Smith will stay in his role, reporting to Mulligan, and Shipt’s 270 employees will remain in the company’s offices in San Francisco and Birmingham, Alabama.

              Read more: http://www.bloomberg.com/news/articles/2017-12-13/target-to-buy-shipt-for-550-million-in-bet-on-same-day-delivery

              Are veggie burgers and cheese-less pizza the solution for a sustainable future?

              Eco-conscious consumers may want to practice portion control before chowing down on that cheeseburger.

              Recent studies about the environmental impact of agricultural industries like meat and dairy have produced worrying statistics. Widely cited research suggests that producing red meat generates as much as 40 times the greenhouse gas emissions of producing vegetables and grains. The dairy industry is a culprit as well, with traditional dairy farming contributing to greenhouse gas emissions through cow manure, feed production, and milk processing. Practices within both industries can contribute to soil degradation, water waste, and harmful runoff.

              The UN’s Sustainable Development Goals outline top-priority objectives for tackling global issues like climate change, food waste, and sustainable agriculture. Major corporations and small startups alike are taking steps toward making these goals realities — and a few companies are specifically focusing on responsible consumption.

              Here’s the good news: You don’t need to pare down your diet to carrot sticks in order to make an impact. Maintaining an environmentally friendly lifestyle is less about going completely meat-free, and more about responsible choices. Below are four organizations proving that an eco-conscious lifestyle is easier than you might think.

              Beyond Meat

              Companies like Beyond Meat want to ensure that carnivorous consumers can have it all: Big, juicy burgers and a sustainable diet. The company produces plant-based products that look and (more importantly) taste like real meat. Beyond Meat’s burgers are so realistic that some grocery stores have even started stocking them in the meat aisle.

              Much of the meat industry’s environmental impact revolves around problematic livestock practices, which is why plant-based foods are a more sustainable option. Companies like Beyond Meat can help mitigate many problems inherent in the meat industry — without asking consumers to completely forego their beloved burgers.

              Sabra

              Sabra’s Plants with a Purpose initiative, launched in 2016, combats food deserts — areas that lack access to fresh, healthy, and affordable fruits and vegetables. According to the company’s estimates, more than 23 million Americans live in such deserts. Many of these families ultimately end up turning to less environmentally conscious (and less healthy) meals simply due to lack of access and affordability.

              Plants with a Purpose establishes organic work-share gardens in locations like Richmond, Virginia, where Sabra’s LEED Gold-certified hummus-manufacturing facility is located. Alongside community education efforts, these types of gardens help improve urban agriculture in underserved communities.

              “This is the land of plenty, but there are plenty who lack far too much including access to the necessity of fresh fruits and vegetables,” said Sabra CEO Shali Shalit-Shoval on the Sabra website. “As a brand dedicated to creating a fresh new way of eating and connecting, we are uniquely positioned to help address this very real and sometimes surprising challenge facing communities across the country.”

              Daiya

              “Find your happy plate,” riffs plant-based foods brand Daiya.

              Much like Beyond Meat offers a burger alternative to consumers who crave their daily dose of beef (but want to skip the side of guilt), Daiya offers a slew of dairy-free foods that taste about as close to the real deal as possible: We’re talking pizza, mac and cheese, and even gooey grilled cheese sandwiches. Better yet, the company ensures that every step of its supply chain — from the way ingredients are grown to packaging materials — is sustainable.

              The brand offers a variety of dairy-free dishes for eco-conscious consumers and for people with dietary restrictions. Their products are also free from common allergens like gluten, soy, eggs, peanuts, fish, and shellfish. On Daiya’s website, the company also provides a variety of plant-based living tips and recipe suggestions for getting the most out of their products.

              Worldwide demand for milk products is skyrocketing. While the dairy industry is evolving in its own right, companies like Daiya that provide plant-based alternatives are another option for environmentally savvy consumers who hope to cut down their carbon footprint.

              Beauty Without Cruelty

              It’s not just what we put in our bodies that can have a detrimental environmental impact: What we put on our bodies counts, too. The cosmetics industry is often a perpetrator of ecologically harmful pollutants like some preservatives (including parabens and triclosan), microplastics, and UV filters.

              A member of the Vegan Society, BWC makes beauty products that are 100% suitable for vegans and vegetarians; in addition, the brand uses recycled materials and responsible sourcing methods to minimize its environmental footprint. Their products range from hair and skincare treatments to nail polishes and makeup. Many of the company’s products are fragrance-free and others are gluten-free, too, for consumers with particularly sensitive skin.

              Living an eco-conscious lifestyle doesn’t have to be tedious or difficult. With the rise of sustainability-focused startups as well as concerted efforts from established brands, the bar for responsible consumerism is being raised every day.

              Read more: http://mashable.com/2017/12/06/companies-reducing-meat-consumption/