Sundar Pichai got a promotion. Google’s gadgets reached more people. And AI and content moderation continued to create new dilemmas
BY HARRY MCCRACKEN
For the past few years, one of the highest compliments you could pay Google was to say it wasn’t making the same mistakes as Facebook. The two companies are similar in many ways: For instance, both make most of their money by monetizing their users’ data, a business model with fundamental privacy issues. And both are reliant on algorithms that are prone to abuse by those who would use them to spread hate and misinformation. But in general, Google has been far more adept at sidestepping the kind of controversies that Facebook walked into every week.
Still, 2019 will not go down in Google history as a high point. Though not an annus horribilis by any means, it was pockmarked with a higher-than-typical quantity of public embarrassments, many of which involved the company failing to live up to its own view of itself as a paragon of idealism and transparency.
Here’s a recap of Google’s year, as defined by its smartest moves and biggest mistakes.
GOOD: THE CHANGING OF THE GUARD IS COMPLETE
Back in 2015, Google cofounder/CEO Larry Page confounded the tech world by creating a new holding company called Alphabet, with Google as one unit. The reorg let Page devote his attention to risky, unprofitable “moonshots” in areas such as healthcare and transportation while Sundar Pichai, the trusted associate who became CEO of Google, ran the parts of the company that were already gigantically popular cash cows, such as Google Search, YouTube, and their associated ad platforms.
Since the split, Page and cofounder Sergey Brin have seemingly opted out of the drudgery of running a major public company, while Alphabet’s moonshots have yet to turn into self-sustaining businesses. In fact, the most consumer-ready new Alphabet business, its Nest smart-home arm, ended up being folded back into Google. So when Page and Brin announced in early December that they were ending active management of Alphabet and naming Pichai as CEO of the whole shebang, the move was shocking only in that it was a normal, common-sense move by this most unorthodox of tech giants. If Pichai exerts a bit more pressure on the Alphabet moonshot factory to produce results, it will probably be good for everyone involved.
BAD: GOOGLE’S CULTURE WAR INTENSIFIED
Google was never the corporate utopia that many of its trademarks—from the “Don’t be evil” mantra to the endless free food for employees—suggested it was trying to be. But in 2019, most of the stories that broke about its corporate culture involved it being . . . well, broken.
Suspensions and firings of employees whom Google accused of leaks and unauthorized data access led to other staffers holding protests outside the company’s San Francisco office, accusing the company of retaliating against employee organizing. Google brought on a union-busting consultancy to provide advice on dealing with unrest and dramatically dialed back its “TGIF” all-hands meetings. The company is still investigating cases that involved multiple top executives being charged with sexual harassment and other types of misconduct, after its initial response involved giving some of them tens of millions of dollars as parting gifts. Then there was the Googler who posted a memo charging the company with discriminating against her when she was pregnant. Repairing Google’s reputation as a workplace should be one of Pichai’s top priorities in 2020 and beyond.
GOOD: “MADE BY GOOGLE” FEELS LESS LIKE AN EXPERIMENT
Even after Google formed a unified hardware group in 2016 and named Rick Osterloh to run it, it wasn’t always clear whether the effort was a serious attempt to take on the existing giants of consumer electronics or a pricey hobby. For instance, the first three generations of its Pixel phone were terrific—but the only U.S. carrier that sold them directly was Verizon, meaning that they weren’t available in many of the wireless stores where most people buy new phones. And because Pixels were priced like iPhones, they weren’t within reach of as many people as garden-variety Android phones.
In 2019, Google introduced the Pixel 3a, which offered much of what made the Pixel 3 great at a much lower price—and was available from T-Mobile and Sprint as well as Verizon. Then it added the Pixel 4, a high-end flagship that finally brought the Pixel line to AT&T stores. This was also the year that Google fully rationalized the Nest brand’s relationship to other Google hardware, making it the umbrella for smart-home gear, much as Pixel is the brand for computing devices. It’s nice to see the Google hardware portfolio become as broad and coherent as it always needed to be.
BAD: THE STADIA GAMING INITIATIVE IS OFF TO A BUMPY START
Both Apple and Google launched ambitious gaming services in 2019. Apple Arcade offers a wealth of high-quality downloadable titles for iPhones, iPads, Apple TV, and Macs at a price—$5 a month—that makes it a no-brainer even for casual gamers. And then there’s Google’s Stadia, which aims to stream console-quality games to all sorts of devices. It involves a $130 hardware purchase, a $10-per-month fee, and separate price tags for the limited quantity of premium games available.
Even some of the people who want to love Stadia are not happy with its current incarnation, and Google’s webpage about the service is awash in footnotes concerning its promised benefits. Much of its ambition lies in features the company plans to roll out next year, which means that most of us need feel no rush to get on board.
GOOD: MORE PRODUCTS ARE GETTING SMART PRIVACY FEATURES
Truly privacy-sensitive people may never buy into the whole idea of using Google services, since the company does so much of what it does by aggregating information about its users (and using some of that data to target them with ads). But compared to Facebook—whose much-anticipated history cleaning tool is a disappointment, at least so far—Google is doing a better job of adding features that let you partake in its offerings while maintaining some control over what the company, or external snoops, know about you.
At its I/O developer conference in May, for instance, Google unveiled an “Incognito mode” for Google Maps that lets you quickly shut off the service’s ability to log your real-world wandering. It’s a useful feature, and the fact that Google called it Incognito—leveraging the brand of the well-known privacy mode in its Chrome browser—makes it easy to understand. Also at I/O, the company introduced the Nest Hub Max, a smart display with a prominent physical switch for shutting off its camera and microphone—part of its overall plan to put actual “off” switches on devices with cameras. (To be fair, Facebook put a similar switch on its second-generation Portal and Portal Mini later in the year.)
BAD: THE COMPANY’S HEALTHCARE PROJECT WAS TOO QUIET
In a July earnings call, Google said it was working with Ascension, a major healthcare system, to use cloud-based AI services to improve medical outcomes. But then it had little else to say about the effort until November, when the Wall Street Journal’s Rob Copeland reported on the vast scale of the project, code-named “Nightingale.” The report said that some Ascension employees were concerned about the privacy implications of giving Google employees access to patients’ health records. Google then published a FAQ spelling out how the two companies were working together, arguing that the work adhered to all regulations and disputing some aspects of media reports.
Even if Project Nightingale delivers on its goal of keeping people healthier, Google would have been well advised to anticipate people's worst fears about the initiative and to dispel them as early as possible in the process—rather than trying to tamp them down after the Journal's story appeared.
GOOD: YOUTUBE’S CONTENT POLICIES GOT MORE SENSIBLE
In June, YouTube finally banned neo-Nazi and white-supremacist videos, deleting thousands of them from the site. And in December, it broadened its rules against hate speech to encompass veiled attacks and insults based on factors such as race and sexual orientation. (Better late than never: Earlier in the year, after Vox writer Carlos Maza tweeted a supercut of conservative YouTuber Steven Crowder mocking him—over and over and over—for being gay and Latino, YouTube had maintained that Crowder’s attacks were acceptable.) In a December 60 Minutes appearance, YouTube CEO Susan Wojcicki also said that changes to its algorithm had decreased the amount of time Americans spend watching questionable videos, such as anti-vaccination material and miracle-cure hoaxes, by 70%.
BAD: GOOGLE’S CONTENT TROUBLES REMAIN MANY AND VARIED
Early in the year, ex-YouTube creator Matt Watson charged that pedophile rings were operating on the service and infesting the comments on videos showing children—a topic that later became the subject of a New York Times investigation by Max Fisher and Amanda Taub. And in December, The Verge’s Casey Newton reported that content moderators employed by Google and subcontractors must look at such horrifying imagery, in such vast volume, that it can lead to PTSD—a problem that isn’t alleviated by work policies allowing for frequent breaks. As usual, Google says that it takes such issues seriously and is working to minimize them—but at Google scale, even a minimized problem has major implications.
GOOD: GOOGLE IS BEING THOUGHTFUL ABOUT AI ETHICS
Sundar Pichai once declared that AI will be a more profound breakthrough for humans than fire was in its day. But despite that enthusiasm, Google is acknowledging that AI, like fire, can be dangerous if it gets out of hand. In January, the company published a white paper saying that it welcomed government regulation on certain aspects of the technology, such as the need to disclose how an algorithm arrived at a particular decision. As Wired’s Tom Simonite has reported, Google is also being methodical about how it rolls out some of the AI functionality it’s built. For example, its facial-recognition service, which can identify celebrities, is available only to carefully screened customers.
BAD: ITS AI ADVISORY BOARD WAS A FIASCO
Seeking outside counsel on responsible use of AI sounds like a reasonable idea. But Google’s AI ethics board collapsed less than two weeks after its introduction in late March. Google employees protested the inclusion of the president of the conservative think tank the Heritage Foundation and the CEO of a maker of drones with military applications, and infighting and resignations precipitated the board’s official demise. Regardless of your opinion of specific members, the whole plan seemed ill-suited to holding Google accountable. The company says that the ethics board’s abrupt termination doesn’t mean that it’s lost interest in having outsiders play a part in guiding its use of AI. Here’s hoping the utter failure of its first attempt helps it figure out the right way.
ABOUT THE AUTHOR
Harry McCracken is the technology editor for Fast Company, based in San Francisco. In past lives, he was editor at large for Time magazine, founder and editor of Technologizer, and editor of PC World.