Illustration by Yoshi Sodeoka

LATE last month, Mark Zuckerberg wrote a brief post on Facebook at the conclusion of Yom Kippur, asking his friends for forgiveness not just for his personal failures but also for his professional ones, especially “the ways my work was used to divide people rather than bring us together.” He was heeding the call of the Jewish Day of Atonement to take stock of the year just passed as he pledged that he would “work to do better.”

Such a somber, self-critical statement hasn’t been typical for the usually sunny Mr. Zuckerberg, who once exhorted his employees at Facebook to “move fast and break things.” In the past, why would Mr. Zuckerberg, or any of his peers, have felt the need to atone for what they did at the office? For making incredibly cool sites that seamlessly connect billions of people to their friends as well as to a global storehouse of knowledge?

Lately, however, the sins of Silicon Valley-led disruption have become impossible to ignore.

Facebook has endured a drip, drip of revelations concerning Russian operatives who used its platform to influence the 2016 presidential election by stirring up racist anger. Google had a similar role in carrying targeted, inflammatory messages during the election, and this summer, it appeared to play the heavy when an important liberal think tank, New America, cut ties with a prominent scholar who is critical of the power of digital monopolies. Some within the organization questioned whether he was dismissed to appease Google and its executive chairman, Eric Schmidt, both longstanding donors, though New America's president and a Google representative denied a connection.

Meanwhile, Amazon, with its purchase of the Whole Foods supermarket chain and the construction of brick-and-mortar stores, pursues the breathtakingly lucrative strategy of parlaying a monopoly position online into an offline one, too.


These menacing turns of events have been quite bewildering to the public, running counter to everything Silicon Valley had preached about itself. Google, for example, says its purpose is to "organize the world's information and make it universally accessible and useful," a quest that could describe your local library as much as a Fortune 500 company. Similarly, Facebook aims to "give people the power to build community and bring the world closer together." Even Amazon looked outside itself for fulfillment by seeking to become, in the words of its founder, Jeff Bezos, "the most customer-obsessed company to ever occupy planet Earth."

Almost from its inception, the World Wide Web produced public anxiety — your computer was joined to a network that was beyond your ken and could send worms, viruses and trackers your way — but we nonetheless were inclined to give these earnest innovators the benefit of the doubt. They were on our side in making the web safe and useful, and thus it became easy to interpret each misstep as an unfortunate accident on the path to digital utopia rather than as subterfuge meant to ensure world domination.

Now that Google, Facebook and Amazon have become world dominators, the questions of the hour are: Can the public be convinced to see Silicon Valley as the wrecking ball that it is? And do we still have the regulatory tools and social cohesion to restrain the monopolists before they smash the foundations of our society?

By all accounts, these programmers turned entrepreneurs believed their lofty words and were at first indifferent to getting rich from their ideas. A 1998 paper by Sergey Brin and Larry Page, then computer-science graduate students at Stanford, stressed the social benefits of their new search engine, Google, which would be open to the scrutiny of other researchers and wouldn’t be advertising-driven. The public needed to be assured that searches were uncorrupted, that no one had put his finger on the scale for business reasons.

To illustrate their point, Mr. Brin and Mr. Page boasted of the purity of their search engine’s results for the query “cellular phone”; near the top was a study explaining the danger of driving while on the phone. The Google prototype was still ad-free, but what about the others, which took ads? Mr. Brin and Mr. Page had their doubts: “We expect that advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”

There was a crucial need for "a competitive search engine that is transparent and in the academic realm," and Google was set to be that ivory tower internet tool. Until, that is, Mr. Brin and Mr. Page were swept up by the entrepreneurship pervasive at Stanford — a meeting with a professor led to a meeting with an investor, who wrote a $100,000 check before Google was even a company. In 1999, Google announced a $25 million investment of venture capital while insisting nothing had changed. When Mr. Brin was asked by reporters how Google planned to make money, he replied, "Our goal is to maximize the search experience, not maximize the revenues from search."

Mark Zuckerberg took a similar tack back in the early days of Facebook. A social network was too important to sully with commerce, he told The Harvard Crimson in 2004. “I mean, yeah, we can make a bunch of money — that’s not the goal,” he said of his social network, then still called thefacebook.com. “Anyone from Harvard can get a job and make a bunch of money. Not everyone at Harvard can have a social network. I value that more as a resource more than, like, any money.” Mr. Zuckerberg insisted he wouldn’t give in to the profit seekers; Facebook would stay true to its mission of connecting the world.

Seven years later, Mr. Zuckerberg, too, had succumbed to Silicon Valley venture capital, but he seemed to regret it. “If I were starting now,” he told an interviewer in 2011, “I just would have stayed in Boston, I think,” before adding: “There are aspects of the culture out here where I think it still is a little bit short-term focused in a way that bothers me. You know, whether it’s like people who want to start companies to start a company, not knowing what they like, I don’t know, to, like, flip it.”

Ultimately, however, the founders of Google and Facebook faced a day of reckoning. Investors hadn’t signed on for a charity, and they demanded accountability. In the end, Mr. Brin and Mr. Page agreed under pressure to display advertising alongside search results and eventually to allow an outside chief executive, Mr. Schmidt. Mr. Zuckerberg agreed to include ads within the news feed and transferred a favorite programmer to the mobile-advertising business, telling him, “Wouldn’t it be fun to build a billion-dollar business in six months?”

Turns out that there were billion-dollar fortunes to be made by exploiting the foggy relationship between the public and tech companies. We all knew there was no such thing as a free lunch, an insight memorably encapsulated in 2010 by a commenter to the website MetaFilter, as, “If you are not paying for it, you’re not the customer; you’re the product being sold.” But, really, how can you tell? So much of what is happening between the public and Silicon Valley is out of view — algorithms written and controlled by wizards who are able to extract value from your identity in ways you could never do for yourself.

Once Mr. Brin, Mr. Page and Mr. Zuckerberg reversed course on pursuing profits, they reported an odd thing — the public didn’t seem to care. “Do you know the most common feedback, honestly?” Mr. Brin said in 2002 when asked about the reaction to Google’s embrace of advertising. “It’s ‘What ads?’ People either haven’t done searches that bring them up or haven’t noticed them. Or the third possibility is that they brought up the ads and they did notice them and they forgot about them, which I think is the most likely scenario.”


Interactions between people and their computers were always going to be confusing, and that confusion would be easy for programmers to exploit. John McCarthy, the computer-science pioneer who nurtured the first hackers at M.I.T. and later ran Stanford’s artificial intelligence lab, worried that programmers didn’t understand their responsibilities. “Computers will end up with the psychology that is convenient to their designers (and they’ll be fascist bastards if those designers don’t think twice),” he wrote in 1983. “Program designers have a tendency to think of the users as idiots who need to be controlled. They should rather think of their program as a servant, whose master, the user, should be able to control it.”

Call it the Eliza problem. In 1966, Joseph Weizenbaum, a professor at M.I.T., unveiled a computer program, Eliza, which imitated a psychotherapist. It would, by rote, inquire about your feelings toward your parents or try to get you talking by rephrasing what you said in the form of a question. The program immediately touched a nerve, becoming a national phenomenon, to the surprise of Mr. Weizenbaum. For example, The New York Times swooned: “Computer Is Being Taught to Understand English.”

Eliza understood nothing, in truth, and could never reach any shared insight with a “patient.” Eliza mechanically responded to whatever appeared on the screen. A typical therapy session quickly devolved into a Monty Python sketch. (Patient: You are not very aggressive, but I think you don’t want me to notice that. Eliza: What makes you think I am not very aggressive? Patient: You don’t argue with me. Eliza: Why do you think I don’t argue with you? Patient: You are afraid of me. Eliza: Does it please you to believe I am afraid of you?)
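The mechanical trick behind exchanges like that one is simple to reproduce. The toy sketch below (in Python, and emphatically not Weizenbaum's original program, which was written for M.I.T.'s mainframes in the 1960s) shows the rote method the article describes: swap first- and second-person pronouns, then wrap the mirrored statement in a canned question. The pronoun table and question templates here are illustrative assumptions, not Eliza's actual script.

```python
# Pronoun swaps applied word by word, so "you are afraid of me"
# mirrors to "I am afraid of you".
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(statement: str) -> str:
    """Swap pronouns so the statement can be mirrored back at the user."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Turn the user's statement into a rote question, Eliza-style."""
    mirrored = reflect(statement)
    if statement.lower().startswith("you"):
        return f"What makes you think {mirrored}?"
    return f"Why do you say {mirrored}?"

print(respond("You are afraid of me."))
# → What makes you think I am afraid of you?
```

A few lines of string substitution are enough to produce the transcript above — which is precisely the point: the "understanding" users projected onto Eliza was never in the machine.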

Imagine Mr. Weizenbaum’s surprise when his secretary looked up from her computer and interrupted her exchanges with Eliza to say to him, “Would you mind leaving the room, please?” She wanted privacy for a conversation with a machine! Mr. Weizenbaum, appalled, suddenly saw the potential for mischief by programmers who could manipulate computers and potentially the rest of us. He soon switched gears and devoted his remaining years to protesting what he considered the amorality of his computer science peers, frequently referring to his experiences as a young refugee from Nazi Germany.

In his epic anti-A.I. work from the mid-1970s, “Computer Power and Human Reason,” Mr. Weizenbaum described the scene at computer labs. “Bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be as riveted as a gambler’s on the rolling dice,” he wrote. “They exist, at least when so engaged, only through and for the computers. These are computer bums, compulsive programmers.”

He was concerned about them as young students lacking perspective about life and was worried that these troubled souls could be our new leaders. Neither Mr. Weizenbaum nor Mr. McCarthy mentioned, though it was hard to miss, that this ascendant generation were nearly all white men with a strong preference for people just like themselves. In a word, they were incorrigible, accustomed to total control of what appeared on their screens. “No playwright, no stage director, no emperor, however powerful,” Mr. Weizenbaum wrote, “has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.”

Welcome to Silicon Valley, 2017.

As Mr. Weizenbaum feared, the current tech leaders have discovered that people trust computers and have licked their lips at the possibilities. The examples of Silicon Valley manipulation are too legion to list: push notifications, surge pricing, recommended friends, suggested films, people who bought this also bought that. Early on, Facebook realized there was a hurdle to getting people to stay logged on. “We came upon this magic number that you needed to find 10 friends,” Mr. Zuckerberg recalled in 2011. “And once you had 10 friends, you had enough content in your newsfeed that there would just be stuff on a good enough interval where it would be worth coming back to the site.” Facebook would design its site for new arrivals so that it was all about finding people to “friend.”

The 10 friends rule is an example of a favored manipulation of tech companies, the network effect. People will use your service — as lame as it may be — if others use your service. This was tautological reasoning that nonetheless proved true: If everyone is on Facebook, then everyone is on Facebook. You need to do whatever it takes to keep people logging in, and if rivals emerge, they must be crushed or, if stubbornly resilient, acquired.


Growth becomes the overriding motivation — something treasured for its own sake, not for anything it brings to the world. Facebook and Google can point to a greater utility that comes from being the central repository of all people, all information, but such market dominance has obvious drawbacks, and not just the lack of competition. As we’ve seen, the extreme concentration of wealth and power is a threat to our democracy by making some people and companies unaccountable.

In addition to their power, tech companies have a tool that other powerful industries don’t: the generally benign feelings of the public. To oppose Silicon Valley can appear to be opposing progress, even if progress has been defined as online monopolies; propaganda that distorts elections; driverless cars and trucks that threaten to erase the jobs of millions of people; the Uberization of work life, where each of us must fend for ourselves in a pitiless market.

As is becoming obvious, these companies do not deserve the benefit of the doubt. We need greater regulation, even if it impedes the introduction of new services. If we can’t stop their proposals — if we can’t say that driverless cars may not be a worthy goal, to give just one example — then are we in control of our society? We need to break up these online monopolies because if a few people make the decisions about how we communicate, shop, learn the news, again, do we control our own society?

Out of curiosity, the other day I searched “cellphones” on Google. Before finding even a mildly questioning article about cellphones, I paged down through ads for phones and lists of phones for sale, guides to buying phones and maps with directions to stores that sell phones, some 20 results in total. Somewhere, a pair of idealistic former graduate students must be saying: “See! I told you so!”