How Elon Musk, Sam Altman, and the Silicon Valley elite manipulate the public
- by Fortune
- Sep 17, 2024
Gary Marcus
Gary Marcus, a professor emeritus at NYU, is a leading voice in artificial intelligence, well known for his challenges to contemporary AI. He is a scientist and best-selling author, and was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber.
[Photo: Tesla CEO Elon Musk and OpenAI CEO Sam Altman.]
The following is an excerpt from Gary Marcus’s book Taming Silicon Valley: How We Can Ensure That AI Works for Us.
The question is, why did we fall for Silicon Valley’s over-hyped and often messianic narrative in the first place? This chapter is a deep dive into the mind tricks of Silicon Valley. Not, mind you, the already well-documented tricks discussed in the film The Social Dilemma, in which Silicon Valley outfits like Meta addict us to their software. As you may know, they weaponize their algorithms in order to attract our eyeballs for as long as possible, and serve up polarizing information so they can sell as many advertisements as possible, thereby polarizing society, undermining mental health (particularly of teens) and leading to phenomena like the one Jaron Lanier once vividly called “Twitter poisoning” (“a side effect that appears when people are acting under an algorithmic system that is designed to engage them to the max”). In this chapter, I dissect those mind tricks by which big tech companies bend and distort the reality of what the tech industry itself has been doing, exaggerating the quality of the AI, while downplaying the need for its regulation.
Let’s start with hype, a key ingredient in the AI world, even before Silicon Valley was a thing. The basic move—overpromise, overpromise, overpromise, and hope nobody notices—goes back to the 1950s and 1960s. In 1967, AI pioneer Marvin Minsky famously said: “Within a generation, the problem of artificial intelligence will be substantially solved.” But things didn’t turn out that way. As I write this, in 2024, a full solution to artificial intelligence is still years, perhaps decades away.
But there’s never been much accountability in AI; if Minsky’s projections were way off, it didn’t much matter. His generous promises (initially) brought big grant dollars—just as overpromising now often brings big investor dollars. In 2012, Google cofounder Sergey Brin promised driverless cars for everyone in five years, but that still hasn’t happened, and hardly anyone ever even calls him on it. Elon Musk started promising his own driverless cars in 2014 or so, and kept up his promises every year or two, eventually promising that whole fleets of driverless taxis were just around the corner. That too still hasn’t happened. (Then again, Segways never took over the world either, and I am still waiting for my inexpensive personal jetpack, and the cheap 3D-printer that will print it all.)
Silicon Valley hype—and its rewards
All too often, Silicon Valley is more about promise than delivery. Over $100 billion has been invested in driverless cars, and they are still in prototype phases, working some of the time, but not reliably enough to be scaled up for worldwide deployment. In the months before I wrote this, GM’s driverless car division Cruise all but fell apart. It came out that they had more people behind the scenes in a remote operations center than actual driverless cars on the road. GM pulled support; the Cruise CEO Kyle Vogt resigned. Hype doesn’t always materialize. And yet it continues unabated. Worse, it is frequently rewarded.
A common trick is to feign that today’s three-quarters-baked AI (full of hallucinations and bizarre and unpredictable errors) is tantamount to so-called Artificial General Intelligence (which would be AI that is at least as powerful and flexible as human intelligence) when nobody is particularly close. Not long ago, Microsoft posted a paper, not peer-reviewed, that grandiosely claimed “sparks of AGI” had been achieved. Sam Altman is prone to pronouncements like “by [next year] model capability will have taken such a leap forward that no one expected.…It’ll be remarkable how much different it is.” One master stroke was to say that the OpenAI board would get together to determine when Artificial General Intelligence “had been achieved,” subtly implying that (1) it would be achieved sometime soon and (2) if it had been reached, it would be OpenAI that achieved it.
That’s weapons-grade PR, but it doesn’t for a minute make it true. (Around the same time, OpenAI’s Altman posted on Reddit, “AGI has been achieved internally,” when no such thing had actually happened.)
Only very rarely do the media call out such nonsense. It took them years to start challenging Musk’s overclaiming on driverless cars, and few if any asked Altman why the important scientific question of when AGI had been reached would be “decided” by a board of directors rather than by the scientific community.
The combination of finely tuned rhetoric and a mostly pliable media has downstream consequences; investors have put too much money in whatever is hyped, and, worse, government leaders are often taken in.
Two other tropes often reinforce one another. One is the “Oh no, China will get to GPT-5 first” mantra that many have spread around Washington, subtly implying that GPT-5 will fundamentally change the world (in reality, it probably won’t). The other tactic is to pretend that we are close to an AI that is SO POWERFUL IT IS ABOUT TO KILL US ALL. Really, I assure you, it’s not.
Many of the major tech companies recently converged on precisely that narrative of imminent doom, exaggerating the importance and power of what they have built. But not one has given a plausible, concrete scenario by which such doom could actually happen anytime soon.
No matter; they got many of the major governments of the world to take that narrative seriously. This makes the AI sound smarter than it really is, driving up stock prices. And it keeps attention away from hard-to-address but critical risks that are more imminent (or are already happening), such as misinformation, for which big tech has no great solution. The companies want us, the citizens, to absorb all the negative externalities (an economist’s term for bad consequences, coined by the British economist Arthur Pigou) that might arise—such as the damage to democracy from Generative AI–produced misinformation, or cybercrime and kidnapping schemes using deepfaked voice clones—without them paying a nickel.
Big Tech wants to distract us from all that, by saying—without any real accountability—that they are working on keeping future AI safe (hint: they don’t really have a solution to that, either), even as they do far too little about present risk. Too cynical? Dozens of tech leaders signed a letter in May 2023 warning that AI could pose a risk of extinction, yet not one of those leaders appears to have slowed down one bit.
Another way Silicon Valley manipulates people is by feigning that they are about to make enormous barrels of cash. In 2019, for example, Elon Musk promised that a fleet of “robo taxis” powered by Tesla would arrive in 2020; by 2024 they still hadn’t arrived. Now Generative AI companies are being valued at billions (and even tens of billions) of dollars, but it’s not clear they will ever deliver. Microsoft Copilot has been underwhelming in early trials, and OpenAI’s app store (modeled on Apple’s app store) offering custom versions of ChatGPT is struggling. A lot of the big tech companies are quietly recognizing that the promised profits aren’t going to materialize any time soon.
But the abstract notion that they might make money gives them immense power; government dare not step on what has been positioned as a potential cash cow. And because so many people idolize money, too little of the rhetoric ever gets seriously questioned.
A dramatic overestimation of value
Another frequent move is to publish a slick video that hints at much more than can be actually delivered. OpenAI did this in October 2019, with a video that showed one of their robots solving a Rubik’s Cube, one-handed. The video spread like wildfire, but the video didn’t make clear what was buried in the fine print.
When I read their Rubik’s Cube research paper carefully, having seen the video, I was appalled by a kind of bait-and-switch, and said so: the intellectual part of solving a Rubik’s Cube had been worked out years earlier, by others; OpenAI’s sole contribution, the motor control part, was achieved by a robot that used a custom, not-off-the-shelf, Rubik’s Cube with Bluetooth sensors hidden inside. As is often the case, the media imagined a robotics revolution, but within a couple years the whole project had shut down. AI is almost always harder than people think.
In December 2023, Google put out a seemingly mind-blowing video about a model it had just released, called Gemini. In the video, a chatbot appeared to watch a person make drawings, and to provide commentary on the person’s drawings in real time. Many people became hugely excited by it, saying stuff on X like “Must-watch video of the week, probably the year,” “If this Gemini demo is remotely accurate, it’s showing broader intelligence than a non-zero fraction of adult humans *already*,” and “Can’t stop thinking about the implications of this demo. Surely it’s not crazy to think that sometime next year, a fledgling Gemini 2.0 could attend a board meeting, read the briefing docs, look at the slides, listen to every one’s words, and make intelligent contributions to the issues debated? Now tell me. Wouldn’t that count as AGI?”
But as some more skeptical journalists such as Parmy Olson quickly figured out, the video was fundamentally misleading. It was not produced in real time; it was dubbed after the fact, from a bunch of still shots. Nothing like the real-time, multimodal, interactive-commentary product that Google seemed to be demoing actually existed. (Google itself ultimately conceded this in a blog post.) Google’s stock price briefly jumped 5 percent based on the video, but the whole thing was a mirage, just one more stop on the endless train of hype.
Hype often equates more or less directly to cash. As I write this, OpenAI was recently valued at $86 billion, never having turned a profit. My guess is that OpenAI will someday be seen as the WeWork moment of AI, a dramatic overestimation of value. GPT-5 will either be significantly delayed or not meet expectations; companies will struggle to put GPT-4 and GPT-5 into extensive daily use; competition will increase, margins will be thin; the profits won’t justify the valuation (especially after a pesky fact I mentioned earlier: in exchange for their investment, Microsoft takes about half of OpenAI’s first $92 billion in profits, if they make any profits at all).
The beauty of the hype game is that if the valuations rise high enough, no profits are required. The hype has already made many of the employees rich, because a late 2023 secondary sale of OpenAI employee stock allowed them to cash out. (Later investors could be left holding the bag, if profits never materialize.)
For a moment, it looked as if that whole calculation might change. Just before the early employees were about to sell shares at a massive $86 billion valuation, OpenAI abruptly fired its CEO Sam Altman, potentially killing the deal. No problem. Within a few days, nearly all the employees had rallied around him. He was quickly rehired. Guess what? Business Insider reported, “While the entire company signed a letter stating they’d follow Altman to Microsoft if he wasn’t reinstated, no one really wanted to do it.” It is not that the employees wanted to be with Altman, per se, no matter what (as most onlookers assumed), but rather, I infer, that they wanted the big sale of employee stock at the $86 billion valuation to go through. Bubbles sometimes pop; good to get out while you can.
Downplaying AI pitfalls
Another common tactic is to minimize the downsides of AI. When some of us started to sound alarms about AI-generated misinformation, Meta’s chief AI scientist Yann LeCun claimed in a series of tweets on Twitter, in November and December 2022, that there was no real risk, reasoning, fallaciously, that what hadn’t happened yet would not happen ever (“LLMs have been widely available for 4 years, and no one can exhibit victims of their hypothesized dangerousness”). He further suggested that “LLMs will not help with careful crafting [of misinformation], or its distribution,” as if AI-generated misinformation would never see the light of day. By December 2023, all of this had proven to be nonsense.
Along similar lines, in May 2023, Microsoft’s chief economist Michael Schwarz told an audience at the World Economic Forum that we should hold off on regulation until serious harm had occurred. “There has to be at least a little bit of harm, so that we see what is the real problem. Is there a real problem? Did anybody suffer at least a thousand dollars’ worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not.”
Fast-forward to December 2023, and the harm is starting to come in; The Washington Post, for example, reported: “The rise of AI fake news is creating a ‘misinformation superspreader’”; in January 2024 (as I mentioned in the introduction), deepfaked robocalls in New Hampshire, sounding like Joe Biden, tried to persuade people to stay home from the polls.
But that doesn’t stop big tech from playing the same move over and over again. As noted in the introduction, in late 2023 and early 2024, Meta’s Yann LeCun was arguing there would be no real harm forthcoming from open-source AI, even as some of his closest collaborators outside of industry, his fellow deep learning pioneers Geoff Hinton and Yoshua Bengio, vigorously disagreed.
All of these efforts at downplaying risks remind me of the lines that cigarette manufacturers used to spew about smoking and cancer, whining about how the right causal studies hadn’t yet been performed, when the correlational data on death rates and a mountain of causal studies had already made it clear that smoking was causing cancer in laboratory animals. (Zuckerberg used this same cigarette-industry style of argument in response to Senator Hawley in his January 2024 testimony on whether social media was causing harm to teenagers.)
What the big tech leaders really mean to say is that the harms from AI will be difficult to prove (after all, we can’t even track who is generating misinformation with deliberately unregulated open-source software)—and that they don’t want to be held responsible for whatever their software might do. All of it, every word, should be regarded with the same skepticism we accord cigarette manufacturers.
Silicon Valley’s perceived enemies
Then there are ad hominem arguments and false accusations. One of the darkest episodes in American history came in the 1950s, when Senator Joe McCarthy gratuitously called many people Communists, often with little or no evidence. McCarthy was of course correct that there were some Communists working in the United States, but the problem was that he often named innocent people, too—without even a hint of due process—destroying many lives along the way. Out of desperation, some in Silicon Valley seem intent on reviving McCarthy’s old playbook, distracting from real problems by feinting at Communists. Most prominently, Marc Andreessen, one of the richest investors in Silicon Valley, recently wrote a “Techno-Optimist Manifesto,” enumerating a long, McCarthy-like list of “enemies” (“Our enemy is stagnation. Our enemy is anti-merit, anti-ambition, anti-striving, anti-achievement, anti-greatness, etc.”) and made sure to include a whistle call against Communism on his list, complaining of the “continuous howling from Communists and Luddites.” (As tech journalist Brian Merchant has pointed out, the Luddites weren’t actually anti-technology per se, they were pro-human.)
Five weeks later, another anti-regulatory investor from the Valley, Mike Solana, followed suit, all but calling one of the OpenAI board members a Communist (“I am not saying [so and so] is a CCP asset… but…”). There is no end to how low some people will go for a buck.
The influential science popularizer Liv Boeree recounts becoming disaffected with the whole “e/acc” (“effective accelerationism”) movement that urges rapid AI development:
I was excited about e/acc when I first heard of it (because optimism *is* extremely important). But then its leader(s) made it their mission to attack and misrepresent perceived “enemies” for clout, while deliberately avoiding engaging with counter arguments in any reasonable way. A deeply childish, zero-sum mindset.
In my mind, the entire accelerationist movement has been an intellectual failure, failing to address seriously even the most basic questions, like what would happen if sufficiently advanced technology got into the wrong hands. You can’t just say “make AI faster” and entirely ignore the consequences—but that’s precisely what the sophomoric e/acc movement has done. As the novelist Ewan Morrison put it, “This e/acc philosophy so dominant in Silicon Valley it’s practically a religion.…[It] needs to be exposed to public scrutiny and held to account for all the things it has smashed and is smashing.”
Much of the acceleration effort seems to be little more than a shameless attempt to stretch the “Overton window,” to make unpalatable and even insane ideas seem less crazy. The key rhetorical trick was to make it seem as if the nonsensical idea of zero regulation was viable, falsely portraying anything else as too expensive for startups and hence a death blow to innovation. Don’t fall for it. As the Berkeley computer scientist Stuart Russell bluntly put it, “The idea that only trillion-dollar corporations can comply with regulations is sheer drivel. Sandwich shops and hairdressers are subject to far more regulation than AI companies, yet they open in the tens of thousands every year.”
Accelerationism’s true goal seems to be simply to line the pockets of current AI investors and developers, by shielding them from responsibility. I’ve yet to hear its proponents come up with a genuine, well-conceived plan for maximizing positive human outcome over the coming decades.
Ultimately, the whole “accelerationist” movement is so shallow it may actually backfire. It’s one thing to want to move swiftly; another to dismiss regulation and move recklessly. A rushed, underregulated AI product that caused massive mayhem could lead to subsequent public backlash, conceivably setting AI back by a decade or more. (One could well argue that something like that has happened with nuclear energy.) Already there have been dramatic protests of driverless cars in San Francisco. When ChatGPT’s head of product recently spoke at SXSW, the crowd booed. People are starting to get wise.
‘The new technocrats’
Gaslighting and bullying are another common pattern. When I argued on Twitter in 2019 that large language models “don’t develop robust representations of ‘how events unfold over time’” (a point that remains true today), Meta’s chief AI scientist Yann LeCun condescendingly said, “When you are fighting a rear-guard battle, it’s best to know when you adversary overtook your rear 3 years ago,” pointing to research that his company had done, which allegedly solved the problems (spoiler alert: it didn’t). More recently, under fire when OpenAI abruptly overtook Meta, LeCun suddenly changed his tune, and ran around saying that large language models “suck,” never once acknowledging that he’d said otherwise. All this—the abruptly changing tune and correlated denial of what happened—reminded me of Orwell’s famous line on state-sponsored historical revisionism in 1984: “Oceania has always been at war with Eastasia” (when in fact targets had shifted).
The techlords play other subtle games, too. When Sam Altman and I testified before Congress, we raised our right hands and swore to tell the whole truth, but when Senator John Kennedy (R-LA) asked him about his finances, Altman said, “I have no equity in OpenAI,” elaborating that “I’m doing this ’cause I love it.” He probably does mostly work for the love of the job (and the power that goes with it) rather than the cash. But he also left out something important: he owns stock in Y Combinator (where he used to be president), and Y Combinator owns stock in OpenAI (where he is CEO), an indirect stake that is likely worth tens of millions of dollars. Altman had to have known this. It later came out that Altman also owns OpenAI’s venture capital fund, and didn’t mention that either. By leaving out these facts, he passed himself off as more noble than he really is.
And all that’s just how the tech leaders play the media and public opinion. Let’s not forget about the backroom deals. Just as an example, we’ve all known for a long time that Google was paying Apple to put their search engine front and center, but few of us (including me) had any idea quite how much. Until November 2023, that is, when, as The Verge put it, “A Google witness let slip” that Google gives Apple more than a third of the ad revenue it gets from Apple’s Safari, to the tune of $18 billion per year. It’s likely a great deal for both, but one that has significantly, and heretofore silently, shaped consumer choice, allowing Google to consolidate their near-monopoly on search. Both companies tried, for years, to keep this out of public view.
Lies, half-truths, and omissions. Perhaps Adrienne LaFrance said it best, in an article in The Atlantic titled “The Rise of Technoauthoritarianism”:
The new technocrats claim to embrace Enlightenment values, but in fact they are leading an antidemocratic, illiberal movement…The world that Silicon Valley elites have brought into being is a world of reckless social engineering, without consequence for its architects…They promise community but sow division; claim to champion truth but spread lies; wrap themselves in concepts such as empowerment and liberty but surveil us relentlessly.
We need to fight back.