Google is losing control
Google is flailing. After years of single-minded worship of the false god Virtual Assistant, the company is rushing its AI strategy as its competitors join hands and raise their pitchforks. The irony is that it’s all happening because Google thought it had the pitchfork market cornered.
See, in 2017, Google researchers published the paper “Attention Is All You Need,” introducing the concept of the transformer and vastly improving the capabilities of machine learning models. You don’t need to know the technical side of it (and indeed I am not the one to teach you), but it has been enormously influential and empowering; suffice it to say that it’s the T in GPT.
You may well ask, why did Google give this wonderful thing away freely? While big private research outfits have been criticized in the past for withholding their work, the trend over the last few years has been toward publishing. This is a prestige play and also a concession to the researchers themselves, who would rather their employer not hide their light under a bushel. There is likely an element of hubris to it as well: Having invented the tech, how could Google fail to best exploit it?
The capabilities we see in ChatGPT and other large language models today did not immediately follow. It takes time to understand and take advantage of a new tool, and every major tech company got to work examining what the new era of AI might provide, and what it needed to do so.
Assisting the Assistant
There’s no question that Google was dedicating itself to AI work just like everyone else. Over the next few years, it made serious strides in designing AI computation hardware, built useful platforms for developers to test and develop machine learning models, and published tons of papers on everything from esoteric model tweaks to more recognizable things like voice synthesis.
But there was a problem. I’ve heard this anecdotally from Google employees and others in the industry: there’s a sort of feudal aspect to the way the company works. Getting your project under the auspices of an existing major product, like Maps or Assistant, is a reliable way to get money and staff. And so it seems that despite having hoarded many of the best AI researchers in the world, Google channeled their talent into the ruts of corporate strategy.
Shall we see how that turned out? Here’s an (admittedly selective) little timeline:
In 2018, Google showed off incremental improvements to Google Assistant’s conversation flow, Photos features (things like colorizing monochrome images), a smart display with a “visual-first version of Assistant” (have you ever seen it?), Assistant in Maps, AI-assisted Google News and (to its credit) ML Kit.
In 2019, a rebranded and bigger smart display, AR search results, AR Maps, Google Lens updates, Duplex for the web (remember Duplex?), a compressed Google Assistant that does more locally, Assistant in Waze, Assistant in driving mode, Live Caption and Live Relay (speech recognition) and a project to better understand people with speech impairments.
To be sure, some of these things are great! Most, however, were just an existing thing with a boost from AI. Lots feel a bit cringe in retrospect. You really see how big companies like Google are in thrall to trends even as they drive them.
Meanwhile, in February of that year we also had the headline: “OpenAI built a text generator so good, it’s considered too dangerous to release.” That was GPT-2. Not 3, not 3.5… 2.
In 2020, Google made an AI-powered Pinterest clone, then in December fired Timnit Gebru, one of the leading voices in AI ethics, over a paper pointing out limits and dangers of the technology.
To be fair, 2020 wasn’t a great year for a lot of people — with the notable exception of OpenAI, whose co-founder Sam Altman had to personally tamp down hype for GPT-3 because it had grown beyond tenable levels.
2021 saw the debut of Google’s own large language model, LaMDA, though the demos didn’t really sell it. Presumably they were still casting about for a reason for it to exist beyond making Assistant throw fewer errors.
OpenAI started the year by showing off DALL-E, the first version of the text-to-image model that would soon become a household name. They had begun showing that LLMs, through systems like CLIP, could handle more than language tasks, acting instead as an all-purpose interpretation and generation engine. (To be clear, I don’t mean “artificial general intelligence” or AGI, just that the process worked for more than a preset collection of verbal commands.)
In 2022, more tweaks to Assistant, more smart displays, more AR in Maps, and a $100 million acquisition of a maker of AI-generated profile pictures. OpenAI released DALL-E 2 in April and ChatGPT in December.
At some point, I suspect early 2022, Google executives opened their eyes, and what they saw scared the hell out of them. I’m picturing the scene in Lord of the Rings where Denethor finally looks out at the gathered armies of Mordor. But instead of losing their minds and being laid out by a wizard, these frantic VPs sent out emails asking why some pert startup was running circles around the world leader in AI, especially after Google had practically invented the means to do so.
The evidence for this is the trotting out of Imagen a month after DALL-E 2, though like practically every other interesting piece of AI research Google publicized, it was not available for anyone to test, let alone access through an API. Then, after Meta released Make-A-Video in September, Google responded with Imagen Video a week later. Riffusion made waves for generating music, and a month later, here comes MusicLM (which you can’t use).
But surely it was ChatGPT that caused Google leadership to swiftly transition from anxiety to full-on flop sweat.
It would have been clear to all involved that this kind of conversational AI was categorically different from the Assistant products Google had been investing in for a decade, and was actually doing what everyone else’s pseudo-AIs (effectively just natural language frontends for a collection of APIs) pretended to. That’s what’s called an existential threat.
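To make the distinction concrete, here is a toy sketch in Python. Everything in it is invented for illustration (hypothetical function names, canned strings standing in for real API and model calls): the point is simply that the old-style assistant can only map your words onto a fixed menu of handlers, while a conversational model attempts an answer to whatever you ask.

```python
# Hypothetical sketch: the "pseudo-AI" assistant pattern vs. open-ended generation.

def classic_assistant(utterance: str) -> str:
    """Natural language frontend over a fixed collection of APIs."""
    handlers = {
        "weather": lambda: "It's 14 degrees and cloudy.",  # stand-in for a weather API call
        "timer": lambda: "Timer set for 10 minutes.",      # stand-in for a timer API call
    }
    for keyword, handler in handlers.items():
        if keyword in utterance.lower():
            return handler()
    return "Sorry, I can't help with that."                # anything off-script falls through

def conversational_model(utterance: str) -> str:
    """Placeholder for an LLM call: no intent list, just open-ended generation."""
    return f"[model-generated answer to: {utterance!r}]"

if __name__ == "__main__":
    for query in ("What's the weather?", "Why is the sky blue?"):
        print(classic_assistant(query))      # the second query hits the fallback
        print(conversational_model(query))   # both get an (attempted) answer
```

The first function breaks the moment you step off the script; the second, at least in principle, does not. That gap is the categorical difference.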
Fortune or foresight?
Now, it was bad enough that someone else, some upstart immune to acquisition, had triggered the next phase of evolution for the search engine, and that they had done so in a highly public way that captured the imagination of everyone from industry leaders to the tech-avoidant. The real twist of the knife came unexpectedly from Microsoft.
Calling Bing a “rival” to Google Search is perhaps too generous — with about 3% of global search compared to Google’s 92%, Bing is more of a well-heeled gadfly. Microsoft seems to have abandoned any illusions about Bing’s ability to improve its standing and looked outside its own house for help. Whether its investment in OpenAI was preternatural foresight or fortunate serendipity, at some point it became clear that it had backed a fast horse.
Perhaps in some smoke-filled room, Satya Nadella and Sam Altman conspired to exclude Google from their new world order, but in public the conversation took the form of money, and lots of it. Whatever the backstory, Microsoft had secured an alliance with the innovative newcomer, and with it the opportunity to put OpenAI’s tech to work wherever it would do the most good.
While we have seen some interesting ideas floated about how generative AI can help in productivity, coding and even management, they have yet to be proven out, due either to copyright concerns or to AI’s tendency to be a bit too “creative” in its responses. But given proper guardrails, the technology was clearly very good at synthesizing information to answer nearly any question, from simple factual queries to complex philosophical ones.
Search combined Microsoft’s need to innovate its way ahead with a core competency of large language models, whose foremost creator it had, by good chance or good sense, just lined up as a partner. The move to integrate the latest GPT model (some call it GPT-4, but I suspect OpenAI will reserve that moniker for its own first-party model) with Bing and Edge is a kind of forced Hail Mary, Microsoft’s last and best play in the search engine world.
Google, clearly rattled, attempted a spoiler campaign with a vacuous blog post the day before Microsoft’s scheduled event announcing the OpenAI-powered Bing. Bard, apparently the name of Google’s LaMDA-based ChatGPT competitor, was unveiled in now-typical spare fashion: promises of capabilities, but no hard dates or access plans.
This attempt at an announcement seems to have been made in such a hurry that its content was barely mentioned at Google’s “Search and AI” event two days later, and indeed it also escaped the kind of fact check you’d want to do if you were advertising the future of the knowledge graph. The image used to illustrate Bard contained a non-trivial error, saying that the James Webb Space Telescope “took the very first pictures of a planet outside our solar system.” This is untrue, and the fact that this vaunted machine intelligence got it wrong, and that no one at Google noticed or cared enough to check, appears to have spooked investors.
ChatGPT certainly has problems, and indeed immediately after the rollout of Microsoft’s enhanced Bing, TechCrunch was able to get the supposedly safe and appropriate AI to improvise an essay by Hitler and then regurgitate vaccine disinfo that an earlier version of itself wrote last month. But these are blemishes on an established record that includes billions of prompts and conversations served, to the overwhelming satisfaction of its users.
Google rushing its shot and tripping up so visibly speaks to a lack of readiness even at a limited, experimental level — let alone a global rollout like the one Microsoft has already begun.
In its investor call, CEO Sundar Pichai said “I think I see this as a chance to rethink and reimagine and drive Search to solve more use cases for our users as well. It’s early days, but you will see us be bold, put things out, get feedback and iterate and make things better.” Does that sound like a man with a plan?
It’s understandable that Google would not want to slaughter the golden goose by prematurely merging Search with whatever half-cooked general-use LLM they have sitting around. They’ve become experts at deploying highly specialized AI: task models that do one or two things. But when it comes to making a big move, their comfortable position has saddled them with inertia.
Is it Google’s downfall? Of course not; it will remain the default and a fabulously profitable, somewhat ridiculous corporation for the immediate future. But investor confidence has been shaken as it turns out that Google’s failure to innovate meaningfully over the last few years may have been born not of wisdom and confidence but of reticence and pride. (The FTC and the Justice Department taking another shot at its ad business can’t help, either.)
This turn of the worm is only in its first few degrees, however, and we must not speculate too far when the technology in question has yet to prove itself as valuable as everyone wants to believe it is. If it isn’t, the whole tech industry will face the fallout, not just Google.