How will Google solve its AI conundrum?

Home to much of the latest technology, Google should be one of the big winners in the tech industry’s fledgling artificial intelligence arms race.
There’s just one problem: with politicians and regulators bearing down on it, and a huge business model to defend, the internet search giant may be hesitant to deploy the many weapons at its disposal.
This week, Microsoft invested billions of dollars in artificial intelligence research firm OpenAI, launching a direct challenge to the search giant. The move comes less than two months after the release of OpenAI’s ChatGPT, a chatbot that answers queries with text or snippets of code, suggesting that generative artificial intelligence could one day replace internet searches.
Microsoft executives have made no secret that their priority in commercializing OpenAI’s technology is to challenge Google, reawakening a rivalry that has simmered since Google won the search wars a decade ago.
DeepMind, the London-based research firm Google acquired in 2014, and Google Brain, the advanced research arm of its Silicon Valley headquarters, have long given the search company one of the strongest footholds in artificial intelligence.
More recently, Google has made breakthroughs of its own in the kind of so-called generative AI that underpins ChatGPT, including AI models capable of telling jokes and solving math problems.
One of its most advanced large language models, PaLM, is a general-purpose model roughly three times the size of GPT-3, the model family underpinning ChatGPT, as measured by the number of parameters used to train it: about 540 billion versus 175 billion.
Google’s chatbot LaMDA (Language Model for Dialogue Applications) can converse with users in natural language, much as ChatGPT does. The company’s engineering teams have been working for months to integrate it into consumer products.
Despite these advances, most of this state-of-the-art technology remains confined to research. Google’s critics say it is hobbled by its lucrative search business, which has deterred it from introducing generative AI into consumer products.
Answering queries directly, rather than simply pointing users to suggested links, would reduce search volume, said former Google executive Sridhar Ramaswamy.
This puts Google in the “classic innovator’s dilemma,” a reference to the book by Harvard Business School professor Clayton Christensen that seeks to explain why industry leaders so often fall victim to fast-moving upstarts. “If I were someone running a $150 billion business, I would be terrified of this,” Ramaswamy said.
“We have long been focused on developing and deploying artificial intelligence to improve people’s lives. We believe artificial intelligence is a fundamentally transformative technology that is useful for individuals, businesses and communities,” Google said. However, the search giant said it “needs to consider the wider societal impact these innovations may have.” Google added that it would announce “more external experiences” soon.
Besides leading to fewer searches and lower revenue, the proliferation of generative AI could also cause Google’s costs to skyrocket.
Ramaswamy calculates that using natural language processing to “read” all the web pages in the search index, and then using that analysis to generate more direct answers to the questions people type into the search engine, would cost $120 million based on OpenAI’s pricing. Analysts at Morgan Stanley, meanwhile, estimate that answering a search query with language processing costs about seven times as much as a standard internet search.
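The article does not show the arithmetic behind Ramaswamy’s figure, but a back-of-the-envelope estimate of this kind is straightforward. The sketch below uses hypothetical inputs (the index size, tokens per page and per-token price are illustrative placeholders, not his actual assumptions) that happen to land on the same $120 million total:

```python
# Hypothetical back-of-the-envelope model of what it costs to run a large
# language model over every page in a search index, priced per token the
# way OpenAI's API is. Every input below is an illustrative assumption,
# not a figure reported in the article.

PAGES_IN_INDEX = 100_000_000_000  # assumed index size: 100bn pages
TOKENS_PER_PAGE = 600             # assumed average tokens per page
USD_PER_1K_TOKENS = 0.002         # assumed API price per 1,000 tokens


def index_processing_cost(pages: int, tokens_per_page: int,
                          usd_per_1k_tokens: float) -> float:
    """Cost of pushing every page through the model once."""
    total_tokens = pages * tokens_per_page
    return total_tokens / 1_000 * usd_per_1k_tokens


cost = index_processing_cost(PAGES_IN_INDEX, TOKENS_PER_PAGE, USD_PER_1K_TOKENS)
print(f"Estimated one-off cost: ${cost:,.0f}")  # $120,000,000 with these inputs
```

Different assumptions about the size of the index or OpenAI’s pricing would move the total considerably; the Morgan Stanley comparison above makes the per-query version of the same point.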
The same considerations may prevent Microsoft from overhauling its Bing search engine, which generated more than $11 billion in revenue last year. But the software company said it plans to use OpenAI’s technology in its products and services, which could offer users new ways to get relevant information in other apps, reducing the need to use search engines.
Many current and former employees close to Google’s AI research teams say the biggest constraints on the company’s release of AI are concerns about potential harm and the damage it could do to Google’s reputation, as well as a tendency to underestimate the competition.
“I think they fell asleep at the wheel,” said a former Google AI scientist who now runs an AI company. “Honestly, everyone underestimated how language models are going to disrupt search.”
Those challenges have been exacerbated by political and regulatory issues raised by Google’s growing power and by increased public scrutiny of industry leaders adopting new technologies.
More than a year ago, the company’s leadership became concerned that a surge in artificial intelligence capabilities could lead to a wave of public concern about the implications of such a powerful technology at the company’s disposal, according to a former Google executive. Last year it appointed former McKinsey executive James Manyika as its new senior vice president to advise on the wider societal impact of its new technologies.
Generative AI of the kind used in services like ChatGPT is inherently prone to giving wrong answers and could be used to generate misinformation, Manyika said. In an interview with the Financial Times just days before ChatGPT’s launch, he added: “That’s why we’re not rushing to roll out these things in the way people might expect.”
However, the huge interest ChatGPT has aroused has intensified the pressure on Google to match OpenAI more quickly. That leaves it with the challenge of showing off its AI prowess and integrating it into its services without damaging its brand or sparking a political backlash.
“It would be a real problem for Google if it wrote sentences containing hate speech next to the Google name,” said Ramaswamy, co-founder of search startup Neeva. Google’s standards are higher, he added, than those of startups, which can argue that their services merely offer an objective summary of what is available on the internet.
The search company has previously come under fire for its handling of AI ethics. In 2020, Google caused an uproar over its approach to the ethics and safety of its AI technology when two prominent AI researchers departed in contentious circumstances after a dispute over a research paper assessing the risks of language-based AI.
Incidents like this subject Google to greater public scrutiny than organizations such as OpenAI, or open-source alternatives such as Stable Diffusion. The latter, a model that generates images from text descriptions, has had several safety issues, including the generation of pornographic images. According to AI researchers, its safety filters are easy to circumvent: the relevant lines of code can simply be removed. Its parent company, Stability AI, did not respond to a request for comment.
OpenAI’s technology has also been misused. In 2021, an online game called AI Dungeon, which had licensed the text-generation tool GPT to create storylines shaped by individual users’ prompts, found within months that some users were crafting gameplay involving child sexual abuse, among other disturbing content. OpenAI ultimately helped the company introduce a better moderation system.
OpenAI did not respond to a request for comment.
If something like this happened to Google, the repercussions would be far worse, according to a former Google AI researcher. Even with the company now facing a serious threat from OpenAI, it is unclear whether anyone there is prepared to take on the responsibility and risk of releasing new AI products faster, they added.
Microsoft, however, faces a similar dilemma over how to use the technology, even as it tries to portray itself as a more responsible user of AI than Google. OpenAI itself warns that ChatGPT is prone to errors, making it difficult to embed the technology in its current form into commercial services.
But with the most dramatic demonstration yet of the power of the artificial intelligence now sweeping the tech world, OpenAI has served notice that even an entrenched power like Google could be at risk.