• Microsoft's OpenAI investment may have been prompted by concerns over Google's AI progress.
  • In a 2019 email, a Microsoft exec said he was "very, very worried" about Google's AI capabilities.
  • The emails were made public as part of the DoJ's antitrust case against Google.

In 2019, Microsoft became "very, very worried" about Google's AI capabilities, newly unearthed emails show, and that may have been what spurred it to invest in OpenAI.

In one lengthy email, Microsoft's chief technology officer Kevin Scott told Satya Nadella and Bill Gates that Google's AI-powered "auto-complete in Gmail" was "getting scarily good."

He added that Microsoft was "multiple years behind the competition in terms of ML [machine learning] scale."

The emails, which had the subject line "Thoughts on OpenAI," were made public on Tuesday as part of the Department of Justice's antitrust case against Google. A large section of Scott's email was redacted.

In response, Microsoft CEO Nadella said the email highlighted "why I want us to do this" and copied Chief Financial Officer Amy Hood into the chain.

Microsoft did not immediately respond to a request for comment from Business Insider, made outside normal working hours.

In 2019, Microsoft made an initial $1 billion investment in what has since grown into a multibillion-dollar partnership with OpenAI.

Microsoft has since benefited from its well-timed investment.

After public and investor interest in AI surged post-ChatGPT, Microsoft was able to move quickly, incorporating OpenAI's buzzy tech into existing products like Bing and Microsoft 365.

The speed at which Microsoft released AI products even left some wondering whether arch-rival Google had been left behind.

Google, a pioneer of AI technology, has been trying to counter the narrative that it has fallen behind Microsoft ever since. The company released several products to compete with OpenAI's releases, including Bard, an AI-powered chatbot, and an AI model called Gemini.

The 2019 email exchanges also show how Microsoft was keeping tabs on its rivals, with Scott noting that the scale of OpenAI, DeepMind, and Google Brain's AI ambitions was "interesting." Among other details about its competitors, Scott pointed to Google's data center designs and distributed-systems architecture.

Discussing Microsoft's AI talent, Scott said it had "very smart" people with machine-learning expertise on its Bing, vision, and speech teams. He added that those teams faced constraints on scaling up their ambitions, which suggests why Microsoft saw potential in partnering with OpenAI to bring its AI aspirations to fruition.

Scott added that when OpenAI, DeepMind, and Google Brain were competing to see who could achieve the most impressive game-playing stunt, he was "highly dismissive of their efforts," but "that was a mistake."

Read the unredacted portions of the emails below. RL refers to reinforcement learning, NLP to natural language processing, and BERT to bidirectional encoder representations from transformers.

From: Kevin Scott
Sent: Wednesday, June 12, 2019 7:16:11 AM
To: Satya Nadella; Bill Gates
Subject: Re: Thoughts on OpenAI

[Redacted]

The thing that's interesting about what Open AI and Deep Mind and Google Brain are doing is the scale of their ambition, and how that ambition is driving everything from datacenter design to compute silicon to networks and distributed systems architectures to numerical optimizers, compilers, programming frameworks, and the high level abstractions that model developers have at their disposal. When all these programs were doing was competing with one another to see which RL system could achieve the most impressive game-playing stunt, I was highly dismissive of their efforts. That was a mistake. When they took all of the infrastructure that they had built to build NLP models that we couldn't easily replicate, I started to take things more seriously. And as I dug in to try to understand where all of the capability gaps were between Google and us for model training, I got very, very worried.
Turns out, just replicating BERT-large wasn't easy to do for us. Even though we had the template for the model, it took us ~6 months to get the model trained because our infrastructure wasn't up to the task. Google had BERT for at least six months prior to that, so in the time that it took us to hack together the capability to train a 340M parameter model, they had a year to figure out how to get it into production and to move on to larger scale, more interesting models. We are already seeing the results of that work in our competitive analysis of their products. One of the Q&A competitive metrics that we watch just jumped by 10 percentage points on Google Search because of BERT-like models. Their auto-complete in Gmail, which is especially useful in the mobile app, is getting scarily good.
[Redacted]
We have very smart ML people in Bing, in the vision team, and in the speech team. But the core deep learning teams within each of these bigger teams are very small, and their ambitions have also been constrained, which means that even as we start to feed them resources, they still have to go through a learning process to scale up. And we are multiple years behind the competition in terms of ML scale.
[Redacted]
From: Satya Nadella
To: Kevin Scott
CC: Amy Hood
Sent: 6/12/2019 6:02:47 PM
Subject: Re: Thoughts on OpenAI
Very good email that explains, why I want us to do this… and also why we will then ensure our infra folks execute.
Amy - fyi

Sent from Mail for Windows 10

Read the original article on Business Insider