Satya Nadella Says We are Moving from the AI Autopilot to Copilot Era with Humans in the Loop
Is this true?
Microsoft CEO Satya Nadella is one of the sharpest operators in business today. He has received a lot of credit for Microsoft’s turnaround and his move to bring OpenAI tightly inside the company’s orbit to challenge Google on multiple fronts with AI innovations. In an interview today with CNBC, he brought up a novel concept.
AI is already there at scale. Every news feed. Every sort of social media feed, search as we know of it before chat plus search, they are all on AI. If anything they are black boxes. I describe them as the autopilot era.
We are moving from the autopilot era of AI to the copilot era of AI. If anything, I feel yes it is moving fast, but moving fast in the right direction. Moving fast where humans are more in control. First of all, humans are in the loop versus being out of the loop.
Nadella said this in response to a question about whether AI was moving too fast. This clever messaging formulation may work with national news media and some government officials. But is it true?
The Autopilot Era of AI
Nadella is right that AI is all around us. It’s in the software many of us use at work. It’s being used by our banks. It’s in search, smartphone apps, home appliances, and cars. He is also right that it is a black box. Someone may be monitoring it, but most of us don’t know if a procedural decision tree is helping us complete our task or if machine learning is powering the feature. Do the developers know much more?
However, the AI he is talking about is largely relegated to a single feature and is not the entire product. Generative AI is used to power new features in existing products and is the core capability for many new products ranging from ChatGPT and Wordtune to Midjourney and Character AI.
The key point is that all of these are black boxes. Nadella is making a distinction between whether the feature is controlled by the developer or the user. If it is developer controlled, then it is autopilot in this formulation. It would be interesting if we could turn off the algorithm on YouTube and in the other applications we use to see what is out there that the algorithms are hiding.
It is amusing that the big fear everyone has about AI is that it will run on its own and make decisions that are counter to the interest of humans. Nadella is saying that is already the case! We are blessed to have "humans in the loop" at long last.
On a more serious note, these systems are the same. If I enter information into YouTube via what we used to call a search query and now refer to as a prompt, it is black-box processing until I receive a response. If I don't like the response, I can scroll or enter a new prompt to refine my search. If I enter a prompt into ChatGPT, the process is much the same. What is different is that the output today is far more robust.
The Copilot Era of AI
The Copilot concept is attractive. It is the idea of AI sitting alongside us and helping us complete a task. Google search is also a copilot for finding information. It is just not as sophisticated as Bing Chat.
ChatGPT, Stable Diffusion, Midjourney, and other tools feel more like copilots than tools such as Grammarly and Photoshop because they can accomplish much more of a task. Plus, they can sometimes (or is it often?) produce outputs we could not match on our own in quality. In other cases, they produce outputs we could not even conceive of.
In the voice assistant world, Voicebot developed a model of the assistant across the table from you, the assistant beside you, and the assistant with agency. The first completes discrete tasks, like Alexa, Siri, or some contact center chatbots. The third is an agent that can represent you in the digital world for a variety of tasks without you being present or providing constant supervision. Only rudimentary examples of this exist. Think Google Duplex or Google's message screening service on Android.
The assistant beside you didn’t exist in any meaningful form because there was no co-creation. This is the realm of copilots, and it is a capability that many of us were missing even if we didn’t know it.
Generative AI Enables Both
Of course, this misses a key point. Generative AI has certainly ushered in copilots. It has also brought us more autopilots. Character AI enables you to chat with characters that run on autopilot. Nearly every contact center out there is hoping to use some form of non-hallucinating LLM to provide more customer self-service coverage so no human ever needs to speak with a customer. The AutoGPT craze is all about autopilots.
The AI autopilot segment is just as immature as the copilot segment. We will want both to advance, and I suspect that autopilot will be the far more attractive segment as it matures. Sometimes we will collaborate with an AI to accomplish a task. Other times we will save time and effort because AI can do the work independently.
How Regulation Affects This Debate
There may be an important distinction here between what Nadella is saying and why he is saying it. Characterizing recent AI developments as moving toward a model where humans have more control is an implicit argument against regulation.
AI is already all around us. You just don’t see it because it runs on autopilot in a hidden black box. Experience tells us it is pretty benign and mostly helpful. Did you even notice it in the hundreds of products and services you use daily?
This new flavor of AI is pretty great. And, get this. You actually know you are using AI and can influence how it operates. Progress!
This does not address safety concerns or runaway AI superintelligence. It changes the subject. It is easy to understand and logical if you don’t look too closely. That makes it brilliant messaging, especially if you want more time to execute your business plan before regulatory constraints are imposed.
Nadella is walking a fine line because he must weigh two business concerns. The first is how regulation could shift the competitive dynamics in a market where Microsoft's product offerings with OpenAI have a clear lead in both mindshare and market share. The second is how the U.S. Federal Trade Commission and European Union regulators may impact Microsoft's operations in the AI market if there are concerns about the tech giants having too much power and influence.
Sam Altman may be on a different messaging plan than the CEO of the company that holds the largest ownership stake in OpenAI. NBC's report on Altman's Senate testimony today said:
The U.S. should require companies to be licensed by the government if they want to develop powerful artificial intelligence systems, the head of one of the country's top AI companies said at a Senate committee hearing Tuesday.
In his first appearance before Congress, Sam Altman, the CEO of OpenAI, the company that developed ChatGPT, said the U.S. “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities.”
If the humans are fully in control, this seems unnecessary. Hey! Sam! This is the copilot era. 😎
I welcome the AI copilot era and look forward to the AI autopilot era, which still has a long way to go.
On-point as usual. I feel the autopilot/copilot distinction is a little bit of a false dichotomy when talking about black boxes; being able to co-create with AI doesn't make these systems less black-boxy. Nadella knows that, of course. It DOES provide us with more control (for better or for worse) over the outcomes, that part is true. But I would say that only makes the tool more powerful, not less. Because it is not evil AI we should fear, but evil people with access to powerful AI ;)