Doesn't OpenAI Board Member Adam D'Angelo Have a Conflict of Interest in Ongoing Dispute?
The OpenAI saga has devolved into a power struggle. A central question now is whether OpenAI will even survive. More than 700 of the company's 770 employees have signed a letter calling for the board to resign. They have also suggested they will join OpenAI's former CEO and president, Sam Altman and Greg Brockman, at Microsoft and build a rival generative AI service.
If a company loses 90% of its staff in one week, its viability hangs by a thread. If those employees band together to create a competitor focused on crushing their former employer, the situation is even worse. It's total warfare.
The build-up to total war has gone through three stages.
Control - Today, we have a power struggle that has blurred the ideological lines separating the AI doomers (safety-first) from the AI boomers (innovation-first).
Safety - Over the weekend, the conflict narrative centered on AI safety (i.e., the doomers vs. the boomers).
Conflicts of Interest - However, back on Friday, early reports suggested that undertones of safety concerns and conflicts of interest precipitated the firing of OpenAI CEO Sam Altman. Bloomberg reported on Friday night:
Altman clashed with members of his board, especially Ilya Sutskever, an OpenAI co-founder and the company’s chief scientist, over how quickly to develop what’s known as generative AI, how to commercialize products and the steps needed to lessen their potential harms to the public…
Alongside rifts over strategy, board members also contended with Altman’s entrepreneurial ambitions. Altman has been looking to raise tens of billions of dollars from Middle Eastern sovereign wealth funds to create an AI chip startup to compete with processors made by Nvidia Corp., according to a person with knowledge of the investment proposal. Altman was courting SoftBank Group Corp. chairman Masayoshi Son for a multibillion-dollar investment in a new company to make AI-oriented hardware in partnership with former Apple designer Jony Ive.
After Friday, the conflict of interest narrative seemed to melt away, suggesting the accusation rested on shaky foundations. However, a remaining OpenAI board member has what appears to be a clear conflict of interest, and it has garnered little to no attention.
Adam D’Angelo runs a company that uses OpenAI technology in a key product but is not dependent on it. The product also directly competes with ChatGPT, the recently announced GPTs product, and OpenAI’s forthcoming GPT marketplace. ChatGPT and GPTs are formidable obstacles to that product’s success. He also happens to be an OpenAI board member.
Poe-tential for Conflict
D'Angelo, Facebook's CTO during its pre-IPO period, is the co-founder and CEO of Quora, the creator of Poe.
April 2018: Adam D’Angelo joined OpenAI’s board.
December 2022: Quora launched Poe, the Platform for Open Exploration, as a ChatGPT-like general knowledge chatbot.
March 2023: Poe added a subscription tier that matched the $20 per month price of ChatGPT Plus.
April 2023: Poe added the ability to create your own bots using either ChatGPT or Anthropic's Claude large language models (LLMs).
July 2023: Poe added support for Meta's Llama 2 LLM.
August 2023: Poe added support for Google's PaLM 2 LLM.
October 2023: Poe announced a new marketplace where custom bot creators can earn revenue from users. Bot creators can also add a custom knowledge base to ground the answers, customize responses, and share their bots with others.
November 2023: OpenAI announced GPTs, a new feature that enables the creation of customized chatbots based on ChatGPT. Sam Altman also indicated that a marketplace allowing bot developers to share and sell access to their creations would launch later in the month.
Poe is in direct competition with ChatGPT and the GPTs product concept championed by Altman. Given the scale of OpenAI's ChatGPT user base and its dominant mindshare, ChatGPT presents the single biggest obstacle to Poe's growth, and perhaps its survival. Poe, however, is not dependent on OpenAI's LLMs; it long ago diversified its model selection to include alternatives.
Adam D'Angelo oversees the company that controls Poe. In fact, you can see where his real focus lies in his Twitter profile, which includes two links, both to Poe and none to Quora. He is one of the four OpenAI board members who voted to fire Sam Altman and demote Greg Brockman. He has also blocked the executives' return to the company, putting OpenAI at risk of dissolution. Poe is among the many generative AI-based products that would benefit from OpenAI's demise.
Not an Obvious AI Doomer
D'Angelo's comments regarding AI and AI safety do not place him squarely in the AI doomer camp. In May 2023, he signed the Center for AI Safety's declaration about working to prevent an AI-precipitated human extinction event. OpenAI's Altman, Sutskever, and Murati were also signatories.
When asked by Semafor about AI safety in April 2023, D'Angelo commented:
Q: We’re in the midst of a big discussion on AI safety. What are you worried about, if anything?
A: I’m representing Quora here, not OpenAI. There’s this legitimate fear that once this technology gets powerful enough, it could be more intelligent than humans. Then what does that mean for society and how can we make sure that humans still stay in control of society?
We want to ensure that whatever is in control values humans or shares human values. It’s a solvable problem and it’s important that this goes well.
…
Q: You also have some alarmist stuff out there. Is it helpful?
A: I don’t think it’s helpful. There’s a thought that we’re basically doomed and because of that, there’s some advocacy for very drastic actions to stop it, like airstrikes on Chinese or Russian data centers if they don’t shut down. There are some people who hear these things and might commit some kind of violence. I don’t think people should be advocating these things.
When people theorize about the future of AI, they often make some wrong assumptions. There’s often this idea that there’s going to be one AI system that’s incredibly powerful and no one else will have any comparable technology. Then you can tell the story that this thing can just run wild and take over the world.
At the point at which there is this super powerful AI, there will also be super powerful AI that is available to everyone else to defend themselves or rein it in or enforce the laws.
He told The Information in a June 2023 interview:
A few weeks before he and 350 industry peers released a bizarre, one-line statement warning that AI could herald a nuclear-level extinction event, the 38-year-old co-founder of Quora told me he actually sees more upside in AI than downside.
“Right now there’s a little bit of a climate of, like, if you say something that’s too positive, people need to cut you down and remind you of all the bad things that happened in the past,” D’Angelo said during a one-hour Zoom call from his San Francisco home, where the floppy-haired CEO was on paternity leave. “But when you look at what’s been enabled by this technology so far, it’s very positive.”
These do not sound like the words of an AI safety zealot (a.k.a. a doomer). This is not to say D'Angelo is unconcerned about AI risks; he clearly is. His position and level of fearfulness may also have changed for any number of reasons, including information he was privy to as an OpenAI board member. However, none of this eliminates the appearance of a conflict of interest.
Conflicted?
I don’t know Adam D’Angelo, and attempts to reach him today have been unsuccessful. That is understandable given the situation. Still, this leaves me wondering. Reid Hoffman voluntarily stepped down from OpenAI’s board earlier this year because of concerns over potential conflicts of interest regarding Inflection AI and other investments. Why not D’Angelo?
Does D'Angelo's role in the OpenAI saga undermine his credibility as a steward of the foundation's mission, given the mission's direct impact on his own business interests?
Let me know what you think in the comments.
Synthedia is a community-supported publication. Please check out our first sponsor, Dabble Lab, an independent research and software development agency that is entirely automated with a mix of in-house AI tools and GitHub Copilot. The founder, Steve Tingiris, is also the author of Exploring GPT-3, the first developer training manual for building applications on GPT-3.