Atlassian Intelligence is an OpenAI-Powered "Teammate"
Yet another copilot for business software
Atlassian, best known for its popular Jira, Confluence, and Trello software solutions, today announced an “AI-powered virtual teammate” called Atlassian Intelligence. The solution will be driven by a combination of OpenAI models and AI models developed by Atlassian. CNBC reported that GPT-4 is the OpenAI model behind the new service.
The company announced several features, from summarizing meeting notes and defining test plans to automating help desk requests and writing responses to customer support requests. Atlassian noted that the new OpenAI-powered service desk agents would integrate with Microsoft Teams and Slack.
Answer Anything
That said, the most popular feature may turn out to be search. Users will be able to ask questions and receive answers drawn from all of their data already stored in Atlassian software.
Atlassian Intelligence has a unique understanding of teamwork. That understanding is grounded in a teamwork graph, which is modeled on the two most common types of teamwork:
Service-based work: teams accepting incoming requests for help and using custom workflows and data to drive resolutions for employees and customers.
Project-based work: teams managing projects from concept to delivery with roadmaps, plans, tasks, goals, and documentation.
Using large language models, Atlassian Intelligence deduces how teams at a given company work together and constructs a custom teamwork graph showing the types of work being done and the relationship between them.
“We have a graph of work basically,” Scott Farquhar, one of Atlassian’s two founders and CEOs, told CNBC in an interview earlier this week. “I reckon we have one of the best ones in the world out there. It spans people doing stuff from design to development to test to deployment to project management to collaborating on stuff, too.”
There is a waitlist for access to the solution; you can sign up for early access on Atlassian's site.
Who Will Pay for Generative AI Features?
Using generative AI models adds cost to a software solution. We posed the question of who will pay for generative AI inference costs at the time of MailChimp’s generative AI announcement. CNBC reported that Anu Bharadwaj, president of Atlassian, commented that the company was unsure of how the economics will play out.
Bharadwaj said Atlassian hasn’t figured out how much to charge for Atlassian Intelligence. Nor does she know how much money Atlassian will wind up paying OpenAI for GPT-4, because it isn’t clear how heavily Atlassian customers will use the new features.
Some interesting scenarios may emerge around this issue. If a software company adds substantial value with its generative AI features, then customer use will be high, which, in turn, will raise the cost of provisioning those features. That is almost certain to lead to a price increase for whatever service tier includes the generative AI features.
If a software provider does a poor job of feature implementation, then use will be low, and costs might be absorbed easily within the existing price structure. Today, it seems compulsory for nearly every software provider to announce a generative AI feature simply to maintain the appearance of competitive parity. It may be enough to “check the box,” as many customers will never try a rival's software and learn how much better its generative AI features are.
That said, I suspect the likely outcome is that software providers will refrain from offering generative AI features in their entry-level packages. They can then use generative AI as an upgrade incentive that raises average revenue per user and likely covers the AI inference costs.
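The upgrade-tier logic above amounts to simple back-of-envelope arithmetic: the per-user price uplift has to exceed the per-user inference spend. A minimal sketch, using purely hypothetical figures (the prices, query volumes, and per-query costs below are illustrative assumptions, not Atlassian or OpenAI numbers):

```python
# Back-of-envelope sketch of the upgrade-tier economics described above.
# Every figure here is a hypothetical assumption for illustration only.

def monthly_margin_per_user(upgrade_price_delta: float,
                            queries_per_user: int,
                            cost_per_query: float) -> float:
    """Extra revenue from the AI tier minus the inference cost it incurs."""
    return upgrade_price_delta - queries_per_user * cost_per_query

# Hypothetical: a $10/month tier uplift, 200 AI queries per user per month,
# at $0.02 of inference cost per query.
margin = monthly_margin_per_user(10.00, 200, 0.02)
print(f"margin per upgraded user: ${margin:.2f}/month")
```

If heavy users drive the query count high enough, the margin goes negative, which is exactly the pricing uncertainty Bharadwaj describes.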
Regardless, Atlassian is the latest software company to follow the pattern. Step 1 in the generative AI era is to provide a copilot (or a teammate in this instance).