Atlassian’s release of AI features might lack strategy.
Atlassian announced a major release of AI capabilities integrated into products across their development tool suite. It's underwhelming; here's where I think they went wrong.
As I shared in my recent post, when there is a rush into a particular space, it’s hard to compete with those who have deeper pockets:
Fortunately, you can count on many rushing in and potentially being less deliberate about how they invest and how they align with their strategy. You can use this ineptitude to your advantage and potentially find a niche even within an area that is getting so much investment that it's fast commoditising. By fast commoditising in an AI context, I mean that before, many AI benefits were the domain of organisations that could afford data scientists who could create or leverage data science capabilities through code.
After finishing my earlier post on AI, I saw that Atlassian had launched its range of AI capabilities. My first impressions of the wide range of integrations they have shipped make me sceptical. This looks like the feature factory approach I referred to in my last post. The way AI is applied suggests starting with the capability and working backward to find problems, rather than starting with crucial user needs and working forward to the right solutions.
There are a lot of features listed, as you can see in the release announcement: https://www.atlassian.com/blog/announcements/atlassian-intelligence-ga
The features take the most prominent applications of tools like ChatGPT and integrate them in very simple ways: for instance, generating user story content or summarising existing content. These sound like very useful things, but you can achieve them using AI tools directly, so is implementing them in the Atlassian UI the most critical problem to solve for its users?
Atlassian has decided it needs to include AI in its tools in a big way. It appears there was a decree for teams across the organisation to start prioritising AI features. With so many teams starting on their AI feature journeys, there is a limit to what can be expected initially. But if this is the case, I don’t understand why all these features were banked up in a big release. If I were being cynical, I would say it’s a crude tactic to justify a price rise.
Where they might have gotten it right
It’s not all bad. Amongst the release notes, a few ideas show promise. For instance, whilst I don’t think generative content is the most significant need, I like that their implementation is text formatting-aware. It takes a strength of the Atlassian suite and builds upon it.
The example of the browser table is cool. There’s also natural language search and natural language alternatives to JQL and SQL. I can imagine JQL and SQL have been challenging for some users. I am sceptical that these were the most significant issues faced by Atlassian users, but hey, they have access to the data, and I don’t. At least these features enhance areas that may be seen as strengths for Atlassian.
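Atlassian hasn’t published implementation details, but conceptually, natural language JQL maps a plain-English request onto JQL clauses. The real feature uses a large language model; this toy keyword-based sketch (rules and phrases are my own illustrations, not Atlassian’s) only shows the shape of the input/output contract:

```python
# Hypothetical phrase-to-JQL mapping; a real implementation would use an LLM.
RULES = [
    ("my", "assignee = currentUser()"),
    ("open", "status = Open"),
    ("bugs", "type = Bug"),
    ("this week", "created >= startOfWeek()"),
]

def to_jql(question: str) -> str:
    """Translate a simple English question into a JQL query string."""
    q = question.lower()
    clauses = [jql for phrase, jql in RULES if phrase in q]
    # Fall back to a sensible default when nothing matches.
    return " AND ".join(clauses) if clauses else "order by created DESC"

print(to_jql("show my open bugs"))
# assignee = currentUser() AND status = Open AND type = Bug
```

The value for users is not the translation mechanics but removing the need to learn JQL syntax at all.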
Where it can go wrong
As is quite common in the development tools space, there is a tendency to solve the wrong problems. For instance, many tools try to save developers typing. Saving a few keystrokes isn't unhelpful, but of all the time-consuming parts of software product development, typing is, in most cases, far from the top constraint. Making it worse, many people believe typing is their problem, so there is a market for this functionality. It just may not make your customers as successful as they could be, so prioritising it rarely makes good strategic sense. And, of course, there's always an exception that proves the rule: some tools, like GitHub's Copilot, go well beyond saving a few keystrokes, but for every tool like this, many fall short.
Judging by the features listed in the release announcement, Atlassian believes the most significant problems its users have are typing user stories and generating ideas. I haven't seen many organisations that don't have the opposite problem: more opportunities than they can execute on. Of course, generative AI made these particular features relatively easy to add. And if a feature is easy to implement, it's easy for a competitor to copy (even in the scenario where they stumbled onto an actual top job to be done). That's effort for no strategic advantage.
I am confident that assessing adjacent opportunities would reveal better options for Atlassian than most of the features they've tackled this time. Hopefully, this experience of working with AI integrations is building the teams' confidence with these tools and the appetite to explore more useful features that leverage the power of AI.
Not wanting to criticise without offering alternatives, here's how I might approach this:
Review the long-term strategic goals for the organisation (hopefully, these are in good shape, not just actions such as “integrate more AI features”!).
Review the findings of any ethnographic or jobs to be done research for each product. Rank the jobs by those that are:
least addressed by your products,
least addressed by the market,
and most valued by users.
Assess AI capabilities that may address these jobs to be done for what is realistic to implement for the team.
Assess these opportunities against other opportunities.
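The ranking in the second step could be sketched as a simple weighted score. The field names, weights, and example jobs below are my own illustrative assumptions, not a prescribed method; the point is that jobs which are highly valued but poorly served by both you and the market float to the top:

```python
from dataclasses import dataclass

@dataclass
class Job:
    """A 'job to be done' surfaced by user research (illustrative fields)."""
    name: str
    product_coverage: float  # 0-1: how well your products address it today
    market_coverage: float   # 0-1: how well competitors address it
    user_value: float        # 0-1: how much users value getting it done

def opportunity_score(job: Job) -> float:
    # Favour jobs that are highly valued but poorly served by you and the market.
    return job.user_value * (1 - job.product_coverage) * (1 - job.market_coverage)

jobs = [
    Job("generate user stories", 0.6, 0.8, 0.3),
    Job("keep the wiki navigable", 0.2, 0.3, 0.8),
]
ranked = sorted(jobs, key=opportunity_score, reverse=True)
print([j.name for j in ranked])
# ['keep the wiki navigable', 'generate user stories']
```

Even a crude model like this forces the conversation onto user value and differentiation, rather than what is easiest to ship.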
If AI is not presenting the best options, I'd suggest the teams first exhaust the other opportunities. The exception might be if, strategically, we needed to build up a team's capabilities and confidence using AI. That, of course, assumes we believe there are AI capabilities that will create compelling features solving real challenges for our user base.
Interestingly, the trend I've noticed with AI recently is its power to augment a worker's ability to learn. For instance, engineers and architects I have spoken to have been using AI tools to study new technologies and to accomplish things with them in a fraction of the time.
Given I am not in a position to conduct jobs to be done research for Atlassian, I will have to guess at some examples. For tools like Atlassian's Confluence, there have always been challenges such as keeping the wiki organised with many contributors. AI could no doubt suggest opportunities to keep the cognitive load of navigating the wiki acceptable for readers.
For Jira, the most common way I see it go off the rails is administrators applying workflows and permissions too aggressively. AI could readily identify these scenarios and help avoid the unnecessary heartache and damage they can cause.
I am under no illusion that these ideas are amazing, but as examples of real challenges with these products, they provide a comparison to consider against the AI features that were implemented instead. With access to the strategic goals and jobs to be done research, you'd have an even richer set of opportunities for teams to consider.
What is the approach to using AI at your organisation? Does it lean towards feature factories where the first applications of AI are being implemented, or is it more considered? Share the approaches in the comments.