And we need to know why it makes the decisions it makes.
Apple’s latest machine learning research seems to confirm what most of us intuitively know already. It shows that while people are open to using AI, they also want to hang onto their own personal agency and want the decision-making processes used by this intelligent tech to be transparent.
Those are some of the conclusions to be drawn from the latest Apple study, Mapping the Design Space of User Experience for Computer Use Agents. The study focuses primarily on the user experience of working with these agents, but in doing so it also provides illuminating insight into our current relationship with this fast-moving technology.
AI is great, but users want to be in control
For me, one of the biggest confirmations is that in order to become comfortable using AI, people want to be able to make informed choices about what the technology decides — particularly when those decisions carry real-world consequences.
On a personal basis, that means people using AI services want to be able to veto big decisions such as making payments, accessing or using contact details, changing account details, or placing orders, and to seek clarity partway through a decision-making process. Extend this way of thinking to the workplace and the resistance is likely to be just as strong.
None of this should be seen as new; these demands have been clear since before OpenAI’s ChatGPT appeared in late 2022. Given that AI can base its decisions on hallucinations, preserving a role for human agency seems more important than ever, as Apple’s research shows. Interestingly enough, Google CEO Sundar Pichai sees it the same way, arguing, “The future of AI is not about replacing humans, it’s about augmenting human capabilities.”
Masters and machines
Apple’s study suggests that while people can get used to using artificial intelligence to get things done, they don’t want to do so at the expense of agency. A KPMG study last year confirmed the extent to which people now use the tech, with 38% of respondents saying they use AI on a weekly or daily basis. That same study also showed 54% of people are wary when it comes to trusting the systems they use — and indicated that trust has declined over time.
The conclusions are inescapable. They tell us that when it comes to using AI, people want to be able to make the big decisions themselves. That extends to the use of personal data, as well as to tactical choices, such as changing course when a line of questioning or requests leads down a dead end.
What about transparency?
The report also makes clear that AI users want transparency from the systems they use. They don’t want these smart machines making decisions inside a black box; they want to be able to audit how those choices are made.
“…Although the agent may have the ability to operate without continuous user attention and automate UI actions, our findings echo recent developments in explainable AI, suggesting that agent designs could intentionally embrace ‘seamfulness.’ Such designs should prioritize user understanding and preserve users’ agency to intervene, particularly in situations involving ambiguity and uncertainty,” the Apple report said.
There’s nothing unreasonable about that, and it has been known for years that this kind of audit trail is one of the biggest challenges to AI deployment, particularly in regulated industries. It is also widely recognized that achieving such transparency remains a work in progress, and that light-touch regulation could prove too weak to support that process. The KPMG study showed 70% of people believe national and international regulation of AI is required.
With all the hype claiming AI will surpass human intelligence within the next year or two, it’s no surprise that people feel supplanted, and resistance to being subordinated to the advanced intelligence of these systems is inevitable. No matter how smart it might become, AI simply has to know its place.
Where do we go next?
I expect Apple has been looking at how its customers can best relate to AI for good reason. The company will want to wrap these tools within its own model of human-centered technology, and to remove or respond to any pain points in those interactions.
We can detect some of Apple’s approach in its stated commitments to privacy and in innovative solutions such as Private Cloud Compute. We have already become accustomed to the regular system prompts that appear when applications attempt to use controlled or personal information, and it makes sense to expect more of them as AI is extended across Apple’s platforms.
Perhaps the audience that should pay closest attention to Apple’s study is third-party Apple developers seeking to embed AI within their apps in a positive and constructive way. After all, just because it’s a chatbot doesn’t mean the user experience no longer matters.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.