Rethinking the user experience in the age of multi-agent AI

Babak Hodjat
Chief Technology Officer AI, Cognizant
Benjamin Wiener
Global Head of Cognizant Moment, Cognizant
  • Multi-agent artificial intelligence delivers deeper results at the cost of speed.
  • This perceived slowness can be managed by user experience.
  • Users should be able to interact naturally through text, speech, documents or graphical user interface and seamlessly shift between modalities.

We’re entering a new era of artificial intelligence (AI), one driven by multi-agent AI systems – multiple, interacting agents that work together toward a shared end goal, enabling more intelligent, adaptive and collaborative digital experiences.

These systems are poised to revolutionize how organizations handle complex tasks; multiple specialized agents will join forces to solve problems with unprecedented speed, agility and intelligence.

However, unlocking that potential requires more than technical excellence. While they offer significant performance benefits, multi-agent AI systems sometimes feel slower to users, especially compared to the instant responses we’ve come to expect from traditional software.

This isn’t due to inefficiency but because multi-agent AI systems engage in deeper, more contextual reasoning behind the scenes. The agents collaborate, exchange insights and iteratively refine their analysis before responding.

To unlock their full potential, organizations must design systems that minimize the perceived delay. Creating experiences that feel fast and intuitive is key to building user trust and realizing the true business value of next-generation AI.

Expanding how we interact

Multi-agent AI systems, powered by multi-modal large language models (LLMs), are transforming how users interact with enterprise computer systems. No longer limited to text input, these models can now process voice, images, documents, structured data and more.

Specialized agents can be embedded across business functions – from HR to finance and IT – offering users flexibility to ask questions, request updates and automate workflows in natural, conversational ways.

Like human collaboration, the interaction model must be trustworthy and transparent. Sometimes, a human in the loop is essential to approve decisions; in other cases, agents can act autonomously under minimal supervision. Each interaction style introduces unique user experience (UX) design challenges.

In addition to delays, these challenges include a possible lack of transparency – when users perceive AI as a “black box,” distrust follows. Creating a seamless experience is also difficult when users are engaging through chat, voice, uploads and more. There may also be ambiguity surrounding various agents’ roles.

That’s why design must go beyond basic usability. Multi-agent AI systems require experiences that clearly communicate confidence, transparency and control, especially when balancing human input with machine autonomy.

Users need to understand what the system is doing, why it’s doing it and how much influence they have at each step.

The legacy of adaptive interfaces

These challenges aren’t new. Two decades ago, we explored adaptive natural language systems in a project called CRUSE (Context Reactive User Service Environment), a mobile interface that responded to user context.

For example, if a user pointed to an image and said, “find me more,” the system would infer intent based on that visual context, not just the words.

CRUSE used a multi-agent architecture; those agents dynamically generated buttons or forms based on whatever made the interaction most intuitive. That principle still holds: systems should adapt to users, not the other way around.

Today, whether the user is an employee or a customer, an organization’s AI systems should support a mix of modalities and entry points. Users might start from a chatbot, toggle to a form, upload a file or jump directly into conversation with a specific agent. The system must infer intent from both explicit and implicit cues.

Designing human-centric multi-agent UX

Here are some key interaction modalities to consider when designing UX for multi-agent systems; a short code sketch after the list shows how they might share a single message format:

  • Text: The core interface – natural, expressive and fast.
  • Speech: Helpful in hands-free settings or customer support, though often optional.
  • Graphical user interface integration: Essential for hybrid workflows. Users should fluidly move between forms and chat.
  • Attachments: PDFs, URLs and structured inputs enrich agent understanding.
  • Audio/visual cues: Optional but valuable in emotion-aware use cases.
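
As a concrete illustration, here is a minimal sketch of how these modalities might be normalized into one message envelope before intent inference. Every type and function name below is hypothetical, not an existing API:

```typescript
// Minimal sketch (all names hypothetical): one envelope type covering the
// modalities above, so downstream agents can treat text, speech, GUI events
// and attachments uniformly before inferring intent.

interface TextPayload { kind: "text"; text: string }
interface SpeechPayload { kind: "speech"; transcript: string; confidence: number }
interface GuiPayload { kind: "gui"; formId: string; fields: Record<string, string> }
interface AttachmentPayload { kind: "attachment"; mimeType: string; uri: string }
interface CuePayload { kind: "audio_visual"; cue: string }

interface UserMessage {
  id: string;
  timestamp: number;
  payload: TextPayload | SpeechPayload | GuiPayload | AttachmentPayload | CuePayload;
}

// Normalize any modality into plain text that an intent classifier can read.
function toIntentInput(msg: UserMessage): string {
  const p = msg.payload;
  switch (p.kind) {
    case "text": return p.text;
    case "speech": return p.transcript;
    case "gui":
      return Object.entries(p.fields).map(([k, v]) => `${k}: ${v}`).join("\n");
    case "attachment": return `User attached a ${p.mimeType} file at ${p.uri}`;
    case "audio_visual": return `Non-verbal cue observed: ${p.cue}`;
  }
}
```

The design point: by normalizing every modality into text early, a single intent layer can serve chat, voice and GUI entry points alike, which is what lets users shift between them seamlessly.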

Entry points and direct access

Users should be able to interact with agents across familiar platforms, including the web, Slack or Microsoft Teams, and engage directly with the relevant domain agents (such as those for HR or legal), not just through a centralized interface. Direct access trims routing overhead and cost while reducing user frustration, as the sketch below illustrates.
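
A minimal sketch of that direct-access pattern, assuming a simple registry keyed by domain (all names here are hypothetical):

```typescript
// Minimal sketch (names hypothetical): a registry that lets any entry point
// (web, Slack, Teams) resolve a request straight to the relevant domain agent,
// rather than forcing everything through one central interface.

type EntryPoint = "web" | "slack" | "teams";

interface DomainAgent {
  domain: string;
  handle(query: string): string;
}

const registry = new Map<string, DomainAgent>([
  ["hr", { domain: "hr", handle: (q) => `HR agent answering: ${q}` }],
  ["legal", { domain: "legal", handle: (q) => `Legal agent answering: ${q}` }],
]);

// Route directly to a domain agent; fall back gracefully when none is registered.
function route(entry: EntryPoint, domain: string, query: string): string {
  const agent = registry.get(domain);
  if (!agent) return `No "${domain}" agent registered – routing to the general assistant.`;
  return `[via ${entry}] ${agent.handle(query)}`;
}

console.log(route("slack", "hr", "How many vacation days do I have left?"));
```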

Managing perception of delay

Multi-agent AI systems often deliver better, more holistic decision-making by coordinating specialized agents across domains. But that complexity plays out behind the scenes, and without the right UX, it can create a sense of opacity and uncertainty for users accustomed to instant feedback.

To bridge this perception gap, we must make the invisible visible. Thoughtful UX design can transform moments of perceived inactivity into opportunities for trust, engagement and user confidence. Effective strategies include the following (a code sketch after the list shows one way to stream this progress):

  • Make work visible: Show agents in progress (“Agent Alpha is synthesizing insights…”).
  • Explain the wait: Provide reasons for delay (“Analyzing vendor contracts…”).
  • Use progressive disclosure: Offer previews, intermediate results or agent activity indicators.
  • Enable zero-click updates: Automatically deliver results without user intervention.
  • Build a human narrative: Highlight different agents’ roles visually and functionally.
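
One way to realize several of these strategies at once is to expose the agents’ work as a stream of status events the interface can render live. A minimal sketch, with hypothetical agent names and events:

```typescript
// Minimal sketch (agent names and events hypothetical): expose a long-running
// multi-agent task as a stream of status events the interface renders live,
// turning invisible coordination into visible progress.

interface AgentStatus {
  agent: string;                                  // e.g. "Agent Alpha"
  phase: "working" | "partial_result" | "done";
  detail: string;                                 // the "explain the wait" message
  preview?: string;                               // progressive disclosure, if any
}

async function* runTask(task: string): AsyncGenerator<AgentStatus> {
  yield { agent: "Planner", phase: "working", detail: `Decomposing "${task}"…` };
  yield { agent: "Agent Alpha", phase: "working", detail: "Analyzing vendor contracts…" };
  yield {
    agent: "Agent Alpha",
    phase: "partial_result",
    detail: "Draft findings ready",
    preview: "3 contracts flagged for renewal risk",
  };
  yield { agent: "Synthesizer", phase: "done", detail: "Final summary complete" };
}

// UI loop: render each event as it arrives – a zero-click update, no polling.
async function render(task: string): Promise<void> {
  for await (const status of runTask(task)) {
    const extra = status.preview ? ` (${status.preview})` : "";
    console.log(`[${status.agent}] ${status.detail}${extra}`);
  }
}

render("Review Q3 vendor contracts");
```

Because the task is an asynchronous stream rather than a single blocking call, the UI can show named agents working, explain each wait and surface intermediate previews without any effort from the user.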

Asynchronous and empathetic feedback

Even when users must wait, the experience doesn’t have to feel passive. Asynchronous and empathetic feedback ensures users stay informed and reassured, without requiring constant engagement. Here is how you can achieve this (see the sketch after the list):

  • Notify users when tasks are complete: Let users step away by sending alerts (via email, chat or app) once results are ready.
  • Provide transparent ETAs: Share estimated completion times to help manage expectations and reduce uncertainty.
  • Use personality and tone: Light humour or empathetic phrasing (“Still thinking… the internet’s being dramatic today”) makes delays feel more human.
  • Incorporate multi-sensory feedback: Reinforce activity and progress with subtle audio cues, colour-coded agent states or simple animations.
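
A minimal sketch of the first three ideas, assuming a hypothetical notifier interface standing in for real email, chat or in-app channels:

```typescript
// Minimal sketch (the Notifier interface is hypothetical): alert users on
// completion, share an ETA up front and soften long waits with a light nudge.

type Channel = "email" | "chat" | "app";

interface Notifier {
  send(channel: Channel, message: string): void;
}

const consoleNotifier: Notifier = {
  send: (channel, message) => console.log(`[${channel}] ${message}`),
};

async function notifyWhenDone(work: Promise<string>, etaSeconds: number, notifier: Notifier): Promise<void> {
  // Transparent ETA: set expectations before the wait begins.
  notifier.send("app", `Working on it – roughly ${etaSeconds}s to go. Feel free to step away.`);

  // Empathetic tone: a light nudge if the task overruns its estimate.
  const nudge = setTimeout(() => {
    notifier.send("app", "Still thinking… the internet's being dramatic today.");
  }, etaSeconds * 1000);

  const result = await work;
  clearTimeout(nudge);

  // Completion alert: the user never had to watch a spinner.
  notifier.send("email", `Your results are ready: ${result}`);
}

// Example: a stand-in for real multi-agent work that finishes in three seconds.
const work = new Promise<string>((resolve) => setTimeout(() => resolve("Contract summary"), 3000));
notifyWhenDone(work, 10, consoleNotifier);
```
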
Rethinking the benchmark

Users are conditioned to expect speed and consistency from traditional apps. But multi-agent AI systems aim for something different: insight, nuance and intelligent collaboration. They should be evaluated not by app-speed standards but against human-level workflows, where depth and adaptability matter more than immediacy.

A response that takes 10 seconds but replaces hours of human coordination is not a delay. It’s a leap forward in productivity and insight, and we must reframe how we think about responsiveness in intelligent systems.

From tools to teammates

Multi-agent AI systems are helping organizations shift from reactive software to proactive, adaptive services. However, the work isn’t done just by adding more agents. Success requires a new UX language that treats interaction as a conversation, delays as narrative opportunities and agency as shared between humans and machines.

Every update, every wait, every insight is a moment to build trust and shape collaboration. That’s how we move from assistants to collaborators and from smart tools to intelligent teams.
