
The rise of agentic AI part 1: Understanding MCP, A2A, and the future of automation

By now, everyone is aware of generative AI fueled by large language models (LLMs) and generative pre-trained transformers (GPTs). The next level of innovation is agentic AI and the autonomous AI agents that drive it. Using protocols such as Model Context Protocol (MCP) to connect to tools and data and Agent2Agent (A2A) to communicate with one another, these systems are revolutionizing how enterprises automate tasks and orchestrate complex workflows.

Powered by LLMs, vector databases, retrieval augmented generation (RAG) pipelines, and additional tools, these AI agents are proliferating rapidly, giving rise to multi-agent systems, cross-agent protocols, and context-sharing standards. But these autonomous agents also introduce new challenges in monitoring, debugging, and security.

We’ll examine the fundamentals of AI agents and models in detail, along with the emerging standards that help them communicate, such as Agent2Agent (A2A) and Model Context Protocol (MCP).

Key takeaways:

  • Autonomous AI agents are the backbone of agentic AI. These agents work together to automate tasks and adapt to changing conditions.
  • AI agents combine LLM reasoning with orchestration logic that maintains the agent’s state, session memory, context, and reasoning strategies.
  • Agents depend on protocols to work effectively: A2A manages communication between agents, while MCP connects agents to the tools and data they need.

What is agentic AI?

Agentic AI is an artificial intelligence system that can take initiative and perform sequences of actions to complete tasks by reasoning, learning, and adapting to changing circumstances.

Dynatrace Chief Technologist Alois Reitbauer described agentic AI this way:


“It’s really delegating a task to software the way you would delegate it to a human. Say if you wanted to do travel booking, give it some complexity and freedom and some decision points it can make. Like, I have to go to Vegas, I need a hotel, I need a couple of good restaurants to go to, we’re going to be 50 people, fix it with my schedule.”
– Alois Reitbauer in The New Stack

Agentic AI systems rely on AI agents to perform the tasks that lead to the desired outcome.

What are AI agents?

An AI agent is a self-directed autonomous application that harnesses large language model (LLM) reasoning, tool usage, and context-awareness from numerous data sources to carry out tasks.

Agents can think and act independently, without outside intervention: they reason through a problem (for example, with chain-of-thought prompting), plan and execute actions (the Reason+Act, or ReAct, pattern), and refine their approach as needed. Businesses are looking to adopt these autonomous agents for applications such as customer service automation, supply-chain optimization, and content generation.

How do AI agents operate?

AI agents operate similarly to a Michelin-starred chef in a busy kitchen: They continuously gather information, plan, execute, and adjust to reach their desired end goal.

In the chef analogy, the cook surveys orders and available ingredients, decides on a suitable recipe, and then refines the approach based on feedback or resource constraints.

Agents do the same thing in a computational context. Specifically, they observe the world (for example, a user request or a set of data), perform internal reasoning about the best course of action, then carry out the steps needed to fulfill the request. This cycle allows them to respond adaptively to changing conditions, much as a chef would substitute ingredients or modify a dish mid-preparation.

Underpinning this iterative loop is the orchestration layer, which maintains the agent’s state, session memory, and reasoning strategies (such as ReAct, Chain-of-Thought, or Tree-of-Thoughts). Large language models (such as OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and Amazon’s Nova) provide the core reasoning capability for the agent. The model “thinks” about the user’s query, but the agent gains its power by incorporating additional frameworks or tools that can fetch external information or execute actions in the real world. One way to fetch and provide tools and information is through a unified protocol called Model Context Protocol (MCP).

Additionally, the orchestration layer ensures that multiple rounds of reasoning, tool usage, and tool outputs are all tracked and synthesized before the agent returns a final response to the user. Agents follow these steps in a structured way, so they can produce more accurate, context-rich answers and easily manage complex tasks.
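
To make this loop more concrete, the following sketch shows what a minimal ReAct-style orchestration loop could look like in Python. The model call and the single tool are canned placeholders rather than any specific framework’s API; a real agent would plug in an actual LLM client and real integrations.

    # Minimal ReAct-style orchestration loop (illustrative sketch only).
    # call_llm and the tool below are stand-ins for a real model client and
    # real integrations; they return canned values so the loop runs end to end.
    from typing import Callable

    _scripted_replies = iter([
        {"thought": "I need data first", "action": "search_docs", "action_input": "Vegas hotels"},
        {"final_answer": "Here is a shortlist of hotels near the venue."},
    ])

    def call_llm(messages: list[dict]) -> dict:
        # Stand-in for a real LLM call that returns a parsed ReAct step.
        return next(_scripted_replies)

    TOOLS: dict[str, Callable[[str], str]] = {
        "search_docs": lambda query: f"(search results for {query!r})",  # stub tool
    }

    def run_agent(user_request: str, max_steps: int = 5) -> str:
        # Session memory: every observation is appended so each new reasoning
        # step sees the full history.
        messages = [{"role": "user", "content": user_request}]
        for _ in range(max_steps):
            step = call_llm(messages)                   # reason
            if "final_answer" in step:                  # synthesize and stop
                return step["final_answer"]
            observation = TOOLS[step["action"]](step["action_input"])  # act
            messages.append({"role": "assistant", "content": str(step)})
            messages.append({"role": "tool", "content": observation})  # observe
        return "Step limit reached without a final answer."

    print(run_agent("Book a hotel in Vegas for 50 people"))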

Figure 1. Autonomous agent workflows and task execution: multiple agents interacting with an agentic application.

What is the difference between models and agents?

A model (like a large language model) simply generates outputs based on its training data and the given prompt, typically without any built-in mechanism for session memory, external actions, or complex decision loops and validations.

An agent, on the other hand, includes the model but goes further. It maintains a stateful process (managing multi-turn conversations and thought processes), uses external tools to gather fresh data or perform actions, and follows a defined orchestration logic (such as ReAct and chain-of-thought). Thus, while a model is a core reasoning component, an agent adds the surrounding structure and capabilities needed for autonomous, goal-directed behavior.
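
The difference is easy to see in a few lines of code. In this simplified sketch, complete stands in for any LLM completion API, while the Agent class adds the session memory, tool access, and orchestration logic described above; both are illustrative placeholders, not a specific product’s interface.

    # Illustrative contrast between a bare model call and an agent.
    # `complete` is a hypothetical stand-in for any LLM completion API.
    def complete(prompt: str) -> str:
        return f"(model output for: {prompt})"        # canned response

    # A model: stateless, prompt in, text out, no memory or tools.
    answer = complete("Summarize our Q3 incident report.")

    # An agent: wraps the model with state, tools, and orchestration logic.
    class Agent:
        def __init__(self, tools: dict):
            self.history: list[str] = []              # session memory across turns
            self.tools = tools                        # external actions it may take

        def run(self, request: str) -> str:
            self.history.append(f"user: {request}")
            # Orchestration logic could loop here (ReAct, chain-of-thought, ...);
            # this sketch does a single tool call followed by a model call.
            context = self.tools["fetch_report"]("Q3")
            reply = complete(f"{context}\n\nTask: {request}")
            self.history.append(f"agent: {reply}")
            return reply

    agent = Agent(tools={"fetch_report": lambda q: f"(report data for {q})"})
    print(agent.run("Summarize our Q3 incident report."))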

What is Agent2Agent (A2A)? How multiple agents communicate with each other

As enterprises gradually adopt multiple specialized agents, interoperability between these services becomes crucial for creating reliable experiences. To achieve this, Google’s Agent2Agent (A2A) is an open protocol that enables agents to securely exchange information, coordinate actions, and integrate capabilities, regardless of vendor or framework. By specifying tasks, capabilities, and artifacts in a standardized JSON-based lifecycle model, A2A fosters multi-agent collaboration across otherwise siloed systems.
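
For illustration only, delegating a task to a remote agent in an A2A-style exchange might look like the following JSON-RPC request. The method name, field names, and endpoint URL are simplified placeholders based on the general shape of the protocol; the exact message schema is defined by the A2A specification.

    # Illustrative sketch of delegating a task to a remote agent, loosely
    # modeled on A2A's JSON-RPC task lifecycle. Method and field names are
    # simplified placeholders; the real schema is defined by the A2A spec.
    import uuid
    import requests  # assumes the `requests` package is installed

    task_request = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/send",                       # placeholder method name
        "params": {
            "id": str(uuid.uuid4()),                  # task ID tracked across updates
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": "Find hotels in Vegas for 50 people"}],
            },
        },
    }

    # The remote agent advertises its capabilities in an agent card; here we
    # simply assume its endpoint URL.
    response = requests.post("https://travel-agent.example.com/a2a", json=task_request)
    print(response.json())  # expected: task status plus any produced artifacts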

The A2A protocol enables agents to share updates and delegate tasks without custom, point-to-point integrations. However, direct communication between agents only solves half the problem: These agents also need relevant, up-to-date data and context to drive decisions, and they need the right tools to execute actions.

Without a unified method for accessing diverse data sources, even the most capable multi-agent ecosystem remains limited in scope. The open-source project Model Context Protocol (MCP) fills this gap.

Figure 2. Agent-to-agent communication: two agents built on different frameworks communicating over the A2A protocol.

What is Model Context Protocol? How MCP empowers agents

The Model Context Protocol (MCP) is an open standard that connects AI agents to relevant data sources, such as repositories, tools, or external APIs. Instead of building a custom integration for each data silo, MCP provides a universal interface, much like USB-C, that connects multiple relevant sources and feeds the right context to models and agents. This universality simplifies how agents access relevant context, leading to better task outcomes and more consistent performance across complex environments. For complex tasks like the ones highlighted above, the Dynatrace MCP server on GitHub brings real-time, end-to-end observability data into your daily workflow over MCP.
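
As a minimal sketch, here’s how an agent could use the official MCP Python SDK (the mcp package) to connect to an MCP server over stdio, discover the tools it exposes, and call one. The server command and tool name are placeholders, not the actual interface of the Dynatrace MCP server or any other specific server.

    # Minimal MCP client sketch using the official Python SDK (`pip install mcp`).
    # The server command and tool name are placeholders for whichever MCP
    # server you run (for example, the Dynatrace MCP server from GitHub).
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        server = StdioServerParameters(command="npx", args=["-y", "some-mcp-server"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()            # discover exposed tools
                print([tool.name for tool in tools.tools])
                # Call a tool by name; argument names depend on the server.
                result = await session.call_tool("example_tool", {"query": "status"})
                print(result)

    asyncio.run(main())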

Figure 3. Dynatrace MCP server reference architecture.

What’s next: Monitoring A2A and MCP for better agentic AI

As these technologies evolve, we can expect deeper integrations between agent orchestration protocols (A2A and MCP) and open observability frameworks, delivering end-to-end visibility from data ingestion to cross-agent collaboration. Likewise, as standards converge, organizations will rapidly compose advanced AI solutions while retaining full transparency and control, paving the way for even greater scalability, resilience, and confidence in autonomous agents.
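
As a hedged glimpse of what that visibility can look like, an agent’s tool calls can already be wrapped in OpenTelemetry spans so each reasoning step, A2A delegation, or MCP call shows up in a trace. The span and attribute names below are illustrative choices, not an established semantic convention.

    # Illustrative sketch: tracing an agent's tool call with OpenTelemetry
    # (`pip install opentelemetry-sdk`). Span and attribute names are
    # illustrative choices, not an established semantic convention.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)
    tracer = trace.get_tracer("agent.demo")

    def call_tool(name: str, query: str) -> str:
        # Wrap each tool invocation in a span so every agent step is visible
        # in the resulting trace.
        with tracer.start_as_current_span("agent.tool_call") as span:
            span.set_attribute("agent.tool.name", name)
            span.set_attribute("agent.tool.query", query)
            return f"(result of {name} for {query!r})"    # stand-in for real work

    print(call_tool("search_docs", "open incidents"))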

Part 2 of this agentic AI series will explore how monitoring A2A and MCP communications results in better, more effective agentic AI.

Check out Dynatrace MCP and the Dynatrace AI Observability solution for AI agent monitoring and MCP monitoring at scale.
