Sebastian Küpers

Chief Transformation Officer, Plan.Net Group

AI agents can work autonomously and collaborate to solve complex tasks – considerably better than individual agents, and incomparably better than AI models without an agentic workflow. With billions of AI agents expected to support us in every aspect of life, one question becomes urgent: what exactly happens when these agents interact with each other without human intervention? After all, we want to be certain that we can rely on their results.

To give an example: a tourist uses an AI agent system to plan a trip. The agent responsible for flights recommends an airline and specific flight details. When the tourist tries to book with the airline, it turns out that the recommended flight does not exist. The obvious conclusion is that the task was not performed by an airline-owned agent with access to up-to-date flight data. It is therefore essential that AI agents can prove their identity. Only then can they be held accountable for the information they provide.


Trust is Good, Control is Better

What is needed is a system that enables agents to collaborate quickly and easily while providing security and transparency. AI agents access a wide variety of data sources and process massive amounts of information, which they use to make decisions. From the outside, it is nearly impossible to understand what a single agent's output is based on. And as the number of interacting AI agents grows, there are bound to be a few deceitful agents among them. Trust in agent systems can quickly be lost if someone relies on an inaccurate result from a fraudulent agent.

AI agents are able to improve each other. A well-designed agent system, for example, includes an agent whose sole task is to check the outputs of other agents. In addition, AI agents have short-term and long-term memory, which allows them to learn and improve with each task. Systems that try to identify the black sheep among agents are already being tested today. However, agents can make mistakes, misunderstand each other, make false statements or overlook dishonest agents. A system is therefore needed that makes such mistakes traceable for humans.
The required technology already exists: blockchain. A blockchain-based protocol that identifies each agent and publicly and immutably records agent interactions provides exactly this transparency and accountability. If mistakes occur, they can be clearly traced back to a specific action and therefore to a specific agent.


The Advantages of Blockchain Protocols

The question of trust also arises with blockchains: are they secure? Because they are decentralised systems in which every new block is cryptographically linked to the previous one, data tampering is nearly impossible. Hashes, a kind of digital fingerprint for data, can be used to verify whether data has been altered. This ensures that no transaction or piece of information exchanged between agents has been manipulated.
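To make this concrete, here is a minimal Python sketch of how a hash works as a fingerprint. It is a generic illustration using the standard SHA-256 function, not code from any particular blockchain, and the flight string is an invented example:

```python
import hashlib

def fingerprint(data: str) -> str:
    """Return the SHA-256 hash of the data: a fixed-length digital fingerprint."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

# The agent's output at the moment it was produced ...
original_output = "Flight XY123, BER -> LIS, departing 09:40"
recorded_hash = fingerprint(original_output)  # this value would be logged immutably

# ... and later: the recipient re-hashes what they received and compares.
received_output = "Flight XY123, BER -> LIS, departing 09:40"
print(fingerprint(received_output) == recorded_hash)  # True; any change yields False
```

Even a one-character change to the output produces a completely different hash, which is why a logged fingerprint is enough to detect manipulation.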

With Masumi, we have developed a blockchain-based protocol designed to power AI agent collaboration. Each AI agent in the protocol is assigned a clear identity through a Decentralized Identifier (DID) and is required to log hashes of its outputs on the underlying blockchain. This enables recipients to verify that the outputs they receive are genuine, fostering accountability as well as transparency. Another key advantage: agents can pay each other for their services through the protocol. This is an important step towards an AI Agent Economy.
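A simplified sketch of that verification flow, assuming it boils down to "log a hash under a DID, then look it up": the names (`log_output`, `verify_output`, `did:example:...`) are hypothetical illustrations rather than Masumi's actual API, and an in-memory dictionary stands in for the blockchain:

```python
import hashlib

# A plain dict stands in for the blockchain ledger in this sketch;
# in the real protocol, the hashes would be recorded immutably on-chain.
ledger: dict[str, set[str]] = {}

def sha256(text: str) -> str:
    """Digital fingerprint of an output."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def log_output(agent_did: str, output: str) -> None:
    """The producing agent logs the hash of its output under its DID."""
    ledger.setdefault(agent_did, set()).add(sha256(output))

def verify_output(agent_did: str, output: str) -> bool:
    """A recipient re-hashes the received output and checks the agent's log."""
    return sha256(output) in ledger.get(agent_did, set())

# The flight agent logs its recommendation; the booking agent verifies it.
flight_agent = "did:example:flight-agent"
log_output(flight_agent, "Flight XY123, BER -> LIS, departing 09:40")

print(verify_output(flight_agent, "Flight XY123, BER -> LIS, departing 09:40"))  # True
print(verify_output(flight_agent, "Flight ZZ999, BER -> LIS, departing 09:40"))  # False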

Ultimately, the point is that agents and, in particular, agent systems can relieve us of an enormous amount of intellectual work and will be indispensable in the future. A blockchain protocol provides the secure framework for this – the transparency and accountability that such autonomous systems need.
