Societies of Autonomous Agents
by
David Luckham
An autonomous agent (AA) is an event processing computer program that responds to incoming events from its environment by taking actions and outputting new events independently of any instructions from its users. It is a self-contained, stand-alone, executable module. Agents can vary from simple rule-based systems to large complex state-based AI systems utilizing heuristics and probabilistic techniques. Many types of agents are based upon Artificial Intelligence (AI).
Usually, an AA performs some simple, repetitive tasks for its user, allowing the user to focus on the more important aspects of their job. An AA may fill out travel vouchers for its user’s trips, or update databases that track the status of company equipment in the user’s possession. A thermostat is a primitive type of agent.
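The thermostat example can be sketched in code as a minimal rule-based AA. This is an illustrative sketch, not a prescribed design; the `Event` and `ThermostatAgent` names and the event types are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A simple event: a type name and a numeric payload."""
    type: str
    value: float

class ThermostatAgent:
    """A primitive rule-based AA: it responds to incoming temperature
    events by emitting heater commands, without user instruction."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def on_event(self, event):
        # Ignore event types this agent does not understand.
        if event.type != "temperature":
            return None
        # Rule: heat on below the setpoint, off otherwise.
        command = 1.0 if event.value < self.setpoint else 0.0
        return Event("heater", command)

agent = ThermostatAgent(setpoint=20.0)
out = agent.on_event(Event("temperature", 18.5))  # heater turns on
```

The agent is self-contained and executable, and it acts only when an input event arrives — the limited sense of "autonomy" discussed below.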
However, these days “agent” has become a buzzword for very large and complex systems. Some say that ChatGPT-4 is an AA. However, when I asked ChatGPT-4o if it was an AA, this was its response:
“Hi David! I’m not an autonomous agent. I’m an AI language model created by OpenAI, designed to assist with generating text, answering questions, and providing information. How can I help you today?”
I pushed it a little further: but surely you work like an agent, doing things and answering questions for people? And the reply:
“Yes, in a way, I do function similarly to an agent. I can perform tasks like answering questions, generating text, and providing information based on the data and context you provide. While I don’t have physical autonomy or the ability to make independent decisions, I can assist with a wide range of tasks within the scope of my programming. What would you like to know or do today?”
So, the deciding issue for ChatGPT-4o was autonomy, the ability to do things without being asked or told. That may be evidence that it really doesn’t understand what an AA is, because most AAs have pre-programmed behavior and are autonomous only in the sense that they take actions when certain input events arrive. They do not have minds of their own!
Nevertheless, there is an industry-wide trend to use agents to build mission-critical systems. A mission might involve executing a real-time marketing campaign designed by corporate management, or handling a natural disaster such as an earthquake at a governmental level. These large missions are multi-faceted, and each facet will be assigned to a specific agent designed to deal with that one facet. Such a mission will require a large number of different types of AAs, and a method of organizing them to collaborate in real time.
This article suggests a structural organization of AAs and their inter-communication called a society, its purpose being to implement large, multi-faceted missions.
Large missions usually have a hierarchical structure. At the lowest level are the incoming events that provide the information that the mission needs to take actions. Then the events at each higher level of the mission are complex events that are abstractions of sets of the events at the level below. Each complex event is specified by a mapping between lower level events and the higher level complex event.
This relationship between higher and lower level events is called vertical causality. Some events also cause other events to happen at the same level, which is called horizontal causality.
The hierarchical structure of levels of events and the mappings between them is called a mission outline. Note that the mappings provide an effective way of seeing how the complex events at higher levels are composed out of events at lower levels. Constructing a mission outline is a specialized task and requires an understanding of event abstraction and what higher level events to put in the outline.
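A mission outline of this kind can be sketched as a data structure: levels of event types, plus the mappings that say which lower-level events each complex event abstracts (vertical causality). All the event-type names here are invented for illustration:

```python
# Hypothetical mission outline: levels of event types, and mappings
# from each complex event type to the lower-level event types it
# abstracts (vertical causality).
mission_outline = {
    "levels": [
        ["sensor_reading", "customer_click"],   # level 0: incoming events
        ["regional_trend"],                     # level 1: complex events
        ["campaign_alert"],                     # level 2: complex events
    ],
    "mappings": {
        "regional_trend": {"sensor_reading", "customer_click"},
        "campaign_alert": {"regional_trend"},
    },
}

def level_of(event_type):
    """Return the level index at which an event type appears."""
    for i, level in enumerate(mission_outline["levels"]):
        if event_type in level:
            return i
    raise KeyError(event_type)

def check_outline(outline):
    """Check that each mapping abstracts events from exactly one level below."""
    for upper, lowers in outline["mappings"].items():
        for low in lowers:
            if level_of(low) != level_of(upper) - 1:
                return False
    return True
```

A check like `check_outline` makes the "effective way of seeing how complex events are composed" concrete: the mappings can be traversed downward from any high-level event to the raw events it abstracts.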
Note also that higher level complex events in a mission may have an importance that requires actions at different corporate levels, perhaps finally reaching the highest level of management. This is one motivation for specifying a mission by a mission outline.
A mission outline also indicates a set of agents that need to be built and how they should communicate so as to execute the mission. Such a set of AAs and their communication structure, as mentioned before, is called a society. A mission outline may indicate several different sets of AAs and ways to organize their society so as to implement it. But all the societies will follow a common organizational hierarchy which we describe here.
Step 1: design and build a set of AAs capable of gathering, formatting, and outputting all the lowest level information (events) specified by the mission outline. Follow the principle that more agents are better than fewer, so that each AA is dedicated to a single task.
Step 2: design and build agents that can abstract events from lower level agents so as to deliver the complex events at each of the levels in the mission outline. There may be several ways to do this; choose one.
The abstraction mappings in the mission outline will provide an implementation guide. Each higher level agent will have a list of the types of events from the level below that it takes as inputs and a list of the types of events that it outputs. Thus, a higher level agent implements a mapping between lower level events and higher level events.
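Such a higher level agent can be sketched as follows. The buffering rule here (emit one complex event once three inputs have arrived) is purely illustrative, as are all the names; what matters is the declared lists of input and output event types that implement the mapping:

```python
class AbstractionAgent:
    """A higher-level AA: declares the event types it consumes and the
    complex event type it emits, and implements the mapping between them."""
    def __init__(self, input_types, output_type, threshold=3):
        self.input_types = set(input_types)   # event types taken as input
        self.output_type = output_type        # complex event type produced
        self.threshold = threshold
        self.buffer = []

    def on_event(self, event):
        if event["type"] not in self.input_types:
            return None
        self.buffer.append(event)
        # Illustrative abstraction rule: once enough lower-level events
        # have arrived, emit one complex event summarizing them.
        if len(self.buffer) >= self.threshold:
            batch, self.buffer = self.buffer, []
            return {"type": self.output_type, "members": batch}
        return None

trend_agent = AbstractionAgent(["price_tick"], "price_trend", threshold=3)
r1 = trend_agent.on_event({"type": "price_tick", "value": 101})
r2 = trend_agent.on_event({"type": "price_tick", "value": 102})
r3 = trend_agent.on_event({"type": "price_tick", "value": 103})  # complex event
```

Only the third input produces output: the complex `price_trend` event carrying the three lower-level events it abstracts.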
Steps 1 and 2 will result in a hierarchy of agents corresponding to the mission outline. Each level may contain many agents. This is a society of AAs that implements the mission specified by the mission outline.
Step 3: AI may be needed to restructure a society in real time so that the information it is delivering at each level allows the mission outline to be achieved. This may be necessary if, for example, the mission has variabilities depending upon input events, e.g., some events may be more important than others or unexpected events may occur that need to be considered.
This step introduces several methodological issues.
- At which levels to first introduce AI, in the AAs themselves or in the mappings of events between the levels, and in the latter case, at which levels first?
- What kinds of AI?
- How to train or test the AI.
Societies of AAs as neural networks. Here we propose one possibility. We can treat a society of AAs as a neural network in which each AA is a neuron and the events communicated between them define the pathways of the network. The layers in the neural network correspond to the levels in the society, which in turn correspond to the levels of the mission outline.
We can add weights to both the AAs and their communication pathways. The weights will control when an AA produces its output events or when a communication pathway is open.
If the society is not achieving the mission outline, then we can try back propagation to steer its performance toward the mission. Since training data may not be available, this method would essentially be reinforcement learning.
One problem with this idea is deciding what weights to use and how to vary them during back propagation. At the moment we must leave this to experimentation. There are also existing examples we can use as references.
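One simple experiment along these lines, with invented names and an arbitrary update rule, is to gate each communication pathway (or each AA's output) on a weight and nudge the weight with a reward signal, reinforcement-learning style:

```python
class WeightedLink:
    """A communication pathway (or an AA's output gate) carrying a weight.
    The weight controls whether events pass through; a reward signal
    nudges it up or down, since labeled training data may not exist."""
    def __init__(self, weight=0.5, threshold=0.5):
        self.weight = weight
        self.threshold = threshold

    def passes(self):
        # The pathway is open only when the weight reaches the threshold.
        return self.weight >= self.threshold

    def reinforce(self, reward, lr=0.1):
        # Nudge the weight up on positive reward, down on negative,
        # clipped to [0, 1]. The update rule is purely illustrative.
        self.weight = min(1.0, max(0.0, self.weight + lr * reward))

link = WeightedLink(weight=0.45)
before = link.passes()   # closed at first
link.reinforce(+1.0)     # positive reward opens the pathway
after = link.passes()
```

This is only a toy: a real society would propagate rewards across levels, which is where the back-propagation analogy comes in.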
Follow-on discussion. There are several related topics that affect the organization of AA societies and the building of AAs. We will discuss these in another article.
- Supervised learning to train societies of agents.
- Modular approach to big problems by componentization into small tasks. The idea here is to keep agents small and specialized to single tasks and let the organization of the society deal with the overall mission.
- Syntactic support for the design of AAs and their communication structure. Agents should have separate declarative and implementation parts. The declarative part should specify input and output event types. A single agent type may have several different implementations.
- Role of AI in training societies of agents to achieve a given complex mission.
- Design of the hierarchical organization of agent societies. This deals with how to define hierarchies of agents.
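The declarative/implementation split mentioned above can be sketched in code, assuming hypothetical event types: the declarative part fixes the agent type's input and output event types, while subclasses supply different implementations:

```python
from abc import ABC, abstractmethod

class FilterAgentDecl(ABC):
    """Declarative part: a filter-agent type that consumes 'reading'
    events and produces 'alert' events. (Event types are invented.)"""
    input_types = {"reading"}
    output_type = "alert"

    @abstractmethod
    def on_event(self, event):
        """Implementation part, supplied by each concrete agent."""

class ThresholdFilter(FilterAgentDecl):
    """One implementation: alert when a reading exceeds a limit."""
    def __init__(self, limit):
        self.limit = limit

    def on_event(self, event):
        if event["type"] in self.input_types and event["value"] > self.limit:
            return {"type": self.output_type, "value": event["value"]}
        return None

f = ThresholdFilter(limit=10.0)
alert = f.on_event({"type": "reading", "value": 12.5})
```

Another implementation of the same declared type (say, one that averages readings before alerting) could be swapped in without changing any agent that communicates with it, since the event types are fixed by the declaration.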