reactive agents via k-punk
An agent is an autonomous entity with an ontological commitment and agenda of its own. The term originated in philosophy. Each agent possesses the ability to act autonomously; this is an important distinction because a simple act of obedience to a command does not qualify an entity as an agent. Nevertheless, in business and in law an agent often acts on a principal's behalf and has a legal duty to act in that person's best interest. An agent may interact or negotiate with its environment and/or with other agents. It may make decisions, such as whether to trust and whether to cooperate with others. —Agent, Wikipedia
In computer science, a software agent is a piece of autonomous or semi-autonomous, proactive and reactive computer software. Many individual communicative software agents may form a multi-agent system. —Software agent, Wikipedia
In computer science, a multi-agent system (MAS) is a system composed of several agents, capable of mutual interaction. The interaction can be in the form of message passing or producing changes in their common environment. The agents can be autonomous entities, such as software agents or robots. MAS can include human agents as well. Human organizations and society in general can be considered an example of a multi-agent system. Multi-agent systems can manifest self-organization and complex behaviors even when the individual strategies of all their agents are simple. Topics of research in MAS include: (i) beliefs, desires, and intentions (BDI), (ii) cooperation and coordination, (iii) communication, (iv) distributed problem solving, (v) multi-agent learning. —Multi-agent system, Wikipedia
Introduction to Multi-Agent Systems
A multi-agent system can be thought of as a group of interacting agents working together to achieve a set of goals. To maximize the efficiency of the system, each agent must be able to reason about other agents' actions in addition to its own. A dynamic and unpredictable environment creates a need for an agent to employ flexible strategies. The more flexible the strategies, however, the more difficult it becomes to predict what the other agents are going to do. For this reason, coordination mechanisms have been developed to help the agents interact when performing complex actions requiring teamwork. These mechanisms must ensure that the plans of individual agents do not conflict, while guiding the agents in pursuit of the goals of the system.
Agents themselves have traditionally been categorized into one of the following types:
- Deliberative
The key component of a deliberative agent is a central reasoning system that constitutes the intelligence of the agent. Deliberative agents generate plans to accomplish their goals. A world model may be used in a deliberative agent, increasing the agent's ability to generate a plan that is successful in achieving its goals even in unforeseen situations. This ability to adapt is desirable in a dynamic environment.
The main problem with a purely deliberative agent when dealing with real-time systems is reaction time. For simple, well-known situations, reasoning may not be required at all. In some real-time domains, such as robotic soccer, minimizing the latency between changes in world state and reactions is important.
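As a minimal sketch of the deliberative approach, the agent below generates a plan by searching a world model for a sequence of actions that reaches its goal. All names, states, and actions here are hypothetical illustrations, not drawn from any particular system; breadth-first search stands in for whatever reasoning system a real deliberative agent would use.

```python
from collections import deque

class DeliberativeAgent:
    """Minimal deliberative agent: generates plans by searching a world model."""

    def __init__(self, world_model):
        # world_model maps a state to {action: resulting_state}
        self.world_model = world_model

    def plan(self, start, goal):
        """Breadth-first search over the world model; returns a list of actions."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for action, next_state in self.world_model.get(state, {}).items():
                if next_state not in visited:
                    visited.add(next_state)
                    frontier.append((next_state, actions + [action]))
        return None  # no plan reaches the goal

# Usage: a toy world with three rooms.
model = {
    "hall":    {"go_kitchen": "kitchen", "go_lab": "lab"},
    "kitchen": {"go_hall": "hall"},
    "lab":     {"go_hall": "hall"},
}
agent = DeliberativeAgent(model)
print(agent.plan("kitchen", "lab"))  # ['go_hall', 'go_lab']
```

Because the plan is computed from the model rather than looked up, the agent can handle situations it was never explicitly programmed for, at the cost of search time.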
- Reactive
Reactive agents maintain no internal model of how to predict future states of the world. They choose actions by using the current world state as an index into a table of actions, where the indexing function's purpose is to map known situations to appropriate actions. These types of agents are sufficient for limited environments where every possible situation can be mapped to an action or set of actions.
The purely reactive agent's major drawback is its lack of adaptability. This type of agent cannot generate an appropriate plan if the current world state was not considered a priori. In domains that cannot be completely mapped, using reactive agents can be too restrictive.
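The state-to-action table described above can be sketched in a few lines. The states and actions are hypothetical (a toy robotic-soccer reflex set), and the dictionary lookup stands in for the indexing function:

```python
class ReactiveAgent:
    """Purely reactive agent: the current world state indexes a fixed action table."""

    def __init__(self, action_table, default=None):
        # action_table must map every anticipated state to an action
        self.action_table = action_table
        self.default = default

    def act(self, state):
        # No planning, no world model: a single table lookup.
        return self.action_table.get(state, self.default)

# Usage: a toy robotic-soccer reflex table.
table = {
    "ball_near": "kick",
    "ball_far":  "run_to_ball",
    "ball_lost": "scan_field",
}
agent = ReactiveAgent(table, default="wait")
print(agent.act("ball_near"))    # kick
print(agent.act("ball_buried"))  # wait -- an unanticipated state; the agent cannot adapt
```

The lookup is effectively instantaneous, but the last line shows the drawback: any state not considered a priori falls through to a default rather than to an appropriate action.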
- Hybrid
Hybrid agents, when designed correctly, use both approaches to get the best properties of each. Specifically, hybrid agents aim to have the quick response time of reactive agents for well-known situations, yet also have the ability to generate new plans for unforeseen situations.
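One way to sketch this combination, under the same hypothetical names and toy states as before, is a fast reactive lookup with deliberation as the fallback for states the table does not cover:

```python
from collections import deque

class HybridAgent:
    """Hybrid agent: reactive table lookup first, deliberative planning as fallback."""

    def __init__(self, action_table, world_model):
        self.action_table = action_table  # fast path: known state -> action
        self.world_model = world_model    # slow path: state -> {action: next_state}

    def act(self, state, goal):
        # Fast, reactive path for well-known situations.
        if state in self.action_table:
            return self.action_table[state]
        # Slow, deliberative path: search the world model for a plan.
        plan = self._plan(state, goal)
        return plan[0] if plan else None

    def _plan(self, start, goal):
        """Breadth-first search; returns a list of actions or None."""
        frontier, visited = deque([(start, [])]), {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for action, nxt in self.world_model.get(state, {}).items():
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None

# Usage: reflexes for the common case, planning for everything else.
agent = HybridAgent(
    action_table={"ball_near": "kick"},
    world_model={"hall": {"go_lab": "lab"}, "lab": {}},
)
print(agent.act("ball_near", goal="lab"))  # kick   (reactive path)
print(agent.act("hall", goal="lab"))       # go_lab (deliberative path)
```

The design question such an agent raises is where to draw the line: every situation promoted into the table trades memory and a priori analysis for response time.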
We propose to have a hierarchy of agents spanning a continuum of deliberative and reactive components. At the root of the hierarchy are agents that are mostly deliberative, while at the leaf nodes are agents that are completely reactive.
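The hierarchy proposed above can be sketched as a tree of nodes in which a reactive leaf handles the situations it knows and escalates anything else toward the more deliberative root. This is only an illustrative sketch of the continuum, with hypothetical states and a stubbed-out root; a real root node would run a full reasoning system rather than return a placeholder:

```python
class AgentNode:
    """Node in an agent hierarchy: reactive leaves escalate to deliberative ancestors."""

    def __init__(self, action_table, parent=None):
        self.action_table = action_table
        self.parent = parent  # None at the (mostly deliberative) root

    def act(self, state):
        # Handle the state reactively if this node's table covers it...
        if state in self.action_table:
            return self.action_table[state]
        # ...otherwise escalate toward the root of the hierarchy.
        if self.parent is not None:
            return self.parent.act(state)
        return "deliberate"  # root stub: full plan generation would happen here

# Usage: the leaf knows common reflexes; the root knows rarer situations.
root = AgentNode({"penalty": "form_wall"})
leaf = AgentNode({"ball_near": "kick"}, parent=root)
print(leaf.act("ball_near"))  # kick      (handled reactively at the leaf)
print(leaf.act("penalty"))    # form_wall (escalated one level up)
```

Each level up the tree trades reaction time for coverage, which is exactly the deliberative-to-reactive continuum the hierarchy is meant to span.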