This article provides a quick overview of Janet. It presents and explains the core facilities and conceptual ideas of Janet.CAS (Cooperative Agent System) and Janet.ADÉ (Automatic Distributed Execution). For a more detailed overview of Janet, the interested reader is referred to the document "Architecture and Design of Janet".
[Diagram: Janet's two subsystems, Janet.CAS and Janet.ADÉ]
Nodes
Agents in Janet reside on nodes. There can be several nodes in a network, and there can be several nodes on one workstation (since several Java VMs can be started on one workstation). A network of several nodes that are all connected with each other is called a cluster in the following. A node is defined by its node descriptor. The node descriptor is a good starting point for explaining the basic concepts of Janet. Figure 1 shows a simplified minimal node descriptor.
<?xml version="1.0" encoding="ISO-8859-1" ?>
<!-- node definition for minimal sample node -->
<node version="0.1" showGUI="true" exitVMOnNodeShutdown="true">
  <registry centralHostname="localhost" centralPort="1099" localPort="1099" />
  <clusterSpaces>
    <clusterSpace name="sharedSpace1" hostname="localhost" port="1099" />
  </clusterSpaces>
  <clusterEventRegistries>
    <clusterEventRegistry name="sharedEventRegistry1" hostname="localhost" port="1099" />
    <clusterEventRegistry name="sharedEventRegistry2" hostname="localhost" port="1099" />
  </clusterEventRegistries>
  <applications>
    <systemApplication>
      <exportedEvents>
        <event name="EVENT_ADD_LOG_LISTENER" handler="...AddLogListenerHandler" />
        <event name="EVENT_REMOVE_LOG_LISTENER" handler="...RemoveLogListenerHandler" />
      </exportedEvents>
      <capabilities>
        <capability name="CORE">
          <interpreters>
            <interpreter>...NodeStartedInterpreter</interpreter>
            <interpreter>...RegisteredNodeInterpreter</interpreter>
            <interpreter>...RegisterNodeInterpreter</interpreter>
            <interpreter>...DeregisterNodeInterpreter</interpreter>
            <interpreter>...DeregisterNodeFinalInterpreter</interpreter>
            <interpreter>...RegisterApplicationInterpreter</interpreter>
            <interpreter>...DeregisterApplicationInterpreter</interpreter>
            <interpreter>...NodeShutdownInterpreter</interpreter>
          </interpreters>
        </capability>
      </capabilities>
    </systemApplication>
    <application name="MyApp">
      <capabilities>
        <capability name="CORE">
          <agents>
            <agent name="MyAgent1" executeWhenStarted="myApp.StartCommand" />
          </agents>
          <interpreters>
            <interpreter>myApp.StartInterpreter</interpreter>
            <interpreter>...</interpreter>
          </interpreters>
        </capability>
      </capabilities>
    </application>
  </applications>
</node>
Figure 1: Simplified minimal node descriptor
The Central
All nodes in a network need a so-called central. The central manages the startup and shutdown of nodes. It also serves as a reference registry for the agents and applications in the cluster. Every node keeps a complete registry of the agents and applications on other nodes, so that agent and capability lookups in the cluster are immediate. Every node in a cluster needs to specify on which workstation its central is located. As shown in figure 2, either "localhost" or the workstation's network name can be used.
<node version="0.1" showGUI="true" exitVMOnNodeShutdown="true">
  <registry centralHostname="localhost" centralPort="1099" localPort="1099" />
  <!-- ... -->
</node>
Figure 2: Specifying the central's location
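If the central does not run on the local workstation, the workstation's network name takes the place of "localhost" in the registry entry. A sketch of such an entry (the hostname central.example.org is made up for illustration):

<registry centralHostname="central.example.org" centralPort="1099" localPort="1099" />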
Applications with Capabilities and Agents
The concept of an application is a central building block of a node. A very simple application is shown in figure 3; it is a little more elaborate than the minimal application in figure 1. To make use of Janet, a user has to define applications that are served by the node. The node creates a protected environment in which the application runs. Only agents of the same application (located on the local workstation or on other workstations in the cluster) can communicate directly with each other. An application is identified by its name. Applications with the same name on different nodes are considered by Janet to be the same application. However, an application's definition does not need to be identical on every node. Agents of different applications can only communicate indirectly with each other, through events.
<applications>
  <application name="CAT_MOUSE_GAME">
    <capabilities>
      <capability name="CAT">
        <agents>
          <agent name="Cat1" executeWhenStarted="...StartCommand" />
        </agents>
        <interpreters>
          <interpreter>...StartInterpreter</interpreter>
          <interpreter>...ChaseMouseInterpreter</interpreter>
        </interpreters>
      </capability>
      <capability name="MOUSE">
        <agents>
          <agent name="Mouse1" executeWhenStarted="...StartCommand" />
          <agent name="Mouse2" executeWhenStarted="...StartCommand" />
        </agents>
        <interpreters>
          <interpreter>...StartInterpreter</interpreter>
          <interpreter>...RunAwayInterpreter</interpreter>
        </interpreters>
      </capability>
    </capabilities>
  </application>
</applications>
Figure 3: Minimal application definition
An application consists of capabilities. Capabilities are a means to partition an application into logical sub-applications. The sample cat-and-mouse application in figure 3 uses capabilities to split the functionality of cats and mice into separate sub-applications. Every capability specifies agents. Applications with their capabilities and agents can also be specified programmatically and added to a node or removed from it at run-time. For every agent a command can be defined that is sent to the agent right after the agent's capability is activated (when node startup has finished or when the capability has been added to the application at run-time). The agent's startup command is specified by the executeWhenStarted attribute.
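As an illustration of the programmatic definition just mentioned, the following Java sketch builds the CAT capability of the cat-and-mouse application and adds it to a running node. The types and methods used here (Application, Capability, AgentDefinition, node.addApplication) are assumed names chosen for illustration, not the actual Janet API:

// Hypothetical sketch of a run-time application definition; all types
// and methods are assumed names, not the actual Janet API.
Application app = new Application("CAT_MOUSE_GAME");

Capability cat = new Capability("CAT");
// The agent and the command sent to it when the capability is
// activated, mirroring the executeWhenStarted attribute.
cat.addAgent(new AgentDefinition("Cat1", "...StartCommand"));
cat.addInterpreter("...StartInterpreter");
cat.addInterpreter("...ChaseMouseInterpreter");
app.addCapability(cat);

// Adding the application to a running node activates its capabilities;
// removing it later would deactivate them again.
node.addApplication(app);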
Commands and Interpreters
Agents communicate through the asynchronous exchange of commands. A command is a message that passes information from one agent to another. A command carries no code and cannot make an agent do anything. On receipt of a command, an agent chooses an interpreter that deals with the command in some way. It may ignore the command or start whatever action it considers appropriate. The command-interpreter pair was developed as an extension of the command pattern from the "Gang of Four" book; a notification sent from one agent to another is therefore called a command. Commands received by an agent are added to a priority queue, and a scheduler, which runs in a separate thread, processes the commands in the agent's command queue. At the time of writing, interpreters executed in response to a command run to completion. Suspendable interpreters can be suspended by Janet.ADÉ, Janet's load-balancing system.
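The command-interpreter pair can be pictured with the following self-contained Java sketch. The class and interface names are assumptions for illustration, not the actual Janet classes:

import java.io.Serializable;

// A command is a passive message: it carries data, never code.
class ChaseMouseCommand implements Serializable {
    final String mouseName;
    ChaseMouseCommand(String mouseName) { this.mouseName = mouseName; }
}

// On receipt of a command, the agent chooses a matching interpreter.
interface Interpreter<C> {
    void interpret(C command);
}

class ChaseMouseInterpreter implements Interpreter<ChaseMouseCommand> {
    public void interpret(ChaseMouseCommand command) {
        // The receiver decides what the command means: it may start an
        // action it considers appropriate ...
        System.out.println("Chasing " + command.mouseName);
        // ... or it may just as well ignore the command entirely.
    }
}

The sender merely enqueues the command; which interpreter runs, and whether anything happens at all, is decided entirely on the receiving side.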
System Application
The system application is a special application that runs the node itself. The system application has a capability named CORE. A capability with this name must be provided, and it is not possible to define agents for it. A node's CORE capability is served by a single system agent that has the highest priority of all agents. The system application may have additional capabilities, for which an arbitrary number of agents may be defined. These agents run at the second-highest priority. Agents of user-defined applications all run at the same priority, which is lower than the priorities of the system agents. This makes sure that application agents cannot cause system agents to starve. Since the system application can be modified like user applications, a node's behavior can be changed entirely by supplying the respective interpreters of the CORE capability, without having to change the system itself.

Events can be exported by user applications as well. An exported event can be signaled by agents of different applications; exported events are the means in Janet for agents of different applications to communicate. The owner of a node specifies in the exportedEvents section which handler to call in response to a signaled event. Figure 4 shows the system application of a standard node, which also exports events. Full class paths are omitted for brevity. Complete node descriptors are located in the <janet>/conf directory, from which the full class paths can be seen.
<systemApplication>
  <exportedEvents>
    <event name="EVENT_ADD_LOG_LISTENER" handler="...AddLogListenerHandler" />
    <event name="EVENT_REMOVE_LOG_LISTENER" handler="...RemoveLogListenerHandler" />
  </exportedEvents>
  <capabilities>
    <capability name="CORE">
      <interpreters>
        <interpreter>...NodeStartedInterpreter</interpreter>
        <interpreter>...RegisteredNodeInterpreter</interpreter>
        <interpreter>...RegisterNodeInterpreter</interpreter>
        <interpreter>...DeregisterNodeInterpreter</interpreter>
        <interpreter>...DeregisterNodeFinalInterpreter</interpreter>
        <interpreter>...RegisterApplicationInterpreter</interpreter>
        <interpreter>...DeregisterApplicationInterpreter</interpreter>
        <interpreter>...NodeShutdownInterpreter</interpreter>
      </interpreters>
    </capability>
  </capabilities>
</systemApplication>
Figure 4: The system application
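A handler named in the exportedEvents section might, as a rough sketch, look like the following. The EventHandler interface, the Event type, and the NodeLog helper are assumed names, not Janet's actual API:

// Hypothetical sketch of an exported-event handler; all names are
// assumed for illustration.
class AddLogListenerHandler implements EventHandler {
    public void handle(Event event) {
        // Invoked by the node when EVENT_ADD_LOG_LISTENER is signaled
        // by an agent of any application.
        LogListener listener = (LogListener) event.getArgument();
        NodeLog.addListener(listener);
    }
}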
Object Spaces
Object spaces are associative object stores in which agents' interpreters can store permanent information. There are object spaces on application level, node level, and cluster level. Agents that belong to the same application on the same node may store objects in the application-level object space. Agents of all applications on the same node may store objects in the node-level object space. All agents of all applications on all nodes may store objects in the cluster-level object space. There can be several cluster-level object spaces, which are implemented as RMI server objects. For a cluster-level object space to be visible to a node, it has to be defined in the node's descriptor, as shown in figure 5. As shown, either "localhost" or the workstation's network name can be used.
<clusterSpaces>
  <clusterSpace name="sharedSpace1" hostname="localhost" port="1099" />
</clusterSpaces>
Figure 5: Cluster-level object space definition
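As a sketch of how an interpreter might use such a store, consider the following Java fragment. The ObjectSpace type and its write/read methods are assumed names in the spirit of associative stores such as JavaSpaces, not the actual Janet interface:

// Hypothetical sketch of object-space access; all names are assumed.
ObjectSpace space = node.getClusterSpace("sharedSpace1");

// An interpreter on one node stores an object ...
space.write("gameScore", Integer.valueOf(42));

// ... and an interpreter on any other node in the cluster can
// retrieve it later.
Integer score = (Integer) space.read("gameScore");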
Event Registries
Event registries allow agents to register for events that can be signaled by any other agent, residing on whatever node and pertaining to whatever application. Analogously to object spaces, there are event registries on three levels: application level, node level, and cluster level. The cluster-level event registry is implemented as an RMI server object. For a cluster-level event registry to be visible to a node, it has to be defined in the node's descriptor, as shown in figure 6. As shown, either "localhost" or the workstation's network name can be used.
<clusterEventRegistries>
  <clusterEventRegistry name="sharedEventRegistry1" hostname="localhost" port="1099" />
</clusterEventRegistries>
Figure 6: Cluster-level event registry definition
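To make the registration idea concrete, here is a hypothetical Java sketch; EventRegistry, addListener, and signal are assumed names, not Janet's actual interface:

// Hypothetical sketch of event-registry usage; all names are assumed.
EventRegistry registry = node.getClusterEventRegistry("sharedEventRegistry1");

// One agent registers interest in an event ...
registry.addListener("MOUSE_CAUGHT",
        event -> System.out.println("A mouse was caught"));

// ... and any agent on any node, in any application, may signal it.
registry.signal("MOUSE_CAUGHT");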