Adaptation in Decision Making Organizations

Alexander H. Levis
C3I Center, George Mason University, Fairfax, VA 22030, USA

Abstract

Types of variable structure decision making organizations are defined and triggers of adaptation are discussed. An approach to determining quasi-static organizational forms is described; in this approach a set of fixed structure organizational forms is used. Then, an algorithm for determining incrementally adapting organizations is outlined. This includes the representation of the adaptive structures, the set of rules that govern the adaptation, and the determination of the feasible paths through which a given structure can be transformed into another one. The procedure establishes requirements for the underlying command, control, and communications system.

1. Introduction

The changing nature of military operations, from conducting operations other than war to joint and multinational war fighting, requires a wide range of decision making organizational forms that meet the particular needs of the operation. This leads to the requirement that the organizational structures be flexible and reconfigurable. Flexibility has to do with the ability to carry out a variety of missions and tasks, while reconfigurability deals with the ability to execute tasks with different assignments of functions to resources.

A few years ago, Monguillet and Levis (1993) defined in some detail variable structure organizations and classified variability into three types. A variable structure decision making organization (VDMO) is one for which the topology of interactions between the elements or components can vary. Analogously, a DMO, which has a constant pattern of interactions among its components, i.e., a fixed structure, is called a FDMO.

The relationships that tie the components together are defined at three different levels: physical arrangements, links between components, and protocols ruling the arrangements of these links. The architecture of the organization allows the topology of interactions to vary; the way it varies is implemented in the protocols themselves. The rules setting the interactions can be of any kind. We distinguish three types of variability, each corresponding to characteristic properties that a VDMO may exhibit; an actual VDMO may well exhibit several of these properties simultaneously, each to some extent.

* Type 1 variability: The VDMO adapts its structure of interactions to the input it processes. Some patterns of interactions may be more suitable for the processing of a given input than others.

* Type 2 variability: The VDMO adapts its structure of interactions to the environment. The performance of a DMO depends strongly on the characteristics of the environment as perceived by the organization. For example, an air defense organization may be optimized for some types of threats and their probabilities of occurrence. Now, if the adversary's doctrine changes or the deployment of his assets changes, then the probability distribution of the occurrence of the threats is modified. The organization (with the interactions set as before the changes in the environment) may not meet the mission requirements any more.

* Type 3 variability: The VDMO adapts its structure of interactions to the system's parameters. The performance of a system changes when assets are destroyed or become unavailable because of countermeasures such as jamming of communications.

These three different types of variability can be related to the properties of Flexibility, Reconfigurability, and Survivability. A DMO is survivable when it can achieve prescribed levels of performance under some wide range of changes either in the environment, or in the characteristics of the organization, or in the mission itself. The extent to which a DMO is survivable depends on the extent to which it is flexible and reconfigurable. Flexibility means that the DMO may adapt to the tasks it has to process, to their relative frequency, or to its mission(s). Reconfigurability means that it can adapt to changes in the availability of resources. Both properties overlap, and their quantitative evaluation clearly falls outside the scope of this paper.

The triggers of adaptation in this construct were:

* Changes in the set of tasks the organization is to perform. Mathematically, the set can be expressed by an alphabet; a change in the tasks is modeled by a change in the input alphabet.

* Changes in the arrival frequency of the tasks. Since different tasks require different resource use, changes in their frequency distribution are capable of triggering adaptation.

* Changes in the availability of resources. The loss of a resource may force the reconfiguration of the organization and may lead to the use of an alternative operational procedure to carry out the task.

None of these three triggers includes time explicitly, even though both the frequency trigger and the availability trigger have some aspect of time implicitly included. In recent experiments carried out under the Adaptive Architectures for Command and Control (A2C2) program, pre-experimental modeling showed that the tempo with which tasks arrive (given a fixed alphabet and a constant frequency of tasks) is another trigger.

* Tempo of operations. Changes in the tempo of operations may result in unavailability of resources. For example, the organization may be capable of coping with n tasks per unit time. However, if the tempo increases to (n + m) tasks per unit time, the organization may find itself with inadequate resources to cope with the situation. This could be thought of as roughly equivalent to keeping the tempo at n tasks per unit time, but reducing the resources.

2. Quasi-static Adaptive Organizations

After defining types of adaptivity, Monguillet and Levis (1993) proceeded to investigate the performance of an organization that operates in several fixed modes, but can switch from one mode to another depending on the values of the triggers. Let the two measures of performance be Accuracy and Timeliness.

Accuracy, denoted by J, is a measure of the degree to which the actual response of the organization to a given input matches the ideal response for that same input. If we denote by X the alphabet of inputs xi: X = {x1, x2, ..., xn}, Y the alphabet of outputs yj: Y = {y1, y2, ..., yq}, p(xi) the probability of occurrence of the input xi, with Σi p(xi) = 1, yd(xi) the ideal (or desired) response to xi, yaj(xi), j = 1, ..., q, the responses that the DMO actually produces, and C(yd, ya) the cost of the discrepancy between the ideal and the actual responses, then a measure of Accuracy of the DMO is the expected cost of that discrepancy:

J = Σi p(xi) Σj p(yaj | xi) C(yd(xi), yaj(xi))

where p(yaj | xi) is the probability that the DMO actually produces the response yaj to the input xi.

Timeliness, denoted by T, is the ability to respond to the input with a time delay Td which is within the allotted time [Tmin, Tmax], called the window of opportunity. If we denote by Td(xi) the average processing delay of xi, and by 1_[Tmin, Tmax] the characteristic function of the window of opportunity, then a measure of Timeliness of the DMO is the expected value of the processing delay:

T = Σi p(xi) Td(xi)
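The two measures can be computed directly from these definitions. The following sketch is illustrative only: the function names, the conditional response probabilities p(yaj | xi), and the numerical values are assumptions, not taken from the paper.

def accuracy_J(p, y_d, responses, cost):
    """Expected cost of the discrepancy between actual and ideal responses.

    p         : dict input -> probability of occurrence p(x_i)
    y_d       : dict input -> ideal response y_d(x_i)
    responses : dict input -> dict of actual responses {y_a: p(y_a | x_i)}
    cost      : function C(y_d, y_a) -> non-negative cost of the discrepancy
    """
    return sum(
        p[x] * sum(q * cost(y_d[x], y_a) for y_a, q in responses[x].items())
        for x in p
    )

def timeliness_T(p, delay):
    """Expected processing delay E[T_d] over the input alphabet."""
    return sum(p[x] * delay[x] for x in p)

def within_window(T, T_min, T_max):
    """Characteristic function of the window of opportunity [T_min, T_max]."""
    return T_min <= T <= T_max

# Example with a two-symbol alphabet (hypothetical numbers):
p = {"x1": 0.7, "x2": 0.3}
y_d = {"x1": "y1", "x2": "y2"}
responses = {"x1": {"y1": 0.9, "y2": 0.1}, "x2": {"y2": 1.0}}
J = accuracy_J(p, y_d, responses, cost=lambda yd, ya: 0.0 if yd == ya else 1.0)
T = timeliness_T(p, delay={"x1": 5.0, "x2": 8.0})
print(J, T, within_window(T, 0.0, 7.0))   # 0.07 5.9 True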

The performance loci for the two fixed structure organizations, i.e., FDMO1 and FDMO2, are depicted in Fig. 1. They are disjoint, and no matter what decisions from the set of admissible decisions are used in either of the two organizations, FDMO2 needs more time to respond. As indicated in Fig. 1, the whole locus for FDMO1 is to the left of the line T = 7 units of time, whereas the one for FDMO2 is to the right of the line T = 9 units of time. Now consider a variable structure organization which can take the fixed structure form of either FDMO1 or FDMO2 and can switch instantaneously and without cost from one structure to another.

The same methodology for evaluating the MOPs applies to the organization with a variable structure, the VDMO. The system locus of the VDMO is also shown in Fig. 1. As expected, the variable structure organization is, on the average, faster to respond than the slower fixed structure organization (FDMO2), and is also, on the average, more accurate than the least accurate fixed structure organization (FDMO1).

3. Algorithmic Design of Quasi-static Organizations

A distributed decision making organization is seen as an information processing system that must perform several functions to accomplish its mission (Minsky, 1986; Levis, 1988). To help in formulating the algorithmic solution to the problem of generating quasi-static solutions, the functions are divided into individual tasks, the roles. Each role is a series of repetitive procedures that are prescribed by the requirements of the mission, so that each decision maker's activity contributes a little to each of the several functions. The inputs to the organization are the observations made by the sensors. These items of information are transmitted to the proper destinations within the system, they are analyzed, and the selected response is implemented by the effectors. The model is restricted to observations that are temporally consistent: they refer to the same temporal origin, i.e., to an event with a specific time of occurrence. It is further assumed that the processing of one set of simultaneous observations is deterministic: it involves a unique set of interactions.

Sensors: A DMO processes data from N sources of information, i.e., N sensors labeled Sensor 1 to Sensor N. Sensor n can output one signal or symbol from its associated set of possible signals, its output alphabet Xn = {xn1, ..., xn|Xn|}, which contains |Xn| elements. In the Petri Net formalism, each independent Sensor is modeled by a place, as represented in Fig. 2. A transition models the communication of the sensor's observations. The temporal consistency of the observations is embedded in the fact that all sensors are the output of a single process. This process has a single input place p0, which is called the external place.

From the system point of view, temporal consistency also implies that the input to the system is an N-dimensional vector: x = (x1, x2, ..., xN). This vector has as components the N independent observations and belongs to the alphabet X, where X = X1 × X2 × ... × XN.

CPN Representation of an Interaction: An interaction is characterized by its pattern of activation over the set of inputs. Therefore, every interaction is described by a diagonal |X| × |X| matrix L:
• Lii = 1 if the i-th input in the lexicographic ordering of X activates the interaction.
• Lii = 0 if the i-th input in the lexicographic ordering of X does not activate it.
In the Colored Petri Net model, an interaction is represented by a link between two transitions t1 and t2. The link indicates that the output of process t1 is an input to process t2. The place that belongs to the link is an interactional place, which models a communication buffer. The links are annotated by the matrix L.

The tokens in the CPN have an identity and represent messages. A token's identity belongs to X: a token x = (x1, x2, ..., xN) describes a message that has been generated by the corresponding set of simultaneous observations. The matrix L attached to an interaction (a link) describes the set of tokens that can go through that link. A transition is enabled if and only if all its input places contain messages (tokens) as described by the annotation of its input links. When a transition fires, tokens of the type indicated by the annotation in the links are taken from the input places and tokens are generated in the output places.

There are three basic types of interactions in a variable structure:
• The permanent links. These are the links for which L is the |X| × |X| identity matrix. Every input requires this interaction to be processed. By convention, these links are depicted without annotation on a CPN model of the system.
• The inadmissible links. These are the links for which L is the |X| × |X| null matrix. No input requires this interaction to be processed. These links are never depicted on a CPN model of the system.
• The variable links. These are the links for which L has both 0s and 1s on the diagonal. Some inputs require this interaction in order to be processed, while some do not.
Permanent links, as well as inadmissible links, are not key elements in generating variable structures. If a link is permanent or inadmissible, the existence of the interaction is not subordinated to the information content of the input, while for variable links the decision to interact or not is based on the information content of the input x. The alphabet Xi of Sensor i is said to be effective if the decision to interact is based, in part or in whole, on the output of Sensor i. More formally:

Proposition 1 Given a variable interaction described by a diagonal matrix L, the alphabet Xi is an effective alphabet of the interaction if and only if there exist two signals in Xi, xi1 and xi2, such that there exists an input x = (x1, x2, ..., xN) in X that activates the interaction, with xi = xi1 and there exists an input x' = (x'1, x'2, ..., x'N) in X that does not activate the interaction, with x'i = xi2.
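The following sketch gives one concrete reading of this condition, under the assumption that a variable interaction is summarized by the set of inputs that activate it (the 1 entries on the diagonal of L) and that the activation must actually change when only the i-th component of the input changes; the function and data names are illustrative, not the paper's notation.

from itertools import product

def is_effective(alphabets, activating, i):
    """Test whether the activation of a variable interaction depends on Sensor i.

    alphabets  : list of sensor alphabets X1, ..., XN (lists of symbols)
    activating : set of input tuples x in X whose diagonal entry of L is 1
    i          : index (0-based) of the sensor whose alphabet is tested
    """
    for x in product(*alphabets):
        if x in activating:
            # Look for a second input x' that differs from x only in component i
            # and does not activate the interaction.
            for s in alphabets[i]:
                x_prime = x[:i] + (s,) + x[i + 1:]
                if x_prime not in activating:
                    return True
    return False

# Two binary sensors; the link is activated exactly when Sensor 1 reports "a1".
alphabets = [["a1", "a2"], ["b1", "b2"]]
activating = {("a1", "b1"), ("a1", "b2")}
print(is_effective(alphabets, activating, 0))  # True: activation depends on Sensor 1
print(is_effective(alphabets, activating, 1))  # False: Sensor 2 is not effective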

Interactions: Each role is modeled by a subnet with four transitions and three internal places, as shown in Fig. 3. The four stage decision making process consists of four algorithms, SA, IF, CI, and RS, and is a reduced form of the general five stage process (Levis, 1988). In Figure 3, x represents an input signal from an external source of information or from the rest of the organization, i.e., from another role. The Situation Assessment (SA) algorithm processes the incoming signal to formulate an assessment of the situation. The assessed situation z may be transmitted to other roles. Concurrently, the role may incorporate one or several signals z'' from other parts of the system. The signals z and z'' are fused together in the Information Fusion stage (IF) to produce the final situation assessment z'. The next algorithm, the Command Interpretation algorithm (CI), receives and interprets possible commands (v') from other roles, which restrict the set of responses that can be generated. The CI stage outputs a command v, which is used in the Response Selection algorithm (RS) together with the assessed situation z to produce the response of the role, the output y. This output can be sent to the effectors and/or to other roles. The input stage of a role may be SA, IF or CI; these are the stages that can accept external inputs. The final output stage must be RS, the stage in which the role selects its response.
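As an illustration, the four-stage role can be viewed as a small processing pipeline. The sketch below is only a structural reading of Fig. 3: the default algorithm bodies are placeholders, and all names are assumptions rather than the paper's notation.

from dataclasses import dataclass
from typing import Any, Callable, List, Optional

@dataclass
class Role:
    """Four-stage decision-making role: SA -> IF -> CI -> RS.
    The four algorithms are injected as callables; the defaults are placeholders."""
    sa: Callable[[Any], Any] = lambda x: x                           # Situation Assessment
    if_: Callable[[Any, List[Any]], Any] = lambda z, zs: (z, *zs)    # Information Fusion
    ci: Callable[[List[Any]], Optional[Any]] = lambda vs: vs[0] if vs else None  # Command Interpretation
    rs: Callable[[Any, Optional[Any]], Any] = lambda z, v: (z, v)    # Response Selection

    def process(self, x, z_shared=(), v_shared=()):
        z = self.sa(x)                          # assess the incoming signal x
        z_prime = self.if_(z, list(z_shared))   # fuse with assessments z'' from other roles
        v = self.ci(list(v_shared))             # interpret commands v' received from other roles
        y = self.rs(z_prime, v)                 # select the response y
        return z, y                             # z may be shared; y goes to effectors or other roles

role = Role()
z, y = role.process("x0", z_shared=["z_from_role_2"], v_shared=["v_from_role_3"])
print(z, y)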

A role need not have access to all the sensors' observations; it might base its situation assessment on a restricted number of observations. Fig. 4 depicts schematically the interactions between sensors and roles. Sij models a link between Sensor i and Role j.

Only certain types of interactions between roles make sense within the model (Remy, 1988). They are depicted in Fig. 5. For the sake of clarity, only the links from the i-th role to the j-th role have been represented. The symmetrical links from j to i are valid interactions as well.

The place si models the case in which the i-th role communicates the response it has selected to the external environment through the effectors. If Role i sends its response to the effectors, then there exists a link between the RS stage of Role i and the output transition. This output transition has a unique output place, which is called the sink. The place Fij denotes the interaction that occurs when the situation assessment produced as an output of the SA stage of Role i is sent to the j-th role to be fused with the assessment of the j-th role and/or assessments from other roles. Gij depicts the case where the response selected by the i-th role is the input of the j-th role. Hij shows the sharing of a result: Role i informs Role j of its final decision. The j-th role may or may not take this information into account. Finally, Cij has been introduced to model hierarchies between roles. It describes the possibility of Role i sending a command to Role j.

Matrix Representation of Interactions: The model of interactions in a variable structure DIS can be represented in matrix form. Suppose that a variable structure contains R roles, N sensors, and that its input alphabet is X = X1 × X2 × ... × XN. Then, a DIS is completely determined by the six-tuple Σ = (S, s, F, G, H, C):
• S is an N × R block array, which depicts the flow of information from the external environment to the DIS.
• s is a 1 × N block array that depicts the flow of information from the DIS to the effectors.
• F, G, H, C are four R × R block arrays which model the interactions between roles within the DIS.
  - Fij models the link from the SA stage of Role i to the IF stage of Role j.
  - Gij models the link from the RS stage of Role i to the SA stage of Role j.
  - Hij models the link from the RS stage of Role i to the IF stage of Role j.
  - Cij models the link from the RS stage of Role i to the CI stage of Role j.
• Every block in S, s, F, G, H, C is a |X| × |X| diagonal matrix L.
  - Lii = 1 if the i-th input in the lexicographic ordering activates the link.
  - Lii = 0 if the i-th input in the lexicographic ordering does not activate the link.
For fixed dimensions X, N, and R, the set of all six-tuples Σ of dimensions (X, N, R) is called V, the set of Well Defined Variable Structures (WDVS).
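To make the bookkeeping concrete, the six-tuple can be held as a set of boolean arrays whose last axis runs over the lexicographically ordered input alphabet; reading out the slice for one input then yields the corresponding fixed structure (cf. Proposition 4 below). This is a minimal sketch: the class name, the array layout, and the assumption of one output link per role in s are illustrative choices, not the paper's representation.

import numpy as np

class VariableStructure:
    """Sketch of Sigma = (S, s, F, G, H, C): each interaction is stored as the
    diagonal of its |X| x |X| activation matrix L, kept as a boolean vector."""
    def __init__(self, n_sensors, n_roles, n_inputs):
        self.N, self.R, self.nX = n_sensors, n_roles, n_inputs
        zeros = lambda shape: np.zeros(shape + (n_inputs,), dtype=bool)
        self.S = zeros((n_sensors, n_roles))   # sensors -> roles
        self.s = zeros((n_roles,))             # roles -> effectors (one link per role assumed)
        self.F = zeros((n_roles, n_roles))     # SA_i -> IF_j
        self.G = zeros((n_roles, n_roles))     # RS_i -> SA_j
        self.H = zeros((n_roles, n_roles))     # RS_i -> IF_j
        self.C = zeros((n_roles, n_roles))     # RS_i -> CI_j

    def fixed_structure(self, x_index):
        """Read out the fixed structure for the input with the given lexicographic
        index: every block collapses to a single 0/1 entry."""
        return {name: getattr(self, name)[..., x_index].astype(int)
                for name in ("S", "s", "F", "G", "H", "C")}

# Example: two sensors, two roles, |X| = 4; the F link from Role 1 to Role 2 is
# variable (active only for inputs 0 and 2) and the output link of Role 2 is permanent.
vs = VariableStructure(n_sensors=2, n_roles=2, n_inputs=4)
vs.F[0, 1, [0, 2]] = True
vs.s[1, :] = True
print(vs.fixed_structure(0)["F"])   # [[0 1] [0 0]] -> the link is present for this input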

A fixed structure is determined by two parameters, the number of roles, R, and the number of sensors, N. A fixed structure of dimensions (N, R) can be represented in this model by the six-tuple:

Σ' = (S', s', F', G', H', C')

• S' is an N × R array.
• s' is a 1 × N array.
• F', G', H', C' are four R × R arrays.
• Their entries are in {0, 1}:
  - 1 if the interaction is present,
  - 0 if the interaction is not present.
Here again, if N and R are fixed, the set of all six-tuples Σ' of dimensions (N, R) is called W, the set of Well Defined Fixed Structures (WDFS).

4.1 Properties

Some properties of the set of variable structures, V, and of the set of fixed structures, W, are stated in this section.

Proposition 2 Each element of V can be equivalently described in a matrix form Σ or by a Colored Petri Net.

There exists a one to one relationship between the representation of a variable structure in a matrix form and a Colored Petri Net model of the structure. One can thus work with the language that is most appropriate for one's needs. The proof is as follows. One transition is created for each process in the system, and a link is drawn between any two transitions that interact. This information is provided by the matrix form Σ. Note, however, that a DIS also contains internal links, which describe the continuous flow of information within one role and are not embedded in the matrix form Σ. The key proposition is that the internal links are completely determined by the activation of interactional links: a link between two internal processes t1 and t2 within a role exists if and only if t1 has at least one input link, internal or interactional.

Proposition 3 Each element of W can be equivalently described in a matrix form or by an Ordinary Petri Net.

There exists a one to one relationship between the representation of a fixed structure in a matrix form and an Ordinary Petri Net model of the structure. The proof for fixed structures is similar to the proof for variable structures.

Next, the theory of variable structures is related to the theory of fixed structures through Proposition 4.

Proposition 4 Any variable structure corresponds to a mapping Σ: X --> W, x --> Σ(x), which associates with each input in X one and only one fixed structure.

In other terms, the Colored Petri Net model of a variable structure can be represented as a mapping from X into the set of Ordinary Petri Nets W. Finally, the sets V and W can be investigated using some partial orderings, which allow one to sort elements in a set, as described by Propositions 5 and 6.

Proposition 5 The set V of variable structures is ordered by the binary relation ACT, where Σ ACT Σ' means that every input that activates an interaction in Σ also activates the same interaction in Σ', i.e., each interaction in Σ has a number of activations less than or equal to that of the same interaction in Σ'.

The elements of V can be ordered from the ones with the least activation to the ones with the largest activation. Similarly, it is easy to prove Proposition 6.

Proposition 6 The set W of fixed structures is ordered by the binary relation SUB, where Σ SUB Σ' means that every interaction in Σ is present in Σ', i.e., the Ordinary Petri Net that represents Σ is a subnet of the Ordinary Petri Net that represents Σ'.

The elements of W can be ordered from the ones that are least connected to the ones that are maximally connected.
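Both orderings reduce to elementwise comparisons once structures are stored as activation arrays, as in the earlier sketch. The helpers below are illustrative; the dictionary-of-arrays representation is an assumption carried over from that sketch.

import numpy as np

def act_leq(sigma, sigma_prime):
    """Sigma ACT Sigma' (Proposition 5): every input that activates an interaction
    in sigma also activates the same interaction in sigma_prime. Both arguments
    are dicts of boolean activation arrays keyed by block name (S, s, F, G, H, C)."""
    return all(np.all(~sigma[k] | sigma_prime[k]) for k in sigma)

def sub_leq(fixed, fixed_prime):
    """SUB (Proposition 6): every interaction present in the fixed structure
    is also present in the second fixed structure (dicts of 0/1 arrays)."""
    return all(np.all(fixed[k] <= fixed_prime[k]) for k in fixed)

# Tiny check with a single one-block structure over a two-symbol alphabet:
a = {"F": np.array([[[True, False]]])}
b = {"F": np.array([[[True, True]]])}
print(act_leq(a, b), act_leq(b, a))   # True False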

4.2 Constraints

Generic Constraints: Not every WDVS models a DIS that makes physical sense. Some generic constraints must be defined on V to restrict the class of variable structures to those that are admissible. The generic constraints on V can be divided into two classes. The first class relates the properties of variable structures to the properties of fixed structures, as described in Remy and Levis (1988). The second class is specific to variable structures. The set of variable structures that satisfy the generic constraints is called AV, the set of Admissible Variable Structures (AVS).

Let Σ be a variable structure. For any x in X, the fixed structure Σ(x) must satisfy:

(R1) (a) The Ordinary Petri Net that corresponds to Σ(x) should be connected, i.e., there should be at least one (undirected) path between any two nodes in the Net; (b) a directed path should exist from the external place to every node and from every node to the sink.
(R2) The Ordinary Petri Net that corresponds to Σ(x) should have no loops, i.e., the structure must be acyclic.
(R3) In the Ordinary Petri Net that corresponds to Σ(x), there can be at most one link from the RS stage of a role i to another role j, i.e., for each i and j, only one element of the triplet {G(x)ij, H(x)ij, C(x)ij} can be non-zero.
(R4) Information fusion can take place only at the IF and CI stages. Consequently, the SA stage of a role can either receive observations from sensors, or receive one and only one response sent by some other role.
(R5) There cannot be both a link from the SA stage of role i to the IF stage of role j and a link from the RS stage of role i to the SA stage of role j.

Constraint R1(a) eliminates data flow structures that do not represent a single structure. Constraint R1(b) ensures that the flow of information is continuous within the organization from the sensors to the effectors. Constraint R2 allows acyclic fixed data flow structures only. This restriction is imposed to avoid deadlocks and infinite circulation of messages within the organization. Constraint R3 indicates that it does not make sense to send the same output to the same role at several stages. It is assumed that once the output has been received by a role, this output is stored in its internal memory and can be accessed at later stages. Constraints R4 and R5 ensure that the IF stage is indeed a stage at which items of information coming from different sources are fused and that deadlocks are avoided.

(R6) If the first stage of a role is SA, then each input link in S and G is permanent.
(R7) If the first stage of a role i is IF, then each input link Fji, Hji for j in [1..R] is permanent.
(R8) If the first stage of a role i is CI, then each input link Cji for j in [1..R] is permanent.
(R9) Let L be a variable link between two stages t1 and t2, and suppose that Xi is an effective alphabet of the variable interaction.
• If t1 is an SA stage and t2 is an IF stage, then there must be in every Σ(x) a directed path from the place Sensor i to t1, and a directed path from the place Sensor i to the SA stage of the role that contains t2.
• If t1 is an RS stage and t2 is an IF stage, then there must exist in every Σ(x) a directed path from the place Sensor i to t1, and a directed path from Sensor i to the SA stage of the role that contains t2.
• If t1 is an RS stage and t2 is a CI stage, then there must exist in every Σ(x) a directed path from the place Sensor i to t1, and a directed path from the place Sensor i to the IF stage of the role that contains t2.

Constraints R6, R7, and R8 proceed from a common rationale. They state that a role, at its input stage, cannot have any knowledge about the input to the system and, therefore, cannot exhibit a variable interaction. Thus, at the SA stage, any link between the sensors and the roles must be fixed. Similarly, if a role receives the response from another role, the latter must always communicate its response (R6). Constraints R7 and R8 incorporate the fact that the input stage of a role can be the Information Fusion or the Command Interpretation stage. Constraint R9 states that a variable interaction between two stages t1 and t2 must be based on sources of information that are accessed jointly by the roles that interact. The stage t1 must determine, based on some information it has accessed, whether or not it has to send a message to t2. Similarly, the role that contains t2 must infer from some of the information it has already received whether or not it must wait for a message from t1 before initiating process t2. The condition that the information can be accessed is formulated by checking that there is a flow of information (a directed path) from a source that is effective to the stages at which the information contained in the sensor observation is needed. In other words, an effective source of information must be accessible.

Constraints R6 to R9 lead to the introduction of a new concept, the accessible pattern of a variable structure, which keeps track of the sensors that are accessible for each potentially variable interaction. Let Σ be any AVS. Its accessible pattern is a set of arrays:

E(Σ) = [Fe(Σ), He(Σ), Ce(Σ), se(Σ)].

• Fe(Σ), He(Σ), Ce(Σ) are three R × R arrays that correspond to the interactions between roles that can be variable.
• se(Σ) is an R × 1 array that corresponds to the links from the roles to the effectors, which can be variable.
• Each entry contains the alphabets that are accessible for the corresponding interaction.

User-Defined Constraints: A designer can also introduce constraints that reflect his knowledge about the structure under study. He may rule in or rule out some links, force a certain pattern of variability, or express hierarchical relationships between the roles. For example, due to a particular expertise, one might like to indicate that a certain set of observations can only be processed by some roles, while the processing of other sets of observations can be carried out by any role (Stabile and Levis, 1984). The designer can translate his knowledge by filling 0s and 1s at the appropriate places in the arrays S, s, F, G, H, C. The other elements will remain unspecified, and constitute the degrees of freedom of the design.

A designer can impose two types of conditions, fixed constraints or colored constraints. Fixed constraints are the constraints that are valid for any input x in X. They can be of two types, ruling in or ruling out. A link is ruled out by putting the |X| × |X| null matrix O in the appropriate entry of Σ. A link is set to be a permanent link by putting the |X| × |X| identity matrix I in the appropriate entry of Σ. Colored constraints are used to rule out a link for some set of observations and rule it in for some other set of inputs. This can be done by filling in the appropriate matrix L in Σ.

4.3 Computation of Solutions

The design problem is to determine all the Admissible Variable Structures that satisfy the user-defined requirements. Demaël and Levis (1994) have shown that this task can be carried out within acceptable computational limits. First, it is proven that the task of determining the set of solutions can be carried out quantitatively using the formalism of CPN theory. Then, it is shown how to characterize the set of solutions without having to do a computationally expensive, and practically infeasible, exhaustive enumeration. Indeed, the set can be determined from its boundaries, i.e., its minimal and maximal solutions. A solution Σ0 is minimal in V if it is not possible to have a variable structure Σ, with Σ ACT Σ0, without violating one of the constraints R1 to R9. A solution Σ0 is maximal in V if it is not possible to have a variable structure Σ, with Σ0 ACT Σ, without violating one of the constraints R1 to R9. The next propositions lead step by step to a characterization of the set of solutions that can be translated into a design algorithm.

Proposition 7 Consider the fixed structure, called the Universal Net, which contains all the interactions that have not been ruled out at the design specifications stage. Then, for every x, Σ(x) must be a subnet of the Universal Net.

The rationale is that a link that has been ruled out cannot appear in Σ(x) for any input to the system. Then, the colored constraints must be analyzed to determine the correlation of the links whose variability has been set explicitly by the designer. Indeed, for each variable link that has been specified, the set of inputs X is divided into two subsets, as is the set W. The inputs that activate the variable link L (the inputs in AC) must be assigned to fixed structures that contain the fixed link L (fixed structures in W1), and the inputs that do not activate the variable link L (the inputs in DC) must be assigned to fixed structures that do not contain the fixed link L (fixed structures in W0). The fact that the activation of links is correlated is illustrated in Figure 6, because there is no reason to believe that the variability of two links induces the same partitions of X and W.

This analysis yields:
• A partition of the set of inputs X into k elementary sets of inputs EXi, i = 1..k.
• A characterization of k disjoint subsets in W, Wi, i = 1..k.
• The condition that each input in EXi must be assigned to a unique structure in Wi.
The elements of every subset Wi, i = 1..k, can be characterized using simple information flow paths. A simple information flow path is a directed path without loops from the external place p0 of the Universal Net to the sink.
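The partition into elementary sets is simply the common refinement of the activating/non-activating splits induced by each explicitly colored link. A minimal sketch follows, with illustrative names and data (the link names and activation sets are invented for the example).

from collections import defaultdict

def elementary_input_sets(X, colored_links):
    """Partition the input alphabet X into elementary sets of inputs.

    X             : iterable of inputs
    colored_links : dict link_name -> set of inputs that activate the link (AC);
                    the complement of each set is the corresponding DC set.
    Inputs that agree on the activation of every specified variable link end up
    in the same elementary set and must be assigned to fixed structures drawn
    from the same subset Wi.
    """
    cells = defaultdict(set)
    for x in X:
        signature = tuple(x in colored_links[name] for name in sorted(colored_links))
        cells[signature].add(x)
    return list(cells.values())

# Example: |X| = 4 inputs, two colored links with different activation sets.
X = ["x1", "x2", "x3", "x4"]
colored = {"F12": {"x1", "x2"}, "C21": {"x2", "x3"}}
print(elementary_input_sets(X, colored))
# -> [{'x1'}, {'x2'}, {'x3'}, {'x4'}]  (here the refinement separates all four inputs)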

Proposition 8 Σ is a fixed structure that belongs to Wi, i = 1..k, if and only if (i) Σ lies between a maximal element and a minimal element of Wi, i.e., there exist Σ1 and Σ2 in Wi such that Σ1 SUB Σ SUB Σ2, and (ii) Σ is a union of simple information flow paths of the Universal Net.

Figure 6. Correlation of Colored Constraints

Let a candidate solution be an AVS that satisfies all constraints of the design but R6 to R9. Proposition 9 characterizes candidate solutions.

Proposition 9 An AVS Σ satisfies all constraints of the design except R6 to R9 if and only if Σ lies between a minimal candidate solution and a maximal candidate solution, i.e., there exist Σ1 and Σ2 such that Σ1 ACT Σ ACT Σ2, where Σ1 assigns, for every i = 1..k, the inputs in EXi to a minimal element of Wi, and Σ2 assigns, for every i = 1..k, the inputs in EXi to a maximal element of Wi.

The proof is a combination of Proposition 4 and Proposition 8. The set of the AVSs that satisfy all constraints of the design except R6 to R9 is thus completely determined by its boundaries. Unfortunately, not all AVSs that satisfy Proposition 9 fulfill constraints R6 to R9. The first reason is that candidate minimal and candidate maximal solutions may not fulfill one of the constraints R6 to R9. The second reason is that variable structures between the candidate maximal and minimal solutions may have variable links in which some effective alphabets are not accessible. However, the set of solutions is completely determined by its minimal and maximal solutions, as described by Proposition 10 and illustrated in Figure 7.

Proposition 10 The set of solutions can be partitioned into families with each family containing all the structures that have the same permanent input links.

• One family corresponds to a layer of partially ordered sets. In each layer, the AVSs have the same accessible patterns.
• Within one family, Σ is a solution if and only if:
  - Σ fulfills Proposition 9;
  - Σ is bounded by at least one minimal and one maximal solution that have the same accessible pattern: Σ1 ACT Σ ACT Σ2.
  These accessible patterns are sorted in increasing number of accessible alphabets.
• The boundaries of each layer are determined by the maximal and minimal elements of the family.

A minimal solution to the problem is called a VMINO, whereas a maximal solution to the problem is called a VMAXO. The constraints that are specific to variable structures require that the set of solutions be divided into subsets of solutions with the same input links (Constraints R6, R7, and R8). The VMINOs and VMAXOs determine completely the set of solutions within one family. Between a VMINO and a VMAXO there are several layers of solutions. Each layer has boundaries, and any AVS: 1) between the boundaries, 2) whose variability is based on the corresponding accessible pattern, is a solution to the design problem. Finally, the boundaries are determined by adding selected links of the maximal solution to the minimal solution.

Figure 7 depicts one possible configuration of a set of solutions. There are two families of solutions. Each subset is characterized by some VMINOs and VMAXOs. In the first family, there are two VMINOs and two VMAXOs. From VMINO 2 to VMAXO 2, there exist three layers of solutions with the same effective pattern, which correspond to the effective patterns E1, E2, E3.

The computation that allows one to reduce the set of candidate solutions to the set of solutions can be done as follows. First, one determines the combinations of input links that are allowed by the degrees of freedom of the design.

Figure 7. Structure of a Set of Solutions

One family of solutions will be determined for each combination. Then, for each family, the computation considers its minimal candidate solutions and its maximal candidate solutions, and searches for solutions that verify constraint R9, which is checked (i) by determining for each variable link its effective alphabets, and (ii) if Xi is an effective alphabet, by checking whether there exist simple information flow paths from Sensor i to the stages that interact in each fixed structure Σk. If one variable link does not fulfill R9, the structure is rejected.

VMINOs are computed by checking first all the candidate minimal solutions. If some minimal candidate solutions verify R9, then the computation stops. Otherwise, the search continues inductively on the variable structures that are immediately above, until a solution is found. Symmetrically, the VMAXOs are computed by scanning first all the candidate maximal solutions. If some such structures fulfill R9, the search stops. Otherwise, it continues inductively by checking the set of candidate solutions that are immediately below the ones just scanned.

Finally, the intermediate boundaries between one VMINO and one VMAXO are determined as follows. One computes the effective pattern of the VMINO and the VMAXO. If they are equal the search is over. If they are not, the interactions at which the VMAXO has more accessible alphabets are determined. Then, for each pair (interaction I, accessible alphabet Xi) one determines the combinations of simple information flow paths that make Xi accessible at I. These information flow paths, when added to the VMINO, generate the set of intermediate boundaries.

To summarize, the computational procedure is as follows:

Step 1: Given the set of constraints to the design, determine the Universal Net, the elementary sets of inputs EXi, i = 1..k, and the constraints on Wi, i = 1..k.

Step 2: Apply the Lattice Algorithm of Remy and Levis (1988) to compute the minimal and maximal elements of each Wi, i = 1..k.

Step 3: Use Proposition 4 to obtain candidate minimal solutions and candidate maximal solutions.

Step 4: Sort the families of solutions. Search for VMINOs and VMAXOs.

Step 5: Determine the intermediate boundaries of each family.

5. Incrementally Adaptive Organizations

In many cases, particularly in tactical decision making organizations, the organization cannot cease its activities to switch from one configuration to another. Switching has to occur smoothly, without interfering with current operations. To accomplish this, adaptation has to be incremental: a sequence of small changes that transition the organization from one architecture (the current one) to the desired or target architecture while maintaining the requisite functionality. This incremental adaptation can be visualized as a morphing process. Morphing is a visual effect technique which builds the necessary sequence of frames (in-between images) to make the transition from a source image to a different target image. In this sequence, frames have features that differ more and more from the source image and become closer and closer to the target image. Morphing organizations are adaptive organizations that enable smooth and progressive switching from one configuration to another.

Because of the distribution of processes (functions) across team members for the accomplishment of the mission, coordination between team members is essential for successful incremental adaptation or morphing. Coordination between team members is required to identify the need for change, to select the new configuration, and to implement it. This takes the form of coordination rules allocated to team members that specify how to implement switching from one configuration to another in a distributed environment. These rules are defined for each team member and specify under what conditions to perform some functions and where to send the function output. The derivation of these coordination rules is a critical part of the design of adaptive teams. A team is defined as a group of experts with overlapping areas of expertise, who work cooperatively to solve decision problems or carry out missions. This paper presents a methodology for adaptive team design that captures the dynamical aspects of morphing organizations, and shows how the transfer of functions between team members should take place.

5.1 A Methodology for Designing Adaptive Organizations

The methodology for Adaptive Team design is summarized in Figure 8. It consists of seven steps. From a Process Model that specifies the different functions to be carried out and the data exchanges between them (step 1), redundant responsibilities are defined by allocating each function to several team members (step 2). A priority for back-up implementation is defined for each function and is used to construct the Responsibility Definition Matrix. This redundant responsibility definition makes it possible to define the possible configurations, called modes of operation, expected to be applied by the team. The set of all modes of operation is characterized by a lattice called the Mode Transition Graph (step 3). From the Responsibility Matrix, the Fully Connected Graph that represents all the defined function allocations and all the required data exchanges between team members can also be constructed (step 4). Coordination rules then need to be defined and allocated to team members to specify exactly the conditions under which each team member has to perform the back-up of a function and where to send its output (step 5). Different candidate designs can be obtained at this point and need to be evaluated and compared. A Colored Petri Net model of each candidate design can be constructed in step 6, to be used in the evaluation phase in step 7. This process can be seen as a three stage process: an analysis stage corresponding to steps 1 to 4, a synthesis stage corresponding to steps 5 and 6, and an evaluation stage corresponding to step 7. This paper addresses the first six steps. For the evaluation phase, see Perdu (1997).

Figure 8. Methodology for Adaptive Team Design and Evaluation

Step 1: Construct the Process Model. The starting point of the methodology is to generate the Operational Concept that will drive the design. The Operational Concept specifies what the team is supposed to do, the type of tasks it will carry out, the missions it will execute, and how it will do them. An Operational Concept corresponds to the broadest requirements. From the Operational Concept, a Functional Decomposition is carried out: functions are identified and further decomposed into subfunctions. The set of subfunctions has to be mutually exclusive and possibly exhaustive, so as to characterize and cover the different aspects of the function. This decomposition process can then be applied further to the subfunctions until each subfunction corresponds to an elementary task that can be performed by a single team member. The Process Model is derived by specifying the data exchanged between subfunctions. It can be represented by a Petri Net that shows how the functions interact for the execution of a mission. The process model is acyclic. It shows the flow of information through the system, from inputs from the environment to outputs to the environment. A Process Model can be constructed for each mission and for each input pattern. The Petri Nets obtained then need to be folded together into a single structure that depicts all the functions and interactions necessary for the execution of the set of missions. The resulting Functional Architecture is a variable one and is represented by a Colored Petri Net. If a function appears in several missions, the coordination constraint (Lu and Levis, 1992) needs to be checked to make sure that there is no ambiguity as to which mission is performed and where to send the output of the function. If a problem is identified, the process model needs to be modified.
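The folding step can be pictured as taking the union of the per-mission nets while recording, for each function, the missions in which it appears. The sketch below is a simplified illustration under that reading; the class, the arc representation, and the example missions are assumptions, not the paper's formalism.

from dataclasses import dataclass, field

@dataclass
class ProcessModel:
    """Acyclic Petri net process model: functions are transitions, data items are
    places, arcs record which function consumes or produces which item."""
    transitions: set = field(default_factory=set)   # function names
    places: set = field(default_factory=set)        # data item names
    arcs: set = field(default_factory=set)          # (source, target) pairs

def fold(models):
    """Fold the per-mission process models into a single structure; the returned
    colour map records, for each function, the missions that use it (a function
    appearing in several missions must satisfy the coordination constraint)."""
    folded, colours = ProcessModel(), {}
    for mission, m in models.items():
        folded.transitions |= m.transitions
        folded.places |= m.places
        folded.arcs |= m.arcs
        for t in m.transitions:
            colours.setdefault(t, set()).add(mission)
    return folded, colours

# Two hypothetical missions sharing function f2:
m_a = ProcessModel({"f1", "f2"}, {"a", "b", "c"}, {("a", "f1"), ("f1", "b"), ("b", "f2"), ("f2", "c")})
m_b = ProcessModel({"f2", "f3"}, {"d", "b", "e"}, {("d", "f3"), ("f3", "b"), ("b", "f2"), ("f2", "e")})
folded, colours = fold({"mission A": m_a, "mission B": m_b})
print(colours["f2"])   # {'mission A', 'mission B'} -> coordination constraint must be checked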

Step 2: Perform a Redundant Function Allocation. Adaptation can only take place if there is a way to transfer responsibilities from one team member to another. It is enabled by allocating the same function to several team members. Since a function cannot be performed simultaneously by several team members, an order of allocation of each function to the different team members has to be defined. This order/priority assignment can be the result of an analysis of the anticipated load on each team member, or can depend on the ability and/or qualifications of the team member to perform the function.

Step 3: Construct the Mode Transition Graph. A mode of operation is defined as an allocation of each function to a team member. If n functions need to be performed and if the k-th function is allocated to ik team members in the responsibility definition matrix, there are i1 × i2 × ... × in different possible modes of operation. The order of back-up implementation for each function introduces a partial ordering. It can be shown that the set of all possible modes of operation with this partial ordering constitutes a lattice, that is, a partially ordered set in which any two elements have a greatest lower bound and a least upper bound. This lattice can be represented graphically by a Hasse diagram called the Mode Transition Graph. Each node corresponds to a mode of operation. A directed link between two nodes indicates that a function has been transferred from one team member to another in the order specified in the responsibility definition matrix. The lower bound of the lattice is the normal mode of operation; the upper bound is the back-up mode in which each function has been allocated to the team member with the lowest priority in the responsibility definition matrix. Figure 9 shows the mode transition graph for a small example with five decision makers and five basic functions. Each mode has been assigned a label X.Y, where X indicates the number of back-up operations that have occurred in the mode of operation and Y enumerates the modes having the same number of back-ups. This graph presents all the possible modes of operation that can be used by the team. However, only a subset of these modes is desirable, and adaptation is the process of morphing from one desired mode of operation to another. The "undesired" modes of operation are transitional modes and can be seen as in-between frames of the morphing process. A step in the morphing process corresponds to the transfer of a function from one team member to another and to an arrow in the Mode Transition Graph. Therefore, the morphing is represented by an undirected path in the graph from the node representing the source mode of operation to the target mode of operation.
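A minimal sketch of how the modes and the single-transfer links of such a graph could be enumerated from a responsibility definition matrix; the dictionary format and the example allocation below are illustrative assumptions, not the example of Figure 9.

from itertools import product

def modes_of_operation(responsibility):
    """Enumerate all modes of operation from the responsibility definition matrix.

    responsibility : dict function -> ordered list of team members that can
                     perform it (index 0 is the baseline, higher indices are
                     successive back-ups).
    Each mode assigns one team member (by back-up index) to every function;
    there are i1 * i2 * ... * in of them.
    """
    functions = sorted(responsibility)
    choices = [range(len(responsibility[f])) for f in functions]
    return [dict(zip(functions, combo)) for combo in product(*choices)]

def transitions(modes):
    """Directed edges of the Mode Transition Graph: a link goes from mode m to
    mode m2 when exactly one function moves one step down its back-up order."""
    edges = []
    for m in modes:
        for m2 in modes:
            diffs = [f for f in m if m[f] != m2[f]]
            if len(diffs) == 1 and m2[diffs[0]] == m[diffs[0]] + 1:
                edges.append((m, m2))
    return edges

# Hypothetical example: three functions, two of them with one back-up each.
resp = {"detect": ["DM1", "DM2"], "assess": ["DM2"], "respond": ["DM3", "DM1"]}
modes = modes_of_operation(resp)
print(len(modes), len(transitions(modes)))   # 4 modes, 4 single-step transfers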

Step 4: Construct the Fully Connected Graph. To the responsibility definition matrix corresponds a Petri Net, called the Fully Connected Graph, that represents the complete allocation of functions to team members and the exchange of data between them for the execution of these functions. This net is generated as follows. The different decision makers are represented as rounded boxes. The functions allocated to the team members, as defined by the responsibility definition matrix, are represented as transitions inside these rounded boxes. Every place of the Process Model net is replicated as many times as necessary in the fully connected graph to represent the corresponding interactions between team members. Source places are replicated as many times as there are instances of the transitions to which they are the input. A label is associated with each source place to indicate which team member receives the corresponding item of information. This label has the prefix 0 to indicate that the item comes from the environment. For example, 01a indicates that the data "a" from the environment is received by DM1.

Figure 9. Mode Transition Graph for the Example

Sink places are replicated as many times as there are instances of transitions having output places that are not input to any transition. A label is associated with each sink place to indicate which team member produces the team output. This label has 0 as its second character to indicate that the data is sent to the environment. For example, 50h indicates that DM5 produces the team output h. Interaction places are replicated as many times as there are instances of the pairs of transitions they are connected to. A specific label is defined for each place that specifies the team member sending the corresponding data, the decision maker receiving it, and the content of the message: 12d indicates that the data d is sent by DM1 to DM2. The resulting fully connected graph represents all the processes and data exchanges required for the mission to be performed in the different modes of operation. This net is acyclic by construction. Figure 10 shows the fully connected graph for an example.

Figure 10. Fully Connected Graph for the Example
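A minimal sketch of the place replication and labelling convention described above, assuming the process model is given as (producer, data item, consumer) triples and the allocation lists the members that may perform each function; all names and data are illustrative.

def interaction_places(process_arcs, allocation):
    """Replicate the places of the Process Model for every pair of team members
    that may exchange the corresponding data item.

    process_arcs : set of (producer_function, data_item, consumer_function) triples,
                   with None standing for the environment
    allocation   : dict function -> list of team member numbers that may perform it
    Returns labels of the form '<sender><receiver><item>', e.g. '12d' when DM1 may
    send data item 'd' to DM2 (0 denotes the environment).
    """
    labels = set()
    for producer, item, consumer in process_arcs:
        senders = allocation.get(producer, [0])      # 0 = environment (source place)
        receivers = allocation.get(consumer, [0])    # 0 = environment (sink place)
        for s in senders:
            for r in receivers:
                # Labels such as '22d' correspond to a member performing both functions itself.
                labels.add(f"{s}{r}{item}")
    return labels

# Hypothetical fragment: the environment feeds 'a' to f1 (DM1 or DM2), which
# produces 'd' for f2 (DM2 or DM3); f2 outputs 'h' to the environment.
arcs = {(None, "a", "f1"), ("f1", "d", "f2"), ("f2", "h", None)}
alloc = {"f1": [1, 2], "f2": [2, 3]}
print(sorted(interaction_places(arcs, alloc)))
# ['01a', '02a', '12d', '13d', '20h', '22d', '23d', '30h']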

In a given mode of operation, only a subset of the transitions and their interconnected places are active. Figure 11 shows in bold the active places and transitions in the normal mode of operation. Figure 12 shows the active places and transitions for the back-up mode 1.3. One can see by looking at Figures 11 and 12 that the transfer of responsibility from some team members to others leads to important changes in the pattern of interactions between team members. There is a need to allocate to team members, along with functions, rules that will help them know under what conditions each function needs to be performed and where the output of that function needs to be sent. The derivation of these coordination rules is the topic of the next step. To complete the design of an adaptive team, coordination rules need to be defined to help each team member decide when to perform a function and where to direct the output of a function. These coordination rules specify which interactions of the Fully Connected Graph are active according to the current situation and trigger the transition from one mode of operation to another in the Mode Transition Graph.

Step 5: Derive the Coordination Rules. The coordination scheme used by a team can be of two types: centralized process coordination and distributed process coordination. The former requires a supervisor who keeps track of the state of the system and the environment and makes adjustments in the responsibility distribution among team members by issuing configuration switching commands. The coordination rules allocated to team members then take the received switching commands into account in the triggering of the function to be performed. However, this coordination scheme is hardly implementable in the context of decision making teams with strong time constraints. The basic assumption for the implementation of distributed process coordination and the derivation of the coordination rules is that the team members react only to the messages they receive. The presence of all items of information required for the execution of a function triggers its execution. The transfer of responsibility for a function is triggered by an event, detected by one or several team members, that forces the team to adapt. Coordination rules specify what to do in case such an event is detected. This detection is local, and the coordination rules have to be derived so that the activity of the team remains consistent. Such trigger events can be: (a) the arrival of an item of information not required for the function to be performed in the current mode of operation; (b) the arrival of a message indicating that a subsequent function for a task cannot be performed by a fellow team member; (c) the non-arrival of an acknowledgment after a certain time-out; or (d) the non-arrival of an item of information necessary for the execution of a function within a specified time interval after receiving a first item of information required for its execution. Key to distributed process coordination is the idea that the team member in charge of performing the back-up of a function needs the different items of information necessary for its execution. To accomplish that, different back-up implementation strategies can be defined:

Sender Initiated (SI). A team member that has performed a function sends the output to another team member; the recipient, too overloaded to perform the function within a reasonable delay, "refuses" the message and notifies the sender (or sends no acknowledgment). Upon reception of the refusal message, or when the time-out for reception of the acknowledgment has expired, the sender then sends the item of information to the team member in charge of the back-up of this function, as defined in the mode transition graph.

Receiver Initiated (RI). The overloaded team member, receiving an item of information necessary for a function, sends an acknowledgment to the sender and forwards this item to the team member in charge of the back-up of this function. The sender-initiated strategy remains in place for the cases where no acknowledgment is received after a certain time-out.

Broadcast Strategy. An information item produced by a function executed by a team member is sent to all the team members who are able to perform the subsequent function in any mode of operation. As a result, all the items of information necessary for the execution of this function are provided to several team members, who can then execute it. To avoid the multiple execution of a function for the same task by different team members, the execution of this function has to be conditioned on the reception of a triggering coordination message. This triggering coordination message is generated by the team member who produced the new information item and is sent to the team member in charge of performing the subsequent function in the baseline allocation. If this team member is overloaded, the transfer of responsibilities is then performed through the transfer of this coordination message among team members, using either the Sender-Initiated strategy or the Receiver-Initiated strategy.

Other. Other strategies can be constructed, such as, for example, the sender keeping track of what he sent to a specific team member and, forecasting that the recipient will be overloaded, sending the message directly to the team member responsible for the back-up of the function.

Different back-up implementation strategies will result in different sets of rules, adding another dimension to the problem. The number of different modes of operation grows combinatorially with the number of back-ups assigned to each function. A systematic approach to deriving coordination rules that respond to the overloading of team members has been developed; details of the approach are provided in Perdu (1997). After the candidate back-up implementation strategies to be used by the team to transfer responsibilities have been determined, the coordination rules for the SI strategy are derived from the Fully Connected Graph and the Mode Transition Graph. The rules for the other selected back-up implementation strategies are then derived from the rules for the SI strategy by appropriately transferring rules from one team member to another and changing the labels of the messages.
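As an illustration of the Sender Initiated strategy only, the sketch below walks the back-up order until some team member acknowledges the item; the function, the answer encoding, and the example data are assumptions, not part of the methodology's formal rule set.

def send_with_backup(item, recipients, deliver):
    """Sender Initiated (SI) back-up sketch: offer the item to each recipient in
    back-up order (as given by the mode transition graph); move on to the next
    one when the recipient refuses or no acknowledgment is received in time.

    deliver : callable(recipient, item) -> 'ack', 'refuse', or None
              (None stands for an expired acknowledgment time-out)
    """
    for recipient in recipients:
        if deliver(recipient, item) == "ack":
            return recipient          # this member keeps (or takes over) the function
    return None                       # no member could take the function

# Hypothetical run: DM2 is overloaded and refuses, DM4 (its back-up) accepts.
answers = {"DM2": "refuse", "DM4": "ack"}
print(send_with_backup("d", ["DM2", "DM4"], lambda r, i: answers.get(r)))  # -> DM4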

Step 6: Construct the Model of the Team. A generic Colored Petri Net (Jensen, 1992) model of the team member has been developed and is fully described in Perdu (1997). This model is used to construct the model of an adaptive team using Hierarchical Colored Petri Nets, where substitution transitions can replace subnets that are drawn on other subpages, and different substitution transitions can then refer to the same subpage, leading to the creation of different instances of the same subpage. Each instance behaves independently of the others. The model of the adaptive team is then constructed by drawing five substitution transitions corresponding to the team members. The specifics of each instance of the team member model, that is, its ID number and the rules allocated to it, are specified inside each instance.

6. Conclusion

Organizations with adaptive structures have been discussed. Types of adaptation and triggers for them have been presented. The use of adaptive organizations has been motivated by the need to achieve performance not achievable by any single fixed structure organization. Two approaches were then outlined: in the first one, an organization adapts by switching among predetermined fixed structures; no consideration is given to the transition itself. In the second approach, the transition itself from one structure to another through a sequence of intermediate steps (incremental adaptation or morphing) is the key issue. This is one part of a methodology for team design, which goes from the specification of the functional requirements to the evaluation of alternate candidate designs. This methodology considers not only the redundancy of processes required for adaptation to take place, but also how the transfer of responsibility is carried out.

7. Acknowledgment

This work was supported by the US Office of Naval Research under contract No. N00014-93-1-0912.

8. References

[Demaël and Levis, 1994] J. J. Demaël and A. H. Levis. "On Generating Variable Structure Architectures for Decision Making Systems," Information and Decision Technologies, Vol. 19, 1994, pp. 233-255.

[Jensen, 1992] K. Jensen. Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use. Springer-Verlag, Berlin, Germany, 1992.

[Levis, 1988] A. H. Levis. "Quantitative Models of Organizational Information Structures," in Concise Encyclopedia of Information Processing in Systems and Organizations, A. P. Sage, Ed., Pergamon Books Ltd., Oxford, 1988.

[Lu and Levis, 1992] Zhuo Lu and A. H. Levis. "Coordination in Distributed Decision Making," Proc. 1992 IEEE International Conference on Systems, Man, and Cybernetics, Chicago, Illinois, October 18-21, 1992, pp. 891-897. IEEE, 1992.

[Monguillet and Levis, 1993] J.-M. Monguillet and A. H. Levis. "Modeling and Evaluation of Variable Structure Organizations," in Toward a Science of Command, Control, and Communications, Carl R. Jones, Ed., AIAA Press, Washington, DC, 1993.

[Perdu, 1997] D. Perdu. Distributed Process Coordination in Adaptive Command and Control Teams. Report GMU/C3I-184-TH, Ph.D. Dissertation, C3I Center, George Mason University, Fairfax, VA, 1997.

[Perdu and Levis, 1998] D. Perdu and A. H. Levis. "Adaptation as a Morphing Process: A Methodology for the Design and Evaluation of Adaptive Organizational Structures," Computational and Mathematical Organization Theory, 1998 (to appear).

[Remy and Levis, 1988] P. Remy and A. H. Levis. "On the Generation of Organizational Architectures Using Petri Nets," in Advances in Petri Nets 1988, G. Rozenberg, Ed., Springer-Verlag, Berlin, 1988.

[Stabile and Levis, 1984] D. A. Stabile and A. H. Levis. "The Design of Information Structures: Basic Allocation Strategies for Organizations," Large Scale Systems, Vol. 6, 1984, pp. 123-132.