Evaluation of a Pre-Reckoning Algorithm for Distributed Virtual Environments

Xiaoyu Zhang, Denis Gračanin, Thomas P. Duncan
Virginia Tech, Department of Computer Science, 7054 Haycock Road, Falls Church, VA 22043
{zhangxy, gracanin, thduncan}@vt.edu

Abstract

The recently introduced pre-reckoning algorithm is an alternative to the traditional dead reckoning algorithm used in DIS-compliant distributed virtual environments. Before the algorithm can be applied to distributed virtual environments, a detailed evaluation is needed. The pre-reckoning algorithm and the general dead reckoning algorithm are implemented within a simple distributed virtual environment system, CUBE. A comparison between the two algorithms is provided, and their performance and accuracy are evaluated. The results indicate that the pre-reckoning algorithm models the actual trajectory of an entity controlled by a remote host much more accurately than the general dead reckoning algorithm. Moreover, the pre-reckoning algorithm improves accuracy while generating fewer state update packets. Based on the demonstrated performance, pre-reckoning has the potential to improve the scalability of distributed virtual environments and enhance the consistency of users' views of the dynamic shared state.

1. Introduction

In the design of distributed virtual environments, there is a fundamental trade-off between the consistency of each user's view of the dynamic shared state and the throughput of the environment, which is correlated with the environment's ability to keep pace with frequent state changes. This design issue is referred to as the Consistency-Throughput Trade-off because a designer must often sacrifice consistency to operate within an acceptable throughput level [7]. The concept of dead reckoning (DR) was introduced to reduce the network traffic required to manage the dynamic shared state. When a DR algorithm is used, update packets are only issued to notify peer hosts of a change in status when a prescribed error threshold has been exceeded. A peer host predicts entities' current states between status update packets. The prediction is based on the most recent updates of an entity's behavior. When a status update packet arrives, the host corrects its depiction of the entity's state through the use of a convergence algorithm. Network latency results in an inconsistent view of the dynamic shared state, which is an inherent shortcoming of the DR algorithm. For this reason, the DR algorithm has been widely applied in virtual environments with a large number of active entities where some inconsistency can be tolerated. Dead reckoning is one of the features of the Distributed Interactive Simulation (DIS) protocol [5]. Numerous improvements of the standard algorithm have been proposed. These improvements focus on prediction modeling, adjusting the threshold that triggers the generation of entity state update packets, and smoother rendering. Position History-Based Dead Reckoning (PHBDR) is a hybrid method which uses recent position updates to track the behavior of entities controlled by remote hosts [6]. Based on the recent position history, PHBDR adaptively selects the order of the derivative polynomial used to predict the entity's movement. Another approach uses an adaptive, multi-level DR algorithm [3]. The adaptive multi-level threshold is selected according to the area of interest (AOI), the sensitive region (SR), and the distance between entities. The advantage of the algorithm is that it reduces the total number of update packets, decreases the error, and improves collision detection. Another variation of the standard DR algorithm is to tailor the algorithm to the behavior of an entity [7]. The entity's physical model and movement characteristics are taken into account when generating the entity's state. These methods provide more accurate prediction.
Since DR algorithms allow for error to improve throughput, the accuracy of the prediction is an important performance measure, particularly when the DR algorithm is used for military simulations and real-time system simulations. The pre-reckoning algorithm is proposed as a variation of the standard DR algorithm that decreases prediction errors without significantly increasing network traffic [4]. The algorithm seeks to anticipate a potential violation of the error threshold and issues a status update packet immediately rather than waiting for the threshold to be violated. While this can result in update packets being generated unnecessarily, the objective of the algorithm is to eliminate foreseeable error with a negligible increase in update packets. The initial assessment of the pre-reckoning algorithm demonstrates its potential to outperform the standard DR algorithm, but the experiments are too limited in scope to draw more general conclusions [4]. This paper describes the implementation and evaluation of the pre-reckoning algorithm in a simple distributed virtual environment. The remainder of the paper is organized as follows. Section 2 provides an overview of the pre-reckoning algorithm. Section 3 describes the algorithm and Section 4 provides an overview of its implementation within CUBE, a multi-user distributed virtual environment system. Section 5 evaluates the performance and accuracy of the pre-reckoning algorithm based on the experimental results. Section 6 concludes the paper and provides directions for future work.

Proceedings of the Tenth International Conference on Parallel and Distributed Systems (ICPADS'04) 1521-9097/04 $20.00 IEEE

2. Pre-reckoning Algorithm

Like the standard DR algorithm, the pre-reckoning algorithm comprises two parts: prediction and convergence. The convergence part of the pre-reckoning algorithm has not been tailored, so it can employ any convergence algorithm. The implementation uses an "accelerated" convergence algorithm as the default [4]. The algorithm is referred to as "accelerated" because it allows the predicted entity to move at a faster rate until convergence is achieved. The local host predicts not only the motion of the entities controlled by remote hosts, but also the motion of the entity that it controls. An entity controlled directly by the local host has two models: one for the exact movement (which is controlled by user input), and the other a simulation of the prediction model executed by the remote host. The pre-reckoning algorithm assumes that the entity will continue as directed by the most recent user manipulation, eventually violating the error threshold due to an unpredictable change in behavior. In that case the controlling host issues an entity state update packet before the error threshold is actually violated. This enriched prediction portion of the pre-reckoning algorithm is what distinguishes it from the DR algorithm. The pre-reckoning algorithm can generate more accurate results because the update packets are generated and delivered earlier than by the DR algorithm, which prevents additional errors. However, if the actual entity reverts to predictable behavior, the pre-reckoning algorithm generates an unnecessary update. This is an acknowledged shortcoming of the pre-reckoning algorithm, but it is considered to be a trade-off for the reduction in prediction error. The assessment results based on a two-dimensional maze [4] show that the pre-reckoning algorithm actually issues fewer entity state update packets than the DR algorithm because the pre-reckoning algorithm does not have to perform convergence as frequently. The DR algorithm continues to issue update packets because the convergence portion of the algorithm cannot catch up with the actual position within the error threshold. The angle of embrace (or angle, for short) is also an important threshold for the pre-reckoning algorithm. The angle is defined based on the three most recent position updates [6]. Whenever the angle becomes acute, which indicates that the entity is making a sharp turn, an immediate update is issued. As a result, the pre-reckoning algorithm is especially suitable for processing three types of behavior: when an entity begins to move, when a moving entity comes to a stop, and when an entity makes a sharp turn. However, it is difficult to predict when the angle threshold will be violated. The "pre-reckoning" is the most important feature of the algorithm when used for distributed virtual environments. The pre-reckoning algorithm can be considered a threshold degeneration method because when the update packet is sent, the distance between the predicted position and the actual position seldom exceeds the value of the error threshold. The effect of the algorithm is somewhat like using a smaller threshold; however, the reduced threshold is not a fixed one. The value of the equivalent threshold is altered according to the most recent state of the entity. Figure 1 illustrates how the pre-reckoning algorithm works.
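The angle-of-embrace test described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation; the function names and the default handling of degenerate cases are assumptions. The angle is computed at the middle of the three most recent position updates, so straight-line motion yields 180 degrees and a sharp turn yields a small (acute) angle:

```python
import math

def embrace_angle(p0, p1, p2):
    """Interior angle (degrees) at p1 formed by three consecutive
    position updates p0 -> p1 -> p2. Straight-line motion gives 180,
    a full reversal gives 0."""
    u = tuple(a - b for a, b in zip(p0, p1))
    v = tuple(a - b for a, b in zip(p2, p1))
    nu = math.sqrt(sum(c * c for c in u))
    nv = math.sqrt(sum(c * c for c in v))
    if nu == 0.0 or nv == 0.0:
        return 180.0  # degenerate: no movement, treat as no turn
    cos_t = sum(a * b for a, b in zip(u, v)) / (nu * nv)
    # Clamp against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def angle_threshold_exceeded(p0, p1, p2, alpha):
    """The angle-of-embrace threshold alpha is exceeded when the
    interior angle falls below alpha, i.e. the turn is sharp."""
    return embrace_angle(p0, p1, p2) < alpha
```

With this formulation, the paper's condition "the change in velocity direction exceeds 180 − α" is equivalent to the interior angle dropping below α.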


Figure 1. Threshold Equivalent Example

In Figure 1, the upper trajectory represents the actual movement of an entity controlled by user input on the local host. The lower trajectory shows the modeled behavior of the same entity on a remote host. When the entity is in position 2, the modeled entity is estimated to be in position 2'. The entity changes direction with a new speed (and possibly a new rate of acceleration). The pre-reckoning algorithm assumes the entity will move to position 3' and predicts the future position based on the most recent change in the trajectory. If the difference between the actual position and the modeled position is between 3a and 3c, an update packet will not be sent. If the entity moves fast, it may pass position 3c, and an update packet is issued. If the entity slows down and does not reach position 3a, an update packet is issued. Sudden changes in speed and direction suggest the potential for a future violation of the error threshold. The generation of entity state update packets by the pre-reckoning algorithm is based not only on the position, but also on the velocity and the time interval of the position update. The pre-reckoning algorithm can also be adapted to address changes in acceleration. Using the first-order derivative polynomial as the basis for the prediction model, the threshold can be thought of as a function f(g(V, t), h(v, t)), rather than the general threshold, which can only be expressed as f(h(v, t)). The function g predicts the entity's position on the controlling host. The function h predicts the entity's position on remote hosts. The variable V represents the predicted velocity of the entity when directly controlled by the user. The variable v represents the predicted velocity of the entity as simulated by the remote host. The results of the experiments show the potential performance improvement yielded by the pre-reckoning algorithm compared to the DR algorithm [4]. The pre-reckoning algorithm is effective when the actual entity makes an abrupt change in trajectory and continues on that path, because it immediately issues an entity state update and prevents additional error from accumulating. When the movement of the actual entity changes smoothly, the pre-reckoning algorithm is somewhat less effective, but it still outperforms the DR algorithm.
However, it is important to note that these findings are based on a minimal set of experiments using a two-dimensional maze environment with pre-scripted movements modeled by a single host. The results presented in this paper represent the continued evaluation of the pre-reckoning algorithm, employing a three-dimensional, real-time, multi-user distributed virtual environment.
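The first-order prediction model and the distance-threshold test described above can be sketched as follows; this is an illustrative sketch with assumed function names, not the authors' code:

```python
import math

def predict(pos, vel, dt):
    """First-order dead-reckoning extrapolation:
    p(t + dt) = p(t) + v(t) * dt."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def within_threshold(actual, predicted, threshold):
    """Distance-threshold test shared by both algorithms. The DR
    algorithm applies it to the current state; the pre-reckoning
    algorithm applies it to the *next* predicted state, so the update
    can be issued before the violation actually occurs."""
    err = math.sqrt(sum((a - b) ** 2 for a, b in zip(actual, predicted)))
    return err <= threshold
```

Applying `within_threshold` to `predict(ghost_pos, ghost_vel, dt)` rather than to `ghost_pos` itself is the "pre" in pre-reckoning.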

3. Algorithm Description and Implementation

The implementation of the pre-reckoning algorithm requires that the local host maintain two models of each entity controlled by that host. One is the model of the actual behavior based on the user input; the other is the model of the predicted behavior as simulated by remote hosts. The remote hosts execute the same predictive model and initiate convergence toward the actual position (or a forecasted position) when an entity state update is received. The controlling host must account for network latency to model the remote host's convergence. The following additional terminology supports the description of the modeling environment:

Player: the model used to render the entity and display its movement on the local host.

Ghost: the model that simulates the player's behavior on the remote host.

In the pre-reckoning algorithm, there are two separately computed thresholds for determining when an entity state update should be issued:

• The distance between the actual position and the modeled position.

• The angle defined by the last two changes in the trajectory, as measured by the corresponding velocity vectors.

If the distance between the player's and ghost's positions is larger than the predefined value, the distance threshold is exceeded. If the angle corresponding to the ghost's future changes in velocity is larger than µ, where µ = 180° − α, then the angle of embrace threshold α is said to be exceeded. If no threshold is exceeded for a given amount of time, the controlling host issues a heartbeat packet with the current entity state. The accelerated convergence algorithm is employed as in [4]. The steps for implementing the pre-reckoning algorithm in a distributed virtual environment are as follows:

1. Initialize the entities' states, the distance threshold, the angle threshold, and the heartbeat timer.

2. Read data from the network. If there is an update packet for some entity, store the state information from the packet in the entity's ghost and perform convergence on its player. If there is no update packet, advance both the player and the ghost to the next predicted state.

3. Read input from devices and change the player's state accordingly.

4. Predict the movement of the entity's ghost and determine whether a threshold will be exceeded in the next state. If a threshold is going to be exceeded, apply the convergence algorithm to the ghost and send an update packet to the network. If no threshold will be violated, move the ghost to the next predicted state.

5. If an update packet is sent, reset the timer for the heartbeat packet and perform convergence on the entity's ghost. If an update packet was not sent, decrement the timer for the heartbeat packet.

6. Render and display a new frame on the screen.

7. Repeat steps 2 to 6.

See [4] for more information on the convergence algorithm.
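The steps above can be sketched as a minimal single-entity loop. This is a hypothetical reconstruction, not the authors' implementation: convergence is simplified to snapping the ghost to the player state (the real algorithm uses accelerated convergence [4]), only the distance threshold is checked, and the class and parameter names are assumptions:

```python
import math

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PreReckoningHost:
    """Minimal sketch of steps 1-5 for one locally controlled entity."""

    def __init__(self, pos, vel, distance_threshold=2.0,
                 heartbeat_frames=150):
        self.player_pos = pos          # exact, user-controlled model
        self.player_vel = vel
        self.ghost_pos = pos           # prediction model run by remote hosts
        self.ghost_vel = vel
        self.distance_threshold = distance_threshold
        self.heartbeat_frames = heartbeat_frames
        self.timer = heartbeat_frames
        self.updates_sent = []

    def frame(self, input_vel, dt):
        # Step 3: apply user input to the player model.
        self.player_vel = input_vel
        self.player_pos = tuple(p + v * dt for p, v in
                                zip(self.player_pos, self.player_vel))
        # Step 4: extrapolate the ghost and pre-check the threshold
        # against the *next* predicted state, not the current one.
        next_ghost = tuple(p + v * dt for p, v in
                           zip(self.ghost_pos, self.ghost_vel))
        if (dist(self.player_pos, next_ghost) > self.distance_threshold
                or self.timer <= 0):
            # Step 5: issue the update before the violation occurs,
            # then "converge" the ghost (simplified to a snap here).
            self.updates_sent.append((self.player_pos, self.player_vel))
            self.ghost_pos, self.ghost_vel = self.player_pos, self.player_vel
            self.timer = self.heartbeat_frames
        else:
            self.ghost_pos = next_ghost
            self.timer -= 1
```

Driving the loop with a straight run followed by a sharp turn shows the intended behavior: no updates while the ghost tracks the player, then a single update issued as soon as the predicted next state would violate the threshold.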


4. System Implementation

The architecture of the system is depicted in Figure 3.

CUBE is a three-dimensional multi-user game engine, representative of a typical distributed virtual environment [1]. It uses frequent state regeneration to manage the dynamic shared state. CUBE is a single-threaded system. In each loop, it reads the input devices and the network, sends update packets, and generates and renders a new frame on the display. The physical simulation module and the collision detection module are automatically applied, which enhances the realism of the virtual environment. CUBE is an open source program, which makes it feasible to implement the pre-reckoning algorithm and the DR algorithm. The default frequent state regeneration is replaced with implementations of both the pre-reckoning algorithm and the DR algorithm to allow for comparison. Both algorithms have similar implementations (Figure 2); the only difference is how the threshold violation is detected. The same accelerated convergence algorithm is employed for both algorithms. The prediction model uses first-order derivative polynomials. The velocity is predicted based on the velocity in the previous frame. The frame rate is guaranteed to be greater than 30 frames per second. The heartbeat time limit is set at 5 seconds. The virtual world of CUBE is 256 × 256 × 256 units; the maximum speed of the avatar is 22 units/sec, the diameter of the avatar is 2.2 units, and its height is 3.9 units. The distance threshold is evaluated for values ranging from zero to three units (in half-unit increments) and the angle threshold is evaluated in 20-degree increments from 20 up to 120 degrees.
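Under the stated parameter ranges, the experimental sweep works out to 42 threshold combinations; a small sketch (variable names are assumptions) enumerates them:

```python
# Distance thresholds: 0.0 to 3.0 units in half-unit increments (7 values).
distance_thresholds = [i * 0.5 for i in range(7)]

# Angle thresholds: 20 to 120 degrees in 20-degree increments (6 values).
angle_thresholds = list(range(20, 121, 20))

# Every (distance, angle) pair is one pre-reckoning test case.
cases = [(d, a) for d in distance_thresholds for a in angle_thresholds]
```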

Figure 3. The architecture of the experimental system

All test cases are based on the same sequence of movements through the virtual environment. The sequence of movements is recorded as a time series referred to as the record session. The counterpart of the record session is the replay session. The replay session ensures that the clients perform the same behaviors as they did during the record session. The three associated data sets for an observed entity are compiled for the various threshold values. The analysis based on the data sets for the two algorithms is described in the next section. Each replay session lasts 30 seconds. During the replay session, about 1200 frames are rendered by the local host, with an average frame rate of 40 frames/sec. The frame rate on the remote host is stable at 60 frames/sec, with about 1800 frames rendered per replay session.

Figure 2. Flowcharts for two algorithms

The system implementation is designed so that both algorithms are tested under the same conditions and the results are implementation independent. Three data sets are required to evaluate the algorithms' performance for each active entity: the actual positions on the local host, the update packets transmitted across the network, and the predicted positions on remote hosts. A pipe server is used to handle the log operations.

5. Experimental Results and Evaluation

Performance and accuracy are the criteria used to evaluate variations of the DR algorithm. Performance can be measured quantitatively by resource utilization. The information principle, Resources ≈ H × M × T × B × P, indicates the relationship between performance and resources [7]. Reducing resource utilization achieves higher scalability and better performance. Compared to the DR algorithm, the pre-reckoning algorithm has the same values of H (average number of destination hosts for each packet), B (average amount of network bandwidth required for a packet to each destination), P (number of processor cycles required to process each packet), and T (the timeliness with which the network must deliver packets to each destination). The difference in M (number of packets transmitted in the network) is the main criterion when comparing the two algorithms. The accuracy describes how well the predicted positions match the actual ones. Positional accuracy and behavioral accuracy are two measurements for the visual representations of entities [6]. Positional accuracy indicates the position error on the remote host, and behavioral accuracy describes the level of resemblance of the velocity and acceleration on the remote host. Because acceleration does not play an important role for an entity in our implementation, we do not evaluate the behavioral accuracy of the two algorithms. We use three metrics for measuring positional accuracy. The first is the maximum error between the two trajectories. This is also known as the Hausdorff distance [2]: "The Hausdorff distance simply assigns to each point of one set the distance to its closest point on the other and takes the maximum over all these values." However, the trajectory in a virtual environment is a time series, so the maximum value over all distances between the two positions having the same timestamp is used as the maximum error criterion. The maximum error is critically important for those simulations where the error should be within a controllable range. The second metric, the average error, is the average distance between the two trajectories. The average error is widely used to rate how accurately the predicted trajectory approximates the actual trajectory. The third metric is the standard deviation error between the two trajectories. This criterion represents the shape similarity between the two trajectories.
The trajectory with a small standard deviation error has a shape similar to the shape of the actual trajectory. In the implementation, each entity's position is recorded per frame. Based on the records, a trajectory is drawn according to the recorded positions and time intervals. Any point between two consecutive timestamps can be calculated by linear interpolation. In our evaluation, $f_o$ represents the function for the original trajectory on the local host and $f_p$ represents the function for the predicted trajectory on the remote host. $|f_o(t_i) - f_p(t_i)|$ is the distance between the two positions at time $t_i$. The start time is $t_s$ and the end time is $t_e$. Between $t_s$ and $t_e$, the time interval is divided into $N$ segments, $t_0 = t_s, t_1, \ldots, t_{N-1}, t_N = t_e$, with $t_i = t_s + i\,\frac{t_e - t_s}{N}$. The maximum error $M$ is

$$M = \max_{i \in \{0, 1, \ldots, N\}} |f_o(t_i) - f_p(t_i)|$$

The average error $E$ is

$$E = \frac{1}{N+1} \sum_{i=0}^{N} |f_o(t_i) - f_p(t_i)|$$

The standard deviation error $D$ is

$$D = \sqrt{\frac{\sum_{i=0}^{N} \left(|f_o(t_i) - f_p(t_i)| - E\right)^2}{N+1}}$$
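The three metrics can be computed directly from the sampled trajectories; the following is a minimal sketch (the function name is an assumption; `math.dist` requires Python 3.8+):

```python
import math

def positional_errors(f_o, f_p, t_s, t_e, n):
    """Maximum (M), average (E), and standard-deviation (D) error
    between an actual trajectory f_o and a predicted trajectory f_p,
    sampled at the N + 1 evenly spaced instants t_i in [t_s, t_e]."""
    ts = [t_s + i * (t_e - t_s) / n for i in range(n + 1)]
    d = [math.dist(f_o(t), f_p(t)) for t in ts]   # |f_o(t_i) - f_p(t_i)|
    m = max(d)
    e = sum(d) / (n + 1)
    dd = math.sqrt(sum((x - e) ** 2 for x in d) / (n + 1))
    return m, e, dd
```

For two parallel trajectories a constant distance apart, M and E equal that distance and D is zero, matching the definitions above.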

5.1. Test Cases

Figure 4 shows the trajectory of the replay session. The trajectory describes the entity's actual movement on the local host. Figure 5 depicts part of the predicted trajectory on the remote host using the pre-reckoning algorithm and the predicted trajectory using the DR algorithm, where x ∈ [80, 105], y ∈ [45, 80], and z ∈ [8, 12]. The angle threshold of the pre-reckoning algorithm is 100 degrees and its distance threshold is 2 units. The distance threshold of the DR algorithm is 2 units.

Figure 4. Actual trajectory on the local host

Figure 5-(a) shows both the actual trajectory (dotted line) and the predicted trajectory (solid line) based on the pre-reckoning algorithm. Figure 5-(b) shows the actual trajectory (dotted line) and the predicted trajectory based on the DR algorithm (solid line). The diagram shows that the turns in the trajectory based on the DR algorithm are sharper than those of the pre-reckoning algorithm. The reason is that the DR algorithm detects turns later than the pre-reckoning algorithm. The trajectory based on the pre-reckoning algorithm contains more turns than that of the DR algorithm because the angle threshold enables it to detect more turns when the turn angle is sharp. The convergence algorithm will not always easily correct the prediction error when collision detection is applied. If the prediction error becomes too large, the number of update packets increases and the error remains large when the convergence algorithm cannot correct it.


In Figure 7, the legend "Pre-reckoning: 20" stands for the case when the angle threshold is 20 degrees. The number of update packets decreases as the distance threshold increases. Figure 8 shows the number of update packets as a function of both thresholds in a 3D diagram.


Figure 5. Details on parts of three trajectories


Figure 7. Update packets of both algorithms


For example, in Figure 6, the predicted entity "falls" at the position (80, 70, 12). In the virtual world the entity cannot catch up to the actual position because it is blocked by a wall. This situation is more likely to occur when the error threshold increases, for both algorithms. This situation is called the unrecoverable predicted error. When evaluating the performance of both algorithms, only those test cases in which the convergence algorithm can correct the predicted error are compared. When evaluating the accuracy of the two algorithms, all test cases are taken into account.


Figure 8. Pre-reckoning algorithm: number of update packets


Figure 6. Unrecoverable predicted error

5.2. Performance evaluation

Update packets are issued when either a threshold is exceeded or the heartbeat timer expires. Figure 7 illustrates the number of update packets generated by the pre-reckoning algorithm.

The results indicate that the number of update packets decreases as the distance threshold increases. When the threshold is larger than a certain value, however, the number of update packets increases again. As the error threshold grows, the probability of a violation decreases; but if the error threshold is too large, it takes the predicted entity more time to correct the error, and during the correction it is easier to violate the threshold again. An increase in the angle threshold reduces the number of update packets, since the angle threshold improves the accuracy of the prediction.



In the DR algorithm experiments, the distance threshold is the only threshold. Figure 7 also illustrates the number of update packets for the DR algorithm (bold line). The figure shows that, for the same distance threshold, the DR algorithm requires more update packets than the pre-reckoning algorithm.

5.3. Accuracy


We use M, E, and D to compare the accuracy of the two algorithms (N = 1000). The sample rate of the replay session is 300 samples/sec. The pre-reckoning algorithm has shown better accuracy than the DR algorithm.


Figure 10. Average error


Figure 9. Maximum error


Figures 9, 10, and 11 show the maximum error, average error, and standard deviation error as the distance threshold and angle threshold increase. The corresponding results for the DR algorithm are shown with the bold line. The maximum error, average error, and standard deviation error become smaller as the angle threshold grows larger or the distance threshold gets smaller. Figure 9 contains some abnormally large values. For example, when the distance error threshold is 1 and the angle threshold is 20, the maximum error is 6.4. This value is larger than the value obtained even when the distance threshold is 3.0. The unrecoverable predicted error makes the error value temporarily reach a high value, but it does not have much impact on the average error.

Figure 11. Standard deviation error

The average error and the standard deviation error of the DR algorithm are larger than those of the pre-reckoning algorithm with the same distance error threshold. These results show that the trajectory produced by the pre-reckoning algorithm is closer to the actual trajectory and approximates its shape better. Even though there are some abnormally large maximum error values in some cases of the pre-reckoning algorithm, the average error of the pre-reckoning algorithm is always smaller than that of the DR algorithm. In our experiments, the unrecoverable predicted error causes large maximum error and standard deviation error. For example, when the error threshold is 0.5 and the angle threshold is 60, both the maximum error and the standard deviation error are larger than those of the DR algorithm. However, the occurrence of the unrecoverable predicted error is not due to the algorithm; it depends on the scenario and how the entity moves in the virtual environment. In most cases we can see from Figure 11 (especially when the error threshold is between 1.5 and 3) that the standard deviation error of the pre-reckoning algorithm is less than that of the DR algorithm. We can conclude that with the same distance threshold, even with a larger angle threshold, the pre-reckoning algorithm is more accurate than the DR algorithm. The unrecoverable predicted error can be reduced by decreasing the distance error threshold. Reducing the distance threshold generates more update packets, which help the predicted entity arrive at the correct position in time. Increasing the angle threshold may not help much to reduce the unrecoverable predicted error even though it reduces the prediction error. For example, Figure 12 depicts the movement when applying the DR algorithm and the pre-reckoning algorithm. The solid line represents the actual trajectory and the dashed line represents the predicted trajectory. If the entity moves into the pool, it will not correct the error and return. The left half of the figure shows that the predicted entity using the standard DR algorithm moves to C until it gets the update packet. The entity then corrects the predicted error without moving into the pool. In the case of the pre-reckoning algorithm, the predicted entity turns at point A because the angle at point A is sharp enough to violate the angle threshold. However, that is not the case at point B. The entity reaches the unrecoverable area before it gets the update packet that would make convergence possible. The ability to avoid an unrecoverable error is usually not sufficient to properly evaluate the accuracy of the algorithm, because the occurrence of an unrecoverable predicted error depends on the shape of the trajectory and the physical conditions in the environment.

Figure 12. Example of an unrecoverable predicted error

6. Conclusions

The results of the experiments show that the pre-reckoning algorithm achieves better performance and accuracy than the standard dead reckoning algorithm. The increased accuracy is achieved by predicting the entity's behavior in the virtual environment: the pre-reckoning algorithm detects the potential for an error threshold violation and issues a state update packet immediately, rather than waiting for the violation to occur. The better accuracy of the pre-reckoning algorithm makes it very promising for use in multi-user distributed virtual environments. However, a better understanding is needed of the relationships between the structure and shape of the virtual environment, the distance and angle thresholds, and the network traffic. Current research efforts are focused on performing extensive measurements and evaluation using different scenarios and different virtual environments. Given the performance results achieved by the pre-reckoning algorithm in the CUBE distributed virtual environment, continued research is warranted to examine a broader range of implementation conditions. The pre-reckoning algorithm shows significant promise for improving the scalability of distributed virtual environments while also improving the consistency of the participants' views of the dynamic shared state.

References

[1] Cube game engine. http://wouter.fov120.com/cube/, [accessed January 18, 2004].

[2] H. Alt and L. J. Guibas. Discrete geometric shapes: Matching, interpolation, and approximation: A survey.

[3] W. Cai, F. B. Lee, and L. Chen. An auto-adaptive dead reckoning algorithm for distributed interactive simulation. Proceedings of the Thirteenth Workshop on Parallel and Distributed Simulation, pages 82–89, May 1999.

[4] T. P. Duncan and D. Gračanin. Pre-reckoning algorithm for distributed virtual environments. Proceedings of the 2003 Winter Simulation Conference, pages 1086–1093, New Orleans, LA, 2003.

[5] IEEE. Standard for Distributed Interactive Simulation — Application Protocols, IEEE Std 1278.1-1995. The Institute of Electrical and Electronics Engineers, Inc., New York, March 26, 1996.

[6] S. Singhal and D. Cheriton. Using a position history based protocol for distributed object. Technical Report STAN-CS-TR-94-1505, Department of Computer Science, Stanford University, Feb. 1994.

[7] S. Singhal and M. Zyda. Networked Virtual Environments: Design and Implementation. ACM Press SIGGRAPH Series. Addison-Wesley, Reading, Massachusetts, 1999.
