Motion Prediction for Distributed Virtual Environments

Although several methods have been proposed for predicting the 3D motion of an object in a distributed virtual environment, most of them are designed for specific objects and assume particular motion behaviors. We observe that in desktop distributed 3D applications, such as virtual walkthrough and computer games, the 2D mouse remains the most popular navigation input device. By studying the motion behavior of the mouse during 3D navigation, we propose a hybrid motion model for predicting mouse motion during a 3D walkthrough.
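The exact form of the hybrid model is not given here. As an illustrative sketch only, a hybrid mouse predictor might blend a first-order (constant-velocity) extrapolation with a second-order (acceleration) term; the blend weight `alpha` is a hypothetical tuning parameter, not part of the published model:

```python
# Illustrative sketch only: the actual hybrid model is not specified in this text.
# Blend first-order (velocity) and second-order (acceleration) extrapolation
# of recent 2D mouse samples.

def hybrid_predict(samples, alpha=0.5):
    """Predict the next 2D mouse position from the last three samples.

    samples: list of (x, y) tuples, most recent last.
    alpha:   hypothetical blend weight between the two extrapolations.
    """
    (x0, y0), (x1, y1), (x2, y2) = samples[-3:]
    # First-order: assume constant velocity.
    vx, vy = x2 - x1, y2 - y1
    p1 = (x2 + vx, y2 + vy)
    # Second-order: include a constant-acceleration term.
    ax, ay = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0
    p2 = (x2 + vx + 0.5 * ax, y2 + vy + 0.5 * ay)
    # Blend the two extrapolations.
    return (alpha * p1[0] + (1 - alpha) * p2[0],
            alpha * p1[1] + (1 - alpha) * p2[1])
```

For purely linear motion the two terms agree; they diverge only when the recent samples show acceleration.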

The predicted mouse motion is then mapped to the 3D virtual environment to determine the 3D motion of the viewer or of any object in the environment. The initial results of this prediction method are very encouraging; our experiments show that it is more accurate than the popular prediction methods available.
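The mouse-to-3D mapping depends on the application; the text does not specify one. As an assumed example, a common walkthrough mapping turns horizontal mouse movement into viewer yaw and vertical movement into pitch (the `sensitivity` value is hypothetical):

```python
import math

# Assumed mapping (not from the text): horizontal mouse movement rotates the
# viewer (yaw), vertical movement tilts it (pitch).

def mouse_to_view(dx, dy, yaw, pitch, sensitivity=0.002):
    """Convert a predicted mouse delta (pixels) into new view angles (radians)."""
    yaw += dx * sensitivity
    # Clamp pitch so the viewer cannot flip upside down.
    pitch = max(-math.pi / 2, min(math.pi / 2, pitch + dy * sensitivity))
    return yaw, pitch
```

Any predicted mouse delta, rather than the measured one, can be fed through the same mapping to move the remote viewer ahead of the network delay.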
We have implemented three methods for comparison against the actual user navigation movement, and tested each by predicting 1, 3 and 5 steps ahead.
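Predicting several steps ahead is typically done by iterating a one-step predictor and feeding its own output back in, then scoring the result against the recorded motion. A minimal sketch of that evaluation loop (the helper names are ours, not from the text):

```python
def predict_k_steps(predict_one, history, k):
    """Iterate a one-step predictor k steps ahead, feeding predictions back in."""
    samples = list(history)
    for _ in range(k):
        samples.append(predict_one(samples))
    return samples[-1]

def rmse(predicted, actual):
    """Root-mean-square error over paired 2D points."""
    n = len(predicted)
    return (sum((px - ax) ** 2 + (py - ay) ** 2
                for (px, py), (ax, ay) in zip(predicted, actual)) / n) ** 0.5
```

Errors compound across iterations, which is why accuracy normally degrades from 1-step to 5-step prediction.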
In general, our motion prediction method is a short-term prediction scheme. We are currently investigating a longer-term prediction scheme, which we hope will further improve the accuracy of our prediction method.

We have also noticed that finger motion has motion characteristics similar to mouse motion. We have recently extended the mouse motion prediction method to hand motion prediction by considering the static and dynamic constraints of finger motion.
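The constraint values used in our method are not listed here. As an illustrative sketch with assumed numbers, a static constraint bounds each finger joint angle to its anatomical range, while a dynamic constraint bounds how far the angle can move between frames:

```python
# Illustrative sketch with assumed limits: static constraints bound the joint
# angle itself; dynamic constraints bound its per-frame change.

def constrain_joint(predicted, previous, lo=0.0, hi=1.6, max_step=0.3):
    """Clamp a predicted joint angle (radians) to static and dynamic limits.

    lo/hi:    static joint-angle range (hypothetical values).
    max_step: maximum per-frame change (hypothetical dynamic constraint).
    """
    # Dynamic constraint: bound the change since the last frame.
    step = max(-max_step, min(max_step, predicted - previous))
    # Static constraint: bound the resulting angle.
    return max(lo, min(hi, previous + step))
```

Applying such a clamp after an unconstrained predictor is one simple way to keep predicted hand poses physically plausible.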

We have performed two sets of experiments to evaluate the new hand motion prediction method. The first set compares the accuracy of the four predictors at different prediction lengths (how far ahead in time we attempt to predict). The four predictors, together with the actual motion used as reference, are:
  • actual: the actual user hand motion captured at the predicted time (reference)
  • SOP: the Second Order Polynomial predictor
  • GMM: the Gauss-Markov Model predictor
  • HM: the Hybrid Motion Model predictor (without constraints)
  • EHM: the Constraint-Based Hybrid Model predictor (the new method)
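For context, the two baseline predictors have standard one-dimensional forms, sketched below under assumed formulations (the experiments' actual parameters are not given here): SOP extrapolates a second-order polynomial through the last three samples, and the Gauss-Markov model lets velocity decay exponentially toward zero.

```python
import math

# Illustrative 1D forms of the two baseline predictors (assumed formulations).

def sop_predict(x0, x1, x2):
    """Second Order Polynomial: extrapolate one step from three samples."""
    v = x2 - x1            # first difference (velocity)
    a = x2 - 2 * x1 + x0   # second difference (acceleration)
    return x2 + v + 0.5 * a

def gmm_predict(x, v, dt, beta=1.0):
    """Gauss-Markov: expected state after dt, with velocity decay rate beta."""
    decay = math.exp(-beta * dt)
    v_next = v * decay
    # Position integrates the exponentially decaying velocity over dt.
    x_next = x + v * (1 - decay) / beta
    return x_next, v_next
```

The hybrid (HM) and constraint-based hybrid (EHM) predictors build on such components; their exact combination is described in the associated papers.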
The second set of experiments shows the application of the four predictors in the classic game "Rock, Paper, Scissors". The following video demos compare the prediction accuracy of the four predictors during interaction. The hand on the left is the local hand (connected to the local host), while the hand on the right is the remote hand, whose gesture is predicted. The prediction length was set to 0.5s.
Recently, we have also started looking at the accuracy of motion prediction.


Last updated in March 2010.