Although a number of methods have been proposed for predicting the 3D motion of an object in a distributed virtual environment, most of them are designed for specific objects and assume particular motion behaviors. We note that in desktop distributed 3D applications, such as virtual walkthroughs and computer games, the 2D mouse is still the most popular navigation input device. By studying the motion behavior of the mouse during 3D navigation, we have proposed a hybrid motion model for predicting mouse motion during a 3D walkthrough.
We have implemented three methods for comparison with the actual user navigation movement, testing each by predicting 1, 3 and 5 steps ahead. In general, our motion prediction method is a short-term prediction scheme; we are currently investigating a longer-term prediction scheme, which we hope will further improve the prediction accuracy. The methods compared are listed below, with a sketch of the two baseline predictors after the list:
- actual - the actual user navigation movement
- P1 - our method
- P2 - a predictor using the Gauss-Markov process
- P3 - a second-order polynomial predictor
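For concreteness, here is a minimal Python sketch of the two baseline predictors, P2 and P3, assuming uniformly sampled 2D mouse positions. The Gauss-Markov correlation constant `alpha` is an assumed tuning value, not one from our experiments, and our own hybrid motion model (P1) is not reproduced here.

```python
import numpy as np

def predict_sop(p0, p1, p2, n):
    """P3, second-order polynomial: extrapolate n steps ahead from the
    last three uniformly sampled positions (oldest first) using
    finite-difference estimates of velocity and acceleration."""
    v = p2 - p1                # first difference ~ velocity
    a = p2 - 2 * p1 + p0       # second difference ~ acceleration
    return p2 + n * v + 0.5 * n * n * a

def predict_gm(p1, p2, n, alpha=0.9):
    """P2, Gauss-Markov: model velocity as a first-order Gauss-Markov
    process, v[k+1] = alpha * v[k] + noise, and extrapolate along the
    expected (noise-free) trajectory. alpha is an assumed value here."""
    v = p2 - p1
    # expected displacement is v * (alpha + alpha^2 + ... + alpha^n)
    return p2 + v * alpha * (1 - alpha ** n) / (1 - alpha)

# Example: predict 1, 3 and 5 steps ahead from three mouse samples.
p = [np.array([0.0, 0.0]), np.array([1.0, 0.4]), np.array([2.1, 0.9])]
for n in (1, 3, 5):
    print(n, predict_sop(p[0], p[1], p[2], n), predict_gm(p[1], p[2], n))
```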
We have also noticed that finger motion exhibits similar motion characteristics to mouse motion. We have recently extended our mouse motion prediction method to hand motion prediction by considering the static and dynamic constraints of finger motion, as illustrated in the sketch below.
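To illustrate what such constraints can look like (the exact constraint set of our method is not reproduced here), the sketch below clamps each predicted joint angle to a static range and then applies the dynamic inter-joint coupling theta_DIP <= (2/3) theta_PIP commonly used in hand modelling; the joint names and angle ranges are illustrative assumptions.

```python
# Illustrative static flexion limits (degrees) for one finger; these
# ranges are assumptions for the sketch, not our measured values.
STATIC_LIMITS = {"MCP": (0.0, 90.0), "PIP": (0.0, 110.0), "DIP": (0.0, 80.0)}

def clamp(angle, lo, hi):
    return max(lo, min(hi, angle))

def constrain_prediction(predicted):
    """Post-process predicted joint angles so the hand pose stays
    anatomically plausible: clamp each joint to its static range, then
    enforce the dynamic DIP/PIP coupling (theta_DIP <= 2/3 * theta_PIP)."""
    out = {j: clamp(a, *STATIC_LIMITS[j]) for j, a in predicted.items()}
    out["DIP"] = min(out["DIP"], (2.0 / 3.0) * out["PIP"])
    return out

# Example: an unconstrained extrapolation that overshoots the limits.
print(constrain_prediction({"MCP": 97.0, "PIP": 60.0, "DIP": 70.0}))
# -> {'MCP': 90.0, 'PIP': 60.0, 'DIP': 40.0}
```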
Recently, we have also started looking at the accuracy of motion prediction. We have performed two sets of experiments to evaluate the new hand motion prediction method. Both sets compare four predictors against the actual motion:
- actual - the actual user hand motion captured at the predicted time (reference)
- SOP - the Second Order Polynomial predictor
- GMM - the Gauss-Markov Model predictor
- HM - the Hybrid Motion Model predictor (without constraints)
- EHM - the Constraint-Based Hybrid Model predictor (the new method)
The first set of experiments compares the accuracy of the four predictors at different prediction lengths, i.e., how far ahead in time we attempt to predict (one way to compute such an error measure is sketched after this list):
- File predict_100ms.mov: 0.1s
- File predict_300ms.mov: 0.3s
- File predict_500ms.mov: 0.5s
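One plausible way to compute such an accuracy figure is the mean Euclidean error between predicted and actual positions at a given prediction length. The sketch below assumes a 60 Hz capture rate and a predictor with the interface from the earlier sketch; both are assumptions for illustration, not our actual experimental setup.

```python
import numpy as np

def mean_prediction_error(track, predictor, length_s, rate_hz=60):
    """Average Euclidean error of `predictor` over a recorded track.
    track:     (T, D) array of positions sampled at rate_hz
    predictor: f(p0, p1, p2, n) -> position n samples ahead
    rate_hz:   assumed capture rate (not necessarily our setup)."""
    n = int(round(length_s * rate_hz))     # prediction length in samples
    errs = [np.linalg.norm(predictor(track[k - 2], track[k - 1], track[k], n)
                           - track[k + n])
            for k in range(2, len(track) - n)]
    return float(np.mean(errs))

# Example, using predict_sop from the earlier sketch on synthetic data:
# track = np.cumsum(np.random.randn(600, 3), axis=0)
# for length in (0.1, 0.3, 0.5):
#     print(length, mean_prediction_error(track, predict_sop, length))
```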
The second set of experiments shows the application of the four predictors in the classic game "Rock, Paper, Scissors". The following video demos compare the prediction accuracy of the four predictors during interaction. The hand on the left is the local hand (connected to the local host), while the hand on the right is the remote hand, whose gesture is being predicted. The prediction length was set at 0.5s.
- File rps_actual.mov: using the original gestures (reference)
- File rps_sop_500ms.mov: using the Second Order Polynomial predictor
- File rps_gmm_500ms.mov: using the Gauss-Markov Model predictor
- File rps_hm_500ms.mov: using the Hybrid Motion Model predictor
- File rps_ehm_500ms.mov: using the Constraint-Based Hybrid Model predictor (the new method)