Determining the angle of view through movement classification
Virtual reality applications open a portal to worlds distant in time and space. To create this illusion, sensors built into the VR goggles capture the movements of the user's head and body in space. The cost-effective inertial sensor technology in use today estimates the user's head orientation relative to a starting or anchor point in the room; with a single sensor, however, the actual angle of view cannot be captured without error. It is therefore important to combine the absolute position with the relative orientation reported by the head-mounted sensor system. The captured raw data is filtered and then used for feature extraction.
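The text does not specify which fusion method is used. A minimal sketch, assuming a complementary filter on a single axis (yaw), shows the idea: the drifting relative orientation integrated from a gyroscope is continuously pulled toward an absolute reference tied to the anchor point, so that the error stays bounded. All names and the blending factor `alpha` are illustrative assumptions, not the actual implementation.

```python
def complementary_yaw(gyro_rates, dt, absolute_yaws, alpha=0.98):
    """Fuse relative (gyro-integrated) yaw with an absolute yaw reference.

    gyro_rates    -- angular rates in rad/s, one sample per time step
    absolute_yaws -- absolute yaw relative to the room anchor, same length
    alpha         -- trust in the fast relative signal vs. the absolute one
    """
    yaw = absolute_yaws[0]
    fused = []
    for rate, abs_yaw in zip(gyro_rates, absolute_yaws):
        # integrate the relative measurement (accurate short-term, drifts long-term)
        yaw += rate * dt
        # blend toward the absolute reference to bound the drift
        yaw = alpha * yaw + (1 - alpha) * abs_yaw
        fused.append(yaw)
    return fused
```

With a constant gyro bias of 0.1 rad/s and a true yaw of zero, plain integration would drift by 0.1 rad over one second, while the fused estimate stays well below that.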
Based on this data, an algorithm uses machine learning techniques to classify how movements propagate from head to body. This movement classification avoids positioning errors by recognizing, and correcting, large deviations from the expected result early.
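The article does not name the classifier. As a hedged stand-in for a trained model, a nearest-centroid rule over two assumed features (head angular speed and torso translation speed) illustrates how extracted features could separate a head-only turn from a whole-body movement; the feature set, labels, and centroid values are purely hypothetical.

```python
def extract_features(head_rate, body_speed):
    # illustrative feature vector: [head angular speed, body translation speed]
    return (abs(head_rate), abs(body_speed))

def classify(features, centroids):
    # nearest-centroid classification: pick the label whose centroid is closest
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# hypothetical centroids, as if learned from labeled recordings
CENTROIDS = {
    "head_only": (1.5, 0.05),   # fast head turn, torso nearly still
    "whole_body": (0.8, 0.9),   # head and torso move together
}
```

A real system would replace the hand-picked centroids with parameters fitted to recorded sensor data, but the decision structure stays the same.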
Next, this learned, data-driven relationship between head and body movements, together with the computed movement trajectory, is translated into the correct direction of view. Throughout, the evaluation of the orientation of body parts is continuously repeated and refined: the algorithm learns to adapt predictively to successive movements, so that the correct direction of view and a pleasant image for the user can be fed into the VR goggles.
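The predictive adaptation described above can be sketched, under the assumption of a simple alpha-beta filter, as smoothing the estimated view yaw while extrapolating one step ahead, so the image fed to the goggles anticipates the ongoing movement. The gains `alpha` and `beta` are illustrative; the actual predictor is not specified in the text.

```python
def alpha_beta_view(measured_yaws, dt, alpha=0.5, beta=0.1):
    """Smooth a sequence of view-yaw estimates and predict one step ahead."""
    yaw, rate = measured_yaws[0], 0.0
    predictions = []
    for z in measured_yaws[1:]:
        # predict the current yaw from the previous state
        yaw_pred = yaw + rate * dt
        # correct both position and velocity using the residual
        residual = z - yaw_pred
        yaw = yaw_pred + alpha * residual
        rate = rate + (beta / dt) * residual
        # one-step-ahead view direction handed to the renderer
        predictions.append(yaw + rate * dt)
    return predictions
```

For a steady head turn (a linear yaw ramp), the filter converges so that each prediction lands on the next measurement, which is exactly the "pleasant picture" behavior the text describes.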