VR Research Group

Spatial Updating

Sensory Calibration

Cue Combination

Image-based Navigation

Display Calibration

Representations

Touch and Vision




Spatial Updating

How do people update the direction of objects as they move? We tested this in a real-world setting and in VR and found large systematic errors that were repeatable across settings and across participants. Standard models of distorted spatial representation cannot account for these errors. We found that a model based on affine distortions of the scene gave the most accurate predictions of pointing errors. This shows that participants implemented a strategy based neither on the true geometry of space nor on a unique distorted representation. To explore the raw data, click here.
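As an illustration of what an affine-distortion model involves, the sketch below (Python, with placeholder values for the affine matrix and offset rather than our fitted parameters) shows how a predicted pointing direction can be computed by passing both the remembered target location and the current head position through the same affine map.

    import numpy as np

    def predicted_pointing_direction(target, head_position, A, t):
        """Pointing direction predicted by an affine-distortion model.

        Both the remembered target location and the current head position
        are passed through the same affine map x -> A @ x + t, and the
        predicted pointing direction is the unit vector between them.
        """
        target_d = A @ np.asarray(target, float) + t
        head_d = A @ np.asarray(head_position, float) + t
        v = target_d - head_d
        return v / np.linalg.norm(v)

    # Placeholder values only: a mild shear and compression of the ground plane.
    A = np.array([[1.0, 0.1, 0.0],
                  [0.0, 0.9, 0.0],
                  [0.0, 0.0, 1.0]])
    t = np.array([0.0, 0.2, 0.0])
    print(predicted_pointing_direction([2.0, 3.0, 0.0], [0.0, 0.0, 1.6], A, t))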

This project is funded by Microsoft Research, Cambridge.

Videos:


Peter is tested in the real-world setting.

A video depicting head position and pointing directions, generated in Matlab from data collected from a participant tested in virtual reality.

Video of the virtual reality stimulus.

Conference Presentations:



Sensory Calibration

Sensory cues have an unknown mapping to the properties of the world we wish to estimate and interact with. Despite this, our sensory systems exhibit expert skill in predicting how the world works, e.g. knowledge of mass, gravity and object kinematics.

Recent research from the lab has shown that sensory predictions, based on internal physical models of object kinematics, can be used to accurately calibrate visual cues to 3D surface slant. Participants played a game somewhat akin to a 3D version of the classic computer game Pong. The aim of the game was to adjust the slant of a surface online so that a moving checkerboard-textured ball would bounce off the surface and through a target hoop.

This shows how "high-level" knowledge of physical laws can be used to interpret and explain incoming sensory data.
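The kinematics the task relies on is the reflection of the ball's velocity about the surface normal. The sketch below (Python) shows an idealised, spin-free bounce as a function of surface slant; it is an illustration of the underlying physics, not the stimulus code used in the experiment.

    import numpy as np

    def bounce_velocity(v_in, slant_deg, restitution=1.0):
        """Velocity of a ball after bouncing off a planar surface.

        The surface is slanted about the x-axis by slant_deg degrees
        (0 = horizontal, z up). The velocity component along the surface
        normal is reversed (scaled by the restitution coefficient) while
        the tangential component is unchanged -- an idealised, spin-free bounce.
        """
        slant = np.radians(slant_deg)
        n = np.array([0.0, np.sin(slant), np.cos(slant)])  # unit surface normal
        v_in = np.asarray(v_in, float)
        v_n = np.dot(v_in, n) * n      # component along the normal
        v_t = v_in - v_n               # tangential component
        return v_t - restitution * v_n

    # A ball falling straight down rebounds vertically from a horizontal surface,
    # but is deflected sideways when the surface is slanted by 20 degrees.
    print(bounce_velocity([0.0, 0.0, -3.0], slant_deg=0))
    print(bounce_velocity([0.0, 0.0, -3.0], slant_deg=20))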

Further discussion of the results can be found here.

Videos:


Stimulus of the experiment with a non-spinning ball.

Stimulus of the experiment with a spinning ball.

Publication:



Cue Combination

A primary focus of the lab has been understanding how observers combine different sources of sensory information as they move naturally through their environment. Cue combination has been studied extensively with static observers, but rarely when an observer is free to move. It is this more natural situation that the lab is primarily interested in.

Perception of the world in an expanding room

A number of papers from the lab have examined people's perception of 3D attributes when the scene around them expands or contracts. Remarkably, people do not notice that anything odd is happening, even when the expansion or contraction is dramatic, e.g. a normal-sized room expanding to the size of a basketball court.

However, despite these surprising effects, observers' judgements can be well modelled as statistically optimal decisions given the available sensory data. This has important implications for understanding how humans represent the 3D layout of a scene.
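For readers unfamiliar with the model, the standard statistically optimal (maximum-likelihood) account weights each cue by its reliability, i.e. its inverse variance. The sketch below (Python, with purely illustrative numbers rather than values estimated in our experiments) combines a stereo/motion-based distance estimate with an estimate based on the assumption that the room has not changed size.

    import numpy as np

    def combine_cues(estimates, variances):
        """Reliability-weighted (maximum-likelihood) cue combination.

        Each cue is weighted by its inverse variance; the combined estimate
        has a variance no larger than that of the most reliable single cue.
        """
        estimates = np.asarray(estimates, float)
        variances = np.asarray(variances, float)
        weights = (1.0 / variances) / np.sum(1.0 / variances)
        combined = np.dot(weights, estimates)
        combined_variance = 1.0 / np.sum(1.0 / variances)
        return combined, combined_variance

    # Illustrative numbers only: stereo/motion parallax says the wall is 6 m away,
    # while the assumption that the room has stayed the same size says 3 m.
    estimate, variance = combine_cues(estimates=[6.0, 3.0], variances=[4.0, 0.25])
    print(estimate, variance)   # the result sits close to the more reliable cue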

How and why does the expanding room work?

The expanding room experiment exploits the unique attributes of virtual reality equipment. Just as in the film Inception, we can create virtual, physically impossible worlds that dynamically reconfigure as a person moves within them.

In the case of the expanding room, the centre of expansion is a point halfway between the eyes. This is important because it means that the retinal projection of a virtual object remains the same irrespective of its size. For example, a wall of the room may double its distance from you, but it will also double in size. Similarly, the moon is about 1/400th the size of the sun yet appears the same size in the sky, because the sun is approximately 400 times further away.
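This geometric point is easy to verify numerically. The sketch below (Python, assuming a 6.5 cm interocular separation) scales a point about the midpoint of the eyes and shows that its visual direction from that midpoint is unchanged, whereas the direction from the left eye changes slightly.

    import numpy as np

    def direction_from(eye, point):
        """Unit vector from an eye position to a point in the scene."""
        v = np.asarray(point, float) - np.asarray(eye, float)
        return v / np.linalg.norm(v)

    cyclopean = np.array([0.0, 0.0, 0.0])           # midpoint between the eyes
    left_eye = np.array([-0.0325, 0.0, 0.0])        # assumed 6.5 cm interocular distance

    point = np.array([1.0, 0.5, 3.0])               # a point on the virtual wall
    scaled = cyclopean + 2.0 * (point - cyclopean)  # the room doubles in size

    # Direction from the midpoint of the eyes is identical before and after scaling...
    print(direction_from(cyclopean, point), direction_from(cyclopean, scaled))
    # ...but the direction from the left eye changes slightly.
    print(direction_from(left_eye, point), direction_from(left_eye, scaled))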

As an observer walks around in the real room, the virtual room expands or contracts depending on their position. When they are in the middle of the real room, the real and virtual rooms are the same size, so the observer's feet (which they cannot see) are at the same height as the virtual floor. On the left of the real room, the virtual room is half its original size. On the right-hand side of the real room, the virtual room has expanded to double its original size. Thus, walking from the extreme left to the extreme right of the physical room, the observer will see a four-fold expansion of the virtual room.
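One simple way to implement this mapping from real position to virtual-room scale is a log-linear interpolation across the width of the real room, which gives a scale of exactly 1 at the midpoint. The sketch below (Python) uses that schedule with an assumed 6 m wide room; the exact function and dimensions are illustrative assumptions, not our implementation.

    import numpy as np

    def room_scale(x, room_width=6.0, min_scale=0.5, max_scale=2.0):
        """Virtual-room scale factor as a function of lateral position.

        x is the observer's lateral position in the real room, from 0 (left
        wall) to room_width (right wall). The scale is interpolated
        log-linearly so the middle of the room gives a scale of exactly 1.
        """
        frac = np.clip(x / room_width, 0.0, 1.0)
        return min_scale * (max_scale / min_scale) ** frac

    for x in [0.0, 3.0, 6.0]:
        print(x, room_scale(x))   # 0.5 at the left wall, 1.0 in the middle, 2.0 at the right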

The key point is that, despite this massive change in scale, the pattern of light falling on the observer’s retina is similar to that experienced by an observer walking through a static room, although the relationship between distance walked and image change is altered. Only stereopsis and motion parallax can give the observer any clue as to the size of the room they occupy.

These are demonstrably powerful cues under normal circumstances and contribute to our ability to grasp different-sized coffee mugs, navigate across furniture-filled rooms without colliding with tables, and catch balls.

Questions addressed by the lab's research in the expanding room include:
  • Why does the brain place so much weight on the assumption that the scene remains a constant size?
  • Why is the brain so willing to throw away correct information from other visual and non-visual cues?
  • Can we kick the brain into using the correct cues and ignoring the misleading ones?
  • Are judgements about different properties of the room mutually consistent?

Videos:


A participant is tested in an expanding room experiment.

Publications:



Image-based Navigation

View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual 'homing' task was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on the visual landmark configuration and the relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
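As a rough illustration of the view-based idea (and not one of the specific models tested here), the sketch below (Python) implements an average-landmark-vector style homing rule: the agent stores the bearings of the landmarks as seen from the goal, and on each step moves along the difference between the current and stored bearings, which points approximately towards the goal.

    import numpy as np

    def landmark_bearings(position, landmarks):
        """Unit direction vectors from a position to each landmark."""
        d = np.asarray(landmarks, float) - np.asarray(position, float)
        return d / np.linalg.norm(d, axis=1, keepdims=True)

    def homing_step(position, goal_bearings, landmarks, gain=0.5):
        """One step of an average-landmark-vector style homing rule.

        The mean difference between the landmark bearings seen now and those
        stored at the goal points approximately towards the goal, so the
        agent takes a small step along that difference vector.
        """
        current = landmark_bearings(position, landmarks)
        home_vector = (current - goal_bearings).mean(axis=0)
        return np.asarray(position, float) + gain * home_vector

    # Three landmarks on the ground plane, a goal location and a displaced start.
    landmarks = np.array([[0.0, 5.0], [4.0, 4.0], [-3.0, 3.0]])
    goal = np.array([0.0, 0.0])
    goal_bearings = landmark_bearings(goal, landmarks)

    position = np.array([2.0, -1.5])
    for _ in range(200):
        position = homing_step(position, goal_bearings, landmarks)
    print(position)   # ends up close to the goal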

Videos:


A participant is tested in our navigation experiment.

Publications:



Display Calibration

We present here our calibration method for head-mounted displays (HMDs). Unlike other calibration methods, our technique:

  • Works for see-through and non-see-through HMDs.
  • Fully models both the intrinsic and extrinsic properties of each HMD display (see the sketch after this list).
  • Supports optional modelling of non-linear distortions in the HMD display.
  • Is a fully automated procedure, requiring only a few discrete inputs from the operator. No need to wear the HMD and make difficult judgements with a 3D stylus!
  • Is quick and robust.
  • Requires no separate 3D calibration.
  • Delivers quantifiable results.
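For readers unfamiliar with the terminology, 'intrinsic' parameters describe the projection within each display (focal lengths and principal point) and 'extrinsic' parameters describe the display's pose relative to the tracked head. The sketch below (Python, with made-up parameter values) is a generic pinhole projection of that kind, included only to show what is being estimated; it is not our calibration procedure.

    import numpy as np

    def project(point_world, K, R, t):
        """Project a 3D point into display pixel coordinates.

        K is the 3x3 intrinsic matrix (focal lengths, principal point) and
        [R | t] the extrinsic pose mapping world coordinates into the
        display's frame. Non-linear distortion would be applied to the
        resulting image coordinates as an extra, optional step.
        """
        p_cam = R @ np.asarray(point_world, float) + t   # world -> display frame
        p_img = K @ p_cam                                # perspective projection
        return p_img[:2] / p_img[2]                      # homogeneous -> pixels

    # Made-up intrinsics and extrinsics, not values from a real HMD.
    K = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.array([0.032, 0.0, 0.0])   # e.g. a per-eye horizontal offset
    print(project([0.1, 0.0, 2.0], K, R, t))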

Publication:



Representations

The experimental work in the group is primarily aimed at determining the type of representations that observers might build up as they move around and carry out different tasks. There is now a wiki from our lab to explore issues in this area (click here for a link).

A number of discursive papers from the lab discuss alternatives to 3D reconstruction as a basis for visual representation.

Videos:

Publications:





Touch and Vision

Humans, like other animals, integrate information from multiple sensory modalities when making perceptual judgements about properties of the world. In collaboration with the Haptic Robotics group of Prof. William Harwin (School of Systems Engineering), work in the VR lab is focused on understanding how sensory information from vision and touch is integrated during interactive, naturalistic tasks. To do this, we use haptic force-feedback devices in conjunction with VR to allow people to touch, pick up and interact with 3D simulated objects.

Conference Presentation:
