Gaze-contingent Depth of Field
In my opinion eye tracking is a key technology for virtual and augmented reality HMDs. The hardware is relatively cheap, and gaze can practically serve as another form of input. Successfully utilizing gaze tracking increases immersion and makes interacting with objects easier.
Many games use depth of field to increase immersion, but for a truly lifelike experience in a virtual reality head-mounted display a simple depth-of-field effect is not enough. Gaze-contingent depth of field, on the other hand, simulates vision effects such as vergence and focus much more closely to real life, which reduces eye strain and increases immersion.
The video above shows my implementation of such a depth-of-field effect, built with Tobii EyeX and Unity3D.
The user can switch between two versions of the gaze-contingent depth of field.
The first version blurs objects both closer to and farther from the camera than the focal plane.
The second version is more lifelike: the DOF is only noticeable when the user focuses on a nearby object.
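The core of both modes is a mapping from the gaze-derived focus distance to a per-object blur amount. Below is a minimal sketch of that logic in Python rather than Unity C#; the function name, aperture constant, and falloff formulas are my own illustrative choices, not taken from the project shown in the video.

```python
def blur_amount(object_distance, focus_distance, mode, aperture=0.05, focus_free_range=0.5):
    """Return a blur radius (arbitrary units) for an object at object_distance
    when the eye is focused at focus_distance.

    mode 1: blur grows with defocus on both sides of the focal plane.
    mode 2: more lifelike -- blur is only prominent when the user
            focuses on a nearby object (falls off with focus distance).
    """
    defocus = abs(object_distance - focus_distance)
    if defocus <= focus_free_range:
        # Within the in-focus band around the focal plane: no blur.
        return 0.0
    if mode == 1:
        # Symmetric blur: near and far objects both defocus equally.
        return aperture * (defocus - focus_free_range)
    # Mode 2: scale blur down as the focus distance grows, so the
    # effect is only visible when focusing on something close.
    return aperture * (defocus - focus_free_range) / focus_distance
```

In a Unity project the focus distance itself would come from raycasting along the gaze direction reported by the eye tracker and feeding the hit distance into the depth-of-field effect each frame.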
Having an eye tracking solution in a virtual reality HMD also brings the possibility of more personalized LOD.
Currently, Level of Detail in games is selected based on the distance from the object to the camera: the greater the distance, the lower the object's polygon count. But when the player looks at the background, they may notice LOD popping or other visual artifacts in the distant scenery.
With an eye tracking solution a new form of LOD can be achieved, one that prioritizes objects close to the point the player is looking at. Objects far from the player's gaze need not be rendered at the highest LOD even when they are close to the camera. Combined with gaze-contingent depth of field, this kind of LOD further masks LOD popping and improves performance, since only the objects being observed are rendered at maximum quality.
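One way to combine the two criteria is to pick the LOD level from camera distance as usual, then demote objects that fall outside a cone around the gaze direction. The following Python sketch illustrates this idea; the distance bands, cone angle, and level count are illustrative assumptions, not values from the project described above.

```python
import math

def lod_level(obj_pos, cam_pos, gaze_dir, max_lod=3,
              distance_bands=(10.0, 30.0, 60.0), gaze_cone_deg=10.0):
    """Pick a LOD level (0 = full detail) for an object.

    The conventional distance-based level is demoted by one when the
    object lies outside a cone around the (normalized) gaze direction,
    since objects the player is not looking at need less detail.
    """
    dx = [o - c for o, c in zip(obj_pos, cam_pos)]
    dist = math.sqrt(sum(v * v for v in dx))
    # Conventional distance-based LOD: one level per band crossed.
    level = sum(dist > band for band in distance_bands)
    # Angle between the gaze direction and the direction to the object.
    cos_angle = sum(d * g for d, g in zip(dx, gaze_dir)) / (dist or 1.0)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    if angle > gaze_cone_deg:
        level += 1  # outside the foveal region: drop one detail level
    return min(level, max_lod)
```

An object straight ahead and close to the camera keeps full detail, while the same object off to the side of the gaze is demoted one level, which is exactly the masking effect described above.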