T(ether) is a novel spatially aware display that supports intuitive interaction with volumetric data. The display acts as a window affording users a perspective view of three-dimensional data through tracking of head position and orientation. T(ether) creates a 1:1 mapping between real and virtual coordinate space, allowing immersive exploration of the joint domain. Our system creates a shared workspace in which co-located or remote users can collaborate in both the real and virtual worlds. The system allows input through capacitive touch on the display and a motion-tracked glove. When placed behind the display, the user’s hand extends into the virtual world, enabling the user to interact with objects directly.
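The description above does not spell out the rendering math, but head-tracked "window" displays like this are commonly implemented with an off-axis (asymmetric) perspective frustum computed from the tracked head position relative to the physical screen. A minimal sketch in JavaScript (matching the project's NodeJS stack); the coordinate conventions here are assumptions, not the project's actual code — the screen is taken to be centered at the origin in the z = 0 plane, with the viewer at positive z:

```javascript
// Sketch: asymmetric frustum bounds for a head-tracked window display.
// Because of the 1:1 real-to-virtual mapping, real-world units (e.g. meters)
// map directly to scene units. All names and conventions are illustrative.
function offAxisFrustum(head, screenW, screenH, near) {
  // Project the vectors from the eye to the screen edges onto the near plane.
  const scale = near / head.z; // head.z = eye-to-screen distance
  return {
    left:   (-screenW / 2 - head.x) * scale,
    right:  ( screenW / 2 - head.x) * scale,
    bottom: (-screenH / 2 - head.y) * scale,
    top:    ( screenH / 2 - head.y) * scale,
    near,
  };
}

// A centered head yields a symmetric frustum; as the head moves, the
// frustum skews so the virtual scene stays registered behind the screen.
const f = offAxisFrustum({ x: 0, y: 0, z: 0.5 }, 0.2, 0.15, 0.01);
// here f.left === -f.right and f.bottom === -f.top
```

These four bounds are exactly the parameters an asymmetric projection matrix (e.g. a glFrustum-style call) expects, which is why this formulation is a natural fit for tablet-side rendering.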
T(ether) is a spatially and body-aware window for collaborative editing and animation of 3D virtual objects. Through T(ether), three-dimensional objects can be viewed and edited together with other users with unprecedented ease. A simple pinch gesture, performed with a motion-tracked glove, creates and manipulates virtual objects.
Object manipulation; shape selection (sphere, mesh, cube, …); hand tracking.
All objects in the scene can be animated with keyframes. Interactions above the screen allow users to scrub through time using the pinch gesture.
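The project does not specify its animation model, but keyframe scrubbing of this kind typically samples object properties by interpolating between the two keyframes that bracket the scrub time. A minimal sketch, assuming sorted keyframes and linear interpolation of a position vector (both assumptions, not details from the source):

```javascript
// Sketch: sample a keyframed property at scrub time t.
// keys: [{ time, value }] sorted by time; value is a numeric array (e.g. xyz).
function sampleKeyframes(keys, t) {
  if (t <= keys[0].time) return keys[0].value;                // clamp before first key
  if (t >= keys[keys.length - 1].time) return keys[keys.length - 1].value; // clamp after last
  for (let i = 1; i < keys.length; i++) {
    if (t <= keys[i].time) {
      const a = keys[i - 1], b = keys[i];
      const u = (t - a.time) / (b.time - a.time); // normalized position between keys
      return a.value.map((v, k) => v + u * (b.value[k] - v));
    }
  }
}

// Usage: an object keyed at the origin at t=0 and at x=1 at t=2.
const keys = [
  { time: 0, value: [0, 0, 0] },
  { time: 2, value: [1, 0, 0] },
];
sampleKeyframes(keys, 1); // halfway through: [0.5, 0, 0]
```

A pinch-drag above the screen would then simply map drag distance to `t` and re-sample every animated object each frame.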
T(ether) supports multiple simultaneously connected devices, enabling collaboration at scale.
T(ether) uses Vicon motion capture cameras to track the position and orientation of the tablets and of users’ heads and hands. Server-side synchronization is implemented in NodeJS, and the tablet-side code uses Cinder. The synchronization server forwards tag locations to each of the tablets over Wi-Fi, which in turn render the scene. Touch events on each tablet are broadcast to all other tablets through the synchronization server.
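The broadcast pattern described above can be sketched as follows. This is a hedged illustration, not the project's actual server: transport details are omitted, and clients are modeled as plain objects with a `send()` callback standing in for whatever socket the real system uses.

```javascript
// Sketch: a synchronization server that relays each tablet's events
// to every other connected tablet (the broadcast pattern in the text).
class SyncServer {
  constructor() { this.clients = new Set(); }
  connect(client) { this.clients.add(client); }
  disconnect(client) { this.clients.delete(client); }
  // Forward an event from `sender` to all other clients.
  broadcast(sender, event) {
    for (const client of this.clients) {
      if (client !== sender) client.send(event);
    }
  }
}

// Usage: three tablets; a touch on tablet A reaches B and C, but not A.
const server = new SyncServer();
const makeTablet = (id) => ({ id, inbox: [], send(e) { this.inbox.push(e); } });
const [a, b, c] = [makeTablet("A"), makeTablet("B"), makeTablet("C")];
[a, b, c].forEach((t) => server.connect(t));
server.broadcast(a, { type: "touch", x: 120, y: 80, from: "A" });
```

Tag locations from the motion capture system could be pushed through the same relay, with each tablet rendering the shared scene from its own tracked pose.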