A still frame from a pre-recorded 3D video stream provided by the CITRIS tele-immersion lab at UC Berkeley. The people in the 3D video stream are dancers captured during a live joint performance between UC Berkeley and the University of Illinois at Urbana-Champaign. This screenshot was rendered using the naive reconstruction algorithm, i.e., by reprojecting depth pixels into world space and rendering them as fat points.
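As a rough illustration of this naive approach, the sketch below back-projects each valid depth pixel through a pinhole camera model and transforms it into world space. The structure names, the millimeter depth encoding, and the calibration format are assumptions made for illustration, not the actual stream format used by the tele-immersion lab.

```cpp
// Hypothetical sketch: back-project a depth image into world-space points,
// which are then drawn as fat GL points. Intrinsics/Pose are placeholders.
#include <vector>
#include <array>

struct Intrinsics { float fx, fy, cx, cy; };                      // pinhole model, in pixels
struct Pose { std::array<float, 9> R; std::array<float, 3> t; };  // camera-to-world
struct Point { float x, y, z; unsigned char r, g, b; };

std::vector<Point> reprojectDepthFrame(const unsigned short* depth,  // depth in mm, 0 = invalid
                                       const unsigned char* color,   // RGB, same resolution
                                       int width, int height,
                                       const Intrinsics& K, const Pose& P)
{
    std::vector<Point> points;
    points.reserve(width * height);
    for (int v = 0; v < height; ++v)
        for (int u = 0; u < width; ++u)
        {
            unsigned short d = depth[v * width + u];
            if (d == 0) continue;                 // skip pixels dropped during capture
            float z = d * 0.001f;                 // mm -> m
            // Back-project through the pinhole model into camera space:
            float xc = (u - K.cx) / K.fx * z;
            float yc = (v - K.cy) / K.fy * z;
            // Transform into world space with the camera's extrinsic pose:
            Point p;
            p.x = P.R[0]*xc + P.R[1]*yc + P.R[2]*z + P.t[0];
            p.y = P.R[3]*xc + P.R[4]*yc + P.R[5]*z + P.t[1];
            p.z = P.R[6]*xc + P.R[7]*yc + P.R[8]*z + P.t[2];
            const unsigned char* c = &color[(v * width + u) * 3];
            p.r = c[0]; p.g = c[1]; p.b = c[2];
            points.push_back(p);                  // later rendered as an unconnected point
        }
    return points;
}
```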
The same still frame, rendered using our current real-time triangulation algorithm. The spaces between depth pixels are filled in, with colors properly blended between neighboring pixels. The remaining artifacts are mostly due to a synchronization problem among the 12 individual depth image streams, and to pixels dropped during capture.
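The following is a minimal sketch of how an on-the-fly triangulation over a depth image grid might look: each 2x2 block of valid, depth-continuous pixels is split into two triangles, and the graphics hardware then blends the per-vertex colors across the faces. The per-pixel vertex-index array and the discontinuity threshold are illustrative assumptions, not the lab's actual implementation.

```cpp
// Hypothetical sketch of per-frame triangulation over the depth image grid.
// A depth-discontinuity threshold keeps foreground and background surfaces
// from being stitched together across silhouette edges.
#include <vector>
#include <cstdlib>

std::vector<unsigned int> triangulateDepthGrid(const unsigned short* depth,
                                               const int* vertexIndex,  // per-pixel index, -1 = invalid
                                               int width, int height,
                                               int maxDepthJump)        // e.g. 50 (mm)
{
    std::vector<unsigned int> indices;
    auto valid = [&](int i0, int i1, int i2) {
        return vertexIndex[i0] >= 0 && vertexIndex[i1] >= 0 && vertexIndex[i2] >= 0 &&
               std::abs(depth[i0] - depth[i1]) <= maxDepthJump &&
               std::abs(depth[i1] - depth[i2]) <= maxDepthJump &&
               std::abs(depth[i0] - depth[i2]) <= maxDepthJump;
    };
    for (int v = 0; v + 1 < height; ++v)
        for (int u = 0; u + 1 < width; ++u)
        {
            int i00 = v * width + u, i10 = i00 + 1;
            int i01 = i00 + width,   i11 = i01 + 1;
            // First triangle of the quad (corner at i00):
            if (valid(i00, i10, i01))
            {
                indices.push_back(vertexIndex[i00]);
                indices.push_back(vertexIndex[i10]);
                indices.push_back(vertexIndex[i01]);
            }
            // Second triangle of the quad (corner at i11):
            if (valid(i10, i11, i01))
            {
                indices.push_back(vertexIndex[i10]);
                indices.push_back(vertexIndex[i11]);
                indices.push_back(vertexIndex[i01]);
            }
        }
    return indices;  // rendered as an indexed triangle list; colors interpolate across faces
}
```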
This screenshot shows a close-up of the point-cloud reconstruction. Up close, the captured shape dissolves into a cloud of unconnected points.
This screenshot shows a close-up of the triangulation-based reconstruction.
This is a photograph of the 3D video display engine running in the KeckCAVES immersive environment. The dancers are displayed at their true size and positioned such that the viewer in the environment can interact with them. The viewer wears head-tracked shutter glasses to perceive the dancers at the correct size and position.
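For context, head-tracked rendering on a fixed screen comes down to an off-axis projection computed from the tracked eye position and the screen's corners. The sketch below shows the general technique (in the style of a generalized perspective projection), not the actual code the KeckCAVES software computes internally for each screen and eye.

```cpp
// Minimal sketch of head-tracked, off-axis projection for one screen of an
// immersive display. All names are illustrative.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 normalize(Vec3 a) { float l = std::sqrt(dot(a, a)); return {a.x/l, a.y/l, a.z/l}; }

// pa, pb, pc: lower-left, lower-right, upper-left screen corners (world space).
// eye: tracked eye position (offset left/right from the glasses for stereo).
// Outputs the frustum bounds at the near plane, e.g. for glFrustum().
void offAxisFrustum(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 eye, float zNear,
                    float& left, float& right, float& bottom, float& top)
{
    Vec3 vr = normalize(sub(pb, pa));        // screen right axis
    Vec3 vu = normalize(sub(pc, pa));        // screen up axis
    Vec3 vn = normalize(cross(vr, vu));      // screen normal, toward the viewer
    Vec3 va = sub(pa, eye), vb = sub(pb, eye), vc = sub(pc, eye);
    float d = -dot(va, vn);                  // eye-to-screen distance
    left   = dot(vr, va) * zNear / d;
    right  = dot(vr, vb) * zNear / d;
    bottom = dot(vu, va) * zNear / d;
    top    = dot(vu, vc) * zNear / d;
}
```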
Another photograph showing a different still frame from the same 3D video stream. The reconstruction algorithm used in these photographs is the on-the-fly real-time triangulation approach, currently implemented in software.
Another photograph showing a different still frame from the same 3D video stream. These photographs were taken directly in the immersive environment and show exactly what the viewer sees, without any further post-processing such as rotoscoping.
Another photograph showing a different still frame from the same 3D video stream.
Another photograph showing a different still frame from the same 3D video stream.
Screenshot from a collaborative visual exploration experiment conducted between IDAV's VR lab and UC Berkeley's tele-immersion lab. Two users at remote sites are using a shared version of 3D Visualizer to jointly explore a 3D gridded dataset. The users can see each other's actions in real time, and can see each other via 3D video. The 3D video avatars of all participating users are mapped into the shared workspace with correct eyelines, i.e., if user A looks directly at user B's avatar, user B sees A's avatar looking directly at her. This mapping supports intuitive and natural interaction between remote participants, quite unlike traditional videoconferencing systems.
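A hypothetical sketch of the coordinate mapping behind the correct eyelines: each site registers its capture volume into the single shared workspace frame, so avatars and viewpoints from all sites end up in one coordinate system and relative gaze directions carry over. The transform names below are placeholders, not the actual tele-immersion collaboration protocol.

```cpp
// Illustrative only: map a captured avatar point into the shared workspace by
// composing per-site registration transforms (shared <- site <- capture).
#include <array>

using Mat4 = std::array<float, 16>;   // column-major 4x4 homogeneous transform

Mat4 compose(const Mat4& a, const Mat4& b)   // c = a * b
{
    Mat4 c{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c[col*4 + row] += a[k*4 + row] * b[col*4 + k];
    return c;
}

// The same composed transform is applied to every point of a site's 3D video
// avatar before it is rendered at the remote site, preserving eyelines.
std::array<float, 3> toSharedSpace(const Mat4& siteToShared,
                                   const Mat4& captureToSite,
                                   const std::array<float, 3>& p)
{
    Mat4 m = compose(siteToShared, captureToSite);
    return { m[0]*p[0] + m[4]*p[1] + m[8]*p[2]  + m[12],
             m[1]*p[0] + m[5]*p[1] + m[9]*p[2]  + m[13],
             m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14] };
}
```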
Another screenshot from the same experiment. Here, the two users have moved next to each other to examine a feature in the data together.
Later in the same experiment: one user has reached into the data to create an animated isosurface, which is displayed at the other user's site in real time.