LiDAR Visualization

LiDAR (Light Detection and Ranging) is a comparatively recent approach to high-resolution surface model generation. A LiDAR scanner sweeps a narrow laser beam across a regular grid of sample points and measures the arrival time of the reflected light for each sample. From this round-trip time and the known speed of light, the scanner computes the 3D positions of surface points lying inside a pyramidal region of space. Larger regions can be surveyed by aligning and merging multiple individual LiDAR scans. LiDAR's main benefits are the accuracy of each sample point and acquisition speed, i.e., the number of sample points that can be measured in a given time.
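The basic range measurement can be sketched in a few lines. This is an illustrative simplification (function names and the spherical-coordinate convention are our own; real instruments apply many calibration corrections): the scanner converts the round-trip time into a range, and the range plus the two beam angles into a 3D point in the scanner's local frame.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def range_from_time(round_trip_seconds):
    """Range to the surface: the light travels out and back, so divide by 2."""
    return C * round_trip_seconds / 2.0

def to_cartesian(r, azimuth, elevation):
    """Convert a range and the two beam angles (radians) into a 3D point
    in the scanner's local coordinate frame (y = forward, z = up)."""
    x = r * math.cos(elevation) * math.sin(azimuth)
    y = r * math.cos(elevation) * math.cos(azimuth)
    z = r * math.sin(elevation)
    return (x, y, z)
```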

Airborne LiDAR mounts a downward-pointing LiDAR scanner on a low-flying aircraft and is used to gather high-resolution DEMs (digital elevation models) of the Earth's surface. Data gathered by airborne LiDAR is generally treated as a height field and resampled to a regular grid of elevation postings. Terrestrial or tripod LiDAR, on the other hand, mounts a horizontally pointing LiDAR scanner on a tripod to gather ultra-high-resolution 3D data of natural and man-made structures. Tripod LiDAR data is typically treated as a "point cloud" sampling of arbitrary surfaces and is not resampled to a grid. It can achieve sample spacings of less than a centimeter and accuracies of around 2 mm per sample.
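The resampling step for airborne data can be sketched as follows. This is a minimal illustration of gridding scattered returns into elevation postings by averaging per cell (the function name and signature are our own; production pipelines use more careful interpolation and outlier handling):

```python
import numpy as np

def grid_elevations(points, cell_size, x0, y0, nx, ny):
    """Resample scattered (x, y, z) returns into a regular elevation grid.

    points: (N, 3) array; returns an (ny, nx) DEM holding the mean z of the
    returns in each cell, with NaN marking cells that received no returns.
    """
    sums = np.zeros((ny, nx))
    counts = np.zeros((ny, nx))
    # Map each return to its grid cell and discard out-of-bounds returns.
    ix = ((points[:, 0] - x0) // cell_size).astype(int)
    iy = ((points[:, 1] - y0) // cell_size).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    # Accumulate z sums and counts per cell, then average.
    np.add.at(sums, (iy[ok], ix[ok]), points[ok, 2])
    np.add.at(counts, (iy[ok], ix[ok]), 1)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```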

Figure 1: Left: LiDAR scan of a part of the UC Davis campus, containing the UC Davis water tower and (parts of) the Mondavi Center for the Performing Arts. This data set contains around 4.7 million points and was provided by Gerald Bawden of the US Geological Survey. The KeckCAVES page contains a video of a user viewing and analyzing this data set in a CAVE VR environment (MPEG-1 format, 27 MB). Right: Airborne LiDAR scan of the northern California coast, provided by NASA. This image was created using real-time point-based illumination.
Tripod LiDAR can be used for very accurate deformation measurement. By scanning a region at two different times, one can measure the displacement of individual features by comparing the point clouds defining their surfaces. However, since the distribution of individual sample points on a surface depends on the exact location and orientation of the scanner, LiDAR scans cannot be compared point by point. Instead, one has to derive position information from larger sets of points. For example, a LiDAR scan of a house will contain large numbers of points representing each of the walls, the roof, etc. By selecting the subset of points representing one wall, and assuming that the wall is planar, one can derive the equation of the plane that best fits the selected subset of points (see Figure 2). Similarly, one can derive best-fitting cylinders (representing pipes, etc.) or other shapes. One can then calculate precise feature positions by intersecting several derived equations, for example three planes. Since each plane equation averages the positions of large numbers of individual sample points, the (small) random inaccuracies introduced by the scanning process are reduced significantly. This allows the measurement of features with sub-millimeter accuracy.
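The two analysis steps just described can be sketched with NumPy. This is an illustrative sketch rather than our software's implementation: a least-squares plane fit to a selected point subset (the normal is the direction of least variance, found via SVD), and the recovery of a precise 3D position as the intersection of three fitted planes.

```python
import numpy as np

def fit_plane(points):
    """Best-fit plane through an (N, 3) point array.  Returns (n, d) such
    that n . p = d for points p on the plane, with n the unit normal."""
    centroid = points.mean(axis=0)
    # The normal is the singular vector belonging to the smallest singular
    # value of the centered point set (direction of least variance).
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, n @ centroid

def intersect_planes(plane_a, plane_b, plane_c):
    """Point common to three non-parallel planes (n, d): solve the 3x3
    linear system formed by stacking the three plane equations."""
    normals = np.array([plane_a[0], plane_b[0], plane_c[0]])
    dists = np.array([plane_a[1], plane_b[1], plane_c[1]])
    return np.linalg.solve(normals, dists)
```

Because the fit averages over the whole selected subset, the recovered plane (and hence the intersection point) is far less noisy than any single sample.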

Figure 2: LiDAR scan of a building with several planes fitted to the visible walls, to extract the position and relative alignments of the walls with very high precision.
The challenge in visualizing and analyzing tripod LiDAR data is that data sets can contain hundreds of millions to billions of unstructured (scattered) 3D points, each with its (x, y, z) position and an associated intensity or color value. Although the points sample surfaces in the surveyed area, the data contains no connectivity between points -- the underlying surfaces have to be reconstructed from the point data alone. Our work focuses on developing software to visualize the "raw" LiDAR data as a cloud of 3D points with intensity or color values. We use an out-of-core multiresolution approach to visualize LiDAR data that is too large to fit into the computer's main memory, at interactive frame rates of around 60 frames per second. Our software also contains tools to analyze LiDAR data, for example an interactive selection tool to mark subsets of points defining a single feature, and algorithms to derive equations describing the shape of such features.
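The core idea behind multiresolution point rendering can be sketched as follows. This is an illustrative simplification, not the renderer's actual code: an octree in which every interior node stores a coarse subsample of its subtree, and traversal descends into a node only when its detail would be visible from the current viewpoint (an out-of-core version would additionally load and evict node data through a fixed-size cache).

```python
class OctreeNode:
    def __init__(self, center, size, points, children=None):
        self.center = center            # center of the node's cube
        self.size = size                # edge length of the node's cube
        self.points = points            # coarse subsample stored at this node
        self.children = children or []  # up to 8 child nodes

def collect_visible(node, eye, max_spacing_per_dist, out):
    """Append renderable point sets to `out`, refining a node only when the
    average spacing of its subsample, divided by the distance to the eye,
    exceeds the given screen-space detail threshold."""
    dist = max(1e-6, sum((c - e) ** 2 for c, e in zip(node.center, eye)) ** 0.5)
    # Rough estimate of average point spacing inside the node's cube.
    spacing = node.size / max(1, len(node.points)) ** (1 / 3)
    if not node.children or spacing / dist <= max_spacing_per_dist:
        out.append(node.points)         # coarse subsample is detailed enough
    else:
        for child in node.children:     # otherwise descend for more detail
            collect_visible(child, eye, max_spacing_per_dist, out)
```

Because rendering cost depends on the projected detail rather than the total point count, frame rate stays interactive even for data sets far larger than memory.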

Project Goals

Project Status

We implemented an out-of-core multiresolution point cloud renderer as described above. The renderer is able to visualize the largest LiDAR scans we currently have, containing up to 11.3 billion (11.3 × 10^9) sample points, at interactive frame rates using a fixed-size memory cache. Visualization quality degrades gracefully with computer and graphics performance. We implemented a brush-based selection mechanism that allows a user to select points by touching them with a three-dimensional "brush" connected to a 6-DOF input device in a VR environment, but that can also be controlled using mouse and keyboard in a desktop environment. Our software currently supports the extraction of point, line, plane, sphere, or cylinder equations to fit geometric primitives to selected subsets of data. It also contains a simple color mapping algorithm that visualizes each point's distance from an extracted plane, for quick visual assessment of data elevations. More recently, we added point-based illumination, which renders point clouds with the same appearance as polygonal or triangulated meshes. Filtered normal vectors for illumination are computed in a pre-processing step, and the renderer can handle multiple light sources in real time with little impact on performance.
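The normal-estimation pre-processing step for point-based illumination can be sketched as follows. This is a hypothetical illustration, not our pre-processor: each point's normal is taken as the least-variance direction of its k nearest neighbors, i.e., the eigenvector of the local covariance matrix with the smallest eigenvalue. A real implementation would use a spatial index for the neighbor search and filter the resulting normals.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit normal for each point in an (N, 3) array from the
    covariance of its k nearest neighbors."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # Brute-force nearest-neighbor search (O(N^2); fine for a sketch).
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        # Covariance of the centered neighborhood; its eigenvector with the
        # smallest eigenvalue is the surface normal direction.
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
        normals[i] = eigvecs[:, 0]
    return normals
```

Since these normals come from a neighborhood fit rather than individual samples, they are smooth enough to light the point cloud like a continuous surface.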


Pages In This Section

Screen Shots
Screen shots of LiDAR scans visualized using our software, and photographs showing the LiDAR viewer used in immersive visualization environments.
Movies
Movies of LiDAR data being visualized and analyzed in immersive VR environments.
Download
Download page for the current and several older releases of LiDAR Viewer, released under the GNU General Public License.