Rendering Techniques

The lgf project implements a wide range of state-of-the-art image-based rendering techniques.

Two-Plane Parameterized Light Field

The classical two-plane parameterized light field was the first light field model introduced to the community and was therefore among the first models supported in this framework. A sequence of camera images is aligned on a restrictive two-plane grid: one plane holds the camera locations, the other is the common image plane. With this setup, novel views can be reconstructed very quickly and with hardware acceleration. lgf3 contains both a software-only and an OpenGL hardware-accelerated renderer.
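The core resampling step of the two-plane model is a bilinear blend of the grid cameras nearest to where a viewing ray crosses the camera plane. The following sketch (not lgf3 code; the function name and unit camera spacing are assumptions) shows how those blending weights are computed:

```python
import math

def camera_plane_weights(s, t):
    """Bilinear weights of the four grid cameras enclosing (s, t).

    (s, t) are continuous camera-plane coordinates; unit camera
    spacing is assumed. Returns a list of ((i, j), weight) pairs
    for the four enclosing cameras; the weights sum to one.
    """
    i0, j0 = math.floor(s), math.floor(t)
    fs, ft = s - i0, t - j0            # fractional position in the cell
    return [((i0,     j0),     (1 - fs) * (1 - ft)),
            ((i0 + 1, j0),     fs       * (1 - ft)),
            ((i0,     j0 + 1), (1 - fs) * ft),
            ((i0 + 1, j0 + 1), fs       * ft)]
```

A ray hitting the camera plane exactly at a grid camera receives that camera's image with weight one; in between, neighboring images are cross-faded, which is what makes the reconstruction cheap enough for hardware acceleration.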

lgf-pp.jpg lgf-pp-hint.jpg

The lgf3 framework can also render the intermediate stages of each rendering technique, exposing the inner workings of an algorithm and making the tool useful in graphics education as well. For example, the image on the right shows the color-coded weighting of the associated camera images on the camera plane.

The Free-Form Light Field

The free-form light field extends the two-plane parameterized light field and loosens the restrictions imposed on camera placement. The camera mesh is no longer a plane but a mesh of arbitrary shape, and the cameras no longer share a common image plane. Again, lgf3 implements both a software-only and an OpenGL hardware-accelerated renderer.
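With a camera mesh of arbitrary shape, a viewing ray no longer falls into a regular grid cell but into a triangle of the camera mesh, and the three cameras at its vertices are blended by the barycentric coordinates of the intersection point. A minimal sketch of that weighting (function name assumed; p is assumed to lie in the triangle's plane):

```python
import numpy as np

def freeform_weights(p, a, b, c):
    """Barycentric blending weights for the three cameras sitting at
    the vertices a, b, c of the camera-mesh triangle hit at point p.
    All inputs are 3D points as numpy arrays; the weights sum to one.
    """
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01       # non-degenerate triangle assumed
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - w1 - w2, w1, w2])
```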

lgf-ff.jpg lgf-ff-hint.jpg

The Unstructured Lumigraph

The unstructured lumigraph is the most general way of using a camera sequence for image-based reconstruction. No further structure or parameterization is required, only a sequence of images with the corresponding calibration information. The rendering technique reconstructs novel views by querying nearby cameras and projecting their image information, suitably weighted, onto a coarse reconstruction of the local scene geometry. lgf3 implements the standard algorithm with OpenGL hardware acceleration. Numerous additions were made to support local depth information stored in depth maps, local confidence values that control the weighting with external masks, and initial support for dynamic light field rendering.
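The per-camera weighting in unstructured lumigraph rendering is typically derived from a penalty per camera (for example, the angular deviation between the desired ray and that camera's ray), with the k-th smallest penalty serving as an adaptive cutoff so that weights fall smoothly to zero. The sketch below follows that classic k-nearest formulation; it is an illustration, not lgf3's actual weighting code, and the function name is assumed:

```python
import numpy as np

def ulg_weights(penalties, k=4):
    """Blending weights from per-camera penalties, unstructured-
    lumigraph style. The k-th smallest penalty acts as an adaptive
    threshold: cameras at or beyond it get zero weight, and smaller
    penalties get larger weights. Returns weights normalized to sum
    to one.
    """
    pen = np.asarray(penalties, dtype=float)
    thresh = np.sort(pen)[k - 1] if len(pen) >= k else pen.max()
    thresh = max(thresh, 1e-9)           # guard against all-zero penalties
    w = np.maximum(0.0, 1.0 - pen / thresh) / np.maximum(pen, 1e-9)
    total = w.sum()
    return w / total if total > 0 else np.full(len(pen), 1.0 / len(pen))
```

Dividing by the penalty itself makes the weight blow up as a camera's ray approaches the desired ray exactly, so a camera that actually observed the queried ray dominates the blend.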

lgf-ulg.jpg lgf-ulg-hint.jpg

Surface Light Fields and Light Field Mapping

The surface light field defines the captured light rays not via global structures, as the methods above do, but through a local parameterization. The model is usually given as a rather coarse geometric mesh of the depicted scene. The directional color information is then stored per pixel on the triangle faces and represented as a set of texture maps, which allows efficient hardware-accelerated rendering. The lgf3 framework implements both the software-assisted original surface light field approach and the hardware-accelerated light field mapping technique.

lgf-lfm.jpg lgf-lfm-hint.jpg

Texture Slicing

Texture slicing is a technique that reconstructs local depth information stored in depth maps very quickly on modern graphics hardware. The model's parts, a color image and a depth image, are loaded as texture maps. The volume occupied by the depth map is then filled with a carefully chosen set of polygons, and each fragment on these polygons is rasterized with the so-called texture slicing operator, which carves out the object's surface with the help of the depth map. This renderer was invented and developed entirely within the lgf3 framework.
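The effect of the slicing operator can be illustrated in software: the depth range is cut into a stack of slices, and on each slice only those fragments survive whose stored depth falls into that slice's interval, so the stack as a whole carves out the surface encoded in the depth map. This is a simplified illustration of the idea, not the lgf3 implementation (names and the normalized-depth convention are assumptions):

```python
import numpy as np

def texture_slice(color, depth, num_slices):
    """Software sketch of the texture slicing operator.

    color and depth are equally shaped arrays; depth is assumed
    normalized to [0, 1). Each pixel survives on exactly one slice,
    the one whose depth interval contains its stored depth. Returns
    the composited image and the per-pixel slice index.
    """
    out = np.zeros_like(color)
    slice_idx = np.minimum((depth * num_slices).astype(int),
                           num_slices - 1)
    for i in range(num_slices):          # iterate over the slice planes
        mask = slice_idx == i            # fragments carved out by slice i
        out[mask] = color[mask]
    return out, slice_idx
```

Because the slice intervals partition the depth range, compositing all slices reproduces the full surface; on graphics hardware the same per-fragment test runs in parallel for all slice polygons.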

lgf-ts.jpg lgf-ts-hint.jpg

lgf User Environment

lgf-desk.jpg lgf-desk2.jpg

The lgf framework provides an extensive set of ready-to-use tools for many image-based rendering and modeling tasks. Numerous GUI components let the user experiment with different calibration and rendering techniques.