pjt@cis.pku.edu.cn 2003-09
Landscape painting is really a box of air with little marks in it telling you how far back in that air things are.
- Billboarding
- Sprite
- Impostor
Traditional pipeline: Modeling (user input, texture maps, survey data) produces geometry, textures, and light sources; Rendering produces images.
For photorealism: modeling is hard, rendering is slow.
Image-based pipeline: Modeling (user input, range scanners) produces images and depth maps; Rendering produces images.
For photorealism: fast modeling, complexity-independent rendering.
Built on the desire to bypass the manual modeling phase.
3D Graphics:
- Geometry/Surface-Based Rendering & Modeling
- Sample-Based Graphics: Image-Based Rendering & Modeling; Volume Rendering
Billboarding with microfacets
IBR example: Office 2000
It can also use slices from the original volume and blend them. http://zeus.fri.uni-lj.si/~aleks/slicing-and-blending/
Alpha blending example: Shogo: Mobile Armored Division
Adelson and Bergen: the Plenoptic Function. The appearance of the world can be thought of as the dense array of light rays filling the space, which can be observed by placing eyes or cameras in the space. A ray is indexed by viewing position (x,y,z), direction (β,ϕ), wavelength λ, and time t, giving a 7D function. PLENOPTIC = plenus + optic; plenus = complete, full.
As pointed out by Adelson and Bergen: "The world is made of three-dimensional objects, but these objects do not communicate their properties directly to an observer. Rather, the objects fill the space around them with the pattern of light rays that constitutes the plenoptic function, and the observer takes samples from this function. The plenoptic function serves as the sole communication link between the physical objects and their corresponding retinal images. It is the intermediary between the world and the eye." E. H. Adelson and J. R. Bergen, "The plenoptic function and the elements of early vision," in Computational Models of Visual Processing, edited by Michael Landy and J. Anthony Movshon, The MIT Press, Cambridge, Mass., 1991, Chapter 1.
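As a concrete sketch of the parameterization above, the 7D plenoptic function can be written as a Python signature (a toy: the constant return value is a placeholder, not a real scene model):

```python
# Hypothetical sketch of the 7D plenoptic function as a callable.
# A real scene would be sampled from images, not evaluated analytically;
# this toy returns a constant radiance just to make the parameters concrete.
def plenoptic(x, y, z, theta, phi, wavelength, t):
    """Radiance seen at position (x, y, z), in direction (theta, phi),
    at the given wavelength, at time t."""
    return 0.5  # placeholder radiance

# Dropping time and wavelength (static scene, fixed lighting) leaves a 5D
# function; restricting viewpoints further yields the 4D light field.
sample = plenoptic(0.0, 0.0, 0.0, 0.0, 0.0, 550e-9, 0.0)
```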
Lower-dimensional plenoptic representations: 5D (plenoptic modeling) / 4D (light field, Lumigraph) / 3D (concentric mosaics) / 2D (panorama)
Outward-looking vs. inward-looking capture
From www.quicktimevr.apple.com
Commercial panorama systems: QuickTime VR, LivePicture, IBM Panoramix, VideoBrush, IPIX (PhotoBubbles), Be Here, etc.
Be Here: http://www.behere.com/index.html, http://www.behere.com/raiders300kbps.html, http://www.behere.com/espn_1_300kbps.html
iMove: http://www.smoothmove.com/
Light Field Rendering, Levoy and Hanrahan, SIGGRAPH 1996. The Lumigraph, Gortler, Grzeszczuk, Szeliski and Cohen, SIGGRAPH 1996.
Assumptions:
- Static geometry
- Fixed lighting
Need a 2D set of (2D) images. Choices:
- Camera motion: human vs. computer
- Constraints on camera motion
Ray parameterization choices:
- Point / angle
- Two points on a sphere
- Points on two planes
- Original images and camera positions
- Two planes, evenly sampled: a light slab
- In general, planes in arbitrary orientations
- In practice, one plane = camera locations (minimizes resampling)
Define two parallel planes: the uv-plane and the st-plane. The light field is L(u,v,s,t): an (r,g,b) value for each (u,v,s,t) tuple.
(1) Typically create a regular sampling of the uv- and st-planes.
(2) Place the eye point at (u,v) on the uv-plane.
(3) Generate an image with each pixel corresponding to a point on the st-plane: each pixel of image (u,v) supplies sample (u,v,s,t), using a skewed perspective matrix with (x,y) = (s,t).
The data looks like a 2D array of 2D images.
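The "2D array of 2D images" layout can be sketched in a few lines of Python/NumPy (resolutions and names here are illustrative assumptions, not from the paper):

```python
import numpy as np

# Sketch of the light-field data layout: a regular grid of (u, v) eye
# points on the uv-plane, each storing a 2D (s, t) image.
U, V, S, T = 4, 4, 8, 8
light_field = np.zeros((U, V, S, T, 3), dtype=np.float32)  # RGB per (u,v,s,t)

# "Data looks like a 2D array of 2D images": image (u, v) is a slice.
def image_at(u, v):
    return light_field[u, v]  # shape (S, T, 3)
```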
Looking at a Light Field
Compression options:
- Compress individual images (JPEG, etc.)
- Adapt video compression to 2D arrays
- Decomposition into basis functions
- Vector quantization
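As a hedged illustration of the last option, a toy vector quantizer: map each color sample to its nearest codebook vector and store only the indices (the codec used in the light-field work is more elaborate; the codebook and data here are made up):

```python
import numpy as np

def vq_encode(samples, codebook):
    # For each sample, the index of the nearest codebook vector
    # (squared Euclidean distance).
    d = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

codebook = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])   # tiny 2-entry codebook
samples = np.array([[0.1, 0.0, 0.1], [0.9, 1.0, 0.8]])
indices = vq_encode(samples, codebook)   # compressed representation
decoded = codebook[indices]              # lossy reconstruction
```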
Displaying a Light Field
Step 1: Compute the (u,v,s,t) parameters for each image ray.
Step 2: Resample the radiance at those line parameters.
Displaying a Light Field (cont.)
Step 1:
- Simple projective map from image coordinates to (u,v) and (s,t).
- Can use texture mapping to compute line coordinates: given homogeneous (uw, vw, w), u = uw/w and v = vw/w.
- The inverse map to (u,v,s,t) needs only two texture-coordinate calculations per ray.
- Can be hardware accelerated!
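One way to picture Step 1: intersect each viewing ray with the two planes and read off (u,v,s,t). This sketch assumes the uv-plane sits at z = 0 and the st-plane at z = 1, an arbitrary placement chosen for illustration:

```python
def ray_to_uvst(origin, direction):
    # Intersect the viewing ray origin + t * direction with the uv-plane
    # (z = 0) and the st-plane (z = 1); plane placement is an assumption.
    ox, oy, oz = origin
    dx, dy, dz = direction
    t_uv = (0.0 - oz) / dz
    t_st = (1.0 - oz) / dz
    u, v = ox + t_uv * dx, oy + t_uv * dy
    s, t = ox + t_st * dx, oy + t_st * dy
    return u, v, s, t
```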
Displaying a Light Field (cont.)
Step 2: Reconstruct the radiance function from the original samples.
- Approximate by interpolation from the nearest sample.
- Prefilter (4D mip map) if the image is small.
- Quadrilinear interpolation on the full 4D function is best.
- Apply a band-pass filter to remove high-frequency noise that may cause aliasing.
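A minimal sketch of the quadrilinear interpolation mentioned above, assuming the light field is stored as a NumPy array L[u, v, s, t, channel] (clamping at the grid boundary is a simplification):

```python
import numpy as np

def quadrilinear(L, u, v, s, t):
    # Quadrilinear interpolation of a discrete light field at fractional
    # coordinates: a weighted sum over the 16 surrounding grid samples.
    u0, v0, s0, t0 = int(u), int(v), int(s), int(t)
    fu, fv, fs, ft = u - u0, v - v0, s - s0, t - t0
    acc = np.zeros(L.shape[-1])
    for du in (0, 1):
        for dv in (0, 1):
            for ds in (0, 1):
                for dt in (0, 1):
                    # Weight: product of the four 1D hat-function weights.
                    w = ((fu if du else 1 - fu) * (fv if dv else 1 - fv)
                         * (fs if ds else 1 - fs) * (ft if dt else 1 - ft))
                    ui = min(u0 + du, L.shape[0] - 1)
                    vi = min(v0 + dv, L.shape[1] - 1)
                    si = min(s0 + ds, L.shape[2] - 1)
                    ti = min(t0 + dt, L.shape[3] - 1)
                    acc = acc + w * L[ui, vi, si, ti]
    return acc
```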
Limitations:
- Need high sampling density: prevents excessive blurriness, but requires a large number of images (thanks to coherence, compression is good).
- Observer restricted to free space: can be addressed by stitching together multiple light fields based on a geometry partition.
- Requires fixed illumination: if interreflections are ignored, this can be addressed by augmenting the light field with surface normals and optical properties.
All light from an object can be represented as if it were coming from a cube. Each point on the cube has a number of rays coming from it.
The walls of the cube: each wall of the cube is actually two parallel planes. Rays are parameterized by where they intersect these planes. Any point in the 4D Lumigraph is identified by its coordinates (s,t,u,v).
Converting to Ray Space: similar to line space in light fields. Each ray is a point, plotted by its intersection with each plane.
Benefits of parallel planes:
- Simple to compute a ray's coordinates: just find its intersection with the two planes.
- Reconstruction can be done rapidly using texture-mapping operations built into the hardware of today's workstations.
- As an eye point moves along a straight line, the projections of the object track along parallel straight lines.
Samples of the object (pixels) are not continuous. The camera location is not continuous. We have gaps in our Lumigraph.
Problems with discreteness: the ray (s,u) doesn't exist; we must create it. The ray closest to it is (s_{i+1}, u_p). The rays (s_{i+1}, u_{p-1}) and (s_i, u_{p+1}) are likely to have values closer to the true value, because those rays intersect the object at nearly the same point as the ray we are trying to find.
If we know the depth value z, we can compute u' for a ray (s_i, u') that passes through the point where (s,u) intersects the object: u' = u + (s - s_i)z/(1 - z). The diagonal line passing through (s,u) indicates the optical flow of the intersection as one moves through s.
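The depth-correction formula above can be written down directly (the placement of the two planes at parameters 0 and 1 is assumed, matching the formula's derivation):

```python
def depth_corrected_u(s, u, s_i, z):
    # Depth-corrected u for camera-plane coordinate s_i, given that the
    # ray (s, u) hits the object at relative depth z between the planes:
    # u' = u + (s - s_i) * z / (1 - z), as stated on the slide.
    return u + (s - s_i) * z / (1.0 - z)
```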
Capture: move the camera by hand. Camera intrinsics are assumed calibrated; camera pose is recovered from markers.
Rebinning: the coefficient associated with basis function B is defined as the integral of the continuous Lumigraph function multiplied by some kernel function B̃. A three-step approach was developed:
- Splat
- Pull
- Push
Splat, Pull, Push:
- Splatting estimates the integral: coefficients are computed with Monte Carlo integration.
- Pulling computes coefficients for a hierarchical set of basis functions by linearly summing together higher-resolution kernels.
- Pushing moves information from lower-resolution grids up to higher-resolution ones to fill in gaps (upsample and convolve).
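A hedged 1D sketch of the pull-push idea (the actual algorithm operates on 2D grids with proper filter kernels; this toy uses pairwise box filters and assumes an even, power-of-two length):

```python
import numpy as np

def pull_push(values, weights):
    # Base case: a single sample, or no holes left at this level.
    if len(values) == 1 or weights.min() > 0:
        return values
    # Pull: build the coarser level by weighted averaging of sample pairs.
    w_sum = weights[0::2] + weights[1::2]
    v2 = np.where(w_sum > 0,
                  (values[0::2] * weights[0::2] + values[1::2] * weights[1::2])
                  / np.maximum(w_sum, 1e-9),
                  0.0)
    w2 = np.minimum(w_sum, 1.0)  # saturate weights (a simplification)
    coarse = pull_push(v2, w2)
    # Push: fill only the holes (zero-weight samples) from the coarse level.
    filled = values.copy()
    for i in range(len(values)):
        if weights[i] == 0:
            filled[i] = coarse[i // 2]
    return filled
```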
Rendering:
- Ray tracing: generate a ray, calculate its (s,t,u,v) coordinates, interpolate nearby grid points.
- Texture mapping: one texture associated with each st grid point; project each pixel onto the uv-plane.
Concentric mosaics: constrain camera motion to planar concentric circles, and create concentric mosaics by composing slit images taken at different viewpoints along each circle.
- A 3D plenoptic function; small file size.
- 3D ray index: radius, rotation angle, vertical elevation.
- Depth correction alleviates vertical distortion.
An experimental setup for constructing concentric mosaics.
Construction of a concentric mosaic.
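A minimal sketch of the slit-image composition described above: take the same vertical slit (here the center column, an assumption) from the image captured at each rotation angle and stack the slits side by side:

```python
import numpy as np

def concentric_mosaic(images, slit_x=None):
    # images: one H x W array per rotation angle along the circle
    # (the per-angle capture is an assumption of this sketch).
    h, w = images[0].shape[:2]
    if slit_x is None:
        slit_x = w // 2  # center slit; off-center slits give other radii
    # Stack one column per angle: result is H x num_angles.
    return np.stack([img[:, slit_x] for img in images], axis=1)
```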
A collection of concentric mosaics.
Concentric mosaics represent a 3D plenoptic function.
Three examples of concentric mosaics. Rendering with concentric mosaics: (a) parallax change; (b) specular highlight and parallax change.
Rendering a novel view with concentric mosaics.
Depth correction. Rays in the capture plane: no problem. Rays off the plane: only a small subset of these rays is stored, so all off-plane rays must be approximated from the slit images alone. This may cause vertical distortion in the rendered images.
Depth correction with concentric mosaics.
Rendering with constant depth correction: (a) and (c) front view and back view with depth correction 4R; (b) and (d) front view and back view with depth correction 16R; (e) illustration of camera viewpoints, front view A, back view B. The aspect ratio of the plant is preserved in (a) and (c), but not in (b) and (d).
Rendering a lobby from a rebinned concentric mosaic: (a) at the rotation center; (b) at the outermost circle; (c) at the outermost circle but looking in the opposite direction of (b);
(d) a child is occluded in one view but not in the other; (e) parallax change between the plant and the poster;
(f) lighting changes near the ceiling caused by sunshine.
Problems:
- Geometry is not known for real scenes (an existing vision problem).
- Hole-filling is needed.
- No vertical parallax is captured.
Thank you for your attention!