Sampling
The reader is directed to the pipeline page, specifically the section on sampling data related to buckets, for a primer on the sample point structures and their initialisation. This section is dedicated to a more thorough, in-depth explanation of the sampling process, and so skips much of the introductory information presented there.
The basic approach to sampling in Aqsis is to store all the sample points for the entire bucket in an n-dimensional spatial subdivision structure 1). The two primary dimensions of this structure are X and Y position; in addition, there are bounds on motion time and on depth of field (DoF) bound index (see Focus Blurred).
| Sample Point Data | Type |
|---|---|
| Position | Vec2D |
| DoF Offset | Vec2D |
| Time | Float |
| DoF Offset Index | Int |
| Detail Level | Float |
| Subcell Index | Int |
| Data | [SqImageSample] |
Table 1: The data stored in a sample point.
| Node Data | Type |
|---|---|
| Min Position | Vec2D |
| Max Position | Vec2D |
| Min Time | Float |
| Max Time | Float |
| Min DoF Bound Index | Int |
| Max DoF Bound Index | Int |
| Min Detail Level | Int |
| Max Detail Level | Int |
Table 2: The data stored in an occlusion hierarchy node.
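The two tables above translate into structures along the following lines. This is only an illustrative sketch with hypothetical names; the real definitions (e.g. SqImageSample) live in the Aqsis source and differ in detail.

```cpp
#include <cassert>
#include <vector>

struct Vec2D { float x, y; };

// Stand-in for the real SqImageSample: colour, opacity, flags etc. elided.
struct SqImageSample { float depth; };

// Table 1: the data stored in a sample point.
struct SamplePoint {
    Vec2D position;                    // absolute raster-space position
    Vec2D dofOffset;                   // offset on the unit lens
    float time;                        // motion-blur sample time
    int   dofOffsetIndex;              // DoF bound index
    float detailLevel;
    int   subcellIndex;
    std::vector<SqImageSample> data;   // depth-ordered hit list
};

// Table 2: the data stored in an occlusion hierarchy node.
struct OcclusionNode {
    Vec2D minPosition, maxPosition;    // X/Y range of samples below this node
    float minTime, maxTime;            // time range of those samples
    int   minDofBoundIndex, maxDofBoundIndex;
    int   minDetailLevel, maxDetailLevel;
};
```

Each interior node's ranges are simply the union of the ranges of its children, which is what makes the early-out checks during sampling possible.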
Each time a micropolygon (MP) is sampled 2), it is driven through the hierarchy from the top. For each node, a check is made against the MP bound for the range of X/Y positions at that node; only if the two overlap will processing continue to the branches of the node. A similar check is made, where appropriate, against time: if the MP exists only at times outside the minimum and maximum times of all samples within the node, processing stops. Finally, a further bound check is made against the DoF bound index. Each sample point has a DoF index associated with it; if the DoF index currently being checked (passed into the sample function along with the MP) lies outside the range of the samples contained within the current node, processing doesn't continue down that branch of the tree. If an MP makes it down to a leaf node of the tree (a node with only a single sample), the sample point is checked against the MP for containment. If it is contained, it is considered a sample hit, and the relevant information from the MP is transferred to a depth-ordered list of hit data associated with the sample point.
Initialisation
The sample positions are calculated using an n-rooks-style shuffling algorithm in the InitialiseSamples method. Each pixel is made up of n × m samples, as defined by the PixelSamples setting. Each sample is then further split into m × n subsamples; swapping m and n gives a subsample matrix that has the same size in both dimensions, even if the PixelSamples values differ in x and y, and hence a good spread of sample positions. These are populated in a canonical order by initialising the samples as sample(x,y) = subsamplepos(y,x), which gives a distribution where there is only ever one sample in any row or column. The samples are then 'shuffled', first in y and then in x, by looping over each row/column, choosing a random row/column between the current one and the end, and swapping them. This maintains the quality of only ever having one sample position in each row/column, while randomising the samples.
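A minimal sketch of this shuffle, assuming a flat list of subcell indices. The names are illustrative, not Aqsis' actual InitialiseSamples code:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

struct SubcellIndex { int x, y; };

// n-rooks sample placement for an n x m PixelSamples setting: the subcell
// grid is (n*m) x (m*n), i.e. square of side N = n*m.  Start from a
// canonical assignment with one sample per row and per column, then
// shuffle whole rows (y) and whole columns (x); each swap preserves the
// one-per-row/column property.
std::vector<SubcellIndex> nRooksSamples(int n, int m, std::mt19937& rng)
{
    const int N = n * m;
    std::vector<SubcellIndex> s(N);
    for (int i = 0; i < N; ++i)
        s[i] = { i, i };                       // canonical: the diagonal
    for (int i = 0; i < N - 1; ++i) {          // shuffle in y
        std::uniform_int_distribution<int> pick(i, N - 1);
        std::swap(s[i].y, s[pick(rng)].y);
    }
    for (int i = 0; i < N - 1; ++i) {          // shuffle in x
        std::uniform_int_distribution<int> pick(i, N - 1);
        std::swap(s[i].x, s[pick(rng)].x);
    }
    return s;
}
```

After the shuffle every row and every column of the subcell grid still contains exactly one sample, which is the property the text relies on for a good spread.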
Figure 1. The pixel being sampled
Figure 2. The samples (4×2)
Figure 3. The subsamples (2×4 within each sample)
Figure 4. The basic sample positions
Figure 5. Jittered in X…
Figure 6. …and in Y
NOTE: This is redundant (see Pipeline)
Finally, these sample indices into the subsample grid are turned into actual pixel offsets by adding in the sample offset, the centre of the subsample, and the pixel position. This way the sample positions can be read out directly and contain absolute positions in raster space. The indices (x and y) into the subsample grid are also stored locally in the pixel data to be used during Filtering. This way, a single filter value table can be generated for each possible subsample location in a grid and looked up using the indices at filter time, instead of recalculating the filter values each time the bucket sample positions are regenerated.
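As a rough sketch, assuming a unit-sized pixel divided into gridSize × gridSize subcells, and using hypothetical names rather than the real CqImagePixel code:

```cpp
#include <cassert>
#include <cmath>

struct Vec2f { float x, y; };

// Convert a subcell index pair into an absolute raster-space position:
// pixel origin + subcell offset + an in-cell offset (0.5 = subcell centre;
// a jittered value would land elsewhere in the subcell).
Vec2f rasterPosition(Vec2f pixelPos, int ix, int iy, int gridSize, Vec2f inCell)
{
    float cell = 1.0f / gridSize;  // subcell width/height in pixel units
    return { pixelPos.x + (ix + inCell.x) * cell,
             pixelPos.y + (iy + inCell.y) * cell };
}
```

For pixel (3,4) with an 8×8 subcell grid, subcell (0,0) sampled at its centre gives (3.0625, 4.0625).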
When considering focus blur, the sample structure is initialised to contain extra information regarding depth of field (DoF) offsets. This information is used to shift the sampling points around on a Circle of Confusion 3), which is determined by the depth of the micropolygon being sampled in relation to the camera focal values.
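The circle of confusion follows the standard thin-lens relationship. The sketch below uses that textbook formula; CqRenderer::GetCircleOfConfusion may differ in detail (for instance by returning a raster-space radius rather than a camera-space diameter):

```cpp
#include <cassert>
#include <cmath>

// Thin-lens circle of confusion (diameter, in the same units as depth).
//   f : lens focal length
//   N : f-stop (aperture diameter is f/N)
//   D : focal distance (the plane in perfect focus)
//   z : depth of the point being sampled
double circleOfConfusion(double f, double N, double D, double z)
{
    return (f * f / N) * std::fabs(z - D) / (z * (D - f));
}
```

A point exactly on the focal plane (z = D) has a CoC of zero, and the CoC grows as the point moves away from that plane, which is what drives the per-micropolygon scaling of the DoF offsets described below.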
The DoF sample data consists of a set of DoF offsets, one for each sample point within the pixel, and a matching DoF bounding box. The initialisation process first splits the pixel into a conceptual grid of subpixels; the grid is n × m in size, with n and m being the X and Y values of PixelSamples. A DoF offset is then assigned to each sample point, randomly distributed within the subpixel bound. The following diagram illustrates this canonical form, shown without the random placement (jittering) for clarity 4).
Once this canonical form is complete, the DoF offsets are projected from the unit square to the unit circle 5). This is because the circle of confusion is a radius, so when offsetting the sample data during sampling, the offsets need to be values contained within a unit circle. If we didn't do this, the offsets would neither be guaranteed to be within the unit circle, nor would they represent a good spread within it. As well as projecting the offsets themselves, we also project the points of the grid to the unit circle (shown in green), and then recalculate the DoF bounds using those projected coordinates (shown in grey) 6). The points in the DoF structures represent offsets within a circular (actually elliptical, but we'll consider the radius to be the same in X and Y for simplicity) lens. The final data is shown in the following diagram.
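One common way to perform such a projection is a radial stretch that maps the square's boundary onto the circle's boundary. This is only an illustration of the idea; CqImagePixel::ProjectToCircle may use a different mapping:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Map a point in the square [-1,1]^2 onto the unit disc by scaling it
// radially: points on the square's edge land on the circle's edge, points
// on the axes are unchanged, and no result ever leaves the unit circle.
Vec2 projectToCircle(Vec2 p)
{
    double len = std::sqrt(p.x * p.x + p.y * p.y);
    if (len == 0.0)
        return p;                                        // centre stays put
    double box = std::max(std::fabs(p.x), std::fabs(p.y));
    double scale = box / len;    // 1 on the axes, 1/sqrt(2) at the corners
    return { p.x * scale, p.y * scale };
}
```

The corner (1,1), which lies outside the unit circle at distance √2, is pulled back onto the circle, which is exactly the guarantee the text asks for.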
In a scene with DoF enabled, micropolygons are sampled in a slightly different way. For each micropolygon, its circle of confusion (CoC) is calculated and stored. For each sample point, the DoF offset is multiplied by the calculated CoC to give an offset on the lens for the micropolygon at that sample point. This offset is then used to shift the micropolygon on the conceptual lens. The process is a little more complicated than that in practice, as described in the following snippet of pseudocode.
1. for each micropolygon
2. get the bound in camera space
3. adjust to remove the CoC expansion
4. for each sample point index
5. get the DoF bound for that point
6. multiply by micropolygon CoC
7. adjust bound by this amount
8. pass through the occlusion sampling tree, with the DoF index
Some of these steps warrant additional explanation, so I'll consider those individually now.
- Step 3: The bound reported by the MP initially includes the adjustment for the circle of confusion calculated from the depth of the MP. This stage removes that adjustment, returning the bound to a simple bound of the points.
- Step 5: Each sample point has a DoF offset and a DoF index, as described in the initialisation section above. The DoF bound for that index is used to adjust the MP bound so that it can be considered for points in that region of the lens only.
- Step 6: The DoF bound is stored for a lens of unit radius; this step scales it to the circle of confusion radius for the MP depth.
- Step 7: The bound of the MP is adjusted to make sure that it is only considered by sample points in the appropriate DoF bound.
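Under the stated assumptions (hypothetical types, and a sign convention that may not match the real code), steps 2 to 7 amount to something like:

```cpp
#include <cassert>

struct Bound2 { float minx, miny, maxx, maxy; };

// Step 3: the MP's reported bound was grown by its CoC in every
// direction; shrink it back to a simple bound of the points.
Bound2 removeCocExpansion(Bound2 b, float coc)
{
    return { b.minx + coc, b.miny + coc, b.maxx - coc, b.maxy - coc };
}

// Steps 5-7: dofBound is the lens-region bound for one DoF index, stored
// for a unit-radius lens; scale it by the MP's CoC and grow the MP bound
// by it, so only sample points in that lens region consider the MP.
Bound2 adjustForDofIndex(Bound2 b, Bound2 dofBound, float coc)
{
    return { b.minx + dofBound.minx * coc, b.miny + dofBound.miny * coc,
             b.maxx + dofBound.maxx * coc, b.maxy + dofBound.maxy * coc };
}
```

The adjusted bound for each DoF index is then what gets driven through the occlusion sampling tree, alongside that index, in step 8.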
The sampling process is highly complicated and also highly sensitive in terms of Aqsis' overall performance. The following changes are hoped to provide some significant performance and resource usage improvements.
- Delay the tree usage to the sub-pixel level. Aqsis does bucket-level sampling; at present the occlusion tree covers the entire area of the bucket. This has a couple of issues:
  - The tree is very large and must be “built” (or shuffled/rebuilt) at the beginning of each bucket, regardless of how much of the tree will be used.
  - Shuffling essentially invalidates the tree; adjusting the bounds of the tree results in very unbalanced trees.
  - The tree is not really needed at the pixel level, since we can determine which sub-pixels an MPG covers by scan converting over the bucket.
- Make the tree building “on demand”. This is really just an implementation detail, but if the sub-trees turn out not to be shuffleable for some reason (they should be), then we can save a lot of time by only building the occlusion tree for a sub-pixel when the first MPG impinges on that sub-pixel. There is a potential minor improvement in memory usage in doing this anyway, though since sample patterns are shuffled within the bucket it is unlikely that a given sample pattern/pixel will not be used at some point.
The potential advantages are…
- Smaller, separable (shuffleable?) occlusion trees.
- No tree rebuilding.
- Natural handling of empty buckets (no need to recalculate bounds means the initial bucket setup phase is much simplified).
- Potential for significant speed increases.
Issues still remain as to how to implement DoF on top of this (the existing process could still be used, but we may want to reconsider using the “DoF as an additional sampling dimension” method previously attempted).
NOTE: Code references are outdated
2) occlusion.cpp@CqOcclusionTree::SampleMPG
3) renderer.h@CqRenderer::GetCircleOfConfusion
4), 6) imagepixel.cpp@CqImagePixel::InitialiseSamples
5) imagepixel.h@CqImagePixel::ProjectToCircle