7.11. Rendering and backward raytracing
In addition to tracing rays in their natural direction of propagation (forward raytracing), as applied in the RadiCal method, it is also possible to perform backward raytracing. As stated above, most raytracing applications actually perform backward raytracing, since their purpose is the generation of (more or less) photorealistic images of three-dimensional objects (renderings).
Following the fundamental principle of the reversibility of light propagation, referred to as Helmholtz or Stokes-Helmholtz reversibility, it is always possible to reverse the path of light, applying the same models to describe any refractions or reflections along this path. This includes the detailed polarisation-state-based modelling applied here. The RadiCal raytracer can therefore be used to generate images of the target objects if rays originating from a specific eye point are directed towards the target object.
Unlike in common raytracing tools, no explicit light sources are included in the model. Instead, a high-contrast spherical environment provides the illumination for the rendering. This is because considering light sources with a singular location (point light source) or a singular directional profile (sunlight) requires different sampling techniques. In such cases, biased or correlated sampling strategies (e.g. Metropolis sampling) have to be applied to ensure that the relatively few relevant directions with high intensities are sampled adequately; see, e.g., Veach et al. (1997).
However, the forward raytracing approach implemented in the RadiCal method relies on unbiased, uncorrelated sampling, which is more efficient in this setting. Since the main purpose of generating renderings with the RadiCal method is to validate the applied algorithms, it would not be reasonable to implement alternative sampling strategies solely for backward raytracing. Nevertheless, the implementation has shown that unbiased sampling can still be used for rendering purposes if singular light sources are avoided. Instead, illumination is provided by a panoramic image in HDRI (high dynamic range image) format. Unlike in regular image formats, the RGB values are not stored as 8- or 16-bit integer values but as 32-bit floating-point values. While integer colour channels are limited to contrast ratios of 1:255 (8-bit) and 1:65535 (16-bit), the floating-point exponent format allows almost arbitrarily large contrast ratios. This can be exploited to include extended bright regions that effectively provide the illumination in the RadiCal rendering method. In order to optimise the efficiency of the sampling, i.e. the calculation time for the rendering, these illumination spots should still extend over relatively large areas of the image. Consequently, the final renderings appear like photographs taken under diffuse lighting conditions.

Figure 112: Typical full-panoramic HDRI environment, CC0 licence (Poly Haven – public asset library, 2023)
Figure 113: Working principle of backward raytracing (idealised, environment at infinite distance)
A full spectral polarisation Monte-Carlo Raytracer RadiCal, D. Rüdisser
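To illustrate the dynamic-range argument made above, the following short sketch compares the contrast ratios of integer pixel encodings with that of the 32-bit floating-point encoding used by HDRI files. This is purely illustrative and not part of the RadiCal implementation.

```python
import numpy as np

# Integer colour channels: the contrast ratio is bounded by the
# largest representable value over the smallest non-zero value.
int8_ratio = 255 / 1        # 8-bit: 1:255
int16_ratio = 65535 / 1     # 16-bit: 1:65535

# 32-bit float channels: ratio of the largest to the smallest
# positive normal value -- effectively unlimited for imaging purposes.
f32 = np.finfo(np.float32)
float_ratio = f32.max / f32.tiny

print(f"8-bit  contrast ratio: 1:{int8_ratio:.0f}")
print(f"16-bit contrast ratio: 1:{int16_ratio:.0f}")
print(f"32-bit float contrast ratio: ~1:{float_ratio:.1e}")
```

The floating-point ratio exceeds 10^70, which is why extended bright image regions can carry essentially arbitrary radiance levels and serve as light sources.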
The principle of the backward raytracing algorithm is depicted in Figure 113. A virtual screen is located between the eye point and the target object. The screen represents the final rendered image. Rays originating at the eye point are cast through the screen towards the object, successively scanning each pixel of the image. If a ray hits a target object, the specific scattering functions (see section 7.8) are called to determine whether the ray is absorbed, reflected or refracted. The process is repeated until the ray is either absorbed or leaves the scene without any further intersection. The final direction of each ray that is not absorbed is used to determine a corresponding location on the environmental image (HDRI). The colour and brightness at that location on the HDRI finally determine the contribution of the ray to the colour of the screen pixel.
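The tracing loop described above can be sketched as follows. This is a deliberately minimal toy, assuming a scene consisting of a single mirror sphere and omitting spectral sampling and polarisation; all names are illustrative and do not reflect RadiCal's actual implementation.

```python
import math

CENTRE, RADIUS = [0.0, 0.0, -3.0], 1.0   # single mirror sphere "scene"

def intersect_sphere(origin, direction, centre, radius):
    """Return the distance to the nearest forward intersection, or None."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def reflect(d, n):
    """Mirror reflection of direction d about unit normal n."""
    dn = sum(di * ni for di, ni in zip(d, n))
    return [di - 2.0 * dn * ni for di, ni in zip(d, n)]

def trace(origin, direction, max_bounces=8):
    """Follow one backward ray until it leaves the scene; its final
    direction would then select a location on the environment HDRI."""
    for _ in range(max_bounces):
        t = intersect_sphere(origin, direction, CENTRE, RADIUS)
        if t is None:
            return direction  # escaped: use direction for HDRI lookup
        hit = [o + t * d for o, d in zip(origin, direction)]
        # radius is 1, so (hit - centre) is already a unit normal
        normal = [h - c for h, c in zip(hit, CENTRE)]
        origin, direction = hit, reflect(direction, normal)
    return direction

# A ray aimed straight at the sphere is reflected back towards the eye:
print(trace([0.0, 0.0, 0.0], [0.0, 0.0, -1.0]))  # → [0.0, 0.0, 1.0]
```

In the full algorithm the scattering step is of course probabilistic (absorption, reflection or refraction, sampled per wavelength and polarisation state), whereas the toy above always reflects.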
Figure 114 and the related description provide additional information regarding the actual implementation of the algorithm. All parts of the algorithm were developed and implemented by the author. However, they cannot be presented in more detail here, as this would exceed the scope of this chapter.
(1) A sample ray originating at the eye point and passing through the screen pixel (x, y) is generated. The polarisation state is generally assumed to be unpolarised. The wavelength is randomly sampled from the global radiation spectrum but limited to the visible range.
(2) The collision detection algorithm is called (section 0) to detect intersections with the target objects.
(3) If a surface is hit, the light-surface-interaction class is called to determine the result of the scattering event. If the ray is reflected, refracted or transmitted, the collision detection is repeated. If the ray is absorbed, the tracing of the current ray is terminated, and a new ray is generated.
(4) The wavelength of the sampling ray is converted into an RGB vector. Note that the mapping of wavelengths into the RGB colour space is not unique. Therefore, the conversion provides just one plausible solution.
(5) Based on the final direction of the ray, a position on the environment HDRI is determined. The corresponding RGB colour vector is then obtained by calculating a weighted average of the four neighbouring pixels (anti-aliasing).
(6) The components of the environment colour vector are weighted using the components of the 𝜆[𝑟, 𝑔, 𝑏] vector. The resulting RGB vector is accumulated on the screen pixel.
(7) Convergence is monitored by applying a simple colour metric to the standard errors of the RGB channels of the screen pixel. If the determined colour deviations fall below a specified threshold, the sampling for this pixel is terminated. The standard error of the colours is again determined by applying the CLT (section 5.1.3). This approach minimises the number of sampling steps required for each pixel.
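The adaptive stopping criterion of step (7) can be sketched as follows. The sketch assumes a hypothetical `sample_ray` stand-in for the full backward trace of steps (1) to (6); all names, thresholds and sample limits are illustrative, not RadiCal's.

```python
import math
import random

def sample_ray(rng):
    """Placeholder for steps (1)-(6): pretend each backward-traced
    sample yields a noisy RGB contribution around some true colour."""
    return [0.5 + 0.1 * rng.gauss(0.0, 1.0) for _ in range(3)]

def sample_pixel(rng, threshold=0.005, min_samples=16, max_samples=10_000):
    """Accumulate samples until the standard error of the mean of every
    RGB channel (estimated via the central limit theorem) falls below
    the threshold, or until max_samples is reached."""
    n = 0
    mean = [0.0] * 3
    m2 = [0.0] * 3  # running sums of squared deviations (Welford)
    while n < max_samples:
        rgb = sample_ray(rng)
        n += 1
        for c in range(3):
            delta = rgb[c] - mean[c]
            mean[c] += delta / n
            m2[c] += delta * (rgb[c] - mean[c])
        if n >= min_samples:
            # standard error of the mean per channel: sqrt(var / n)
            sem = [math.sqrt(m2[c] / (n - 1) / n) for c in range(3)]
            if max(sem) < threshold:
                break
    return mean, n

rng = random.Random(42)
colour, samples = sample_pixel(rng)
print(samples, colour)
```

With the sample noise assumed here (standard deviation 0.1 per channel), the loop terminates after a few hundred samples, consistent with the 100 to 1000 samples per pixel reported below.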
The entire process is carried out for all pixels of the screen. The final image is further processed using commonly applied colour and brightness calibration as well as a gamma correction. These post-processing steps are not part of the description above. Depending on the characteristics of the target object, the contrast of the selected HDRI environment and the targeted colour noise threshold, typically 100 to 1000 samples per pixel are required. Hence, the total number of scattering events that must be processed for a single image typically exceeds one billion. The scattering events therefore cover an extensive range of potential wavelengths, angles and polarisation states. Any errors in the algorithms or numerical instabilities of the algebraic methods, such as algebraic singularities, would either cause a termination of the raytracing or lead to faulty results in the final rendering. Generating rendered images by performing backward raytracing with the RadiCal algorithm is therefore considered a powerful validation step. In addition to validating the implemented algorithms, the renderings also allow checking the correct and appropriate assignment of material properties.
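The post-processing mentioned above can be illustrated with a minimal tone-mapping sketch, assuming a simple exposure scaling followed by standard gamma encoding; this is not RadiCal's actual calibration pipeline, and the parameter values are illustrative.

```python
def tonemap(value, exposure=1.0, gamma=2.2):
    """Map one linear HDR channel value to a displayable 8-bit value:
    scale by exposure, clamp to [0, 1], then gamma-encode."""
    v = max(0.0, value * exposure)
    v = min(1.0, v)
    return round(255 * v ** (1.0 / gamma))

# HDR values above 1.0 (e.g. bright HDRI regions) clip to white:
print([tonemap(v) for v in (0.0, 0.18, 0.5, 1.0, 4.0)])
```

Gamma correction compensates for the non-linear luminance response of typical displays, so that the linear radiometric quantities accumulated on the screen pixels appear perceptually plausible.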

Exemplary renderings are shown in Figure 115 to Figure 117. The dashed circles in Figure 115 indicate regions with pronounced refraction, interreflections and total internal reflection. Figure 116 shows a rendering of the validation window model that was used for the full-system validation (chapter 8). In Figure 117, only one material's complex-valued refractive-index function is altered. The material is used for the panes of the left window, the shading slats of the right window and the octopus sculpture. It can be seen how the reflectivity, refractivity, transparency and colour change significantly. Note that no explicit RGB colour information is contained in any model. The colours arise naturally from the wavelength dependencies of the refractive index and the Fresnel functions.
