Illumination Illustrated for a Sphere

This notebook contains a full implementation of ambient, diffuse, and specular reflection, in color, for a scene consisting of one sphere. The lighting parameters and material properties, along with the camera setup, can all be adjusted to experiment with different possibilities. The resulting render is captured as an image and displayed inline.

Ross Beveridge, October 11, 2018

SageMath setup.

In this notebook we take advantage of the image support provided as part of SageMath. Also, if the image being rendered is 8x8 or smaller, a 3D interactive plot is constructed to help sanity check the relative positions of the camera, object, and lights.

In [108]:
from sage.repl.image import Image
from sage.plot.plot3d.shapes import *
from sage.plot.plot3d.base import SHOW_DEFAULTS
SHOW_DEFAULTS['aspect_ratio'] = (1,1,1)
SHOW_DEFAULTS['frame_aspect_ratio'] = (1,1,1)
SHOW_DEFAULTS['perspective_depth'] = false

Scene setup - Base Case

The next cell sets up the scene to be rendered. It includes the camera, the single object and its material properties, and finally the lights.

In [109]:
# Camera: eye position, look-at point, up vector, view-window bounds
# (left, right, bottom, top), near/far plane distances, and image resolution.
cam = {'eye'  : vector(RR, 3, (25, 25, 60)),
       'look' : vector(RR, 3, (25, 25, 10)),
       'up'   : vector(RR, 3, (0, 1, 0)),
       'bnds' : { 'l' :-1, 'r' : 1, 'b' : -1, 't' : 1},
       'nefa' : { 'near' : -10, 'far' : -60},
       'resl' : { 'width' : 64, 'height' : 64} } 
# The single object: a sphere with center 'c' and radius 'r'.
globe = {'c'  : vector(RR, 3, (25,25,10)), 'r' : 4.0 }
# Material: ambient, diffuse and specular reflection coefficients (RGB).
mat1  = {'ka' : vector(RR, 3, (0.2, 0.2, 0.2)),
         'kd' : vector(RR, 3, (0.7, 0.7, 0.7)),
         'ks' : vector(RR, 3, (0.5, 0.5, 0.5)) }
# Ambient light plus two point lights, each with a position 'p' and emission 'e'.
ambient = vector(RR, 3, (0.2, 0.2, 0.2))
lights  = [{'p' : vector(RR, 3, ( 5, 32, 30)), 
            'e' : vector(RR, 3, (0.5, 1.0, 0.5))},
           {'p' : vector(RR, 3, (45, 32, 30)), 
            'e' : vector(RR, 3, (1.0, 0.5, 0.5))} ]
# Specular (Phong) exponent.
phong  = 32

Incremental Changes Go Here

Working off the complete configuration above, the following cell is a convenient place to make changes and then see how the rendered image changes.

In [110]:
# Use the following to make basic camera specification and geometry changes

#cam['bnds']['l'] = -1.5;  cam['bnds']['r'] = 1.5;  
#cam['bnds']['b'] = -1;    cam['bnds']['t'] = 1;  
#cam['resl']['width'] = 300; cam['resl']['height'] = 200; 

cam['resl']['width'] = 128; cam['resl']['height'] = 128; 

'Unpacking' - Setup Code to Render an Example

The following code creates the data structures and variables required later to actually ray trace the scene. In particular, notice the camera setup, which should by now be familiar to you.
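As a reminder, the camera frame assembled in the next cell is the usual right-handed basis

$$\mathbf{W} = \frac{\mathbf{E}-\mathbf{L}}{\lVert\mathbf{E}-\mathbf{L}\rVert}, \qquad \mathbf{U} = \frac{\mathbf{up}\times\mathbf{W}}{\lVert\mathbf{up}\times\mathbf{W}\rVert}, \qquad \mathbf{V} = \mathbf{W}\times\mathbf{U},$$

where $\mathbf{E}$ is the eye and $\mathbf{L}$ the look-at point; these are the vectors CWv, CUv and CVv computed below.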

In [111]:
var('umin, umax, vmin, vmax, near, far')
CEv = cam['eye'] ; CLv = cam['look']; CUPv = cam['up']
CWv = CEv - CLv; CWv = CWv / CWv.norm();
CUv = CUPv.cross_product(CWv); CUv = CUv / CUv.norm();
CVv = CWv.cross_product(CUv);
bs = cam['bnds']; right = bs['r']; left = bs['l']; top = bs['t']; bottom = bs['b']
nf = cam['nefa']; near = nf['near']; far = nf['far']
rs = cam['resl']; width = rs['width']; height = rs['height']
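
If you want a quick optional sanity check (not part of the original notebook), the three basis vectors computed above should be mutually orthogonal unit vectors:

print(CUv.dot_product(CVv), CUv.dot_product(CWv), CVv.dot_product(CWv))   # all approximately 0
print(CUv.norm(), CVv.norm(), CWv.norm())                                 # all approximately 1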

Define a Pixel Ray Object

Define a function that creates an object representing a ray emanating from a pixel. This object is implemented as a Python dictionary with a base point and a direction vector, consistent with how we've been expressing rays throughout our treatment of ray casting. If the width of the image is small, the code below will produce a 3D visualization of the rays.
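Concretely, pixel $(i, j)$ is first mapped onto the view window,

$$p_x = \frac{i}{\text{width}-1}\,(r - l) + l, \qquad p_y = \frac{j}{\text{height}-1}\,(b - t) + t,$$

and the corresponding ray is the parametric line $\mathbf{r}(t) = \mathbf{L} + t\,\mathbf{U}$ for $t \ge 0$, with base point $\mathbf{L}$ on the near plane and unit direction $\mathbf{U}$ pointing away from the eye.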

In [112]:
def pixelRay(i,j) :
    px = RR(i/(width-1)*(right-left)+left)
    py = RR(j/(height-1)*(bottom-top)+top)
    # Naming is base point L and direction U
    Lv = CEv + (near * CWv) + (px * CUv) + (py * CVv)
    Uv = Lv - CEv; Uv = Uv / Uv.norm()
    return {'L' : Lv, 'U' : Uv}

if (width <= 8) : 
    rays =[pixelRay(i,j) for i in range(width) for j in range(height)]
    def gAxe(pt,c) : 
        return line3d([(0,0,0),pt],thickness=5,color=c)
    def axes() : 
        return list(map(gAxe,[(75,0,0),(0,75,0),(0,0,75)],['red','green','blue']))  # list so it concatenates with gRays below
    def gRay(r) :
        rayfar = r['L'] + r['U'] * abs((far-near))
        return [point(r['L'], size=10), arrow3d(r['L'], rayfar, width=4,color='orange')]
    gRays   = [e for sub in map(gRay, rays) for e in sub]
    gEye    = point(CEv, color='red', size=8)
    gGlobe  = Sphere(globe['r'],color=Color(list(mat1['kd']))).translate(globe['c'])
    gLights = [point(lights[i]['p'], color=Color(list(lights[i]['e'])),size=16) for i in range(len(lights))]
    figcon  = sum(gRays + axes()) + gGlobe + gEye + sum(gLights)
    figcon.show(aspect_ratio=1)

Compute Color for a Pixel

The key bit of code is the function that computes the illumination (red, green, blue) at a point of intersection between a ray and a sphere. This is a two-step process. First, is there an intersection? Second, if so, what amount of red, green, and blue should be returned?
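For reference, the intersection test below uses the standard geometric form. With $\mathbf{T} = \mathbf{C} - \mathbf{L}$ the vector from the ray base to the sphere center,

$$v = \mathbf{T}\cdot\mathbf{U}, \qquad d^2 = r^2 - \left(\mathbf{T}\cdot\mathbf{T} - v^2\right),$$

the ray misses the sphere when $d^2 < 0$; otherwise the nearer intersection is at parameter $t = v - d$ and the hit point is $\mathbf{L} + t\,\mathbf{U}$.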

Notice this code is hardwired for a single sphere in order to simplify the example. In other words, there is no loop over a set of objects that retains the one closest to the camera.

Also notice that the complete ambient, diffuse and specular illumination computation is included in the code below. Hence, you have a full working example of the concepts discussed in lecture 12.
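In symbols, and per color channel, the quantity accumulated by raySphereRGB below is

$$I \;=\; I_a\,k_a \;+\; \sum_{\ell} e_\ell \left[\, k_d\,(\hat{\mathbf{n}}\cdot\hat{\mathbf{l}}_\ell) \;+\; k_s\,(\hat{\mathbf{v}}\cdot\hat{\mathbf{r}}_\ell)^{p} \right],$$

where $I_a$ is the ambient light, $e_\ell$ the emission of light $\ell$, $\hat{\mathbf{n}}$ the unit surface normal, $\hat{\mathbf{l}}_\ell$ the unit vector toward the light, $\hat{\mathbf{v}}$ the unit vector toward the camera, $\hat{\mathbf{r}}_\ell = 2(\hat{\mathbf{n}}\cdot\hat{\mathbf{l}}_\ell)\hat{\mathbf{n}} - \hat{\mathbf{l}}_\ell$ the mirror reflection of the light direction, and $p$ the phong exponent. A light contributes only when $\hat{\mathbf{n}}\cdot\hat{\mathbf{l}}_\ell > 0$, and its specular term only when $\hat{\mathbf{v}}\cdot\hat{\mathbf{r}}_\ell > 0$.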

Lastly, some nice computational features of Sage and Python are used here to keep the code compact, for example the pairwise_product method on vectors. Play with these features enough to understand what they are doing and how they simplify the code; a short illustration follows.
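If pairwise_product is unfamiliar, here is a minimal sketch of what it does; the two vectors are made up purely for illustration:

a = vector(RR, 3, (0.2, 0.4, 0.6))
b = vector(RR, 3, (0.5, 0.5, 1.0))
a.pairwise_product(b)   # component-wise product: approximately (0.1, 0.2, 0.6)

This is exactly the operation needed to modulate a material coefficient, channel by channel, by a light's emission.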

In [113]:
def raySphereTest(ray, sph) : 
    r  = sph['r']
    Cv = sph['c']
    Lv = ray['L']
    Uv = ray['U']
    Tv = vector(RR, 3, (Cv - Lv))    # vector from ray base to sphere center
    v    = Tv.dot_product(Uv)        # distance along the ray to the point of closest approach
    csq  = Tv.dot_product(Tv)        # squared distance from ray base to sphere center
    disc = r^2 - (csq - v^2)         # squared half-chord length; negative means a miss
    if (disc < 0) :
        return [False]
    else :
        d  = sqrt(disc)
        t  = v - d                   # parameter of the nearer of the two intersections
        pt = Lv + t * Uv             # the intersection point itself
        return [True, t, pt] 
        
def raySphereRGB(ray, sph) :
    hitp = raySphereTest(ray, sph)
    if hitp[0] :
        ptos = hitp[2]                                        # point on the sphere
        snrm = ptos - sph['c'] ; snrm = snrm / snrm.norm()    # unit surface normal
        color = ambient.pairwise_product(mat1['ka'])          # ambient term
        for lt in lights :
            ptL = lt['p']
            emL = lt['e']
            toL = ptL - ptos; toL = toL / toL.norm()          # unit vector toward the light
            if (snrm.dot_product(toL) > 0.0) :
                # Diffuse term, scaled by the cosine of the angle to the light.
                color += mat1['kd'].pairwise_product(emL) * snrm.dot_product(toL)
                toC    = ray['L'] - ptos; toC = toC / toC.norm()    # unit vector toward the camera
                spR    = (2 * snrm.dot_product(toL) * snrm) - toL   # mirror reflection of the light direction
                CdR    = toC.dot_product(spR)
                if (CdR > 0.0) :
                    # Specular term, raised to the Phong exponent.
                    color += mat1['ks'].pairwise_product(emL) * CdR**phong
        return [True, color]
    else :
        return [False]

The Ray Tracer

The following seven lines of code implement the top level of a ray tracer. In general, your own code at the top level should be comparably simple: if proper rules of information hiding are observed when writing a ray tracer, then at the highest level the process is as simple, and as compact, as shown here. In particular, you will want to study the pixel value assignment line. What is going on in that line, involving both the tuple and map commands, is extremely useful and also not immediately obvious to programmers unfamiliar with code of this kind; a step-by-step reading is sketched just below.
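As a rough, step-by-step reading of that assignment (a sketch of what the one-liner does, not code taken from the notebook, and assuming res[1] holds the raw floating point color vector):

rgb = []
for x in res[1] :                   # one color channel at a time
    s = round(255.0 * x)            # scale from [0,1] up to [0,255]
    s = max(0, min(255, s))         # clamp to a valid byte value
    rgb.append(ZZ(s))               # convert to a Sage integer
pix[i,j] = tuple(rgb)               # the pixel assignment expects a tuple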

Finally, notice how the SageMath Image class handles a lot of complexity for us, including the display of the resulting color image.

In [114]:
img = Image('RGB', (width, height), 'black')
pix = img.pixels()
for i in range(width) :
    for j in range(height) :
        res = raySphereRGB(pixelRay(i, j), globe)
        if res[0] :
            pix[i,j] = tuple(map(lambda x : ZZ(max(0,min(255,round(255.0 * x)))), res[1]))
img
Out[114]:
In [115]:
img.save('test1011.png')