Essential Mathematics for Games and Interactive Applications

Hopefully, this book will leave readers with a desire to learn even more details and the breadth of the mathematics involved in creating high-performance, high-quality 3D games.

James M. Van Verth, Lars M. Bishop

705 Pages


  • 20 Feb 2015

  • Page - 2

    This excellent volume is unique in that it covers not only the basic techniques of computer graphics and game development, but also provides a thorough and rigorous—yet very readable—treatment of the underlying mathematics. Fledgling graphics and games developers will find it a valuable introduction; experienced developers will find it an invaluable reference. Everything is here, from the detailed numeric issues of IEEE floating point notation, to the correct way to use quaternions and …

  • Page - 3

    This page intentionally left blank

  • Page - 4

    Essential Mathematics for Games and Interactive Applications: A Programmer’s Guide, Second Edition

  • Page - 5

    This page intentionally left blank

  • Page - 6

    Essential Mathematics for Games and Interactive Applications: A Programmer’s Guide, Second Edition
    James M. Van Verth, Lars M. Bishop
    AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
    Morgan Kaufmann Publishers is an imprint of Elsevier

  • Page - 7

    Acquisitions Editor: Laura Lewin
    Assistant Editor: Chris Simpson
    Publishing Services Manager: George Morrison
    Senior Production Manager: Paul Gottehrer
    Cover Designer: Joanne Blank
    Composition: diacriTech
    Interior printer: RR Donnelley
    Cover printer: Phoenix Color
    Morgan Kaufmann Publishers is an imprint of Elsevier. 30 Corporate Drive, Suite 400, Burlington, MA 01803, USA. This book is printed on acid-free paper. ∞ Copyright © 2008 by Elsevier Inc. All rights reserved. Designations used by companies to distinguish their …

  • Page - 8

    Dedications
    To Mur and Fiona, for allowing me to slay the monster one more time. —Jim
    To Jen, who constantly helps me get everything done; and to Nadia and Tasha, who constantly help me avoid getting any of it done on time. —Lars

  • Page - 9

    About the Authors
    James M. Van Verth is an OpenGL Software Engineer at NVIDIA, where he works on device drivers for NVIDIA GPUs. Prior to that, he was a founding member of Red Storm Entertainment, where he was a lead engineer for eight years. For the past nine years he also has been a regular speaker at the Game Developers Conference, teaching the all-day tutorials “Math for Game Programmers” and “Physics for Game Programmers,” on which this book is based. His background includes a B.A. in …

  • Page - 10

    Contents
    Preface xix
    Introduction xxiii
    Chapter 1 Real-World Computer Number Representation 1
    1.1 Introduction 1
    1.2 Representing Real Numbers 2
    1.2.1 Approximations 2
    1.2.2 Precision and Error 3
    1.3 Floating-Point Numbers 4
    1.3.1 Review: Scientific Notation 4
    1.3.2 A Restricted Scientific Notation 5
    1.4 Binary “Scientific Notation” 6
    1.5 IEEE 754 Floating-Point Standard 9
    1.5.1 Basic Representation 9
    1.5.2 Range and Precision 11
    1.5.3 Arithmetic Operations 13
    1.5.4 Special Values 16
    1.5.5 Very Small Values 19
    1.5.6 Catastrophic …

  • Page - 11

    Chapter 2 Vectors and Points 35
    2.1 Introduction 35
    2.2 Vectors 36
    2.2.1 Geometric Vectors 36
    2.2.2 Linear Combinations 39
    2.2.3 Vector Representation 40
    2.2.4 Basic Vector Class Implementation 42
    2.2.5 Vector Length 44
    2.2.6 Dot Product 47
    2.2.7 Gram-Schmidt Orthogonalization 51
    2.2.8 Cross Product 53
    2.2.9 Triple Products 56
    2.2.10 Real Vector Spaces 59
    2.2.11 Basis Vectors 62
    2.3 Points 63
    2.3.1 Points as Geometry 64
    2.3.2 Affine Spaces 66
    2.3.3 Affine Combinations 68
    2.3.4 Point Implementation 70
    2.3.5 Polar and Spherical …

  • Page - 12

    3.2.7 Performing Vector Operations with Matrices 97
    3.2.8 Implementation 98
    3.3 Linear Transformations 101
    3.3.1 Definitions 101
    3.3.2 Null Space and Range 103
    3.3.3 Linear Transformations and Basis Vectors 104
    3.3.4 Matrices and Linear Transformations 106
    3.3.5 Combining Linear Transformations 108
    3.4 Systems of Linear Equations 110
    3.4.1 Definition 110
    3.4.2 Solving Linear Systems 112
    3.4.3 Gaussian Elimination 113
    3.5 Matrix Inverse 117
    3.5.1 Definition 117
    3.5.2 Simple Inverses 120
    3.6 Determinant …

  • Page - 13

    4.5 Object Hierarchies 169
    4.6 Chapter Summary 171
    Chapter 5 Orientation Representation 173
    5.1 Introduction 173
    5.2 Rotation Matrices 174
    5.3 Fixed and Euler Angles 174
    5.3.1 Definition 174
    5.3.2 Format Conversion 177
    5.3.3 Concatenation 178
    5.3.4 Vector Rotation 178
    5.3.5 Other Issues 179
    5.4 Axis–Angle Representation 181
    5.4.1 Definition 181
    5.4.2 Format Conversion 182
    5.4.3 Concatenation 184
    5.4.4 Vector Rotation 184
    5.4.5 Axis–Angle Summary 185
    5.5 Quaternions 185
    5.5.1 Definition 185
    5.5.2 Quaternions as Rotations …

  • Page - 14

    6.3 Projective Transformation 212
    6.3.1 Definition 212
    6.3.2 Normalized Device Coordinates 216
    6.3.3 View Frustum 216
    6.3.4 Homogeneous Coordinates 220
    6.3.5 Perspective Projection 221
    6.3.6 Oblique Perspective 228
    6.3.7 Orthographic Parallel Projection 231
    6.3.8 Oblique Parallel Projection 232
    6.4 Culling and Clipping 235
    6.4.1 Why Cull or Clip? 235
    6.4.2 Culling 238
    6.4.3 General Plane Clipping 239
    6.4.4 Homogeneous Clipping 244
    6.5 Screen Transformation 246
    6.5.1 Pixel Aspect Ratio 248
    6.6 Picking …

  • Page - 15

    7.6.2 Shader Input and Output Values 279
    7.6.3 Shader Operations and Language Constructs 280
    7.7 Vertex Shaders 280
    7.7.1 Vertex Shader Inputs 280
    7.7.2 Vertex Shader Outputs 281
    7.7.3 Basic Vertex Shaders 282
    7.7.4 Linking Vertex and Fragment Shaders 282
    7.8 Fragment Shaders 283
    7.8.1 Fragment Shader Inputs 283
    7.8.2 Fragment Shader Outputs 284
    7.8.3 Compiling, Linking, and Using Shaders 284
    7.8.4 Setting Uniform Values 286
    7.9 Basic Coloring Methods 287
    7.9.1 Per-Object Colors 288
    7.9.2 Per-Vertex Colors …

  • Page - 16

    8.4 Types of Light Sources 319
    8.4.1 Directional Lights 320
    8.4.2 Point Lights 321
    8.4.3 Spotlights 327
    8.4.4 Other Types of Light Sources 330
    8.5 Surface Materials and Light Interaction 331
    8.6 Categories of Light 332
    8.6.1 Emission 332
    8.6.2 Ambient 332
    8.6.3 Diffuse 334
    8.6.4 Specular 338
    8.7 Combined Lighting Equation 343
    8.8 Lighting and Shading 348
    8.8.1 Flat-Shaded Lighting 349
    8.8.2 Per-Vertex Lighting 350
    8.8.3 Per-Fragment Lighting 354
    8.9 Textures and Lighting 358
    8.9.1 Basic Modulation 359
    8.9.2 Specular …

  • Page - 17

    9.6.3 Interpolating Texture Coordinates 392
    9.6.4 Other Sources of Texture Coordinates 394
    9.7 Evaluating the Fragment Shader 395
    9.8 Rasterizing Textures 395
    9.8.1 Texture Coordinate Review 396
    9.8.2 Mapping a Coordinate to a Texel 396
    9.8.3 Mipmapping 404
    9.9 From Fragments to Pixels 415
    9.9.1 Pixel Blending 416
    9.9.2 Antialiasing 420
    9.9.3 Antialiasing in Practice 427
    9.10 Chapter Summary 428
    Chapter 10 Interpolation 431
    10.1 Introduction 431
    10.2 Interpolation of Position 433
    10.2.1 General Definitions …

  • Page - 18

    11.2 Probability 493
    11.2.1 Basic Probability 494
    11.2.2 Random Variables 497
    11.2.3 Mean and Standard Deviation 501
    11.2.4 Special Probability Distributions 502
    11.3 Determining Randomness 505
    11.3.1 Chi-Square Test 506
    11.3.2 Spectral Test 512
    11.4 Random Number Generators 513
    11.4.1 Linear Congruential Methods 516
    11.4.2 Lagged Fibonacci Methods 520
    11.4.3 Carry Methods 521
    11.4.4 Mersenne Twister 523
    11.4.5 Conclusions 526
    11.5 Special Applications 527
    11.5.1 Integers and Ranges of Integers …

  • Page - 19

    12.4 A Simple Collision System 588
    12.4.1 Choosing a Base Primitive 589
    12.4.2 Bounding Hierarchies 590
    12.4.3 Dynamic Objects 591
    12.4.4 Performance Improvements 593
    12.4.5 Related Systems 596
    12.4.6 Section Summary 599
    12.5 Chapter Summary 599
    Chapter 13 Rigid Body Dynamics 601
    13.1 Introduction 601
    13.2 Linear Dynamics 602
    13.2.1 Moving with Constant Acceleration 602
    13.2.2 Forces 605
    13.2.3 Linear Momentum 606
    13.2.4 Moving with Variable Acceleration 607
    13.3 Numerical Integration 609
    13.3.1 Definition …

  • Page - 20

    Preface
    Writing a book is an adventure. To begin with, it is a toy and an amusement; then it becomes a mistress, and then it becomes a master, and then a tyrant. The last phase is that just as you are about to be reconciled to your servitude, you kill the monster, and fling him out to the public. — Sir Winston Churchill
    The Adventure Begins
    As humorous as Churchill’s statement is, there is a certain amount of truth to it; writing this book was indeed an adventure. There is something about the …

  • Page - 21

    …of the basics of rendering on handheld devices as part of a SIGGRAPH course. Jim and Lars discussed the fact that handheld 3D rendering had brought back some of the “lost arts” of 3D programming, and that this might be included in a book on mathematics for game programming. Thus, a co-authorship was formed. Lars joined Jim in teaching the GDC 2003 version of what was now called “Essential Math for Game Programmers,” and simultaneously joined Jim to help with the book, helping to …

  • Page - 22

    …system goes wrong (and it always does), the best programmers are never satisfied with “I fixed it, but I’m not sure how;” without understanding, there can be no confidence in the solution, and nothing new is learned. Such programmers are driven by the desire to understand what went wrong, how to fix it, and learning from the experience. No other tool in 3D programming is quite as important to this process as the mathematical bases behind it.
    Those Who Helped Us Along the Road
    In …

  • Page - 23

    In addition, Jim would like to thank Mur and Fiona, his wife and daughter, who were willing to put up with this a second time after his long absences the first time through; his sister, Liz, who provided illustrations for an early draft of this text; and his parents, Jim and Pat, who gave him the resources to make it in the world and introduced him to the world of computers so long ago. Lars would like to thank Jen, his wife, who somehow had the courage to survive a second edition of the …

  • Page - 24

    Introduction
    The (Continued) Rise of 3D Games
    Over the past decade or so (driven by increasingly powerful computer and video game console hardware), three-dimensional (3D) games have expanded from custom-hardware arcade machines to the realm of hardcore PC games, to consumer set-top video game consoles, and even to handheld devices such as personal digital assistants (PDAs) and cellular telephones. This explosion in popularity has led to a corresponding need for programmers with the ability to …

  • Page - 25

    …mathematics needed to create 3D games, as well as an understanding of how these mathematical bases actually apply to games and graphics. The book provides not only theoretical mathematical background, but also many examples of how these concepts are used to affect how a game looks (how it is rendered) and plays (how objects move and react to users). Each type of reader is likely to find sections of the book that, for them, provide mainly refresher courses, a new understanding of the …

  • Page - 26

    3. Detect collisions between the characters and objects (e.g., the soccer ball entering the goal or two players sliding into one another).
    4. React to these collisions and basic forces such as gravity in the scene in a physically correct manner (e.g., the soccer ball in flight).
    All of these steps will need to be done for each frame to present the player with a convincing game experience. Thus, the code to implement the steps above must be correct and optimal.
    Chapters 1–5: The …

  • Page - 27

    …of color and the concept of shaders, which are short programs that allow modern graphics hardware to draw the scene objects to the display device. Chapter 8 explains how to use these programmable shaders to implement simple approximations of real-world lighting. The rendering section concludes with Chapter 9, which details the methods used by low-level rendering systems to draw to the screen. An understanding of these details can allow programmers to create much more efficient and …

  • Page - 28

    …to illustrate the concepts in a way that is analogous to the static figures in the book itself. Throughout the book, you will find references to interactive demos that may be found on the CD-ROM. Whenever a topic is illustrated with an interactive demo, a special icon like the one seen next to this paragraph will appear in the margin.
    Support Libraries
    In addition to the source code for each of the demos, the CD-ROM includes the supporting libraries used to …

  • Page - 29

    The animation demos use a shared library called IvCurves, which includes classes that implement spline curves, the basic objects used to animate position. IvCurves is built upon IvMath, extending this basic functionality to include animation. As with IvMath, the IvCurves library is likely to be useful beyond the scope of the book, as these classes are flexible enough to be used (along with IvMath) in other applications. Finally, the simulation demos use a shared library called …

  • Page - 30

    …has an associated set of exercises, ranging from easy to hard questions, that should help those readers interested in testing their understanding of the material within. Certain chapters also have supplemental material that unfortunately didn’t make its way into the book proper due to space considerations. Those chapters have notes at their end indicating that such material is available on the CD-ROM.
    References and Further Reading
    Hopefully, this book will leave readers with a …

  • Page - 31

    For further reading, we suggest several books that cover topics related to this book in much greater detail. In most cases they assume that the reader is familiar with the concepts discussed in this book. David Eberly’s 3D Game Engine Design [25] discusses the design and implementation of a full game engine, focusing mostly on graphics and animation. Books by Gino van den Bergen [108] and Christer Ericson [32] cover topics in interactive collision detection. Finally, Eberly [27] and …

  • Page - 32

    Chapter 1 Real-World Computer Number Representation
    1.1 Introduction
    In this chapter we’ll discuss what is perhaps the most fundamental basis upon which three-dimensional (3D) graphics pipelines are built: computer representation of numbers, particularly real numbers. While 3D programmers often use the computer representations (approximations) of real numbers successfully without any understanding of how they are implemented, this can lead to subtle bugs and performance problems at inopportune stages …

  • Page - 33

    …3D pipelines. A brief case study of floating-point-related performance issues in a real application is also presented.
    We will assume that the reader is familiar with the basic concepts of integer and whole-number representations on modern computers, including signed representation via two’s complement, range, overflow, common storage lengths (8, 16, and 32 bits), standard C and C++ basic types (int, unsigned int, short, etc.), and type …

  • Page - 34

    …In order to adequately understand the representations of real numbers, we need to understand the concept of precision and error.
    1.2.2 Precision and Error
    For any numerical representation system, we imagine a generic function Rep(A), which returns the value in that system that is closest to the value A. In a perfect representation system, Rep(A) = A for all values of A. When representing real numbers on a computer, however, even limiting range to finite extremes will not …

  • Page - 35

    …division, relative error cannot be computed for a value that approximates zero. It is a measure of the ratio of the error to the magnitude of the value being approximated. Revisiting our previous example, the relative errors in each case would be (approximately)
    RelErrorSun = 1 km / 149,597,871 km ≈ 7 × 10^−9
    RelErrorApple = 0.00011 km / 0.00011 km = 1.0
    Clearly, relative error is a much more useful error metric in this case. The Earth–sun distance error is …

  • Page - 36

    …Any decimal number can be represented in this notation (other than 0, which is simply represented as 0.0), and the representation is unique for each number. In other words, for two numbers written in this form of scientific notation, the numbers are equal if and only if their mantissas and exponents are equal. This uniqueness is a result of the requirements that the exponent be an integer and that the mantissa be “normalized” (i.e., have magnitude in the range [1.0, …

  • Page - 37

    …there are a limited number of values that could ever be represented exactly by this system, namely:
    (exponents) × (mantissas) × (exponent signs) × (mantissa signs) = (10^2) × (9 × 10^3) × (2) × (2) = 3,600,000
    Note that the leading digit of the mantissa must be nonzero (since the mantissa is normalized), so that there are only nine choices for its value [1, 9], leading to 9 × 10 × 10 × 10 = 9,000 possible mantissas. This adds finiteness to both the range …

  • Page - 38

    …Mantissa is a bit more complicated. It is an (M + 1)-bit number whose most significant bit is 1. Mantissa is actually a “fixed-point” number. Fixed-point numbers are based on a very simple observation with respect to computer representation of integers. In the standard binary representation, each bit represents twice the value of the bit to its right, with the least significant bit representing 1. The following diagram shows these powers of two for a standard …

  • Page - 39

    …defined to be 1, so the resulting fixed-point mantissa is in the range
    1.0 ≤ mantissa ≤ 2.0 − 1/2^M
    Put together, the format involves M + E + 3 bits (M + 1 for the mantissa, E for the exponent, and two for the signs). Creating an example that is analogous to the preceding decimal case, we analyze the case of M = 3, E = 2:
    ±1.010 × 2^(±01)
    Any value that can be represented by this system can be represented uniquely by 8 bits. The number of values that …

  • Page - 40

    1.5 IEEE 754 Floating-Point Standard91.5 IEEE 754 Floating-Point StandardBy the early to mid-1970s, scientists and engineers were using floating-point very frequently to represent real numbers; at the time, higher-poweredcomputers even included special hardware to accelerate floating-pointcalculations. However, these same scientists and engineers were finding thelack of a floating-point standard to be problematic. Their complex (andoften very important) numerical simulations were producing read more..

  • Page - 41

    …negative exponents is handled in the exponent itself (and is discussed next). The only difference between X and −X in IEEE floating-point is the high-order bit. A sign bit of 0 indicates a positive number, and a sign bit of 1 indicates a negative number.
    This sign bit format allows for some efficiencies in creating a floating-point math system either in hardware or software. To negate a floating-point number, simply “flip” the sign bit, …

  • Page - 42

    …maximum exponents of −127 (= 0 − 127) and 128 (= 255 − 127), respectively. However, for reasons that will be explained, the minimum and maximum values (−127 and 128) are reserved for special cases, leading to an exponent range of [−126, 127]. As a reference, these base-2 exponents correspond to base-10 exponents of approximately [−37, 38].
    The mantissa is normalized (in almost all cases), as in our discussion of decimal scientific notation (where the …

  • Page - 43

    …The minimum and maximum single-precision floating-point values are then
    ±(2.0 − 1/2^23) × 2^127 ≈ ±3.402823466 × 10^38
    The precision of single-precision floating-point can be loosely approximated as follows: For a given normalized mantissa, the difference between it and its nearest neighbor is 2^−23. To determine the actual spacing between a floating-point number and its neighbor, the exponent must be known. Given an exponent E, the difference …

  • Page - 44

    …The relative error of representation is thus generally constant, regardless of the magnitude of A.
    1.5.3 Arithmetic Operations
    In the next several sections we discuss the basic methods used to perform common arithmetic operations upon floating-point numbers. While few users of floating-point will ever need to implement these operations at a bitwise level themselves, a basic understanding of the methods is a pivotal step toward being able to understand the …

  • Page - 45

    …5. The resulting mantissa M_{A+B} may not be normalized (it may have an integral value of 2 or 3). If this is the case, shift M_{A+B} to the right 1 bit and add 1 to E_{A+B}.
    Note that there are some interesting special cases implicit in this method. For example, we are shifting the smaller number’s mantissa to the right to align the radix points. If the two numbers differ in exponents by more than the number of mantissa bits, then the smaller number will …

  • Page - 46

    …Multiplication
    Multiplication is actually rather straightforward with IEEE floating-point numbers. Once again, the three components that must be computed are the sign, the exponent, and the mantissa. As in the previous section, we will give the example of multiplying two floating-point numbers, A and B.
    Owing to the fact that an explicit sign bit is used, the sign of the result may be computed simply by computing the exclusive-OR of the sign bits, producing a …

  • Page - 47

    …The specification defines these modes with specific references to bitwise rounding methods that we will not discuss here, but the basic ideas are quite simple. We break the mantissa into the part that can be represented (the leading 1 along with the next 23 most significant bits), which we call M, and the remaining lower-order bits, which we call R. Round toward 0 is also known as chopping and is the simplest to understand; in this mode, M is …

  • Page - 48

    …Another issue with respect to floating-point zero arises from the fact that IEEE floating-point numbers have an explicit sign bit. The IEEE specification defines both positive and negative 0, differentiated by only the sign bit. To avoid very messy code, the specification does require that floating-point comparisons of positive zero to negative zero return “equal.” However, the bitwise representations are distinct, which means that applications …

  • Page - 49

    …exponents. For example, 1.0 × 10^38 is just within the range of single-precision floating-point, but in single precision,
    (1.0 × 10^38)^2 = 1.0 × 10^76 ≈ ∞fp
    The behavior of infinity is defined by the standard as follows (the standard covers many more cases, but these are representative):
    ∞fp − P = ∞fp
    P / ∞fp = +0
    −P / ∞fp = −0
    where 0 < P < ∞fp
    The bitwise representations of ±∞fp use the reserved exponent value 128 and all explicit mantissa …

  • Page - 50

    …In each of these cases, none of the floating-point values we have discussed will accurately represent the situation. Here we need a value that indicates the fact that the desired computation cannot be represented as a real number. The IEEE specification includes a special pair of values for these cases, known collectively as Not a Numbers (NaNs). There are two kinds of NaNs: quiet (or silent) NaN (QNaN) and signaling NaN (SNaN). Compare the …

  • Page - 51

    …mantissa would have only the implicit units bit set, leading to a value of
    Fmin = 2^0 × 2^−126 = 2^−126
    The largest value smaller than this in a normalized floating-point system would be 0.0. However, the smallest value larger than Fmin would differ by only 1 bit from Fmin — the least significant mantissa bit would be set. This value, which we will call Fnext, would be simply:
    Fnext = (2^0 + 2^−23) × 2^−126 = 2^−126 + 2^−149 = Fmin + 2^−149
    This …

  • Page - 52

    …which would be returned as zero on a flush-to-zero floating-point system. While this is a contrived example, it can be seen that any pair of nonequal numbers whose difference has a magnitude less than 2^−126 would demonstrate this problem. There is a solution to this and other flush-to-zero issues, however. The solution is known as gradual underflow, and it is discussed in the next section.
    Denormals and Gradual Underflow
    The IEEE specification specifies …

  • Page - 53

    …1.5.6 Catastrophic Cancelation
    We have used relative error as a metric of the validity of the floating-point representation of a given number. As we have already seen, converting real numbers A and B into the closest floating-point approximations Afp and Bfp generally results in some amount of relative representation error, which we compute as:
    RelErrA = (A − Afp) / A
    RelErrB = (B − Bfp) / B
    These relative representation errors accurately represent how well …

  • Page - 54

    …We can clearly see that in terms of real numbers,
    A = 3.0
    B = 10,000,001.5
    However, if we look at the single-precision floating-point representations, we get
    Afp = 3.0
    Bfp = 10,000,002.0
    A is represented exactly, but B is not, giving a relative error of representation for Bfp of
    RelErrB = 0.5 / 10^7 = 5 × 10^−8
    Quite a small relative error. However, if we compute the distances A − A′ and B − B′ in floating-point, the story is very different:
    Afp − A′fp = 3.0 − 1.5 = 1.5 = δ
    Bfp …

  • Page - 55

    …In the case of B − B′, almost all of the original mantissa bits in the operands were canceled out in the subtraction, leaving the least significant bits of the operands as the most significant bit of the result. Basically none of the fractional bits of the resulting mantissa were actual data — the system simply shifted in zeros. The precision of such a result is very low, indeed. This is catastrophic cancelation; the significant bits are all …

  • Page - 56

    …onset of precision or range issues is to switch the code to use double-precision floating-point values in the offending section of code (or sometimes even throughout the entire system). While double precision can solve almost all range issues and many precision issues (though catastrophic cancelation can still persist) in interactive 3D applications, there are several drawbacks that should be considered prior to its use:
    ■ Memory. Since double-precision values require …

  • Page - 57

    …Stepping in a debugger, the following will happen on many major compilers and systems:
    1. After the initial assignment, fHuge = 1.0e30, as expected.
    2. After the multiplication, fHuge = ∞fp, as expected.
    3. After the division, fHuge = 1.0e30!
    This seems magical. How can the system divide the single value ∞fp and get back the original number? A look at the assembly code gives a hint. The basic steps the compiler generates are as follows:
    1. Load …

  • Page - 58

    …not imply much about the speed of another (e.g., square root). Finally, not all input data are to be considered equal in terms of performance. The following sections describe examples of some real-world performance pitfalls found in floating-point implementations.
    Performance of Denormalized Numbers
    During the course of creating a demo for a major commercial 3D game engine, one of the authors found that in some conditions, the performance of the demo dropped almost …

  • Page - 59

    28 Chapter 1 Real-World Computer Number Representation// End "normal" timer here// Start "denormal" timer herefor (i = 0; i < 10000; i++){// 1.0e-40f is denormalized in single precisionfTest = TestFunction(1.0e-40f);}// End "denormal" timer here}Having verified that the assembly code generated by the optimizer didindeed call the desired function the correct number of times with the desiredarguments, they found that the denormal loop took 30 times as long as read more..

  • Page - 60

    …uninformed developer with a working program that exhibits horrible floating-point performance, in some cases hundreds of times slower than could be expected from a hardware FPU.
    It’s worth reiterating that not all FPUs support both single and double precision. Some major game consoles, for example, will generate FPU code for single-precision values and emulation code for double-precision values. As a result, careless use of double precision can lead to much slower …

  • Page - 61

    …special vector processor that can execute parallel math operations on four floating-point values, packed into a 128-bit register. The SSE instructions were specifically targeted at 3D games and multimedia, and this is evident from even a cursory view of the design. Several design decisions related to the special-purpose FPU merit mentioning here:
    ■ The original SSE (Pentium III) instructions can only support 32-bit floating-point values, not …

  • Page - 62

    …conditions in the geometry pipelines. Thus, these hardware design decisions tend to merely reflect the common practices of game programmers, rather than adding new limitations upon them.
    1.6.4 Graphics Processing Units and Half-Precision Floating-Point Formats
    “Half-precision” or fp16 floating-point numbers refer to a de facto standard format for floating-point values that can fit in 16 bits of storage. While not specified by the IEEE 754 specification and not …

  • Page - 63

    …exceptional behaviors may be found in “GPU Image Processing in Apple’s Motion” in Pharr [92].
    The reduced size of the fp16 comes with significantly reduced precision and range when compared to even a single-precision 32-bit floating-point format. Assuming IEEE-style specials and denormals, the extrema of fp16 are:
    Maximum representable value: 65,504
    Smallest positive value: 2^−25 ≈ 3.0 × 10^−8
    Largest consecutive integer: 2,048
    These limits …

  • Page - 64

    …1.8 Chapter Summary
    In this chapter we have discussed the details of how computers represent real numbers. These representations have inherent limitations that any serious programmer must understand in order to use them efficiently and correctly. Floating-point presents subtle limitations, especially issues of limited precision. We have also discussed the basics of error metrics for number representations.
    Hopefully, this chapter has instilled two important pieces of …

  • Page - 65

    This page intentionally left blank

  • Page - 66

    Chapter 2 Vectors and Points
    2.1 Introduction
    The two building blocks of most objects in our interactive digital world are points and vectors. Points represent locations in space, which can be used either as measurements on the surface of an object to approximate the object’s shape (this approximation is called a model), or as simply the position of a particular object. We can manipulate an object indirectly through its position or by modifying its points directly. Vectors, on the other hand, represent …


    36 Chapter 2 Vectors and Points2.2 VectorsOne might expect that we would cover points first since they are the buildingblocks of our standard model, but in actuality the basic unit of most of themathematics we’ll discuss in this book is the vector. We’ll begin by discussingthe vector as a geometric entity since that’s primarily how we’ll be using it,and it’s more intuitive to think of it that way. From there we’ll present howwe can represent vectors algebraically and how that allows read more..


    2.2 Vectors37change. If we have an object moving through space, we can assign a velocityvector to the object, which represents a change in position. We can displacethe object by adding the velocity vector to the object’s location to get a newlocation. Vectors also can be used to represent change in other vectors. Forexample, we can modify our velocity vector by adding another to it; the secondvector is called acceleration.We can perform arithmetic operations on vectors just as we can with read more..


    38 Chapter 2 Vectors and Pointswwvvv2wv1wFigure2.3 Vector addition and subtraction.uwvu1v1wv1wu1vFigure2.4 Associative property of vector addition.We can verify this informally by drawing a few test cases. For example, ifwe examine Figure 2.3 again, we can see that one path along the parallelogramrepresents v+ wand the other represents w+ v. The resulting vector is thesame in both cases. Figure 2.4 presents the associative property in a similarfashion.The other basic operation is scalar read more..


    2.2 Vectors39Figure2.5 Scalar multiplication.7. a(v+ w)= av+ aw(distributive property).8. 1· v= v(multiplicative identity).As with the additive rules, diagrams can be created that provide a certainamount of intuitive understanding.2.2.2 Linear CombinationsOur definitions of vector addition and scalar multiplication can be used todescribe some special properties of vectors. Suppose we have a set Sof nvectors, where S={v0,..., vn−1}. We can combine these to create a newvector vusing the read more..


    40 Chapter 2 Vectors and Pointsv1v0Figure2.6 Two vectors spanning a plane.v1v0v2Figure2.7 Linearly dependent set of vectors.2.2.3 Vector RepresentationIn symbolic mathematics and (more important for our purposes) in the com-puter, representing vectors graphically is not convenient. Instead we define aset of linearly independent vectors known as a basis, and define our remainingvectors in terms of those. So for example, for 3D space (formally representedasR3) we can define three vectors i, j, read more..


    2.2 Vectors41zxykjiFigure2.8 Standard 3D basis vectors.known and fixed, we just store the x, y, and zvalues and use them to representour vector numerically. In this way a 3D vector vis represented by an orderedtriple (x,y,z). These are known as the vector components. Our basis vectors i,j, and kwill be represented with components (1, 0, 0), (0, 1, 0), and (0, 0, 1),respectively.We can do the same for two-dimensional (2D) space, orR2, by using asour basis{i, j}, where i= (1, 0)and j= (0, 1), and read more..


    42 Chapter 2 Vectors and PointsScalar multiplication works similarly:av= a(xi+ yj+ zk)= a(xi)+ a(yj)+ a(zk)= (ax)i+ (ay)j+ (az)kAnd again, pulling out i, j, and kgives usa(x, y, z)= (ax, ay, az)(2.2)A more formal discussion of basis vectors can be found in Section Basic Vector Class ImplementationSource CodeLibraryIvMathFilenameIvVector3Now that we’ve presented an algebraic representation for vectors, we can talkabout how we will store them in the computer. In our case we’ll read more..


    2.2 Vectors43We can observe a few things about this declaration. First, we declared ourmember variables as a type float. This is the single-precision IEEE floating-point representation for real numbers, as discussed in Chapter 1. While notas precise as double-precision floating-point, it has the advantage of beingcompact and compatible with standard representations on most graphicshardware.The second thing to notice is that, unlike the previous edition of this book,we’re making our member read more..


    44 Chapter 2 Vectors and Points{return IvVector3( a*vector.x, a*vector.y, a*vector.z);}These methods are given friend access by the class to allow use of theprivate member variables.Similar operators for postmultiplication and division by a scalar are alsoprovided within the library; their declarations are:IvVector3 operator*( const IvVector3& vector, float scalar );IvVector3 operator/( const IvVector3& vector, float scalar );IvVector3& operator*=( IvVector3& vector, float scalar read more..


    2.2 Vectors45One that we’ll use more often is the Euclidean norm, also known as the 2norm or just length. If we give no indication of which type of norm we’reusing, this is usually what we mean.We derive the Euclidean norm as follows. Suppose we have a 2D vectoru= xi+ yj. Recall the Pythagorean theorem x2+ y2= d2. Since xis thedistance along iand yis the distance along j, then the length dof uisu= d= x2+ y2as shown in Figure 2.9. A similar formula is used for a vector v= (x,y,z),using the read more..


Our implementations of length methods (for R3) are as follows:

float
IvVector3::Length() const
{
    return IvSqrt( x*x + y*y + z*z );
}

float
IvVector3::LengthSquared() const
{
    return x*x + y*y + z*z;
}

IvVector3&
IvVector3::Normalize()
{
    float lengthsq = x*x + y*y + z*z;
    ASSERT( !IsZero( lengthsq ) );
    if ( IsZero( lengthsq ) )
    {
        x = y = z = 0.0f;
        return *this;
    }
    float recip = IvInvSqrt( lengthsq );
    x *= recip;
    y *= recip;
    z *= recip;
    return *this;
}

Note that in addition to the mathematical


    2.2 Vectors47square root is often slow. Rather than use it, we can use an approximationon some platforms, which is faster and accurate enough for our purpose. Onother platforms there are internal assembly instructions that are not usedby the standard library. In particular, there may be an instruction that per-forms the inverse square root, which is faster than calculating the squareroot and performing the floating-point divide. Defining our own layer of indi-rection gives us flexibility and read more..


Substituting in the definition of vector length in R3 and expanding, we get

−2‖v‖‖w‖ cos θ = (vx − wx)^2 + (vy − wy)^2 + (vz − wz)^2 − (vx^2 + vy^2 + vz^2) − (wx^2 + wy^2 + wz^2)
−2‖v‖‖w‖ cos θ = −2 vx wx − 2 vy wy − 2 vz wz
‖v‖‖w‖ cos θ = vx wx + vy wy + vz wz

So, to compute the dot product in R3, multiply the vectors componentwise, and then add:

v · w = vx wx + vy wy + vz wz

Note that for this definition to hold, vectors v and w need to be represented with respect to the standard basis {i, j,
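The componentwise dot product, and the recovery of the angle between two vectors from v · w = ‖v‖‖w‖ cos θ, can be sketched as below. This is a minimal illustration using a bare Vec3 struct and free functions standing in for the book's IvVector3 class; the names here are hypothetical, not the library's API.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the book's IvVector3, for illustration only.
struct Vec3 { float x, y, z; };

// Componentwise dot product in R3: v . w = vx*wx + vy*wy + vz*wz
float Dot(const Vec3& v, const Vec3& w)
{
    return v.x*w.x + v.y*w.y + v.z*w.z;
}

// Angle between two nonzero vectors, from v . w = |v||w| cos(theta).
float AngleBetween(const Vec3& v, const Vec3& w)
{
    float lenProduct = std::sqrt(Dot(v, v)) * std::sqrt(Dot(w, w));
    return std::acos(Dot(v, w) / lenProduct);
}
```

For orthogonal vectors the dot product is zero, so AngleBetween returns π/2, matching the discussion of angle measurement above.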


    2.2 Vectors49vectors forR3 are orthogonal. We can now demonstrate this. For example,taking i· jwe geti· j= (1, 0, 0)· (0, 1, 0)= 0+ 0+ 0= 0It is possible, although not always recommended, to use equation 2.4 totest whether two unit vectorsˆv andˆw are pointing generally in the samedirection. If they are, cos θis close to 1, so 1− ˆv · ˆw is close to 0 (we use thisformula to avoid problems with floating-point precision). Similarly, if 1+ ˆv · ˆwis close to 0, they are pointing in read more..


Figure 2.11 Dot product as measurement of angle.
Figure 2.12 Measuring angle to target.

Equation 2.4 allows us to use the dot product in another manner. Suppose we have two vectors v and w, where w ≠ 0. We define the projection of v onto w as

proj_w v = (v · w / ‖w‖^2) w


Figure 2.13 Dot product as projection.

This gives the part of v that is parallel to w, which is the same as dropping a perpendicular from the end of v onto w (Figure 2.13). We can get the part of v that is perpendicular to w by subtracting the projection:

perp_w v = v − (v · w / ‖w‖^2) w

Both of these equations will be very useful to us. Note that if w is normalized, then the projection simplifies to

proj_ŵ v = (v · ŵ) ŵ

The corresponding library implementation of dot product in R3 is as
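The projection and its perpendicular complement can be sketched directly from the two equations. Again this is a minimal illustration with a bare Vec3 struct and hypothetical helper names, not the book's IvVector3 interface:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& v, const Vec3& w) { return v.x*w.x + v.y*w.y + v.z*w.z; }
Vec3 Scale(float a, const Vec3& v)      { return Vec3{a*v.x, a*v.y, a*v.z}; }
Vec3 Sub(const Vec3& v, const Vec3& w)  { return Vec3{v.x-w.x, v.y-w.y, v.z-w.z}; }

// proj_w(v) = (v . w / |w|^2) w -- the part of v parallel to w (w must be nonzero).
Vec3 Project(const Vec3& v, const Vec3& w)
{
    return Scale(Dot(v, w) / Dot(w, w), w);
}

// perp_w(v) = v - proj_w(v) -- the part of v perpendicular to w.
Vec3 Perpendicular(const Vec3& v, const Vec3& w)
{
    return Sub(v, Project(v, w));
}
```

By construction, Perpendicular(v, w) is always orthogonal to w, which is easy to confirm with a dot product.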


    52 Chapter 2 Vectors and PointsIn many cases we start with a general set of vectors and want to generatethe closest possible orthonormal one. One example of this is when we performoperations on currently orthonormal vectors. Even if the pure mathemat-ical result should not change their length or relative orientation, due tofloating-point precision problems the resulting vectors may be no longerorthonormal. The process that allows us to create orthonormal vectors frompossibly nonorthonormal read more..


    2.2 Vectors532.2.8 Cross ProductSuppose we have two vectors vand wand want to find a new vector uorthogonal to both. The operation that computes this is the cross product,also known as the vector product. There are two possible choices for thedirection of the vector, each the negation of the other (Figure 2.14); the onechosen is determined by the right-hand rule. Hold your right hand so that yourforefinger points forward, your middle finger points out to the left, and yourthumb points up. If read more..


    54 Chapter 2 Vectors and Pointswvv × wFigure2.16 Cross product length equals area of parallelogram.where θis the angle between vand w. Note that the cross product is notcommutative, so order is important:v× w=−(w × v)Also, if the two vectors are parallel, sin θ= 0, so we end up with the zero vector.It is a common mistake to believe that if vand ware unit vectors, thecross product will also be a unit vector. A quick look at equation 2.6 showsthis is true only if sin θis 1, in which case read more..


    2.2 Vectors55There are two common uses for the cross product. The first, and mostused, is to generate a vector orthogonal to two others. Suppose we have threepoints P, Q, and R, and we want to generate a unit vector nthat is orthogonalto the plane formed by the three points (this is known as a normal vector).Begin by computing v= (Q− P)and w= (R− P). Now we have a decisionto make. Computing v× wand normalizing will generate a normal in onedirection, whereas w× vand normalizing will read more..
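The normal-vector construction just described — compute v = (Q − P) and w = (R − P), then take a cross product — can be sketched as follows. The Vec3 struct and function names are stand-ins for illustration, not the book's IvVector3 API:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

Vec3 Sub(const Vec3& a, const Vec3& b) { return Vec3{a.x-b.x, a.y-b.y, a.z-b.z}; }

// Componentwise cross product in R3.
Vec3 Cross(const Vec3& v, const Vec3& w)
{
    return Vec3{ v.y*w.z - v.z*w.y,
                 v.z*w.x - v.x*w.z,
                 v.x*w.y - v.y*w.x };
}

// Unnormalized normal of the plane through P, Q, R. With the counterclockwise
// convention, n points toward a viewer who sees P, Q, R in counterclockwise order.
Vec3 TriangleNormal(const Vec3& P, const Vec3& Q, const Vec3& R)
{
    return Cross(Sub(Q, P), Sub(R, P));
}
```

In practice the result would still be normalized before use, since the cross product of two unit vectors is generally not itself a unit vector, as the text notes.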


    56 Chapter 2 Vectors and Pointsv'vFigure2.18 Perpendicular vector.where θis the signed angle between vand w. That is, if the shortest rotationto get from vto wis in a clockwise direction, then θis negative. And similarto the cross product, the absolute value of the perpendicular dot product isequal to the area of a parallelogram bordered by the two vectors.It is possible to take cross products in dimensions greater than three byusing n−1 vectors to take an n-dimensional cross product, but in read more..


Figure 2.19 The vector triple product.
Figure 2.20 Scalar triple product equals volume of parallelepiped.

or parallelepiped (Figure 2.20). Then the area of the base equals ‖v × w‖, and ‖u‖ cos θ gives the height of the box. So,

u · (v × w) = ‖u‖ ‖v × w‖ cos θ

or area times height equals the volume of the box. In addition to computing volume, the scalar triple product can be used to test the direction of the angle between two vectors v and w, relative to a third vector u that is


    58 Chapter 2 Vectors and Pointsthe result is positive, then we know that dlies to the left of v(counterclockwiserotation), and we turn left. Similarly, if the value is less than zero, then weknow we must turn right to match d(Figures 2.21 and 2.22).If we know that the tank is always oriented so that it lies on the xyplane,we can simplify this considerably. Vectors vand dwill always have zvaluesof 0, and uwill always point in the same direction as the standard basisvector k. In this case, the read more..
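The simplified 2D turn test described above — where v and d lie in the xy-plane and u points along k, so the scalar triple product collapses to a single term — can be sketched like this. The function names are hypothetical, chosen for illustration:

```cpp
#include <cassert>

// With v and d in the xy-plane and u = k, the scalar triple product
// u . (v x d) reduces to the 2D perpendicular dot product: vx*dy - vy*dx.
float PerpDot(float vx, float vy, float dx, float dy)
{
    return vx*dy - vy*dx;
}

// Returns +1 if d lies counterclockwise from v (turn left),
// -1 if clockwise (turn right), 0 if the directions are parallel.
int TurnDirection(float vx, float vy, float dx, float dy)
{
    float p = PerpDot(vx, vy, dx, dy);
    if (p > 0.0f) return 1;
    if (p < 0.0f) return -1;
    return 0;
}
```

This is the same quantity as the perpendicular dot product introduced earlier in the chapter, which avoids building full 3D vectors for a planar problem.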


    2.2 Vectors59ikjFigure2.23 Right-handed rotation.90-degree rotation of iinto jshows that the basis is right-handed. We cando the same trick with the left hand rotating clockwise to show that a set ofvectors is left-handed.Formally, if we have three vectors{v0, v1, v2}, then they are right-handedif v0· (v1× v2)> 0, and left-handed if v0· (v1× v2)< 0.If v0· (v1× v2)= 0,we’ve got a problem — our vectors are linearly dependent.While the scalar triple product only applies to vectors read more..


    60 Chapter 2 Vectors and Pointsrules, which can be quite powerful. Finally, there are certain properties ofvector spaces that will prove to be quite useful when we cover matrices andlinear transformations.To simplify our approach, we are going to concentrate on a subset ofvector spaces known as real vector spaces, so called because their fundamentalcomponents are drawn fromR, the set of all real numbers. We usually say thatsuch a vector space Vis overR. We also formally define an element ofR read more..


    2.2 Vectors61of numbers, we need to define two specific operations on the elements thatfollow certain algebraic rules. The two operations should be familiar from ourdiscussion of geometric vectors: They are addition and scalar multiplication.We’ll define these operations so that the vector space Vhas closure with respectto them; that is,1. For any uand vin V, u+ vis in V(additive closure).2. For any ainR and vin V, avis in V(multiplicative closure).So formally, we define a real vector read more..


    62 Chapter 2 Vectors and Pointswell as forR itself. Generalized overRn, we haveu+ v= (u0,...,un−1) + (v0,...,vn−1)= (u0+ v0,...,un−1 + vn−1)andav= a(v0,...,vn−1)= (av0,...,avn−1)Now suppose we have a subset Wof a vector space V. We call Wa subspaceif it is itself a vector space when using the same definition for addition andmultiplication operations. In order to show that a given subset Wis a vectorspace, we only need to show that closure under addition and scalar multipli-cation read more..


    2.3 Points63a basis for V, and each element of βas a basis vector. So far we’ve shownonly the standard Euclidean basis, but other bases are possible for a givenvector space, and they will always have the same number of elements. Weformally define a vector space’s dimension as equal to the number of basisvectors required to span it. So, for example, any basis forR3 will containthree basis vectors, and so it is (as we’d expect) a 3D space. Note that while thestandard Euclidean basis is read more..


    64 Chapter 2 Vectors and PointsWithin this section it is also assumed that the reader has some generalsense of what lines and planes are. More information on these topics followsin subsequent sections.2.3.1 Points as GeometryEveryone who has been through a first-year geometry course should befamiliar with the notion of a point. Euclid describes the point in his workElements [33] as “that which has no part.” Points have also been presented asthe cross-section of a line, or the intersection read more..


    2.3 Points65y-axisx-axisxyOPFigure2.24 Two-dimensional Cartesian coordinate system.z-axisy-axisx-axisOFigure2.25 Three-dimensional Cartesian coordinate system.favor this because the x- and y-axes match the relative axes of the 2D screen,but most of the time we’ll be using the former convention for this book.Both of the 3D coordinate systems we have described are right-handed.As before, we can test this via the right-hand rule. This time point your thumbalong the z-axis, your fingers along the read more..


    66 Chapter 2 Vectors and Pointsy-axisx-axisz-axisOFigure2.26 Alternate 3D Cartesian coordinate system.into the y-axis. As with left-handed bases, we can have left-handed coordinatesystems (and will be using them later in this book), but the majority of ourwork will be done in a right-handed coordinate system because of convention.2.3.2 Affine SpacesWe can provide a more formal definition of coordinate systems based onwhat we already know of vectors and vector spaces. Before we can do so,though, read more..


    2.3 Points67This relationship can be seen in Figure 2.27. We can think of the vector vasacting as a displacement between the two points Pand Q. To determine thedisplacement between two points, we subtract one from another. To displacea point, we add a vector to it and that gives us a new point.We can define a fixed-point Oin W, known as the origin. Then usingequation 2.7, we can represent any point Pin WasP= O+ vor, expanding our vector using nbasis vectors that span V:P= O+ a0 v0+ a1 v1+ read more..


    68 Chapter 2 Vectors and PointsOzkijyxFigure2.28 Relationship between points and vectors in Cartesian affine frame.and the distance between them isdist(P1,P0)= v= (x1− x0)2+ (y1− y0)2+ (z1− z0)2This is also known as the Euclidean distance. In theR3 Cartesian frame, thedistance between a point P= (x,y,z)and the origin isdist(P, O)= x2+ y2+ z22.3.3 Affine CombinationsSo far the only operation that we’ve defined on points alone is subtraction,which results in a vector. However, there is a read more..


    2.3 Points69show why this restriction allows us to perform this operation by rewritingequation 2.10 asa0= 1− a1− ··· − akand substituting into equation 2.9 to getP= (1− a1− ··· − ak)P0+ a1P1+ ··· + akPk= P0+ a1(P1− P0)+ ··· + ak(Pk− P0)(2.11)If we set u1= (P1− P0), u2= (P2− P0), and so on, we can rewrite this asP= P0+ a1 u1+ a2 u2+ ··· + ak ukSo, by restricting our coefficients in this manner, it allows us to rewrite theaffine combination as a point plus a read more..


    70 Chapter 2 Vectors and PointsFigure2.29 Convex versus nonconvex set of points.2.3.4 Point ImplementationSource CodeLibraryIvMathFilenameIvVector3Using the Cartesian frame and standard basis inR3, the x, y, and zvalues of apoint PinR3 match the x, y, and zvalues of the corresponding vector P− O,where Ois the origin of the frame. This also means that we can use one classto represent both, since one can be easily converted to the other. Because ofthis, many math libraries don’t even bother read more..


    2.3 Points71a directional light, which only casts light rays in one direction. Both arespecified by a single call:GLfloat light_position[] = {1.0, 1.0, 1.0, 0.0};glLightfv(GL_LIGHT0, GL_POSITION, light_position);If the final value of light_positionis 0, then it is treated as a directional light;otherwise, it is treated as a point light.In our case, we will not be using a separate class for points. There wouldbe a certain amount of code duplication, since the IvPoint3class would endup being read more..


    72 Chapter 2 Vectors and Pointsfloat y = point1.y - point2.y;float z = point1.z - point2.z;return ( x*x + y*y + z*z );}2.3.5 Polar and Spherical CoordinatesCartesian coordinates are not the only way of measuring location. We’vealready mentioned latitude, longitude, and altitude, and there are other,related systems. Take a point PinR2 and compute the vector v= P− O.We can specify the location of Pusing the distance rfrom Pto the origin,which is the length of v, and the angle θbetween vand read more..


for θ, which gives us θ = arccos(x/r). However, the acos() function under C++ only returns an angle in the range [0, π), so we’ve lost the sign of the angle. Since

y/x = (r sin θ)/(r cos θ) = sin θ/cos θ = tan θ

an alternate choice would be arctan(y/x), but this doesn’t handle the case when x = 0. To manage this, C++ provides a library function called atan2(), which takes y and x as separate arguments and computes arctan(y/x). It has no problems with division by 0 and maintains the signed angle
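The polar/Cartesian conversions described here can be sketched with std::atan2, which preserves the signed angle and handles x = 0. The function names are illustrative, not part of the book's library:

```cpp
#include <cassert>
#include <cmath>

// Cartesian (x, y) to polar (r, theta). Uses atan2 rather than acos or
// atan to keep the signed angle and handle x == 0, as discussed in the text.
void CartesianToPolar(float x, float y, float& r, float& theta)
{
    r = std::sqrt(x*x + y*y);
    theta = std::atan2(y, x);   // signed angle in (-pi, pi]
}

// Polar back to Cartesian: x = r cos(theta), y = r sin(theta).
void PolarToCartesian(float r, float theta, float& x, float& y)
{
    x = r * std::cos(theta);
    y = r * std::sin(theta);
}
```

Round-tripping through both functions recovers the original point (up to floating-point error), which is a convenient sanity check on the quadrant handling.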


    74 Chapter 2 Vectors and Pointsy-axisx-axisOz-axisPFigure2.31 Spherical coordinates.y-axisx-axisOz-axisxzyPFigure2.32 Relationship between spherical and Cartesian coordinates.need to find the value of ρsin φ. This is equal to the projected xylength rsincer= x2+ y2= (ρsin φcos θ)2+ (ρsin φsin θ)2= (ρsin φ)2(cos2 θ+ sin2 θ)= ρsin φ read more..


And since, as with polar coordinates,

r/z = (ρ sin φ)/(ρ cos φ) = tan φ

we can compute φ = arctan2(r, z). Similarly, θ = arctan2(y, x). Summarizing:

ρ = √(x^2 + y^2 + z^2)
φ = arctan2(√(x^2 + y^2), z)
θ = arctan2(y, x)

2.4 Lines

2.4.1 Definition

As with the point, a line as a geometric concept should be familiar. Euclid [33] defines a line as “breadthless length” and a straight line as that “which lies evenly with the points on itself.” A straight line also has been referred to as the shortest distance
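The spherical-coordinate summary above translates directly into code. This is a sketch with a hypothetical function name, assuming the physics-style convention used in the text (φ measured from the z-axis):

```cpp
#include <cassert>
#include <cmath>

// Cartesian (x, y, z) to spherical (rho, phi, theta):
//   rho   = sqrt(x^2 + y^2 + z^2)
//   phi   = atan2(sqrt(x^2 + y^2), z)   (angle down from the z-axis)
//   theta = atan2(y, x)                  (angle around the z-axis)
void CartesianToSpherical(float x, float y, float z,
                          float& rho, float& phi, float& theta)
{
    rho   = std::sqrt(x*x + y*y + z*z);
    phi   = std::atan2(std::sqrt(x*x + y*y), z);
    theta = std::atan2(y, x);
}
```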


    76 Chapter 2 Vectors and Points2.4.2 Parameterized LinesOne possible representation is known as a parametric equation. Instead ofrepresenting the line as a single equation with a number of variables, eachcoordinate value is calculated by a separate function. This allows us to useone form for a line that is generalizable across all dimensions. As an example,we will take equation 2.16 and parameterize it.To compute the parametric equation for a line, we need two points on ourline. We can take the read more..


    2.4 Lines77dP0P1Figure2.34 Line segment.dP0P1Figure2.35 Ray.along the line in the direction of d. Rays are useful for intersection and visi-bility tests. For example, P0 may represent the position of a camera, and disthe viewing direction.Source CodeLibraryIvMathFilenameIvLine3IvLineSegment3IvRay3In code we’ll be representing our lines, rays, and line segments as a pointon the line Pand a vector d; so for example, the class definition for a line inR3 isclass IvLine3{public:IvLine3( const read more..


Substituting this into the y equation, we get

y = dy(x − Px)/dx + Py

We can rewrite this as

0 = (y − Py)/dy − (x − Px)/dx

or, multiplying through by dx dy,

0 = (−dy)x + (dx)y + (dyPx − dxPy)
  = ax + by + c    (2.20)

where

a = −dy
b = dx
c = dyPx − dxPy = −aPx − bPy

We can think of a and b as the components of a 2D vector n, which is perpendicular to the direction vector d, and so is orthogonal to the direction of the line (Figure 2.36). This gives us a way of testing where a 2D point lies relative to a 2D line. If we


where θ is the angle between n and Q − P. But since n · (Q − P) = ‖n‖ ‖Q − P‖ cos θ, we can rewrite this as

d = n · (Q − P) / ‖n‖

If Q is lying on the opposite side of the line, then we take the dot product with the negative of n, so

d = −n · (Q − P) / ‖−n‖ = −n · (Q − P) / ‖n‖

Since d is always positive, we can just take the absolute value of n · (Q − P) to get

d = |n · (Q − P)| / ‖n‖    (2.21)

If we know that n is normalized, we can drop the denominator. If Q = (x, y) and (as we’ve stated) n = (a, b), we can expand
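Equation 2.21, expanded with the generalized line coefficients, can be sketched as a small helper. The function name is hypothetical; the line is given as ax + by + c = 0 with n = (a, b):

```cpp
#include <cassert>
#include <cmath>

// Distance from point Q = (qx, qy) to the 2D line ax + by + c = 0:
//   d = |a*qx + b*qy + c| / sqrt(a^2 + b^2)
// The denominator drops out when the normal (a, b) is already normalized.
float DistancePointLine2D(float a, float b, float c, float qx, float qy)
{
    return std::fabs(a*qx + b*qy + c) / std::sqrt(a*a + b*b);
}
```

Keeping the sign of a*qx + b*qy + c instead of taking the absolute value gives the side test described earlier: positive on the side n points toward, negative on the other.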


    80 Chapter 2 Vectors and Points2.5 PlanesEuclid [33] defines a surface as “that which has length and breadth only,” anda plane surface, or just a plane, as “a surface which lies evenly with the straightlines on itself.” Another way of thinking of this is that a plane is created bytaking a straight line and sweeping each point on it along a second straightline. It is a flat, limitless, infinitely thin surface.2.5.1 Parameterized PlanesAs with lines, we can express a plane algebraically read more..


    2.5 Planes81n 5 (a, b, c)P0Figure2.37 Normal form of plane.We can pull all the constants into one term to get0= ax+ by+ cz− (ax0+ by0+ cz0)= ax+ by+ cz+ dSo, extending equation 2.20 to three dimensions gives us the equation for aplane inR3.This is the generalized plane equation. As with the generalized line equa-tion, this equation can be used to test where a point lies relative to either sideof a plane. Again, comparable to the line equation, it can be proved that ifnis normalized,|ax + by+ read more..


    82 Chapter 2 Vectors and PointsWe usually normalize nat this point so that we can take advantage of thedistance-measuring properties of the plane equation. This gives us our valuesa, b, and c. Taking Pas the point on the plane, we compute dbyd=−(aPx + bPy+ cPz)We can also use this to convert our parameterized form to the generalizedform by starting with the cross product step.Source CodeLibraryIvMathFilenameIvPlaneSince we’ll be working inR3 most of the time and because of its read more..
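The construction just described — cross product, normalize to get (a, b, c), then d = −(aPx + bPy + cPz) — can be sketched as follows. The Vec3 struct and function name are illustrative stand-ins, not the book's IvPlane interface:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Sub(const Vec3& a, const Vec3& b) { return Vec3{a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 Cross(const Vec3& v, const Vec3& w)
{
    return Vec3{ v.y*w.z - v.z*w.y, v.z*w.x - v.x*w.z, v.x*w.y - v.y*w.x };
}

// Build the generalized plane equation ax + by + cz + d = 0 from three
// (noncollinear) points, normalizing n = (a, b, c) so that plugging a point
// into the equation yields its signed distance from the plane.
void PlaneFromPoints(const Vec3& P, const Vec3& Q, const Vec3& R,
                     float& a, float& b, float& c, float& d)
{
    Vec3 n = Cross(Sub(Q, P), Sub(R, P));
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    a = n.x/len;  b = n.y/len;  c = n.z/len;
    d = -(a*P.x + b*P.y + c*P.z);
}
```

With n normalized, evaluating ax + by + cz + d at any test point gives its signed distance, which is exactly the distance-measuring property the text refers to.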


    2.6 Polygons and Triangles83A polygon is made up of a set of vertices (which are represented by points)and edges (which are represented by line segments). The edges define howthe vertices are connected together. A convex polygon is one where the setof points enclosed by the vertices and edges is a convex set; otherwise, it’sa concave polygon.The most commonly used polygons for storing geometric data are triangles(three vertices) and quadrilaterals (four vertices). While some rendering read more..


    84 Chapter 2 Vectors and PointsWe take the cross product of v0 and v1 to get a normal vector nto the triangle.We then compute three vectors from each vertex to the test point:w0= P− P0w1= P− P1w2= P− P2If the point lies inside the triangle, then the cross product of each vi with eachwi will point in the same direction as n, which we can test by using a dotproduct. If the result is negative, then we know they’re pointing in oppositedirections, and the point lies outside. For example, in read more..


Taking the length of both sides gives

‖v × w‖ = |s| ‖v × u‖

The quantity ‖v × u‖ = ‖u × v‖. And since P is inside the triangle, we know that to meet the requirements of a convex combination s ≥ 0, so

s = ‖v × w‖ / ‖u × v‖

A similar construction finds that

t = ‖u × w‖ / ‖u × v‖

Note that this is equivalent to computing the areas a and b of the two subtriangles shown in Figure 2.39 and dividing by the total area of the triangle c, so

s = b/c
t = a/c

where

a = (1/2)‖u × w‖
b = (1/2)‖v × w‖
c = (1/2)‖u × v‖

Figure 2.39 Computing
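The ratio construction above can be sketched in code. This variant recovers the signs of s and t by dotting each cross product against the triangle normal n = u × v rather than taking magnitudes (which assume P is inside); the struct and names are illustrative, not the book's API:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Sub(const Vec3& a, const Vec3& b) { return Vec3{a.x-b.x, a.y-b.y, a.z-b.z}; }
float Dot(const Vec3& v, const Vec3& w) { return v.x*w.x + v.y*w.y + v.z*w.z; }
Vec3 Cross(const Vec3& v, const Vec3& w)
{
    return Vec3{ v.y*w.z - v.z*w.y, v.z*w.x - v.x*w.z, v.x*w.y - v.y*w.x };
}

// Coordinates (s, t) of P in the triangle frame P = P0 + s*u + t*v, where
// u = P1 - P0, v = P2 - P0, w = P - P0. Since w x v = s*(u x v) and
// u x w = t*(u x v), dotting with n = u x v and dividing by |n|^2 gives
// signed s and t. P is inside the triangle iff s >= 0, t >= 0, s + t <= 1.
void Barycentric(const Vec3& P0, const Vec3& P1, const Vec3& P2,
                 const Vec3& P, float& s, float& t)
{
    Vec3 u = Sub(P1, P0), v = Sub(P2, P0), w = Sub(P, P0);
    Vec3 n = Cross(u, v);
    float nn = Dot(n, n);
    s = Dot(Cross(w, v), n) / nn;
    t = Dot(Cross(u, w), n) / nn;
}
```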


    86 Chapter 2 Vectors and PointsThese simple examples are only a taste of how we can use triangles inmathematical calculations. More details on the use and implementation oftriangles can be found throughout the text, particularly in Chapters 7 and 12.2.7 Chapter SummaryIn this chapter, we have covered some basic geometric entities: vectors andpoints. We have discussed linear and affine spaces, the relationships betweenthem, and how we can use affine combinations of vectors and points to read more..


    Chapter3Matrices andLinearTransformations3.1 IntroductionIn the previous chapter we discussed vectors and points and some simpleoperations we can apply to them. Now we’ll begin to expand our discussionto cover specific functions that we can apply to vectors and points; functionsknown as transformations. In this chapter we’ll discuss a class of transfor-mations that we can apply to vectors called linear transformations. Theseencompass nearly all of the common operations we might want to read more..


    88 Chapter 3 Matrices and Linear Transformationsto solve systems of linear equations, which is useful to know for certainalgorithms in graphics and physical simulation. For all of these reasons,matrices are primary data structures in graphics application programmerinterfaces (APIs).3.2 Matrices3.2.1 Introduction to MatricesA matrix is a rectangular, 2D array of values. Throughout this book, most ofthe values we use will be real numbers, but they could be complex numbers oreven vectors. Each read more..


    3.2 Matrices89elements have the same values, then they are equal. Below, the two matricesare the same size, but they are not equal.⎡⎣01320−3⎤⎦ =⎡⎣002−313⎤⎦The set of elements where the row and column numbers are the same(e.g., row 1, column 1) is called the main diagonal. In the next example themain diagonal is in gray.U=⎡⎢⎢⎣3−5010260001−80001⎤⎥⎥⎦The trace of a matrix is the sum of the main diagonal elements. In thiscase the trace is 3+ 2+ 1+ 1= 7.In read more..


    90 Chapter 3 Matrices and Linear Transformations3.2.2 Simple OperationsMatrix Addition and Scalar MultiplicationWe can add and scale matrices just as we can vectors. Adding two matricestogether:S= A+ Bis done componentwise like vectors, thus,si,j= ai,j+ bi,jClearly, in order for this to work, A, B, and Smust all be the same size (alsoknown as conformable for addition). Subtraction works similarly but as withreal numbers and vectors is not commutative.To scale a matrix,P= sAeach element is read more..


Transpose

The transpose of a matrix A (represented by A^T) interchanges the rows and columns of A. It does this by exchanging elements across the matrix’s main diagonal, so (A^T)_i,j = (A)_j,i. An example of this is

[ 2 −1 ]^T
[ 0  2 ]    =  [  2  0  6 ]
[ 6  3 ]       [ −1  2  3 ]

As we can see, the matrix does not have to be square, so an m × n matrix becomes an n × m matrix. Also, the main diagonal doesn’t change, or is invariant, since (A^T)_i,i = (A)_i,i. A matrix where (A)_i,j = (A)_j,i (i.e., cross-diagonal entries are


    92 Chapter 3 Matrices and Linear Transformations3.2.3 Vector RepresentationIf a matrix has only one row or one column, then we have a row or columnmatrix, respectively:0.50.251−1⎡⎣5−36.9⎤⎦These are often used to represent vectors. There is no particular standard asto which one to use. For example, the OpenGL specification and its documen-tation uses columns, whereas DirectX, by comparison, uses rows. In this textwe will assume that vectors are represented as column matrices (also read more..


    3.2 Matrices93and0=00We will sometimes use this to represent a matrix as a set of row or columnmatrices. For example, if we have a matrix A⎡⎣a0,0a0,1a0,2a1,0a1,1a1,2a2,0a2,1a2,2⎤⎦we can represent its rows as three vectorsaT0 = a0,0a0,1a0,2aT1 = a1,0a1,1a1,2aT2 = a2,0a2,1a2,2and represent Aas⎡⎢⎣aT0aT1aT2⎤⎥⎦Similarly, we can represent a matrix Bwith its columns as three vectorsb0=⎡⎣b0,0b1,0b2,0⎤⎦b1=⎡⎣b0,1b1,1b2,1⎤⎦b2=⎡⎣b0,2b1,2b2,2⎤⎦and subsequently read more..


    94 Chapter 3 Matrices and Linear Transformations3.2.5 Matrix ProductThe primary operation we will apply to matrices is multiplication, also knownas the matrix product. The product is important to us because it allows us todo two essential things. First, multiplying a matrix by a compatible vectorwill transform the vector. Second, multiplying matrices together will createa single matrix that performs their combined transformations. We’ll discussexactly what is occurring when we discuss linear read more..


    3.2 Matrices95We can also multiply by using block matrices:ABCDEFGH=AE+ BGAF+ BHCE+ DGCF+ DHNote that this is only allowable if the submatrices are conformable for additionand multiplication.There is a restriction on which matrices can be multiplied together; inorder to perform a dot product the two vectors have to have the same length.So, to multiply together two matrices, the number of columns in the first(i.e., the width of each row) has to be the same as the number of rows inthe second read more..


    96 Chapter 3 Matrices and Linear TransformationsIn general, matrix multiplication is not commutative. As an example, ifwe multiply a row matrix by a column matrix, we perform a dot product:1234= 1· 3+ 2· 4= 11Because of this, you may often see a dot product represented asa· b= aT bIf we multiply them in the opposite order, we get a square matrix:3412=3648Even multiplication of square matrices is not necessarily commutative:36481011=9612810113648=36714Aside from the size restriction and not read more..


Similarly, in matrix multiplication there is a special matrix known as the identity matrix, represented by the letter I. Thus,

A · I = I · A = A

A particular identity matrix is a diagonal square matrix, where the diagonal is all 1s:

I = [ 1 0 ··· 0 ]
    [ 0 1     0 ]
    [ ⋮    ⋱  ⋮ ]
    [ 0 0 ··· 1 ]

If a particular n × n identity matrix is needed, it is sometimes referred to as I_n. Take as an example I_3:

I_3 = [ 1 0 0 ]
      [ 0 1 0 ]
      [ 0 0 1 ]

Rather than referring to it in this way, we’ll just use the term


In particular, we can rewrite a projection by a unit vector as

(u · v̂) v̂ = (v̂ ⊗ v̂) u

This will prove useful to us in the next chapter. We can also perform our other vector product, the cross product, through a matrix multiplication. If we have two vectors v and w and we want to compute v × w, we can replace v with a particular skew symmetric matrix, represented as ṽ:

ṽ = [  0  −vz   vy ]
    [  vz   0  −vx ]
    [ −vy  vx    0 ]

Multiplying by
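That multiplying the skew symmetric matrix ṽ by w reproduces v × w can be checked with a short sketch. The struct and function name are hypothetical, for illustration only:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Cross product via the skew symmetric matrix ~v: build
//   [  0  -vz   vy ]
//   [  vz   0  -vx ]
//   [ -vy  vx    0 ]
// (row-major here) and multiply it by w, which yields v x w.
Vec3 SkewCross(const Vec3& v, const Vec3& w)
{
    float m[3][3] = { {  0.0f, -v.z,   v.y  },
                      {  v.z,   0.0f, -v.x  },
                      { -v.y,   v.x,   0.0f } };
    return Vec3{ m[0][0]*w.x + m[0][1]*w.y + m[0][2]*w.z,
                 m[1][0]*w.x + m[1][1]*w.y + m[1][2]*w.z,
                 m[2][0]*w.x + m[2][1]*w.y + m[2][2]*w.z };
}
```

Expanding the rows gives (vy*wz − vz*wy, vz*wx − vx*wz, vx*wy − vy*wx), the familiar componentwise cross product.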

  • Page - 130
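The skew symmetric form of the cross product can be sketched directly by expanding the rows of ṽ applied to w. This is an illustration under our own names (`Vec3`, `CrossViaMatrix`), not the book's library code.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Compute v x w as the matrix product (~v) w, where ~v is the
// skew symmetric matrix
//   [  0  -vz   vy ]
//   [  vz   0  -vx ]
//   [ -vy  vx   0  ]
// Each component below is one row of ~v dotted with w.
Vec3 CrossViaMatrix(const Vec3& v, const Vec3& w)
{
    Vec3 r;
    r.x = -v.z*w.y + v.y*w.z;
    r.y =  v.z*w.x - v.x*w.z;
    r.z = -v.y*w.x + v.x*w.y;
    return r;
}
```

As a sanity check, i × j should give k, i.e. (1, 0, 0) × (0, 1, 0) = (0, 0, 1).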

the index order for a 3 × 3 matrix is

    | 0 1 2 |
    | 3 4 5 |
    | 6 7 8 |

The indexing operator for a row major matrix (we have to use operator() because operator[] only works for a single index) is

    float&
    IvMatrix33::operator()(unsigned int row, unsigned int col)
    {
        return mV[col + 3*row];
    }

Why won't this work? Well, in Direct3D matrices are expected to be used with row vectors. And even in OpenGL, despite the fact that the documentation is written using column vectors, the internal representation

Alternatively, if we want to use 2D arrays:

    float&
    IvMatrix33::operator()(unsigned int row, unsigned int col)
    {
        return mV[col][row];
    }

Using column major format and column vectors, matrix–vector multiplication becomes

    IvVector3
    IvMatrix33::operator*( const IvVector3& vector ) const
    {
        IvVector3 result;
        result.x = mV[0]*vector.x + mV[3]*vector.y + mV[6]*vector.z;
        result.y = mV[1]*vector.x + mV[4]*vector.y + mV[7]*vector.z;
        result.z = mV[2]*vector.x + mV[5]*vector.y + mV[8]*vector.z;
        return result;
    }

Matrix addition is just

    IvMatrix33
    IvMatrix33::operator+( const IvMatrix33& other ) const
    {
        IvMatrix33 result;
        for (int i = 0; i < 9; ++i)
        {
            result.mV[i] = mV[i] + other.mV[i];
        }
        return result;
    }

Scalar multiplication of matrices is similar.

It is common practice to refer to a matrix intended to be used with row vectors (i.e., its transformed basis vectors are stored as rows) as row major order and, similarly, to a matrix intended to be used with column vectors as column major order.

3.3 Linear Transformations

A function is a relation where every value in the first set maps to one and only one value in the second set, for example, f(x) = sin x. An example of a relation that is not a function is ±√x, because there are two possible results for a positive value of x, either positive or negative.

A function whose domain is an n-dimensional space and whose range is an m-dimensional space is known as a transformation. A transformation that maps from Rn to Rm is

On the other hand, the function g(x) = x² is not linear because, for a = 2, x = 1, and y = 1:

    g(2(1) + 1) = (2(1) + 1)² = 3² = 9
                ≠ 2(g(1)) + g(1) = 2(1²) + 1² = 3

As we might expect, the only operations possible in a linear function are multiplication by a constant and addition.

3.3.2 Null Space and Range

We define the null space (or kernel) N(T) of a linear transformation T : V → W as the set of all vectors in V that map to 0, or

    N(T) = { x | T(x) = 0 }

The dimension of N(T) is called the

[Figure 3.1: Range (y = 0) and null space (y = x) for transformation T(a, b) = (a + b, 0).]

3.3.3 Linear Transformations and Basis Vectors

Using standard function notation to represent linear transformations (as in equation 3.1) is not the most convenient or compact format, particularly for transformations between higher-dimensional vector spaces. Let's examine the properties of vectors as they undergo a linear transformation and see how that can

So, if we know how our linear transformation affects our basis for V, then we can calculate the effect of the linear transformation for any arbitrary vector in V.

There is still an open question: What are the components of each T(vj) equal to? For a member vj of V's basis, we can represent T(vj) in terms of the basis {w0, w1, ..., wm−1} for W, again as a linear combination:

    T(vj) = a0,j w0 + a1,j w1 + ··· + am−1,j wm−1

If {w0, ..., wm−1} is the standard basis for W,

It should be made clear that applying a linear transformation to a basis does not produce the basis for the new vector space. It only shows where the basis vectors end up in the new vector space — in our case, in terms of the standard basis. In fact, a transformed basis may no longer be linearly independent. Take as another example

    T(a, b) = (a + b, 0)

Applying this to our standard basis for R2, we get

    T(1, 0) = (1 + 0, 0) = (1, 0)
    T(0, 1) = (0 + 1, 0) = (1, 0)

will store, in order, each of these transformed basis vectors as the columns of A, or

    A = [ a0  a1  ···  an−1 ]

Using our matrix multiplication definition to compute the product of A and a vector x in V, we see that the result for element i in b is

    bi = ai,0 x0 + ai,1 x1 + ··· + ai,n−1 xn−1

This is exactly the same as equation 3.5. So, by setting up our matrix with the transformed basis vectors in each column, we can use matrix multiplication to perform linear transformations.
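The column-construction recipe can be sketched for the book's running example T(a, b) = (a + b, 0): transform each standard basis vector, store the images as columns, and check that the matrix reproduces T. The helper names (`Vec2`, `Basis2`, `BuildMatrix`, `Apply`) are ours, not the book's.

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// The example linear transformation T(a, b) = (a + b, 0).
Vec2 T(const Vec2& v) { return { v.x + v.y, 0.0f }; }

struct Basis2 { float m[2][2]; };  // m[row][col]

// Store T(e0) and T(e1) as the columns of the matrix.
Basis2 BuildMatrix()
{
    Vec2 c0 = T({1.0f, 0.0f});   // image of the first basis vector
    Vec2 c1 = T({0.0f, 1.0f});   // image of the second basis vector
    return {{{ c0.x, c1.x },
             { c0.y, c1.y }}};
}

// Matrix-vector multiply: each output element is a row-vector dot product.
Vec2 Apply(const Basis2& a, const Vec2& v)
{
    return { a.m[0][0]*v.x + a.m[0][1]*v.y,
             a.m[1][0]*v.x + a.m[1][1]*v.y };
}
```

Applying the built matrix to any vector should agree with applying T directly, e.g. both send (3, 4) to (7, 0).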

When postmultiplied by a column vector, it maps a vector (x, y, z) in R3 to a vector (y, z, 0) on the xy plane. Premultiplying by a row vector, on the other hand, maps (x, y, z) to (0, x, y) on the yz plane. They have the same dimension, and hence the same rank, but they are not the same vector space.

This makes a certain amount of sense. When we multiply by a row vector, we use the row vectors of the matrix as our transformed basis instead of the column vectors.

multiplying by B will transform vectors in V by T. So we just multiply each column of A by B and store the results, in order, as columns in a new matrix C:

    C = BA

If U has dimension n, V has dimension m, and W has dimension l, then A will be an m × n matrix and B will be an l × m matrix. Since the number of columns in B matches the number of rows in A, the matrix product can proceed, as we'd expect. The result C will be an l × n matrix and will apply the transformation of A followed by the transformation of B.

Doing something similar for a row vector aᵀ:

    bᵀ = aᵀ N0
    cᵀ = bᵀ N1
    dᵀ = cᵀ N2

and substituting:

    dᵀ = bᵀ N1 N2 = aᵀ N0 N1 N2 = aᵀ Nr

The order difference is quite clear. When using row vectors and concatenating, matrix order follows the left to right progress used in English text. Column vectors work right to left instead, which may not be as intuitive. We will just need to be careful about our matrix order and transpose any matrices that assume we're using row vectors.

3.4 Systems of Linear Equations

    b0 = a0,0 x0 + a0,1 x1 + ··· + a0,n−1 xn−1
    b1 = a1,0 x0 + a1,1 x1 + ··· + a1,n−1 xn−1            (3.7)
     :
    bm−1 = am−1,0 x0 + am−1,1 x1 + ··· + am−1,n−1 xn−1

The problem we are trying to solve is: Given a0,0, ..., am−1,n−1 and b0, ..., bm−1, what are the values of x0, ..., xn−1? For a given linear system, the set of all possible solutions is called the solution set.

As an example, the system of equations

    x0 + 2x1 = 1
    3x0 − x1 = 2

has the solution set {x0 = 5/7, x1 = 1/7}.

The coefficients of the equation become the elements of matrix A, and matrix multiplication encapsulates our entire linear system. Now the problem becomes one of the form: Given A and b, what is x?

3.4.2 Solving Linear Systems

One case is very easy to solve. Suppose A looks like

    | 1  a0,1  ···  a0,n−1 |
    | 0   1    ···  a1,n−1 |
    | :   :     .      :   |
    | 0   0    ···     1   |

This is equivalent to the linear system

    b0 = x0 + a0,1 x1 + ··· + a0,n−1 xn−1
    b1 = x1 + ···

The process we've described gives us a clue about how to proceed in solving general systems of linear equations. Suppose we can multiply both sides of our equation by a series of matrices so that the left-hand side becomes a matrix in row echelon form. Then we can use this in combination with the right-hand side to give us the solution for our system of equations.

However, we need to use matrices that preserve the properties of the linear system; the solution set

elimination, after Karl Friedrich Gauss, a prolific German mathematician of the eighteenth and nineteenth centuries. It involves concatenating the matrix A and vector b into a form called an augmented matrix and then performing a series of elementary row operations on the augmented matrix, in a particular order. This will either give us a solution to the system of linear equations or tell us that computing a single solution is not possible; that is,

If we look at column 0, the maximal entry is 3, in row 2. So we begin by swapping row 2 with row 0:

    | 3  6  9 | 3 |
    | 2 −1  2 | 5 |
    | 1 −3  1 | 5 |

We scale the new row 0 by 1/3 to set the pivot element to 1:

    | 1  2  3 | 1 |
    | 2 −1  2 | 5 |
    | 1 −3  1 | 5 |

Now we start clearing the lower entries. The first entry in row 1 is 2, so we scale row 0 by −2 and add it to row 1:

    | 1  2  3 | 1 |
    | 0 −5 −4 | 3 |
    | 1 −3  1 | 5 |

We do the same for row 2, scaling by −1 and adding:

    | 1  2  3 | 1 |
    | 0 −5 −4 | 3 |
    | 0 −5 −2 | 4 |

This matrix is now in row echelon form. We have two possibilities at this point. We could clear the upper triangle of the matrix in a fashion similar to how we cleared the lower triangle, but by working up from the bottom and adding multiples of rows. The solution x to the linear system would end up in the right-hand column. This is known as Gauss-Jordan elimination.

But let's look at the linear system we have now:

    x + 2y +   3z = 1
         y + 4/5 z = −3/5
                 z = 1/2

        subtract row p times A[r,p] from current row,
            so that element in pivot column becomes 0

    // do backwards substitution
    for row = n-1 to 1
        for col = row+1 to n
            // subtract out known quantities
            b[row] = b[row] - A[row][col]*b[col]

The pseudocode shows what may happen when we encounter a linear system with no single solution. If we can't swap a nonzero entry in the pivot location, then there is a column that is all zeros. This is only possible if the rank of the matrix (i.e., the number

3.5 Matrix Inverse

Correspondingly, for a given matrix A, we can define its inverse A⁻¹ as a matrix such that

    A · A⁻¹ = I    and    A⁻¹ · A = I

There are a few things that fall out from this definition. First of all, in order for the first multiplication to occur, the number of rows in the inverse must be the same as the number of columns in the original matrix. For the second to occur, the converse is true. So, the matrix and its inverse must be square and the same size.

If we multiply both sides by A⁻¹, then

    A⁻¹ A x = A⁻¹ b
        I x = A⁻¹ b
          x = A⁻¹ b

Therefore, if we could find the inverse of A, we could use it to solve for x. This is not usually a good idea, computationally speaking. It's usually cheaper to solve for x directly, rather than generating the inverse and then performing the matrix multiplication. The latter can also lead to increased numerical error. However, sometimes finding the inverse is a necessary evil.

The left-hand side of

The nonpivot entries in the first column are zero, so we move to the second column. Scaling the second row by 1/3 to set the pivot point to 1 gives us

    | 1 0  2 | 1/2   0   0 |
    | 0 1 −3 |  0   1/3  0 |
    | 0 0  1 |  0    0   1 |

Again, our nonpivot entries in the second column are 0, so we move to the third column. Our pivot entry is 1, so we don't need to scale. We add −2 times the last row to the first row to clear that entry, then 3 times the last row to the second row to clear that

elements are the reciprocal of the original diagonal elements, as shown by the following:

    | a 0 0 |⁻¹   | 1/a  0   0  |
    | 0 b 0 |   = |  0  1/b  0  |
    | 0 0 c |     |  0   0  1/c |

The third case is a modified identity matrix, where the diagonal is all 1s but one column or row is nonzero. One such 3 × 3 matrix is

    | 1 0 x |
    | 0 1 y |
    | 0 0 1 |

For a matrix of this form, we simply negate the nonzero elements to invert it. Using the previous example,

    | 1 0 x |⁻¹   | 1 0 −x |
    | 0 1 y |   = | 0 1 −y |
    | 0 0 1 |     | 0 0  1 |

Finally, we can combine this

3.6 Determinant

absolute value of the determinant is equal to the volume of a parallelepiped described by the three transformed basis vectors (Figure 3.3).

The sign of the determinant depends on whether or not we have switched our ordered basis vectors from being relatively right-handed to being left-handed. In Figure 3.2, the shortest angle from a0 to a1 is clockwise, so they are left-handed. The determinant, therefore, is negative. We represent the determinant in

    det(A) = | 1 −3 1 |
             | 2 −1 2 |
             | 3  6 9 |

The diagrams showing area of a parallelogram and volume of a parallelepiped should look familiar from our discussion of cross product and triple scalar product. In fact, the cross product is sometimes represented as

    v × w = | i  j  k  |
            | vx vy vz |
            | wx wy wz |

while the triple product is represented as

    u · (v × w) = | ux uy uz |
                  | vx vy vz |
                  | wx wy wz |

Since det(Aᵀ) = det(A), this representation is equivalent.

3.6.2 Computing the Determinant

There are a few ways of representing the determinant computation

in the matrix. The second does the same but moves along column j instead of row i.

Let's compute an example determinant, expanding by row 0:

        | 1 1  2 |
    det | 2 4 −3 | = ?
        | 3 6 −5 |

The first element of row 0 is 1, and the submatrix with row 0 and column 0 removed is

    | 4 −3 |
    | 6 −5 |

The second element is also 1. However, we negate it since we are considering row 0 and column 1: 0 + 1 = 1, which is odd. The submatrix is A with row 0 and column 1

And the determinant of a 3 × 3 matrix is

        | a b c |          | e f |          | d f |          | d e |
    det | d e f | = a · det | h i | − b · det | g i | + c · det | g h |
        | g h i |

or

    a(ei − fh) − b(di − fg) + c(dh − eg)

There are some additional properties of the determinant that will be useful to us. If we have two n × n matrices A and B, the following hold:

1. det(AB) = det(A) det(B).
2. det(A⁻¹) = 1/det(A).

We can look at the value of the determinant to tell us some features of our matrix. First of all, as we have mentioned, any matrix that
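The closed-form expansion a(ei − fh) − b(di − fg) + c(dh − eg) translates directly into code. This is a sketch under our own function name (`Det3`), not the book's implementation.

```cpp
#include <cassert>

// 3x3 determinant by cofactor expansion along row 0:
// a(ei - fh) - b(di - fg) + c(dh - eg), with m[row][col].
float Det3(const float m[3][3])
{
    return m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]);
}
```

For the worked example above, det([1 1 2; 2 4 −3; 3 6 −5]) = 1(−20 + 18) − 1(−10 + 9) + 2(12 − 12) = −1.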

[Figure 3.4: Determinant of example 2 × 2 orthogonal matrix.]

3.6.3 Determinants and Elementary Row Operations

Source Code: Library IvMath, Filename IvGaussianElim

For 2 × 2 and 3 × 3 matrices, computing the determinant in this manner is a simple process. However, for larger and larger matrices, our recursive definition becomes unwieldy and, for large enough n, will take an unreasonable amount of time to compute. In addition, computing the determinant in this

Suppose we have the matrix

    |  2 −4 |
    | −1  1 |

The determinant of this matrix is −2. If we multiply the first row by 1/2, we get

    |  1 −2 |
    | −1  1 |

which has a determinant of −1. Multiplying a row by a scalar k multiplies the determinant by k as well.

Now suppose we add two times the first row to the second one. We get

    | 1 −2 |
    | 1 −3 |

which also has a determinant of −1. Adding a multiple of one row to another has no effect on the determinant.

Finally, we can swap row 1 with row 2:

    | 1 −3 |
    | 1 −2 |

which has a determinant of 1; swapping two rows changes the sign of the determinant.

so

    det(A) = (1/p) · det(A′)

We know that the determinant of A′ is 1, since the diagonal of the row echelon matrix is all 1s. So our final determinant is just 1/p. However, this is just the product of the multiplies we do to create leading 1s, and −1 for every row swap, or

    p = (1/p0,0)(1/p1,1) ··· (1/pn,n)(−1)ᵏ

where k is the number of row swaps. Then,

    1/p = p0,0 p1,1 ··· pn,n (−1)ᵏ

So all we need to do is multiply our running product by each pivot element and negate for each

Many graphics engines use Cramer's method to compute the inverse, and for 3 × 3 and 4 × 4 matrices it's not a bad choice; for matrices of this size, Cramer's method is actually faster than Gaussian elimination. Because of this, we have chosen to implement IvMatrix33::Inverse() using an efficient form of Cramer's method.

However, whether you're using Gaussian elimination or Cramer's method, you're probably doing more work than is necessary for the matrices

3.7 Eigenvalues and Eigenvectors

Now, for a given eigenvalue there will be an infinite number of associated eigenvectors, all scalar multiples of each other. This is called the eigenspace for that eigenvalue. To find the eigenspace for a particular eigenvalue, we simply substitute that eigenvalue into equation 3.15 and solve for x.

In practice, solving the characteristic equation becomes more and more difficult the larger the matrix. However, there is a particular class of

3.8 Chapter Summary

For those who are interested in reading further, Anton and Rorres [3] is a standard reference for many first courses in linear algebra. Other texts with slightly different approaches include Axler [4] and Friedberg et al. [39]. More information on Gaussian elimination and its extensions, such as LU decomposition, can be found in Anton and Rorres [3] as well as in the Numerical Recipes series [96]. Finally, Blinn has an excellent article in his collection Notation, Notation, Notation


Chapter 4 Affine Transformations

4.1 Introduction

Now that we've chosen a mathematically sound basis for representing geometry in our game and discussed some aspects of matrix arithmetic, we need to combine them into an efficient method for placing and moving virtual objects or models. There are a few reasons we seek this efficiency. Suppose we wish to build a core level in our game space, say the office of a computer company. We could build all of our geometry in place and hard-code all of the

4.2 Affine Transformations

4.2.1 Matrix Definition

In the last chapter we discussed linear transformations, which map from one vector space to another. We can apply such transformations to vectors using matrix operations. There is a nearly equivalent set of transformations that map between affine spaces, which we can apply to points and vectors in an affine space. These are known as affine transformations and they too can be applied using matrix operations,

libraries to use the trailing 1 on points and trailing 0 on vectors. If we multiply a vector using this representation by our (m + 1) × (n + 1) matrix,

    | A  y | | v |   | A v |
    | 0ᵀ 1 | | 0 | = |  0  |

we see that the vector is affected by the upper left m × n matrix A, but not the vector y. This has the same effect on the first n elements of v as multiplying an n-dimensional vector by A, which is a linear transformation. So, this representation allows us to use affine transformation matrices to apply

We then multiply both sides to change the left-most matrix to the identity:

    | A⁻¹ 0 | | A  0 | | A  y |⁻¹   | A⁻¹ 0 | | I  −y |
    | 0ᵀ  1 | | 0ᵀ 1 | | 0ᵀ 1 |   = | 0ᵀ  1 | | 0ᵀ  1 |        (4.3)

    | A  y |⁻¹   | A⁻¹  −A⁻¹ y |
    | 0ᵀ 1 |   = | 0ᵀ       1  |

thereby giving us the inverse on the right-hand side.

When we're working in R3, A will be a 3 × 3 matrix and y will be a 3-vector; hence the full affine matrix will be a 4 × 4 matrix. Most graphics libraries expect transformations to be in the 4 × 4 matrix form, so if we do use the more compact forms in our math
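The block-matrix inverse [A y; 0ᵀ 1]⁻¹ = [A⁻¹ −A⁻¹y; 0ᵀ 1] can be checked numerically in 2D, where A is easy to invert directly. This is our own illustrative sketch (names `Affine2`, `TransformPoint`, `Inverse`), not library code from the book.

```cpp
#include <cassert>
#include <cmath>

// A 2D affine transformation: upper-left linear part A and translation y.
struct Affine2 { float a[2][2]; float y[2]; };

// Apply the transformation to a point: out = A p + y.
void TransformPoint(const Affine2& m, const float p[2], float out[2])
{
    out[0] = m.a[0][0]*p[0] + m.a[0][1]*p[1] + m.y[0];
    out[1] = m.a[1][0]*p[0] + m.a[1][1]*p[1] + m.y[1];
}

// Build the inverse using the block formula: linear part A^-1,
// translation part -A^-1 y.
Affine2 Inverse(const Affine2& m)
{
    float det = m.a[0][0]*m.a[1][1] - m.a[0][1]*m.a[1][0];
    Affine2 inv;
    inv.a[0][0] =  m.a[1][1] / det;
    inv.a[0][1] = -m.a[0][1] / det;
    inv.a[1][0] = -m.a[1][0] / det;
    inv.a[1][1] =  m.a[0][0] / det;
    inv.y[0] = -(inv.a[0][0]*m.y[0] + inv.a[0][1]*m.y[1]);
    inv.y[1] = -(inv.a[1][0]*m.y[0] + inv.a[1][1]*m.y[1]);
    return inv;
}
```

Transforming a point and then applying the inverse should return the original point, e.g. a 90-degree rotation plus translation (3, 4) maps (1, 2) forward and back.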

Affine transformations are particularly useful to us because they preserve certain properties of geometry. First, they maintain collinearity, so points on a line will remain collinear and points on a plane will remain coplanar when transformed.

If we transform a line:

    L(t) = (1 − t)P0 + t P1
    T(L(t)) = T((1 − t)P0 + t P1)
            = (1 − t)T(P0) + t T(P1)

the result is clearly still a line (assuming T(P0) and T(P1) aren't coincident). Similarly, if we transform a plane:

    P(s, t) = (1 − s −

4.2.3 Formal Representation

Suppose we have an affine transformation that maps from affine space A to affine space B, where the frame for A has basis vectors (v0, ..., vn−1) and origin OA, and the frame for B has basis vectors (w0, ..., wm−1) and origin OB. If we apply an affine transformation to a point P = (x0, ..., xn−1) in A, this gives

    T(P) = T(x0 v0 + ··· + xn−1 vn−1 + OA)
         = x0 T(v0) + ··· + xn−1 T(vn−1) + T(OA)

As we did with linear

of the frame for B into the columns of a matrix and use matrix multiplication to apply the affine transformation to an arbitrary point.

4.3 Standard Affine Transformations

Now that we've defined affine transformations in general, we can discuss some specific affine transformations that will prove useful when manipulating objects in our game. We'll cover these in terms of transformations from R3 to R3, since they will be the most common uses. However, we

We can determine the matrix for a translation by computing the transformation for each of the frame elements. For the origin O, this is

    T(O) = t + O = tx i + ty j + tz k + O

For a given basis vector, we can find two points P and Q that define the vector and compute the transformation of their difference. For example, for i:

    T(i) = T(P − Q)
         = T(P) − T(Q)
         = (t + P) − (t + Q)
         = P − Q
         = i

The same holds true for j and k, so translation has no effect on the basis vectors in

We can use equation 4.3 to compute the inverse translation transformation:

    Tt⁻¹ = | I⁻¹  −I⁻¹ t |                (4.4)
           | 0ᵀ       1  |

         = | I  −t |                      (4.5)
           | 0ᵀ  1 |

         = T−t                            (4.6)

So, the inverse of a given translation negates the original translation vector to displace the point back to its original position.

4.3.2 Rotation

The other common rigid transformation is rotation. If we consider the rotation of a vector, we are rigidly changing its direction around an axis without changing its length. In R2, this is

[Figure 4.3: Axis and plane of rotation.]

[Figure 4.4: Rotation of a point.]

The center of rotation is commonly defined as the origin of the current frame (we'll refer to this as a pure rotation) but can be any arbitrary point. We can think of this as defining a vector v from the center of rotation to the point to be rotated, rotating v, and then adding the result to the center of rotation to compute the new position of the point. For now we'll only cover pure

[Figure 4.5: (a) x-axis rotation, (b) y-axis rotation, and (c) z-axis rotation.]

Figure 4.6 shows why this works. Since we're rotating around the z-axis, no z values will change, so we will consider only how the rotation affects the xy values of the points. The starting position of the point is (x, y), and we want to rotate that θ degrees counterclockwise. Handling this in Cartesian coordinates can be problematic, but this is one case where polar

[Figure 4.6: Rotation in xy plane.]

will be at (r, φ + θ) (in polar coordinates). Converting to Cartesian coordinates, the final point will lie at

    x′ = r cos(φ + θ)
    y′ = r sin(φ + θ)

Using trigonometric identities, this becomes

    x′ = r cos φ cos θ − r sin φ sin θ
    y′ = r cos φ sin θ + r sin φ cos θ

But r cos φ = x, and r sin φ = y, so we can substitute and get

    x′ = x cos θ − y sin θ
    y′ = x sin θ + y cos θ

We can derive similar equations for rotation around the x-axis
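The derived pair x′ = x cos θ − y sin θ, y′ = x sin θ + y cos θ is worth writing down as a tiny helper; note the temporaries so the updated x doesn't leak into the computation of y. The function name is ours, not the book's.

```cpp
#include <cassert>
#include <cmath>

// Rotate the point (x, y) counterclockwise by theta radians about
// the origin, using x' = x cos t - y sin t, y' = x sin t + y cos t.
void RotateXY(float theta, float& x, float& y)
{
    float c = std::cos(theta);
    float s = std::sin(theta);
    float nx = x*c - y*s;   // compute both before writing back
    float ny = x*s + y*c;
    x = nx;
    y = ny;
}
```

Rotating (1, 0) by 90 degrees should give (0, 1), up to floating-point error.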

To create the corresponding transformation, we need to determine how the frame elements are transformed. The frame's origin will not change since it's our center of rotation, so y = 0. Therefore, our primary concern will be the contents of the 3 × 3 matrix A.

For this matrix, we need to compute where i, j, and k will go. For example, for rotations around the z-axis we can transform i to get

    x′ = (1) cos θ − (0) sin θ = cos θ
    y′ = (1) sin θ + (0) cos θ = sin θ
    z′ = 0

y-axis, and then the x-axis, we can create one form of a generalized rotation matrix:

               |  Cy Cz                −Cy Sz                Sy    |
    Rx Ry Rz = |  Sx Sy Cz + Cx Sz    −Sx Sy Sz + Cx Cz    −Sx Cy |      (4.8)
               | −Cx Sy Cz + Sx Sz     Cx Sy Sz + Sx Cz     Cx Cy |

where

    Cx = cos θx    Sx = sin θx
    Cy = cos θy    Sy = sin θy
    Cz = cos θz    Sz = sin θz

Recall that the inverse of an orthogonal matrix is its transpose. Because pure rotation matrices are orthogonal, the inverse of any rotation matrix is also its transpose. Therefore, the inverse of the

[Figure 4.7: (a) General rotation, showing axis of rotation and rotation plane, and (b) general rotation, showing vectors on rotation plane.]

The perpendicular part is what remains of v after we subtract the parallel part, or

    v⊥ = v − (v · r̂) r̂        (4.10)

To properly compute the effect of rotation, we need to create a two-dimensional (2D) basis on the plane of rotation (Figure 4.7b). We'll use v⊥ as our first basis vector, and we'll need a vector w perpendicular to it for our second basis vector. We can take the cross product with r̂ for this:

    w = r̂ × v⊥ = r̂ × v        (4.11)

In the standard basis

where

    r̂ = (x, y, z)
    c = cos θ
    s = sin θ
    t = 1 − cos θ

As we can see, there is a wide variety of choices for the 3 × 3 matrix A, depending on what sort of rotation we wish to perform. The full affine matrix for rotation around the origin is

    | R  0 |
    | 0ᵀ 1 |

where R is one of the rotation matrices just given. For example, the affine matrix for rotation around the x-axis is

    | Rx 0 |   | 1    0       0     0 |
    | 0ᵀ 1 | = | 0  cos θ  −sin θ   0 |
               | 0  sin θ   cos θ   0 |
               | 0    0       0     1 |

This is also an orthogonal matrix and
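With r̂ = (x, y, z), c, s, and t defined as above, the general rotation matrix can be sketched in the standard axis-angle (Rodrigues) form. This is our own illustration (the function name is ours), assuming the axis is already unit length.

```cpp
#include <cassert>
#include <cmath>

// Build the 3x3 rotation matrix for a rotation of theta radians about
// the unit axis r = (x, y, z), with c = cos theta, s = sin theta,
// t = 1 - cos theta (standard axis-angle form).
void AxisAngleMatrix(float x, float y, float z, float theta, float R[3][3])
{
    float c = std::cos(theta), s = std::sin(theta), t = 1.0f - c;
    R[0][0] = t*x*x + c;    R[0][1] = t*x*y - s*z;  R[0][2] = t*x*z + s*y;
    R[1][0] = t*x*y + s*z;  R[1][1] = t*y*y + c;    R[1][2] = t*y*z - s*x;
    R[2][0] = t*x*z - s*y;  R[2][1] = t*y*z + s*x;  R[2][2] = t*z*z + c;
}
```

Choosing the axis (0, 0, 1) should reproduce the z-axis rotation matrix from earlier in the section.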

Remember that the order of concatenation matters, because matrix multiplication — particularly for rotation matrices — is not a commutative operation.

4.3.3 Scaling

The remaining affine transformations that we will cover are deformations, since they don't preserve exact lengths or angles. The first is scaling, which can be thought of as corresponding to our other basic vector operation, scalar multiplication; however, it is not quite the same. Scalar

[Figure 4.8: Nonuniform scaling.]

This is a diagonal matrix, with the positive scale factors lying along the diagonal, so the inverse is

                                 | 1/a  0   0   0 |
    Sabc⁻¹ = S(1/a, 1/b, 1/c) =  |  0  1/b  0   0 |
                                 |  0   0  1/c  0 |
                                 |  0   0   0   1 |

4.3.4 Reflection

The reflection transformation symmetrically maps an object across a plane or through a point. One possible reflection is (Figure 4.9a)

    x′ = −x
    y′ = y
    z′ = z

This reflects across the yz plane and gives an effect like a standard mirror (mirrors don't swap

[Figure 4.9: (a) yz reflection, and (b) xz reflection.]

As one might expect, we can create a planar reflection that reflects across a general plane, defined by a normal n̂ and a point on the plane P0. For now we'll consider only planes that pass through the origin. If we have a vector v in our affine space, we can break it into two parts relative to the plane normal: the orthogonal part v⊥, which will remain unchanged, and the parallel part v‖,

[Figure 4.10: General reflection.]

Thus, the linear transformation part A of our affine transformation is [I − 2(n̂ ⊗ n̂)]. Writing this as a block matrix, we get:

    Fn = | I − 2(n̂ ⊗ n̂)  0 |
         | 0ᵀ             1 |

While in the real world we usually see planar reflections, in our virtual world we can also compute a reflection through a point. The following performs a reflection through the origin (Figure 4.11):

    x′ = −x
    y′ = −y
    z′ = −z

The corresponding block matrix
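Applied to a vector, A = I − 2(n̂ ⊗ n̂) reduces to v′ = v − 2(n̂ · v)n̂, which avoids building the matrix at all. A minimal sketch, with our own names (`V3`, `Reflect`) and assuming n̂ is unit length:

```cpp
#include <cassert>

struct V3 { float x, y, z; };

// Reflect v across the plane through the origin with unit normal n:
// v' = (I - 2(n (x) n)) v = v - 2(n . v) n.
V3 Reflect(const V3& v, const V3& n)
{
    float d = 2.0f * (n.x*v.x + n.y*v.y + n.z*v.z);
    return { v.x - d*n.x, v.y - d*n.y, v.z - d*n.z };
}
```

With n̂ = (1, 0, 0) this matches the yz-plane reflection above: only the x component is negated.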

[Figure 4.11: Point reflection.]

appears to reflect through the z-axis, giving a "funhouse mirror" effect, where right and left are swapped (if y is left, it becomes −y in the reflection, and so ends up on the right side). However, if we examine the transformation closely, we see that while it does perform the desired effect, this is actually a rotation of 180 degrees around the z-axis. While both pure rotations and pure reflections through the origin are

[Figure 4.12: z-shear.]

If we represent P as P0 + v, where P0 is a point on the shear plane, then P will be displaced by (n̂ · v) s.

The simplest case is when we apply shear perpendicular to one of the main coordinate axes. For example, if we take the yz plane as our shear plane, our normal is i and the shear plane passes through the origin O. We know from this that O will not change with the transformation, so our translation vector y is 0. As before, to find A we need to

The vector s in this case is orthogonal to i, therefore it is of the form (0, a, b), so our transformed basis vector will be (1, a, b). Our final matrix A is

    Hx = | 1 0 0 |
         | a 1 0 |
         | b 0 1 |

We can go through a similar process to get shear by the y-axis:

    Hy = | 1 c 0 |
         | 0 1 0 |
         | 0 d 1 |

and shear by the z-axis:

    Hz = | 1 0 e |
         | 0 1 f |
         | 0 0 1 |

For shearing by a general plane through the origin, we already have the formula for the displacement: (n̂ · v) s. We can rewrite this as a tensor product

[Figure 4.13: Rotation of origin around arbitrary center.]

Let's look at a particular example — the rotation of a point around an arbitrary center of rotation C — and determine how this transformation affects the origin of our frame. If we look at Figure 4.13, we see the situation. We have a point C and our origin O. We want to rotate the difference vector v = O − C between the two points by matrix R and determine where the resulting point T(O), or C + Rv,

The same construction can be used for all affine transformations that use a center of transformation: rotation, scale, reflection, and shear. The exception is translation, since such an operation has no effect: P − x + t + x = P + t. But for the others, using a point C = (x, 1) as our arbitrary center of transformation gives

    Mc = | A  (I − A) x |
         | 0ᵀ        1  |

where A is the upper 3 × 3 matrix of an origin-centered transformation. The corresponding inverse is

    Mc⁻¹ = | A⁻¹  (I − A⁻¹) x |
           | 0ᵀ            1  |
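The construction Mc = [A (I − A)x; 0ᵀ 1] can be checked in 2D: the center of transformation must map to itself, and other points should behave as if we subtracted the center, applied A, and added the center back. A sketch under our own names, not the book's code:

```cpp
#include <cassert>
#include <cmath>

// Apply the linear transformation A about an arbitrary 2D center (cx, cy):
// the translation part of Mc is (I - A) * center, so the transformed point
// is A p + (I - A) c. The center itself is a fixed point.
void ApplyAboutCenter(const float A[2][2], float cx, float cy,
                      float& px, float& py)
{
    float tx = cx - (A[0][0]*cx + A[0][1]*cy);  // (I - A) c, x part
    float ty = cy - (A[1][0]*cx + A[1][1]*cy);  // (I - A) c, y part
    float nx = A[0][0]*px + A[0][1]*py + tx;
    float ny = A[1][0]*px + A[1][1]*py + ty;
    px = nx;
    py = ny;
}
```

For a 90-degree rotation about center (2, 3): the center stays fixed, and the point (3, 3), one unit to the right of the center, moves to (2, 4), one unit above it.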

4.4 Using Affine Transformations

If we transform all the points on the plane by some matrix M, then to maintain the relationship between nᵀ and P, we'll have to transform n by some unknown matrix Q, or

    (Q n)ᵀ (M P) = 0

This can be rewritten as

    nᵀ Qᵀ M P = 0

One possible solution for this is if

    I = Qᵀ M

Solving for Q gives

    Q = (M⁻¹)ᵀ

So, the transformed plane coefficients become

    n′ = (M⁻¹)ᵀ n

The same approach will work if we're transforming the plane normal and point as described earlier. We transform the point P0

the basic level — the walls, the floor, the ceilings, and so forth — as a single set of triangles with coordinates defined to place them exactly where we might want them in the world. However, suppose we have a single desk model that we want to duplicate and place in various locations in the level. The artist could build a new version of the desk for each location in the core level geometry, but that would involve unnecessarily duplicating all the memory

Typically, the origin of the frame is placed in a position convenient for the game, either at the center of the object or at the bottom of the object. The first is useful when we want to rotate objects around their centers, the second for placement on the ground.

When constructing our world, we define a specific coordinate frame, or world frame, also known as world space. The world frame acts as a common reference among all the objects, much as the origin acts as

    162 Chapter 4 Affine TransformationsFigure4.15 Local-to-world transformation.The most commonly used affine transformations for object placement aretranslation, rotation, and scaling. Translation and rotation are convenientfor two reasons. First, they correspond naturally to two of the character-istics we want to control in our objects: position and orientation. Second,they are rigid transformations, meaning they don’t affect the size or shapeof our object, which is generally the desired read more..


    4.4 Using Affine Transformations163of (−ty,tx,tz). As another example, look at Figure 4.16(a), which shows arotation and translation. Figure 4.16(b) shows the equivalent translation androtation.Scaling and rotation are also noncommutative. If we first scale (1, 0, 0) by(sx,sy,sz), we get the point (sx, 0, 0). Rotating this by 90 degrees around z,weend up with (0,sx, 0). Reversing the transformation order, if we rotate (1, 0, 0)by 90 degrees around z, we get the point (0, 1, 0). Scaling this read more..


    164 Chapter 4 Affine TransformationsThe final combination is scaling and translation. Again, this is notcommutative. Remember that pure scaling is applied from the origin of theframe. If we translate an object from the origin and then scale, there will beadditional scaling done to the translation of the object. So, for example, ifwe scale (1, 1, 1) by (sx,sy,sz) and then translate by (tx,ty,tz), we end up with(tx+ sx,ty+ sy,tz+ sz). If instead we translate first, we get (tx+ 1,ty+ 1,tz+ read more..


rotation, and translation, it would be ideal if we could break A into the product of a scaling and rotation matrix. If we know for a fact that A is the product of only a scaling and rotation matrix, in the order RS, we can multiply it out to get

    | r11 r12 r13 0 | | sx  0  0 0 |   | sx*r11 sy*r12 sz*r13 0 |
    | r21 r22 r23 0 | |  0 sy  0 0 | = | sx*r21 sy*r22 sz*r23 0 |
    | r31 r32 r33 0 | |  0  0 sz 0 |   | sx*r31 sy*r32 sz*r33 0 |
    |  0   0   0  1 | |  0  0  0 1 |   |    0      0      0   1 |

In this case, the lengths of the first three …


    166 Chapter 4 Affine TransformationsyxFigure4.19 Effect of rotation, then scale.Matrix Uin this case is another orthogonal matrix, and Kis a diagonal matrix.The stretch matrix combines the scale-plus-shear effect we saw in our exam-ple: It rotates the frame to an orientation, scales along the axes, and thenrotates back. Using this, a general affine matrix can be broken into fourtransformations:M= TRNSwhere Tis a translation matrix, Qhas been separated into a rotation matrixRand a reflection read more..


Since each scaling transformation is uniformly scaling, we can simplify this to

    M = Rn σn ··· R1 σ1 R0 σ0

Using matrix algebra, we can shuffle terms to get

    M = Rn ··· R1 R0 σn ··· σ1 σ0
      = R σ
      = RS

where R is a rotation matrix and S is a uniform scaling matrix. So, if we use uniform scaling, we can in fact decompose our matrix into a rotation and scaling matrix, as we just did.

Source Code: Demo Separate

However, even in this case the decomposition takes three square …


    168 Chapter 4 Affine TransformationsBut since T0 is applied after R0 and S0, they have no effect on it. So, if wewant to find how the translation changes, we drop them:M= T1 R1 S1 T0Multiplying this out in block format gives usM=It10T1R100T1s1I00T1It00T1=R1t10T1s1Is1 t00T1=s1R1s1R1 t0+ t10T1We can see that the right-hand column vector yis equal to equation 4.16. Toget the final translation we need to apply the second scale and rotation beforeadding the second translation. Another way of read more..


    4.5 Object Hierarchies169Which representation is better? It depends on your application. If all youwish to do is an initial scale and then apply sequences of rotations and trans-lations, the 4× 4matrix format works fine and will be faster on a vectorprocessor. If, on the other hand, you wish to make changes to scale as well,using the alternate format should at least be considered. And, as we’ll see, ifwe wish to use a rotation representation other than a matrix, the alternateformation is read more..


    170 Chapter 4 Affine TransformationsThe idea is to transform the arm to body space (Figure 4.21(a)) and thencontinue the transform into world space (Figure 4.21(b)). In this case, foreach stage of transformation we perform the order as scale, rotate, and thentranslate. In matrix format the world transformation for the arm would beW= TbodyRbodySbodyTarmRarmSarmAs we’ve indicated, the body and arm are treated as two separate objects, eachwith its own transformations, placed in a hierarchy. The read more..


    4.6 Chapter Summary171body’s space. When rendering, for example, we begin by drawing the bodywith its world transformation and then drawing the arm with the concate-nation of the body’s transformation and the arm’s transformation. By doingthis, we can change them independently — rotating the arm around the shoul-der, for example, without affecting the body at all. Similar techniques can beused to create deeper hierarchies, for example, a turret that rotates on topof a tank chassis, with read more..


    Chapter5OrientationRepresentation5.1 IntroductionIn the previous chapter we discussed various types of affine transformationsand how they can be represented by a matrix. In this chapter we will focusspecifically on orientation and the rotation transformation. We’ll look at fourdifferent orientation formats and compare them on the basis of the followingcriteria:■Represents orientation/rotation with a small number of values.■Can be concatenated efficiently to form new read more..


    174 Chapter 5 Orientation Representationand how suitable it is for numeric integration in physics. Both of these topicswill be discussed in Chapters 10 and 13, respectively.As we’ll see, there is no one choice that meets all of our requirements;each has its strengths and weaknesses in each area, depending on our imple-mentation needs.5.2 Rotation MatricesSince we have been using matrices as our primary orientation/rotationrepresentation, it is natural to begin our discussion with them.For our read more..


    5.3 Fixed and Euler Angles175zy21x3Figure5.1 Order and direction of rotation for z-y-x fixed angles.and multiple rotations around a given axis. Note that we are using radiansrather than degrees to represent our angles; either convention is acceptable,but the trigonometric functions used in C or C++ expect radians.The order we’ve given is somewhat arbitrary, as there is no standard orderthat is used for the three axes. We could have used the sequence x-y-z or z-x-yjust as well. We can even read more..


    176 Chapter 5 Orientation RepresentationRollPitchYawFigure5.2 Roll, pitch, and rotations relative to model coordinate axes.(Figure 5.2). Whether a given roll, pitch, or heading rotation is around x, y,or zdepends on how we’ve defined our coordinate frame. Suppose we are usinga coordinate system where the z-axis represents up, the x-axis represents for-ward, and the y-axis represents left. Then heading is rotation around the z-axis,pitch is rotation around the y-axis, and roll is rotation read more..


5.3.2 Format Conversion

By concatenating three general axis rotation matrices and expanding out the terms, we can create a generalized rotation matrix. The particular matrix will depend on which axis rotations we're using and whether they are fixed or Euler angles. For z-y-x fixed angles or x-y-z Euler angles, the matrix looks like

    R = Rx Ry Rz
        |  Cy*Cz             -Cy*Sz             Sy    |
      = |  Sx*Sy*Cz + Cx*Sz  -Sx*Sy*Sz + Cx*Cz  -Sx*Cy |
        | -Cx*Sy*Cz + Sx*Sz   Cx*Sy*Sz + Sx*Cz   Cx*Cy |

where

    Cx = cos θx    Sx = sin θx
    Cy = cos …


    178 Chapter 5 Orientation RepresentationNote that we have no idea whether cos θy should be positive or negative, sowe assume that it’s positive. Also, if cos θy= 0, then the xand zaxes havebecome aligned (see Section 5.3.5) and we can’t distinguish between rotationsaround xand rotations around z. One possibility is to assume that rotationaround zis 0, sosin θz= 0cos θz= 1sin θx= R21cos θx= R11Calling arctan2()for each sin/cos pair will return a possible angle in radians,generally in read more..


    5.3 Fixed and Euler Angles179(in general the break-even point is five vectors), it’s more efficient to convertto matrix format.5.3.5 Other IssuesAs if all of these disadvantages are not enough, the fatal blow is that in cer-tain cases fixed or Euler angles can lose one degree of freedom. We can thinkof this as a mathematical form of gimbal lock. In aeronautic navigationalsystems, there is often a set of gyroscopes, or gimbals, that control the orien-tation of an airplane or rocket. Gimbal read more..


    180 Chapter 5 Orientation RepresentationxzyFigure5.4 Effect of gimbal lock. Rotating the box around the world x-axis, thenthe world y-axis, then the world z-axis ends up having the same effect as rotating thebox around just the y-axis.or (0,π/2,θz−θx). Another way to think of this is: Were this in matrix form wewould not be able to extract unique values for θx and θz. We have effectivelylost one degree of freedom.To try this for yourself, take an object whose orientation can be read more..


    5.4 Axis–Angle Representation181other representations (being aware of the dangers of gimbal lock, of course),and our library will be no exception.5.4Axis–Angle Representation5.4.1 DefinitionRecall from Chapter 4 that we can represent a general rotation inR3 by anaxis of rotation, and the amount we rotate around this axis by an angle ofrotation. Therefore, we can represent rotations in two parts: a 3-vector rthatlies along the axis of rotation, and a scalar θthat corresponds to a read more..


ends up dividing by a near-zero value. In those cases, we set θ to 0 and r̂ to any arbitrary, normalized vector.

5.4.2 Format Conversion

To convert an axis–angle representation to a matrix, we can use the derivation from Chapter 4:

    R(r̂, θ) = | t*x² + c    t*x*y − s*z  t*x*z + s*y |
               | t*x*y + s*z  t*y² + c    t*y*z − s*x |     (5.3)
               | t*x*z − s*y  t*y*z + s*x  t*z² + c   |

where

    r̂ = (x, y, z)
    c = cos θ
    s = sin θ
    t = 1 − cos θ

Converting from a matrix to the axis–angle format has similar issues as the fixed-angle …


    5.4 Axis–Angle Representation183The values x, y, and zin this case are the components of our axis vectorˆr.Wecan compute ras (R21− R12,R02− R20,R10− R01), and normalize to getˆr.If θequals π, then R− RT= 0, which doesn’t help us at all. In this case,we can use another formulation for the rotation matrix, which only holds ifθ= π:R= I+ 2S2=⎡⎢⎣1− 2y2− 2z22xy2xz2xy1− 2x2− 2z22yz2xz2yz1− 2x2− 2y2⎤⎥⎦The idea is that we can use the diagonal elements to compute read more..


    184 Chapter 5 Orientation RepresentationFinally, if R22 is the largest element we usez=12R22− R00− R11+ 1x=R022zy=R122z5.4.3 ConcatenationConcatenating two axis–angle representations is not straightforward. Onemethod is to convert them to two matrices or two quaternions (see below),multiply, and then convert back to the axis–angle format. As one can easilysee, this is more expensive than just concatenating two matrices. Because ofthis, one doesn’t often perform this operation on read more..


    5.5 Quaternions1855.4.5 Axis–Angle SummaryWhile being a useful way of thinking about rotation, the axis–angle formatstill has some problems. Concatenating two axis–angle representations isextremely expensive. And unless we store two additional values, rotating vec-tors requires computing transcendental functions, which is not very efficienteither. Our next representation encapsulates some of the useful propertiesof the axis–angle format, while providing a more efficient method for read more..


    186 Chapter 5 Orientation RepresentationFrequently, we’ll want to use vectors in combination with quaternions.To do so, we’ll zero out the scalar part and set the vector part equal to ouroriginal vector. So, the quaternion corresponding to a vector uisq u = (0, u)Other than terminology, we aren’t that concerned about Hamilton’s inten-tions for generalized quaternions, because we are only going to consider aspecialized case discovered by Arthur Cayley [18]. In particular, he showedthat read more..


So, why reformat our previously simple axis and angle to this somewhat strange representation? As we'll see shortly, precooking the data in this way allows us to rotate vectors and concatenate with ease.

Our class implementation for quaternions looks like

    class IvQuat
    {
    public:
        // constructor/destructor
        inline IvQuat() {}
        inline IvQuat( float _w, float _x, float _y, float _z ) :
            w(_w), x(_x), y(_y), z(_z)
        {
        }
        IvQuat( const IvVector3& axis, float angle );
        explicit IvQuat( const …


    188 Chapter 5 Orientation Representationrwvwvˆ 2π–θ–rˆFigure5.6 Comparing rotation performed by a normalized quaternion (left) withits negation (right).5.5.4 NegationNegation is a subset of scale, but it’s worth discussing separately. One wouldexpect that negating a quaternion would produce a quaternion that applies arotation in the opposite direction — it would be the inverse. However, whileit does rotate in the opposite direction, it also rotates around the negativeaxis. The end read more..


    5.5 Quaternions189Since we’re assuming that our quaternions are normalized, we’ll forgo the useof the notationˆq to keep our equations from being too cluttered.5.5.6 Dot ProductThe dot product of two quaternions should also look familiar:q1 · q2 = w1w2+ x1x2+ y1y2+ z1z2As with vectors, this is still equal to the cosine of the angle between thequaternions, except that our angle is in four dimensions instead of the usualthree. What this gives us is a way of measuring how different two read more..


If the quaternion is not normalized, we need to scale the matrix by

    1 / (w² + x² + y² + z²)

To compute this on a serial processor we can make use of the fact that there are a lot of duplicated terms. The following is derived from Shoemake [104]:

    IvMatrix33&
    IvMatrix33::Rotation( const IvQuat& q )
    {
        float s, xs, ys, zs, wx, wy, wz, xx, xy, xz, yy, yz, zz;

        // if q is normalized, s = 2.0f
        s = 2.0f/( q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w );

        xs = s*q.x;
        ys = s*q.y;
        zs = …


    5.5 Quaternions191If the quaternion is normalized, the product will be the homogeneous rotationmatrix corresponding to the quaternion.To convert a matrix to a quaternion, we can use an approach that is similarto our matrix to axis–angle conversion. Recall that the trace of a rotationmatrix is 2 cos θ+ 1, where θis our angle of rotation. Also, from equation 5.4,we know that the vector r= (R21− R12,R02− R20,R10− R01) will have length2 sin θ. If we add 1 to the trace and use these as the read more..


    z = (R02 + R20) / 4x
    w = (R21 − R12) / 4x

We can simplify this by noting that

    4x² = R00 − R11 − R22 + 1
    4x²/4x = (R00 − R11 − R22 + 1) / 4x
    x = (R00 − R11 − R22 + 1) / 4x

Substituting this formula for x, we now see that all of the components are scaled by 1/4x. We can accomplish the same thing by taking the numerators

    x = R00 − R11 − R22 + 1
    y = R01 + R10
    z = R02 + R20
    w = R21 − R12

and normalizing.

Similarly, if the largest diagonal element is R11, we start with

    y = R11 − R00 − R22 + 1
    x = R01 + R10
    z = …


Converting from a fixed-angle format to a quaternion requires creating a quaternion for each rotation around a coordinate axis, and then concatenating them together. For the z-y-x fixed-angle format, the result is

    w = cos(θx/2) cos(θy/2) cos(θz/2) − sin(θx/2) sin(θy/2) sin(θz/2)
    x = sin(θx/2) cos(θy/2) cos(θz/2) + cos(θx/2) sin(θy/2) sin(θz/2)
    y = cos(θx/2) sin(θy/2) cos(θz/2) − sin(θx/2) cos(θy/2) sin(θz/2)
    z = cos(θx/2) cos(θy/2) sin(θz/2) + sin(θx/2) sin(θy/2) cos(θz/2)

Converting a quaternion to fixed or Euler angles is, quite frankly, an …


We can use these properties and well-known vector operations to simplify the product to

    q2 q1 = ( w1 w2 − v1 · v2,  w1 v2 + w2 v1 + v2 × v1 )

Note that we've expressed this in a right-to-left order, like our matrices. This is because the rotation defined by q1 will be applied first, followed by the rotation defined by q2. We'll see this more clearly when we look at how we use quaternions to transform vectors. Also note the cross product; due to this, …


        z = q2.x*q1.y - q2.y*q1.x
            + q2.w*q1.z + q1.w*q2.z;

        return IvQuat(w,x,y,z);
    }

Note that on a scalar processor, concatenating two quaternions can actually be faster than multiplying two matrices together.

An example of concatenating quaternions is the conversion from z-y-x fixed-angle format to a quaternion. The corresponding quaternions for each axis are

    qz = ( cos(θz/2), 0, 0, sin(θz/2) )
    qy = ( cos(θy/2), 0, sin(θy/2), 0 )
    qx = ( cos(θx/2), sin(θx/2), 0, 0 )

Multiplying these together in the order qx qy qz gives …


Figure 5.7 (a) Relationship between quaternion and its inverse. The inverse rotates around the same axis but by the negative angle. (b) Rotation around an axis by a negative angle is the same as rotation around the negative axis by a positive angle.

then

    (w, v)^-1 = ( cos(−θ/2), r̂ sin(−θ/2) )
              = ( cos(θ/2), −r̂ sin(θ/2) )

    (w, v)^-1 = (w, −v)        (5.9)

At first glance, negating the vector part of the quaternion (also known as the conjugate) to reverse …


    5.5 Quaternions197Equation 5.9 only holds if our quaternion is normalized. While in mostcases it should be since we’re trying to maintain unit quaternions, if it is notthen we need to scale by one over the length squared, orq−1=1q 2(w,−v)(5.10)Avoiding the floating-point divide in this case is another good reason to keepour quaternions normalized.Equation 5.10 may make more sense if we consider the inverse of aquaternion sˆq (i.e., a nonunit quaternion with magnitude s):(sˆq)−1 = read more..


component vector operations. Assuming that our quaternion is normalized, if we expand the full multiplication and combine terms, we get

    Rq(p) = (2w² − 1) p + 2 (v · p) v + 2w (v × p)        (5.12)

Substituting cos(θ/2) for w, and r̂ sin(θ/2) for v, we get

    Rq(p) = ( 2 cos²(θ/2) − 1 ) p + 2 ( r̂ sin(θ/2) · p ) r̂ sin(θ/2)
            + 2 cos(θ/2) ( r̂ sin(θ/2) × p )

Reducing terms and using the appropriate trigonometric identities, we end up with

    Rq(p) = ( cos²(θ/2) − sin²(θ/2) ) p + 2 sin²(θ/2) (r̂ · p) r̂ + 2 …


An alternate version,

    Rq(p) = (v · p) v + w² p + 2w (v × p) + v × (v × p)

is useful for processors that have fast cross product operations.

Neither of these formulas is as efficient as matrix multiplication, but for a single vector it is more efficient to perform these operations rather than convert the quaternion to a matrix and then multiply. However, if we need to rotate multiple vectors by the same quaternion, matrix conversion becomes worthwhile.

To see how concatenation of …


    200 Chapter 5 Orientation RepresentationWe begin by taking the dot product and cross product of the two vectors:v1· v2= v1v2 cos θv1× v2= v1v2 sin θˆrwhereˆr is our normalized rotation axis. Using these as the scalar and vectorparts, respectively, of a quaternion and normalizing gives usˆq1 = (cos θ,sin θˆr)This should look familiar from our previous discussion of matrix to quater-nion conversion. As before, if we add 1 to w,ˆqh = (cos θ+ 1, sin θˆr)and normalize, we getˆq = read more..


    5.6 Chapter Summary201Concatenation using the quaternion is similar to concatenation with ouroriginal separated format, except that we replace multiplication by the rota-tion matrix with quaternion operations:s= s1s0r= r1 r0t= t1+ r1(s1 t0)r−11Again, to add the translations, we first need to scale t0 by s1 and then rotateby the quaternion r1.As with lone quaternions, concatenation on a serial processor can be muchcheaper in this format than using a 4× 4matrix. However, transformation read more..


    202 Chapter 5 Orientation RepresentationFor further reading about quaternions, the best place to start is withthe writings of Shoemake, in particular [103]. Hamilton’s original series ofarticles on quaternions [52] are in the public domain and can be foundby searching online. Courant and Hilbert [21] cover applications of quater-nions, in particular to represent rotations. Finally, Eberly has an article [26]comparing orientation formats, and an entire chapter in his latest book [27]on read more..


    Chapter6Viewing andProjection6.1 IntroductionIn previous chapters we’ve discussed how to represent objects, basic transfor-mations we can apply to these objects, and how we can use these transforma-tions to move and manipulate our objects within our virtual world. With thatbackground in place, we can begin to discuss the mathematics underlying thetechniques we use to display our game objects on a monitor or other visualdisplay medium.It doesn’t take much justification to understand why we read more..


    204 Chapter 6 Viewing and ProjectionModelWorldViewProjectionFrustumClippingScreenFigure6.1 The graphics pipeline.set the camera’s position and orientation based on an affine transformation.Inverting this transformation is the first stage of our pipeline: It allows us totransform objects in the world frame into the point of view of the cameraobject.From there we will want to build and concatenate a matrix that transformsour objects in view into coordinates so they can be represented in an read more..


    6.2 View Frame and View Transformation2056.2 View Frame and View Transformation6.2.1 Defining a Virtual CameraIn order to render objects in the world, we need to represent the notion of aviewer. This could be the main character’s viewpoint in a first-person shooter,or an over-the-shoulder view in a third-person adventure game, or a zoomed-out wide shot in a strategy game. We may want to control properties of ourviewer to simulate a virtual camera, for example, we may want to createan in-game read more..


    206 Chapter 6 Viewing and Projectionview upview sideview directionview pointFigure6.2 View frame relative to the world frame.and right along the plane of the screen and yvalues vary up and down, whichis very intuitive.The remaining question is what to do with zand the view direction. Inmost systems, the z-axis is treated as the camera-relative view direction vector(Figure 6.3(a)). This has a nice intuitive feel: As objects in front of the viewermove farther away, their zvalues relative to the read more..


    6.2 View Frame and View Transformation207x-axisz-axisy-axis(a)x-axisz-axisy-axis(b)Figure6.3 (a) Standard view frame axes. (b) OpenGL view frame axes.likely be visible in our scene. Those on the other side of the plane formedby the view position, the view side vector, and the view up vector are behindthe camera, and therefore not visible. In order to achieve this situation, weneed to create a transformation from world space to view space, known as theworld-to-view transformation, or more simply, read more..


    208 Chapter 6 Viewing and Projectionto define which standard basis vector in the view frame maps to a particularview vector in the world frame.Recall that in the standard case, the camera’s local x-axis representsˆvside,the y-axis representsˆvup, and the z-axis representsˆvdir. This mapping indi-cates which columns the view vectors should be placed in, and the viewposition translation vector takes its familiar place in the right-most column.The corresponding transformation matrix read more..


    6.2 View Frame and View Transformation209view directioneyepointworld upzxyFigure6.4 Look-at representation.had a mission on a boat at sea and wanted to give the impression that the boatwas rolling from side to side, without affecting the simulation. One method isto change the world up vector over time, oscillating between two keeled-overorientations, and use that to calculate your camera orientation.For now, however, we’ll use kas our world up vector. Our goal is tocompute orthonormal vectors read more..


    210 Chapter 6 Viewing and ProjectionIf they point in opposite directions we getvup= k− (k· ˆvdir) ˆvdir= k− (−1) · ˆvdir= 0Clearly, neither case will lead to an orthonormal basis.The recovery procedure is to pick an alternative vector that we know isnot parallel, such as ior j. This will lead to what seems like an instantaneousrotation around the z-axis. To understand this, raise your head upward untilyou are looking at the ceiling. If you keep going, you’ll end up looking at read more..


position. If the three column vectors in our rotation matrix are u, v, and w, then for OpenGL the final transformation matrix is

    Mview→world = T Rogl
                = | i j k  vpos | | u v w  0 | | −j  k  −i  0 |
                  | 0 0 0   1   | | 0 0 0  1 | |  0  0   0  1 |
                = | −v  w  −u  vpos |
                  |  0  0   0    1  |

6.2.4 Constructing the World-to-View Transformation

Using the techniques in the previous two sections, we can now create a transformation that takes us from view space to world space. To create the reverse operator, we need only invert the transformation. …


    IvVector3 viewUp;

    viewDir.Normalize();
    viewUp = up - up.Dot(viewDir)*viewDir;
    viewUp.Normalize();
    viewSide = viewDir.Cross(viewUp);

    // now set up matrices
    // build transposed rotation matrix
    IvMatrix33 rotate;
    rotate.SetRows( viewSide, viewUp, -viewDir );

    // transform translation
    IvVector3 eyeInv = -(rotate*eye);

    // build 4x4 matrix
    IvMatrix44 matrix;
    matrix.Rotation(rotate);
    matrix(0,3) = eyeInv.x;
    matrix(1,3) = eyeInv.y;
    matrix(2,3) = eyeInv.z;

    // set view to world …


    6.3 Projective Transformation213We’ve already seen one example of projection: using the dot product toproject one vector onto another. In our current case, we want to project thepoints that make up the vertices of an object onto a plane, called the pro-jection plane or the view plane. We do this by following a line of projectionthrough each point and determining where it hits the plane. These lines couldbe perpendicular to the plane, but as we’ll see, they don’t have to be.To understand read more..


    214 Chapter 6 Viewing and ProjectionThere is, of course, one minor problem: The projected image is upsidedown and backwards. One possibility is just to flip the image when we displayit on our medium. This is what happens with a camera: The image is capturedon film upside down, but we can just rotate the negative or print to view itproperly. This is not usually done in graphics. Instead, the projection planeis moved to the other side of the center of projection, which is now treatedas our view read more..


    6.3 Projective Transformation215A parallel projection where the lines of projection are perpendicular tothe view plane is called an orthographic projection. By contrast, if they arenot perpendicular to the view plane, this is known as an oblique projection(Figure 6.8). Two common oblique projections are the cavalier projection,where the projection angle is 45 degrees, and the cabinet projection, where theprojection angle is cot−1(1/2). When using cavalier projections, projected lineshave the read more..


    216 Chapter 6 Viewing and Projectionare done with a “view camera.” This device has an accordion-pleated hoodthat allows the photographer to bend and tilt the lens up while keeping thefilm parallel to the side of the building. Ansel Adams also used such a camerato capture some of his famous landscape photographs.6.3.2 Normalized Device CoordinatesBefore we begin projecting, our objects have passed through the view stage ofthe pipeline and so are in view frame coordinates. We will be read more..


    6.3 Projective Transformation217view windowji(a)ji(1, 1)(–1, –1)(b)Figure6.10 (a) NDC frame in view window, and (b) view window after NDCtransformation.specified by a set of six planes. Anything inside these planes will be rendered;everything outside them will be ignored. This volume is known as the viewfrustum,or view volume.To constrain what we render in the view frame xydirections, we specifyfour planes aligned with the edges of the view window. For perspective projec-tion each plane is read more..


    218 Chapter 6 Viewing and Projectionx-axisz-axisy-axisfield of viewview windowFigure6.11 Perspective view frustum (right-handed system).to the view plane. As the field of view gets larger, the distance to the viewplane needs to get smaller to maintain the view window size. Similarly, asmall field of view will lead to a longer view plane distance. Alternatively, wecan set the distance to the view plane to a fixed value and use the field of viewto determine the size of our view window. The read more..


    6.3 Projective Transformation219For parallel projection, the xyculling planes are parallel to the direction ofprojection, so opposite planes are parallel and we end up with a parallelopipedthat is open at two ends (Figure 6.12). There is no concept of field of view inthis case.In both cases, to complete a closed view frustum we also define two planesthat constrain objects in the view frame z-direction: the near and far planes(Figure 6.13). With perspective projection it may not be obvious why read more..


    220 Chapter 6 Viewing and Projectionfar planenear planeview windowFigure6.13 View frustum with near plane and far plane.6.3.4 Homogeneous CoordinatesThere is one more topic we need to cover before we can start discussingprojection. Previously we stated that a point inR3 can be represented by(x,y,z,1)without explaining much about what that might mean. This rep-resentation is part of a more general representation for points known ashomogeneous coordinates, which prove useful to us when handling read more..


    6.3 Projective Transformation221just drop the w: (x ,y ,z ,1)→ (x ,y ,z ). However, in the cases that we’ll beconcerned with next, we need to perform the division by w.What happens when w= 0? In this case, a point inRP3 doesn’t representa point inR3, but a vector. We can think of this as a “point at infinity.” Whilewe will try to avoid cases where w= 0, they do creep in, so checking for thisbefore performing the homogeneous division is often wise.6.3.5 Perspective ProjectionSource read more..


    222 Chapter 6 Viewing and ProjectionBut how do we compute d? As we see, the cross section of the yviewfrustum planes are represented as lines from the center of projection throughthe extents of the view window (1,d )and (−1,d ). The angle between theselines is our field of view θfov. We’ll simplify things by considering only the areathat lies above the negative z-axis; this bisects our field of view to an angleof θfov/2. If we look at the triangle bounded by the negative z-axis, the read more..


    6.3 Projective Transformation223y-axisprojection plane–z-axisd–zv(yndc, –d)(yv, zv)Figure6.15 Perspective projection similar triangles.are rectangular to match the relative dimensions of a computer monitor orother viewing device. We must correct for this by the aspect ratio of the viewregion. The aspect ratio ais defined asa=wvhvwhere wv and hv are the width and height of the view rectangle, respectively.We’re going to assume that the NDC view window height remains at 2 andcorrect the read more..


    224 Chapter 6 Viewing and ProjectionThe first thing to notice is that we are dividing by a zcoordinate, so wewill not be able to represent the entire transformation by a matrix opera-tion, since it is neither linear nor affine. However, it does have some affineelements — scaling by dand d/a, for example — which can be performed bya transformation matrix. This is where the conversion from homogeneousspace comes in. Recall that to transform fromRP3 toR3 we need to divide theother read more..


    6.3 Projective Transformation225Dividing out the w(also called the reciprocal divide), we getxndc=dxv−azvyndc=dyv−zvzndc=−dwhich is what we expect.So far, we have dealt with projecting xand yand completely ignored z.Inthe preceding derivation all zvalues map to−d, the negative of the distanceto the projection plane. While losing a dimension makes sense conceptually(we are projecting from a 3D space down to a 2D plane, after all), for practicalreasons it is better to keep some measure of read more..


a perspective matrix with unknowns for the scaling and translation factors and use the fact that we know the final values for −n and −f to solve for the unknowns. Our starting perspective matrix, then, is

| d/a  0   0   0 |
|  0   d   0   0 |
|  0   0   A   B |
|  0   0  −1   0 |

where A and B are our unknown scale and translation factors, respectively. If we multiply this by a point (0, 0, −n) on our near plane, we get a transformed z of −An + B and a w of n. Requiring that the near plane map to zndc = −1 gives us

(−An + B)/n = −1,  or  B = An − n    (6.3)


Setting zndc to 1 and solving for A, we get

A(n/f − 1) − n/f = 1
A(n/f − 1) = n/f + 1
A = (n/f + 1)/(n/f − 1) = (n + f)/(n − f)

If we substitute this into equation 6.3, we get

B = 2nf/(n − f)

So, our final perspective matrix is

Mpersp =
| d/a  0       0            0       |
|  0   d       0            0       |
|  0   0   (n+f)/(n−f)  2nf/(n−f)   |
|  0   0      −1            0       |

The matrix that we have generated is the same one produced by an OpenGL call: gluPerspective(). This function takes the field of view, aspect ratio, and near and far plane settings, builds the perspective matrix, and multiplies it by the current matrix.
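As a minimal sketch, the matrix above can be built directly in code. The function name, the row-major float[16] layout, and the use of a field-of-view angle in radians are our own choices here, not part of the book's Iv library:

```cpp
#include <cassert>
#include <cmath>

// Build a row-major 4x4 OpenGL-style perspective matrix from the field of
// view (radians), aspect ratio a = width/height, and positive near/far
// distances n and f. Row-major: m[row*4 + col].
void BuildPerspective(float fov, float a, float n, float f, float m[16])
{
    float d = 1.0f / std::tan(fov * 0.5f);  // d = cot(fov/2)
    for (int i = 0; i < 16; ++i)
        m[i] = 0.0f;
    m[0]  = d / a;                   // x scale
    m[5]  = d;                       // y scale
    m[10] = (n + f) / (n - f);       // A: z scale
    m[11] = 2.0f * n * f / (n - f);  // B: z translation
    m[14] = -1.0f;                   // copies -z into w for the reciprocal divide
}
```

For example, with a 90-degree field of view d = 1, and a point at z = −n transforms to z' = −An + B, w' = n, so z'/w' = −1 as the derivation requires.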


Using the standard view frame and this mapping gives us a perspective transformation matrix of

MpD3D =
| d/a  0      0           0      |
|  0   d      0           0      |
|  0   0   f/(f−n)   −nf/(f−n)   |
|  0   0      1           0      |

This matrix can be derived using the same principles described above. When setting up a perspective matrix, it is good to be aware of the issues involved in rasterizing z values. In particular, to maintain z precision keep the near and far planes as close together as possible.


We now need to scale to change our interval from a magnitude of (t − b) to a magnitude of 2 by using a scale factor 2/(t − b):

yndc = 2y/(t − b) − (t + b)/(t − b)    (6.4)

If we substitute n·yv/(−zv) for y and simplify, we get

yndc = (2n·yv/(−zv))/(t − b) − (t + b)/(t − b)

[Figure 6.17: (a) View window for glFrustum, 3D view, with corners from (left, top, −near) to (right, bottom, −near). (b) Cross section along the y-axis, showing the eyepoint, the near plane at −near, and the extents (top, −near) and (bottom, −near).]


= (2n·yv)/((−zv)(t − b)) − ((t + b)(−zv))/((−zv)(t − b))
= (1/(−zv)) [ (2n/(t − b))·yv + ((t + b)/(t − b))·zv ]

A similar process gives us the following for the x direction:

xndc = (1/(−zv)) [ (2n/(r − l))·xv + ((r + l)/(r − l))·zv ]

We can use the same A and B from our original perspective matrix, so our final projection matrix is

Moblpersp =
| 2n/(r−l)     0       (r+l)/(r−l)      0      |
|    0      2n/(t−b)   (t+b)/(t−b)      0      |
|    0         0       (n+f)/(n−f)  2nf/(n−f)  |
|    0         0           −1           0      |

A casual inspection of this matrix gives some sense of what's going on here. We have a scale in x and y, plus a z-dependent term that recenters the off-axis view window.


    6.3 Projective Transformation231goggle system that displays the left and right views in each eye appropriately,we can provide a good approximation of stereo vision. We have included anexample of this on the CD-ROM.Finally, this can be used for a system called fishtank VR. Normally wethink of VR as a helmet attached to someone’s head with a display for eacheye. However, by attaching a tracking device to a viewer’s head we can usea single display and create an illusion that we are looking read more..


them to the interval [−1, 1]. Substituting yv into our range transformation equation 6.4, we get

yndc = 2yv/(t − b) − (t + b)/(t − b)

A similar process gives us the equation for xndc. We can do the same for zndc, but since our viewable z values are negative and our values for n and f are positive, we need to negate our z value and then perform the range transformation. The result of all three equations is

Mortho =
| 2/(r−l)     0        0      −(r+l)/(r−l) |
|    0     2/(t−b)     0      −(t+b)/(t−b) |
|    0        0    −2/(f−n)   −(f+n)/(f−n) |
|    0        0        0           1       |
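A sketch of this orthographic matrix in code follows; the function name and row-major layout are our own assumptions, and the z row follows the negate-then-range-transform step just described (the same form glOrtho produces):

```cpp
#include <cassert>
#include <cmath>

// Build a row-major 4x4 orthographic projection matrix from the view-window
// extents l, r, b, t and positive near/far distances n and f.
void BuildOrtho(float l, float r, float b, float t,
                float n, float f, float m[16])
{
    for (int i = 0; i < 16; ++i)
        m[i] = 0.0f;
    m[0]  = 2.0f / (r - l);          // x range transform
    m[3]  = -(r + l) / (r - l);
    m[5]  = 2.0f / (t - b);          // y range transform
    m[7]  = -(t + b) / (t - b);
    m[10] = -2.0f / (f - n);         // negate z, then range transform
    m[11] = -(f + n) / (f - n);
    m[15] = 1.0f;                    // w passes through unchanged (affine)
}
```

Note that, unlike the perspective matrix, the bottom row is (0, 0, 0, 1): the transformation is affine, so no reciprocal divide is needed.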


    6.3 Projective Transformation233Neither OpenGL nor Direct3D has a particular routine that handlesoblique parallel projections, so we’ll derive one ourselves. We will give ourprojection a slight oblique angle (cot−1(1/2), which is about 63.4 degrees),which gives a 3D look without perspective. More extreme angles in xand ytend to look strangely flat.Figure 6.19 is another example of our familiar cross section, this timeshowing the lines of projection for our oblique projection. As we can see, read more..


Solving for t, we get

t = (1/2)(n + Pz)

Plugging this into the formula for the y coordinate of L(t), we get

y = Py + (1/2)(n + Pz)

Finally, we can plug this into our range transformation equation 6.4 as before to get

yndc = [2(yv + (1/2)(n + zv))]/(t − b) − (t + b)/(t − b)
     = 2yv/(t − b) − (t + b)/(t − b) + (zv + n)/(t − b)

Once again, we examine our transformation equation more carefully. This is the same as the orthographic transformation we had before, with an additional z-shear, as we'd expect for an oblique projection.


    6.4 Culling and Clipping2356.4 Culling and Clipping6.4.1 Why Cull or Clip?We will now take a detour from discussing the transformation aspect ofour pipeline to discuss a process that often happens at this point in manyrenderers. In order to improve rendering, both for speed and appearance’ssake, it is necessary to cull and clip objects. Culling is the process of remov-ing objects from consideration for some process, whether it be rendering,simulation, or collision detection. In this case, that read more..


    236 Chapter 6 Viewing and ProjectionFigure6.21 View frustum clipping.y-axisprojection plane–z-axisFigure6.22 Projection of objects behind the eye.example why. Recall that we finessed the problem of the camera obscurainverting images by moving the view plane in front of the center of projec-tion. However, we still have the same problem if an object is behind the viewposition; it will end up projected upside down. The solution is to cull objectsthat lie behind the view position.Figure 6.23(a) read more..


    6.4 Culling and Clipping237projection planeview directionPQ9QP9eye(a)projection planeview directionPQ9QP9eye(b)projection planenear planeview directionPPclip9PclipQ9Qeye(c)Figure6.23 (a) Projection of line segment crossing behind view point. (b) Incor-rect line segment rendering based on projected endpoints. (c) Line segment renderingwhen clipped to near plane.line segment should start at the middle of the view, move up, and wrap aroundto reemerge at the bottom of the view. In practice, however, read more..


    238 Chapter 6 Viewing and Projection(x, y,0, 1)will become (x ,y ,z ,0). The resulting transformation into NDCspace will be a division by 0, which is not valid.To avoid all of these issues, at the very least we need to set a near planethat lies in front of the eye so that the view position itself does not lie withinthe view frustum. We first cull any objects that lie on the same side of thenear plane as the view position. We then clip any objects that cross the nearplane. This avoids both the read more..


    6.4 Culling and Clipping239frustum planes and all the vertices ax+ by+ cz+ d>0, then we know themodel lies entirely inside the frustum and we don’t need to worry aboutclipping it.While this will work, for models with large numbers of vertices thisbecomes expensive, probably outweighing any savings we might gain by notrendering the objects. Instead, culling is usually done by approximating theobject with a convex bounding volume, such as a sphere, that contains all ofthe vertices for the read more..


[Figure 6.24: Clipping edge to plane.]

Expanding across x, y, and z, we get

0 = a(Px + t·vx) + b(Py + t·vy) + c(Pz + t·vz) + d
  = aPx + t·a·vx + bPy + t·b·vy + cPz + t·c·vz + d
  = aPx + bPy + cPz + d + t(a·vx + b·vy + c·vz)

t = −(aPx + bPy + cPz + d)/(a·vx + b·vy + c·vz)

And now, substituting in Q − P for v:

t = (aPx + bPy + cPz + d) / [(aPx + bPy + cPz + d) − (aQx + bQy + cQz + d)]

We can use Blinn's notation [7], slightly modified, to simplify this to

t = BCP / (BCP − BCQ)

where BCP is the result from the plane equation (the boundary condition) applied to P, and BCQ is the same applied to Q.
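The derivation above can be sketched directly in code. The minimal Vec3 type and function names here are illustrative stand-ins, not the book's IvVector3/IvPlane interfaces:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Boundary condition: the plane equation ax + by + cz + d evaluated at p.
float BC(const Vec3& p, float a, float b, float c, float d)
{
    return a * p.x + b * p.y + c * p.z + d;
}

// Intersection of segment P + t(Q - P) with the plane, using
// t = BCP / (BCP - BCQ). Assumes the segment actually crosses the plane.
Vec3 ClipPoint(const Vec3& P, const Vec3& Q,
               float a, float b, float c, float d)
{
    float bcp = BC(P, a, b, c, d);
    float bcq = BC(Q, a, b, c, d);
    float t = bcp / (bcp - bcq);
    Vec3 r = { P.x + t * (Q.x - P.x),
               P.y + t * (Q.y - P.y),
               P.z + t * (Q.z - P.z) };
    return r;
}
```

For the plane x = 0 and the segment from (−1, 0, 0) to (1, 2, 0), t works out to 0.5 and the clip point is (0, 1, 0).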


[Figure 6.25: Four possible cases of clipping an edge against a plane.]

To clip a polygon to a plane, we need to clip each edge in turn. A standard method for doing this is to use the Sutherland-Hodgman algorithm [109]. We first test each edge against the plane. Depending on what the result is, we output particular vertices for the clipped polygon. There are four possible cases for an edge from P to Q (Figure 6.25). If both are inside, we output P. If both are outside, we output nothing. If the edge crosses from inside to outside, we compute the clip point R and output both P and R. If the edge crosses from outside to inside, we output only R; the vertex Q is handled as the start of the following edge.


    void ClipVertex( const IvVector3& end );
    inline void StartClip()  { mFirstVertex = true; }
    inline void SetPlane( const IvPlane& plane ) { mPlane = plane; }

private:
    IvPlane   mPlane;        // current clipping plane
    IvVector3 mStart;        // current edge start vertex
    float     mBCStart;      // current edge start boundary condition
    bool      mStartInside;  // whether current start vertex is inside
    bool      mFirstVertex;  // whether expected vertex is start vertex
};

Note that the clipper carries state from one vertex to the next: the current plane, the previous vertex, and that vertex's boundary condition and inside/outside status.


                Output( end - t*(end - mStart) );
            }
            else
            {
                float t = mBCStart/(mBCStart - BCend);
                Output( mStart + t*(end - mStart) );
            }
        }
    }

    mStart = end;
    mBCStart = BCend;
    mStartInside = endInside;
    mFirstVertex = false;
}

Note that we generate t in the same direction for both clipping cases — from inside to outside. Polygons will often share edges. If we were to clip the same edge for two neighboring polygons in different directions, we may end up with two slightly different points due to floating-point precision.


    244 Chapter 6 Viewing and ProjectionclipTexture = startTexture + t*(endTexture - startTexture);// Output new clip vertex}This is only one example of a clipping algorithm. In most cases, it won’t benecessary to write any code to do clipping. The hardware will handle any clip-ping that needs to be done for rendering. However, for those who have the needor interest, other examples of clipping algorithms are the Liang-Barsky [68],Cohen-Sutherland (found in Foley et al. [38] as well as other read more..


Instead of clipping our points against general planes in the world frame or view frame, we can clip our points against these simplified planes in RP3 space. For example, the plane test for w = x is w − x. The full set of plane tests for a point P are

BCP−x = w + x
BCPx  = w − x
BCP−y = w + y
BCPy  = w − y
BCP−z = w + z
BCPz  = w − z

The previous clipping algorithm can be used, with these plane tests replacing the IvPlane::Test() call. While these tests are cheaper to compute in clip space than general plane tests, there are some subtleties to be aware of, as we'll see next.
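A minimal sketch of evaluating these six boundary conditions for a homogeneous clip-space point (the Vec4 type and function name are our own, and we adopt the convention that BC > 0 means inside):

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };

// Boundary conditions for the six frustum planes in RP3, in the order
// -x, +x, -y, +y, -z, +z. A positive value means the point is inside
// that plane.
void FrustumBCs(const Vec4& p, float bc[6])
{
    bc[0] = p.w + p.x;   // plane x = -w
    bc[1] = p.w - p.x;   // plane x = +w
    bc[2] = p.w + p.y;   // plane y = -w
    bc[3] = p.w - p.y;   // plane y = +w
    bc[4] = p.w + p.z;   // plane z = -w
    bc[5] = p.w - p.z;   // plane z = +w
}
```

A point is inside the frustum when all six values are positive; a negative value flags the plane that clips it.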


    246 Chapter 6 Viewing and Projectionour plane tests will clip to the upper triangle region of that hourglassshape — any points that lie in the lower region will be inadvertently removed.With the projections that we have defined, this will happen only if we use anegative value for the wvalue of our points. And since we’ve chosen 1as thestandard wvalue for points, this shouldn’t happen. However, if you do havepoints that for some reason have negative wvalues, Blinn [8] recommends read more..


6.5 Screen Transformation

[Figure 6.28: Mapping NDC space to screen space — the NDC corner (1, 1) maps to the screen corner (ws, hs).]

What we'll need to do is map our NDC area to our screen area (Figure 6.28). This consists of scaling it to the same size as the screen, flipping our y direction, and then translating it so that the upper left corner becomes the origin.

Let's begin by considering only the y direction, because it has the special case of the axis flip. The first step is scaling it. The NDC window is two units high, whereas the screen space window is hs units high.


This assumes that we want to cover the entire screen with our view window. In some cases, for example in a split-screen console game, we want to cover only a portion of the screen. Again, we'll have a width and height of our screen space area, ws and hs, but now we'll have a different upper left corner position for our area: (sx, sy). The first part of the process is the same; we scale the NDC window to our screen space window and flip the y-axis. Then we translate the window so that its upper left corner lies at (sx, sy).
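The scale-flip-translate sequence can be sketched as a small helper; the function name and parameter order here are our own, not the book's Iv API:

```cpp
#include <cassert>
#include <cmath>

// Map an NDC point to pixel coordinates for a viewport with upper-left
// corner (sx, sy), width ws, and height hs. The y-axis flips because
// screen y grows downward while NDC y grows upward.
void NDCToScreen(float xndc, float yndc,
                 float sx, float sy, float ws, float hs,
                 float& xs, float& ys)
{
    xs = 0.5f * ws * (xndc + 1.0f) + sx;  // scale [-1,1] to [0,ws], shift
    ys = 0.5f * hs * (1.0f - yndc) + sy;  // flip y, scale, shift
}
```

For a full-screen 640 × 480 viewport with corner (0, 0), the NDC origin maps to the pixel center (320, 240), and NDC (−1, 1) maps to the upper left corner (0, 0).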


    6.6 Picking249example, NTSC televisions have 448 scan lines, with 640 analog pixels perscan line, so it is common practice to render to a 640× 448area and then sendthat to the NTSC converter to be displayed. Using the offscreen buffer sizewould give an aspect ratio of 10:7. But the actual television screen has a 4:3aspect ratio, so the resulting image will be distorted, producing stretching inthe ydirection. The solution is to set a= 4/3despite the aspect ratio of theoffscreen buffer. The image read more..


Instead, we begin by transforming our screen space point (xs, ys) to an NDC space point (xndc, yndc). Since our NDC to screen space transform is affine, this is easy enough: We need only invert our previous equations 6.5 and 6.6. That gives us

xndc = 2(xs − sx)/ws − 1
yndc = −2(ys − sy)/hs + 1

Now the tricky part. We need to transform our point in the NDC frame to the view frame. We'll begin by computing our zv value. Looking at Figure 6.29 again, this is straightforward: the pick point lies on the projection plane, so zv = −d.
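This inversion can be sketched as follows. The function name is our own, and we assume (per the surrounding derivation) that the pick point is placed on the projection plane z = −d, where the view window has half-width a and half-height 1:

```cpp
#include <cassert>
#include <cmath>

// Invert the screen transform to recover NDC, then place the pick point
// on the projection plane z = -d in the view frame.
void ScreenToViewPoint(float xs, float ys,
                       float sx, float sy, float ws, float hs,
                       float a, float d,
                       float& xv, float& yv, float& zv)
{
    float xndc = 2.0f * (xs - sx) / ws - 1.0f;
    float yndc = -2.0f * (ys - sy) / hs + 1.0f;
    xv = a * xndc;   // view window spans [-a, a] in x at z = -d
    yv = yndc;       // and [-1, 1] in y
    zv = -d;
}
```

The view-frame pick ray then runs from the origin (the eye) through (xv, yv, zv).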


Since this is a system of linear equations, we can express it as a 3 × 3 matrix:

| xv |   | 2a/ws     0     −(2a/ws)sx − a | | xs |
| yv | = |   0    −2/hs     (2/hs)sy + 1  | | ys |
| zv |   |   0       0          −d        | | 1  |

From here we have a choice. We can try to detect intersection with an object in the view frame, we can detect in the world frame, or we can detect in the object's local frame. The first involves transforming every object into the view frame and then testing against our pick ray. The second and third involve transforming the pick ray into the world frame or the object's local frame, respectively.


    252 Chapter 6 Viewing and Projection6.7 Management of ViewingTransformationsSource CodeLibraryIvEngineFilenameIvGLHelpUp to this point we have presented a set of transformations and correspondingmatrices without giving some sense of how they would fit into a game engine.While the thrust of this book is not about writing renderers, we can still pro-vide a general sense of how some renderers and application programminginterfaces (APIs) manage these matrices, and how to set transformations fora read more..


    6.7 Management of Viewing Transformations253// set in OpenGLglMatrixMode(GL_PROJECTION);glLoadMatrix( projection );glMatrixMode(GL_MODELVIEW);glLoadMatrix( viewTransform );And when we render an object, concatenating the world matrix can bedone by the following code.glMatrixMode(GL_MODELVIEW);// push copy of view matrix to top of stackglPushMatrix();// multiply by world matrixglMultMatrix( worldTransform );// render...// pop to view matrixglPopMatrix();The push/pop calls provide a means for read more..


    254 Chapter 6 Viewing and Projectionusing the call glViewport(). For the zdirection, OpenGL provides a functionglDepthRange(), which maps[−1, 1] to[near, far], where the defaults for nearand farare 0 and 1, respectively. Similar methods are available for other APIs.In our case, we have decided not to overly complicate things and areproviding simple convenience routines in the IvRenderer class:IvSetWorldMatrix()IvSetViewMatrix()IvSetProjectionMatrix()IvSetViewport()that act as wrappers for the read more..


    Chapter7Geometry andProgrammableShading7.1 IntroductionHaving discussed in detail in the preceding chapters how to represent,transform, view, and animate geometry, the next three chapters form asequence that describes the second half of the rendering pipeline. The secondhalf of the rendering pipeline is specifically focused on visual matters: therepresentation, computation, and usage of color.This chapter will discuss how we connect the points we have been trans-forming and projecting to form read more..


    256 Chapter 7 Geometry and Programmable Shadingthat arise within them can be explored. By its nature, this chapter focuseson the framework itself, the rendering pipeline, and its two most interestingcomponents, the programmable vertex and fragment shader units.We will also introduce some of the simpler methods of using this pro-grammable pipeline to render colored geometry by introducing the basicsof a common high-level shading language, OpenGL’s GLSL. Common inputsand outputs to and from the read more..


    7.2 Color Representation257the code examples in this and the following rendering chapters will describethe book’s Ivrendering APIs, supplied as full source code on the book’saccompanying CD-ROM. Interested readers may look at the implementationsof the referenced Ivfunctions to see how each operation can be written inOpenGL or Direct3D.7.2 Color Representation7.2.1 RGB Color ModelTo represent color, we will use the additive RGB (red, green, blue) color modelthat is almost universal in read more..


    258 Chapter 7 Geometry and Programmable ShadingOur colors will be represented by 3-vectors, with the following basisvectors:(1, 0, 0)→ red(0, 1, 0)→ green(0, 0, 1)→ blueOften, as a form of shorthand, we will refer to the red component of a color cas cr and to the green and blue components as cg and cb, respectively.7.2.3 Color Range LimitationThe theoretical RGB color space is semi-infinite in all three axes. There is anabsolute zero value for each component, bounding the negative read more..


    7.2 Color Representation259In the rest of this chapter and the following chapter we will work in thesenormalized color coordinates. This space defines an RGB “color cube,” withblack at the origin, white at (1, 1, 1), gray levels down the main diagonalbetween them (a,a,a), and the other six corners representing pure, maximalred (1, 0, 0), green (0, 1, 0), blue (0, 0, 1), cyan (0, 1, 1), magenta (1, 0, 1), andyellow (1, 1, 0).The following sections will describe some of the vector operations read more..


Or basically, the dot product of the color with a "luminance reference color." The three color-space transformation coefficients used to scale the color components are essentially constant for modern, standard CRT screens but do not necessarily apply to television screens, which use a different set of luminance conversions. Discussion of these may be found in Poynton [94]. Note that luminance is not equivalent to perceived brightness; the eye's response to increasing luminance is nonlinear.
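A sketch of this dot product in code, using the ITU-R BT.709 coefficients as the "luminance reference color" — one commonly used set; as the text notes, the exact constants depend on the display standard:

```cpp
#include <cassert>
#include <cmath>

// Luminance as the dot product of an RGB color with a luminance
// reference color. BT.709 coefficients are assumed here; other
// standards (e.g., BT.601 for older television) use different values.
float Luminance(float r, float g, float b)
{
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}
```

The coefficients sum to 1.0, so white (1, 1, 1) has luminance 1.0, and a pure green contributes far more luminance than an equally bright pure blue.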


    7.2 Color Representation261rather defines how the combined color interacts with other colors. The mostfrequent use of the alpha component is an opacity value, which defines howmuch of the surface’s color is controlled by the surface itself and how muchis controlled by the colors of objects that are behind the given surface. Whenalpha is at its maximum (we will define this as 1.0), then the color of thesurface is independent of any objects behind it. The red, green, and blue com-ponents of read more..


less colorful. While this might seem unsatisfactory, it actually can be beneficial in some forms of simulated lighting, as it tends to make overly bright objects appear to "wash out," an effect that can perceptually appear rather natural under the right circumstances.

Another, more computationally expensive method is to rescale all three color components of any color with a component greater than 1.0 such that the maximal component is 1.0. This preserves the ratios between the components — and thus the hue — at the cost of a per-color scale.
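The rescaling method can be sketched as follows (the function name is our own):

```cpp
#include <cassert>
#include <cmath>

// Rescale an out-of-range color so that its maximal component becomes
// 1.0, preserving the ratios between components. Colors already within
// range are left untouched.
void RescaleColor(float& r, float& g, float& b)
{
    float maxC = r;
    if (g > maxC) maxC = g;
    if (b > maxC) maxC = b;
    if (maxC > 1.0f)
    {
        float inv = 1.0f / maxC;
        r *= inv;
        g *= inv;
        b *= inv;
    }
}
```

Compare this with simple clamping: clamping (2.0, 1.0, 0.5) gives (1.0, 1.0, 0.5), shifting the hue toward yellow, while rescaling gives (1.0, 0.5, 0.25), a dimmer color of the same hue.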


    7.2 Color Representation263Figure7.1 Color-range limitation methods: (a) image colors clamped, and(b) image colors rescaled.range and brighten the buildings to be less in shadow. While we are applyingdifferent scalings to different parts of the image (darkening to the sky andbrightening to the buildings), the relative brightnesses within the buildings’region of the image are kept intact, and the relative brightnesses within thesky’s regions of the image are kept intact. Thus, the sky and the read more..


    264 Chapter 7 Geometry and Programmable ShadingFigure7.2 A tonemapped image.7.2.6 Color Storage FormatsA wide range of color storage formats are used by modern rendering systems,both floating point and fixed point (as well as one or two hybrid formats).Common RGBA color formats include:■Single-precision floating-point components (128 bits for RGBA color).■Half-precision floating-point components (64 bits for RGBA color).■16-bit unsigned integer components (64 bits for RGBA read more..


    7.2 Color Representation265In general, the floating-point formats are used as would be expected(in fact, on modern systems, the single-precision floating-point colors arenow IEEE 754 compliant, making them useful for noncolor computations aswell). However, the integer formats have a special mapping in most graphicssystems. An integer value of zero maps to zero, but the maximal value mapsto 1.0. Thus, the integer formats are slightly different than those seen in anyfixed-point format.While a read more..


    266 Chapter 7 Geometry and Programmable Shading224≈ 16.7million colors, are more than sufficient. While it is true that thenumber of different color “names” in a 24-bit system (where a color is “named”by its 24-bit RGB triple) is a greater number than the human visual system candiscern, this does not take into account the fact that the colors being generatedon current display devices do not map directly to the 1–7 million colors thatcan be discerned by the human visual system. read more..


    7.3 Points and Vertices267the surface in an infinitely small neighborhood of the vertex. If we assume thatthe surface passing through the vertex is locally planar (at least in an infinitelysmall neighborhood of the vertex), the surface normal is the normal vector tothis plane (recall the discussion of plane normal vectors from Chapter 2). Inmost cases, this vector is defined in the same space as the vertices, generallymodel (a.k.a. object) space. As will be seen later, the normal vector is read more..


    268 Chapter 7 Geometry and Programmable ShadingA smaller, simpler vertex with just position and normal might be asfollows:struct IvNPVertex{IvVector3 normal;IvVector3 position;};Along with the C or C++ representation of a vertex, an application must beable to communicate to the rendering API how the vertices are laid out. Eachrendering API uses its own system, but two different methods are common;the simpler (but less flexible) method is for the API to expose some fixed setof supported vertex read more..


    7.3 Points and Vertices269some of an object’s vertex attributes are computed on the host CPU, it maymake sense to keep them in their own array, while leaving the constant vertexattributes in another fully interleaved vertex array. This allows the dynamicdata to be modified without touching or retransferring the static data to devicememory. We will assume an interleaved vertex format for the remainder ofthe rendering discussions.Vertex BuffersProgrammable shaders and graphics rendering read more..


    270 Chapter 7 Geometry and Programmable Shading// Lock the vertex buffer and cast to the correct// vertex formatIvCPVertex* verts= (IvCPVertex*)buffer->BeginLoadData();// Loop over all 1024 vertices in verts and// fill in the data...// ...// Unlock the buffer, so it can be usedbuffer->EndLoadData();The vertex buffer is now filled with data and ready to be used to render.7.4Surface RepresentationIn this section we will discuss another important concept used to representand render objects read more..


    7.4 Surface Representation271A cloud of points that is infinitely dense on the desired surface canrepresent that surface. Obviously, such a directly stored collection of unstruc-tured points would be far too large to render in real time (or even store) ona computer. We need a method of representing an infinitely dense surface ofpoints that requires only a finite amount of representational data.There are numerous methods of representing surfaces, depending on theintended use. Our requirements read more..


[Figure 7.3: A hexagonal configuration of triangles: (a) configuration, (b) seven shared vertices, and (c) index list for shared vertices — (0,1,2), (0,2,3), (0,3,4), (0,4,5), (0,5,6), (0,6,1).]

for rendering that joins sets of three vertices by spanning them with a triangle. As an example, Figure 7.3(a) depicts a fan-shaped arrangement of six triangles (defining a hexagon) that meet in a single point. The vertex array for this geometry is an array of the seven shared vertices: the center point plus the six outer points.
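The index list in Figure 7.3(c) can be generated in a short loop; the function name is our own, and it assumes vertex 0 is the shared center with outer vertices numbered 1 through 6:

```cpp
#include <cassert>

// Generate the indexed triangle list for a hexagonal fan around a
// central vertex 0 with outer vertices 1..6, matching Figure 7.3(c):
// (0,1,2), (0,2,3), (0,3,4), (0,4,5), (0,5,6), (0,6,1).
void HexFanIndices(unsigned int indices[18])
{
    for (unsigned int tri = 0; tri < 6; ++tri)
    {
        indices[3 * tri]     = 0;                  // shared center vertex
        indices[3 * tri + 1] = tri + 1;            // current outer vertex
        indices[3 * tri + 2] = (tri + 1) % 6 + 1;  // next outer vertex, wrapping
    }
}
```

The modulo wrap makes the final triangle close the fan with (0, 6, 1), so all six triangles reference just seven shared vertices instead of eighteen unshared ones.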


    7.4 Surface Representation27332-bit index array can represent a surface with more than 4 billion vertices(essentially unlimited).Most rendering APIs support a wide range of indexed geometry. Indexedtriangle lists, such as the ones we’ve just introduced, are simple to understandbut are not as optimal as other representations. The most popular of thesemore optimal representations are triangle strips,or tristrips. In a trianglestrip, the first three vertex indices represent a triangle, just as read more..


    274 Chapter 7 Geometry and Programmable ShadingIndexed rendering is not the only way to render triangle lists, strips, etc.The other common method is nonindexed geometry, and is equivalent todereferencing the index list into an array of vertex structures. In other words, anonindexed triangle list with Ttriangles would use no index list, but would usea vertex array with 3Tvertices. Any vertices that were shared in the indexedcase must be duplicated in the nonindexed case. This is generally read more..


    7.5 Rendering Pipeline275at least the array of vertices, array of indices, type of primitive (list, strip,etc.), and rendering state defining the appearance of the object. Some APIsmay also require the application to specify the location of each component(normal, position, etc.) within the vertex structure. The Ivrendering enginesets up the geometry and connectivity, and renders in a single call, as follows:IvRenderer& renderer;IvVertexBuffer* vertexBuffer;IvIndexBuffer* indexBuffer;// read more..


    276 Chapter 7 Geometry and Programmable ShadingIndex and Vertex ArraysPrimitive ProcessingPer-Vertex OperationsTriangle AssemblyTriangle ClippingViewport TransformFragment GenerationFragment ProcessingOutput ProcessingRendered Image ColorsBlending InformationFragment Uniform ValuesViewportView FrustumIndex ArrayShaded FragmentsUnshaded FragmentsScreen-space Triangles (Shaded Vertex Triples)Clipped Triangles (Shaded Vertex Triples)Triangles (Shaded Vertex Triples)Transformed and Shaded read more..


    7.5 Rendering Pipeline277The rendering section of this book covers all of these steps in vari-ous levels of detail. In this chapter we have already discussed the basics ofindexed triangle primitives (primitive processing and triangle assembly). InChapter 6 we discussed projection of vertices (per-vertex operations), clippingand culling (triangle clipping), and transformation into screen space (view-port transform). In this chapter we will provide an overview of other per-vertexoperations and read more..


    278 Chapter 7 Geometry and Programmable Shadingprogrammer to assign inputs, outputs, and temporaries to a limited set ofavailable registers. These were difficult to program and often included confus-ing limitations. However, as the 3D rendering hardware became more capable,the register sets and instructions became more powerful and general. This ledto the true real-time shading revolution.Hardware vendors and graphics API vendors began to design and stan-dardize high-level shading languages. read more..


    7.6 Shaders279a much more precise definition of a fragment, but for now, the basic conceptis that it is a sample somewhere on the surface of the object, generally at apoint in the interior of one of the triangles, not coincident with any singlevertex defining the surface.The “one in, one out” nature of both types of shader is an inherent limita-tion that is simplifying yet at times frustrating. A vertex shader has access tothe attributes of the current vertex only. It has no knowledge of read more..


    280 Chapter 7 Geometry and Programmable Shadingshader input and will garner its own section later in this chapter and inthe chapters to come. While some modern graphics systems and APIs allowsamplers as inputs to both vertex and fragment shaders, this is not universal,and for the purposes of this book, we will discuss them as inputs to fragmentshaders, where they are universally supported.7.6.3 Shader Operations and LanguageConstructsThe set of shader operations in modern shading languages is read more..


    7.7 Vertex Shaders281Note that in moving to a completely shader-based pipeline, OpenGLES 2.0’s GLSL-E shading language has far fewer standard, predefined vertexshader attributes and vertex/fragment uniforms than are available in theotherwise similar desktop OpenGL GLSL shading language. For example,since there is no concept of a model view matrix in OpenGL ES 2.0, thereis no corresponding standard uniform. Instead, applications must pass anyneeded matrices via custom uniforms. Desktop read more..


    282 Chapter 7 Geometry and Programmable Shading7.7.3 Basic Vertex ShadersThe simplest vertex shader simply transforms the incoming model-spacevertex by the model view and projection matrix, and places the result in therequired output register, as follows:// GLSLvoid main(){gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;}This shader uses nothing but built-in vertex attributes, uniforms, andvarying variables, and thus requires no declarations at all. It transformsa floating-point 4-vector read more..


    7.8 Fragment Shaders283cases using conditionals in the shader code. This can lead to large, complexshaders with a lot of conditionals whose results will differ only at the per-object level, a potentially wasteful option. Other applications generate shadersource code in the application itself, as needed, compiling their shaders atruntime. This can be problematic as well, as the shader compilation takessignificant CPU cycles and can stall the application visibly. Finally, someapplications use a read more..


    284 Chapter 7 Geometry and Programmable Shading7.8.2 Fragment Shader OutputsThe basic goal of the fragment shader is to compute the color of the currentfragment. The entire pipeline, in essence, comes down to this single outputvalue per fragment. The fragment shader cannot change the other values ofthe fragment, such as the position of the fragment, which remains lockedin screen space. However, some shading systems do allow for a fragment tocancel itself, causing that fragment to go no further read more..


    7.8 Fragment Shaders285drivers include a GLSL compiler. Direct3D ships a runtime compiler as anindependent library. OpenGL ES does not require that a platform provide aruntime compiler. However, we will assume the availability of a runtime com-piler in our Ivcode examples. In either case, the source vertex and fragmentshaders must be compiled into compiled shader objects. If there are syntaxerrors in the source files, the compilation will fail.A pair of compiled shaders (a vertex shader and a read more..


    286 Chapter 7 Geometry and Programmable Shading7.8.4 Setting Uniform ValuesAs mentioned previously, uniform shader parameters form the mostimmediate application-to-shader communication. These values provide the“global” variables required inside of a shader and can be set on a per-objectbasis. Since they cannot be set during the course of a draw call, there is noway to change uniforms at a finer grain than the per-object level. Only per-vertex attributes (in the vertex shader) and varyings read more..


IvUniformType uniformType = uniform->GetType();
unsigned int uniformCount = uniform->GetCount();

// We're expecting an array of two float vector-4's
if ((uniformType == kFloat4Uniform)
    && (uniformCount == 2))
{
    // Set the vectors to the Z and X axes
    uniform->SetValue(IvVector4(0, 0, 1, 0), 0);
    uniform->SetValue(IvVector4(1, 0, 0, 0), 1);
}

These interfaces make it possible to pass a wide range of data items down from the application code to a shader. We

7.9.1 Per-Object Colors

(Source Code Demo: UniformColors)

The simplest form of useful coloring is to assign a single color per object. Constant coloring of an entire object is of very limited use, since the entire object will appear to be flat, with no color variation. At best, only the filled outline of the object will be visible against the backdrop. As a result, except in some special cases, per-object color is rarely used as the final shading

vertex colors may be assigned explicitly by the application, or generated on the fly via per-vertex lighting or another vertex shader. This linear interpolation is both simple and smooth and can be expressed as a mapping of barycentric coordinates (s, t) as follows:

    Color(O, T, (s, t)) = sCV1 + tCV2 + (1 − s − t)CV3

Examining the terms of the equation, it can be seen that Gouraud shading is simply an affine transformation from barycentric coordinates (as homogeneous
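The barycentric interpolation above is easy to sketch outside of a shader. The following C++ fragment is a minimal illustration; the `Color` struct and function name are illustrative stand-ins, not the book's Iv types:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for an RGB vertex color (not the book's Iv types).
struct Color { float r, g, b; };

// Gouraud interpolation: Color(O, T, (s,t)) = s*C1 + t*C2 + (1 - s - t)*C3
Color InterpolateColor(const Color& c1, const Color& c2, const Color& c3,
                       float s, float t)
{
    float w = 1.0f - s - t; // barycentric weight of the third vertex
    return Color{ s * c1.r + t * c2.r + w * c3.r,
                  s * c1.g + t * c2.g + w * c3.g,
                  s * c1.b + t * c2.b + w * c3.b };
}
```

At a vertex (e.g., s = 1, t = 0) the function returns that vertex's color exactly, which is what makes the interpolation an affine map of the triangle's vertex colors.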

// GLSL
void main() // vertex shader
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_BackColor = gl_Color;
}

// GLSL
void main() // fragment shader
{
    gl_FragColor = gl_Color;
}

7.9.3 Per-Triangle Colors

Rounding out the "primitive-level" coloring methods is per-triangle coloring. This method simply assigns a color to each triangle. This is also known as faceted, or flat, shading, because the resulting geometry appears planar on

see this mixture of sloping surfaces (a rounded fender) and hard creases (the sharp edge of a wheel well). Such an object cannot be drawn using per-triangle colors, as per-triangle colors will correctly represent the sharp edges, but will not be able to represent the smooth sections. In these kinds of objects, some sharp geometric edges in the tessellation really do represent the original surface accurately, while other sharp edges are designed to be interpolated

interpolate colors generated by dynamic lighting. For a detailed discussion of dynamic lighting, see Chapter 8.

Limitations of Basic Shading Methods

Real-world surfaces often have detail at many scales. The shading/coloring methods described so far require that the fragment shader compute a final color based solely on sources assigned at tessellation-level features, either per-triangle or per-vertex. While this works well for surfaces whose

detail are the shape of the object and which are simply features of the image applied to the surface. A real-world physical analogy to this is theatrical set construction. Often, details in the set will be painted on planar pieces of canvas, stretched over a wooden frame (i.e., "flats"), rather than built out of actual, 3D wood, brick, or the like. With the right lighting and positioning, these painted flats can appear as very convincing replicas of their real, 3D

The process of texturing involves defining three basic mappings:

1. To map all points on a surface (smoothly in most neighborhoods) into a 2D (or in some cases, 1D or 3D) domain.

2. To map points in this (possibly unbounded) domain into a unit square (or unit interval, cube, etc.).

3. To map points in this unit square to color values.

The first stage will be done using a modification of the method we used for colors with Gouraud shading, an affine

Figure 7.8 Texel-space coordinates in an image.

Put together, these define the image data and their basic interpretation in the same way that an array of vertices, the vertex format information, and the vertex count define vertex geometry to the rendering pipeline. As with vertex arrays, the array of texel data can be quite sizable. In fact, texture image data are one of the single-largest

IvResourceManager* manager;
// ...
{
    const unsigned int width = 256;
    const unsigned int height = 512;
    IvTexture* texture = manager->CreateTexture(kRGBA32TexFmt,
        width, height);
    // ...

The preceding code creates a texture object with a 32-bit-per-texel RGBA texture image that has a width of 256 texels and a height of 512 texels. Note that while this function allocates the texture, it does not fill it with image data. In order to fill the texture with

7.10.4 Texture Samplers

Textures appear in the shading language in the form of a texture sampler object. Texture samplers are passed to a fragment shader as a uniform value (which is a handle that represents the sampler). The same sampler can be used multiple times in the same shader, passing different texture coordinates to each lookup. So, a shader can sample a texture at multiple locations when computing a single fragment. This is an extremely powerful technique that is

Figure 7.9 Mapping U and V coordinates into an image.

2D real-valued coordinates are mapped in the same way as texel coordinates, except for the fact that U and V are normalized, covering the entire texture with the 0-to-1 interval. Figure 7.9 depicts the common mapping of UV coordinates into a texture. These normalized UV coordinates have the advantage that they are completely independent of the height and width of the

barycentric coordinates of a point in a triangle, the texture coordinates may be computed as

    u = (uV1 − uV3)s + (uV2 − uV3)t + uV3
    v = (vV1 − vV3)s + (vV2 − vV3)t + vV3

that is, an affine transformation of the homogeneous barycentric coordinates (s, t, 1). Although there is a wide range of methods used to map textures onto triangles (i.e., to assign texture coordinates to the vertices), a common goal is to avoid "distorting" the texture. In order to discuss texture distortion, we need to define the U and V basis vectors in UV space. If we think of the
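A minimal C++ sketch of this affine mapping from barycentric coordinates to texture coordinates; all names here are illustrative, not the book's Iv types:

```cpp
#include <cassert>
#include <cmath>

// Affine map from barycentric (s,t) to texture coordinates (u,v):
//   u = (uV1 - uV3)s + (uV2 - uV3)t + uV3   (and likewise for v)
// The per-vertex UVs are passed as plain floats for simplicity.
void TexCoordFromBarycentric(float uV1, float vV1,
                             float uV2, float vV2,
                             float uV3, float vV3,
                             float s, float t,
                             float& u, float& v)
{
    u = (uV1 - uV3) * s + (uV2 - uV3) * t + uV3;
    v = (vV1 - vV3) * s + (vV2 - vV3) * t + vV3;
}
```

As with Gouraud color interpolation, evaluating at a vertex (s = 1, t = 0) returns that vertex's own texture coordinates.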

Figure 7.10 Examples of "skewed" texture coordinates (original texture; non-perpendicular axes; non-uniform scale).

triangle give some measure of how closely the texturing across the triangle will reflect the original planar form of the texture image.

7.11.2 Generating Texture Coordinates

Texture coordinates are often generated for an object by some form of projection of the object-space vertex positions in R3 into the per-vertex

7.11.3 Texture Coordinate Discontinuities

As was the case with per-vertex colors, there are situations that require shared, collocated vertices to be duplicated in order to allow the vertices to have different texture coordinates. These situations are less common than in the case of per-vertex colors, due to the indirection that texturing allows. Pieces of geometry with smoothly mapped texture coordinates can still allow color discontinuities on a per-sample level by

Figure 7.12 Shared vertices can cause texture coordinate problems. Front side: appears to be correctly mapped. Back side: incorrect, due to shared vertices along the label "seam."

The problem is between the eighth vertex and the first vertex. The first vertex was originally assigned a U value of 0.0, but at the end of our circuit around the can, we would also like to assign it a texture coordinate of 1.0, which is not possible for a single vertex. If

Figure 7.13 Duplicated vertices used to solve texturing issues. Front side: correct, unchanged from previous mapping. Back side: correct, due to doubled vertices along the label "seam."

The most common method of mapping unbounded texture coordinates into the texture is known as texture wrapping, texture repeating, or texture tiling. The wrapping of a component u of a texture coordinate is defined as

    wrap(u) = u − ⌊u⌋

The result of this mapping is that multiple "copies" of the
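The wrap function can be sketched directly in C++; note that `std::floor` makes the formula behave correctly for negative coordinates as well (the helper name is an assumption, not the book's API):

```cpp
#include <cassert>
#include <cmath>

// Texture wrapping: wrap(u) = u - floor(u), mapping any real texture
// coordinate into [0, 1). Works for negative coordinates too, since
// floor(-0.25) == -1, giving wrap(-0.25) == 0.75.
float WrapCoord(float u)
{
    return u - std::floor(u);
}
```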

Figure 7.14 An example of texture wrapping.

Wrapping creates a toroidal mapping of the texture, as tiling matches the bottom edge of the texture with the top edge of the neighboring copy (and vice versa), and the left edge of the texture with the right edge of the neighboring copy (and vice versa). This is equivalent to rolling the texture into a tube (matching the top and bottom edges), and then bringing

Figure 7.15 Computing texture wrapping: per-pixel wrapping (correct) versus per-vertex wrapping (incorrect) of the original UVs.

outside of the unit square. An example of the same square we've discussed, but with texture clamping instead of wrapping, is shown in Figure 7.17. Note that clamping the vertex texture coordinates is very different from texture clamping. An example of the difference

Figure 7.16 Toroidal matching of texture edges when wrapping.

Figure 7.17 An example of texture clamping.

Figure 7.18 Computing texture clamping: per-pixel clamping (correct) versus per-vertex clamping (incorrect) of the original UVs.

Clamping is useful when the texture image consists of a section of detail on a solid-colored background. Rather than wasting large expanses of texels and placing a small copy of the detailed section in the center of the texture, the detail can be spread over the entire
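A minimal C++ sketch of the per-coordinate clamp operation, pinning each component to the [0, 1] unit interval so that edge texels are repeated outside the unit square (the helper name is illustrative):

```cpp
#include <cassert>

// Texture clamping: pin a texture coordinate component to [0, 1].
// Coordinates below 0 sample the first texel; coordinates above 1
// sample the last texel along that axis.
float ClampCoord(float u)
{
    if (u < 0.0f) return 0.0f;
    if (u > 1.0f) return 1.0f;
    return u;
}
```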

Figure 7.19 Mixing clamping and wrapping in a useful manner (U clamping, V wrapping).

clamping the U dimension of the texture (to allow the lines to stay in the middle of the road with black expanses on either side) and wrapping in the V dimension (to allow the road to repeat off into the distance). Most rendering APIs (including the book's Iv interfaces) support both clamping and wrapping

7.11.5 Texture Samplers in Shader Code

Using a texture sampler in shader code is quite simple. As mentioned in Section 7.10.4, a fragment shader simply uses a declared texture sampler as an argument to a lookup function. The following shader code declares a texture sampler and uses it along with a set of texture coordinates to determine the fragment color:

// GLSL
varying vec2 texCoords;

void main() // vertex shader
{
    // Grab the first set of texture coordinates
    // and pass

the final sample colors. This is at once the very power of the method and its most confusing aspect. This indirection means that the colors applied to a triangle by texturing can approximate an extremely complex function, far more complex and detailed than the planar function implied by Gouraud shading. However, it also means that there are far more stages in the method whereupon things can go awry. This section aims to pull together all of the

Figure 7.20 Texel coordinates and texel centers.

The shift of 1/2 may seem odd, but Figure 7.20 shows why this is necessary. Texel coordinates are relative to the texel centers. A texture coordinate of zero is on the boundary between two repetitions of a texture. Since the texel centers are at the middle of a texel, a texture coordinate that falls on an integer value is really halfway between the

2. Using the texture coordinate mapping mode (either clamping or wrapping), map the U and V values into the unit square:

    (uunit, vunit) = (wrap(u), wrap(v))

or,

    (uunit, vunit) = (clamp(u), clamp(v))

3. Using the width and height of the texture image in texels, map the U and V values into integral texel coordinates via simple scaling:

    (utexel, vtexel) = (⌊uunit × width⌋, ⌊vunit × height⌋)

4. Using the texture image, map the texel coordinates into colors using image lookup:

    CT = Image(utexel, vtexel)
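The steps above can be combined into a single nearest-texel lookup sketch in C++. The flat row-major texel array and all names here are illustrative assumptions, not the book's texture API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Texel { unsigned char r, g, b, a; };

// Nearest-texel lookup following the steps above: wrap into the unit
// square, scale by the image dimensions and floor to integral texel
// coordinates, then index the image (row-major texel array).
Texel LookupWrapped(const std::vector<Texel>& image,
                    int width, int height, float u, float v)
{
    float uUnit = u - std::floor(u);              // step 2: wrap
    float vUnit = v - std::floor(v);
    int uTexel = static_cast<int>(uUnit * width); // step 3: scale + floor
    int vTexel = static_cast<int>(vUnit * height);
    if (uTexel >= width)  uTexel = width - 1;     // guard u == 1.0 edge
    if (vTexel >= height) vTexel = height - 1;
    return image[vTexel * width + uTexel];        // step 4: image lookup
}
```

With a 2×2 test image, a coordinate of (1.25, 0.6) wraps to (0.25, 0.6) and lands in texel column 0, row 1, matching the step-by-step description above.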

scene conditions will be unable to create truly convincing, dynamic worlds. Methods that can represent real-world lighting and the dynamic nature of moving objects are needed. Programmable shading is tailor-made for these kinds of applications. A very popular method of achieving these goals is to use a simple, fast approximation of real-world lighting written into vertex and fragment shaders. The next chapter will discuss in detail many aspects of how lighting can be approximated


Chapter 8 Lighting

8.1 Introduction

Much of the way we perceive the world visually is based on the way objects in the world react to the light around them. This is especially true when the lighting around us is changing or the lights or objects are moving. Given these facts, it is not surprising that one of the most common uses of programmable shading is to simulate the appearance of real-world lighting. The coloring methods we have discussed so far have used colors that are statically assigned at content

shaders to implement them, the lighting model we will discuss is based upon the long-standing OpenGL fixed-function lighting pipeline (introduced in OpenGL 1.x). At the end of the chapter we will introduce several more advanced lighting techniques that take advantage of the unique abilities of programmable shaders. We will refer to fixed-function lighting pipelines in many places in this chapter. Fixed-function lighting pipelines were the methods used in rendering application

8.2.1 Measuring Light

In order to understand the mathematics of lighting, even the simplified, nonphysical approximation used by most real-time 3D systems, it is helpful to know more about how light is actually measured. The simplest way to appreciate how we measure light is in terms of an idealized lightbulb and an idealized surface being lit by that bulb. To explain both the brightness and luminance (these are actually two different concepts; we will define

discuss the illuminance from a point light source. Illuminance in this case is only the light incident upon a surface, not the amount reflected from the surface. Light reflection from a surface depends on a lot of properties of the surface and the geometric configuration. We will cover approximations of reflection later in this chapter. However, the final step in our list of lighting measurements is to define how we measure the reflected light reaching the viewer from the

lighting pipeline was similar. Initially, we will speak in terms of lighting a "sample": a generic point in space that may represent a vertex in a tessellation or a fragment in a triangle. We will attempt to avoid the concepts of vertices and fragments during this initial discussion, preferring to refer to a general point on a surface, along with a local surface normal and a surface material. (As will be detailed later, a surface material contains all of the

value. These color terms are of the form LA, LD, and so on. They will be defined per light and per lighting component and will (in a sense) approximate a scale factor upon the overall luminous flux from the light source. The values ˆL and iL do not take any information about the surface orientation or material itself into account, only the relative positions of the light source and the sample point with respect to each other. Discussion of the contribution of surface

The value iL for a directional light is constant for all sample positions:

    iL = 1

Since both iL and the light vector ˆL are constant for a given light (and independent of the sample point PV), directional lights are the least computationally expensive type of light source. Neither ˆL nor iL needs to be recomputed for each sample. As a result, we will pass both of these values to the shader (vertex or fragment) as uniforms and use them directly. We define a

Figure 8.2 The basic geometry of a point light.

This is the normalized vector that is the difference from the sample position to the light source position. It is not constant per-sample, but rather forms a vector field that points toward PL from all points in space. This normalization operation is one factor that often makes point lights more computationally expensive than directional lights. While this is not a prohibitively expensive operation to compute once per

// per-vertex attribute or a per-sample varying value
lightSampleValues computePointLightValues(in vec4 surfacePosition)
{
    lightSampleValues values;
    values.L = normalize(pointLightPosition - surfacePosition).xyz;
    // we will add the computation of values.iL later
    return values;
}

Unlike a directional light, a point light has a nonconstant function defining iL. This nonconstant intensity function approximates a basic physical property of light known as the inverse-square law:

Figure 8.3 The inverse-square law.

illuminate an entire stadium. In both cases, the candle provides the same amount of luminous flux. However, the actual surface areas that must be illuminated in the two cases are vastly different due to distance. The inverse-square law results in a basic iL for a point light equal to

    iL = 1 / dist²

where

    dist = |PL − PV|

which is the distance between the light position and the sample position. While exact inverse-square law attenuation

Under such a system, the function iL for a point light is

    iL = 1 / (kc + kl·dist + kq·dist²)

The distance attenuation constants kc, kl, and kq are defined per light and determine the shape of that light's attenuation curve. Figure 8.4 is a visual

Figure 8.4 Distance attenuation (constant, linear, and quadratic).
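The attenuation function above is a one-liner in C++; this sketch uses illustrative parameter names matching the equation, not the book's Iv API:

```cpp
#include <cassert>
#include <cmath>

// Distance attenuation for a point light:
//   iL = 1 / (kc + kl*dist + kq*dist^2)
// kc, kl, kq are the per-light constant, linear, and quadratic terms.
float PointLightIntensity(float kc, float kl, float kq, float dist)
{
    return 1.0f / (kc + kl * dist + kq * dist * dist);
}
```

Setting kc = 1, kl = kq = 0 gives an unattenuated light; kc = kl = 0, kq = 1 recovers pure inverse-square falloff.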

example of constant, linear, and quadratic attenuation curves. The spheres in each row increase in distance linearly from left to right. Generally, dist should be computed in "eye" or camera coordinates (post–model view transform); this specification of the space used is important, as there may be scaling differences between model space, world space, and camera space, which would change the scale of the attenuation. Most importantly, model-space scaling differs per

8.4.3 Spotlights

(Source Code Demo: SpotLight)

A spotlight is like a point light source with the ability to limit its light to a cone-shaped region of the world. The behavior is similar to a theatrical spotlight with the ability to focus its light on a specific part of the scene. In addition to the position PL that defined a point light source, a spotlight is defined by a direction vector d, a scalar cone angle θ, and a scalar exponent s. These additional values define the

The light vector is equivalent to that of a point light source:

    ˆL = (PL − PV) / |PL − PV|

For a spotlight, iL is based on the point light function but adds an additional term to represent the focused, conical nature of the light emitted by a spotlight:

    iL = spot / (kc + kl·dist + kq·dist²)

where

    spot = (−ˆL · d)^s,  if (−ˆL · d) ≥ cos θ
    spot = 0,            otherwise

As can be seen, the spot term is 0 when the sample point is outside of the cone. The spot term makes use of the fact that the light vector and
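The cone term alone can be sketched in C++ as follows; the vector type and function names are illustrative assumptions, and ˆL and d are assumed to be unit length:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Spotlight cone term: spot = (-L.d)^s if (-L.d) >= cos(theta), else 0,
// where L is the unit vector from the sample toward the light and d is
// the unit spot direction.
float SpotTerm(const Vec3& L, const Vec3& d, float cosTheta, float exponent)
{
    float c = -Dot(L, d);
    return (c >= cosTheta) ? static_cast<float>(std::pow(c, exponent)) : 0.0f;
}
```

A sample directly along the spot axis yields the full value of 1; a sample perpendicular to the axis falls outside the cone and receives 0.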

both of these expensive attenuation terms must be recomputed per sample makes the spotlight the most computationally expensive type of standard light in most systems. When possible, applications attempt to minimize the number of simultaneous spotlights (or even avoid their use altogether). Spotlights with circular attenuation patterns are not universal. Another popular type of spotlight (see Warn [116]) models the so-called barn door spotlights that are used in theater,

    spotAtten = (spotAtten > spotLightAngleCos)
        ? pow(spotAtten, spotLightExponent)
        : 0.0;
    values.iL = spotLightIntensity * spotAtten / distAtten;
    return values;
}

8.4.4 Other Types of Light Sources

The light sources above are only a few of the most basic that are seen in modern lighting pipelines, although they serve the purpose of introducing shader-based lighting quite well. There are many other forms of lights that are used in shader-based pipelines. We will discuss several of

the penumbra, as opposed to the fully shadowed region, called the umbra) is highly prized in non-real-time, photorealistic renderings for the realistic quality it lends to the results. Soft shadows and other area light effects are not generally supported in low-level, real-time 3D graphics software development kits (SDKs) (including OpenGL). However, high-level rendering engines based upon programmable shaders are implementing these effects in a number of

For our lighting model, we will define four colors for each material and one color for each lighting component. These will be defined in each of the following sections. We will define only one color and one vector for each light: the color of the light, a 3-vector uniform lightColor, and a vector whose components represent scalar scaling values of that color per lighting component. This 3-vector will store the scaling factor for each applicable lighting category in a

Ambient light has no direction associated with it. However, most lighting models do attenuate the ambient light from each source based on the light's intensity function at the given point, iL. As a result, point and spotlights do not produce equal amounts of ambient light throughout the scene. This tends to localize the ambient contribution of point and spotlights spatially and keeps ambient light from overwhelming a scene. The overall ambient term for a given light and

dynamic lighting. However, adding too much ambient light can lead to the scene looking "flat," as the ambient lighting dominates the coloring. We will store the ambient color of an object's material in the 3-vector shader uniform value materialAmbientColor. We will compute the ambient component of a light by multiplying a scalar ambient light factor, lightAmbDiffSpec.x (we store the ambient scaling factor in the x component of the vector), times the light color, giving

Figure 8.7 A shaft of light striking a perpendicular surface.

intersected by the (now oblique) ray of light is represented by δa′. From basic trigonometry and Figure 8.8, we can see that

    δa′ = δa / sin(π/2 − θ)
        = δa / cos θ
        = δa / (ˆL · ˆn)

And, we can compute the illuminance E′ as follows:

    E′ ∝ I/δa′ ∝ I(ˆL · ˆn)/δa ∝ (I/δa)(ˆL · ˆn) ∝ E(ˆL · ˆn)

Figure 8.8 The same shaft of light at a glancing angle.

Note that if we evaluate for the original special case ˆn = ˆL, the result is E′ = E, as expected. Thus, the reflected diffuse luminance is proportional to (ˆL · ˆn). Figure 8.9 provides a visual example of a sphere lit by a single light source that involves only diffuse lighting. Generally, both the material and the light include diffuse color values (MD and LD, respectively). The resulting diffuse color for a
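The scalar Lambert factor that falls out of this derivation can be sketched in C++ (the vector type and names are illustrative; ˆL and ˆn are assumed unit length):

```cpp
#include <cassert>
#include <algorithm>
#include <cmath>

struct Vec3f { float x, y, z; };

static float Dot3(const Vec3f& a, const Vec3f& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Lambert diffuse factor: reflected diffuse luminance is proportional
// to (L.n), clamped to 0 so surfaces facing away from the light
// receive no diffuse contribution.
float DiffuseFactor(const Vec3f& L, const Vec3f& n)
{
    return std::max(0.0f, Dot3(L, n));
}
```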

Figure 8.9 Sphere lit by diffuse light.

of the red, green, and blue components of the 4-vector. We assume that the surface normal vector at the sample point, ˆn, is passed into the function. This value may either be a per-vertex attribute in the vertex shader, an interpolated varying value in the fragment shader, or perhaps even computed in either shader. The source of the normal is unimportant to this calculation.

// GLSL Code
uniform vec3 materialDiffuseColor;
uniform vec3

8.6.4 Specular

A perfectly smooth mirror reflects all of the light from a given direction ˆL out along a single direction, the reflection direction ˆr. While few surfaces approach completely mirrorlike behavior, most surfaces have at least some mirrorlike component to their lighting behavior. As a surface becomes rougher (at a microscopic scale), it no longer reflects all light from ˆL out along a single direction ˆr, but rather in a distribution of directions centered

We know that the component of ˆL in the direction of ˆn (ln) is the projection of ˆL onto ˆn, or

    ln = ˆL · ˆn

Now we can compute lpˆp by substitution of our value for ln:

    ˆL = lnˆn + lpˆp
    ˆL = (ˆL · ˆn)ˆn + lpˆp
    lpˆp = ˆL − (ˆL · ˆn)ˆn

So, the reflection vector ˆr equals

    ˆr = lnˆn − lpˆp
       = (ˆL · ˆn)ˆn − lpˆp
       = (ˆL · ˆn)ˆn − (ˆL − (ˆL · ˆn)ˆn)
       = (ˆL · ˆn)ˆn − ˆL + (ˆL · ˆn)ˆn
       = 2(ˆL · ˆn)ˆn − ˆL
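The final line of the derivation translates directly into C++; the vector type and names are illustrative, and ˆL and ˆn are assumed unit length:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

static float DotV(const V3& a, const V3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Reflection of the unit light vector about the unit surface normal:
//   r = 2(L.n)n - L
// exactly as derived above.
V3 Reflect(const V3& L, const V3& n)
{
    float twoLn = 2.0f * DotV(L, n);
    return V3{ twoLn * n.x - L.x,
               twoLn * n.y - L.y,
               twoLn * n.z - L.z };
}
```

For a light vector at 45 degrees to the normal, the reflection comes out at 45 degrees on the other side, as expected.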

highlight, which makes the surface appear more dull and matte, whereas a larger value of mshine leads to a smaller, more intense highlight, which makes the surface appear shiny. This shininess factor is considered a property of the surface material and represents how smooth the surface appears. Generally, the complete specular term includes a specular color defined on the material (MS), which allows the highlights to be tinted a given color. The specular light color is often set

    in vec4 surfacePosition,
    in lightSampleValues light)
{
    vec3 viewVector = normalize(;
    vec3 reflectionVector
        = 2.0 * dot(light.L, surfaceNormal) * surfaceNormal
            - light.L;
    return (dot(surfaceNormal, light.L) <= 0.0)
        ? vec3(0.0, 0.0, 0.0)
        : (light.iL * (lightColor * lightAmbDiffSpec.z)
            * materialSpecularColor
            * pow(max(0.0, dot(reflectionVector, viewVector)),
                materialSpecularExp));
}

Figure 8.11 Sphere lit by specular light.

Infinite Viewer Approximation

One of the primary reasons that the specular term is the most expensive component of lighting is the fact that a normalized view and reflection vector must be computed for each sample, requiring at least one normalization per sample, per light. However, there is another method of approximating specular reflection that can avoid this expense in common cases. This method is based on a slightly different approximation to the specular highlight

By itself, this new method of computing the specular highlight would not appear to be any better than the reflection vector system. However, if we assume that the viewer is at infinity, then we can use a constant view vector for all vertices, generally the camera's view direction. This is analogous to the difference between a point light and a directional (infinite) light. Thanks to the fact that the halfway vector is based only on the view vector and the
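The halfway vector itself is simple to sketch in C++ (illustrative names; ˆL and the view vector v are assumed unit length):

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y, z; };

// Halfway vector for the Blinn specular approximation:
//   h = (L + v) / |L + v|
// With the infinite-viewer approximation, v is constant, so for a
// directional light h is constant as well and need not be recomputed
// per sample.
Vec Halfway(const Vec& L, const Vec& v)
{
    Vec s{ L.x + v.x, L.y + v.y, L.z + v.z };
    float len = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
    return Vec{ s.x / len, s.y / len, s.z / len };
}
```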

where the results are

1. CV, the computed, lit RGB color of the sample.

2. AV, the alpha component of the RGBA color of the sample.

The intermediate, per-light values used to compute the results are

3. CA, the ambient light term, which is equal to

    CA = iL·MA·LA

4. CD, the diffuse light term, which is equal to

    CD = iL·MD·LD·(max(0, ˆL · ˆn))

5. CS, the specular light term, which is equal to

    CS = iL·MS·LS·(max(0, ˆh · ˆn))^mshine,  if ˆL · ˆn > 0
    CS = 0,                                  otherwise

The shader code to compute
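For illustration, the three per-light terms above can be combined in a scalar (grayscale) C++ sketch; full RGB would apply the same formula per channel. All names are illustrative assumptions, and the dot products are passed in precomputed:

```cpp
#include <cassert>
#include <algorithm>
#include <cmath>

// Per-light scalar combination of the terms above:
//   CA = iL*MA*LA
//   CD = iL*MD*LD*max(0, L.n)
//   CS = iL*MS*LS*max(0, h.n)^mshine, gated on L.n > 0
float LitSample(float iL, float MA, float LA, float MD, float LD,
                float MS, float LS, float LdotN, float HdotN, float mshine)
{
    float ambient = iL * MA * LA;
    float diffuse = iL * MD * LD * std::max(0.0f, LdotN);
    float specular = (LdotN > 0.0f)
        ? iL * MS * LS
            * static_cast<float>(std::pow(std::max(0.0f, HdotN), mshine))
        : 0.0f;
    return ambient + diffuse + specular;
}
```

Note the gating of the specular term on ˆL · ˆn > 0: a surface facing away from the light keeps its ambient contribution but receives no highlight.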

For a visual example of all of these components combined, see the lit sphere in Figure 8.13.

(Source Code Demo: MultipleLights)

Most interesting scenes will contain more than a single light source. Thus, the lighting model and the code must take this into account. When lighting a given point, the contributions from each component of each active light L are summed to form the final lighting equation, which is detailed as follows:

    CV = Emissive + Ambient + Σ (over lights L) Per-light

shader code, we need to compute iL and ˆL per active light. The shader code for computing these values required source data for each light. In addition, the type of data required differed by light type. The former issue can be solved by passing arrays of uniforms for each value required by a light type. The elements of the arrays represent the values for each light, indexed by a loop variable. For example, if we assume that all of our lights are directional, the code to compute

    in vec4 surfacePosition,
    in lightSampleValues light,
    in int i)
{
    vec3 viewVector = normalize(;
    vec3 reflectionVector
        = 2.0 * dot(light.L, surfaceNormal) * surfaceNormal
            - light.L;
    return (dot(surfaceNormal, light.L) <= 0.0)
        ? vec3(0.0, 0.0, 0.0)
        : (light.iL * (lightColor[i] * lightAmbDiffSpec[i].z)
            * materialSpecularColor
            * pow(max(0.0, dot(reflectionVector, viewVector)),
                materialSpecularExp));
}

vec3 computeLitColor(in lightSampleValues light,
    in vec4

these approaches and the number of uniforms that must be sent to the shader can be prohibitive for some systems. As a result, it is common for rendering engines to either generate specific shaders for the lighting cases they know they need, or else generate custom shader source code in the engine itself, compiling these shaders at runtime as they are required. Clearly, many different values and components must come together to light even a single sample. This fact can make

8.8.1 Flat-Shaded Lighting

Historically, the simplest shading method applied to lighting was per-triangle, flat shading. This method involved evaluating the lighting equation once per triangle and using the resulting color as the constant triangle color. This color is assigned to every pixel covered by the triangle. In older, fixed-function systems, this was the highest-performance lighting/shading combination, owing to two facts: the more expensive lighting equation

from the light. While the centroid of the triangle is a reasonable choice, the fact that it must be computed specifically for lighting makes it less desirable. For reasons of efficiency (and often to match with the graphics system), the most common sample point for flat shading is one of the triangle vertices, as the vertices already exist in the desired space. This can lead to artifacts, since a triangle's vertices are (by definition) at the edge of the area of the

    8.8 Lighting and Shading351Figure8.15 Sphere lit and shaded by per-vertex lighting and Gouraud shading.only as areas for interpolation. This shift to vertices as localized surfacerepresentations lends focus to the fact that we will need smooth surfacenormals at each vertex. The next section will discuss several methods forgenerating these vertex normals.Generating Vertex NormalsIn order to generate smooth lighting that represents a surface at each vertex,we need to generate a single normal that read more..

    352 Chapter 8 LightingThis is the vertex position, treated as a vector (thus the subtraction ofthe zero point) and normalized. Analytical normals can create very realisticimpressions of the original surface, as the surface normals are pivotal tothe overall lighting impression. Examples of surfaces for which analyti-cal normals are available include implicit surfaces and parametric surfacerepresentations, which generally include analytically defined normal vectorsat every point in their read more..

    8.8 Lighting and Shading353Basically, the algorithm sums the normals of all of the faces that areincident upon the current vertex and then renormalizes the resulting summedvector. Since this algorithm is (in a sense) a mean-based algorithm, it can beaffected by tessellation. Triangles are not weighted by area or other such fac-tors, meaning that the face normal of each triangle incident upon the vertexhas an equal “vote” in the makeup of the final vertex normal. While the methodis far from read more..

    354 Chapter 8 Lightingnormals on either side of the crease. By having different surface normals incopies of colocated vertices, the triangles on either side of an edge can be litaccording to the correct local surface orientation. For example, at each vertexof a cube, there will be three vertices, each one with a normal of a differentface orientation, as we see in Figure Per-Fragment LightingSource CodeDemoPerFragmentLightingThere are significant limitations to per-vertex lighting. read more..

    8.8 Lighting and Shading355the nonlinearity of the specular exponent term and to the rapid changes inthe specular halfway vector ˆh with changes in viewer location).For example, let us examine the specular lighting term for the sur-face shown in Figure 8.18. We draw the 2D case, in which the triangle isrepresented by a line segment. In this situation, the vertex normals all pointoutward from the center of the triangle, meaning that the triangle is repre-senting a somewhat curved (domed) read more..

    356 Chapter 8 LightingPhong shading ofsingle triangleCorrect lighting ofsmooth surfacen • h ≈ 0^^n • h ≈ 0^^n^n^Interpolatedvertex normaln • h ≈ 1^^v h L n^^^^ViewerPoint lightFigure8.19 Phong shading of the same configuration.point near the center of the triangle in this case, we would find an extremelybright specular highlight there. The specular lighting across the surface ofthis triangle is highly nonlinear, and the maximum is internal to the trian-gle. Even more problematic read more..

    8.8 Lighting and Shading357works by evaluating the lighting equation once for each fragment covered bythe triangle. The difference between Gouraud and Phong shading may be seenin Figures 8.18 and 8.19. For each sample across the surface of a triangle, thevertex normals, positions, reflection, and view vectors are interpolated, andthe interpolated values are used to evaluate the lighting equation. However,since triangles tend to cover more than 1–3 pixels, such a lighting method willresult in read more..

    lightingNormal = gl_NormalMatrix * gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

An example of a fragment shader implementing lighting by a single directional light is as follows:

// GLSL Code
uniform vec3 materialEmissiveColor;
uniform vec3 materialAmbientColor;
uniform vec4 materialDiffuseColor;
uniform vec3 materialSpecularColor;
uniform vec4 dirLightPosition;
uniform float dirLightIntensity;
uniform vec3 lightAmbDiffSpec;
uniform vec3 lightColor;
varying vec4 …

    8.9 Textures and Lighting3598.9.1 Basic ModulationThe simplest methods of merging these two techniques involves simplyviewing each of the two methods as generating a color per sample and mergingthem. With texturing, this is done directly via texture sampling; with lighting,it is done by evaluating the lighting equation. These two colors must be com-bined in a way that makes visual sense. The most common way of combiningtextures and lighting results is via multiplication, or “modulation.” In read more..

Assuming for the moment that the lit color is computed and stored in litColor, either by computation in the fragment shader or passed down as a varying component, a simple textured, lit fragment shader would be as follows:

// GLSL - fragment shader
uniform sampler2D texture;
varying vec2 texCoords;

void main()
{
    // lit color is in vec4 litColor
    vec4 litColor;
    // ...
    // Sample the texture represented by "texture"
    // at the location "texCoords"
    gl_FragColor = litColor * …

Figure 8.21 Combining textures and lighting: (a) specular vertex color added to diffuse vertex color, then modulated with the texture, and (b) diffuse vertex color modulated with the texture, then specular vertex color added.

diffuseLighting) and the specular component into another (which we'll call specularLighting). Having computed these independently, we merge them as follows:

// GLSL - fragment shader
uniform sampler2D texture;
varying vec2 texCoords;

void …

    362 Chapter 8 Lighting8.9.3 Textures as MaterialsThe next step in using textures and lighting together involves using multipletextures on a single surface. As shown in the previous section, a texture canbe used to modulate one or more material colors on a surface. In fact, tex-tures also can be used to replace one or more surface material components.Common surface material colors to be replaced with textures are:■Material diffuse color. Often called a diffuse map, this is extremelysimilar to read more..

void main()
{
    vec4 diffuseLighting;
    vec4 specularLighting;
    // ...
    vec4 diffuseAndGlossMap = texture2D(texture, texCoords);
    // Sample the texture represented by "texture"
    // at the location "texCoords"
    gl_FragColor.rgb = diffuseLighting.rgb * diffuseAndGlossMap.rgb
        + specularLighting.rgb * diffuseAndGlossMap.a;
}

8.10 Advanced Lighting

Programmable shaders make an almost endless array of lighting effects possible. We will discuss a few of these methods and …

    364 Chapter 8 Lightingbetween neighboring texels, was limited to surfaces that looked embossed.Very sharp changes in surface orientation were difficult with bump mapping.For a discussion of these limitations, see Theodore [110].In order to add more detail to the lighting at this fine level, we must beable to generate a surface normal per fragment that contains real information(not just interpolated information) for each fragment. By storing the normalin a texture, we can generate normals that read more..

understand, but more expensive computationally) and transform the normal into camera space.

// GLSL Code
uniform sampler2D normalMap;
varying vec2 texCoords;
uniform vec3 materialEmissiveColor;
uniform vec3 materialAmbientColor;
uniform vec4 materialDiffuseColor;
uniform vec3 materialSpecularColor;
uniform vec4 dirLightPosition;
uniform float dirLightIntensity;
uniform vec3 lightAmbDiffSpec;
uniform vec3 lightColor;
varying vec4 lightingPosition;

void main()
{
    vec4 finalColor;
    finalColor.rgb = …

    366 Chapter 8 Lightingautomatically convert a very high polygon–count object (millions of triangles)into a low polygon–count geometry object and a high-resolution normal map.Put together, the low-resolution geometry object and the high-resolution nor-mal map can be used to efficiently render what appears to be a very convincingfacsimile of the original object.8.11 Reflective ObjectsWhile specular lighting can provide the basic impression of a shiny object,large expanses of a reflective read more..

    8.12 Shadows367large or infinitely distant. Thus, any normalized direction vector maps to asingle, fixed location in the environment map.Environment maps are applied to a surface dynamically — they are notsampled via a priori texture coordinates; the mapping of the environmentmap will change as the scene and the view changes. The most common methodused is to compute the reflection of the view vector in a manner similar to thatused for specular lighting earlier in this chapter. The read more..

    368 Chapter 8 Lightinglight’s location is exactly the geometry that will be in shadow. Geometry thatfalls on the boundaries of these two cases is likely to be in the penumbra,orpartially shadowed region, when rendering soft shadows.Since the real core of shadowing methods lie in the structure of the two-pass algorithms rather than in the mathematics of lighting, the details ofshadowing algorithms are beyond the scope of this book. A technique knownas ray tracing (see Glassner [40]) uses read more..

    Chapter9Rasterization9.1 IntroductionThe final major stage in the rendering pipeline is called rasterization.Rasterization is the operation that takes screen-space geometry, a fragmentshader, and the inputs to that shader and actually draws the geometry to thelow-level two-dimensional (2D) display device. Once again, we will focus ondrawing sets of triangles, as these are the most common primitive in three-dimensional (3D) graphics systems. In fact, for much of this chapter, wewill focus on read more..

    370 Chapter 9 RasterizationBy its very nature, rasterization is time consuming when compared tothe other stages in the rendering pipeline. Whereas the other stages of thepipeline generally require per-object, per-triangle, or per-vertex computation,rasterization inherently requires computation of some sort for every pixel.At the time of this book’s publication, displays 1,600 pixels wide by 1,200pixels high — resulting in approximately two million pixels on the screen —are quite popular. read more..

    9.3 Conceptual Rasterization Pipeline371for a “frame,” or a screen’s worth of image). In basic terms, a framebufferis a 2D digital image: a block of memory that contains numerical valuesthat represent colors at each point on the screen. Each color value rep-resents the color of the screen at a given point — a pixel. Each pixel hasred, green, and blue components. Put together, this framebuffer representsthe image that is to be drawn on the screen. The display hardware readsthese colors read more..

    372 Chapter 9 Rasterizationsome cases be skipped; for example, if the scene geometry is known to coverthe entire screen, then there is no need to clear the screen. The old imagewill be entirely overwritten by the new image in such a case. But for mostapplications, this step involves using the rendering application programminginterface (API) to set all of the pixels in the framebuffer (in a single functioncall) to a fixed color.The second step is to rasterize the geometry to the framebuffer. We read more..

    9.4 Determining the Fragments: Pixels Covered by a Triangle3739.4Determining the Fragments: PixelsCovered by a Triangle9.4.1 FragmentsIn order to progress any further in the rasterization phase of rendering, wemust break triangles (or more generally, geometry) in screen space into piecesthat more directly match the pixels in the framebuffer. This involves determin-ing the intersection of pixel rectangles or pixel center points with a triangle.In the color and lighting chapters, we used the term read more..

    374 Chapter 9 RasterizationFigure9.2 A screen-space triangle to be rasterized.CompleteFragmentsPartialFragmentsFigure9.3 Fragments generated by the triangle. Complete fragments are darkgray; partial fragments are light gray. read more..

    9.4 Determining the Fragments: Pixels Covered by a Triangle375screen is covered by geometry, then there may be many pixels that contain nofragments from the scene. On the other hand, if a lot of triangles overlapone another in screen space, then many pixels on the screen may containmore than one fragment. The ratio of the number of fragments in the scenein a given frame to the number of pixels on the screen is called the depthcomplexity or overdraw, because this ratio represents how many full read more..

    376 Chapter 9 RasterizationMin yMax yFigure9.4 A triangle and its raster spans.triangle, unless the scan line intersects a vertex, that scan line will intersectexactly two of the edges of the triangle: one to cross from outside the triangleinto it, and one to leave again. These two intersection points will define theminimum and maximum xvalues of the span.9.4.4 Handling Partial FragmentsComplete fragments always continue on to the next stage of the rasterizationprocess. The fate of partial read more..

    9.4 Determining the Fragments: Pixels Covered by a Triangle377Figure9.5 Fragments from Figure 9.3 rasterized using point sampling.sampling of geometry, as an entire fragment is generated or not generatedbased on a single point sample within each pixel. Figure 9.5 shows the sametriangle as in Figure 9.3, but with the partial fragments either dropped orpromoted to complete fragments, based on whether the fragment containsthe pixel’s center point.The behavior of such a graphics system when a read more..

    378 Chapter 9 Rasterizationpoint-sampled fill conventions, see Hecker’s Game Developer Magazine articleseries [57].9.5 Determining Visible GeometryThe overall goal in rendering geometry is to ensure that the final renderedimages convincingly represent the given scene. At the highest level, this meansthat objects must appear to be correctly obscured by closer objects and mustnot be obscured by more distant objects. This process is known as visiblesurface determination (VSD), and there are read more..

    9.5 Determining Visible Geometry379the depth of the current nearest fragment at each pixel, along with the colorof that fragment.Having stored this information, we can compute a simple test each timea fragment is drawn to a pixel. If the new fragment’s depth is closer thanthe currently stored depth value at that pixel, then the new fragment winsthe pixel. The color of the new fragment is computed and this new fragmentcolor is written to the pixel. The fragment’s depth value replaces the read more..

    380 Chapter 9 Rasterizationrepresents the rendering of the fragments from two triangles to a small depthbuffer. Note how the closer triangle’s fragment always wins the pixel (thecorrect result), even if it is drawn first.Because the method is per-pixel and thus per fragment, the depth of eachtriangle is computed on a per-fragment granularity, and this value is usedin the depth comparison. As a result of this finer subtriangle granularity,the depth buffer automatically handles triangle read more..

    9.5 Determining Visible Geometry381every fragment and compare it to the buffer. However, it can make overdrawless of an issue in some cases, since it is not necessary to compute or writethe color of any fragment that fails the depth test. In fact, some applicationswill try to render their depth-buffered scenes in roughly near-to-far order-ing (while still avoiding per-triangle, per-frame sorting on the CPU) so thatthe later geometry is likely to fail the depth buffer test and not require read more..

    382 Chapter 9 Rasterization(xndc,yndc) must intersect this ray. Normally, we cannot “invert” the projectionmatrix, since a point on the screen maps to a ray in view space. However, byknowing the plane of the triangle, we can intersect the triangle with the viewray as follows. All points Pin view space that fall in the plane of the triangleare given by equation 9.1. In addition, we know that the point on the trianglethat projects to (xndc,yndc) must be equal to trfor some t. Substituting read more..

Substituted into equation 9.3, this evaluates to the expected constant z_view = z_const:

    z_view = dist(−z_const) / (0·x_ndc + 0·y_ndc − 1·dist)
           = (−dist · z_const) / (−dist)
           = z_const

As defined in equation 9.3, z_view is an expensive value to compute per fragment (in the general, nonconstant depth case), because it is a fraction with a nonconstant denominator. This would require a per-fragment division to compute z_view, which is more expensive than we would like. However, depth buffering …

from screen-space pixel coordinates to 1/z_view. As a result, for a given projected triangle,

    1/z_view = f·x_s + g·y_s + h

where f, g, and h are real values and are constant per triangle. We define the preceding mapping for a given triangle as

    InvZ(x_s, y_s) = f·x_s + g·y_s + h

An interesting property of InvZ(x_s, y_s) (or of any affine mapping, for that matter) can be seen from the derivation

    InvZ(x_s + 1, y_s) − InvZ(x_s, y_s) = (f(x_s + 1) + g·y_s + h) − (f·x_s + g·y_s + h)
                                        = f(x_s + 1) − (f·x_s)
                                        = f

meaning …

Figure 9.7 Forward differencing the depth value.

a z_ndc value that is equal to −1 at the near plane and 1 at the far plane and was of the form

    z_ndc = (a + b·z_view) / z_view = a·(1/z_view) + b

which is an affine mapping of InvZ. As a result, we find that our existing value z_ndc is screen affine and is suitable for use as a depth buffer value. This is the special case of depth buffering …

    386 Chapter 9 Rasterizationhave infinite precision (recall the discussion in Chapter 1), surfaces that arenot coplanar can map to the same depth value. This can lead to objects beingdrawn in the wrong order.If our depth values were mapped linearly into view space, then a 16-bit,fixed-point depth buffer would be able to correctly sort any objects whosesurfaces differed in depth by about 1/60,000 of the difference between thenear and far plane distances. This would seem to be more than enough read more..

    9.5 Determining Visible Geometry387The simplest way to avoid these issues is to maximize usage of the depthbuffer by moving the near plane as far out as possible so that the accuracyclose to the near plane is not wasted. Another method that is popular in 3Dhardware is known as the w-buffer. The w-buffer interpolates a screen-affinevalue for depth (often 1/w) at a high precision, then computes the inverse ofthe interpolation at each pixel to produce a value that is linear in view space(i.e., 1 read more..

    388 Chapter 9 Rasterizationif you are clearing both buffers at the start of a frame, it can be faster on somesystems to clear them both with a single call, which is done as follows in Iv:renderer->ClearBuffers(kColorDepthClear);To enable or disable depth testing we simply set the desired test modeusing the IvRendererfunction SetDepthTest. To disable testing, pass kDis-ableDepthTest. To enable testing, pass one of the other test modes (e.g.,kLessDepthTest). By default, depth testing is read more..

    9.6 Computing Fragment Shader Inputs389stage of the rasterization pipeline, blending (which will be discussed later inthis chapter).The next few sections will discuss how shader source values are computedper fragment from the sources we have listed. While there are many possiblemethods that may be used, we will focus on methods that are fast to computein screen space and are well suited to the scan line–centric nature of mostrasterizer software and even some rasterizer hardware.9.6.1 Uniform read more..

where both a, c ≠ 0. If we assume that a triangle's vertices are all at the same depth (i.e., view space Z is equal to a constant z_const for all points in the triangle), then the projection of a point in the triangle is

    x_s = a·x_view/z_const + b = (a/z_const)·x_view + b = a′·x_view + b
    y_s = c·y_view/z_const + d = (c/z_const)·y_view + d = c′·y_view + d

Note that a, c ≠ 0 implies that a′, c′ ≠ 0, so we can rewrite these such that

    x_view = (x_s − b) / a′
    y_view = (y_s − d) / c′

Thus, for triangles of constant depth z_const:

■ Projection forms …

meaning that

    Color(x_s + 1, y_s) = Color(x_s, y_s) + Cx

and similarly

    Color(x_s, y_s + 1) = Color(x_s, y_s) + Cy

As with inverse Z, we can compute per-fragment values for per-vertex attributes for a constant-z triangle simply by computing forward differences of the color of a "base fragment" in the triangle.

When a triangle that does not have constant depth in camera space is projected using a perspective projection, the resulting mapping is not screen affine. From our …

    392 Chapter 9 RasterizationInstitute of Technology’s offline renderer interpolated colors incorrectly inperspective for several years before anyone noticed! As a result, softwaregraphics systems have often avoided the expensive, perspective-correct projec-tive interpolation of Gouraud colors and have simply used the affine mappingand forward differencing. However, our next interpolant, texture coordinates,will not be so forgiving of issues in perspective-correct interpolation.9.6.3 read more..

    9.6 Computing Fragment Shader Inputs393Wire-frame viewTextured viewFigure9.9 Two textured triangles parallel to the view plane.Wire-frame viewTextured viewFigure9.10 Two textured triangles oblique to the view plane, drawn using aprojective mapping.Wire-frame viewTextured viewFigure9.11 Two textured triangles oblique to the view plane, drawn using anaffine mapping. read more..

    394 Chapter 9 Rasterizationdegrees of freedom in the transformation. Each triangle defines its transformindependent of the other triangles, and the result is a bend in what should bea set of lines across the square.The projective transform, however, has additional degrees of freedom,represented by the depth values associated with each vertex. These depthvalues change the way the texture coordinate is interpolated across thetriangle and allow straight lines in the mapped texture image to read more..

    9.7 Evaluating the Fragment Shader395texture coordinates themselves. This is an extremely powerful techniquecalled indirect texturing. The first texture lookup forms a “table lookup,” or“indirection,” that generates a new texture coordinate for the second texturelookup.Indirect texturing is an example of a more general case of texturing inwhich evaluating a texture sample generates a “value” other than a color.Clearly, not all texture lookups are used as colors. However, for ease of read more..

    396 Chapter 9 Rasterization9.8.1 Texture Coordinate ReviewWe will be using a number of different forms of coordinates through-out our discussion of rasterizing textures. This includes the application-level, normalized, texel-independent texture coordinates (u, v), as well asthe texture size-dependent texel coordinates (utexel,vtexel), both of which areconsidered real values. We used these coordinates in our introduction totexturing.A final form of texture coordinate is the integer texel read more..

the texel containing the fragment center point and use its color directly. This method, called nearest-neighbor texturing, is very simple to compute. For any (u_texel, v_texel) texel coordinate, the integer texel coordinate (u_int, v_int) is the nearest integer texel center, computed via rounding:

    (u_int, v_int) = (⌊u_texel + 0.5⌋, ⌊v_texel + 0.5⌋)

Having computed this integer texel coordinate, we simply use the Image() function to look up the value of the texel. The returned color is …

With nearest-neighbor texturing, all (u_texel, v_texel) texel coordinates in the square

    i_int − 0.5 ≤ u_texel < i_int + 0.5
    j_int − 0.5 ≤ v_texel < j_int + 0.5

will map to the integer texel coordinates (i_int, j_int) and thus produce a constant fragment shader value. This is a square of height and width 1 in texel space, centered at the texel center. This results in obvious squares of constant color, which tends to draw attention to the fact that a low-resolution image has been …

    9.8 Rasterizing Textures399(uint,vint)ufrac0.5vfrac0.75(uint,vint1)(utexel,vtexel)Pixel mapped intotexel space(uint1,vint1)0.50.75(uint1,vint)Figure9.13 Finding the four texels that “bound” a pixel center and the fractionalposition of the pixel.Pixel mapped intotexel spaceC01C11C10C00Figure9.14 The four corners of the texel-space bounding square around the pixelcenter. read more..

We use Image() to look up the texel colors at the four corners of the square. For ease of notation, we define the following shorthand for the color of the texture at each of the four corners of the square (Figure 9.14):

    C00 = Image(u_int, v_int)
    C10 = Image(u_int + 1, v_int)
    C01 = Image(u_int, v_int + 1)
    C11 = Image(u_int + 1, v_int + 1)

Then, we define a smooth interpolation of the four texels surrounding the texel coordinate. We define the smooth mapping in two stages, as shown in Figure 9.15. …

and similarly along the maximum-v edge:

    CMaxV = C01(1 − u_frac) + C11·u_frac

Finally, we linearly interpolate between these two values using the fractional v coordinate:

    CFinal = CMinV(1 − v_frac) + CMaxV·v_frac

See Figure 9.15 for a graphical representation of these two steps. Substituting these into a single, direct formula, we get

    CFinal = C00(1 − u_frac)(1 − v_frac) + C10·u_frac(1 − v_frac)
           + C01(1 − u_frac)·v_frac + C11·u_frac·v_frac

This is known as bilinear texture filtering because the …

    402 Chapter 9 Rasterizationgreatly improve the image quality of magnified textures by reducing the visual“blockiness,” it will not add new detail to a texture. If a texture is magnifiedconsiderably (i.e., one texel maps to many pixels), the image will look blurrydue to this lack of detail. The texture shown in Figure 9.16 is highly magnified,leading to obvious blockiness in the left image (a) and blurriness in the rightimage (b).Texture Magnification in PracticeThe IvAPIs use the read more..

    9.8 Rasterizing Textures403In an extreme (but actually quite common) case, the entire high-detailtexture could be mapped in such a way that it maps to only a few fragments.Figure 9.17 provides such an example; in this case, note that if the objectmoves even slightly (even less than a pixel), the exact texel covering the frag-ment’s center point can change drastically. In fact, such a point sample isalmost random in the texture and can lead to the point-sampled color ofthe texture used for the read more..

    404 Chapter 9 Rasterization(a)(b)Figure9.18 Mapping the square screen-space area of a pixel back into texel space:(a) screen space with pixel of interest highlighted and (b) texel-space back-projectionof pixel area.fragment fairly, we need to compute a weighted average of the colors of all ofthe texels in this quadrilateral, based on the relative area of the quadrilateralcovered by each texel. The more of the fragment that is covered by a giventexel, the greater the contribution of that read more..

    9.8 Rasterizing Textures405of extra storage per texture (in fact, it increases the number of texels thatmust be stored by approximately one-third). Mipmapping is a popular filteringalgorithm in both hardware and software rasterizers and is relatively simpleconceptually.To understand the basic concept behind mipmapping, imagine a 2× 2–texel texture. If we look at a case where the entire texture is mapped to asingle fragment, we could replace the 2× 2texture with a 1× 1texture (asingle read more..

    406 Chapter 9 Rasterizationoriginal 2× 2texel texture. Each of these two versions of the texture has auseful feature that the other does not.Mipmapping takes this method and generalizes it to any texture withpower-of-two dimensions. For the purposes of this discussion, we assumethat textures are square (the algorithm does not require this, as we shall seelater in our discussion of mipmapping in practice). Mipmapping takes the ini-tial texture image Image0 (abbreviated I0) of dimension wtexture= read more..

    9.8 Rasterizing Textures407Note that if we use the same original texture coordinates for both versionsof the texture, Image1 simply appears as a blurry version of Image0 (withhalf the detail of Image0). If a block of about four adjacent texels in Image0covers a fragment, then we can simply use Image1 when texturing. Butwhat about more extreme cases of minification? The algorithm can be con-tinued recursively. For each image Imagei whose dimensions are greaterthan 1, we can define Imagei+1, read more..

    408 Chapter 9 RasterizationTexturing a Fragment with a MipmapThe most simple, general algorithm for texturing a fragment with a mipmapcan be summarized as follows:1. Determine the mapping of the fragment in screen space back into aquadrilateral in texture space by determining the texture coordinatesat the corners of the fragment.2. Having mapped the fragment square into a quadrilateral in texturespace, select whichever mipmap level comes closest to exactly mappingthe quadrilateral to a single read more..

If a fragment maps to about one texel, then

    √((∂u_texel/∂x_s)² + (∂v_texel/∂x_s)²) ≈ 1, and
    √((∂u_texel/∂y_s)² + (∂v_texel/∂y_s)²) ≈ 1

In other words, even if the texture is rotated, if the fragment is about the same size as the texel mapped to it, then the overall change in texture coordinates over a single fragment has a length of about one texel. Note that all four of these differences are independent. These partials are dependent upon u_texel and v_texel, which are in turn dependent …

This gives us a closed-form method that can convert existing partials (used to interpolate the texture coordinates across a scan line) to a specific mipmap level L. The final formula is

    L = log₂( max( √((∂u_texel/∂x_s)² + (∂v_texel/∂x_s)²),
                   √((∂u_texel/∂y_s)² + (∂v_texel/∂y_s)²) ) )
      = log₂( √( max( (∂u_texel/∂x_s)² + (∂v_texel/∂x_s)²,
                      (∂u_texel/∂y_s)² + (∂v_texel/∂y_s)² ) ) )
      = (1/2) log₂( max( (∂u_texel/∂x_s)² + (∂v_texel/∂x_s)²,
                         (∂u_texel/∂y_s)² + (∂v_texel/∂y_s)² ) )

Note that the value of L is real, …

    9.8 Rasterizing Textures411if point sampling is used with a nonmipmapped texture, adjacent pixels mayrequire reading widely separated parts of the texture. These large per-pixelstrides through a texture can result in horrible cache behavior and can impedethe performance of nonmipmapped rasterizers severely. These cache missstalls make the cost of computing mipmapping information (at least on a per-triangle basis) worthwhile, independent of the significant increase in visualquality. In fact, read more..

    412 Chapter 9 Rasterizationexact 3D analog to bilinear interpolation. It is the most expensive of thesemipmap filtering operations, requiring the lookup of eight texels per frag-ment, as well as seven linear interpolations (three per each of the two mipmaplevels, and one additional to interpolate between the levels), but it also pro-duces the smoothest results. Filtering between mipmap levels also increasesthe amount of texture memory bandwidth used, as the two mipmap levelsmust be accessed per read more..

Table 9.2 Mipmap level size progression

    Level    Width    Height
      0        32        8
      1        16        4
      2         8        2
      3         4        1
      4         2        1
      5         1        1

the larger dimension continues to decrease. So, for a 32 × 8–texel texture, the mipmap levels are shown in Table 9.2. Note that the texels of the mipmap level images set in the array returned by BeginLoadData must be computed by the application. Iv simply accepts these images as the mipmap levels and uses them directly. Once all of the mipmap levels for a texture are specified, the texture may be …

    texture->EndLoadData(level);
}
// ...

As a convenience, APIs such as Iv support automatic box filtering and creation of mipmap pyramids from a single image. In Iv, an application may provide the top-level image via the methods above and then automatically generate the remaining levels via the IvTexture function GenerateMipmapPyramid. The preceding code could be completely replaced with the following automatic mipmap generation.

IvTexture* texture;
// ...
{
    unsigned int width = …


… common mipmapped mode (as described previously) is trilinear filtering, which is set using

IvTexture* texture;
// ...
texture->SetMinFiltering(kBilerpMipmapLerpTexMinFilter);
// ...

9.9 From Fragments to Pixels

Thus far, this chapter has discussed generating fragments, computing the per-fragment source values for a fragment's shader, and some details of the more complex aspects of evaluating a fragment's shader (texture lookups). However, the first few sections of …


… pixels containing multiple partial fragments. We will close the chapter with a discussion of each.

9.9.1 Pixel Blending

Source Code Demo: AlphaBlending

Pixel blending is more commonly referred to by the name of its most ubiquitous special case: alpha blending. Although it is really just a special case of general pixel blending, alpha blending is by far the most common form of pixel blending. It is called alpha blending because it involves interpolating between the existing …
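As a concrete sketch of this interpolation (types and names are ours, not Iv's), the classic "over" blend combines the incoming fragment color with the pixel color already in the framebuffer, weighted by the fragment's alpha:

```cpp
struct Color { float r, g, b; };

// Standard "over" alpha blend: the new pixel color interpolates between
// the source fragment color and the existing destination (framebuffer)
// color, weighted by the source alpha.
Color AlphaBlend(const Color& src, float srcAlpha, const Color& dst)
{
    Color out;
    out.r = srcAlpha * src.r + (1.0f - srcAlpha) * dst.r;
    out.g = srcAlpha * src.g + (1.0f - srcAlpha) * dst.g;
    out.b = srcAlpha * src.b + (1.0f - srcAlpha) * dst.b;
    return out;
}
```

Note that this read-modify-write of the destination pixel is exactly the extra framebuffer bandwidth the text describes.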


… hardware) require that the pixel color be read from the framebuffer for each fragment blended. This increased memory bandwidth means that alpha blending can impact performance on some systems (in a manner analogous to depth buffering). In addition, alpha blending has several other properties that make its use somewhat challenging in practice.

Alpha blending is designed to compute a new pixel color based on the idea that the new fragment color represents a possibly …


… than the number of opaque triangles. Given a set of triangles, one method of attempting to correctly compute the blended pixel color is as follows:

1. Collect the opaque triangles in the scene into a list, O.
2. Collect the translucent triangles in the scene into another list, T.
3. Render the triangles in O normally, using depth buffering.
4. Sort the triangles in T by depth into a far-to-near ordering.
5. Render the sorted list T with blending, using depth buffering.

This …


… require the opaque objects to be drawn first, followed by the blended objects, but neither requires the blended objects to be sorted into a depthwise ordering. As a result, these blending modes are very popular for particle system effects, in which many thousands of tiny, blended triangles are used to simulate smoke, steam, dust, or water.

Note that if depth buffering is used with unsorted, blended objects, the blended objects must be drawn with depth buffer writing …


9.9.2 Antialiasing

The other simplifying rasterization assumption we made earlier, the idea that partial fragments are either ignored or "promoted" to complete fragments, induces its own set of issues. The idea of converting all fragments into all-or-nothing cases was to allow us to assume that a single fragment would "win" a pixel and determine its color. We used this assumption to reduce per-fragment computations to a single-point sample. This is reasonable if we …


[Figure 9.23: A point sample may not accurately represent the overall color of a pixel. Labels: point samples of partial fragments; point samples can fall in unrepresentative parts of pixels; final on-screen color of pixels; entire pixels may be assigned an unrepresentative color.]

… dark gray, with only a very small square in the center being bright white. As a result, selecting a pixel color of bright white does not accurately represent the color of the pixel rectangle as a whole. Our …


[Figure 9.24: Subpixel motion causing a large change in point-sampled pixel color. Labels: white fragment covers a pixel center; white partial fragment drawn to screen; white fragment moves (dotted outline shows previous position); fragment no longer covers a pixel center; final on-screen color of pixels.]

… much better. In Figure 9.25, we can see that the white fragment covers approximately 10 percent of the area of the pixel, leaving the other 90 percent as dark gray. Weighting the color by the …


[Figure 9.25: Area sampling of a pixel. Labels: pixel; point sample location; point-sampled pixel color; area-sampled pixel color; screen-space pixel coverage; 10% coverage, (1, 1, 1) color; 90% coverage, (1/4, 1/4, 1/4) color.]

… really a special case of a more general definite integral. If we imagine that we have a screen-space function that represents the color of every position on the screen (independent of pixels or pixel centers), C(x, y), then the color of a pixel defined as the region l ≤ …


While area sampling does avoid completely missing or overemphasizing any single sample, it is not the only method used, nor is it the best at representing the realities of display devices (where the intensity of a physical pixel may not actually be constant within the pixel rectangle). The area sampling shown in equation 9.4 implicitly weights all regions of the pixel equally, giving the center of the pixel weighting equal to that of the edges. As a result, it is often …


… desiring more depth, Glassner [41] and Wohlberg [122] detail a wide range of sampling theory.

Supersampled Antialiasing

The methods so far discussed show theoretical ways for computing area-based pixel colors. These methods require that pixel-coverage values be computed per fragment. Computing analytical (exact) pixel-coverage values for triangles can be complicated and expensive. In practice, the pure area-based methods do not lead directly to simple, fast hardware …


[Figure 9.26: Common sample-point distributions for multisample-based antialiasing: 2 samples; 4 samples; 9 samples; 4 samples, rotated.]

… the fragment shader itself as many as M times more frequently per frame than normal rendering. This per-sample full rendering pipeline is very powerful, since each sample truly represents the color of the geometry at that sample. It is also extremely expensive, requiring the entire rasterization pipeline to be invoked per sample and thus increasing …


… color is stored for each visible sample that the fragment covers. The existing color at a sample (from an earlier fragment) may be replaced with the new fragment's color. But this is done at a per-sample level. At the end of the frame, a "resolve" is still needed to compute the final color of the pixel from the multiple samples. However, only a coverage value (a simple geometric operation) and possibly a depth value is computed per sample, per fragment. The …


… MSAA-compatible framebuffers. Some rendering APIs allow the application to specify the number and even the layout of samples in the pixel format, while others simply use a single flag for enabling a single (unspecified) level of MSAA. Iv does not support MSAA, so we will describe the methods used in OpenGL and D3D to enable it.

In OpenGL, the creation of the framebuffer is platform-specific. As a result, the specification of MSAA is also platform-specific, often …


9.10 Chapter Summary

… depth buffering system can help a programmer build a scene that avoids visual artifacts during visible surface determination. Understanding the inner workings of rasterizers can help a 3D programmer quickly debug problems in the geometry pipeline. Finally, this knowledge can guide the programmer to better optimize their geometry pipeline, "feeding" their rasterizer with high-performance datasets.


Chapter 10 Interpolation

10.1 Introduction

Up to this point, we have considered only motions (more specifically, transformations) that have been created programmatically. In order to create a particular motion (e.g., a submarine moving through the world), we have to write a specific program to generate the appropriate sequence of transformations for our model. However, this takes time and it can be quite tedious to move objects in this fashion. It would be much more convenient to predefine our …


However, there are a number of problems with this. First, by setting the animation set to a rate of 60 f.p.s. and then playing it back directly, we have effectively locked the frame rate for the game at 60 f.p.s. as well. Many monitors can run at 85 f.p.s., and when running in windowed mode, the graphics can be updated much faster than that. It would be much better if we could find some way to generate 85 f.p.s. or more from a 60 f.p.s. dataset. In other words, we need …


… approximating position. Next, we'll look at how we can extend those techniques for orientation. Finally, we'll look at some applications, in particular, the motion of a constrained camera.

10.2 Interpolation of Position

10.2.1 General Definitions

The general class of functions we'll be using for both interpolating and approximating are called parametric curves. We can think of a curve as a squiggle in space, where the parameter controls where we are in the …


This can be taken further: A function f(x) has tangential, or C1, continuity across an interval (a, b) if the first derivative f′(x) of the function is continuous across the interval. In our case, the derivative Q′(u) for parameter u is a tangent vector to the curve at location Q(u). Correspondingly, the derivative of a space curve is Q′(u) = (x′(u), y′(u), z′(u)).

Occasionally, we may be concerned with C2 continuity, also known as curvature continuity. A function f(x) has …


10.2.2 Linear Interpolation

Definition

The most basic parametric curve is our example above: a line passing through two points. By using the parameterized line equation based on the two points, we can generate any point along the line. This is known as linear interpolation and is the most commonly used form of interpolation in game programming, mainly because it is the fastest. From our familiar line equation,

Q(u) = P0 + u(P1 − P0)

we can rearrange to get

Q(u) = (1 − u)P0 + uP1 …
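In code, the rearranged form maps directly to a component-wise weighted sum. A minimal sketch, using our own small Vec3 type rather than the book's IvVector3:

```cpp
struct Vec3 { float x, y, z; };

// Linear interpolation: Q(u) = (1 - u)P0 + u P1, evaluated per component.
Vec3 Lerp(const Vec3& p0, const Vec3& p1, float u)
{
    float v = 1.0f - u;
    return { v * p0.x + u * p1.x,
             v * p0.y + u * p1.y,
             v * p0.z + u * p1.z };
}
```

At u = 0 this returns P0, at u = 1 it returns P1, and at u = 1/2 the midpoint of the segment.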


With this formulation, the result UMG will be a 1 × 3 matrix:

UMG = [x(u)  y(u)  z(u)] = [(1 − u)x0 + ux1   (1 − u)y0 + uy1   (1 − u)z0 + uz1]

This is counter to our standard convention of using column vectors. However, rather than write out G as individual coordinates, we can write G as a column matrix of n points, where for linear interpolation this is

G = [P0]
    [P1]

Then, using block matrix multiplication, the result UMG becomes

UMG = (1 − u)P0 + uP1

This form allows us to use a convenient shorthand to …


For a given time value t, we need to find the stored time values ti and ti+1 such that ti ≤ t ≤ ti+1. From there we look up their corresponding Pi and Pi+1 values and interpolate. If we start with n + 1 points, we will end up with a series of n segments labeled Q0, Q1, ..., Qn−1. Each Qi is defined by points Pi and Pi+1, where

Qi(u) = (1 − u)Pi + uPi+1

and Qi(1) = Qi+1(0). This last condition guarantees C0 continuity. This is expressed as code as follows:

IvVector3 …


[Figure 10.1: Piecewise linear interpolation through points P0, P1, P2, P3 with segments Q0, Q1, Q2.]

… interpolation of n + 1 points as a single function f(t) over [t0, tn], we find that the derivative f′(t) is discontinuous at the sample points, so f(t) is not C1 continuous. In animation this expresses itself as sudden changes in the speed and direction of motion, which may not be desirable. Despite this, because of its speed, piecewise linear interpolation is a reasonable choice if the slopes of the piecewise line segments …


[Figure 10.2: Hermite curve Q0 between P0 and P1, with tangents P0′ and P1′.]

Using our given constraints, or boundary conditions, let's derive our cubic equation. A generalized cubic function and corresponding derivative are

Q(u) = au³ + bu² + cu + D        (10.2)
Q′(u) = 3au² + 2bu + c          (10.3)

We'll solve for our four unknowns a, b, c, and D by using our four boundary conditions. We'll assume that when u = 0, Q(0) = P0 and Q′(0) = P0′. Similarly, at u = 1, Q(1) = P1 and Q′(1) = P1′. Substituting these values …


Substituting our now known values for a, b, c, and D into equation 10.2 gives

Q(u) = [2(P0 − P1) + P0′ + P1′]u³ + [3(P1 − P0) − 2P0′ − P1′]u² + P0′u + P0

This can be rearranged in terms of the boundary conditions to produce our final equation:

Q(u) = (2u³ − 3u² + 1)P0 + (−2u³ + 3u²)P1 + (u³ − 2u² + u)P0′ + (u³ − u²)P1′

This is known as a Hermite curve. We can also represent this as the product of a matrix multiplication, just as we did with linear interpolation. In this case, the …
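The final equation evaluates directly as four basis-function weights applied to the two endpoints and two tangents. A sketch with our own Vec3 type (not the book's Iv classes):

```cpp
struct Vec3 { float x, y, z; };

static Vec3 Scale(const Vec3& v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Evaluate a cubic Hermite segment from endpoints p0, p1 and tangents
// t0, t1, using the basis functions derived above.
Vec3 Hermite(const Vec3& p0, const Vec3& p1,
             const Vec3& t0, const Vec3& t1, float u)
{
    float u2 = u * u, u3 = u2 * u;
    float h00 =  2.0f * u3 - 3.0f * u2 + 1.0f;  // weight for p0
    float h01 = -2.0f * u3 + 3.0f * u2;         // weight for p1
    float h10 =  u3 - 2.0f * u2 + u;            // weight for t0
    float h11 =  u3 - u2;                       // weight for t1
    return Add(Add(Scale(p0, h00), Scale(p1, h01)),
               Add(Scale(t0, h10), Scale(t1, h11)));
}
```

At u = 0 the weights reduce to (1, 0, 0, 0), giving P0; at u = 1 they reduce to (0, 1, 0, 0), giving P1, matching the boundary conditions.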


[Figure 10.3: Piecewise Hermite curve. Tangents at P1 match direction and magnitude.]

Figure 10.3 shows this situation in the piecewise Hermite curve. The above assumes that our time values occur at uniform intervals; that is, there is a constant Δt between t0 and t1, and t1 and t2, etc. However, as mentioned under linear interpolation, the difference between time values ti to ti+1 may vary from segment to segment. The solution …


[Figure 10.4: Hermite curve with (a) small tangent and low curvature and (b) large tangent and higher curvature.]

[Figure 10.5: Piecewise Hermite curve. Tangents at P1 have same direction but differing magnitudes.]

There is, of course, no reason that the tangents Q′i(1) and Q′i+1(0) have to match. One possibility is to match the tangent directions but not the tangent magnitudes — this gives us G1 continuity. The …


[Figure 10.6: Piecewise Hermite curve. Tangents at P1 have differing directions and magnitudes.]

[Figure 10.7: Possible interface for Hermite curves, showing in–out tangents.]

… set two tangents at each internal sample point Pi, which we'll express as P′i,1 (the "incoming" tangent) and P′i,0 (the "outgoing" tangent). Alternatively, we can think of a curve segment as being defined by two points Pi and Pi+1, and two tangents …


… by allowing three different tangent types. For example, Jasc's Paint Shop Pro refers to them as symmetric, asymmetric, and cusp. With the symmetric node, clicking and dragging on one of the segment ends rotates both segments and changes their lengths equally, to maintain equal tangents. With an asymmetric node, clicking and dragging will rotate both segments to maintain equal direction but change only the length of the particular tangent clicked on. And with a cusp, …


This can be rewritten to place our knowns on one side of the equation and unknowns on the other:

2Pi′ + 8P′i+1 + 2P′i+2 = 6[(Pi+2 − Pi+1) + (Pi+1 − Pi)]

This simplifies to

Pi′ + 4P′i+1 + P′i+2 = 3(Pi+2 − Pi)

Applying this to all of our sample points {P0, ..., Pn} creates n − 1 linear equations. This can be written as a matrix product as follows:

[1 4 1 0 ··· 0 0 0]
[0 1 4 1 ··· 0 0 0]
[       ⋱        ]
[0 0 ··· 1 4 1 0 0]
[0 0 0 ··· …


Solving this system of equations gives us the appropriate tangent vectors. This is not as bad as it might seem. Because this matrix (known as a tridiagonal matrix) is sparse and extremely structured, the system is very easy and efficient to solve using a modified version of Gaussian elimination known as the Thomas algorithm.

If we express our tridiagonal matrix generally as

[b0 c0 0  0  ··· 0  0]
[a1 b1 c1 0  ··· 0  0]
[0  a2 b2 c2 ··· 0  0]
[         ⋱          ]
[0  0  ··· …


Natural End Conditions

Source Code Demo: AutoHermite

In the preceding examples, we generated splines assuming that the beginning and end tangents were clamped to values set by the programmer or the user. This may not be convenient; we may want to avoid specifying tangents at all. An alternative approach is to set conditions on the end tangents, just as we did with the internal tangents, to reduce the amount of input needed. One such possibility is to assume that the second …


We can substitute equations 9.12 and 9.13 for our first and last equations in the clamped case, to get the following matrix product:

[2 1 0 0 ··· 0 0] [P0′  ]   [3(P1 − P0)    ]
[1 4 1 0 ··· 0 0] [P1′  ]   [3(P2 − P0)    ]
[0 1 4 1 ··· 0 0] [ ⋮   ]   [3(P3 − P1)    ]
[       ⋱       ] [     ] = [      ⋮       ]
[0 0 ··· 1 4 1 0] [     ]   [3(Pn−1 − Pn−3)]
[0 0 0 ··· 1 4 1] [P′n−1]   [3(Pn − Pn−2)  ]
[0 0 ··· 0 0 1 2] [Pn′  ]   [3(Pn − Pn−1)  ]


[Figure 10.8: Automatic generation of tangent vector at P0, based on positions of P1 and P2, showing (P2 − P1) and −(P2 − P1).]

This provides a definition for curve segments Q1 to Qn−2, so it can be used to generate a C1 curve from P1 to Pn−1. However, since there is no P−1 or Pn+1, we once again have the problem that curves Q0 and Qn−1 are not valid due to undefined tangents at the endpoints. And as before, these either can be provided by the artist or …


Rewriting in terms of P0, P1, and P2 gives

Q0(u) = ((1/2)u² − (3/2)u + 1)P0 + (−u² + 2u)P1 + ((1/2)u² − (1/2)u)P2

As before, we can write this in matrix form:

Q0(u) = [u² u 1] (1/2)[ 1 −2  1] [P0]
                      [−3  4 −1] [P1]
                      [ 2  0  0] [P2]

A similar process can be used to derive Qn−1:

Qn−1(u) = [u² u 1] (1/2)[ 1 −2  1] [Pn−2]
                        [−1  0  1] [Pn−1]
                        [ 0  2  0] [Pn  ]

10.2.5 Kochanek-Bartels Splines

Source Code Demo: Kochanek

An extension of Catmull-Rom splines are Kochanek-Bartels splines [66]. Like Catmull-Rom splines, the …


… through the point will change from a very rounded curve to a very tight curve. One can think of it as increasing the influence of the control point on the curve (Figure 10.9(a)).

Continuity does what one might expect — it varies the continuity at the control point. A continuity setting of 0 means that the curve will have C1 continuity at that point. As the setting approaches −1 or 1, the curve will end up with a corner at that point; the sign of the continuity …


Note that these splines have the same problem as Catmull-Rom splines with undefined tangents at the endpoints, as there is only one neighboring point. As before, this can be handled by the user setting these tangents by hand or building quadratic curves for the first and last segments. The process for generating these is similar to what we did for Catmull-Rom splines.

Kochanek-Bartels splines are useful because they provide more control over the resulting curve than …


… curve mimics the shape of the control polygon. Note that the four points in this case do not have to be coplanar, which means that the curve generated will not necessarily lie on a plane either.

The tangent vector at point P0 points in the same direction as the vector P1 − P0. Similarly, the tangent at P3 has the same direction as P3 − P2. As we will see, there is a definite relationship between these vectors and the tangent vectors used in Hermite curves. For now, we …


… game Quake 3 used them) but don't have quite the flexibility of cubic curves. For example, they don't allow for the familiar S shape in Figure 10.10(b). To generate something similar with quadratic curves requires two piecewise curves, and hence more data.

The standard representation of an order n Bézier curve is to use an ordered list of points P0, ..., Pn as the control points. Using this representation, we can expand the general definition to get the formula for the …
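For the cubic (order 3) case, expanding the general definition gives the familiar Bernstein weights, which can be evaluated directly. A sketch with our own Vec3 type (not the book's Iv classes):

```cpp
struct Vec3 { float x, y, z; };

// Cubic Bézier evaluation in Bernstein form:
// Q(u) = (1-u)^3 P0 + 3u(1-u)^2 P1 + 3u^2(1-u) P2 + u^3 P3
Vec3 EvaluateBezier3(const Vec3 P[4], float u)
{
    float v = 1.0f - u;
    float b0 = v * v * v;
    float b1 = 3.0f * u * v * v;
    float b2 = 3.0f * u * u * v;
    float b3 = u * u * u;
    return { b0 * P[0].x + b1 * P[1].x + b2 * P[2].x + b3 * P[3].x,
             b0 * P[0].y + b1 * P[1].y + b2 * P[2].y + b3 * P[3].y,
             b0 * P[0].z + b1 * P[1].z + b2 * P[2].z + b3 * P[3].z };
}
```

At u = 0 only b0 is nonzero (the curve starts at P0); at u = 1 only b3 is nonzero (it ends at P3); the interior control points P1 and P2 shape the curve without being interpolated.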


Piecewise Bézier Curves

As with linear interpolation and Hermite curves, we can interpolate a curve through more than two points by creating curve segments between each neighboring pair of interpolation points. Many of the same principles apply with Bézier curves as did with Hermite curves. In order to maintain matching direction for our tangents, giving us G1 continuity, each interpolating point and its neighboring control points need to be collinear. To obtain …


… automatically generating Bézier control points from a set of sample positions, as shown in Figure 10.14. Given four points Pi−1, Pi, Pi+1, and Pi+2, we want to compute the two control points between Pi and Pi+1. We compute the tangent vector at Pi by computing the difference between Pi+1 and Pi−1. From that we can compute the first control point as Pi + 1/3(Pi+1 − Pi−1). The same can be done to create the second control point as Pi+1 − 1/3(Pi+2 − Pi). This is …


… this, B-splines are not yet in common usage in games, either for animation or surface construction.

B-splines are computed similarly to Bézier curves. We set up a basis function for each control point in our curve, and then for each parameter value u, we multiply the appropriate basis function by its point and add the results. In general, this can be represented by

Q(u) = Σ(i = 0 to n) Pi Bi(u)

where each Pi is a point and Bi is a basis function for that point. The basis functions …


… and surfaces, they are extremely useful in computer-aided design (CAD) systems and modeling for computer animation. Like B-splines, rational curves and particularly NURBS are not yet used much in games because of their relative performance cost and because of concern by artists about lack of control.

10.3 Interpolation of Orientation

So far in our exploration of animation we've considered only interpolation of position. For a coordinate frame, this means only translating …


[Figure 10.16: Example of skeleton showing relationship between bones.]

… same is true of interpolating orientation, except that our curve doesn't pass through a series of positions, but a series of orientations. We can think of this as wanting to interpolate from one coordinate frame to another. If we were simply interpolating two vectors v1 and v2, we could find the rotation between them via the axis–angle representation (θ, r), and then interpolate by rotating v1 …


[Figure 10.17: Relative bone poses B0, B1, B2 for bending arm.]

… simultaneously. We could use the same process for all three basis vectors, but it's not guaranteed that they will remain orthogonal. What we would need to do is find the overall rotation in axis–angle form from one coordinate frame to another, and then apply the process described. This is not a simple thing to do, and as it turns out, there are better ways. However, for fixed angles and axis–angle formats, we …


… (90, 22.5, 90). The consequence of interpolating linearly from one sequence of Euler angles to another is that the object tends to sidle along, rotating around mostly one axis and then switching to rotations around mostly another axis, instead of rotating around a single axis, directly from one orientation to another.

We can mitigate this problem by defining Hermite or higher-order splines to better control the interpolation, and some 3D modeling packages …


… quaternion to a rotation of 90 degrees around z. This second quaternion is (√2/2, 0, 0, √2/2). The resulting interpolated quaternion when t = 1/2 is

r = (1/2)(1, 0, 0, 0) + (1/2)(√2/2, 0, 0, √2/2) = ((2 + √2)/4, 0, 0, √2/4)

The length of r is 0.9239 — clearly, not 1. Just as with matrices, we had to reorthogonalize after performing linear interpolation; with quaternions we will have to renormalize. Fortunately, this is a cheaper operation than orthogonalization, so quaternions have the …
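The lerp-then-renormalize step ("nlerp") is straightforward to sketch. A minimal version with our own Quat type (not the book's IvQuat):

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Linear interpolation of quaternions followed by renormalization.
// The raw lerp result is generally not unit length (the example in the
// text yields length ~0.9239), so we divide through by the length.
Quat Nlerp(const Quat& p, const Quat& q, float t)
{
    float s = 1.0f - t;
    Quat r = { s * p.w + t * q.w, s * p.x + t * q.x,
               s * p.y + t * q.y, s * p.z + t * q.z };
    float len = std::sqrt(r.w * r.w + r.x * r.x + r.y * r.y + r.z * r.z);
    r.w /= len; r.x /= len; r.y /= len; r.z /= len;
    return r;
}
```

For the example above, Nlerp of the identity and the 90-degree z rotation at t = 1/2 gives the unit quaternion for a 45-degree z rotation.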


[Figure 10.19: Linear orientation interpolation, showing intermediate vectors tracing a path along the line.]

[Figure 10.20: Effect of linear orientation interpolation on arc length when interpolating over 1/4 intervals.]

Those closest to the center of interpolation are longer. The effect is that instead of moving at a constant rate of rotation throughout the interpolation, we will move at a slower rate at the endpoints and faster in the middle. This is particularly noticeable …


This is clearly not a rotation matrix, and no amount of orthogonalization will help us. The problem is that our two rotations (a rotation of π/2 around y and a rotation of −π/2 around y, respectively) produce opposing orientations — they're 180 degrees apart. As we interpolate between the pairs of transformed i and k basis vectors, we end up passing through the origin.

Quaternions are no less susceptible to this. Suppose we have a rotation of π radians counterclockwise …


r = (1/2)(−1, 0, 0, 0) + (1/2)(−√2/2, 0, 0, −√2/2) = (−(2 + √2)/4, 0, 0, −√2/4)

This gives us the negation of our original result, but this isn't a problem, as it will rotate to the same orientation. This also takes care of the case of interpolating from a quaternion to its negative, so for example, interpolating from (0, 0, 1, 0) to (0, 0, −1, 0) is

r = −(1/2)(0, 0, 1, 0) + (1/2)(0, 0, −1, 0) = (0, 0, −1, 0)

Negating the first one ends up interpolating to and from the same …


[Figure 10.21: Effect of spherical linear interpolation when interpolating at quarter intervals (p to q). Interpolates equally along arc and angle.]

[Figure 10.22: Construction for quaternion slerp. Angle θ between p and q is divided by interpolant t into subangles tθ (to r) and (1 − t)θ.]

… and convert it to the slerp function,

slerp(P, Q, t) = P(P⁻¹Q)ᵗ

For matrices, the question is how to take a matrix R to a power t. We can use a method provided by Eberly [26] as follows. Since we know that R is a …


… quaternion r. The angle between p and q is θ, calculated as θ = arccos(p · q). Since slerp interpolates the angle, the angle between p and r will be a fraction of θ as determined by t, or tθ. Similarly, the angle between r and q will be (1 − t)θ.

The general interpolation of p and q can be represented as

r = a(t)p + b(t)q        (10.15)

The goal is to find two interpolating functions a(t) and b(t) so that they meet the criteria for slerp. We determine these as follows. If we take …


… it is much cheaper than the matrix method. It is clearly preferable to use quaternions versus matrices (or any other form) if you want to interpolate orientation.

One thing to notice is that as θ approaches 0 (i.e., as p and q become close to equal), sin θ and thus the denominator of the slerp function approach 0. Testing for equality is not enough to catch this case, because of finite floating-point precision. Instead, we should test cos θ before proceeding. If it's close …
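Putting the pieces together, a quaternion slerp sketch (our own minimal Quat type, not the book's IvQuat), including the shorter-arc fix from the negation discussion and the small-angle fallback just described; the 0.9995 threshold is our assumption, not a value from the text:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Spherical linear interpolation of unit quaternions:
// r = sin((1-t)theta)/sin(theta) * p + sin(t*theta)/sin(theta) * q
Quat Slerp(Quat p, Quat q, float t)
{
    float cosTheta = p.w * q.w + p.x * q.x + p.y * q.y + p.z * q.z;
    // take the shorter arc by negating one quaternion if needed
    if (cosTheta < 0.0f)
    {
        cosTheta = -cosTheta;
        q.w = -q.w; q.x = -q.x; q.y = -q.y; q.z = -q.z;
    }
    float wp, wq;
    if (cosTheta > 0.9995f)
    {
        // nearly parallel: sin(theta) ~ 0, so fall back to plain lerp
        wp = 1.0f - t;
        wq = t;
    }
    else
    {
        float theta = std::acos(cosTheta);
        float s = std::sin(theta);
        wp = std::sin((1.0f - t) * theta) / s;
        wq = std::sin(t * theta) / s;
    }
    return { wp * p.w + wq * q.w, wp * p.x + wq * q.x,
             wp * p.y + wq * q.y, wp * p.z + wq * q.z };
}
```

Halfway between the identity and a 90-degree z rotation this yields the 45-degree z rotation (cos 22.5°, 0, 0, sin 22.5°).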


10.3.4 Performance Improvements

Source Code Demo: SlerpApprox

As we've seen, using slerp for interpolation, even when using quaternions, can take quite a bit of time — something we don't usually have. A typical character can have 30+ bones, all of which are being interpolated once a frame. If we have a team of characters in a room, there can be up to 20 characters being rendered at one time. The less time we spend interpolating, the better. The simplest speed-up …


where

k = 0.5069269(1 − 0.7878088 cos θ)²

and cos θ is the dot product between the two quaternions. This technique tends to diverge from the slerp result when t > 0.5, so Blow recommends detecting this case and swapping the two quaternions (i.e., interpolate from q to p instead of from p to q). In this way our interpolant always lies between 0 and 0.5. The nice thing about this method is that it requires very few floating-point operations, doesn't involve any …


An alternative that is slightly faster is to use Horner's rule, which expresses the same cubic curve as

Q(u) = ((au + b)u + c)u + D

This will take only 9 multiplies and 9 adds per point. In addition, it can actually improve our floating-point accuracy under certain circumstances.

10.4.1 Forward Differencing

Previously we assumed that there is no pattern to how we evaluate our curve. Suppose we know that we want to sample our curve at even intervals of u, say at a time step of every …


…
{
    u += h;
    x += dx1;
    output(x);
    dx1 = (3ah)uˆ2 + (3ahˆ2 + 2bh)u + (ahˆ3 + bhˆ2 + ch);
}

While we have removed the cubic equation, we have introduced evaluation of a quadratic equation Δx1(u). Fortunately, we can perform the same process to simplify this equation. Computing the difference between Δx1(u + h) and Δx1(u) as Δx2(u), we get

Δx2(u) = Δx1(u + h) − Δx1(u)
       = (3ah)(u + h)² + (3ah² + 2bh)(u + h) + (ah³ + bh² + ch)
         − [(3ah)u² + (3ah² + 2bh)u + (ah³ + bh² + ch)]
       = 3ahu² + 6ah²u + 3ah³ + (3ah² + 2bh)u + …


Δx3(u) = Δx2(u + h) − Δx2(u)
       = 6ah²(u + h) + (6ah³ + 2bh²) − [6ah²u + (6ah³ + 2bh²)]
       = 6ah²u + 6ah³ + (6ah³ + 2bh²) − 6ah²u − (6ah³ + 2bh²)
       = 6ah³

Our final code for forward differencing becomes the following:

x = d;
output(x);
dx1 = ahˆ3 + bhˆ2 + ch;
dx2 = 6ahˆ3 + 2bhˆ2;
dx3 = 6ahˆ3;
for ( i = 1; i <= n; i++ )
{
    x += dx1;
    output(x);
    dx1 += dx2;
    dx2 += dx3;
}

We have simplified our evaluation of x from 3 multiplies and 3 adds, down to 3 adds. We'll have to perform similar calculations for y and z, with …
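The pseudocode above translates directly into a runnable sketch. Here we collect the n + 1 samples of one cubic component x(u) = au³ + bu² + cu + d at step h = 1/n into a vector instead of calling output():

```cpp
#include <vector>

// Forward differencing of a cubic x(u) = a u^3 + b u^2 + c u + d,
// sampled at u = 0, h, 2h, ..., 1 with h = 1/n. Each sample after the
// first costs only three additions.
std::vector<double> SampleCubic(double a, double b, double c, double d, int n)
{
    double h = 1.0 / n;
    double x   = d;                                    // x(0)
    double dx1 = a * h * h * h + b * h * h + c * h;    // first difference at u = 0
    double dx2 = 6.0 * a * h * h * h + 2.0 * b * h * h; // second difference at u = 0
    double dx3 = 6.0 * a * h * h * h;                  // constant third difference
    std::vector<double> out;
    out.push_back(x);
    for (int i = 1; i <= n; ++i)
    {
        x   += dx1;
        dx1 += dx2;
        dx2 += dx3;
        out.push_back(x);
    }
    return out;
}
```

For example, sampling x(u) = u³ with n = 2 produces exactly 0, 0.125, and 1.0, matching direct evaluation at u = 0, 1/2, 1.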


… more accurate and more efficient representation of the curve than forward differencing, since more curve segments will be generated in areas with high curvature (areas that we might cut across with forward differencing) and fewer in areas with lower curvature.

We can perform this subdivision by taking a curve Q(u) and breaking it into two new curves L(s) and R(t), usually at the midpoint Q(1/2). In this case, L(s) is the subcurve of Q(u) where 0 ≤ u ≤ 1/2, and R(t) is the …


[Figure 10.23: Midpoint test for curve straightness. (a) Total distance from endpoints to midpoint (black dot) is compared to distance between endpoints; (b) example of midpoint test failure.]

[Figure 10.24: Test of straightness for Bézier curve. Measure distance of P1 and P2 to line segment P0P3.]

The convex hull properties of the Bézier curve lead to a particularly efficient method for testing straightness, with no need to calculate a midpoint. If the interior …


[Figure 10.25: de Casteljau's method for subdividing Bézier curves, with L0 = P0, P3 = R3, L3 = R0, and intermediate points L1, L2, H, R1, R2.]

Figure 10.25 shows the construction for a cubic Bézier curve. L0 and R3 are already known: They are the original control points P0 and P3, respectively. Point L1 lies on segment P0P1 at position (1 − u)P0 + uP1. Similarly, point H lies on segment P1P2 at (1 − u)P1 + uP2, and point R2 at (1 − u)P2 + uP3. We then linearly interpolate along the newly formed line segments L1H and …


… points. At first glance, computing the length of a curve may not appear to be very related to sampling and tessellation. However, as mentioned above, some methods for subdividing a curve require knowing the arc lengths of subsections of that curve. Also, as we'll see, some arc length methods require sampling the curve to obtain their results.

The most accurate method of computing the length of a smooth curve (see Appendix B on the CD-ROM) Q(u) from Q(a) to Q(b) is to directly …

  • Page - 509

computing arc length. Similar to using adaptive subdivision for rendering, we cut the current curve segment in half. We use Gaussian quadrature to measure the length of each half, and compare their sum to the length of the entire curve, again computed using Gaussian quadrature. If the results are close enough, we stop and return the sum of the lengths of the two halves. Otherwise, we recursively compute their lengths via subdivision.

There are other arc length methods that

If we are using cubic Bézier curves, we can use a method described by Gravesen [47]. First of all, given a parameter u we can subdivide the curve (using de Casteljau's method) to be the subcurve from [0, u]. The new control points for this new subcurve can be used to calculate bounds on the length. The length of the curve is bounded by the length of the chord P0P3 as the minimum, and the sum of the lengths of the line segments P0P1, P1P2, and P2P3 as the maximum. We can

10.5 Controlling Speed along a Curve

10.5.1 Moving at Constant Speed

Source Code Demo: SpeedControl

One common requirement for animation is that the object animated move at a constant speed along a curve. However, in most interesting cases, using a given curve directly will not achieve this. The problem is that in order to achieve variety in curvature, the first derivative must vary as well, and hence the distance we travel in a constant time will vary depending on where

Suppose we have a function f(x) where we want to find p such that f(p) = 0. We begin with a guess for p, which we'll call x̄, such that f′(x̄) ≠ 0 and |p − x̄| is relatively small. In other words, x̄ may not quite be p, but it's a pretty good guess. If we use x̄ as a basis for the Taylor series polynomial, we get

    f(x) = f(x̄) + (x − x̄)f′(x̄) + (1/2)(x − x̄)² f″(ξ(x))

We assume that ξ(x) is bounded by x and x̄, so we can ignore the remainder of the terms. If we

    // make first guess
    float p = u1 + s/len;

    for ( int i = 0; i < MAX_ITER; ++i )
    {
        // compute function value and test against zero
        float func = ArcLength(u1, p) - s;
        if ( fabsf(func) < EPSILON )
        {
            return p;
        }
        // perform Newton-Raphson iteration step
        p -= func/Length(Derivative(p));
    }

    // done iterating, return last guess
    return p;
}

The first test ensures that the distance we wish to travel is not greater than the remaining length of the curve. In this case, we assume that this is the last

hybrid approach will look like the following (for brevity's sake we have only included the parts that are different):

float FindParameterByDistance( float u1, float s )
{
    // set endpoints for bisection
    float a = u1;
    float b = 1.0f;

    // ensure that we remain within valid parameter space
    // get total length of curve
    // make first guess

    for ( int i = 0; i < MAX_ITER; ++i )
    {
        // compute function value and test against zero

        // update endpoints for bisection
        if (func <

enough. If the speed is nonzero but sufficiently small, func/speed could be sufficiently large to cause us to step outside the bisection interval or even the valid parameter space of the curve. So that gives us our test: If p - func/speed is less than a or greater than b, use bisection. We can write this as follows.

    if (p - func/speed < a || p - func/speed > b)
        // do bisection step
    else
        // perform Newton-Raphson iteration step

Multiplying by speed and rearranging terms gives us

Again, we can use linear interpolation to approximate the parameter u, which gives us length s, as

    u ≈ ((s_{j+1} − s)/(s_{j+1} − s_j)) u_j + ((s − s_j)/(s_{j+1} − s_j)) u_{j+1}

To find the parameter b given a starting parameter a and a length s, we compute the length at a and add that to s. We then use the preceding process with the total length to find parameter b.

The obvious disadvantage of this scheme is that it takes additional memory for each curve. However, it is simple to implement,

need to accelerate a physical camera, move it, and slow it down to a stop. Figure 10.28 shows the distance–time graph for one such function.

Parent [88] describes two methods for constructing ease-in/ease-out distance–time functions. One is to use sinusoidal pieces for the acceleration/deceleration areas of the function and a constant velocity in the middle. The pieces are carefully chosen to ensure C1 continuity over the entire function. The second method involves

particular arrival and departure characteristics. Standard parlance includes such terms as fast-in, fast-out, slow-in, and slow-out. In and out in this case refer to the incoming and outgoing speed at the key point, respectively; fast means that the speed is greater than 1, and slow that it is less than 1. An example curve with both fast-in/fast-out and slow-in/slow-out can be seen in Figure 10.30. There also can be linear keys, which represent the linear rate

With all of these, the final distance–time curve can be easily generated with the techniques described in Section 10.2.3. More detail can be found in Van Verth [114].

10.6 Camera Control

Source Code Demo: CameraControl

One common use for a parametric curve is as a path for controlling the motion of a virtual camera. In games this comes into play most often when setting up in-game cinematics, where we want to play a series of scripted events in engine while giving a game a

As mentioned, we set T = Q′(u). We compute B as the cross product of the first and second derivatives:

    B = Q′(u) × Q″(u)

Then, finally, N is the cross product of the other two:

    N = B × T

Normalizing T, N, and B gives us our orthonormal basis.

Parent [88] describes a few flaws with using the Frenet frame directly. First of all, the second derivative may be 0, which means that B and hence N will be 0. One solution is to interpolate between two frames on either side of our current

Figure 10.33 Frame interpolation issues. Discontinuity of second derivative at point.

the direction of the second derivative points generally down along that section of path. This means that our view up vector will end up parallel to the ground for that section of curve — again, probably not the intention of the animator.

A further refinement of this technique is to use something called the parallel transport frame [53]. This is an extension of the interpolation

We can take this one step further by separating our view direction from the Frenet frame and using our familiar look-at point method, again from Chapter 6. The choice of what we use as our look-at point can depend on the camera effect desired. For example, we might pick a fixed point on the ground and then perform a fly-by. We could use the position of an object or the centroid of positions for a set of objects. We could set an additional path, and use the position along that

and explain some of the numerical methods used with curves, in particular integration techniques and the Newton-Raphson method.

We have not discussed parametric surfaces, but many of the same principles apply: Surfaces are approximated or interpolated by a grid of points and are usually rendered using a subdivision method. Rogers [97] is an excellent resource for understanding how NURBS surfaces, the most commonly used parametric surfaces, are created and used.

Chapter 11
Random Numbers

11.1 Introduction

Now that we've spent some time in the deterministic worlds of pure mathematics, graphics, and interpolation, it's time to look at some techniques that can make our world look less structured and more organic. We'll begin in this chapter by considering randomness and generating random numbers in the computer.

So why do we need random numbers in games? We can break down our needs into a few categories: the basic randomness needed for games of chance, as in

Chevalier de Méré. His question was: Which is more likely, rolling at least one 6 in 4 throws of a single die, or at least one double 6 in 24 throws of a pair of dice? (We'll answer this question at the end of the next section.)

These days probability can be used to predict the likelihood of other events such as the weather (i.e., the chance of rain is 60 percent) and even human behavior. In the following section we will summarize some elements of probability, enough for

second is the Bayesian approach, which is more philosophical and is based on the fact that many events are not in practice repeatable. The probability of such events is based on a personal assessment of likelihood. Both have their applications, but for the purposes of this chapter, we will be focusing on the frequentist definition.

As an example of the law of large numbers, look at Figure 11.1. Figure 11.1(a) shows the result of a computer simulation of rolling a fair die 1,000

Figure 11.1 (a) Simulation results for rolling a fair die 1,000 times, and (b) simulation results for rolling a loaded die 1,000 times.

throws of a single die? We'll represent this as P(E). It's a little easier to turn this around and ask: What is the probability of not rolling a 6 in 4 throws of one die? We can call the event of not throwing a 6 on the ith roll Ai, and the probability of this event is P(Ai). Then the probability of all 4 is P(A1 and A2 and A3 and A4). As each roll is an independent event, we can just multiply the 4 probabilities together to get a probability of (5/6)^4. But this probability is P(not E),

Figure 11.2 Probability mass function for drawing one ball out of a jar with 3 red balls, 2 blue balls, and 5 yellow balls.

probability distribution function. This function has three important properties: its domain is the sample space of a random variable; for all values x, m(x) ≥ 0 (i.e., there are no negative probabilities); and the sum of the probabilities of all outcomes is 1, or

    Σ_{i=0}^{n−1} m(x_i) = 1

where n is the

Figure 11.3 Example of a probability density function.

encapsulate this. Figure 11.3 shows one such function over the canonical random variable. This function f(x) is known as a probability density function (PDF). It has characteristics similar to the probability mass function for the discrete case: all values f(x) ≥ 0, and the area under the curve is equal to 1. As with the discrete case, the second

Sometimes we want to know the probability of a random value being less than or equal to some value y. Using the mass function, we can compute this in the discrete case as

    F(y) = Σ_{x=x_0}^{y} m(x)

or in the continuous case using the density function as

    F(y) = ∫_{−∞}^{y} f(x) dx

This function F(x) is known as the cumulative distribution function (CDF). We can think of this as a cumulative sum across the domain. Note that because the CDF is the integral of the PDF in the continuous realm, the

11.2.3 Mean and Standard Deviation

Suppose we conduct N random trials with the random variable X, giving us results (or samples) X_0, X_1, ..., X_{N−1}. If we take what is commonly known as the average of the values, we get the sample mean

    X̄ = (1/N) Σ_{i=0}^{N−1} X_i

We can think of this as representing the center of the values produced. We can get some sense of the spread of the values from the center by computing the sample variance s² as

    s² = (1/(N − 1)) Σ_{i=0}^{N−1} (X_i − X̄)²

The larger the sample variance, the

and in the continuous case as

    σ² = ∫_{−∞}^{∞} (x − μ)² f(x) dx

As before, the square root of the variance, or σ, is called the standard deviation. We'll be making use of these quantities below, when we discuss the normal distribution and the Central Limit Theorem.

11.2.4 Special Probability Distributions

There are a few specific probability mass functions and probability density functions that are good to be aware of. The first is the uniform distribution. A uniform

This is known as the binomial coefficient. If we graph the result for n = 8, p = 2/3, and all values of k from 1 to n, we get a lopsided pyramid shape (Figure 11.5). Note that the mean lies near the peak of the pyramid. It will only lie at the peak if the result is symmetric, which only happens if the probability p = 1/2.

This discrete distribution can lead to a continuous density function. Suppose that n gets larger and larger. As n approaches ∞, the discrete distribution will start

Figure 11.6 The standard normal distribution.

Figure 11.7 shows a general normal distribution with a mean of 3.75 and a standard deviation of 2.4. For any value of p, the binomial distribution of n trials can be approximated by a normal distribution with μ = np and σ = √(np(1 − p)). Also, for a further intuitive sense of standard deviation, it's helpful to note that in the normal distribution 68 percent of results are within

Figure 11.7 General normal distribution with mean of 3.75 and standard deviation of 2.4.

11.3 Determining Randomness

Up to this point we have been talking about random variables and probabilities while dancing around the primary topic of this chapter — randomness. What does it mean for a variable to be random? How can we determine that our method for generating a random variable is, in fact, random? Unfortunately, as

process is random, we mean that it lacks bias and correlation. A biased process will tend toward a single value or set of values, such as rolling a loaded die. Informally, correlation implies that values within the sequence are related to each other by a pattern, usually some form of linear equation. As we will see, when generating random numbers on a computer we can't completely remove correlation, but we can minimize it enough so that it doesn't affect any random

However, in our case we're going to use a different technique known as Pearson's chi-square test, or more generally the chi-square (or χ²) test. Chi-square in this case indicates a certain probability distribution, so there can be other chi-square tests, which we won't be concerned with in this text.

To see how the chi-square test works, let's work through an example. Suppose we want to simulate the roll of two dice, summed together. The probabilities of each

o_i, square the result, and divide by the theoretical value. Sum all these up and you have the chi-square value. In equation form, this is

    V = Σ_i (e_i − o_i)²/e_i

where the sum is taken over all n categories. Using this, we can now compute the chi-square values for our two trials. For the first we get 1.269, and for the second we get 21.65.

Now that we have a chi-square value, we can compute a p-value. To do that, we compare our result against the chi-square distribution. Or more accurately, we compare against the

Figure 11.8 (a) The chi-square probability density function for values of k from 1 to 4, and (b) the chi-square cumulative density function for values of k from 1 to 4.

Table 11.1 Chi-square CDF values for various degrees of freedom

           p=0.01    p=0.05    p=0.1     p=0.9     p=0.95    p=0.99
    k=1    0.00016   0.00393   0.01579   2.70554   3.84146   6.63489
    k=2    0.02010   0.10259   0.21072   4.60518   5.99148   9.21035
    k=3    0.11480   0.35184   0.58438   6.25139   7.81472   11.3449
    k=4    0.29710   0.71072   1.06362   7.77943   9.48772   13.2767
    k=5    0.55430   1.14548   1.61031   9.23635   11.0704   15.0863
    k=6    0.87200   1.63538   2.20413   10.6446   12.5916   16.811
    k=7    1.23903   2.16734   2.83311   12.0170   14.0671   18.4753
    k=8    1.64650   2.73263   3.48954   13.3616   15.5073   20.0901

to the theoretical ones using the chi-square test. If the p-value generated is acceptable, we move on; otherwise, the random number generator has failed. Note that if a generator passes the test, it only means that the random number generator produces good results for that statistic. If the statistic is one we might use in our game, that might be good enough. If it fails, it may require more testing, since we might have gotten bad results for that one run. With this in

This is both easier to count, and the probabilities easier to compute. In general, if we're generating numbers from 0 to d − 1, with a poker hand of size k, Knuth gives the probability of r different values as

    p_r = (d(d − 1) ··· (d − r + 1)/d^k) S(k, r)

where

    S(k, r) = (1/r!) Σ_{j=0}^{r} (−1)^{r−j} C(r, j) j^k

(written here with S(k, r) for the bracketed Stirling notation and C(r, j) for the binomial coefficient). This last term is known as a Stirling number of the second kind, and counts the number of ways to partition k elements into r subsets.

These three are just a few of the possibilities. There are other tests,

This means that no point could be generated in the space between these planes — not very random. For many bad RNGs this can be seen by doing a two-dimensional (2D) plot; for others, a three-dimensional (3D) plot is necessary. Some extreme examples can be seen in Figure 11.9.

In fact, Marsaglia [71] showed that for certain classes of RNGs (the linear congruential generators, which we'll cover below) this alignment is impossible to avoid. For a given dimension k, the

Figure 11.9 Examples of randomly generating points that "stay mainly within the planes."

Why study random number algorithms when most languages these days come with a built-in RNG? The reason is that these built-in RNGs are usually not very random. Understanding why they are flawed is important if we intend on using them and working around their flaws, and understanding what makes a good generator is important if we want to create our own.

Our goal in building an RNG is to generate a series or stream of numbers with properties close to those of actual

The value m is often one more than the largest representable number, although as we'll see below, other values work better with certain algorithms.

One final concept we need to discuss before diving in is the period of a random number sequence. Because of the modulus, eventually all generators will repeat their values; you will end up generating your original seed values and the sequence will start again. For example, take this (very poor) RNG (please):

    x_n = (x_{n−1} +

where

    0 < m, 0 ≤ a < m, 0 ≤ c < m

In this case, m is called the modulus, a is called the multiplier, and c is called the increment. If c is 0, this is called a multiplicative congruential method; otherwise, it is a mixed congruential method.

Note that no matter what the values are, the maximum period is m. This makes sense; because we're only tracking one variable, if we ever repeat a value the sequence will begin again from that point. So, the maximum we can possibly do is

If d = 2^e, we can think of this as representing the e least significant bits of x_{n−1}. It can be shown that

    y_n = (a y_{n−1} + c) mod d

In other words, while our random sequence may have a maximum period of m, its least significant bits have a maximum period of d — they are much less random than the most significant bits.

This really only comes into play if we're using our RNG for small value simulations such as rolling dice. One solution is to shift the result from

    x = a*(x%q) - r*(x/q);
    if (x <= 0)
        x += m;

Note that this only works if r < q; otherwise overflow will still occur.

Of course, a simpler solution is to do our calculations in a larger word size and truncate down to our desired, smaller word size (i.e., compute in 64-bit integers for a 32-bit result), but that assumes this option is available to us.

Choosing the Multiplier

So these are our two logical possibilities for a modulus: either a power of 2 or a large prime

of the spectral test. Fortunately for us, many people have already done studies of the primitive elements for specific values of m. A fairly recent work by L'Ecuyer [67] in particular has laid out tables of possible values for all of the cases we're interested in, including the power of two cases with no addition. Using a value from these tables will guarantee excellent results. For our generators, we have chosen default values of a = 2862933555777941757 for the

A better approach is to use a lagged Fibonacci generator, where we look further back into the sequence for values, and they are not necessarily one after the other. This can be generalized as

    x_n = (x_{n−j} ∘ x_{n−k}) mod m

where ∘ is any binary operation (addition, subtraction, multiplication, and exclusive-or are common choices) and 0 < j < k. Assuming that m = 2^w and addition, subtraction, or exclusive-or is used, the maximum possible period for lagged Fibonacci generators is

Again, ÷ represents an integer divide. As we can see, the bits that would normally be cast out from the modulus operation are added to the next stage, thereby mixing the lower bits. Something similar can be done with the subtract-with-borrow generator:

    x_n = (x_{n−k} − x_{n−r} − c_{n−1}) mod m
    c_n = (x_{n−k} − x_{n−r} − c_{n−1}) ÷ m

While these generators have large periods, solve the least significant bit issue, and otherwise show some promise, it was shown that they also

This can be extended further, giving a period of 2^118.

    k = 30903*(k&65535) + (k>>16);
    j = 18000*(j&65535) + (j>>16);
    i = 29013*(i&65535) + (i>>16);
    l = 30345*(l&65535) + (l>>16);
    m = 30903*(m&65535) + (m>>16);
    n = 31083*(n&65535) + (n>>16);
    return ((k+i+m)<<16) + j+l+n;

This is a considerable improvement over the previous two methods: It gives us very large periods, it does a good job of randomizing the bits, it works well with

cryptological uses and that is efficient. Its only negative is that it requires a buffer of 624 integers, but for a game, this is a drop in the bucket compared to the quality that we receive.

The Mersenne Twister was developed by Matsumoto and Nishimura in 1997, building on Matsumoto and Kurita's work with generalized feedback shift register (GFSR) algorithms [75]. These are a subclass of lagged Fibonacci algorithms that use exclusive-or (represented as ⊕) as their

where A is called the twist matrix. This is the companion-form matrix with 1s on its superdiagonal, a_0, a_1, ..., a_{n−1} as its bottom row, and zeros elsewhere. This in turn boils down to a sequence of simple bit operations:

    A(x) = (x >> 1) ^ ((x & 0x01) ? a : 0);

where a is a special constant value.

The performance of the twisted GFSR (TGFSR) algorithm is quite good, at the same level as the simple multiply-with-carry algorithm described above. The one problem with this algorithm is that it still suffers from one

lower r bits of x_{k+1}. So, this operation takes the upper bits of one entry and the lower bits of the following entry and concatenates them to form a new entry, just like the multiply-with-carry operation.

To give better final results, they also do tempering like the other TGFSR improvement, with a slightly different algorithm:

    y = x ^ (x >> u);
    y = y ^ ((y << s) & b);
    y = y ^ ((y << t) & c);
    y = y ^ (y >> l);

Note that u and l in this case are different from the u and

11.5 Special Applications

Up to this point, we've been discussing only how to randomly generate uniformly distributed unsigned integers. However, randomness in a computer game extends beyond this. In this section we'll discuss a few of the more common applications and how we can use our uniform generator to construct them.

11.5.1 Integers and Ranges of Integers

In addition to unsigned integers, it is useful to be able to generate other types of values, and in various

This does cost more than the integer-only version, but it handles the bit-mixing and overflow problems nicely.

11.5.2 Floating-Point Numbers

Usually when generating floating-point numbers, we want the range [0, 1]. Commonly, this is computed as

    float f = float(random())*RECIP_MAX_RAND;

where RECIP_MAX_RAND is the floating-point representation of 1 over the maximum possible random number.

An alternative is to set the exponent of the floating-point number to bias + 1 (see

floating-point number in the interval [0, 1), we can then find the minimum entry that is greater than that value, and generate that. Let's take our ball drawing problem as an example. Figure 11.10 shows the CDF for the probability distribution in Figure 11.2. Notice that due to the discrete nature of the distribution the CDF is a step function. If, for example, we randomly generate the value 0.43, we find that value on the y-axis, and then trace along horizontally

11.5.4 Spherical Sampling

One common example of randomness in a game is generating the initial random direction for a particle. The most commonly used particle system of this type is spherical, where all the particles expand from a common point. We can compute the direction vector for this easily by generating a random point on a unit sphere.

One possible (but wrong) solution for this is to generate random components (v0, v1, v2), where each vi is a floating-point value in

but can require a large number of RNG evaluations, so we'll consider one other option.

Rather than using Cartesian coordinates, let's look at spherical coordinates, which may be a little more natural to use on (say) a sphere. Recall that φ is the angle from the z-axis down, from 0 to π radians, and θ is the angle from the x-axis, from 0 to 2π radians. Since we're talking about a unit sphere, our radius ρ in this case is 1. So, we could generate two values ξ0 and ξ1

However, again we don't quite get the distribution that we expect. In Figure 11.12, we see that the points are now clustered around the poles of the sphere. The solution is to note that we want a latitude/longitude distribution, where z is our latitude and is uniformly distributed, and θ is longitude and also uniformly distributed. The radius at our latitude line will depend on z — we want to guarantee that x² + y² + z² = 1. The following calculation handles this:

    z = 1 −

and so

    r = ξ0
    θ = 2πξ1
    x = r cos θ
    y = r sin θ

However, we find that we get clustering in the center, as we did in the spherical case (Figure 11.14). This may be close to what we want if we're calculating bullet trajectories, where we want them to cluster around the aim direction. However, let's assume this is undesirable. The insight here is to set r = √ξ0. This pushes the values back to the edges of the disc and gives us uniform sampling across the area of the disc

Figure 11.14 Disc sampling. Result of randomizing polar coordinates; the points tend to collect at the center.

11.5.6 Noise and Turbulence

Source Code Demo: Perlin

We will conclude our discussion of random numbers by briefly looking at some common noise functions and how they can be used to generate procedural textures. The first question is: Why do we want to add randomness to our procedural textures? The main

Figure 11.15 Disc sampling. Result of randomizing polar coordinates with radius correction; the result is correct.

The common way to apply noise to textures is to build a noise lattice. In this case, we place random values at regular intervals in the texture space, and then interpolate between them to obtain the intermediary values. By using an appropriate interpolation function (usually cubic), we

across the surfaces they're applied to. While this may be desirable in some cases, we'd like to control the situation. To manage this, most noise systems pregenerate a table of random values and then hash into the table, where the hash is usually based on the lattice coordinates. We also want these random values to be bounded — the most common interval is [−1, 1].

The most basic lattice noise is known as value noise. In this case, we generate random values at each

Figure 11.16 Sky texture generated using Perlin noise in a turbulence function.

we use it as a blending factor between our sky and cloud colors. Figure 11.16 shows the result.

We can do something similar to generate a marble texture. The base interpolant for the marble is the sine of the local y coordinate. We then perturb the base position by our turbulence to remove the regularity of the sine function as follows.

    varying vec3 localPos;
    void main()
    {
        vec3 light = vec3(0.7,

Figure 11.17 Marble texture generated using Perlin noise in a turbulence function.

function calls, which can get rather expensive. Because of this, graphics engineers often will generate a texture with different noise octave values in each color component and then do a lookup into that texture.

These examples give just a taste of what is available by making use of noise functions. Noise is used for generating wood textures, turbulence in fire textures, terrain, and many

of random number techniques can be found in Knuth [65]. A great deal of detail is given in this text to demonstrating the correctness of random algorithms and discussing techniques for measuring randomness. For those interested in unusual random distributions, particularly for graphics, Pharr and Humphreys [91] is an excellent text. Finally, Ebert et al. [29] is the standard book for studying procedural algorithms.


Chapter 12
Intersection Testing

12.1 Introduction

In the previous chapters we have been primarily focused on manipulating and displaying our game objects in isolation. Whether we are rendering an object or animating it, we haven't been concerned with how it might be interacting with other objects in our scene. This is neither realistic nor interesting. For example, you are manipulating an object right now: this book. You can hold it in your hand, turn its pages, or drop it on the floor. In the latter

    542 Chapter 12 Intersection Testingon by generating a pick ray from a screen-space mouse click, and determiningthe first object we hit with that ray. Another way this is used is in artificialintelligence (AI). In order to simulate whether one AI agent can see another,we cast a ray from the first to the second and see if it intersects any objects.If not, then we can say that the first agent’s target is in sight.We have also mentioned a third use of object intersection before: deter-mining read more..


12.2 Closest Point and Distance Tests

Figure 12.1 Closest point on a line.

…relationships between the point and line. In particular, we notice that the dotted line segment between Q and Q′ is orthogonal to the line. This line segment corresponds to a line of projection: To find Q′, we need to project Q onto the line.

To do this, we begin by computing the difference vector w between Q and P, or w = Q − P. Then we project this onto v, to get the component of w that points along v. Recall that this …
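The projection just described can be sketched as standalone code. This is an illustration rather than the book's IvLine3 implementation; the minimal Vec3 type and function names here are my own, and v is assumed to be nonzero.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  Add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3  Scale(float s, const Vec3& v)     { return { s*v.x, s*v.y, s*v.z }; }

// Closest point on the line L(t) = P + t*v to the point Q:
// project w = Q - P onto v and offset P by that component.
Vec3 ClosestPointOnLine(const Vec3& P, const Vec3& v, const Vec3& Q)
{
    Vec3 w = Sub(Q, P);
    float t = Dot(w, v) / Dot(v, v);
    return Add(P, Scale(t, v));
}
```

For example, projecting Q = (3, 4, 0) onto the x-axis line through the origin yields (3, 0, 0).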


Figure 12.2 Computing distance from point to line, using a right triangle.

12.2.2 Line–Point Distance

Source Code: Library IvMath, Filename IvLine3

As before, we're given a point Q and a line L defined by a point P and a vector v. In this case, we want to find the distance between the point and the line. One way is to compute the closest point on the line and compute the distance between that and Q. A more direct approach is to use the Pythagorean …


    float vsq = mDirection.Dot(mDirection);
    float wsq = w.Dot(w);
    float proj = w.Dot(mDirection);
    return wsq - proj*proj/vsq;
}

Note that in this case we're computing the squared distance. In most cases we'll be using this to avoid computing a square root. Another optimization is possible if we can guarantee that v is normalized; in that case, we can avoid calculating and dividing by v · v, since its value is 1.

12.2.3 Closest Point on Line Segment to Point

Source Code: Library …


Figure 12.3 Three cases when projecting a point onto a line segment.

Testing t directly requires a floating-point division. By modifying our test we can defer the division to be performed only when we truly need it; that is, when the point lies on the segment. Since v · v > 0, we must have w · v < 0 in order for t < 0. And in order for t > 1, we must have w · v > v · v. The equivalent code is as follows:

IvVector3 IvLineSegment3::ClosestPoint(const IvVector3& …
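The same clamping logic, with the division deferred as described, can be sketched independently of the Iv classes (the types and names below are illustrative, not the book's):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  Add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3  Scale(float s, const Vec3& v)     { return { s*v.x, s*v.y, s*v.z }; }

// Closest point on the segment P0 + t*v, 0 <= t <= 1, to the point Q.
// The floating-point division happens only in the interior case.
Vec3 ClosestPointOnSegment(const Vec3& P0, const Vec3& v, const Vec3& Q)
{
    Vec3 w = Sub(Q, P0);
    float proj = Dot(w, v);
    if (proj <= 0.0f)             // w.v <= 0 means t <= 0: clamp to P0
        return P0;
    float vsq = Dot(v, v);
    if (proj >= vsq)              // w.v >= v.v means t >= 1: clamp to P1
        return Add(P0, v);
    return Add(P0, Scale(proj/vsq, v));
}
```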


    12.2 Closest Point and Distance Tests547are three cases: the closest point is P0, P1, or a point somewhere else on thesegment, which we’ll calculate.If the closest point is P0, then we can compute the distance as Q− P0 .Since w= Q− P0, then the squared distance is equal to w· w.If the closest point is P1, then the squared distance is (Q− P1)· (Q− P1).However, we’re representing our endpoint as P1= P0+ v, so this becomes(Q− P0− v)· (Q− P0− v). We can rewrite this read more..


    548 Chapter 12 Intersection Testing12.2.5 Closest Points Between Two LinesSource CodeLibraryIvMathFilenameIvLine3Sunday [107] provides the following construction for finding the closestpoints between two lines. Note that in this case there are two closest points,one on each line, since there are two degrees of freedom. The situation isshown in Figure 12.4. Line L1 is described by the point P0 and the vector u.Correspondingly, line L2 is described by the point Q0 and the vector v,orL1(s)= P0+ read more..


In order for wc to represent the vector of closest distance, it needs to be perpendicular to both L1 and L2. This means that

wc · u = 0
wc · v = 0

Substituting in equation 12.1 and expanding, we get

0 = w0 · u + sc u · u − tc u · v    (12.2)
0 = w0 · v + sc u · v − tc v · v    (12.3)

We have two equations and two unknowns sc and tc, so we can solve this system of equations. Doing so, we get the result that

sc = (be − cd)/(ac − b²)    (12.4)
tc = (ae − bd)/(ac − b²)    (12.5)

where

a = u · u
b = …
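Equations 12.4 and 12.5 translate directly into code. The sketch below uses its own small vector type rather than the book's IvLine3 class, and simply reports failure when the lines are near parallel (denominator close to zero):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// Closest-point parameters for L1(s) = P0 + s*u and L2(t) = Q0 + t*v,
// from sc = (be - cd)/(ac - b^2) and tc = (ae - bd)/(ac - b^2).
bool ClosestLineParams(const Vec3& P0, const Vec3& u,
                       const Vec3& Q0, const Vec3& v,
                       float& sc, float& tc)
{
    Vec3 w0 = Sub(P0, Q0);
    float a = Dot(u, u);
    float b = Dot(u, v);
    float c = Dot(v, v);
    float d = Dot(u, w0);
    float e = Dot(v, w0);
    float denom = a*c - b*b;
    if (denom < 1.0e-6f)          // lines (nearly) parallel
        return false;
    sc = (b*e - c*d)/denom;
    tc = (a*e - b*d)/denom;
    return true;
}
```

For two skew perpendicular lines, say L1 along x through the origin and L2 along y through (2, 0, 1), this yields sc = 2, tc = 0: the closest points are (2, 0, 0) and (2, 0, 1).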


    550 Chapter 12 Intersection Testingvoid ClosestPoints( IvVector3& point1,IvVector3& point2,const IvLine3& line1,const IvLine3& line2 ){IvVector3 w0 = line1.mOrigin - line2.mOrigin;float a = line1.mDirection.Dot( line1.mDirection );float b = line1.mDirection.Dot( line2.mDirection );float c = line2.mDirection.Dot( line2.mDirection );float d = line1.mDirection.Dot( w0 );float e = line2.mDirection.Dot( w0 );float denom = a*c - b*b;if ( ::IsZero(denom) ){point1 = line1.mOrigin;point2 read more..


    12.2 Closest Point and Distance Tests551float denom = a*c - b*b;// if lines parallelif ( ::IsZero(denom) ){IvVector3 wc = w0 - (e/c)*line2.mDirection;return wc.Dot(wc);}// otherwiseelse{IvVector3 wc = w0 + ((b*e - c*d)/denom)*line1.mDirection- ((a*e - b*d)/denom)*line2.mDirection;return wc.Dot(wc);}}12.2.7 Closest Points Between Two LineSegmentsSource CodeLibraryIvMathFilenameIvLineSegment3Finding the closest points between two line segments follows from findingthe closest points between two read more..


    552 Chapter 12 Intersection TestingTherefore, for this endpoint we try to find the minimum value forwc· wc= (w0− tc v)· (w0− tc v)(12.6)To do this, we return to calculus. To find a minimum value (in this case, thereis only one) for a function, we find a place where the derivative is 0. Takingthe derivative of equation 12.6 in terms of tc, we get the result0=−2v · (w0− tc v)Solving for tc,wegettc=v· w0v· v(12.7)So, for the fixed point on line L1 at s= 0, this gives us the read more..


    12.2 Closest Point and Distance Tests553is greater than 1, then the closest segment point will be at s= 1. Choosing oneor the other, we resolve for tc and check that it lies between 0and 1. If not,we perform the same process to clamp tc to either the t= 0or t= 1endpointand recalculate sc accordingly (with some minor adjustments to ensure thatwe keep sc within 0 and 1).Once again, there is a trick we can do to avoid multiple floating-point divi-sions. Instead of computing, say, sc directly and read more..


    554 Chapter 12 Intersection Testing12.2.9 General Linear ComponentsSource CodeLibraryIvMathFilenameIvLine3IvRay3IvLineSegment3Testing ray versus ray or line versus line segments is actually a simplificationof the segment–segment closest point and distance determination. Insteadof clamping against both components, we need only clamp against thoseendpoints that are necessary. So for example, if we treat P0+ suas the para-meterization of a line segment, and Q0+ tvas a line, then we need only read more..


    12.3 Object Intersection555Figure12.5 Nonintersecting objects.Figure12.6 Nonintersecting objects with bounding sphere.get away with ignoring the underlying geometry completely and only usingbounding objects to determine intersections. For example, when handlingcollisions in this way, either the action happens so fast that we don’t noticeany overlapping objects or objects reacting to collision when they appear read more..


    556 Chapter 12 Intersection Testingseparated, or the error is so slight that it doesn’t matter. In any case, choosingthe side of making the simulation run faster for a better play experience isusually a good decision.One thing to note with the following algorithms is that their performanceis often dependent on the platform that they are run on. For example, manyconsoles don’t have predictive branching, so conditionals are quite slow. So,on such a platform, an algorithm that calculates read more..


    12.3 Object Intersection557CrFigure12.7 Bounding sphere.The surface of the sphere is defined as all points Psuch that the length ofthe vector from Cto Pis equal to the radius:(Px− Cx)2+ (Py− Cy)2+ (Pz− Cz)2= ror(P− C)· (P− C)= rIdeally, we’ll want to choose the smallest possible sphere that encom-passes the entire object. Too small a sphere, and we may skip two objects thatare actually intersecting. Too large, and we’ll be unnecessarily performingour more expensive tests for read more..


    558 Chapter 12 Intersection Testing(a)(c)(b)(d)(e)Figure12.8 (a) Bounding sphere, offset origin; (b) bounding sphere, outlyingpoint; (c) bounding sphere, using centroid, object vertices; (d) bounding sphere, usingbox center, box vertices; and (e) bounding sphere, using box center, object vertices. read more..


    12.3 Object Intersection559point for the majority of the object’s vertices, but there are one or two outlyingvertices that cause problems (Figure 12.8b).Eberly [25] provides a number of methods for finding a better fit. Oneis to average all the vertex locations to get the centroid and use that as ourcenter. This works well for the case of a noncentered origin, but still is aproblem for an object with outlying points (Figure 12.8c). The reason is thatthe majority of the points lie within a read more..


    560 Chapter 12 Intersection Testingfor ( unsigned inti=1;i< numPoints; ++i ){float dist = ::DistanceSquared( mCenter, points[i] );if (dist > maxDistance)maxDistance = dist;}mRadius = ::IvSqrt( maxDistance );}It should be noted that none of these methods is guaranteed to find thesmallest bounding sphere. The standard algorithm for this is by Welzl [118],who showed that linear programming can be used to find the optimally small-est sphere surrounding a set of points. Two implementations read more..


Figure 12.9 Sphere–sphere intersection.

The code is as follows:

bool
IvBoundingSphere::Intersect( const IvBoundingSphere& other )
{
    IvVector3 centerDiff = mCenter - other.mCenter;
    float radiusSum = mRadius + other.mRadius;
    return ( centerDiff.Dot(centerDiff) <= radiusSum*radiusSum );
}

Sphere–Ray Intersection

Intersection between a sphere and a ray is nearly as simple. Instead of testing two centers and comparing the distance with the sum of two radii, we test …
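The same squared-distance comparison in a compilable, self-contained form (the Sphere struct and helper names here are stand-ins for the book's IvBoundingSphere):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// Two spheres intersect when the squared distance between centers is
// at most the square of the sum of the radii -- no square root needed.
bool SpheresIntersect(const Sphere& s1, const Sphere& s2)
{
    Vec3 d = Sub(s1.center, s2.center);
    float rsum = s1.radius + s2.radius;
    return Dot(d, d) <= rsum*rsum;
}
```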


    float wsq = w.Dot(w);
    float proj = w.Dot(ray.mDirection);
    float rsq = mRadius*mRadius;

    // if sphere behind ray, no intersection
    if ( proj < 0.0f && wsq > rsq )
        return false;

    float vsq = ray.mDirection.Dot(ray.mDirection);

    // test length of difference vs. radius
    return ( vsq*wsq - proj*proj <= vsq*mRadius*mRadius );
}

An additional check has been added since we're using a ray. If the sphere lies behind the origin of the ray, then there is no …


Sphere–Plane Intersection

Testing whether a sphere lies entirely on one side of a plane can be done quite efficiently. Recall that we can determine the distance between a point and such a plane by taking the absolute value of the result of the plane equation. If the result is positive and the distance is greater than the radius, then the sphere lies on the inside of the plane. If the result is negative, and the distance is greater than the sphere's radius, then the …
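Assuming a unit-length plane normal n and plane equation n · P + d = 0, the classification just described might look like the following sketch (my own types; the return convention — signed distance when fully on one side, zero when straddling — mirrors the OBB Classify method shown later in the chapter):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns 0 if the sphere straddles the plane; otherwise the signed
// distance from the plane to the nearest point of the sphere.
float ClassifySpherePlane(const Sphere& s, const Vec3& normal, float d)
{
    float dist = Dot(normal, s.center) + d;   // signed distance to center
    if (dist > s.radius)
        return dist - s.radius;               // entirely on positive side
    else if (dist < -s.radius)
        return dist + s.radius;               // entirely on negative side
    return 0.0f;                              // intersecting the plane
}
```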


    564 Chapter 12 Intersection TestingThe first type we’ll consider is the AABB, or axis-aligned bounding box,so called because the box edges are aligned to the world axes. This makes rep-resentation of the box simple: We use two points, one each for the minimumand maximum xyzpositions (Figure 12.11). When the object is translated, toupdate the box we translate the minimum and maximum points. Similarly, ifthe object is scaled, we scale the two points relative to the box center. How-ever, because read more..


    12.3 Object Intersection565they are relatively cheap to compute and cheap to test as well, so they continueto prove useful.One advantage that world axis-aligned boxes have over a box orientedto the object’s local space is that we need only recompute them once perframe, and then we can compare them directly without further transforma-tion, since they are all in the same coordinate frame. So, while AABBs havea high per-frame overhead (since we have to recalculate them each time anobject read more..


AABB–AABB Intersection

In order to understand how we find intersections between two axis-aligned boxes, we introduce the notion of a separating plane. The general idea is this: We check the boxes in each of the coordinate directions in world space. If we can find a plane that separates the two boxes in any of the coordinate directions, then the two boxes are not intersecting. If we fail all three separating plane tests, then they are intersecting and we handle …
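The three separating plane tests reduce to simple interval comparisons, one per axis. A minimal sketch (the structs are mine, standing in for IvAABB):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Boxes are disjoint if they are separated along any coordinate axis;
// if all three axis tests fail to separate them, they intersect.
bool AABBIntersect(const AABB& a, const AABB& b)
{
    if (a.min.x > b.max.x || b.min.x > a.max.x) return false;
    if (a.min.y > b.max.y || b.min.y > a.max.y) return false;
    if (a.min.z > b.max.z || b.min.z > a.max.z) return false;
    return true;
}
```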


    12.3 Object Intersection567min1max1min2max2Figure12.13 Axis-aligned box–box separation test.AABB–Ray IntersectionDetermining intersection between a ray and an axis-aligned box is similar todetermining intersection between two boxes. We check one axis direction ata time as before, except that in this case, there is a little more interactionbetween steps.Figure 12.14 shows a 2D cross section of the situation. The ray Rshownintersects the minimum and maximum xplanes of the box at R(sx) and read more..


    568 Chapter 12 Intersection TestingxminxmaxyminymaxR(sy)R(ty)R(sx)R(tx)Figure12.14 Axis-aligned box–ray separation test.Solving for sx and tx,wegetsx=xmin− Pxvxtx=xmax− PxvxTo simplify adjustment of our overlap interval, we want to ensure that sx <tx.This can be handled by checking whether 1/vx <0; if so, we’ll swap the xminand xmax terms.We’ll track our parameter overlap interval by using two values smax andtmin, initialized to the maximum interval. For a ray this is[0, ∞]; read more..


The code, abbreviated for space, is as follows.

bool
IvAABB::Intersect( const IvRay3& ray )
{
    float maxS = 0.0f;      // for line, use -FLT_MAX
    float minT = FLT_MAX;   // for line segment, use length

    // do x coordinate test (yz planes)
    // compute sorted intersection parameters
    float s, t;
    float recipX = 1.0f/ray.mDirection.x;
    if ( recipX >= 0.0f )
    {
        s = (mMin.x - ray.mOrigin.x)*recipX;
        t = (mMax.x - ray.mOrigin.x)*recipX;
    }
    else
    {
        s = (mMax.x - ray.mOrigin.x)*recipX;
        t = (mMin.x - …
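Since the listing above is abbreviated, here is a complete, self-contained sketch of the slab method for a ray (my own types and loop structure, not the IvAABB code; as the surrounding text discusses, the comparisons rely on IEEE 754 infinities when a direction component is zero):

```cpp
#include <cassert>
#include <cfloat>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

bool RayAABBIntersect(const Vec3& origin, const Vec3& dir, const AABB& box)
{
    float maxS = 0.0f;       // for a line, use -FLT_MAX
    float minT = FLT_MAX;    // for a segment, use its length

    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x, dir.y, dir.z };
    const float lo[3] = { box.min.x, box.min.y, box.min.z };
    const float hi[3] = { box.max.x, box.max.y, box.max.z };

    for (int i = 0; i < 3; ++i)
    {
        // sorted intersection parameters with the two slab planes
        float recip = 1.0f/d[i];
        float s = (lo[i] - o[i])*recip;
        float t = (hi[i] - o[i])*recip;
        if (s > t) { float tmp = s; s = t; t = tmp; }

        // shrink the overlap interval; an empty interval means no hit
        if (s > maxS) maxS = s;
        if (t < minT) minT = t;
        if (maxS > minT) return false;
    }
    return true;
}
```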


    570 Chapter 12 Intersection Testingn^Figure12.15 Axis-aligned box–plane separation test.and maximum planes. Normally in this case, we’d need to test whether Px liesbetween xmin and xmax. If not, the ray misses the box and there is no intersec-tion. However, when using the IEEE floating-point standard, division by zerowill return−∞ for a negative numerator, and∞ for a positive numerator.Hence, if the ray would miss the box, the resulting interval will be either[−∞, −∞] or[∞, read more..


    12.3 Object Intersection571diagMin.x = mMin.x;diagMax.x = mMax.x;}else{diagMin.x = mMax.x;diagMax.x = mMin.x;}// ditto for y and z directions...// minimum on positive side of plane, box on positive sidefloat test = plane.mNormal.Dot( diagMin ) + plane.mD;if ( test > 0.0f )return test;test = plane.mNormal.Dot ( diagMax ) + plane.mD;// min on nonpositive side, max on nonnegative side, intersectionif ( test >= 0.0f )return 0.0f;// max on negative side, box on negative sideelsereturn read more..


    572 Chapter 12 Intersection Testing(a)(b)Figure12.16 (a) Capsule and (b) lozenge.compute a bounding box for the points. If the object is generally axis aligned(not unreasonable considering that the artists usually build objects in thisway), we can use an axis-aligned bounding box. Otherwise we may need anoriented bounding box (see below on how to compute this). We then find thelongest side. The line that we will use for our baseline segment runs throughthe middle of the box. We’ll use the read more..


    12.3 Object Intersection573P0P1t = –L(ξ0)Figure12.17 Capsule endcap fitting.The final part to building the capsule is capping the tube with twohemispheres that just contain any points near the end of the object. Eberly [25]describes a method for doing this. The center of each hemisphere is one ofthe two endpoints of the line segment, so finding the hemisphere allows us todefine the line segment. Let’s consider the endpoint with the smaller tvalue —call it L(ξ0) — shown in Figure read more..


    574 Chapter 12 Intersection TestingP9uX0^v^w^AdFigure12.18 Determining hemisphere center X0 for given point P.Since this is a hemisphere, we want X0 to be to the right of P,so w≥ ξ0, andthis becomesξ0= w+ r2− (u2+ v2)Computing this for every point Pin our model and finding the minimum ξ0gives us our first endpoint. Similarly, the second endpoint is found by findingthe maximum value ofξ1= w− r2− (u2+ v2)Capsule–Capsule IntersectionHandling capsule–capsule intersection is very read more..


    12.3 Object Intersection575Capsule–Ray IntersectionCapsule–ray intersection follows from capsule–capsule collision. Instead offinding the distance between two line segments, we need to find the distancebetween a ray and a line segment and compare them to the radius of thecapsule, as follows.boolIvCapsule::Intersect( const IvRay3& ray ){// test distance between line and segment vs. radiusreturn ( ray.DistanceSquared( mSegment ) <= mRadius*mRadius );}Capsule–Plane read more..


    576 Chapter 12 Intersection Testing12.3.4 Object-Oriented BoxesDefinitionSource CodeLibraryIvSimulationFilenameIvOBBWorld axis-aligned boxes are easy to create and fast to use for detectingintersections, but are not a very tight fit around objects that are not them-selves generally aligned to the world axes (Figure 12.12). A more accurateapproach is to create an initial bounding box that is a tight fit aroundthe object in local space, and then rotate and translate the box as well asthe object read more..


    12.3 Object Intersection577ar2r1Cr3Figure12.20 Properties of OBBs.To simplify our life, however, we can use boxes aligned to the object’s localcoordinates, with a vector din object space indicating the box center relativeto the object center (as mentioned in Chapter 4, it’s not usually practical tobuild objects with their bounding box center as their origin). In either case,any time we need the box center cin world space we can usec= Robject→world d+ tIf we’re simply simulating an object read more..


    578 Chapter 12 Intersection TestingThe value¯x is the mean of xvalues of the npoints, and¯y is the correspondingmean of the yvalues.By computing the eigenvectors of this matrix, we can determine the direc-tion of greatest variance or where the points are most spread out, which willbecome our long axis. The other eigenvectors become the directions of theremaining axes for our OBB. The mean of the points becomes the center ofthe box, and from there we can project the points onto the axes to read more..


    12.3 Object Intersection579CaCbacRTbv^^a • v^(RTb) • v^c • vFigure12.21 Example of OBB separation test.use a pseudo–dot product that forces maximum length, so the equivalentto a· vis|axvx|+|ayvy|+|azvz|This is legal because the extents can be taken from any of the eight octants,so we can get any sign we want for any term.An equivalent equation can be found for (RT b)· v. The final separatingaxis equation is|c · v| >i|aivi|+i|(RT b)ivi|(12.11)While this gives us our test, there is read more..


    580 Chapter 12 Intersection TestingFor the Bterms, it’s convenient to transform vto be relative to B’s basisvia RT:RT (i× r1)=⎡⎣r0· (i× r1)r1· (i× r1)r2· (i× r1)⎤⎦ =⎡⎣i· (r1× r0)i· (r1× r1)i· (r1× r2)⎤⎦ =⎡⎣i· (−r2)i· 0i· r0⎤⎦So, vin Bspace isRT v= (−r20, 0,r00)(12.13)Substituting equations 12.12 and 12.13 into equation 12.11 and multiplyingout the terms, the final axis test is|c2r11 − c1r12| >a1|r12|+ a2|r11|+ b0|r20|+ b2|r00|The test for read more..


    12.3 Object Intersection581Each axis has two parallel planes associated with it. If we treat the box’scenter as the origin of our frame, the extent vector acontains the magnitudeof our dvalues for these planes. For example, two of the parallel box planesare r00x+ r10y+ r20z+ ax= 0and r00x+ r10y+ r20z− ax= 0.If we translate our ray so that its origin is relative to the box origin, wecan determine sand tparameters for the intersections with these planes, justas we did with the axis-aligned read more..


    582 Chapter 12 Intersection Testing}floats=(e- mA[i])/f;floatt=(e+ mA[i])/f;// fix order...// adjust min and max values...// check for intersection failure...}// done, have intersectionreturn true;}Performance can be improved here by storing the rotation matrix as an arrayof three vectors instead of an IvMatrix33.OBB–Plane IntersectionAs we did with with OBB–ray intersection, we can classify the intersectionbetween an OBB and a plane by transforming the plane to the OBB’s frameand using read more..


    12.3 Object Intersection583The resulting code is as follows:float IvOBB::Classify( const IvPlane& plane ){IvVector3 xNormal = ::Transpose(mRotation)*plane.mNormal;float r = mExtents.x*::IvAbs(xNormal.x) + mExtents.y*::IvAbs(xNormal.y)+ mEextents.z*::IvAbs(xNormal.z);float d = plane. Test(mCenter);if (::IvAbs(d) < r)return 0.0f;else if (d < 0.0f)returnd+r;elsereturnd-r;}12.3.5 TrianglesSource CodeLibraryIvMathFilenameIvTriangleAll of the bounding objects we’ve discussed up until now read more..


    584 Chapter 12 Intersection TestingP1P2P0R0Q2Q1Q0R1Figure12.22 Triangle intersection.or0= ax+ by+ cz+ dIn this case, the plane normal is computed from (P1− P0)× (P2− P0) andnormalized, and the plane point is P0.Now we take our second triangle Q, composed of points Q0, Q1, and Q2.We plug each point into P’s plane equation and test whether all three lie onthe same side of the plane. This is true if all three results have the same sign.If they do, there is no intersection and we quit. read more..


    12.3 Object Intersection585between the segment vectors, say n= (Q0− Q1)× (P0− P2). Then, we cancompute the signed distance between each plane and the origin by taking thedot product of the plane normal with a point on each plane (i.e., d0= n· Q0and d1= n· P0). Then, the signed distance between the planes is just d0− d1,or n· (Q0− P0).Note that this will not work if the two lines are parallel. Most of the caseswhere this might occur are culled out during the initial steps. The one read more..


    586 Chapter 12 Intersection TestingP1e0P2P0e1Figure12.23 Affine space of triangle.We want the contribution of each point to be nonnegative, so for a point insidethe triangle,u≥ 0v≥ 0u+ v≤ 1If uor v<0, then the point is on the outside of one of the two axis edges.If u+ v>1, the point is outside the third edge. So, if we can computethe barycentric coordinates for the intersection point T(u, v), we can easilydetermine whether the point is outside the triangle.To compute the u, read more..


Figure 12.24 Barycentric coordinates of line intersection.

u = (p · s)/(p · e1)
v = (q · d)/(p · e1)

where

e1 = V1 − V0
e2 = V2 − V0
s = P − V0
p = d × e2
q = s × e1

The final algorithm includes checks for division by zero and intersections that lie outside the triangle.

bool
TriangleIntersect( const IvVector3& v0, const IvVector3& v1,
                   const IvVector3& v2, const IvRay& ray )
{
    // test ray direction against triangle
    IvVector3 e1 = v1 - v0;
    IvVector3 e2 = v2 - …


    // if result zero, no intersection or infinite intersections
    // (ray parallel to triangle plane)
    if ( ::IsZero(a) )
        return false;

    // compute denominator
    float f = 1.0f/a;

    // compute barycentric coordinates
    IvVector3 s = ray.mOrigin - v0;
    u = f*s.Dot(p);
    if (u < 0.0f || u > 1.0f) return false;
    IvVector3 q = s.Cross(e1);
    v = f*ray.mDirection.Dot(q);
    if (v < 0.0f || u+v > 1.0f) return false;

    // compute line parameter
    t = f*e2.Dot(q);
    return (t >= 0);
}

Parameters u, …
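Put together in a self-contained form (standalone types and names rather than the Iv library), the whole barycentric ray–triangle test reads:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  Cross(const Vec3& a, const Vec3& b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Ray-triangle intersection via barycentric coordinates. On a hit,
// u and v hold the barycentric coordinates and t the ray parameter.
bool RayTriangle(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                 const Vec3& origin, const Vec3& dir,
                 float& u, float& v, float& t)
{
    Vec3 e1 = Sub(v1, v0);
    Vec3 e2 = Sub(v2, v0);
    Vec3 p  = Cross(dir, e2);
    float a = Dot(e1, p);
    if (std::fabs(a) < 1.0e-8f)   // parallel: no or infinite intersections
        return false;
    float f = 1.0f/a;
    Vec3 s = Sub(origin, v0);
    u = f*Dot(s, p);
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = Cross(s, e1);
    v = f*Dot(dir, q);
    if (v < 0.0f || u + v > 1.0f) return false;
    t = f*Dot(e2, q);
    return t >= 0.0f;
}
```

Shooting straight down at the triangle (0,0,0), (1,0,0), (0,1,0) from (0.25, 0.25, 1) gives a hit at u = v = 0.25, t = 1.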


    12.4 A Simple Collision System589case, we’ll use a submarine game as our example. This is to keep things assimple as possible and to illustrate various points to consider when buildingyour own system. It’s also good to keep in mind that a particular subsystemof a game, whether it is collision or rendering, needs only to be as accurate asthe game calls for. Building a truly flexible collision system that handles allpossible situations may be overkill and eat up processing time that could read more..


    590 Chapter 12 Intersection Testing12.4.2 Bounding HierarchiesSource CodeDemoHierarchyUnless our objects are almost exactly the shape of the bounding primitive(such as our pool ball example), then there are still going to be places whereour test indicates intersection where there is visibly no collision. For example,the conning tower of our submarine makes the bounding capsule encompassa large area of empty space at the top of the hull. Suppose a torpedo is headingtoward our submarine and read more..


    12.4 A Simple Collision System591Figure12.26 Using bounding hierarchy.a hierarchy to perform coarser but cheaper intersection tests. If two objectsare intersecting, we can traverse the two hierarchies until we get to the twointersecting triangles (there may be more than two if the objects are concave).Obviously, we’ll want to create much larger hierarchies in this case. Gener-ating them so that they are as efficient as possible — they both cull well andhave a reasonably small tree size — read more..


    592 Chapter 12 Intersection TestingFor example, in one frame we have two objects moving toward eachother, clearly heading for a collision somewhere in the center of the screen(Figure 12.27(a)). Ideally, in the next frame we want to catch a snapshot ofthem just as they collide or are slightly intersecting. However, if we take toolarge a simulation step, they may have passed partially through each other(Figure 12.27(b)). Using a frame-by-frame static test, we will miss the initialcollision. Worse read more..


    12.4 A Simple Collision System593at that tank, traveling at 120 km/hr. We also have a bug in our rendering codethat causes us to drop to 10 frames/s, giving us a travel distance of 3 1/3meters. The missile’s path crosses through the tank at an angle and is alreadythrough it by the next frame. This may seem like an extreme example, but incollision systems it’s often best to plan for the extreme case.Walls, since they are infinitely thin, also insist on a dynamic test of somekind. In a read more..


    594 Chapter 12 Intersection Testingfor each object ifor each object j, wherej>iif (i is moving or j is moving)test for collision between i and jThere are other possibilities. We can have two lists: one of moving objectscalled Collidersand one of moving or static objects called Collidables. In thefirst loop we iterate through the Collidersand in the second the Collidables.Each Collidershould be tagged after its turn through the loop, to ensurecollision pairs aren’t checked twice. Still, read more..


    12.4 A Simple Collision System595zdirections. For each slab, we store the set of objects that intersect it. Totest for collisions for a particular object, we determine which slabs it inter-sects and then test against only the objects in those slabs. This approachcan be extended to other spatial subdivisions, such as a grid or voxel-basedsystem.One of the disadvantages of the regular spatial subdivisions is that theydon’t handle clumping very well. Let’s consider slabs again. If our world read more..


Figure 12.29 Dividing collision space by sweep and prune.

…lists, such as bubble or insertion sort, we can get linear time for our sort and hence an O(n) algorithm.

This algorithm still has problems, of course. If our objects are highly localized (or clumped) in the x direction, but separated in the y direction, then we still may be doing a high number of unnecessary intersection tests. But it is still much better than the naive O(n²) algorithm we were …
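A one-axis sweep-and-prune pass can be sketched as follows. This is an illustrative version with my own types; a production implementation would keep the endpoint list sorted from frame to frame so that a nearly sorted insertion sort gives the linear behavior the text describes.

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
#include <utility>

struct Interval { float min, max; int id; };

// Sort object extents along one axis, then sweep: only objects whose
// intervals overlap on that axis become candidate pairs for the full,
// more expensive intersection test.
std::vector<std::pair<int,int>> SweepAndPrune(std::vector<Interval> objs)
{
    std::sort(objs.begin(), objs.end(),
              [](const Interval& a, const Interval& b) { return a.min < b.min; });
    std::vector<std::pair<int,int>> pairs;
    for (size_t i = 0; i < objs.size(); ++i)
    {
        for (size_t j = i + 1; j < objs.size(); ++j)
        {
            if (objs[j].min > objs[i].max)
                break;   // sorted by min: nothing later can overlap i
            pairs.push_back(std::make_pair(objs[i].id, objs[j].id));
        }
    }
    return pairs;
}
```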


    12.4 A Simple Collision System597vdirFigure12.30 False positive for frustum intersection.When handling frustum culling, the most basic approach involves testingan object against the six frustum planes. If, after this test, we determine thatthe object lies outside one of the planes, then we consider it outside the frus-tum and do not render it. As with ray casting, we can improve performanceby using a bounding hierarchy at progressive levels to remove obvious cases.We can also use a spatial read more..


    598 Chapter 12 Intersection Testing(a)(b)Figure12.31 (a) Expanding view frustum for simpler inclusion test, and(b) expanding view frustum for simpler inclusion test. read more..


    12.5 Chapter Summary599its center against this shape. In practice, we can just push out the frustumplanes by the sphere radius (Figure 12.31(b)), which is close enough. Similartechniques can be used for other bounding objects; see Akenine-Möller andHaines [82] and Watt and Policarpo [117] for more details.12.4.6 Section SummaryThe proceeding material should give some sense of the decisions that haveto be made when handling collision detection or other systems that involveobject intersection: read more..




    Chapter13Rigid BodyDynamics13.1 IntroductionIn many games, we move our objects around using a very simple movementmodel. In such a game, if we hold down the up arrow key, for example, weapply a constant forward translation, once a frame, to the object until the keyis released, at which point the object immediately stops moving. Similarly,we can apply a constant rotation to the object if the left arrow key is held,and again, it stops upon release. This is fine for something with fast actionlike read more..


    602 Chapter 13 Rigid Body Dynamicsthen bounce and possibly roll once. We want the game world to react to ourcharacter as the real world reacts to us, in a physically correct manner.For both of these cases, we will want a better model of movement,known as a physically based simulation. One chapter is hardly enough spaceto encompass this broad topic, which covers the preceding effects as wellas objects deforming due to contact, fluid simulation, and soft-body sim-ulations such as cloth and rope. read more..


13.2 Linear Dynamics

Figure 13.1 Graph of current motion model, showing x coordinate of particle as a function of time.

If we take the derivative of this function with respect to t, we end up with

dX/dt = X′(t) = v0    (13.1)

This derivative of the position function is known as velocity, which is usually measured in meters per second, or m/s. For our simple motion model, we have a constant velocity across each segment. If we continue taking derivatives, we find that the second derivative of our position …


…interval, we can instead use the average velocity across the interval, which is just one-half the starting velocity plus the ending velocity, or

v̄ = (1/2)(v0 + v(t))

Substituting this into our original X(t) gives us

X(t) = X0 + t · (1/2)(v0 + v(t))

Substituting in for v(t) gives the final result of

X(t) = X0 + t v0 + (1/2) t² a    (13.4)

Our equation for position becomes a quadratic equation, and our velocity is represented as a linear equation:

Pi(t) = Pi + t vi + (1/2) t² ai
vi(t) = vi + t ai

So, given a …


…a thrown rock (low initial velocity) to a cannonball (medium initial velocity) to a bullet (high initial velocity).¹

Within our game, we can use these equations on a frame-by-frame basis to compute the position and velocity at each frame, where the time between frames is hi. So, for a given frame i + 1,

Xi+1 = Xi + hi vi + (1/2) hi² ai
vi+1 = vi + hi ai

This process of motion with nonzero acceleration is known as dynamics.

13.2.2 Forces

One question that has been left open is how to compute …
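The per-frame update is a direct translation of the constant-acceleration equations; a sketch with my own minimal vector type:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3 Scale(float s, const Vec3& v)     { return { s*v.x, s*v.y, s*v.z }; }

// Advance position and velocity by one frame of length h, assuming
// the acceleration a is constant over the frame:
//   X(i+1) = X(i) + h*v(i) + (1/2)h^2*a
//   v(i+1) = v(i) + h*a
void IntegrateConstantAccel(Vec3& X, Vec3& v, const Vec3& a, float h)
{
    X = Add(X, Add(Scale(h, v), Scale(0.5f*h*h, a)));
    v = Add(v, Scale(h, a));
}
```

For instance, starting at rest height with v = (1, 0, 0) and gravity-like a = (0, −10, 0), one step of h = 1 moves the position by (1, −5, 0) and the velocity to (1, −10, 0).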


… leaves your hand, that pushing force will be removed, leaving only gravity and air effects. Forces are vectors, so in both cases we can add all forces on an object together to create a single force that encapsulates their total effect on the object. We then scale the total force by 1/m to get the acceleration for equation 13.4. For simplicity's sake, we will assume for now that our forces are applied in such a way that we have no rotational effects. In Section 13.4 …

13.2.4 Moving with Variable Acceleration

There is a problem with the approach that we've been taking so far: we are assuming that total force, and hence acceleration, is constant across the entire interval. For more complex simulations this is not the case. For example, it is common to compute a drag force proportional to, but opposite in direction to, velocity:

    F_drag = −m ρ v        (13.5)

This can provide a simple approximation to air friction; the faster we go, the greater the friction force. …

We can solve for c by using our velocity v_0 at time t = 0:

    c = v_0 − 0 · a = v_0

So, our final equation is as before:

    v(t) = v_0 + t a

We can perform a similar integration for position. Rewriting equation 13.1 gives

    dX = v(t) dt

We can substitute equation 13.2 into this to get

    dX = (v_0 + t a) dt

Integrating this, as we did with velocity, produces equation 13.4 again. For general equations we perform the same process, reintegrating dv to solve for v(t) in terms of a(t). So, using our drag …

… again, and modify our simulation code accordingly. Since we'll most likely have many different possible situations with many different applications of force, this could grow to be quite a nuisance. For both these reasons, we'll have to use a numerical method that can approximate the result of the integration.

13.3 Numerical Integration

13.3.1 Definition

The solutions for v and X that we're trying to integrate fall under a class of differential equation problems …

… position and velocity with any constant forces, such as those created from player input. We'll represent this as F_tot(t, X, v) in our equations. So, given our current state, the result of our function f(t, y) will be

    y' = f(t, y) = [ v_i
                     F_tot(t_i, X_i, v_i)/m ]

The function f(t, y) is important in understanding how we can solve this problem. For every point y it returns a derivative y'. This represents a vector field, where every point has a corresponding associated vector. To get a …

13.3.2 Euler's Method

Source Code: Demo Force

Assuming our current time is t and we want to move ahead h in time, we could use Taylor's series to compute y(t + h):

    y(t + h) = y(t) + h y'(t) + (h²/2) y''(t) + ··· + (hⁿ/n!) y⁽ⁿ⁾(t) + ···

We can rewrite this to compute the value for time step i + 1, where the time from t_i to t_{i+1} is h_i:

    y_{i+1} = y_i + h_i y'_i + (h_i²/2) y''_i + ··· + (h_iⁿ/n!) y_i⁽ⁿ⁾ + ···

This assumes, of course, that we know all the values for the entire infinite series at time step i, …

Separating out position and velocity gives us

    X_{i+1} ≈ X_i + h_i X'_i ≈ X_i + h_i v_i
    v_{i+1} ≈ v_i + h_i v'_i ≈ v_i + h_i F_tot(t_i, X_i, v_i)/m

This is known as Euler's method. To use this in our game, we start with our initial position and velocity. At each new frame, we grab the difference in time between the previous frame and the current frame and use that as h_i. To compute f(t_i, y_i) for the velocity, we use our CurrentForce() method to add up all of the forces on our object and divide …

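A single Euler step can be sketched as follows. The Particle struct and the externally supplied force value are placeholders for illustration; the book's own demo works with its Iv* vector classes and a CurrentForce() method.

```cpp
#include <cassert>
#include <cmath>

// One Euler step: position advances with the *old* velocity, velocity
// advances with the current net force divided by mass.
struct Particle
{
    double x;     // position
    double v;     // velocity
    double mass;
};

void EulerStep(Particle& p, double h, double force)
{
    p.x += h * p.v;             // X_{i+1} = X_i + h*v_i
    p.v += h * force / p.mass;  // v_{i+1} = v_i + h*F/m
}
```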
For many cases, this works quite well. If our time steps are small enough, then the resulting approximation points will lie close to the actual function and we will get good results. However, the ultimate success of this method is based on the assumption that the slope at the current point is a good estimate of the slope over the entire time interval h. If not, then the approximation can drift off the function, and the farther it drifts, the worse the tangent approximation …

… position, a large force leads to a large acceleration, which leads to a larger difference between our approximation and the actual value. Larger values of h_i will magnify this error. Also, if the force changes quickly, this means that the magnitude of the velocity's second derivative is high, and so we can run into similar problems with velocity. This is known as truncation error, and as we can see, with Euler's method the truncation error is O(h²). However, our …

Figure 13.6 (a) Orbit example, showing first step of midpoint method: getting the midpoint derivative. (b) Orbit example, stepping with midpoint derivative to next estimate.

… than Euler's method. Instead of approximating the function with a line, we are approximating it with a quadratic. While the midpoint method does have better error tolerance than Euler's method, as we can see from our example, it still drifts off of the desired solution. …

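The midpoint idea can be sketched for a scalar equation y' = f(y), such as the drag example (names here are illustrative): take a half Euler step, evaluate the derivative there, then take the full step using that midpoint slope.

```cpp
#include <cassert>
#include <cmath>

// One midpoint-method step for y' = f(y).
double MidpointStep(double y, double h, double (*f)(double))
{
    double yMid = y + 0.5 * h * f(y);  // half Euler step
    return y + h * f(yMid);            // full step with midpoint slope
}

// Example derivative: linear drag with rho = 1, i.e., y' = -y.
double NegY(double y) { return -y; }
```

For y' = −y the exact solution over a step h is y·e^(−h); the midpoint result y(1 − h + h²/2) matches its Taylor expansion through the h² term, which is what makes the method more accurate per step than Euler.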
… error for this approach is still O(h²), due to the fact that we're taking an inaccurate measure of the final derivative. Another approach is Heun's method, which takes 1/4 of the starting derivative and 3/4 of an approximated derivative 2/3 along the step size. Again, its error is O(h²), so no better than the midpoint method. The standard O(h⁴) method is known as Runge-Kutta order four, or simply RK4. RK4 can be thought of as a combination of the midpoint …

The most basic Verlet method can be derived by adding the Taylor expansion for the current time step to the expansion for the previous time step:

    y(t + h) + y(t − h) = y(t) + h y'(t) + (h²/2) y''(t) + ···
                        + y(t) − h y'(t) + (h²/2) y''(t) − ···

Solving for y(t + h) gives us

    y(t + h) = 2 y(t) − y(t − h) + h² y''(t) + O(h⁴)

Rewriting in our stepwise format, we get

    y_{i+1} = 2 y_i − y_{i−1} + h_i² y''_i

This gives us an O(h²) solution for integrating position from acceleration, without involving …

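The stepwise Verlet formula above can be sketched directly (one-dimensional, illustrative names). Note that a Verlet integrator must be bootstrapped: the first step has to come from the initial position and velocity, e.g. y_1 = y_0 + h v_0 + (1/2)h² a.

```cpp
#include <cassert>
#include <cmath>

// One position-Verlet step: y_{i+1} = 2*y_i - y_{i-1} + h^2 * a.
// Velocity never appears explicitly; it is implicit in the last
// two positions.
double VerletStep(double yPrev, double yCurr, double h, double a)
{
    return 2.0 * yCurr - yPrev + h * h * a;
}
```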
… position calculation:

    v(t + h/2) = v(t − h/2) + h a(t)
    X(t + h) = X(t) + h v(t + h/2)

As with standard Verlet, we can start this off with a Runge-Kutta method by computing velocity at a half-step and proceed from there. If velocity on a whole step is required, it can be computed by averaging the half-step velocities, but as with standard Verlet, one time step behind position:

    v_i = (v_{i+1/2} + v_{i−1/2}) / 2

As with standard Verlet, leapfrog Verlet is an O(h²) method. The third, and most accurate, …

13.3.5 Implicit Methods

All the methods we've described so far integrate based on the current position and velocity. They are called explicit methods and make use of known quantities at each time step, for example Euler's method:

    y_{i+1} = y_i + h y'_i

But as we've seen, even higher-order explicit methods don't handle extreme cases of stiff equations very well. Implicit methods make use of quantities from the next time step:

    y_{i+1} = y_i + h_i y'_{i+1}

This particular implicit …

… this example, our force is directly dependent on velocity, but in the opposing direction. Considering only velocity:

    v_{i+1} = v_i − h ρ v_{i+1}

Solving for v_{i+1} gives us

    v_{i+1} = v_i / (1 + h ρ)

We can't always use this approach. Either we will have a function too complex to solve in this manner, or we'll be experimenting with a number of functions and won't want to take the time to solve each one individually. Another way is to use a predictor–corrector method. We move ahead …

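The closed-form implicit step for the drag example can be sketched side by side with the explicit version (function names invented for illustration). The point is stability: for large h·ρ the explicit step overshoots and flips the velocity's sign, while the implicit step only ever decays toward zero.

```cpp
#include <cassert>
#include <cmath>

// Implicit Euler step for v' = -rho*v, solved in closed form:
//   v_{i+1} = v_i - h*rho*v_{i+1}  =>  v_{i+1} = v_i / (1 + h*rho)
double ImplicitDragStep(double v, double h, double rho)
{
    return v / (1.0 + h * rho);
}

// Explicit Euler step for the same equation, for comparison.
double ExplicitDragStep(double v, double h, double rho)
{
    return v * (1.0 - h * rho);
}
```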
While implicit methods do have some characteristics that we like (they're good for stiff equations), they do tend to lose energy and may dampen more than we might want. Again, this is better than explicit Euler, but it's not ideal. They're also more complex and more expensive than explicit Euler. Fortunately, there is a solution that provides the simplicity of explicit Euler with the stability of implicit Euler.

13.3.6 Semi-Implicit Euler …

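The body of this section is cut off in this extraction, but the standard semi-implicit (symplectic) Euler update is simple enough to sketch: advance velocity first with an explicit Euler step, then advance position using the freshly updated velocity. The names below are illustrative, not the book's.

```cpp
#include <cassert>
#include <cmath>

struct Body
{
    double x;  // position
    double v;  // velocity
};

// Semi-implicit Euler: velocity first (explicit), then position
// with the new velocity.
void SemiImplicitEulerStep(Body& b, double h, double accel)
{
    b.v += h * accel;
    b.x += h * b.v;   // note: uses v_{i+1}, not v_i
}
```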
Figure 13.8 Semi-implicit Euler. The gray arrows indicate the original velocity and its modification by acceleration.

13.4 Rotational Dynamics

13.4.1 Definition

The equations and methods that we've discussed so far allow us to create physical simulations that modify an object's position. However, one aspect of dynamics we've passed over is simulating changes in an object's orientation due to the application of forces, or rotational dynamics. When …

Figure 13.9 Comparing centers of mass. The seesaw balances close to the center, while the hammer has its center of mass closer to the end.

… point associated with an object where, if you apply a force at that point, it will move without rotating. One can think of it as the point where the object would perfectly balance. Figure 13.9 shows the center of mass for some common objects. The center of mass for a seesaw is directly in the center, as we'd expect. The center of mass for …

Figure 13.10 (a) Linear velocity of points on the surface of a rotating sphere. Velocity is orthogonal to both the angular velocity vector and the displacement vector from the center of rotation. (b) Comparison of speed of points on the surface of a rotating disk. Points farther from the center of rotation have larger linear velocity. (c) Comparison of speed of points on the surface of a rotating sphere. Points closer to the equator of the sphere have larger linear …

Finally, the linear velocity of a point as we move from the equator to the poles will decrease to zero (Figure 13.10(c)), and the quantity sin θ provides this.

13.4.3 Torque

Up until now we've been simplifying our equations by applying forces only at the center of mass, and therefore generating only linear motion. On the other hand, if we apply an off-center force to an object, we expect it to spin. The rotational force created, known as torque, is directly dependent on …

Figure 13.12 Adding two torques. If the forces and displacements are added separately and then the cross product is taken, the total torque will be 0. Each torque must be computed and then added together.

13.4.4 Angular Momentum and Inertia Tensor

Recall that a force F is the derivative of the linear momentum P. There is a related quantity L for torque, such that

    τ = dL/dt

Like linear momentum, the angular momentum L describes how much an object tends to stay in motion, but …

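Figure 13.12's caveat can be made concrete in code: compute each torque as r × F and sum the results, rather than summing the r's and F's first. The small Vec3 type here is a stand-in for the book's IvVector3.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a.y*b.z - a.z*b.y,
             a.z*b.x - a.x*b.z,
             a.x*b.y - a.y*b.x };
}

// Total torque: sum of r_i x F_i, one cross product per applied force.
Vec3 TotalTorque(const Vec3* r, const Vec3* F, int count)
{
    Vec3 tau = { 0.0, 0.0, 0.0 };
    for (int i = 0; i < count; ++i)
    {
        Vec3 t = Cross(r[i], F[i]);
        tau.x += t.x; tau.y += t.y; tau.z += t.z;
    }
    return tau;
}
```

For a force couple (equal and opposite forces at opposite displacements), the summed forces and displacements are both zero, so the "sum first" approach yields zero torque, while summing the individual cross products gives the correct nonzero spin.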
… gracefully pull out of the spin. Torque is near zero in this case (ignoring some minimal friction from the ice and air), so we can consider angular momentum to be constant. Since angular velocity is clearly changing and mass is constant, the shape of the skater is the only factor that can have a direct effect to cause this change. So, to represent this effect of shape on rotation, we use a 3 × 3 symmetric matrix, where

    J = |  I_xx  −I_xy  −I_xz |
        | −I_xy   I_yy  −I_yz |
        | −I_xz  −I_yz   I_zz |

…

For many purposes, these can be reasonable approximations. If necessary, it is possible to compute an inertia tensor and center of mass for a generalized model, assuming a constant density. A number of methods have been presented to do this, in increasing refinement [11, 27, 63, 79]. The general concept is that in order to compute these quantities we need to do a solid integral across our shape, which is a triple integral across three dimensions. If we assume …

If we're using a quaternion representation for orientation, we use a similar approach. We take our angular velocity vector and convert it to a quaternion w, where

    w = (0, ω)

We can multiply this by one-half of our original quaternion to get the derivative in quaternion form, giving us, again with Euler's method,

    q_{n+1} = q_n + (h/2) w_n q_n        (13.11)

A derivation of this equation is provided by Witkin and Baraff [121] and Eberly [27], for those who are interested. Using either of these …

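Equation 13.11 can be sketched with a bare-bones quaternion type (illustrative, not the book's IvQuat). Because the Euler step pushes the quaternion off unit length, the sketch renormalizes afterwards.

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };

// Standard quaternion (Hamilton) product.
Quat Mul(const Quat& a, const Quat& b)
{
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

// Equation 13.11: q_{n+1} = q_n + (h/2)*w*q_n, with w = (0, omega),
// followed by renormalization to stay on the unit quaternion sphere.
Quat IntegrateOrientation(const Quat& q, const double omega[3], double h)
{
    Quat w = { 0.0, omega[0], omega[1], omega[2] };
    Quat dq = Mul(w, q);
    Quat out = { q.w + 0.5*h*dq.w, q.x + 0.5*h*dq.x,
                 q.y + 0.5*h*dq.y, q.z + 0.5*h*dq.z };
    double len = std::sqrt(out.w*out.w + out.x*out.x +
                           out.y*out.y + out.z*out.z);
    out.w /= len; out.x /= len; out.y /= len; out.z /= len;
    return out;
}
```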
    // compute new angular momentum, orientation
    mAngMomentum += h*CurrentTorque( mTranslate, mVelocity,
                                     mRotate, mAngVelocity );
    mAngMomentum.Clean();

    // update angular velocity
    IvMatrix33 rotateMat(mRotate);
    IvMatrix33 worldMomentsInverse =
        rotateMat*mMomentsInverse*::Transpose(rotateMat);
    mAngVelocity = worldMomentsInverse*mAngMomentum;
    mAngVelocity.Clean();

    IvQuat w = IvQuat( 0.0f, mAngVelocity.x,
                       mAngVelocity.y, mAngVelocity.z );
    mRotate += 0.5f*h*w*mRotate;
…

Figure 13.13 Point of collision. At the moment of impact between two convex objects, there is a single point of collision. Also shown is the collision plane and its normal.

Figure 13.14 Interpenetrating objects. There is no single point of collision.

… completely separate; in the next, they are colliding. In fact, in most cases when collision is detected, we have missed the initial point of collision and the objects are already interpenetrating (Figure 13.14). Because of …

… step. We keep doing this, ratcheting time forward or back by smaller and smaller intervals until we get an exact point of collision (unlikely) or we reach a certain level of iteration. At the end of the search, we'll either have found the exact collision point or will be reasonably close. This technique has a few flaws. First of all, it's slow. Chances are that every time you get a collision, you'll need to run the simulation at least two or three additional …

… the direction for our collision normal. The penetration distance p is then the sum of the two radii minus the length of this vector, or

    p = (r_a + r_b) − ‖C_b − C_a‖        (13.14)

We can move each sphere in opposite directions along this normal by the distance p/2, which will move them to a position where they just touch. This assumes that both objects can move; if one is not expected to move, like a boulder or a church, we translate the other object along the normal by the entire distance. So, for …

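Equation 13.14 and the separation rule can be sketched in one dimension (sphere centers on a line). The names are invented for illustration; real code would use 3D vectors as in the book's demos, and translate only one object when the other is immovable.

```cpp
#include <cassert>
#include <cmath>

struct Sphere
{
    double center;
    double radius;
};

// Resolve interpenetration per equation 13.14:
//   p = (ra + rb) - |Cb - Ca|
// Each movable sphere is pushed p/2 along the collision normal.
void ResolvePenetration(Sphere& a, Sphere& b)
{
    double diff = b.center - a.center;
    double dist = std::fabs(diff);
    double p = (a.radius + b.radius) - dist;
    if (p <= 0.0)
        return;                              // not interpenetrating
    double n = (diff >= 0.0) ? 1.0 : -1.0;   // normal from a toward b
    a.center -= n * 0.5 * p;
    b.center += n * 0.5 * p;
}
```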
… that control how contacts are resolved (think of a stack of boxes). We'll briefly discuss how to manage such problems later, but for our main thread of discussion we'll concentrate on single points of contact.

13.5.2 Linear Collision Response

Source Code: Demo LinCollision

Whatever method we use, we now have two of the properties of the collision we need to compute the linear part of our collision response: a collision normal n̂ and a collision point P. The other …

Figure 13.16 (a) Computing collision response: calculating relative velocity. (b) Collision response: computing relative velocity along the normal. (c) Collision response: adding impulses to create outgoing velocities.

In order to compute a proper impulse, two conditions need to be met. First of all, we need to set the ratio of the outgoing velocity along the collision normal to the incoming velocity. We do this by setting a coefficient of restitution ε:

    v'_n = −ε v_n

or

    (v'_a − v'_b) · n̂ = −ε (v_a − v_b) · n̂        (13.15)

Each object will have its own value of ε. This simulates two different physical properties. First of all, when one object collides with another, some energy is lost, usually …

With this, we finally have all the pieces that we need. If we substitute equations 13.16 and 13.17 into equation 13.15 and solve for j, we get the final impulse factor equation:

    j_a = −(1 + ε_a) (v_ab · n̂) / (1/m_a + 1/m_b)        (13.18)

The equation for j_b is similar, except that we substitute ε_b for ε_a. Now that we have our impulse values, we substitute them back into equations 13.16 and 13.17, respectively, to get our outgoing velocities (Figure 13.16(c)). Note the effect of mass on the …

    // compute impulse factor
    float modifiedVel = vDotN/(1.0f/mMass + 1.0f/other->mMass);
    float j1 = -(1.0f+mElasticity)*modifiedVel;
    float j2 = -(1.0f+other->mElasticity)*modifiedVel;

    // update velocities
    mVelocity += j1/mMass*collisionNormal;
    other->mVelocity -= j2/other->mMass*collisionNormal;
}

In this simple example, we have interleaved the sphere collision detection with the computation of the collision point and normal. This is for efficiency's sake, …

Now the relative velocity v_ab at the collision point becomes

    v_ab = v̄_a − v̄_b

and equation 13.15 becomes

    (v̄'_a − v̄'_b) · n̂ = −ε (v̄_a − v̄_b) · n̂        (13.19)

The other change needed is that in addition to handling linear momentum, we also need to conserve angular momentum. This is a bit more complex compared to the equations for linear motion, but the general concept is the same. The outgoing angular momentum should equal the sum of the incoming angular momentum and any momentum …

We change our linear collision-handling code in three places to achieve this. First of all, the relative velocity computation incorporates the incoming angular velocity, as follows.

    // compute relative velocity
    IvVector3 r1 = collisionPoint - mTranslate;
    IvVector3 r2 = collisionPoint - other->mTranslate;
    IvVector3 vel1 = mVelocity + Cross( mAngularVelocity, r1 );
    IvVector3 vel2 = other->mVelocity + Cross( other->mAngularVelocity, r2 );
    IvVector3 relativeVelocity = vel1 - vel2;
…

… features we may want to add. The following present some possible solutions for expanding and extending our simple system.

Resting Contact

The methods we described above handle the case when two objects are heading toward each other along the collision normal. Obviously, if they're heading apart, we don't need to consider these methods; they are separating. However, if their relative velocity along the normal is 0, then we have what is called a resting contact. A simple …

Figure 13.17 Mesh of particles constrained by distance.

… constraints we can set up similarly. For example, suppose we have a collection of particles, and we want to keep each of them a fixed distance away from its neighbors, say in a grid (Figure 13.17). This is particularly useful when trying to simulate cloth. We can also have joint constraints, which keep two points coincident while allowing the remainder of the objects to swing free. And the list goes on. Any case …

13.6 Efficiency

Multiple Points

The final issue we'll discuss is how to manage multiple constraints and contacts, both on one object and across multiple objects. In reality, our constraint forces and contact impulses occur simultaneously, so the most accurate way to handle this is to build a large system of equations and solve for them all at once. This is usually a quite complex process, both in constructing the equations and solving them. While it often ends up as a linear system, using …

… time with any more processing power than you need to get the effect you want. While a fully realistic simulation may be desirable, it can't take too much processing power away from the other subsystems, for instance, graphics or artificial intelligence. How resources are allocated among subsystems in a game depends on the game's focus. If a simpler solution will come close enough to the appearance of realism, then it is sometimes better to use that instead. One …

13.7 Chapter Summary

The use of physical simulation is becoming an important part of providing realistic motion in games and other interactive applications. In this chapter we have described a simple physical simulation system, using basic Newtonian physics. We covered some techniques of numeric integration, starting with Euler's method, and discussed their pros and cons. Using these integration techniques, we have created a simple system for linear and rotational …


Bibliography

[1] AMD. AMD developer support website.
[2] American National Standards Institute and Institute of Electrical and Electronic Engineers. IEEE standard for binary floating-point arithmetic. ANSI/IEEE Standard, Std. 754-1985, New York, 1985.
[3] Howard Anton and Chris Rorres. Elementary Linear Algebra: Applications Version, 7th edition. John Wiley and Sons, New York, 1994.
[4] Sheldon Axler. Linear Algebra Done Right, 2nd edition. Springer-Verlag, New York, 1997.
[5] Martin …

[15] Thomas Busser. Polyslerp: A fast and accurate polynomial approximation of spherical linear interpolation (slerp). Game Developer, February 2004.
[16] Erin Catto. Iterative dynamics with temporal coherence. Technical report, Crystal Dynamics, 2005.
[17] Erin Catto. Fast and simple physics using sequential impulses. GDC 2006 Tutorial: Physics for Game Programmers, 2006.
[18] Arthur Cayley. The Collected Mathematical Papers of Arthur Cayley. Cambridge University Press, Cambridge, …

[33] Euclid. The Elements. Dover Publications, New York, 1956.
[34] Cass Everitt. Interactive order-independent transparency. Technical report, NVIDIA, 2001.
[35] Randima Fernando and Mark J. Kilgard. The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics. Addison-Wesley, Reading, MA, February 2003.
[36] Randima Fernando, editor. GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics. Addison-Wesley, Reading, MA, March 2004.
[37] Agner Fog. Pseudo …

[49] Charles M. Grinstead and J. Laurie Snell. Introduction to Probability. American Mathematical Society, Providence, R.I., 2003.
[50] Brian Guenter and Richard Parent. Computing the arc length of parametric curves. IEEE Computer Graphics and Applications, 10(3):72–78, 1990.
[51] Philippe Guigue and Olivier Devillers. Fast and robust triangle-triangle overlap using orientation predicates. Journal of Graphics Tools, 8(1):25–32, 2003.
[52] William Hamilton. On quaternions, or on a new …

[65] Donald E. Knuth. The Art of Computer Programming: Seminumerical Algorithms, 3rd edition. Addison-Wesley, Reading, MA, 1993.
[66] Doris H. U. Kochanek and Richard H. Bartels. Interpolating splines with local tension, continuity, and bias control. In Computer Graphics (SIGGRAPH '84 Proceedings), pages 33–41, 1984.
[67] P. L'Ecuyer. Tables of linear congruential generators of different sizes and good lattice structure. Mathematics of Computation, 68(225):249–260, 1999.
[68] …

[81] Tomas Möller and Ben Trumbore. Fast, minimum storage ray/triangle intersection. Journal of Graphics Tools, 2(1):21–28, 1997.
[82] Tomas Akenine-Möller and Eric Haines. Real-Time Rendering, 2nd edition. A. K. Peters, Natick, MA, 2002.
[83] Hubert Nguyen. Casting shadows. Game Developer Magazine, March 1999.
[84] NVIDIA. NVIDIA developer support website.
[85] OpenGL Architecture Review Board, Mason Woo, Jackie Neider, Tom Davis, and Dave Shreiner. …

[98] David F. Rogers and J. Alan Adams. Mathematical Elements for Computer Graphics, 2nd edition. McGraw-Hill Inc., New York, 1990.
[99] Randi Rost. OpenGL(R) Shading Language. Addison-Wesley, Reading, MA, February 2004.
[100] Philip J. Schneider and David H. Eberly. Geometric Tools for Computer Graphics. Morgan Kaufmann Publishers, San Francisco, 2002.
[101] I. Schrage. A more portable Fortran random number generator. ACM Transactions of Mathematics Software, 5(2):132–138, 1979.
[102] …

[114] James M. Van Verth. Spline-based time control for animation. In Kim Pallister, editor, Game Programming Gems 5. Charles River Media, Hingham, MA, 2005.
[115] Stephen Vincent and David Forsey. Fast and accurate parametric curve length computation. Journal of Graphics Tools, 6(4):29–40, 2001.
[116] David R. Warn. Lighting controls for synthetic images. In Computer Graphics (SIGGRAPH '83 Proceedings), 1983.
[117] Alan Watt and Fabio Policarpo. 3D Games: Real-Time Rendering …

    Index1AAABBs, see Axis-aligned boundingboxes (AABBs)Absolute errorIEEE 754 floating-pointstandard, 12real number representation, 3Acceleration, in linear dynamicsconstant acceleration, 602–605variable acceleration, 607–609Acceleration vector, definition, 37Additioncatastrophic cancelation, 22fixed-point numbers, C-9floating-point addition, 22IEEE 754 floating-pointstandard, 13–14matrix addition, 90, 101modulate with late addition, 360quaternions, 187vectors, 37–40, 43Add mode, pixel read more..

  • Page - 687

    656 IndexAxis-aligned boundingboxes (continued)basic definitions, 563–565exercises, X-18–X-19OBB–ray intersection, 580simple collision system, 591Axis–angle representationconcatenation, 184definition, 181–182, 181–184format conversion, 182–184vector rotation, 184Axis of rotation, affinetransformations, 141BBack substitution, and Gaussianelimination, 116–117Backward Euler methods, rigidbody dynamics, 619Barycentric coordinatesdefinition, 69triangle, 85Basis vectorsdefinition, read more..

  • Page - 688

    Index657linear collision response, 634–638resting contact, 641rotational collision response,638–640Collision space, into slabs, 594Collision systembase primitive choice, 589basic rules, 588–589bounding hierarchies, 590–591dynamic objects, 591–593performance improvements,593–596related systems, 596–599Coloringbasic methods, 287per-object colors, 288per-triangle colors, 290per-vertex colors, 288–290sharp edges and vertexcolors, 290–291textures as materials, 362Color range, read more..

  • Page - 689

    658 IndexDescartes, Rene, Cartesiancoordinates, 64Determinantsadjoint matrix and inverse,128–129computation, 123–126definition, 121–123and elementary row operations,126–128df/dx, definition, B-4Diagonalization, matrices, 130Diagonal matrix, definition, 89DIEHARD, chi-square test, 513DieHarder, chi-square test, 513Diffuse color, textures asmaterials, 362Diffuse lighting, characteristics,334–337Diffuse map, definition, 362Dimension, basis vectors, 63Direct3Dantialiasing application, read more..

  • Page - 690

    Index659Exponentials, calculus overview,B-8–B-9Eye, and object projection, 236Eyepoint, definition, 205FFaceted shading, definition, 290Favorable outcome, basicprobability, 495Fibonacci seriesgeneralized feedback shift registeralgorithms, 525lagged methods, 521–522random number generators,516–517Field of view, viewfrustrum, 217–218Fill convention, partial fragmenthandling, 377Filtering, color operations, 260Finiteness, integral representation,C-1–C-2First derivative, definition, read more..

  • Page - 691

    660 IndexGlobal illumination, definition, 318Gloss map, definition, 362Glow map, definition, 362GL shading language (GLSL)definition, 278fragment shader application,284–285fragment shader inputs, 283fragment shader outputs, 284input and output values, 279inputs, 280–281OpenGL exercises, X-14random number noise, 538–539simple lighting approximation, 319Gouraud shadingdefinition, 288–289limitations, 292per-fragment lighting, 354–355per-vertex attributes, 391–392sharp edges, read more..

  • Page - 692

    Index661Hermite curvesautomatic generation, 444–446definition, 438–441natural end conditions, 447–448tangent manipulation, 441–444Kochanek–Bartels splines, 450–452piecewise Bézier curves, 455–456Intersection testingAABB definitions, 563–565AABB-AABB intersection, 566AABB-plane intersection, 570–571AABB-ray intersection, 567–570capsule-capsule intersection, 574capsule-plane intersection, 575capsule-ray intersection, 575closest point on line to point,542–543closest point read more..

  • Page - 693

    662 IndexLinear transformations (continued)definition, 87exercises, X-5and matrices, 106–108null space and range, 103–104Line-line distance, testing, 550–551Line-point distance, intersectiontesting, 544–545Line of projection, definition, 213Linesclosest point to point, intersectiontesting, 542–543closest points between, 548–550definition, 75generalized equation, 77–79parameterized, 76–77Line segment-line segment distance,intersection testing, 553Line segment-point read more..

  • Page - 694

    Index663MSAA, see Multisampled antialiasing(MSAA)Multiple render targets (MRTs),fragment shader outputs, 284Multiplicationfixed-point numbers, C-10–C-11,C-13–C-14IEEE 754 floating-pointstandard, 15matrix–matrix multiplication, 100matrix–vector multiplication, 106matrix–vector notation, 100quaternions, 194–195scalar, see Scalar multiplicationMultiplicative congruential method,definition, 518Multiplierschoice in LCG, 520–521definition, 518Multisampled antialiasing read more..

  • Page - 695

    664 IndexOrthogonal vectors, and dotproduct, 48–49Orthographic parallel projection,construction, 231–232Orthographic projection,definition, 215Orthonormal, definition, 51–52Output processing, renderingpipeline, 276Overdraw, fragments, 375Overflowfixed-point numbers, C-13–C-16number representation, C-3–C-4Overshooting, Kochanek-Bartelssplines, 451PParallel, vectors, 39Parallelogram, determinant, 122–123Parallelopiped, determinant, 122–123Parallel projectioncharacteristics, read more..

  • Page - 696

    Index665real number representation, 3–4z-buffering, 385–387Predictor-corrector method, rigidbody dynamics, 620Primary colors, RGB model, 257Primitive elementLCG multiplier, 520simple collision system, 589Primitive processing, renderingpipeline, 275Principle axes, rotational dynamics,627Probabilitybasic probability, 495–498definition, 495history, 494–495mean, 502–503random variables, 498–501special distributions, 503–506standard deviation, 502–503Probability density function read more..

  • Page - 697

    666 IndexRasterization (continued)application, 412–415basic concept, 404–407fragment texturing, 408–411and texture filtering, 411–412partial fragment handling, 376–378stages, 372textures, texture coordinates, 396triangles into fragments, 375–376visible geometrydepth buffering application,387–388depth buffering overview,378–381determination, 378per-fragment depth valuecomputation, 381–385z-buffering precision, 385–387Rational curve, definition, 457Ray casting, and read more..

  • Page - 698

    Index667general rotation, 147vs. orientation, 149–150polar and cartesian coordinates,143–144quaternions, 186–187, 197–200transformation creation, 145–146vector, sample code, 198Rotational collision response,methods, 638–640Rotational dynamicsangular momentum and inertiatensor, 626–628integration techniques, 628–630orientation and angular velocity,622–625torque, 625–626Rotation matricescharacteristics, 174exercises, X-7Rounding modes, IEEE 754floating-point standard, read more..

  • Page - 699

    668 IndexShadersfragment, see Fragmentshaders (FS)input and output values, 279–280operations and languages, 280pixel shaders, 278–279vertex shaders, 278–282vertex-triangle-fragment, 278–279Shades, RGB color model, 257Shadingbasic shading, 291–292flat-shaded lighting, 349–350Gouraud-shaded lighting, 350–351via image lookup, 293–294and lighting, 348limitations, 292per-fragment lighting, 354–358static, limitations, 312–313Shadows, in lighting, 367–368Sharp edgesper-vertex read more..

  • Page - 700

    Index669Tensor product, with matrices, 97–98Tessellationdefinition, 271per-vertex lighting, 353Test code, see Sample codeTestingdistance, line–pointdistance, 544–545intersections, see IntersectiontestingTexel addresscoordinate mappingbasic concept, 396magnification application, 402texture magnification, 396–402texture minification, 402–404definition, 396Texel centerscoordinate mapping, 396and texel coordinates, 311Texel coordinatesdefinition, 310and texel centers, 311Texels, read more..


Unit vector, definition, 36
Unweighted area sampling, definition, 424
Upper triangular matrix, definition, 89

V
Value, IEEE 754 floating-point standard, 10
Value-gradient noise, random numbers, 537
Value noise, random numbers, 537
Varying values, fragment shaders, 389
Vector addition
    basic class implementation, 43
    linear combinations, 39–40
    rules, 37–38
Vector length
    class implementations, 46
    examples, 44–45
    square root functions, 46–47
Vector operations, with matrices, 97–98
Vector product, …


Trademarks

The following trademarks, mentioned in this book and the accompanying CD-ROM, are the property of the following organizations:

3D Studio Max is a trademark of Autodesk, Inc.
AMD, K6, 3DNow!, ATI and combinations thereof are trademarks of Advanced Micro Devices, Inc.
ARM is a trademark of ARM Limited.
Asteroids, Battlezone, and Tempest are trademarks and © of Atari Interactive, Inc.
DirectX, Direct3D, Visual C++, and Windows are trademarks of Microsoft Corporation.
Intel, StrongARM, XScale, …


About the CD-ROM

Introduction

Many of the concepts in this book are visual, dynamic, or both. While static illustrations are used throughout the book to illuminate some of these concepts, the truly dynamic concepts can best be understood only by experiencing them in an interactive illustration. Computer-based examples serve this purpose quite well.

This book includes a CD-ROM that contains numerous interactive demonstration programs for concepts discussed in the book. The demos are supported on …


LICENSE.PDF. This file details the software license agreement (SLA) that all users are bound by when using the demo code. The "grant" clause of the SLA is as follows:

1. Grant. We grant you a nonexclusive, nontransferable, and perpetual license to use The Software subject to the terms and conditions of the Agreement:

(a) You must own a copy of The Book ("Own The Book") to use The Software. Ownership of one book by two or more people does not satisfy the intent of this constraint.

(b) The …


…addition, there are README files for each of the supported platforms. Put together, these files contain a wide range of information, including:

■ Descriptions of supported platforms, hardware, and development tools.
■ Instructions on how to prepare your computer to run the demos on each of the supported platforms.
■ Instructions on how to build the engine libraries and demos themselves (on each of the supported platforms).
■ Known issues with any of the demos or libraries.

The book makes many …
