OpenGL Programming Guide 8th Edition

This guide explains how to program with the OpenGL graphics system to deliver the visual effect you want.

Dave Shreiner, Graham Sellers, John Kessenich, Bill Licea-Kane

986 Pages




PDF Format

18.0 MB

Game Development


  • 19 Feb 2015

  • Page - 2

    Praise for OpenGL® Programming Guide, Eighth Edition: "Wow! This book is basically one-stop shopping for OpenGL information. It is the kind of book that I will be reaching for a lot. Thanks to Dave, Graham, John, and Bill for an amazing effort." —Mike Bailey, professor, Oregon State University. "The most recent Red Book parallels the grand tradition of OpenGL; …


  • Page - 4

    OpenGL® Programming Guide, Eighth Edition


  • Page - 6

    OpenGL® Programming Guide, Eighth Edition: The Official Guide to Learning OpenGL®, Version 4.3. Dave Shreiner, Graham Sellers, John Kessenich, Bill Licea-Kane; The Khronos OpenGL ARB Working Group. Upper Saddle River, NJ • Boston • Indianapolis • San Francisco • New York • Toronto • Montreal • London • Munich • Paris • Madrid • Capetown • Sydney • Tokyo • Singapore • …

  • Page - 7

    Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals. The authors and publisher have taken care in the preparation of this book, but make …

  • Page - 8

    For my family: Vicki, Bonnie, Bob, Cookie, Goatee, Phantom, Squiggles, Tuxedo, and Toby. —DRS. To Emily: welcome, we're so glad you're here! Chris and J.: you still rock! —GJAS. In memory of Phil Karlton, Celeste Fowler, Joan Eslinger, and Ben Cheatham.



  • Page - 50

    … built on OpenGL for simplifying application development, whether you're writing a video game, creating a visualization for scientific or medical purposes, or just showing images. However, the more modern versions of OpenGL differ from the original in significant ways. In this book, we describe how to use the most recent versions of OpenGL to create those applications. The following …

  • Page - 51

    … all OpenGL applications is usually similar to the following:

      • Initialize the state associated with how objects should be rendered.
      • Specify those objects to be rendered.

    Before you look at some code, let's introduce some commonly used graphics terms. Rendering, which we've used without defining previously, is the process by which a computer creates an image from …

  • Page - 52

    Figure 1.1  Image from our first OpenGL program: triangles.cpp

    Example 1.1  triangles.cpp: Our First OpenGL Program

        ///////////////////////////////////////////////////////////////////
        //
        // triangles.cpp
        //
        ///////////////////////////////////////////////////////////////////

        #include <iostream>
        using namespace std;

        #include "vgl.h"
        #include "LoadShaders.h"

        enum VAO_IDs { Triangles, NumVAOs };
        enum Buffer_IDs { …

  • Page - 53

    //---------------------------------------------------------------
        //
        // init
        //

        void init(void)
        {
            glGenVertexArrays(NumVAOs, VAOs);
            glBindVertexArray(VAOs[Triangles]);

            GLfloat vertices[NumVertices][2] = {
                { -0.90, -0.90 },  // Triangle 1
                {  0.85, -0.90 },
                { -0.90,  0.85 },
                {  0.90, -0.85 },  // Triangle 2
                {  0.90,  0.90 },
                { -0.85,  0.90 }
            };

            glGenBuffers(NumBuffers, Buffers);
            glBindBuffer(GL_ARRAY_BUFFER, …

  • Page - 54

    //---------------------------------------------------------------
        //
        // main
        //

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGBA);
            glutInitWindowSize(512, 512);
            glutInitContextVersion(4, 3);
            glutInitContextProfile(GLUT_CORE_PROFILE);
            glutCreateWindow(argv[0]);

            if (glewInit()) {
                cerr << "Unable to initialize GLEW ... exiting" << endl;
                exit(EXIT_FAILURE);
            }

            init();
            …

  • Page - 55

    … primitives, or image data for use in a technique called texture mapping, which we describe in Chapter 6. In this version of init(), we first specify the position information for the two triangles that we render. After that, we specify the shaders we're going to use in our program. In this case, we use only the required vertex and fragment shaders. The LoadShaders() routine is …

  • Page - 56

    … toolkit for opening windows and managing input, among other operations. We use a version of GLUT named Freeglut, a modern variant of the original library, originally written by Pawel W. Olszta with contributions from Andreas Umbach and Steve Baker (who currently maintains the library). Similarly, you see a single function, glewInit(), which comes from the OpenGL Extension …

  • Page - 57

    Table 1.1  Command Suffixes and Argument Data Types

    Suffix  Data Type                 Typical Corresponding C Type  OpenGL Type Definition
    b       8-bit integer             signed char                   GLbyte
    s       16-bit integer            signed short                  GLshort
    i       32-bit integer            int                           GLint, GLsizei
    f       32-bit floating-point     float                         GLfloat, GLclampf
    d       64-bit floating-point     double                        GLdouble, GLclampd
    ub      8-bit unsigned integer    unsigned char                 GLubyte
    us      16-bit unsigned integer   …

  • Page - 58

    OpenGL begins with the geometric data you provide (vertices and geometric primitives) and first processes it through a sequence of shader stages: vertex shading, tessellation shading (which itself uses two shaders), and finally geometry shading, before it's passed to the rasterizer. The rasterizer will generate fragments for any primitive that's inside of the clipping region, and …

  • Page - 59

    Vertex Shading. For each vertex that is issued by a drawing command, a vertex shader will be called to process the data associated with that vertex. Depending on whether any other pre-rasterization shaders are active, a vertex shader may range from very simple (perhaps just copying data to pass it through this shading stage, what we'll call a pass-through shader) to a very complex shader …

  • Page - 60

    Clipping. Occasionally, vertices will be outside of the viewport (the region of the window where you're permitted to draw), causing the primitive associated with that vertex to be modified so none of its pixels are outside of the viewport. This operation is called clipping and is handled automatically by OpenGL. Rasterization. Immediately after clipping, the updated primitives are …

  • Page - 61

    If a fragment successfully makes it through all of the enabled tests, it may be written directly to the framebuffer, updating the color (and possibly depth value) of its pixel; or, if blending is enabled, the fragment's color will be combined with the pixel's current color to generate a new color that is written into the framebuffer. As you saw in Figure 1.2, there's …

  • Page - 62

    The first function, glutInit(), initializes the GLUT library. It processes the command-line arguments provided to the program and removes any that control how GLUT might operate (such as specifying the size of a window). glutInit() needs to be the first GLUT function that your application calls, as it sets up data structures required by subsequent GLUT routines. glutInitDisplayMode() …

  • Page - 63

    … functions for processing things like user input, window resizing, and many other operations. GLUT is fully described in Appendix A, "Basics of GLUT: The OpenGL Utility Toolkit". The final function in main() is glutMainLoop(), which is an infinite loop that works with the window and operating systems to process user input and other such operations. It's glutMainLoop() that …

  • Page - 64

    Initializing Our Vertex-Array Objects. There's a lot going on in the functions and data of init(). Starting at the top, we begin by allocating a vertex-array object by calling glGenVertexArrays(). This causes OpenGL to allocate some number of vertex-array object names for our use; in our case, NumVAOs, which we specified in the global variable section of the code. …

  • Page - 65

    In our example, after we generate a vertex-array object name, we bind it with our call to glBindVertexArray(). Object binding like this is a very common operation in OpenGL, but it may not be immediately intuitive how or why it works. When you bind an object for the first time (e.g., the first time glBind*() is called for a particular object name), OpenGL will internally …

  • Page - 66

    You'll find many similar routines of the form glDelete* and glIs* for all the different types of objects in OpenGL. Allocating Vertex-Buffer Objects. A vertex-array object holds various data related to a collection of vertices. Those data are stored in buffer objects and managed by the currently bound vertex-array object. While there is only a single type of vertex-array object, …

  • Page - 67

    void glBindBuffer(GLenum target, GLuint buffer);

    Specifies the current active buffer object. target must be set to one of GL_ARRAY_BUFFER, GL_ELEMENT_ARRAY_BUFFER, GL_PIXEL_PACK_BUFFER, GL_PIXEL_UNPACK_BUFFER, GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, GL_TRANSFORM_FEEDBACK_BUFFER, or GL_UNIFORM_BUFFER. buffer specifies the buffer object to be bound. glBindBuffer() does three things: 1. When using a buffer of an …

  • Page - 68

    Loading Data into a Buffer Object. After initializing our vertex-buffer object, we need to transfer the vertex data from our objects into the buffer object. This is done by the glBufferData() routine, which does dual duty: allocating storage for holding the vertex data, and copying the data from arrays in the application to the OpenGL server's memory. As glBufferData() will be …

  • Page - 69

    glBufferData() will generate a GL_OUT_OF_MEMORY error if the requested size exceeds what the server is able to allocate, and a GL_INVALID_VALUE error if usage is not one of the permitted values. We know that was a lot to see at one time, but you will use this function so much that it's good to make it easy to find at the beginning of the book. For our …
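For reference, feeding the triangles.cpp vertex array into the currently bound buffer boils down to a single call. This is a sketch, not runnable on its own: it assumes a current OpenGL context and the vertices array shown in Example 1.1.

```c
/* Sketch: requires a current OpenGL context and a bound GL_ARRAY_BUFFER. */
/* One call both allocates the buffer's data store and copies vertices    */
/* into it; GL_STATIC_DRAW hints the data is set once and drawn many times. */
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
```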

  • Page - 70

    For an OpenGL programmer (at this point), a shader is a small function written in the OpenGL Shading Language (GLSL), a special language very similar to C++ for constructing OpenGL shaders. GLSL is used for all shaders in OpenGL, although not every feature in GLSL is usable in every OpenGL shader stage. You provide your GLSL shader to OpenGL as a …

  • Page - 71

    … data comes from; it merely sees its input variables populated with data every time it executes. It's our responsibility to connect the shader plumbing (this is our term, but you'll see why it makes sense) so that data in your application can flow into and between the various OpenGL shader stages. In our simple example, we have one input variable named vPosition, which …

  • Page - 72

    Similarly, we need a fragment shader to accompany our vertex shader. Here's the one for our example, shown in Example 1.3.

    Example 1.3  Fragment Shader for triangles.cpp: triangles.frag

        #version 430 core

        out vec4 fColor;

        void main()
        {
            fColor = vec4(0.0, 0.0, 1.0, 1.0);
        }

    Hopefully, much of this looks very familiar, even if it's an entirely different type of shader. We have the …

  • Page - 73

    … shader "in" variables to a vertex-attribute array, and we do that with the glVertexAttribPointer() routine.

    void glVertexAttribPointer(GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid *pointer);

    Specifies where the data values for index (shader attribute location) can be accessed. pointer is the offset from the start of the buffer object …

  • Page - 74

    Parameter    Value     Explanation

    normalized   GL_FALSE  We set this to GL_FALSE for two reasons: first, and most importantly, because positional coordinates can take on essentially any value, we don't want them constrained to the range [−1, 1]; and second, the values are not integer types (e.g., GLint or GLshort).

    stride       0         As our data are "tightly packed", which …
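Putting those parameter choices together, the attribute setup for triangles.cpp reduces to two calls. This is a sketch, not a standalone program: it assumes a current OpenGL context, a bound GL_ARRAY_BUFFER holding the vertex data, the vPosition attribute location, and the BUFFER_OFFSET helper macro from the book's headers.

```c
/* Sketch: requires a current OpenGL context and a bound GL_ARRAY_BUFFER. */
glVertexAttribPointer(vPosition,         /* shader attribute location      */
                      2,                 /* two floats (x, y) per vertex   */
                      GL_FLOAT,          /* data type of each component    */
                      GL_FALSE,          /* don't normalize the values     */
                      0,                 /* tightly packed, so stride is 0 */
                      BUFFER_OFFSET(0)); /* data begins at buffer offset 0 */
glEnableVertexAttribArray(vPosition);    /* turn the attribute array on    */
```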

  • Page - 75

    void glEnableVertexAttribArray(GLuint index);
    void glDisableVertexAttribArray(GLuint index);

    Specifies that the vertex array associated with index be enabled or disabled. index must be a value between zero and GL_MAX_VERTEX_ATTRIBS − 1. Now, all that's left is to draw something. Our First OpenGL Rendering. With all that setup and data initialization, rendering (for the moment) will be …

  • Page - 76

    We discuss depth and stencil buffering, as well as an expanded discussion of color, in Chapter 4, "Color, Pixels, and Framebuffers". You may be asking yourself how we set the color that glClear() should use. In this first program, we used OpenGL's default clearing color, which is black. To change the clear color, call glClearColor(). void glClearColor(GLclampf red, GLclampf …

  • Page - 77

    Next, we call glDrawArrays(), which actually sends vertex data to the OpenGL pipeline.

    void glDrawArrays(GLenum mode, GLint first, GLsizei count);

    Constructs a sequence of geometric primitives using the elements from the currently bound vertex array, starting at first and ending at first + count − 1. mode specifies what kind of primitives are constructed and is one of GL_POINTS, …
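The clear-bind-draw sequence described here can be sketched as a display callback for triangles.cpp. This is an assumption based on the setup code shown earlier (VAOs, Triangles, and NumVertices come from Example 1.1) and requires a current OpenGL context to run.

```c
/* Sketch of a display callback: clear the framebuffer, select our */
/* vertex-array object, and draw both triangles.                   */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBindVertexArray(VAOs[Triangles]);
    glDrawArrays(GL_TRIANGLES, 0, NumVertices); /* 6 vertices -> 2 triangles */
    glFlush(); /* request that pending commands be sent to the server */
}
```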

  • Page - 78

    … to render an object, draw a full scene, or any other operation that OpenGL might do. In order to do that accurately, you need to know when OpenGL has completed whatever operations you want to measure. While the aforementioned command, glFlush(), may sound like the right answer, it's not. In particular, glFlush() merely requests that all pending commands be sent to the OpenGL …

  • Page - 79

    … determine a feature's state before changing it for your own needs. glIsEnabled() will return whether a particular capability is currently enabled.

    GLboolean glIsEnabled(GLenum capability);

    Returns GL_TRUE or GL_FALSE, depending on whether or not the queried capability is currently activated.

  • Page - 80

    Chapter 2: Shader Fundamentals. Chapter Objectives. After reading this chapter, you'll be able to do the following:

      • Identify the various types of shaders that OpenGL uses to create images.
      • Construct and compile shaders using the OpenGL Shading Language.
      • Pass data into shaders using a variety of mechanisms available in OpenGL.
      • Employ advanced GLSL shading capabilities to …

  • Page - 81

    This chapter introduces how to use programmable shaders with OpenGL. Along the way, we describe the OpenGL Shading Language (commonly called GLSL) and detail how shaders will influence your OpenGL applications. This chapter contains the following major sections:

      • "Shaders and OpenGL" discusses programmable graphics shaders in the context of OpenGL applications.
      • "OpenGL's …

  • Page - 82

    … programming language specially designed for graphics, you'll find it's very similar to the "C" language, with a little C++ mixed in. In this chapter, we'll describe how to write shaders, gradually introducing GLSL along the way; discuss compiling and integrating shaders into your application; and explain how data in your application passes between the various shaders. OpenGL's …

  • Page - 83

    … processing in the fragment-testing and blending parts of the pipeline. Fragment shading operation is discussed in many sections of the text. 5. The compute shading stage is not part of the graphical pipeline like the stages above, but rather stands on its own as the only stage in a program. A compute shader processes generic work items, driven by an application-chosen range, …

  • Page - 84

    … having a type (e.g., vec4, which we'll get into momentarily), data is copied into the shader from OpenGL through the in variables and, likewise, copied out of the shader through the out variables. The values in those variables are updated every time OpenGL executes the shader (e.g., if OpenGL is processing vertices, then new values are passed through those variables for …

  • Page - 85

    Declaring Variables. GLSL is a typed language; every variable must be declared and have an associated type. Variable names conform to the same rules as those for "C": you can use letters, numbers, and the underscore character (_) to compose variable names. However, neither a digit nor an underscore can be the first character in a variable name. Similarly, variable names …

  • Page - 86

    • Loop iteration variables, such as i in the loop

          for (int i = 0; i < 10; ++i) {
              // loop body
          }

      are scoped only to the body of the loop.

    Variable Initialization. Variables may also be initialized when declared. For example:

        int i, numParticles = 1500;
        float force, g = -9.8;
        bool falling = true;
        double pi = 3.1415926535897932384626LF;

    Integer literal constants may be …

  • Page - 87

    The above type conversions work for scalars, vectors, and matrices of these types. Conversions will never change whether something is a vector or a matrix, or how many components it has. Conversions also don't apply to arrays or structures. Any other conversion of values requires explicit conversion using a conversion constructor. A constructor, as in other languages like C++, …

  • Page - 88

    Matrix types that list both dimensions, such as mat4x3, use the first value to specify the number of columns and the second the number of rows. Variables declared with these types can be initialized similarly to their scalar counterparts:

        vec3 velocity = vec3(0.0, 2.0, 3.0);

    and converting between types is equally accessible:

        ivec3 steps = ivec3(velocity);

    Vector constructors can also …

  • Page - 89

    vec3 column1 = vec3(1.0, 2.0, 3.0);
        vec3 column2 = vec3(4.0, 5.0, 6.0);
        vec3 column3 = vec3(7.0, 8.0, 9.0);

        mat3 M = mat3(column1, column2, column3);

    or even

        vec2 column1 = vec2(1.0, 2.0);
        vec2 column2 = vec2(4.0, 5.0);
        vec2 column3 = vec2(7.0, 8.0);

        mat3 M = mat3(column1, 3.0, column2, 6.0, column3, 9.0);

    all yielding the same matrix (written here row by row; GLSL matrices are column-major):

        1.0  4.0  7.0
        2.0  5.0  8.0
        3.0  6.0  9.0

    Accessing …

  • Page - 90

    Table 2.4  Vector Component Accessors

    Component Accessors  Description
    (x, y, z, w)         components associated with positions
    (r, g, b, a)         components associated with colors
    (s, t, p, q)         components associated with texture coordinates

    A common use for component-wise access to vectors is swizzling components, as you might do with colors, perhaps for color-space conversion. For example, you …
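To make swizzling concrete, here is a short GLSL sketch; the variable names are illustrative, not from the book.

```glsl
vec4 color = vec4(0.2, 0.4, 0.6, 1.0);

vec3 rgb  = color.rgb;      // select the first three components
vec4 bgra = color.bgra;     // reorder components: a swizzle
vec2 aa   = color.aa;       // components may be repeated when reading
color.rg  = vec2(1.0, 0.0); // swizzles also work as assignment targets
```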

  • Page - 91

    struct Particle {
            float lifetime;
            vec3  position;
            vec3  velocity;
        };

        Particle p = Particle(10.0, pos, vel); // pos, vel are vec3s

    Likewise, to reference elements of a structure, use the familiar "dot" notation as you would in "C". Arrays. GLSL also supports arrays of any type, including structures. As with "C", arrays are indexed using brackets ([ ]). The range of …

  • Page - 92

    … array syntax for indexing vectors and matrices (m[2] is the third column of a matrix m).

        mat3x4 m;
        int c = m.length();    // number of columns in m: 3
        int r = m[0].length(); // number of components in column vector 0: 4

    When the length is known at compile time, the length() method will return a compile-time constant that can be used where compile-time constants are required. …

  • Page - 93

    Table 2.5  GLSL Type Modifiers

    Type Modifier  Description
    const          Labels a variable as read-only. It will also be a compile-time constant if its initializer is a compile-time constant.
    in             Specifies that the variable is an input to the shader stage.
    out            Specifies that the variable is an output from a shader stage.
    uniform        Specifies that the value is passed to the shader from the …

  • Page - 94

    … shader stages enabled in a program, and they must be declared as global variables. Any type of variable, including structures and arrays, can be specified as uniform. A shader cannot write to a uniform variable and change its value. For example, you might want to use a color for shading a primitive. You might declare a uniform variable to pass that information into your shaders. …

  • Page - 95

    Example 2.2  Obtaining a Uniform Variable's Index and Assigning Values

        GLint   timeLoc;   /* Uniform index for variable "time" in shader */
        GLfloat timeValue; /* Application time */

        timeLoc = glGetUniformLocation(program, "time");
        glUniform1f(timeLoc, timeValue);

    void glUniform{1234}{fdi ui}(GLint location, TYPE value);
    void glUniform{1234}{fdi ui}v(GLint location, GLsizei count, const TYPE * …

  • Page - 96

    buffer Storage Qualifier. The recommended way to share a large buffer with the application is through use of a buffer variable. These are much like uniform variables, except that they can be modified by the shader. Typically, you'd use buffer variables in a buffer block, and blocks in general are described later in this chapter. The buffer modifier specifies that the subsequent …

  • Page - 97

    Table 2.6  GLSL Operators and Their Precedence

    Precedence  Operators   Accepted Types             Description
    1           ( )                                    grouping of operations
    2           [ ]         arrays, matrices, vectors  array subscripting
                f( )        functions                  function calls and constructors
                . (period)  structures                 structure field or method access
                ++ --       arithmetic                 post-increment and -decrement
    3           ++ --       arithmetic                 pre-increment and -decrement
                + -         arithmetic                 unary explicit …

  • Page - 98

    The normal restrictions apply: the dimensionality of the matrix and the vector must match. Additionally, scalar multiplication with a vector or matrix will produce the expected result. One notable exception is that the multiplication of two vectors will result in component-wise multiplication of the components; however, multiplying two matrices will result in normal matrix multiplication. …
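A short GLSL sketch of those rules, with illustrative values not taken from the book:

```glsl
mat3 M = mat3(1.0);            // identity matrix
vec3 v = vec3(1.0, 2.0, 3.0);
vec3 w = vec3(2.0, 2.0, 2.0);

vec3 a = M * v;   // matrix * vector: linear-algebra product
vec3 b = v * w;   // vector * vector: component-wise, giving (2.0, 4.0, 6.0)
vec3 c = 2.0 * v; // scalar * vector: scales every component
mat3 P = M * M;   // matrix * matrix: normal matrix multiplication
```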

  • Page - 99

    Looping Constructs. GLSL supports the familiar "C" forms of for, while, and do ... while loops. The for loop permits the declaration of the loop iteration variable in the initialization clause of the for loop. The scope of iteration variables declared in this manner lasts only for the lifetime of the loop.

        for (int i = 0; i < 10; ++i) {
            ...
        }

        while (n < 10) { …

  • Page - 100

    … Appendix C, as well as support for user-defined functions. User-defined functions can be defined in a single shader object and reused in multiple shader programs. Declarations. Function declaration syntax is very similar to "C", with the exception of the access modifiers on variables:

        returnType functionName([accessModifier] type1 variable1,
                                [accessModifier] type2 variable2, ...)
        {
            // …

  • Page - 101

    Table 2.8  GLSL Function Parameter Access Modifiers

    Access Modifier  Description
    in               value copied into a function (default if not specified)
    const in         read-only value copied into a function
    out              value copied out of a function (undefined upon entrance into the function)
    inout            value copied into and out of a function

    The in keyword is optional. If a variable does not include an access …
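As a brief illustration of those modifiers, a hypothetical GLSL helper (not from the book) might look like this:

```glsl
// Returns the reflected direction, and also reports, through an `out`
// parameter, how aligned the incoming ray was with the surface normal.
vec3 reflectAndMeasure(in vec3 incident, in vec3 normal, out float alignment)
{
    alignment = dot(-incident, normal); // copied out to the caller on return
    return reflect(incident, normal);   // GLSL built-in reflection function
}
```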

  • Page - 102

    void main()
        {
            if (f == g) // f and g might be not equal
                ;
        }

    In this example, it would not matter if invariant or precise was used on any of the variables involved, as they only affect computations done on the graphics device. The invariant Qualifier. The invariant qualifier may be applied to any shader output variable. It will guarantee that if two shader invocations …

  • Page - 103

    Generally, you use precise instead of invariant when you need to get the same result from an expression even if values feeding the expression are permuted in a way that should not mathematically affect the result. For example, the following expression should yield the same result if the values for a and b are exchanged. It should also yield the same result if the values for …

  • Page - 104

    Preprocessor Directives. Table 2.9 lists the preprocessor directives accepted by the GLSL preprocessor and their functions.

    Table 2.9  GLSL Preprocessor Directives

    Preprocessor Directive                          Description
    #define, #undef                                 control the definition of constants and macros, similar to the "C" preprocessor
    #if, #ifdef, #else, #elif, #endif               conditional code management, similar to the "C" preprocessor, including the defined operator
    …

  • Page - 105

    Table 2.10  GLSL Preprocessor Predefined Macros

    __LINE__     line number, defined as one more than the number of newline characters processed, and modifiable by the #line directive
    __FILE__     source string number currently being processed
    __VERSION__  integer representation of the OpenGL Shading Language version

    Likewise, macros (excluding those defined by GLSL) may be undefined by using the #undef …

  • Page - 106

    … respectively. These options may only be issued outside of a function definition. By default, optimization is enabled for all shaders. Debug Compiler Option. The debug option enables or disables additional diagnostic output of the shader. You can enable or disable debugging by issuing either #pragma debug(on) or #pragma debug(off), respectively. Similar to the optimize option, these options may …

  • Page - 107

    The options available are shown in Table 2.11.

    Table 2.11  GLSL Extension Directive Modifiers

    Directive  Description
    require    Flag an error if the extension is not supported or if the all-extension specification is used.
    enable     Give a warning if the particular extensions specified are not supported; flag an error if the all-extension specification is used.
    warn       Give a warning if the …

  • Page - 108

    Uniform Blocks. As your shader programs become more complex, it's likely that the number of uniform variables they use will increase. Often the same uniform value is used within several shader programs. As uniform locations are generated when a shader is linked (i.e., when glLinkProgram() is called), the indices may change even though (to you) the values of the uniform variables …

  • Page - 109

    … arranged (after specifying a layout declaration). The possible qualifiers are detailed in Table 2.12.

    Table 2.12  Layout Qualifiers for Uniform Blocks

    Layout Qualifier  Description
    shared            Specify that the uniform block is shared among multiple programs. (This is the default layout; not to be confused with the shared storage qualifier.)
    packed            Lay out the uniform block to minimize its memory …
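For instance, a uniform block with an explicit layout might be declared as follows. This is an illustrative sketch; the block and member names are hypothetical, and std140 is the explicitly sized layout from Table 2.12.

```glsl
layout (std140) uniform Matrices {
    mat4 ModelView;     // member offsets follow the std140 rules,
    mat4 Projection;    // so the application can compute them
    vec4 LightPosition; // without querying each one at run time
};
```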

  • Page - 110

    Accessing Uniform Blocks from Your Application. Because uniform variables form a bridge to share data between shaders and your application, you need to find the offsets of the various uniform variables inside the named uniform blocks in your shaders. Once you know the location of those variables, you can initialize them with data, just as you would any type of buffer object …

  • Page - 111

    void glBindBufferRange(GLenum target, GLuint index, GLuint buffer, GLintptr offset, GLsizeiptr size);
    void glBindBufferBase(GLenum target, GLuint index, GLuint buffer);

    Associates the buffer object buffer with the named uniform block associated with index. target can be either GL_UNIFORM_BUFFER (for uniform blocks) or GL_TRANSFORM_FEEDBACK_BUFFER (for use with transform feedback; Chapter 5). index is the …

  • Page - 112

    The layout of uniform variables in a named uniform block is controlled by the layout qualifier specified when the block was compiled and linked. If you used the default layout specification, you will need to determine the offset and data-store size of each variable in the uniform block. To do so, you will use the pair of calls: glGetUniformIndices(), to retrieve the index of …

  • Page - 113

    "    mat3 rot = uuT + cos(angle)*(I - uuT) + sin(angle)*S;"
        "    pos *= scale;"
        "    pos *= rot;"
        "    pos += translation;"
        "    fColor = vec4(scale, scale, scale, 1);"
        "    gl_Position = vec4(pos, 1);"
        "}"
        };

        const char* fShader = {
            "#version 330 core\n"
            "uniform Uniforms {"
            "    vec3 translation;"
            "    float scale;"
            " …

  • Page - 114

    CASE(GL_FLOAT_MAT2x3,  6, GLfloat);
            CASE(GL_FLOAT_MAT2x4,  8, GLfloat);
            CASE(GL_FLOAT_MAT3,    9, GLfloat);
            CASE(GL_FLOAT_MAT3x2,  6, GLfloat);
            CASE(GL_FLOAT_MAT3x4, 12, GLfloat);
            CASE(GL_FLOAT_MAT4,   16, GLfloat);
            CASE(GL_FLOAT_MAT4x2,  8, GLfloat);
            CASE(GL_FLOAT_MAT4x3, 12, GLfloat);
        #undef CASE

            default:
                fprintf(stderr, "Unknown type: 0x%x\n", type);
                exit(EXIT_FAILURE);
                break;
            }

            return size;
        }

        void init()
        {
            GLuint program; …

  • Page - 115

    else {
            enum { Translation, Scale, Rotation, Enabled, NumUniforms };

            /* Values to be stored in the buffer object */
            GLfloat   scale         = 0.5;
            GLfloat   translation[] = { 0.1, 0.1, 0.0 };
            GLfloat   rotation[]    = { 90, 0.0, 0.0, 1.0 };
            GLboolean enabled       = GL_TRUE;

            /* Since we know the names of the uniforms
            ** in our block, make an array of those values */
            const char* names[NumUniforms] = …

  • Page - 116

    glBindBufferBase(GL_UNIFORM_BUFFER, uboIndex, ubo);
        }
        ...
        }

    Buffer Blocks. GLSL buffer blocks (or, from the application's perspective, shader storage buffer objects) operate quite similarly to uniform blocks. Two critical differences give these blocks great power, however. First, the shader can write to them, modifying their content as seen from other shader invocations or the application. Second, …
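A buffer block declaration looks much like a uniform block, with the buffer qualifier in place of uniform. This is an illustrative sketch with hypothetical names; std430 is a layout commonly used with buffer blocks.

```glsl
layout (std430, binding = 0) buffer Particles {
    int  count;      // the shader may read *and* write these members
    vec4 position[]; // unsized array: its length is set by the buffer's size
};
```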

  • Page - 117

    In/Out Blocks. Shader variables output from one stage and input into the next stage can also be organized into interface blocks. These logical groupings can make it easier to visually verify interface matches between stages, as well as to make linking separate programs easier. For example, a vertex shader might output:

        out Lighting {
            vec3 normal;
            vec2 bumpCoord;
        };

    This would match a …

  • Page - 118

    Figure 2.1  Shader-compilation command sequence (character strings → glCreateShader → glShaderSource → glCompileShader → shader object; glCreateProgram → glAttachShader → glLinkProgram → glUseProgram → executable shader program)

    For each shader program you want to use in your application, you'll need to do the following sequence of steps: For each shader …


    Then, to link multiple shader objects into a shader program, you’ll 1. Create a shader program. 2. Attach the appropriate shader objects to the shader program. 3. Link the shader program. 4. Verify that the shader link phase completed successfully. 5. Use the shader for vertex or fragment processing. Why create multiple shader objects? Just as you might reuse a function in read more..


    To compile a shader object’s source, use glCompileShader(). void glCompileShader(GLuint shader); Compiles the source code for shader. The results of the compilation can be queried by calling glGetShaderiv() with an argument of GL_COMPILE_STATUS. Similar to when you compile a ‘‘C’’ program, you need to determine if the compilation finished successfully. A call to glGetShaderiv(), with read more..


    accomplished by attaching a shader object to the program by calling glAttachShader(). void glAttachShader(GLuint program, GLuint shader); Associates the shader object, shader, with the shader program, program.A shader object can be attached to a shader program at any time, although its functionality will only be available after a successful link of the shader program. A shader object can read more..


then you can determine the cause of the failure by retrieving the program link information log by calling glGetProgramInfoLog(). void glGetProgramInfoLog(GLuint program, GLsizei bufSize, GLsizei *length, char *infoLog); Returns the log associated with the last linking of program. The log is returned as a null-terminated character string of length characters in the buffer infoLog.


void glDeleteProgram(GLuint program); Deletes program immediately if not currently in use in any context, or schedules program for deletion when the program is no longer in use by any contexts. Finally, for completeness, you can also determine whether a name has already been reserved as a shader object by calling glIsShader(), or as a shader program by calling glIsProgram(): GLboolean glIsShader(GLuint shader); GLboolean glIsProgram(GLuint program);


functions, you either created two distinct shaders, or used an if-statement to make a run-time selection, as demonstrated in Example 2.5. Example 2.5 Static Shader Control Flow #version 330 core void func_1() { ... } void func_2() { ... } uniform int func; void main() { if (func == 1) func_1(); else func_2(); } Shader subroutines are conceptually similar to function pointers in C.


    3. Finally, specify the subroutine uniform variable that will hold the ‘‘function pointer’’ for the subroutine you’ve selected in your application: subroutine uniform subroutineType variableName; Demonstrating those steps together, consider the following example where we would like to dynamically select between ambient and diffuse lighting: Example 2.6 Declaring a Set of Subroutines read more..


    In step 3 described on page 78, a subroutine uniform value was declared, and we will need its location in order to set its value. As compared to other shader uniforms, subroutine uniforms use glGetSubroutineUniformLocation() to retrieve their locations. GLint glGetSubroutineUniformLocation(GLuint program, GLenum shadertype, const char* name); Returns the location of the subroutine uniform named read more..


    Once you have both the available subroutine indices, and subroutine uniform location, use glUniformSubroutinesuiv() to specify which subroutine should be executed in the shader. All active subroutine uniforms for a shader stage must be initialized. GLuint glUniformSubroutinesuiv(GLenum shadertype, GLsizei count, const GLuint * indices); Sets count shader subroutine uniforms using the values in read more..


    if (ambientIndex == GL_INVALID_INDEX || diffuseIndex == GL_INVALID_INDEX) { // Error: the specified subroutines are not active in // the currently bound program for the GL_VERTEX_SHADER // stage. } else { GLsizei n; glGetIntegerv(GL_MAX_SUBROUTINE_UNIFORM_LOCATIONS, &n); GLuint *indices = new GLuint[n]; indices[materialShaderLoc] = ambientIndex; glUniformSubroutinesuiv(GL_VERTEX_SHADER, n, indices); delete [] read more..


Once your collection of shader programs is compiled, you need to use the new shader pipeline constructs to combine shader stages from multiple programs into a usable program pipeline. As with most objects in OpenGL, there is a gen-bind-delete sequence of calls to make. A shader pipeline is created by calling glGenProgramPipelines(), which will create an unused program pipeline name.


variable values. First, you can select an active shader program with glActiveShaderProgram(), which causes calls to glUniform*() and glUniformMatrix*() to assign values to that particular shader program's uniform variables. Alternatively, and preferably, you can call glProgramUniform*() and glProgramUniformMatrix*(), which take an explicit program object in addition to the other parameters.




    Chapter 3 Drawing with OpenGL Chapter Objectives After reading this chapter, you will be able to: • Identify all of the rendering primitives available in OpenGL. • Initialize and populate data buffers for use in rendering geometry. • Optimize rendering using advanced techniques like instanced rendering. 85 read more..


    The primary use of OpenGL is to render graphics into a framebuffer. To accomplish this, complex objects are broken up into primitives---points, lines, and triangles that when drawn at high enough density give the appearance of 2D and 3D objects. OpenGL includes many functions for rendering such primitives. These functions allow you to describe the layout of primitives in memory, read more..


    Points Points are represented by a single vertex. The vertex represents a point in four-dimensional homogeneous coordinates. As such, a point really has no area, and so in OpenGL it is really an analogue for a square region of the display (or draw buffer). When rendering points, OpenGL determines which pixels are covered by the point using a set of rules called rasterization read more..


    only in the fragment shader (it doesn’t make much sense to include it in other shaders) and has a defined value only when rendering points. By simply using gl_PointCoord as a source for texture coordinates, bitmaps and textures can be used instead of a simple square block. Combined with alpha blending or with discarding fragments (using the discard keyword), it’s even possible read more..


    horizontally), it is replicated horizontally. If it is x-major then it is replicated vertically. The OpenGL specification is somewhat liberal on how ends of lines are represented and how wide lines are rasterized when antialiasing is turned off. When antialiasing is turned on, lines are treated as rectangles aligned along the line, with width equal to the current line width. read more..


shared point and the next two vertices. An arbitrarily complex convex polygon can be rendered as a triangle fan. Figure 3.2 shows the vertex layout of a triangle fan. (Figure 3.2, Vertex layout for a triangle fan: vertex 0 is shared by every triangle, with the remaining vertices 1 through 7 fanning around it.) These primitive types are used by the drawing functions that will be introduced in the next section. They are represented by OpenGL tokens.


void glPolygonMode(GLenum face, GLenum mode); Controls the drawing mode for a polygon's front and back faces. The parameter face must be GL_FRONT_AND_BACK, while mode can be GL_POINT, GL_LINE, or GL_FILL to indicate whether the polygon should be drawn as points, outlined, or filled. By default, both the front and back faces are drawn filled. Reversing and Culling Polygon Faces


    void glCullFace(GLenum mode); Indicates which polygons should be discarded (culled) before they’re converted to screen coordinates. The mode is either GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK to indicate front-facing, back-facing, or all polygons. To take effect, culling must be enabled using glEnable() with GL_CULL_FACE; it can be disabled with glDisable() and the same argument. Advanced In read more..


    void glGenBuffers(GLsizei n, GLuint *buffers); Returns n currently unused names for buffer objects in the array buffers. After calling glGenBuffers(), you will have an array of buffer object names in buffers, but at this time, they’re just placeholders. They’re not actually buffer objects---yet. The buffer objects themselves are not actually created until the name is first bound to read more..


    Table 3.2 (continued) Buffer Binding Targets Target Uses GL_PIXEL_UNPACK_BUFFER The pixel unpack buffer is the opposite of the pixel pack buffer---it is used as the source of data for commands like glTexImage2D(). GL_TEXTURE_BUFFER Texture buffers are buffers that are bound to texture objects so that their data can be directly read inside shaders. The GL_TEXTURE_BUFFER binding point provides a read more..


    Right, so we now have a buffer object bound to one of the targets listed in Table 3.2, now what? The default state of a newly created buffer object is a buffer with no data in it. Before it can be used productively, we must put some data into it. Getting Data into and out of Buffers There are many ways to get data into and out of buffers in OpenGL. These read more..


    Table 3.3 Buffer Usage Tokens Token Fragment Meaning _STATIC_ The data store contents will be modified once and used many times. _DYNAMIC_ The data store contents will be modified repeatedly and used many times. _STREAM_ The data store contents will be modified once and used at most a few times. _DRAW The data store contents are modified by the application and used as the source read more..


Now turn your attention to the second part of the usage tokens. This part of the token indicates who is responsible for updating and using the data. When the token includes _DRAW, this implies that the buffer will be used as a source of data during regular OpenGL drawing operations. It will be read a lot, compared to data whose usage token includes _READ.


    Example 3.1 Initializing a Buffer Object with glBufferSubData() // Vertex positions static const GLfloat positions[] = { -1.0f, -1.0f, 0.0f, 1.0f, 1.0f, -1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, -1.0f, 1.0f, 0.0f, 1.0f }; // Vertex colors static const GLfloat colors[] = { 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, }; // The buffer object GLuint buffer; // Reserve read more..


    void glClearBufferData(GLenum target, GLenum internalformat, GLenum format, GLenum type, const void * data); void glClearBufferSubData(GLenum target, GLenum internalformat, GLintptr offset, GLintptr size, GLenum format, GLenum type, const void * data); Clear all or part of a buffer object’s data store. The data store of the buffer bound to target is filled with the data stored in data. read more..


    Copies part of the data store of the buffer object bound to readtarget into the data store of the buffer object bound to writetarget. The size bytes of data at readoffset within readtarget are copied into writetarget at writeoffset. If readoffset or writeoffset together with size would cause either OpenGL to access any area outside the bound buffer objects, a GL_INVALID_VALUE read more..


    glGetBufferSubData()) is that they all cause OpenGL to make a copy of your data. glBufferData() and glBufferSubData() both copy data from your application’s memory into memory owned by OpenGL. Obviously, glCopyBufferSubData() causes a copy of previously buffered data to be made. glGetBufferSubData() copies data from memory owned by OpenGL into memory provided by your application. Depending read more..


    bad things will happen, which may include ignoring writes to the buffer, corrupting your data or even crashing your program.3 Note: When you map a buffer whose data store is in memory that will not be accessible to your application, OpenGL may need to move the data around so that when you use the pointer it gives you, you get what you expect. Likewise, when you’re done read more..


    Example 3.2 Initializing a Buffer Object with glMapBuffer() GLuint buffer; FILE * f; size_t filesize; // Open a file and find its size f = fopen("data.dat", "rb"); fseek(f, 0, SEEK_END); filesize = ftell(f); fseek(f, 0, SEEK_SET); // Create a buffer by generating a name and binding it to a buffer // binding point - GL_COPY_WRITE_BUFFER here (because the binding means read more..


    or copied, OpenGL can start that process when you call glUnmapBuffer() and return immediately, content in the knowledge that it can finish the operation at its leisure without your application interfering in any way. Thus the copy that OpenGL needs to perform can overlap whatever your application does next (making more buffers, reading more files, and so on). If it doesn’t need read more..


    Table 3.5 (continued) Flags for Use with glMapBufferRange() Flag Meaning GL_MAP_INVALIDATE_BUFFER_BIT If specified, the entire contents of the buffer may be discarded and considered invalid, regardless of the specified range. Any data lying outside the mapped range of the buffer object becomes undefined, as does any data within the range but not subsequently written by the application. This read more..


    ends up in the parts of the buffer that you didn’t map. Either way, setting the flags indicates that you’re planning to update the rest of the buffer with subsequent maps.4 When OpenGL is allowed to throw away the rest of the buffer’s data, it doesn’t have to make any effort to merge your modified data back into the rest of the original buffer. It’s probably read more..


    make that data usable such as copying it to graphics processor visible memory, or flushing, or invalidating data caches. It can do these things even though some or all of the buffer is still mapped. This is a useful way to parallelize OpenGL with other operations that your application might perform. For example, if you need to load a very large piece of data from a file read more..


    Note that semantically, calling glBufferData() with a NULL pointer does a very similar thing to calling glInvalidateBufferData(). Both methods will tell the OpenGL implementation that it is safe to discard the data in the buffer. However, glBufferData() logically recreates the underlying memory allocation, whereas glInvalidateBufferData() does not. Depending on the OpenGL implementation, it may read more..


    specified when packed vertex data is used. The type parameter is a token that specifies the type of the data that is contained in the buffer object. Table 3.6 describes the token names that may be specified for type and the OpenGL data type that they correspond to: Table 3.6 Values of Type for glVertexAttribPointer() Token Value OpenGL Type GL_BYTE GLbyte (signed 8-bit bytes) read more..


For signed data, the conversion is f = (2c + 1) / (2^b - 1), whereas, if the data type is unsigned, the formula f = c / (2^b - 1) is used. In both cases, f is the resulting floating-point value, c is the incoming integer component, and b is the number of bits in the data type (i.e., 8 for GL_UNSIGNED_BYTE, 16 for GL_SHORT, and so on). Note that signed data types are scaled and biased before being divided by the maximum representable value.


    normalize parameter. normalize is missing because it’s not relevant to integer vertex attributes. Only the integer data type tokens, GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, and GL_UNSIGNED_INT may be used for the type parameter. Double-Precision Vertex Attributes The third variant of glVertexAttribPointer() is glVertexAttribLPointer()--- here the L stands for ‘‘long’’. read more..


been called GL_ZYXW.5 Looking at the data layout within the 32-bit word, you would see the bits divided up as shown in Figure 3.3. (Figure 3.3, Packing of elements in a BGRA-packed vertex attribute: the W component occupies the top two bits, 31 and 30, with the X, Y, and Z components in the three 10-bit fields below it.) In Figure 3.3, the elements of the vertex are packed into a single 32-bit integer


    These functions are used to tell OpenGL which vertex attributes are backed by vertex buffers. Before OpenGL will read any data from your vertex buffers, you must enable the corresponding vertex attribute arrays with glEnableVertexAttribArray(). You may wonder what happens if you don’t enable the attribute array for one of your vertex attributes. In that case, the static vertex read more..


    parameters to the range [0, 1] or [−1, 1] depending on whether the parameters are signed or unsigned. These are: void glVertexAttrib4Nub(GLuint index, GLubyte x, GLubyte y, GLubyte z, GLubyte w); void glVertexAttrib4N{bsi ub us ui}v(GLuint index, const TYPE *v); Specifies a single or multiple vertex-attribute values for attribute index, normalizing the parameters to the range [0, 1] read more..


If you use one of the glVertexAttrib*() functions with fewer components than there are in the underlying vertex attribute (e.g., you use glVertexAttrib2f() to set the value of a vertex attribute declared as a vec4), default values are filled in for the missing components. For w, 1.0 is used as the default value, and for y and z, 0.0 is used.6 If you use a function


    void glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid *indices); Defines a sequence of geometric primitives using count number of elements, whose indices are stored in the buffer bound to the GL_ELEMENT_ARRAY_BUFFER buffer binding point (the element array buffer). indices represents an offset, in bytes, into the element array buffer where the indices begin. type must be read more..


    Another command that behaves similarly to glDrawElements() is glDrawRangeElements(). void glDrawRangeElements(GLenum mode, GLuint start, GLuint end, GLsizei count, GLenum type, const GLvoid *indices); This is a restricted form of glDrawElements() in that it forms a contract between the application (i.e., you) and OpenGL that guarantees that any index contained in the section of the element read more..


    void glDrawArraysIndirect(GLenum mode, const GLvoid *indirect); Behaves exactly as glDrawArraysInstanced(), except that the parameters for the drawing command are taken from a structure stored in the buffer bound to the GL_DRAW_INDIRECT_BUFFER binding point (the draw indirect buffer). indirect represents an offset into the draw indirect buffer. mode is one of the primitive types that is read more..


    As with glDrawArraysIndirect(), the parameters for the draw command in glDrawElementsIndirect() come from a structure stored at offset indirect stored in the element array buffer. The structure’s declaration in ‘‘C’’ is presented in Example 3.4: Example 3.4 Declaration of the DrawElementsIndirectCommand Structure typedef struct DrawElementsIndirectCommand_t { GLuint count; GLuint primCount; GLuint read more..


    Calling glMultiDrawArrays() is equivalent to the following OpenGL code sequence: void glMultiDrawArrays(GLenum mode, const GLint * first, const GLint * count, GLsizei primcount) { GLsizei i; for (i = 0; i < primcount; i++) { glDrawArrays(mode, first[i], count[i]); } } Similarly, the multiversion of glDrawElements() is glMultiDrawElements(), and its prototype is as follows: void glMultiDrawElements(GLenum read more..


    An extension of glMultiDrawElements() to include a baseVertex parameter is glMultiDrawElementsBaseVertex(). Its prototype is as follows: void glMultiDrawElementsBaseVertex(GLenum mode, const GLint * count, GLenum type, const GLvoid * const * indices, GLsizei primcount, const GLint * baseVertex); Draws multiple sets of geometric primitives with a single OpenGL function call. first, indices, and read more..


    void glMultiDrawArraysIndirect(GLenum mode, const void * indirect, GLsizei drawcount, GLsizei stride); Draws multiple sets of primitives, the parameters for which are stored in a buffer object. drawcount independent draw commands are dispatched as a result of a call to glMultiDrawArraysIndirect(), and parameters are sourced from these commands as they would be for glDrawArraysIndirect(). Each read more..


// Color for each vertex static const GLfloat vertex_colors[] = { 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f }; // Three indices (we're going to draw one triangle at a time) static const GLushort vertex_indices[] = { 0, 1, 2 }; // Set up the element array buffer glGenBuffers(1, ebo); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo[0]);


// DrawArraysInstanced model_matrix = vmath::translation(3.0f, 0.0f, -5.0f); glUniformMatrix4fv(render_model_matrix_loc, 1, GL_FALSE, model_matrix); glDrawArraysInstanced(GL_TRIANGLES, 0, 3, 1); The result of the program in Examples 3.5 and 3.6 is shown in Figure 3.5. It's not terribly exciting, but you can see four similar triangles, each rendered using a different drawing command. Figure 3.5 Simple


    started with the vertex following the index. The primitive restart index is specified by the glPrimitiveRestartIndex() function. void glPrimitiveRestartIndex(GLuint index); Specifies the vertex array element index used to indicate that a new primitive should be started during rendering. When processing of vertex-array element indices encounters a value that matches index, no vertex data is read more..


    -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f }; // Color for each vertex static const GLfloat cube_colors[] = { 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.5f, 0.5f, 0.5f, 1.0f }; // read more..


(const GLvoid *)sizeof(cube_positions)); glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); Figure 3.7 shows how the vertex data given in Example 3.7 represents the cube as two independent triangle strips. (Figure 3.7, Two triangle strips forming a cube: vertices 0 through 7 of the cube are traversed once by each of the two strips.) Example 3.8 Drawing a Cube Made of Two Triangle Strips Using Primitive Restart


    Instanced Rendering Instancing, or instanced rendering, is a way of executing the same drawing commands many times in a row, with each producing a slightly different result. This can be a very efficient method of rendering a large amount of geometry with very few API calls. Several variants of already-familiar drawing functions exist to instruct OpenGL to execute the command read more..


    Again, note that the parameters to glDrawElementsInstanced() are identical to glDrawElements(), with the addition of primCount. Each time one of the instanced functions is called, OpenGL essentially runs the whole command as many times as is specified by the primCount parameter. This on its own is not terribly useful. However, there are two mechanisms provided by OpenGL that allow read more..


    The glVertexAttribDivisor() function controls the rate at which the vertex attribute is updated. index is the index of the vertex attribute whose divisor is to be set, and is the same as you would pass into glVertexAttribPointer() or glEnableVertexAttribArray(). By default, a new value of each enabled attribute is delivered to each vertex. Setting divisor to zero resets the read more..


    int matrix_loc = glGetAttribLocation(prog, "model_matrix"); // Configure the regular vertex attribute arrays - // position and normal. glBindBuffer(GL_ARRAY_BUFFER, position_buffer); glVertexAttribPointer(position_loc, 4, GL_FLOAT, GL_FALSE, 0, NULL); glEnableVertexAttribArray(position_loc); glBindBuffer(GL_ARRAY_BUFFER, normal_buffer); glVertexAttribPointer(normal_loc, 3, GL_FLOAT, GL_FALSE, 0, NULL); read more..


    Example 3.11 Instanced Attributes Example Vertex Shader // The view matrix and the projection matrix are constant // across a draw uniform mat4 view_matrix; uniform mat4 projection_matrix; // The output of the vertex shader (matched to the // fragment shader) out VERTEX { vec3 normal; vec4 color; } vertex; // Ok, go! void main(void) { // Construct a model-view matrix from the uniform view read more..


    for (n = 0; n < INSTANCE_COUNT; n++) { float a = 50.0f * float (n) / 4.0f; float b = 50.0f * float (n) / 5.0f; float c = 50.0f * float (n) / 6.0f; matrices[n] = rotation(a + t * 360.0f, 1.0f, 0.0f, 0.0f) * rotation(b + t * 360.0f, 0.0f, 1.0f, 0.0f) * rotation(c + t * 360.0f, 0.0f, 0.0f, 1.0f) * translation(10.0f + a, 40.0f + b, 50.0f + c); } // Done. Unmap read more..


    Figure 3.8 Result of rendering with instanced vertex attributes There are some inefficiencies in the example shown in Examples 3.9 through 3.12. Work that will produce the same result across all of the vertices in an instance will still be performed per-vertex. Sometimes there are ways to get around this. For example, the computation of model_view_matrix will evaluate to the same read more..


    It is possible to internally add an offset to the indices used to fetch instanced vertex attributes from vertex buffers. Similar to the baseVertex parameter that is available through glDrawElementsBaseVertex(), the instance offset is exposed through an additional baseInstance parameter in some versions of the instanced drawing functions. The functions that take a baseInstance parameter are read more..


    void glDrawElementsInstancedBaseVertexBaseInstance(GLenum mode, GLsizei count, GLenum type, const GLvoid * indices, GLsizei instanceCount, GLuint baseVertex, GLuint baseInstance); Draws instanceCount instances of the geometric primitives specified by mode, count, indices, and baseVertex as if specified by individual calls to glDrawElementsBaseVertex(). As with glDrawArraysInstanced(), the built-in variable read more..


    // These are the TBOs that hold per-instance colors and per-instance // model matrices uniform samplerBuffer color_tbo; uniform samplerBuffer model_matrix_tbo; // The output of the vertex shader (matched to the fragment shader) out VERTEX { vec3 normal; vec4 color; } vertex; // Ok, go! void main(void) { // Use gl_InstanceID to obtain the instance color from the color TBO vec4 color = read more..


    Example 3.14 contains the code to set up the TBOs for use with the shader of Example 3.13. Example 3.14 Example Setup for Instanced Vertex Attributes // Get the locations of the vertex attributes in "prog", which is // the (linked) program object that we’re going to be rendering // with. Note that this isn’t really necessary because we specified // locations for read more..


code of Example 3.12 is used intact to produce an identical result to the original program. The proof is in the screenshot (Figure 3.9). Figure 3.9 Result of instanced rendering using gl_InstanceID Instancing Redux To use instancing in your program: • Create some vertex shader inputs that you intend to be instanced. • Set the vertex attribute divisors with




    Chapter 4 Color, Pixels, and Framebuffers Chapter Objectives After reading this chapter, you’ll be able to do the following: • Understand how OpenGL processes and represents the colors in your generated images. • Identify the types of buffers available in OpenGL, and be able to clear and control writing to them. • List the various tests and operations on fragments that occur read more..


    The goal of computer graphics, generally speaking, is to determine the colors that make up an image. For OpenGL, that image is usually shown in a window on a computer screen, which itself is made up of a rectangular array of pixels, each of which can display its own color. This chapter further develops how you can use shaders in OpenGL to generate the colors of the read more..


    which in terms of physical quantities, is represented by their wavelength (or frequency).2 Photons that we can see have wavelengths in the visible spectrum, which ranges from about 390 nanometers (the color violet) to 720 nanometers (the color red). The colors in between form the dominant colors of the rainbow: violet, indigo, blue, green, yellow, orange, and red. Your eye is read more..


    For example, a common format for the color buffer is eight bits for each red, green, and blue. This yields a 24-bit deep color buffer, which is capable of displaying 224 unique colors. ‘‘Data in OpenGL Buffers’’ in Chapter 3 expanded on the types of buffers that OpenGL makes available and describes how to control interactions with those buffers. Buffers and Their Uses read more..


    of data for each of the 2,073,600 (1920 × 1080) pixels on the screen. A particular hardware system might have more or fewer pixels on the physical screen, as well as more or less color data per pixel. Any particular color buffer, however, has the same amount of data for each pixel on the screen. The color buffer is only one of several buffers that hold information read more..


    The pixels in a color buffer may store a single color per pixel, or may logically divide the pixel into subpixels, which enables an antialiasing technique called multisampling. We discuss multisampling in detail in ‘‘Multisampling’’ on Page 153. You’ve already used double buffering for animation. Double buffering is done by making the main color buffer have two parts: a read more..


    that each type of buffer should be initialized to in init() (if we don’t use the default values), and then clear all the buffers we need. The following commands set the clearing values for each buffer: void glClearColor(GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha); void glClearDepth(GLclampd depth); void glClearDepthf(GLclampf depth); void glClearStencil(GLint s); Specifies read more..


    void glColorMask(GLboolean red, GLboolean green, GLboolean blue, GLboolean alpha); void glColorMaski(GLuint buffer, GLboolean red, GLboolean green, GLboolean blue, GLboolean alpha); void glDepthMask(GLboolean flag); void glStencilMask(GLboolean mask); void glStencilMaskSeparate(GLenum face, GLuint mask); Sets the masks used to control writing into the indicated buffers. If flag is GL_TRUE for glDepthMask(), read more..


    and passed to the fragment shader, which uses that data to determine a color. We’ll demonstrate that in ‘‘Vertex Colors’’ on Page 150 in this chapter. • Supplemental data---but not specifically colors---could be provided to the fragment shader and used in a computation that generates a color (we’ll use this technique in Chapter 7,‘‘Light and Shadow’’). • External read more..


Table 4.1 Converting Data Values to Normalized Floating-Point Values

OpenGL Type   OpenGL Enum         Minimum Value    Min Maps to   Maximum Value    Max Maps to
GLbyte        GL_BYTE             −128             −1.0          127              1.0
GLshort       GL_SHORT            −32,768          −1.0          32,767           1.0
GLint         GL_INT              −2,147,483,648   −1.0          2,147,483,647    1.0
GLubyte       GL_UNSIGNED_BYTE    0                0.0           255              1.0
GLushort      GL_UNSIGNED_SHORT   0                0.0           65,535           1.0
GLuint        GL_UNSIGNED_INT     0                0.0           4,294,967,295    1.0

    // init
    //
    void init(void)
    {
        glGenVertexArrays(NumVAOs, VAOs);
        glBindVertexArray(VAOs[Triangles]);

        struct VertexData {
            GLubyte color[4];
            GLfloat position[4];
        };

        VertexData vertices[NumVertices] = {
            {{ 255,   0,   0, 255 }, { -0.90, -0.90 }},  // Triangle 1
            {{   0, 255,   0, 255 }, {  0.85, -0.90 }},
            {{   0,   0, 255, 255 }, { -0.90,  0.85 }},
            {{  10,  10,  10, 255 }, {  0.90, -0.85 }},  // Triangle 2
            {{ 100, …

    array that we’ll load into our vertex buffer object. As there are now two vertex attributes for our vertex data, we needed to add a second vertex attribute pointer to address the new vertex colors so we can work with that data in our shaders. For the vertex colors, we also ask OpenGL to normalize our colors by setting the fourth parameter to GL_TRUE. To use our read more..

    Rasterization Within the OpenGL pipeline, between the vertex shading stages (vertex, tessellation, and geometry shading) and fragment shading, is the rasterizer. Its job is to determine which screen locations are covered by a particular piece of geometry (point, line, or triangle). Knowing those locations, along with the input vertex data, the rasterizer linearly interpolates the data read more..

    for the pixel are resolved to determine the final pixel’s color. Aside from a little initialization work, and turning on the feature, multisampling requires very little modification to an application. Your application begins by requesting a multisampled buffer (which is done when creating your window). You can determine if the request was successful (as not all implementations support read more..

    Example 4.4  A Multisample-Aware Fragment Shader

        #version 430 core

        sample in vec4 color;
        out vec4 fColor;

        void main()
        {
            fColor = color;
        }

    The simple addition of the sample keyword in Example 4.4 causes each instance of the sample shader (which is the terminology used when a fragment shader is executed per sample) to receive slightly different values based on the sample’s location. Using …

    Additionally, multisampling using sample shading can add a lot more work in computing the color of a pixel. If your system has four samples per pixel, you’ve quadrupled the work per pixel in rasterizing primitives, which can potentially hinder your application’s performance. glMinSampleShading() controls how many samples per pixel receive individually shaded values (i.e., each executing …

    5. Blending 6. Dithering 7. Logical operations All of these tests and operations are described in detail in the following subsections. Note: As we’ll see in ‘‘Framebuffer Objects’’ on Page 180, we can render into multiple buffers at the same time. For many of the fragment tests and operations, they can be controlled on a per-buffer basis, as well as for all of the read more..

    Multisample Fragment Operations By default, multisampling calculates fragment coverage values that are independent of alpha. However, if you glEnable() one of the following special modes, then a fragment’s alpha value is taken into consideration when calculating the coverage, assuming that multisampling itself is enabled and that there is a multisample buffer associated with the framebuffer. read more..

    void glSampleMaski(GLuint index, GLbitfield mask);

    Sets one 32-bit word of the sample mask, mask. The word to set is specified by index, and the new value of that word is specified by mask. As samples are written to the framebuffer, only those whose corresponding bits are set in the current sample mask will be updated; the rest will be discarded. The sample mask can also be …

    The masked values are all interpreted as nonnegative values. The stencil test is enabled and disabled by passing GL_STENCIL_TEST to glEnable() and glDisable(). By default, func is GL_ALWAYS, ref is zero, mask is all ones, and stenciling is disabled. glStencilFuncSeparate() allows separate stencil function parameters to be specified for front- and back-facing polygons (as set with …
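The comparison just described can be simulated on the CPU. This sketch (our own helper, not OpenGL code) ANDs the mask with both the reference value and the stored stencil value before comparing, as the stencil test does; only a few of the eight comparison functions are shown.

```c
#include <stdbool.h>

typedef enum { FUNC_ALWAYS, FUNC_NEVER, FUNC_LESS,
               FUNC_EQUAL, FUNC_NOTEQUAL } StencilFunc;

/* Returns true if the fragment passes the stencil test. */
bool stencil_test(StencilFunc func, unsigned ref,
                  unsigned mask, unsigned stored)
{
    unsigned r = ref & mask;      /* masked reference value */
    unsigned s = stored & mask;   /* masked stored stencil value */

    switch (func) {
    case FUNC_ALWAYS:   return true;
    case FUNC_NEVER:    return false;
    case FUNC_LESS:     return r < s;   /* passes if ref < stored */
    case FUNC_EQUAL:    return r == s;
    case FUNC_NOTEQUAL: return r != s;
    }
    return false;
}
```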

    You can also determine whether the stencil test is enabled by passing GL_STENCIL_TEST to glIsEnabled(). Table 4.2 Query Values for the Stencil Test Query Value Meaning GL_STENCIL_FUNC stencil function GL_STENCIL_REF stencil reference value GL_STENCIL_VALUE_MASK stencil mask GL_STENCIL_FAIL stencil fail action GL_STENCIL_PASS_DEPTH_FAIL stencil pass and depth buffer fail action GL_STENCIL_PASS_DEPTH_PASS stencil read more..

    // Draw a sphere in a diamond-shaped section in the
    // middle of a window with 2 tori.
    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // draw sphere where the stencil is 1
        glStencilFunc(GL_EQUAL, 0x1, 0x1);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawSphere();

        // draw the tori where the stencil is not 1
        glStencilFunc(GL_NOTEQUAL, 0x1, 0x1);
        drawTori();
    }

    // Whenever the …

    After all the objects are drawn, regions of the screen where no capping is required have zeros in the stencil planes, and regions requiring capping are nonzero. Reset the stencil function so that it draws only where the stencil value is nonzero, and draw a large polygon of the capping color across the entire screen. 2. Stippling---Suppose you want to draw an image with a read more..

    More context is provided in ‘‘OpenGL Transformations’’ in Chapter 5 for setting a depth range. Polygon Offset If you want to highlight the edges of a solid object, you might draw the object with polygon mode set to GL_FILL, and then draw it again, but in a different color and with the polygon mode set to GL_LINE. However, because lines and filled polygons are not read more..

    To achieve a nice rendering of the highlighted solid object without visual artifacts, you can add either a positive offset to the solid object (push it away from you) or a negative offset to the wireframe (pull it toward you). The big question is: How much offset is enough? Unfortunately, the offset required depends on various factors, including the depth slope of each read more..

    For polygons that are at a great angle to the clipping planes, the depth slope can be significantly greater than zero, and a larger offset may be needed. A small, nonzero value for factor, such as 0.75 or 1.0, is probably enough to generate distinct depth values and eliminate the unpleasant visual artifacts. In some situations, the simplest values for factor and units (1.0 …
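For reference, the value glPolygonOffset() adds to each fragment's depth is o = m × factor + r × units, where m is the polygon's maximum depth slope and r is the implementation's smallest resolvable depth difference. A one-line sketch, with m and r supplied by the caller since only the implementation knows their true values:

```c
/* Depth offset per the glPolygonOffset() formula.
   m: maximum depth slope of the polygon (computed during rasterization)
   r: smallest resolvable depth difference (implementation constant) */
float polygon_offset(float m, float r, float factor, float units)
{
    return m * factor + r * units;
}
```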

    Blending Factors In basic blending mode, the incoming fragment’s color is linearly combined with the current pixel’s color. As with any linear combination, coefficients control the contributions of each term. For blending in OpenGL, those coefficients are called the source- and destination-blending factors. The source-blending factor is associated with the color output from the fragment read more..

    Table 4.3. The argument srcfactor indicates how to compute a source blending factor; destfactor indicates how to compute a destination blending factor. glBlendFunc() specifies the blending factors for all drawable buffers, while glBlendFunci() specifies the blending factors only for buffer buffer. The blending factors are clamped to either the range [0, 1] or [−1, 1] for read more..

    Table 4.3  Source and Destination Blending Factors

        Constant                  RGB Blend Factor             Alpha Blend Factor
        GL_ZERO                   (0, 0, 0)                    0
        GL_ONE                    (1, 1, 1)                    1
        GL_SRC_COLOR              (Rs, Gs, Bs)                 As
        GL_ONE_MINUS_SRC_COLOR    (1, 1, 1) − (Rs, Gs, Bs)     1 − As
        GL_DST_COLOR              (Rd, Gd, Bd)                 Ad
        GL_ONE_MINUS_DST_COLOR    (1, 1, 1) − (Rd, Gd, Bd)     1 − Ad
        GL_SRC_ALPHA              (As, As, As)                 As
        GL_ONE_MINUS_SRC_ALPHA    (1, 1, 1) − (As, As, As)     1 − As …
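To make the factors concrete, here is the per-channel arithmetic for the common glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) setting under the default GL_FUNC_ADD equation. This is a CPU-side sketch of what the pipeline computes, not OpenGL code.

```c
typedef struct { float r, g, b, a; } Color;

/* result = src * As + dst * (1 - As), per channel */
Color blend_src_alpha(Color src, Color dst)
{
    float sa = src.a;
    Color out;
    out.r = src.r * sa + dst.r * (1.0f - sa);
    out.g = src.g * sa + dst.g * (1.0f - sa);
    out.b = src.b * sa + dst.b * (1.0f - sa);
    out.a = src.a * sa + dst.a * (1.0f - sa);
    return out;
}
```

Drawing a half-transparent red fragment over an opaque blue pixel yields (0.5, 0.0, 0.5): an even mix of the two colors.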

    Advanced OpenGL has the ability to render into multiple buffers simultaneously (see ‘‘Writing to Multiple Renderbuffers Simultaneously’’ on Page 193 for details). All buffers can have blending enabled and disabled simultaneously (using glEnable() and glDisable()). Blending settings can be managed on a per-buffer basis using glEnablei() and glDisablei(). The Blending Equation With standard read more..

    In Table 4.4, Cs and Cd represent the source and destination colors. The S and D parameters in the table represent the source- and destination-blending factors as specified with glBlendFunc() or glBlendFuncSeparate().

    Table 4.4  Blending Equation Mathematical Operations

        Blending Mode Parameter     Mathematical Operation
        GL_FUNC_ADD                 Cs S + Cd D
        GL_FUNC_SUBTRACT            Cs S − Cd D
        GL_FUNC_REVERSE_SUBTRACT    Cd D − Cs S
        GL_MIN                      min(Cs, Cd)
        GL_MAX                      max(Cs, Cd) …

    one place in the window to another, from the window to processor memory, or from memory to the window. Typically, the copy doesn’t write the data directly into memory but instead allows you to perform an arbitrary logical operation on the incoming data and the data already present; then it replaces the existing data with the results of the operation. Since this process can read more..

    For floating-point buffers, or those in sRGB format, logical operations are ignored.

    Occlusion Query

    Advanced

    The depth buffer determines visibility on a per-pixel basis. For performance reasons, it would be nice to be able to determine if a geometric object is visible before sending all of its (perhaps complex) geometry for rendering. Occlusion queries enable you to determine if a …

    void glGenQueries(GLsizei n, GLuint *ids);

    Returns n currently unused names for occlusion query objects in the array ids. The names returned in ids do not have to be a contiguous set of integers. The names returned are marked as used for the purposes of allocating additional query objects, but they only acquire valid state once they have been specified in a call to glBeginQuery(). …

    void glEndQuery(GLenum target); Ends an occlusion query. target must be GL_SAMPLES_PASSED, or GL_ANY_SAMPLES_PASSED. Determining the Results of an Occlusion Query Once you’ve completed rendering the geometry for the occlusion query, you need to retrieve the results. This is done with a call to glGetQueryObjectiv() or glGetQueryObjectuiv(), as shown in Example 4.7, which will return the read more..

    if (samples > 0) {
        glDrawArrays(GL_TRIANGLE_FAN, 0, NumVertices);
    }

    Cleaning Up Occlusion Query Objects

    After you’ve completed your occlusion query tests, you can release the resources related to those queries by calling glDeleteQueries().

    void glDeleteQueries(GLsizei n, const GLuint *ids);

    Deletes n occlusion query objects, named by elements in the array ids. The freed query objects may …

    if glEndConditionalRender() is called when no conditional render is underway; if id is the name of an occlusion query object with a target different from GL_SAMPLES_PASSED; or if id is the name of an occlusion query in progress. The code shown in Example 4.8 completely replaces the sequence of code in Example 4.7. Not only is the code more compact, it is far more …

    take more time than just rendering the conditional part of the scene. In particular, if it is expected that most results will mean that some rendering should take place, then on aggregate, it may be faster to always use one of the NO_WAIT modes even if it means more rendering will take place overall. Per-Primitive Antialiasing You might have noticed in some of your OpenGL read more..

    void glHint(GLenum target, GLenum hint); Controls certain aspects of OpenGL behavior. The target parameter indicates which behavior is to be controlled; its possible values are shown in Table 4.6. The hint parameter can be GL_FASTEST to indicate that the most efficient option should be chosen, GL_NICEST to indicate the highest-quality option, or GL_DONT_CARE to indicate no preference. The read more..

    Example 4.9 shows the initialization for line antialiasing.

    Example 4.9  Setting Up Blending for Antialiasing Lines: antilines.cpp

        glEnable(GL_LINE_SMOOTH);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE);

    Antialiasing Polygons

    Antialiasing the edges of filled polygons is similar to antialiasing lines. When different polygons have overlapping …

    called glutCreateWindow() (and configured by your call to glutInitDisplayMode()). Although you can quite successfully use any technique with just those buffers, quite often various operations require moving data between buffers superfluously. This is where framebuffer objects enter the picture. Using framebuffer objects, you can create your own framebuffers and use their attached renderbuffers to …

    storage for the object to be allocated and initialized. Any subsequent calls will bind the provided framebuffer object name as the active one. void glBindFramebuffer(GLenum target, GLuint framebuffer); Specifies a framebuffer for either reading or writing. When target is GL_DRAW_FRAMEBUFFER, framebuffer specifies the destination framebuffer for rendering. Similarly, when target is set to read more..

    GLboolean glIsFramebuffer(GLuint framebuffer); Returns GL_TRUE if framebuffer is the name of a framebuffer returned from glGenFramebuffers(). Returns GL_FALSE if framebuffer is zero (the window-system default framebuffer) or a value that’s either unallocated or been deleted by a call to glDeleteFramebuffers(). void glFramebufferParameteri(GLenum target, GLenum pname, GLint param); Sets parameters of a read more..

    void glGenRenderbuffers(GLsizei n, GLuint *ids); Allocate n unused renderbuffer object names, and return those names in ids. Names are unused until bound with a call to glBindRenderbuffer(). Likewise, a call to glDeleteRenderbuffers() will release the storage associated with a renderbuffer. void glDeleteRenderbuffers(GLsizei n, const GLuint *ids); Deallocates the n renderbuffer objects associated read more..

    Creating Renderbuffer Storage When you first call glBindRenderbuffer() with an unused renderbuffer name, the OpenGL server creates a renderbuffer with all its state information set to the default values. In this configuration, no storage has been allocated to store image data. Before you can attach a renderbuffer to a framebuffer and render into it, you need to allocate storage and read more..


    Example 4.10  Creating a 256 × 256 RGBA Color Renderbuffer

        glGenRenderbuffers(1, &color);
        glBindRenderbuffer(GL_RENDERBUFFER, color);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, 256, 256);

    Once you have created storage for your renderbuffer as shown in Example 4.10, you need to attach it to a framebuffer object before you can render into it.

    Framebuffer Attachments

    When you render, you can …

    void glFramebufferRenderbuffer(GLenum target, GLenum attachment, GLenum renderbuffertarget, GLuint renderbuffer); Attaches renderbuffer to attachment of the currently bound framebuffer object. target must either be GL_READ_FRAMEBUFFER, GL_DRAW_FRAMEBUFFER, or GL_FRAMEBUFFER (which is equivalent to GL_DRAW_FRAMEBUFFER). attachment is one of GL_COLOR_ATTACHMENTi, GL_DEPTH_ATTACHMENT, GL_STENCIL_ATTACHMENT, or read more..

    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 256, 256);

        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, framebuffer);
        glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, renderbuffer[Color]);
        glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, renderbuffer[Depth]);
        glEnable(GL_DEPTH_TEST);
    }

    void display()
    {
        // Prepare to render into the …

    Framebuffer Completeness Given the myriad of combinations between texture and buffer formats, and between framebuffer attachments, various situations can arise that prevent the completion of rendering when you are using application-defined framebuffer objects. After modifying the attachments to a framebuffer object, it’s best to check the framebuffer’s status by calling glCheckFramebufferStatus(). read more..

    Table 4.8  Errors Returned by glCheckFramebufferStatus()

        Framebuffer Completeness Status Enum   Description
        GL_FRAMEBUFFER_COMPLETE                The framebuffer and its attachments match the rendering or reading state required.
        GL_FRAMEBUFFER_UNDEFINED               The bound framebuffer is specified to be the default framebuffer (i.e., glBindFramebuffer() with zero specified as the framebuffer), and the default framebuffer doesn’t exist. …

    void glClearBuffer{fi ui}v(GLenum buffer, GLint drawbuffer, const TYPE *value); void glClearBufferfi(GLenum buffer, GLint drawbuffer, GLfloat depth, GLint stencil); Clears the buffer indexed by drawbuffer associated with buffer to value. buffer must be one of GL_COLOR, GL_DEPTH, or GL_STENCIL. If buffer is GL_COLOR, drawbuffer specifies an index to a particular draw buffer, and value is a read more..

    void glInvalidateFramebuffer(GLenum target, GLsizei numAttachments, const GLenum *attachments);
        void glInvalidateSubFramebuffer(GLenum target, GLsizei numAttachments, const GLenum *attachments, GLint x, GLint y, GLsizei width, GLsizei height);

    Specifies that a portion, or the entirety, of the bound framebuffer object is not necessary to preserve. For either function, target must be either GL_DRAW_FRAMEBUFFER, …

    While this technique is used often in GPGPU, it can also be used when generating geometry and other information (like textures or normal maps) that is written to different buffers during the same rendering pass. Enabling this technique requires setting up a framebuffer object with multiple color (and potentially depth and stencil) attachments, and modification of the fragment shader. …

    void glBindFragDataLocation(GLuint program, GLuint colorNumber, const GLchar *name);
        void glBindFragDataLocationIndexed(GLuint program, GLuint colorNumber, GLuint index, const GLchar *name);

    Uses the value in colorNumber for the fragment shader variable name to specify the output location associated with shader program. For the indexed case, index specifies the output index as well as the location. A …

    You can choose an individual buffer to be the drawing or reading target. For drawing, you can also set the target to draw into more than one buffer at the same time. You use glDrawBuffer() or glDrawBuffers() to select the buffers to be written, and glReadBuffer() to select the buffer as the source for glReadPixels(), glCopyTexImage*(), and glCopyTexSubImage*(). void glDrawBuffer(GLenum …

    void glReadBuffer(GLenum mode); Selects the color buffer enabled as the source for reading pixels for subsequent calls to glReadPixels(), glCopyTexImage*(), glCopyTexSubImage*(), and disables buffers enabled by previous calls to glReadBuffer(). The value of mode can be one of the following: GL_FRONT GL_FRONT_LEFT GL_NONE GL_BACK GL_FRONT_RIGHT GL_FRONT_AND_BACK GL_LEFT GL_BACK_LEFT GL_COLOR_ATTACHMENTi GL_RIGHT read more..

    Dual-Source Blending

    Advanced

    Two of the blend factors already described in this chapter are the second source blending factors and are special in that they are driven by a second output in the fragment shader. These factors, GL_SRC1_COLOR and GL_SRC1_ALPHA, are produced in the fragment shader by writing to an output whose index is 1 (rather than the default 0). To create such …

    Figure 4.4 Close-up of RGB color elements in an LCD panel Another possible use is to set the source and destination factors in the blending equation to GL_ONE and GL_SRC1_COLOR. In this configuration, the first color output is added to the framebuffer’s content, while the second color output is used to attenuate the framebuffer’s content. The equation becomes: RGBdst = RGBsrc0 read more..

    Dual-Source Blending and Multiple Fragment Shader Outputs Because the second output from the fragment shader that is required to implement dual source blending may take from the resources available to produce outputs for multiple framebuffer attachments (draw buffers), there are special counting rules for dual-source blending. When dual-source blending is enabled---that is, when any of the read more..

    Table 4.9  glReadPixels() Data Formats

        Format Value                   Pixel Format
        GL_RED or GL_RED_INTEGER       a single red color component
        GL_GREEN or GL_GREEN_INTEGER   a single green color component
        GL_BLUE or GL_BLUE_INTEGER     a single blue color component
        GL_ALPHA or GL_ALPHA_INTEGER   a single alpha color component
        GL_RG or GL_RG_INTEGER         a red color component, followed by a green component
        GL_RGB or GL_RGB_INTEGER       a red …

    Table 4.10  Data Types for glReadPixels()

        Type Value                     Data Type   Packed
        GL_UNSIGNED_BYTE               GLubyte     No
        GL_BYTE                        GLbyte      No
        GL_UNSIGNED_SHORT              GLushort    No
        GL_SHORT                       GLshort     No
        GL_UNSIGNED_INT                GLuint      No
        GL_INT                         GLint       No
        GL_HALF_FLOAT                  GLhalf      No
        GL_FLOAT                       GLfloat     No
        GL_UNSIGNED_BYTE_3_3_2         GLubyte     Yes
        GL_UNSIGNED_BYTE_2_3_3_REV     GLubyte     Yes
        GL_UNSIGNED_SHORT_5_6_5        GLushort    Yes
        GL_UNSIGNED_SHORT_5_6_5_REV    GLushort    Yes
        GL_UNSIGNED_SHORT_4_4_4_4      GLushort    Yes …
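The packed types store every component of a pixel in one value. As an illustration, GL_UNSIGNED_SHORT_5_6_5 places red in the top five bits of a GLushort, green in the middle six, and blue in the bottom five. A sketch of the packing (our own helper; OpenGL performs this for you during pixel transfers):

```c
/* Pack 5-bit red, 6-bit green, 5-bit blue into one 16-bit value,
   matching the GL_UNSIGNED_SHORT_5_6_5 layout. */
unsigned short pack565(unsigned r5, unsigned g6, unsigned b5)
{
    return (unsigned short)(((r5 & 0x1Fu) << 11) |
                            ((g6 & 0x3Fu) <<  5) |
                             (b5 & 0x1Fu));
}
```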

    you can control whether the values should be clamped to the normalized range or left at their full range using glClampColor(). void glClampColor(GLenum target, GLenum clamp); Controls the clamping of color values for floating- and fixed-point buffers, when target is GL_CLAMP_READ_COLOR. If clamp is set to GL_TRUE, color values read from buffers are clamped to the range [0, 1]; read more..

    If srcX1 < srcX0, or dstX1 < dstX0, the image is reversed in the horizontal direction. Likewise, if srcY1 < srcY0 or dstY1 < dstY0, the image is reversed in the vertical direction. However, if both the source and destination sizes are negative in the same direction, no reversal is done. If the source and destination buffers are of different formats, conversion of the …

    Chapter 5 Viewing Transformations, Clipping, and Feedback Chapter Objectives After reading this chapter, you’ll be able to do the following: • View a three-dimensional geometric model by transforming it to have any size, orientation, and perspective. • Understand a variety of useful coordinate systems, which ones are required by OpenGL, and how to transform from one to the next. read more..

    Previous chapters hinted at how to manipulate your geometry to fit into the viewing area on the screen, but we’ll give a complete treatment in this chapter. This includes feedback, the ability to send it back to the application, as well as clipping, the intersection of your geometry with planes either by OpenGL or by you. Typically, you’ll have many objects with read more..

    Viewing Model For the time being, it is important to keep thinking in terms of three-dimensional coordinates while making many of the decisions that determine what is drawn on the screen. It is too early to start thinking about which pixels need to be drawn. Instead, try to visualize three-dimensional space. It is later, after the viewing transformations are completed, after the read more..

    5. Stretch or shrink the resulting image to the desired picture size (viewport transformation). For 3D graphics, this also includes stretching or shrinking the depth (depth-range scaling). This is not to be confused with Step 3, which selected how much of the scene to capture, not how much to stretch the result. Notice that Steps 1 and 2 can be considered doing the same read more..

    [Figure: the coordinate systems OpenGL requires, and the transforms between them]

        (x, y, z)       object/model coordinates --- object units (could be meters, inches, etc.)
            | append w of 1.0
        (x, y, z, 1.0)  homogeneous model coordinates --- same units
            | user/shader transforms: scale, rotate, translate, project
        (x, y, z, w)    homogeneous clip coordinates --- units normalized such that dividing by w leaves visible points between -1.0 and +1.0
            | OpenGL divides by w
        (x, y, z)       range of -1.0 to +1.0 …

    Figure 5.3 User coordinate systems unseen by OpenGL (These coordinate systems, while not used by OpenGL, are still vital for lighting and other shader operations.) Viewing Frustum Step 3 in our camera analogy chose a lens, or zoom amount. This selects how narrow or wide of a rectangular cone through the scene the camera will capture. Only geometry falling within this cone will read more..

    Figure 5.4 A view frustum OpenGL will additionally exclude geometry that is too close or too far away; that is, those in front of a near plane or those behind a far plane. There is no counterpart to this in the camera analogy (other than cleaning foreign objects from inside your lens), but is helpful in a variety of ways. Most importantly, objects approaching the cone’s read more..

    Because OpenGL has to perform this clipping to draw correctly, the application must tell OpenGL where this frustum is. This is part of Step 3 of the camera analogy, where the shader must apply the transformations, but OpenGL must know about it for clipping. There are ways shaders can clip against additional user planes, discussed later, but the six frustum planes are an read more..

    The stages of the rendering pipeline that transform three-dimensional coordinates for OpenGL viewing are shown in Figure 5.5. Essentially, they are the programmable stages appearing before rasterization. Because these stages are programmable, you have a lot of flexibility in the initial form of your coordinates and in how you transform them. However, you are constrained to end with the read more..

    Matrix Multiply Refresher For our use, matrices and matrix multiplication are nothing more than a convenient mechanism for expressing linear transformations, which in turn are a useful way to do the coordinate manipulations needed for displaying models. The vital matrix mechanism is explained here, while interesting uses for it will come up in numerous places in subsequent discussions. read more..

    Being able to compose the B transform and the A transform into a single transform C is a benefit we get by sticking to linear transformations. The following definition of matrix multiplication makes all of this work out. If C = BA, then each element of C is the dot product of the corresponding row of B with the corresponding column of A:

        c_ij = b_i1 a_1j + b_i2 a_2j + b_i3 a_3j + b_i4 a_4j

    …
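That row-times-column rule translates directly into code. A minimal 4 × 4 multiply, with row-major storage and type and function names of our own choosing:

```c
typedef struct { float m[4][4]; } Mat4;

/* C = B * A, where c_ij is the dot product of row i of B
   with column j of A. */
Mat4 mat4_mul(const Mat4 *b, const Mat4 *a)
{
    Mat4 c;
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += b->m[i][k] * a->m[k][j];
            c.m[i][j] = sum;
        }
    }
    return c;
}

/* The identity matrix: multiplying by it changes nothing. */
Mat4 mat4_identity(void)
{
    Mat4 c = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
    return c;
}
```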

    three-component Cartesian coordinates to four-component homogeneous coordinates. These are 1) the ability to apply perspective and 2) the ability to translate (move) the model using only a linear transform. That is, we will be able to get all the rotations, translations, scaling, and projective transformations we need by doing matrix multiplication if we first move to a four-coordinate read more..

    A homogeneous coordinate has one extra component and does not change the point it represents when all its components are scaled by the same amount. For example, all these coordinates represent the same point:

        (2.0, 3.0, 5.0, 1.0)
        (4.0, 6.0, 10.0, 2.0)
        (0.2, 0.3, 0.5, 0.1)

    In this way, homogeneous coordinates act as directions instead of locations; scaling a direction leaves it …
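Dividing through by w recovers the Cartesian point, so a quick check shows that the scaled coordinates above all name the same location (the type names are ours):

```c
typedef struct { float x, y, z, w; } Vec4;
typedef struct { float x, y, z; } Vec3;

/* Convert a homogeneous coordinate back to Cartesian by
   dividing through by w (w must be nonzero). */
Vec3 homogenize(Vec4 v)
{
    Vec3 p = { v.x / v.w, v.y / v.w, v.z / v.w };
    return p;
}
```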

    [Figure 5.7: Translating by skewing --- 2D skewing (a linear transform) of an embedded 1D space allows 1D translation (a nonlinear transform in 1D).]

    The desire is to translate points in the 1D space with a linear transform. This is impossible within the 1D space, as the point 0 needs to move---something 1D linear transformations cannot do. …

    back to the three-dimensional Cartesian coordinates by dividing their first three components by the last component. This will make the objects farther away (now having a larger w) have smaller Cartesian coordinates, hence getting drawn on a smaller scale. A w of 0.0 implies (x, y ) coordinates at infinity (the object got so close to the viewpoint that its perspective view got read more..

    and multiplying by a vector v = (x, y, z, 1) gives

        [ 1  0  0  2.5 ] [ x ]   [ x + 2.5 ]
        [ 0  1  0  0.0 ] [ y ] = [ y       ]
        [ 0  0  1  0.0 ] [ z ]   [ z       ]
        [ 0  0  0  1.0 ] [ 1 ]   [ 1.0     ]

    This is demonstrated in Figure 5.8.

    [Figure 5.8: Translating an object 2.5 in the x direction.]

    Of course, you’ll want such …
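The multiplication above can be checked numerically with a small matrix-times-vector helper (the types and names are ours):

```c
typedef struct { float x, y, z, w; } Vec4;

/* v' = M * v, with row-major storage m[row][col]. */
Vec4 mat4_apply(const float m[4][4], Vec4 v)
{
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

/* The translation matrix from the text: moves x by 2.5. */
const float T[4][4] = {
    { 1.0f, 0.0f, 0.0f, 2.5f },
    { 0.0f, 1.0f, 0.0f, 0.0f },
    { 0.0f, 0.0f, 1.0f, 0.0f },
    { 0.0f, 0.0f, 0.0f, 1.0f },
};
```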

    vmath::mat4 translationMatrix = vmath::translate(1.0, 2.0, 3.0);

        // Set this matrix into the current program.
        glUniformMatrix4fv(matrix_loc, 1, GL_FALSE, translationMatrix);
        . . .

    After going through the next type of transformation, we’ll show a code example for combining transformations with this utility.

    Scaling

    Grow or shrink an object, as in Figure 5.9, by putting the desired scaling factor …

    The following example makes geometry 3 times larger:

        [ 3  0  0  0 ] [ x ]   [ 3x ]
        [ 0  3  0  0 ] [ y ] = [ 3y ]
        [ 0  0  3  0 ] [ z ]   [ 3z ]
        [ 0  0  0  1 ] [ 1 ]   [ 1  ]

    Note that nonisomorphic scaling is easily done, as the scaling is …

    [Figure 5.10: Scaling an object in place --- scale in place by moving to (0, 0, 0), scaling, and then moving it back.]

    This would use three matrices, T, S, and T^-1, for translate to (0, 0, 0), scale, and translate back, respectively. When each vertex v of the object is multiplied by each of these matrices in turn, the …
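Along a single axis, the translate/scale/translate-back composition collapses to x' = c + s(x − c), where c is the center of scaling and s the scale factor. A one-line sketch makes the "in place" property easy to verify:

```c
/* Scale coordinate x by s about center c:
   translate c to the origin, scale, translate back. */
float scale_about(float x, float c, float s)
{
    return c + s * (x - c);
}
```

The center c is a fixed point of the transform, which is exactly what "scaling in place" means.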

    vmath::mat4 vmath::scale(float s); Returns a transformation matrix for scaling an amount s. The resulting matrix can be directly multiplied by another such transformation matrix to compose them into a single matrix that performs both transformations. // Application (C++) code #include "vmath.h" . . . // Compose translation and scaling transforms vmath::mat4 translateMatrix = read more..

    [Figure 5.11: Rotation --- rotating an object 50 degrees in the xy plane, around the z axis. Note if the object is off center, it also revolves the object around the point (0, 0, 0).]

    [Figure 5.12: Rotating in place --- rotating an object in place by moving it to (0, 0, 0), rotating, and then moving it back.]

    R =

        [ cos 50  −sin 50  0.0  0.0 ] [ x ]   [ x cos 50 − y sin 50 ]
        [ sin 50   cos 50  0.0  0.0 ] [ y ] = [ x sin 50 + y cos 50 ]
        [  0.0      0.0    1.0  0.0 ] [ z ]   [ z                   ]
        [  0.0      0.0    0.0  1.0 ] [ 1 ]   [ 1.0                 ]

    When rotating …

    To create a rotation transformation with the included utility, you can use vmath::mat4 vmath::rotate(float x, float y, float z); Returns a transformation matrix for rotating x degrees around the x axis, y degrees around the y axis, and z degrees around the z axis. It then multiplies that matrix (on the left) by the current matrix (on the right). Perspective Projection This one read more..

    Figure 5.13  Frustum projection (Frustum with the near plane and half its width and height labeled.)

    We want to project points in the frustum onto the near plane, directed along straight lines going toward (0, 0, 0). Any straight line emanating from (0, 0, 0) keeps the ratio of z to x the same for all its points, and similarly for the ratio of z to y. Thus, the …

    outside it, as mentioned earlier when looking at an interior wall next to a window. Your direction of view is the positive z axis, which is not going through the window. You see the window off to the side, with an asymmetric perspective view of what’s outside the window. In this case, points on the near plane are already in the correct location, but those further away read more..


    The resulting vectors, still having four coordinates, are the homogeneous coordinates expected by the OpenGL pipeline. The final step in projecting the perspective view onto the screen is to divide the (x, y, z ) coordinates in v by the w coordinate in v , for every vertex. However, this is done internally by OpenGL; it is not something you do in your shaders. Orthographic read more..


(positive z going down the middle of the parallelepiped) this can be done with the following matrix:

    | 1/(width/2)   0.0            0.0                   0.0                        |
    | 0.0           1/(height/2)   0.0                   0.0                        |
    | 0.0           0.0            -1/((zfar-znear)/2)   -(zfar+znear)/(zfar-znear) |
    | 0.0           0.0            0.0                   1.0                        |

For the case of the positive z not going down the middle of the view (but still looking parallel to the z axis to see the


    Normal vectors are typically only three-component vectors; not using homogeneous coordinates. For one thing, translating a surface does not change its normal, so normals don’t care about translation, removing one of the reasons we used homogeneous coordinates. Since normals are mostly used for lighting, which we complete in a pre-perspective space, we remove the other reason we use read more..


    1. Move the camera to the right view: Translate and rotate. 2. Move the model into view: Translate, rotate, and scale. 3. Apply perspective projection. That’s a total of six matrices. You can use a vertex shader to do all this math, as in Example 5.1. Example 5.1 Multiplying Multiple Matrices in a Vertex Shader #version 330 core uniform mat4 ViewT, ViewR, ModelT, ModelR, read more..


#version 330 core

uniform mat4 View, Model, Project;

in vec4 Vertex;

void main()
{
    gl_Position = Project * View * Model * Vertex;
}

In this situation, the application would change the model matrix more frequently than the others. This will be economical if enough vertices are drawn per change of the matrix Model. If only a few vertices are drawn per instance, it will be faster


    Beyond notation, matrices have semantics for setting and accessing parts of a matrix, and these semantics are always column oriented. In a shader, using array syntax on a matrix yields a vector with values coming from a column of the matrix mat3x4 m; // 3 columns, 4 rows vec4 v = m[1]; // v is initialized to the second column of m Note: Neither the notation we use nor read more..


    OpenGL Transformations To tell OpenGL where you want the near and far planes, use the glDepthRange() commands. void glDepthRange(GLclampd near, GLclampd far); void glDepthRangef(GLclampf near, GLclampf far); Sets the near plane to be near on the z axis and the far plane to far on the z axis. This defines an encoding for z-coordinates that’s performed during the viewport read more..


    stage can select which viewport subsequent rendering will target. More details and an example are given in ‘‘Multiple Viewports and Layered Rendering’’ on Page 550. Advanced: z Precision One bizarre effect of these transformations is z fighting. The hardware’s floating-point numbers used to do the computation have limited precision. Hence, depth coordinates that are mathematically read more..


Advanced: User Clipping

OpenGL automatically clips geometry against the near and far planes as well as the viewport. User clipping refers to adding additional clip planes at arbitrary orientations, intersecting your geometry, such that the display sees the geometry on one side of the plane, but not on the other side. You might use one, for example, to show a cutaway of a


results are undefined. To enable OpenGL clipping of the clip plane written to in Example 5.2, enable the following enumerant in your application: glEnable(GL_CLIP_PLANE0); There are also other enumerants like GL_CLIP_PLANE1 and GL_CLIP_PLANE2. These enumerants are organized sequentially, so that GL_CLIP_PLANEi is equal to GL_CLIP_PLANE0 + i. This allows programmatic selection of which and how many


    void glGenTransformFeedbacks(GLsizei n, GLuint * ids); Reserves n names for transform feedback objects and places the reserved names in the array ids. The parameter n specifies how many transform feedback object names are to be reserved, and ids specifies the address of an array where the reserved names will be placed. If you want only one name, you can set n to one and pass read more..


    the context to use the default transform feedback object (unbinding any previously bound transform feedback object in the process). However, as more complex uses of transform feedback are introduced, it becomes convenient to encapsulate the state of transform feedback into transform feedback objects. Therefore, it’s good practice to create and bind a transform feedback object even if read more..


    The target parameter should be set to GL_TRANSFORM_FEEDBACK_BUFFER and index should be set to the index of the transform feedback buffer binding point in the currently bound transform feedback object. The name of the buffer to bind is passed in buffer. The total number of binding points is an implementation-dependent constant that can be discovered by querying the value of read more..


    Example 5.3 Example Initialization of a Transform Feedback Buffer // Generate the name of a buffer object GLuint buffer; glGenBuffers(1, &buffer); // Bind it to the TRANSFORM_FEEDBACK binding to create it glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, buffer); // Call glBufferData to allocate 1MB of space glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, // target 1024 * 1024, // 1 MB NULL, // no initial read more..


the first half of the buffer to the first binding point, and again to bind the second half of the buffer to the second binding point. This demonstrates why the buffer needs to be created and allocated before using it with glBindBufferRange(). glBindBufferRange() takes offset and size parameters describing a range of the buffer object that must lie within the buffer


    An example of the use of glTransformFeedbackVaryings() is shown in Example 5.4 below. Example 5.4 Example Specification of Transform Feedback Varyings // Create an array containing the names of varyings to record static const char * const vars[] = { "foo", "bar", "baz" }; // Call glTransformFeedbackVaryings glTransformFeedbackVaryings(prog, sizeof (vars) / sizeof (vars[0]), read more..


Figure 5.16 Transform feedback varyings packed in a single buffer (For each vertex, foo, bar, and baz are laid out one after another: foo[0], bar[0], baz[0], then foo[1], and so on, with foo and baz as vec4 and bar as vec3.)

However, if bufferMode is GL_SEPARATE_ATTRIBS then each of foo , bar , and baz will be packed tightly into its own buffer


    In both cases, the attributes will be tightly packed together. The amount of space in the buffer object that each varying consumes is determined by its type in the vertex shader. That is, if foo is declared as a vec3 in the vertex shader, it will consume exactly three floats in the buffer object. In the case where bufferMode is GL_INTERLEAVED_ATTRIBS, the value of bar will read more..


When the other special variable name, gl_NextBuffer, is encountered, OpenGL will start allocating varyings into the buffer bound to the next transform feedback buffer. This allows multiple varyings to be recorded into a single buffer object. Additionally, if gl_NextBuffer is encountered when bufferMode is GL_SEPARATE_ATTRIBS, or if two or more instances of gl_NextBuffer are encountered in


    Finally, Example 5.7 shows an (albeit rather contrived) example of the combined use of gl_SkipComponents and gl_NextBuffer , and Figure 5.18 shows how the data ends up laid out in the transform feedback buffers. Example 5.7 Assigning Transform Feedback Outputs to Different Buffers // Declare the transform feedback varying names static const char * const vars[] = { // Record foo, a gap read more..


Figure 5.18 Transform feedback varyings assigned to different buffers (Per the assignments of Example 5.7: foo is recorded into buffer 0, bar into buffer 1, and baz into buffer 3, while iron and copper are interleaved together in another buffer.)


    The glBeginTransformFeedback() function starts transform feedback on the currently bound transform feedback object. The primitiveMode parameter must be GL_POINTS, GL_LINES, or GL_TRIANGLES, and must match the primitive type expected to arrive at primitive assembly. Note that it does not need to match the primitive mode used in subsequent draw commands if tessellation or a geometry shader read more..


    void glPauseTransformFeedback(void ); Pauses the recording of varyings in transform feedback mode. Transform feedback may be resumed by calling glResumeTransformFeedback(). glPauseTransformFeedback() will generate an error if transform feedback is not active, or if it is already paused. To restart transform feedback while it is paused, glResumeTransformFeedback() must be used (not read more..


Figure 5.19 Schematic of the particle system simulator (In the geometry pass, the vertex shader generates view- and world-space vertices from object-space input geometry; the world-space geometry is captured with transform feedback while the view-space geometry is rasterized as triangles and shaded as normal. In the particle pass, double-buffered particle position and velocity data passes through the collision detector, and the updated particles are rasterized as points.)


    Example 5.8 contains the source of the vertex shader used to transform the incoming geometry into both world and eye space, and Example 5.9 shows how transform feedback is configured to capture the resulting world space geometry. Example 5.8 Vertex Shader Used in Geometry Pass of Particle System Simulator #version 420 core uniform mat4 model_matrix; uniform mat4 projection_matrix; layout read more..


    for loop. The line segment is formed by taking the particle’s current position and using its velocity to calculate where it will be at the end of the time step. This is performed for every captured triangle. If a collision is found, the point’s new position is reflected about the plane of the triangle to make it ‘‘bounce’’ off the geometry. Example 5.10 contains read more..


    wu = dot(w, u);
    wv = dot(w, v);
    D = uv * uv - uu * vv;

    float s, t;

    s = (uv * wv - vv * wu) / D;
    if (s < 0.0 || s > 1.0)
        return false;
    t = (uv * wu - uu * wv) / D;
    if (t < 0.0 || (s + t) > 1.0)
        return false;

    return true;
}

vec3 reflect_vector(vec3 v, vec3 n)
{
    return v - 2.0 * dot(v, n) * n;
}

void main(void)
{
    vec3 acceleration =


    The code to set up transform feedback to capture the updated particle position and velocity vectors is shown in Example 5.11. Example 5.11 Configuring the Simulation Pass of the Particle System Simulator static const char * varyings[] = { "position_out", "velocity_out" }; glTransformFeedbackVaryings(update_prog, 2, varyings, GL_INTERLEAVED_ATTRIBS); glLinkProgram(update_prog); The inner read more..


else
{
    glBindVertexArray(vao[0]);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, vbo[1]);
}

glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, min(point_count, (frame_count >> 3)));
glEndTransformFeedback();

glBindVertexArray(0);

frame_count++;

The result of the program is shown in Figure 5.20.

Figure 5.20 Result of the particle system simulator

258 Chapter 5: Viewing Transformations, Clipping, and


    Chapter 6 Textures Chapter Objectives After reading this chapter, you’ll be able to do the following: • Understand what texture mapping can add to your scene. • Supply texture images in compressed and uncompressed formats. • Control how a texture image is filtered as it is applied to a fragment. • Create and manage texture images in texture objects. • Supply texture read more..


The goal of computer graphics, generally speaking, is to determine the colors that make up each part of an image. While it is possible to calculate the color of a pixel using an advanced shading algorithm, often the complexity of such a shader is so great that it is not practical to implement such approaches. Instead, we rely on textures---large chunks of image data


    • ‘‘Point Sprites’’ describes a feature of OpenGL that provides texture coordinates automatically for geometry rendered as points, allowing your application to very quickly render small bitmaps to the display. • ‘‘Rendering to Texture Maps’’ explains how to render directly into a texture map by using framebuffer objects. Texture Mapping In the physical world, colors within read more..


• Associate a texture sampler with each texture map you intend to use in your shader.

• Retrieve the texel values through the texture sampler from your shader.

We'll discuss each of those steps in the following sections.

Basic Texture Types

OpenGL supports many types of texture object of varying dimensionalities and layout. Each texture object represents a set of images


Table 6.1 Texture Targets and Corresponding Sampler Types

    Target (GL_TEXTURE*)      Sampler Type        Dimensionality
    1D                        sampler1D           1D
    1D_ARRAY                  sampler1DArray      1D array
    2D                        sampler2D           2D
    2D_ARRAY                  sampler2DArray      2D array
    2D_MULTISAMPLE            sampler2DMS         2D multisample
    2D_MULTISAMPLE_ARRAY      sampler2DMSArray    2D multisample array
    3D                        sampler3D           3D
    CUBE_MAP                  samplerCube         cube-map texture
    CUBE_MAP_ARRAY            samplerCubeArray    cube-map array
    RECTANGLE                 sampler2DRect       2D rectangle


    to their appropriate target. To reserve names for texture objects, call glGenTextures(), specifying the number of names to reserve and the address of an array into which to deposit the names. void glGenTextures(GLsizei n, GLuint *textures); Returns n currently unused names for texture objects in the array textures. The names returned in textures will not necessarily be a contiguous read more..


    Immediately on its initial binding, the state of the texture object is reset to the default state for the specified target. In this initial state, texture and sampler properties such as coordinate wrapping modes, and minification and magnification filters are set to their default values, which may be found in the state tables contained in the OpenGL specification. An read more..


    GLboolean glIsTexture(GLuint texture); Returns GL_TRUE if texture is the name of a texture that has been bound and has not been subsequently deleted, and returns GL_FALSE if texture is zero or is a nonzero value that is not the name of an existing texture. Once a texture object has reached the end of its useful life, it should be deleted. The function for deleting textures read more..


    void glTexStorage1D(GLenum target, GLsizei levels, GLenum internalFormat, GLsizei width); void glTexStorage2D(GLenum target, GLsizei levels, GLenum internalFormat, GLsizei width, GLsizei height); void glTexStorage3D(GLenum target, GLsizei levels, GLenum internalFormat, GLsizei width, GLsizei height, GLsizei depth); Specify immutable texture storage for the texture object currently bound to target. glTexStorage1D() read more..


    void glTexStorage2DMultisample(GLenum target, GLsizei samples, GLenum internalFormat, GLsizei width, GLsizei height, GLboolean fixedsamplelocations); void glTexStorage3DMultisample(GLenum target, GLsizei samples, GLenum internalFormat, GLsizei width, GLsizei height, GLsizei depth, GLboolean fixedsamplelocations); Specify immutable texture storage for the multisample texture object currently bound to target. For read more..


    void glTexImage3D(GLenum target, GLint level, GLint internalFormat, GLsizei width, GLsizei height, GLsizei depth, GLint border, GLenum format, GLenum type, const void *data); The functions glTexImage1D(), glTexImage2D(), and glTexImage3D() are used to specify mutable storage and to optionally provide initial image data for a single mipmap level of a 1D, 2D, or 3D texture, respec- tively. In read more..


    The glTexImage2DMultisample() and glTexImage3DMultisample() functions specify storage for 2D and 2D-array multisample textures, respectively. For glTexImage2DMultisample(), target must be GL_TEXTURE_2D_MULTISAMPLE, and for glTexImage3DMultisample(), target must be GL_TEXTURE_2D_MULTISAMPLE_ARRAY. Unlike nonmultisampled textures, no initial data may be specified for multisample textures, and multisample textures may read more..


comes with a size, performance, and quality tradeoff. It is up to you, the application writer, to determine the appropriate format for your needs. Table 6.2 lists all of the internal formats supported by OpenGL, along with their bit sizes for each component.

Table 6.2 Sized Internal Formats

    Sized Internal Format   Base Internal Format   R Bits   G Bits   B Bits   A Bits   Shared Bits
    GL_R8                   GL_RED                 8


Table 6.2 (continued) Sized Internal Formats

    Sized Internal Format   Base Internal Format   R Bits   G Bits   B Bits   A Bits   Shared Bits
    GL_R16F                 GL_RED                 f16
    GL_RG16F                GL_RG                  f16      f16
    GL_RGB16F               GL_RGB                 f16      f16      f16
    GL_RGBA16F              GL_RGBA                f16      f16      f16      f16
    GL_R32F                 GL_RED                 f32
    GL_RG32F                GL_RG                  f32      f32
    GL_RGB32F               GL_RGB                 f32      f32      f32
    GL_RGBA32F              GL_RGBA                f32      f32      f32      f32
    GL_R11F_G11F_B10F       GL_RGB                 f11      f11      f10
    GL_RGB9_E5              GL_RGB                 9        9        9                 5
    GL_R8I                  GL_RED                 i8


    For each format listed in Table 6.2 the full format is made up of an identifier representing the base format, one or more size indicators, and an optional type. The base format essentially determines which components of the texture are present. Formats starting with GL_R have only the red component present, GL_RG formats have both red and green, GL_RGB formats contain red, read more..


    channel in a special reduced-precision floating point format. The 11-bit components have no sign bit, a 5-bit exponent and a 6-bit mantissa. The format GL_RGB9_E5 is special in that it is a shared exponent format. Each component is stored as an independent 9-bit mantissa but shares a single 5-bit exponent between all of the components. This allows textures to be stored with a read more..


Table 6.3 (continued) External Texture Formats

    Format               Components Present
    GL_BLUE_INTEGER      Blue (Integer)
    GL_RG_INTEGER        Red, Green (Integer)
    GL_RGB_INTEGER       Red, Green, Blue (Integer)
    GL_RGBA_INTEGER      Red, Green, Blue, Alpha (Integer)
    GL_BGR_INTEGER       Blue, Green, Red (Integer)
    GL_BGRA_INTEGER      Blue, Green, Red, Alpha (Integer)

Again, notice that the format specifiers listed in Table 6.3 indicate which components are


    Proxy texture targets may be used to test the capabilities of the OpenGL implementation when certain limits are used in combination with each other. For example, consider an OpenGL implementation that reports a maximum texture size of 16384 texels (which is the minimum requirement for OpenGL 4). If one were to create a texture of 16384 × 16384 texels with an internal format read more..


    Example 6.1 Direct Specification of Image Data in C // The following is an 8x8 checkerboard pattern using // GL_RED, GL_UNSIGNED_BYTE data. static const GLubyte tex_checkerboard_data[] = { 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0xFF, read more..


    Replace a region of a texture with new data specified in data. The level, format, and type parameters have the same meaning as in glTexImage1D() through glTexImage3D(). level is the mipmap level number. format and type describe the format and data type of the texture image data pointed to by data. data contains the texture data for the subimage. width, height, and depth (if read more..


    // Next, the color, floating-point data. // Bind the next texture glBindTexture(GL_TEXTURE_2D, tex_color); // Allocate storage glTexStorage2D(GL_TEXTURE_2D, 2, GL_RGBA32F, 2, 2); // Specify the data glTexSubImage2D(GL_TEXTURE_2D, // target 0, // First mipmap level 0, 0, // x and y offset 2, 2, // width and height GL_RGBA, GL_FLOAT, // format and type tex_color_data); // data Notice how, in Example read more..


    glTexSubImage2D(GL_TEXTURE_2D, // target 0, // First mipmap level 0, 0, // x and y offset 8, 8, // width and height GL_RED, // Format GL_UNSIGNED_BYTE, // Type NULL); // data(an offset into buffer) In Example 6.3, we first place our source data (tex_checkerboard_data) into a buffer object bound to the GL_PIXEL_UNPACK_BUFFER binding point, and then call glTexSubImage2D() as we did before. read more..


    When glCopyTexImage1D() or glCopyTexImage2D() is called, it is essentially equivalent to calling glReadPixels(), and then immediately calling either glTexImage1D() or glTexImage2D() to re-upload the image data into the texture. void glCopyTexSubImage1D(GLenum target, GLint level, GLint xoffset, GLint x, GLint y, GLsizei width); void glCopyTexSubImage2D(GLenum target, GLint level, GLint xoffset, GLint read more..


    formatted image file---a JPEG, PNG, GIF, or other type for image format--- OpenGL works either with raw pixels or with textures compressed with specific algorithms. As such, your application will need to decode the image file into memory that OpenGL can read to initialize its internal texture store. To simplify that process for our examples, we wrote a function, vglLoadImage(), read more..


    struct vglImageData { GLenum target; // Texture target (2D, cube map, etc.) GLenum internalFormat; // Recommended internal format GLenum format; // Format in memory GLenum type; // Type in memory (GL_RGB, etc.) GLenum swizzle[4]; // Swizzle for RGBA GLsizei mipLevels; // Number of present mipmap levels GLsizei slices; // Number of slices (for arrays) GLsizeiptr sliceStride; // Distance between read more..


    and texture dimensions to the appropriate texture image function. If the texture is allocated as an immutable object (using glTexStorage2D(), for example), then the image data is specified using a texture subimage command such as glTexSubImage2D(). The vglImageData structure contains all of the parameters required to initialize the image. Example 6.6 shows a simple but complete example of read more..


    // Unload the image here as glTexSubImage2D has consumed // the data and we don’t need it any more. vglUnloadImage(&image); return texture; } As you can see, this code could become quite complex depending on how generic your texture-loading function might be and how many types of texture you might want to load. To make things easier for you, we have included the function read more..


    Retrieving Texture Data Once you have a texture containing data, it is possible to read that data either back into your application’s memory or back into a buffer object. The function for reading image data from a texture is glGetTexImage(), whose prototype is as follows: void glGetTexImage(GLenum target, GLint lod, GLenum format, GLenum type, GLvoid* image); Retrieves a texture read more..


Texture Data Layout

So far, our descriptions of the texture image specification commands have not addressed the physical layout of image data in memory. In many cases, image data is laid out left-to-right, top-to-bottom in memory with texels closely following each other. However, this is not always the case, and so OpenGL provides several controls that allow you to describe how


    applies to any size element, but has a meaningful effect only for multibyte elements. The effect of byte swapping may differ among OpenGL implementations. If on an implementation, GLubyte has 8 bits, GLushort has 16 bits, and GLuint has 32 bits, then Figure 6.1 illustrates how bytes are swapped for different data types. Note that byte swapping has no effect on single-byte data. read more..


    parameters to glTexSubImage2D(), for example. You also need to specify the number of rows and pixels to skip before starting to copy the data for the subrectangle. These numbers are set using the parameters *SKIP_ROWS and *SKIP_PIXELS, as shown in Figure 6.2. By default, both parameters are 0, so you start at the lower left corner. *_ROW_LENGTH *_SKIP_PIXELS *_SKIP_ROWS Subimage Image read more..


    If *ALIGNMENT is set to 1, the next available byte is used. If it’s 2, a byte is skipped if necessary at the end of each row so that the first byte of the next row has an address that’s a multiple of 2. In the case of bitmaps (or 1-bit images), where a single bit is saved for each pixel, the same byte alignment works, although you have to count individual read more..


    *SKIP_IMAGES defines how many layers to bypass before accessing the first data of the subvolume. If the *SKIP_IMAGES value is a positive integer (call the value n), then the pointer in the texture image data is advanced that many layers (n * the size of one layer of texels). The resulting subvolume starts at layer n and is several layers deep---how many layers deep is read more..


    void glGenSamplers(GLsizei count, GLuint *samplers); Returns count currently unused names for sampler objects in the array samplers. The names returned in samplers will not necessarily be a contiguous set of integers. The names in samplers are marked as used, but they acquire sampler state only when they are first bound. The value zero is reserved and is never returned by read more..


    the function. As sampler objects have no inherent dimensionality, there is no reason to distinguish among multiple sampler object types. Secondly, the unit parameter is present here, and there is no selector for sampler objects---that is, there is no glActiveSampler() function. Furthermore, in contrast to the parameter to glActiveTexture(), which is a token between GL_TEXTURE0 and read more..


    void glTexParameter{fi}(GLenum target, GLenum pname, Type param ); void glTexParameter{fi}v(GLenum target, GLenum pname, const Type *param ); void glTexParameterI{i ui}v(GLenum target, GLenum pname, const Type *param ); Set the parameter pname on the texture object currently bound to target of the active texture unit to the value or values given by param. For glTexParameteri(), param is a read more..


    parameters that are represented by a sampler object (or the texture’s own, internal sampler object). A texture is bound to a texture unit and a sampler object is bound to the corresponding sampler unit, and together they are used to read data from the texture’s images. This process is called sampling, and is performed using the texture built-in function in GLSL or one of read more..


    The sampler argument passed into the texture function can be an element of a sampler array, or a parameter in a function. In all cases the argument must be dynamically uniform. That is, the argument must be the result of an expression involving uniforms, constants, or variables otherwise known to have the same value for all the instances of the shader (such as loop read more..


    In Example 6.8, the two inputs are the vertex position and the input texture coordinate, which is passed directly to the shader’s outputs. In this case, these are the built-in gl_Position output and the vs_tex_coord user-defined output that will be passed to the similarly named input in the fragment shader given in Example 6.7. Texture Coordinates Texture coordinates are the read more..


    glGenBuffers(1, &buf); glBindBuffer(GL_ARRAY_BUFFER, buf); glBufferData(GL_ARRAY_BUFFER, quad_data, sizeof (quad_data), GL_STATIC_DRAW); // Setup vertex attributes GLuint vao; glGenVertexArrays(1, &vao); glBindVertexArray(vao); glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0); glEnableVertexAttribArray(0); glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (GLvoid*)(16 * sizeof (float))); glEnableVertexAttribArray(1); read more..


Each of the texture lookup functions in GLSL takes a set of coordinates from which to sample the texel. A texture is considered to occupy a domain spanning from 0.0 to 1.0 along each axis (remember, you may use one-, two-, or even three-dimensional textures). It is the responsibility of the application to generate or supply texture coordinates for these functions to use, as we


final coordinate. This mode can help to eliminate tiling artifacts from repeating textures. Figure 6.6 shows each of the texture modes used to handle texture coordinates ranging from 0.0 to 4.0. All of these modes, except for GL_CLAMP_TO_BORDER, eventually take texels from somewhere in the texture's data store. In the case of GL_CLAMP_TO_BORDER, the returned texels come from the


    Arranging Texture Data Suppose you have an external source of texture data---say an image editing program or another component of your application, perhaps written in another language or using another API over which you have no control. It is possible that the texture data is stored using a component order other than red, green, blue, alpha (RGBA). For example, ABGR is fairly read more..


    // An array of tokens to set ABGR swizzle in one function call. static const GLenum abgr_swizzle[] = { GL_ALPHA, GL_RED, GL_GREEN, GL_BLUE }; // Bind the ABGR texture glBindTexture(GL_TEXTURE_2D, abgr_texture); // Set all four swizzle parameters in one call to glTexParameteriv glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, abgr_swizzle); // Now bind the RGBx texture glBindTexture(GL_TEXTURE_2D, read more..


    glGetActiveUniform() function and may have their values modified using the glUniform1i() function. The integer value assigned to a uniform sampler is the index of the texture unit to which it refers. The steps to use multiple textures in a single shader (or program) are therefore as follows: first, we need to select the first texture unit using glActiveTexture() and bind a read more..


    Example 6.13 Simple Multitexture Example (Fragment Shader) #version 330 core in vec2 tex_coord0; in vec2 tex_coord1; layout (location = 0) out vec4 color; uniform sampler2D tex1; uniform sampler2D tex2; void main(void) { color = texture(tex1, tex_coord0) + texture(tex2, tex_coord1); } In Example 6.13 we are using a different texture coordinate to sample from the two textures. However, it is read more..


    The two source textures used in this example are shown in Figure 6.7 and the result of rendering with our updated fragment shader with two textures bound is shown in Figure 6.8. Figure 6.7 Two textures used in the multitexture example Figure 6.8 Output of the simple multitexture example Complex Texture Types Textures are often considered only as one- or two-dimensional images that read more..


    3D Textures A 3D texture can be thought of as a volume of texels arranged in a 3D grid. To create a 3D texture, generate a texture object name and bind it initially to the GL_TEXTURE_3D target. Once bound, you may use glTexStorage3D() or glTexImage3D() to create the storage for the texture object. The 3D texture has not only a width and a height but also a depth. The read more..

  • Page - 355

    // Multiply the texture coordinate by the transformation
    // matrix to place it into 3D space
    tex_coord = (vec4(in_tex_coord, 0.0, 1.0) * tc_rotate).stp;

    // Pass position through unchanged.
    gl_Position = vec4(in_position, 0.5, 1.0);
    }

    Example 6.16 Simple Volume Texture Fragment Shader

    #version 330 core

    // Incoming texture coordinate from vertex shader
    in vec3 tex_coord;

    // Final color
    layout ...

  • Page - 356

    Array Textures

    For certain applications, you may have a number of one- or two-dimensional textures that you might like to access simultaneously within the confines of a single draw call. For instance, suppose you're authoring a game that features multiple characters of basically the same geometry, but each of which has its own costume. Or you might want to use multiple layers ...

  • Page - 357

    square and of the same size. When you sample from a cube map, the texture coordinate used is three-dimensional and is treated as a direction from the origin. This direction essentially points at the location on the surface of the cube from which to read the texture. Imagine you were standing in the middle of a square room with a laser pointer. You could point the ...
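    The face that a direction vector "points at" is determined by its largest-magnitude component. As a CPU-side sketch (an illustration, not OpenGL API), the face can be numbered in the +X, −X, +Y, −Y, +Z, −Z order used by the GL_TEXTURE_CUBE_MAP_POSITIVE_X + face idiom:

```c
#include <assert.h>
#include <math.h>

/* Sketch of cube-map face selection: pick the face along the
 * direction's largest-magnitude axis; faces numbered in the
 * +X, -X, +Y, -Y, +Z, -Z order. */
static int cube_face_from_direction(float x, float y, float z)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? 0 : 1; /* +X / -X */
    if (ay >= az)             return y >= 0.0f ? 2 : 3; /* +Y / -Y */
    return z >= 0.0f ? 4 : 5;                           /* +Z / -Z */
}
```

    Pointing mostly down (0, −2, 1) hits the −Y face, just as the laser pointer would hit the floor.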

  • Page - 358

    // Now that storage is allocated for the texture object,
    // we can place the texture data into its texel array.
    for (int face = 0; face < 6; face++)
    {
        GLenum target = GL_TEXTURE_CUBE_MAP_POSITIVE_X + face;
        glTexSubImage2D(target,              // Face
                        0,                   // Level
                        0, 0,                // X, Y offset
                        1024, 1024,          // Size of face
                        GL_RGBA,             // Format
                        GL_UNSIGNED_BYTE,    // Type
                        texture_data[face]); // Data
    }
    // Now, ...

  • Page - 359

                        GL_RGBA,                         // Format
                        GL_UNSIGNED_BYTE,                // Type
                        texture_data[face][cube_index]); // Data
        }
    }

    Cube-Map Example---Sky Boxes

    A common use for cube-map textures is sky boxes. A sky box is an application of texturing where the entire scene is effectively wrapped in a large cube with the viewer placed in the center. As the scene is rendered, anything not covered by objects within the scene ...

  • Page - 360

    To render the images shown in Figure 6.10, we simply render a unit cube centered at the origin and use the object-space position as a texture coordinate from which to sample the cube map. The vertex shader for this example is shown in Example 6.19 and the corresponding fragment shader is shown in Example 6.20.

    Example 6.19 Simple Skybox Example---Vertex Shader

    #version 330 core
    ...

  • Page - 361

    Example 6.21 Cube-Map Environment Mapping Example---Vertex Shader

    #version 330 core

    // Incoming position and normal
    layout (location = 0) in vec4 in_position;
    layout (location = 1) in vec3 in_normal;

    // Outgoing surface normal and view-space position
    out vec3 vs_fs_normal;
    out vec3 vs_fs_position;

    // Model-view-projection and model-view matrices
    uniform mat4 mat_mvp;
    uniform mat4 mat_mv;

    void main(void)
    {
        // ...

  • Page - 362

    // view-space position around the surface normal.
    vec3 tc = reflect(-vs_fs_position, normalize(vs_fs_normal));

    // Sample the texture and color the resulting fragment
    // a golden color.
    color = vec4(0.3, 0.2, 0.1, 1.0) +
            vec4(0.97, 0.83, 0.79, 0.0) * texture(tex, tc);
    }

    The fragment shader also slightly modifies the sampled texture value retrieved from the cube map in order to make it ...
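    The reflect() call above follows the standard GLSL definition, reflect(I, N) = I − 2·dot(N, I)·N, which assumes N is normalized. A CPU-side sketch of the same math (illustrative types, not an OpenGL API):

```c
#include <assert.h>

/* Sketch of GLSL reflect(I, N) = I - 2 * dot(N, I) * N.
 * Assumes the normal N is already normalized. */
typedef struct { float x, y, z; } vec3f;

static float dot3(vec3f a, vec3f b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static vec3f reflect3(vec3f i, vec3f n)
{
    float d = 2.0f * dot3(n, i);
    vec3f r = { i.x - d * n.x, i.y - d * n.y, i.z - d * n.z };
    return r;
}
```

    A ray heading down-right, (1, −1, 0), bounced off an upward-facing surface (normal (0, 1, 0)) leaves up-right as (1, 1, 0).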

  • Page - 363

    However, if the texture filtering mode is linear, then toward the edges of the cube's faces, the adjoining faces' texels are not considered when calculating the final filtered texel values. This can cause a noticeable seam to appear in the filtered texture. Even worse, if the texture coordinate wrapping mode is left at one of the repeating modes, then texels from the opposite ...

  • Page - 364

    Figure 6.13 The effect of seamless cube-map filtering

    Shadow Samplers

    A special type of sampler is provided in GLSL called a shadow sampler. A shadow sampler takes an additional component in the texture coordinate that is used as a reference against which to compare the fetched texel values. When using a shadow sampler, the value returned from the texture function is a ...

  • Page - 365

    float texture(gsampler2DArrayShadow tex, vec4 P[, float bias]);
    float texture(gsampler2DRectShadow tex, vec3 P);
    float texture(gsamplerCubeArrayShadow tex, vec4 P, float compare);

    Sample the shadow texture bound to the texture unit referenced by tex at the texture coordinates specified by P. The return value is a floating-point quantity representing the fraction of samples that passed the shadow ...

  • Page - 366

    Buffer Textures

    Buffer textures are a special class of texture that allow a buffer object to be accessed from a shader as if it were a large, one-dimensional texture. Buffer textures have certain restrictions and differences from normal one-dimensional textures but otherwise appear similar to them in your code. You create them as normal texture objects, bind them to texture units, ...

  • Page - 367

    The code shown in Example 6.23 shows an example of creating a buffer, initializing its data store, and then associating it with a buffer texture.

    Example 6.23 Creating and Initializing a Buffer Texture

    // Buffer to be used as the data store
    GLuint buf;
    // Texture to be used as a buffer texture
    GLuint tex;
    // Data is located somewhere else in this program
    extern const GLvoid* ...

  • Page - 368

    function to read individual samples from it. The texelFetch function for buffer textures is defined as follows:

    vec4 texelFetch(samplerBuffer s, int coord);
    ivec4 texelFetch(isamplerBuffer s, int coord);
    uvec4 texelFetch(usamplerBuffer s, int coord);

    Perform a lookup of a single texel from texture coordinate coord in the texture bound to s.

    An example of the declaration of a buffer ...
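    Conceptually, texelFetch on a buffer texture is an unfiltered, unnormalized read of element coord straight out of the buffer's data store. The sketch below models that with a plain C array (an illustration only; in GLSL, out-of-range fetches are undefined, so this sketch clamps for safety):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of texelFetch(samplerBuffer, int): a direct, unfiltered
 * read of texel `coord` from a linear data store. */
typedef struct { float r, g, b, a; } texel4;

static texel4 texel_fetch_buffer(const texel4 *store, size_t count,
                                 int coord)
{
    /* Out-of-range fetches are undefined in GLSL; clamp here. */
    if (coord < 0) coord = 0;
    if ((size_t)coord >= count) coord = (int)count - 1;
    return store[coord];
}
```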

  • Page - 369

    textures with various different dimensionalities---perhaps taking a single slice of an array texture and treating it as a single 2D texture, say. OpenGL allows you to share a single data store between multiple textures, each with its own format and dimensions. First, a texture is created and its data store initialized with one of the immutable data storage functions (such as ...

  • Page - 370

    Table 6.6 (continued) Target Compatibility for Texture Views

    Original Target (GL_TEXTURE*)    Compatible Targets (GL_TEXTURE*)
    1D_ARRAY                         1D, 1D_ARRAY
    2D_ARRAY                         2D, 2D_ARRAY
    CUBE_MAP_ARRAY                   CUBE_MAP, 2D, 2D_ARRAY, CUBE_MAP_ARRAY
    2D_MULTISAMPLE                   2D_MULTISAMPLE, 2D_MULTISAMPLE_ARRAY
    2D_MULTISAMPLE_ARRAY             2D_MULTISAMPLE, 2D_MULTISAMPLE_ARRAY

    In addition to target compatibility, the internal format of the new view must be of the same ...

  • Page - 371


  • Page - 372

    GL_TEXTURE_2D even though the original texture is GL_TEXTURE_2D_ARRAY. Example 6.26 shows an example of this.

    Example 6.26 Creating a Texture View with a New Target

    // Create two texture names - one will be our parent,
    // one will be the view
    GLuint tex[2];
    glGenTextures(2, tex);

    // Bind the first texture and initialize its data store
    // We are going to create a 2D array ...

  • Page - 373

    Compressed Textures

    Compression is a mechanism by which the amount of data required to store or transmit information is reduced. Because texture data can consume a very large amount of memory (and consequently, memory bandwidth), OpenGL supports storing textures in compressed forms in order to reduce their size. Compression algorithms fall into two general categories---lossless and lossy.

  • Page - 374

    compressed data to OpenGL directly. This way, you can spend as much time as is necessary to achieve the desired quality level in the resulting texture without sacrificing run-time performance. Under either mechanism, the first step is to choose a compressed internal format. There are a myriad of texture-compression algorithms and formats, and different hardware and implementations of read more..

  • Page - 375

    void glCompressedTexImage3D(GLenum target, GLint level,
                                GLenum internalFormat,
                                GLsizei width, GLsizei height,
                                GLsizei depth, GLint border,
                                GLsizei imageSize, const void *data);

    Establish storage for textures using a compressed internal format. Any existing data store for level level of the texture bound to target of the active texture unit is released, and a new store is established in its ...

  • Page - 376

    void glCompressedTexSubImage3D(GLenum target, GLint level,
                                   GLint xoffset, GLint yoffset,
                                   GLint zoffset, GLsizei width,
                                   GLsizei height, GLsizei depth,
                                   GLenum format, GLsizei imageSize,
                                   const void *data);

    Update the compressed texture data in level level of the texture bound to target of the active texture unit. xoffset and width specify the offset in the x-axis and the width of the texture data, ...

  • Page - 377

    Figure 6.14 Effect of texture minification and magnification

    ... the texture map needs to be stretched in one direction and shrunk in the other, OpenGL makes a choice between magnification and minification that in most cases gives the best result possible. It's best to try to avoid these situations by using texture coordinates that map without such distortion.

    Linear Filtering

    Linear ...

  • Page - 378

    For image data, the same technique can be applied. So long as the sampling rate (resolution) of the texture is high enough relative to the sharp peaks in the image data (details), a linear reconstruction of the image will appear to have reasonably high quality. The translation from a signal as shown in Figure 6.15 into a texture is easy to conceive when a 1D texture is ...
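    The 1D building block of this reconstruction is a weighted average of the two nearest texels, controlled by the fractional sample position. A minimal sketch (illustrative helper, not an OpenGL API); bilinear and trilinear filtering just apply it repeatedly across axes and mipmap levels:

```c
#include <assert.h>

/* Sketch of linear (GL_LINEAR-style) reconstruction between two
 * adjacent texels t0 and t1, with frac in [0, 1]. */
static float linear_filter(float t0, float t1, float frac)
{
    return t0 + (t1 - t0) * frac;
}
```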

  • Page - 379

    Not only can linear filtering be used to smoothly transition from one sample to the adjacent ones in 1D, 2D, and 3D textures, it can also be used to blend texels sampled from adjacent mipmap levels in a texture. This works in a very similar manner to that described above. OpenGL calculates the mipmap level from which it needs to select samples and the result of this read more..

  • Page - 380

    Using and Generating Mipmaps

    Textured objects can be viewed, like any other objects in a scene, at different distances from the viewpoint. In a dynamic scene, as a textured object moves farther from the viewpoint, the ratio of pixels to texels in the texture becomes very low and the texture ends up being sampled at a very low rate. This has the effect of producing ...

  • Page - 381

    Figure 6.17 A pre-filtered mipmap pyramid

    The parameter GL_TEXTURE_MIN_FILTER controls how texels are constructed when the mipmap level is greater than zero. There are a total of six settings available for this parameter. The first two are the same as for magnification---GL_NEAREST and GL_LINEAR. Choosing one of these two modes disables mipmapping and causes OpenGL to use only the ...

  • Page - 382

    NEAREST or LINEAR. The first part, {A}, controls how the texels from each of the mipmap levels are constructed and works the same way as the GL_TEXTURE_MAG_FILTER setting. The second, {B}, controls how these samples are blended between the mipmap levels. When it's NEAREST, only the closest mipmap level is used. When it's LINEAR, the two closest mipmaps are linearly interpolated.

  • Page - 383

    To use mipmapping, you must provide all sizes of your texture in powers of 2 between the largest size and a 1 × 1 map. If you don’t intend to use mipmapping to go all the way to a 1 × 1 texture, you can set the value of GL_TEXTURE_MAX_LEVEL to the maximum level you have supplied, and OpenGL will not consider any further levels in its evaluation of texture read more..

  • Page - 384

    down the mipmap pyramid. This texture was applied to a large plane extending into the distance. The further the plane gets from the viewer, the narrower it becomes in screen space and the more compressed the texture becomes. OpenGL chooses successively higher mipmap levels (lower resolution levels) from the texture. To further illustrate the effect, the example sets the mipmap read more..

  • Page - 385

    Calculating the Mipmap Level

    The computation of which mipmap level of a texture to use for a particular pixel depends on the scale factor between the texture image and the size of the polygon to be textured (in pixels). Let's call this scale factor ρ, and also define a second value, λ, where

        λ = log2(ρ) + lodbias.

    (Since texture images can be multidimensional, it is ...

  • Page - 386

    The default parameters for GL_TEXTURE_MAG_FILTER and GL_TEXTURE_MIN_FILTER are GL_LINEAR and GL_NEAREST_MIPMAP_LINEAR, respectively. Notice that the default minification filter enables mipmapping. This is important because in order to use mipmapping, the texture must have a complete set of mipmap levels, and they must have a consistent set of resolutions as described in ''Using and ...

  • Page - 387

    Advanced Texture Lookup Functions

    In addition to simple texturing functions such as texture and texelFetch, several more variants of the texture fetch functions are supported by the shading language. These are covered in this subsection.

    Explicit Level of Detail

    Normally, when using mipmaps, OpenGL will calculate the level of detail and the resulting mipmap levels from which to sample ...

  • Page - 388

    the partial derivative of the texture coordinates is given as a parameter. Some key prototypes are listed below. (A full list is in Appendix C, ''Built-in GLSL Variables and Functions''.)

    gvec4 textureGrad(gsampler1D tex, float P, float dPdx, float dPdy);
    gvec4 textureGrad(gsampler2D tex, vec2 P, vec2 dPdx, vec2 dPdy);
    gvec4 textureGrad(gsampler3D tex, vec3 P, vec3 dPdx, vec3 dPdy);
    gvec4 ...

  • Page - 389

    gvec4 textureOffset(gsampler1DArray tex, vec2 P, int offset[, float bias]);
    gvec4 textureOffset(gsampler2DArray tex, vec3 P, ivec2 offset[, float bias]);
    gvec4 textureOffset(gsampler2DRect tex, vec2 P, ivec2 offset);

    Sample a texel from the sampler given by tex at the texture coordinates given by P. After the floating-point texture coordinate P has been suitably scaled and converted ...

  • Page - 390

    gvec4 textureProj(gsampler2DRect tex, vec3 P);
    gvec4 textureProj(gsampler2DRect tex, vec4 P);

    Perform a texture lookup with projection by dividing the texture coordinate specified in P by the last component of P and using the resulting values to perform a texture lookup as would be executed by the normal texture function.

    Texture Queries in Shaders

    The following two built-in GLSL functions don't ...

  • Page - 391

    int textureQueryLevels(gsampler1D tex);
    int textureQueryLevels(gsampler2D tex);
    int textureQueryLevels(gsampler3D tex);
    int textureQueryLevels(gsamplerCube tex);
    int textureQueryLevels(gsampler1DArray tex);
    int textureQueryLevels(gsampler2DArray tex);
    int textureQueryLevels(gsamplerCubeArray tex);
    int textureQueryLevels(sampler1DShadow tex);
    int textureQueryLevels(sampler2DShadow tex);
    int textureQueryLevels(samplerCubeShadow tex);
    int ...

  • Page - 392

    Gathering Texels

    The textureGather function is a special function that allows your shader to read the four samples that would have been used to create a bilinearly filtered texel from a 2D texture (or cube map, rectangle texture, or array of these types). Typically used with single-channel textures, the optional comp component of the function allows you to select a channel other ...

  • Page - 393

    gvec4 textureProjLod(gsampler2D tex, vec3 P, float lod);
    gvec4 textureProjGrad(gsampler2D tex, vec3 P, vec2 dPdx, vec2 dPdy);
    gvec4 textureProjOffset(gsampler2D tex, vec3 P, ivec2 offset[, float bias]);
    gvec4 textureGradOffset(gsampler2D tex, vec2 P, vec2 dPdx, vec2 dPdy, ivec2 offset);
    gvec4 textureProjLodOffset(gsampler2D tex, vec3 P, float lod, ivec2 offset);
    gvec4 textureProjGradOffset(gsampler2D tex, vec3 ...

  • Page - 394

    Textured Point Sprites

    By using gl_PointCoord to look up texels in a texture in the fragment shader, simple point sprites can be generated. Each point sprite simply shows the texture as a square. Example 6.27 is the vertex shader used in the example. Notice that we're writing to gl_PointSize in the vertex shader. This is to control the size of the point sprites---they're ...

  • Page - 395

    Figure 6.20 Result of the simple textured point sprite example

    Analytic Color and Shape

    You are not limited to sourcing your point sprite data from a texture. Textures have a limited resolution, but gl_PointCoord can be quite precise. The shader shown in Example 6.29 demonstrates how you can analytically determine coverage in the fragment shader. This shader centers gl_PointCoord around ...

  • Page - 396

    Figure 6.21 shows the output of this example.

    Figure 6.21 Analytically calculated point sprites

    By increasing the size of the point sprite and reducing the number of points in the scene, it is possible to see the extremely smooth edges of the discs formed by the fragment shader, as shown in Figure 6.22.

    Figure 6.22 Smooth edges of circular point sprites

  • Page - 397

    Controlling the Appearance of Points

    Various controls exist to allow the appearance of points to be tuned by your application. These parameters are set using glPointParameterf() or glPointParameteri().

    void glPointParameter{if}(GLenum pname, TYPE param);
    void glPointParameter{if}v(GLenum pname, const TYPE *param);

    Set the point parameter specified by pname to the value(s) specified by param. pname ...

  • Page - 398

    attenuated by the point fade factor, which is computed as follows:

        fade = 1                               if derived_size >= threshold
        fade = (derived_size / threshold)^2    otherwise

    Rendering to Texture Maps

    In addition to using framebuffer objects (as described in ''Framebuffer Objects'' on Page 180 in Chapter 4) for offscreen rendering, you can also use FBOs to update texture maps. You might do this to ...
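    The fade factor above is straightforward to sketch in C (an illustrative helper, not an OpenGL API): full intensity at or above the threshold, and a quadratic falloff below it.

```c
#include <assert.h>

/* Sketch of the point fade factor: 1 when the derived point size is
 * at least the threshold, (derived_size / threshold)^2 below it. */
static float point_fade(float derived_size, float threshold)
{
    if (derived_size >= threshold)
        return 1.0f;
    float r = derived_size / threshold;
    return r * r;
}
```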

  • Page - 399

    The glFramebufferTexture* family of routines attaches levels of a texture map as a framebuffer attachment. glFramebufferTexture() attaches level level of texture object texture (assuming texture is not zero) to attachment. glFramebufferTexture1D(), glFramebufferTexture2D(), and glFramebufferTexture3D() each attach a specified texture image of a texture object as a rendering attachment to a framebuffer ...

  • Page - 400

    Example 6.30 Attaching a Texture Level as a Framebuffer Attachment: fbotexture.cpp

    GLsizei TexWidth, TexHeight;
    GLuint framebuffer, texture;

    void init()
    {
        GLuint renderbuffer;

        // Create an empty texture
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TexWidth, TexHeight, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        // Create a depth buffer for our framebuffer ...

  • Page - 401

    For three-dimensional textures, or one- and two-dimensional texture arrays, you can also attach a single layer of the texture as a framebuffer attachment.

    void glFramebufferTextureLayer(GLenum target, GLenum attachment,
                                   GLuint texture, GLint level,
                                   GLint layer);

    Attaches a layer of a three-dimensional texture, or a one- or two-dimensional array texture, as a framebuffer attachment, in a similar manner ...

  • Page - 402

    void glInvalidateFramebuffer(GLenum target, GLsizei numAttachments,
                                 const GLenum *attachments);
    void glInvalidateSubFramebuffer(GLenum target, GLsizei numAttachments,
                                    const GLenum *attachments,
                                    GLint x, GLint y,
                                    GLint width, GLint height);

    Instruct OpenGL that it may discard the contents of the specified framebuffer attachments within the region delimited by x, y, width, and height.

  • Page - 403

    Chapter Summary

    In this chapter, we have given an overview of texturing in OpenGL. Applications of textures in computer graphics are wide ranging and surprisingly complex. The best that can be done in a single chapter of a book is to scratch the surface and hopefully convey to the reader the depth and usefulness of textures. Entire books could be written on advanced uses of ...

  • Page - 404

    -- Binding the buffer to a target, preferably the GL_TEXTURE_BUFFER target.
    -- Defining the storage for the buffer object using glBufferData().

    • Attach the buffer object's data store to the texture by
    -- Binding the texture to the GL_TEXTURE_BUFFER target and,
    -- Calling glTexBuffer() with the name of the initialized buffer object.

    Texture Best Practices

    Here are some tips to ensure ...

  • Page - 405


  • Page - 406

    Chapter 7
    Light and Shadow

    Chapter Objectives

    After reading this chapter, you'll be able to do the following:

    • Code a spectrum of fragment shaders to light surfaces with ambient, diffuse, and specular lighting from multiple light sources.
    • Migrate lighting code between fragment and vertex shaders, based on quality and performance trade-offs.
    • Use a single shader to apply a ...

  • Page - 407

    In the real world, we see things because they reflect light from a light source or because they are light sources themselves. In computer graphics, just as in real life, we won’t be able to see an object unless it is illuminated by or emits light. We will explore how the OpenGL Shading Language can help us implement such models so that they can execute at interactive read more..

  • Page - 408

    Classic Lighting Model

    The classic lighting model adds up a set of independently computed lighting components to get a total lighting effect for a particular spot on a material surface. These components are ambient, diffuse, and specular. Each is described below, and Figure 7.1 shows them visually.

    Figure 7.1 Elements of the classic lighting model (Ambient (top left) plus diffuse ...

  • Page - 409

    computation depends on the direction of the surface normal and the direction of the light source, but not the direction of the eye. It also depends on the color of the surface. Specular highlighting is light reflected directly by the surface. This highlighting refers to how much the surface material acts like a mirror. A highly polished metal ball reflects a very sharp bright read more..

  • Page - 410

    Example 7.1 Setting Final Color Values with No Lighting

    ----------------------- Vertex Shader -------------------------
    // Vertex shader with no lighting
    #version 330 core

    uniform mat4 MVPMatrix; // model-view-projection transform

    in vec4 VertexColor;    // sent from the application, includes alpha
    in vec4 VertexPosition; // pre-transformed position

    out vec4 Color;         // sent to the rasterizer for interpolation ...

  • Page - 411

    intensity enables multiplication to model expected interaction. This is demonstrated for ambient light in Example 7.2. It is okay for light colors to go above 1.0 though, especially as we start adding up multiple sources of light. We will start now using the min() function to saturate the light at white. This is important if the output color is the final value for display read more..

  • Page - 412

    of 1.0, or just include only the r, g, and b components in the computation. For example, the two lines of code in the fragment shader could read

    vec3 scatteredLight = vec3(Ambient); // this is the only light
    vec3 rgb = min(Color.rgb * scatteredLight, vec3(1.0));
    FragColor = vec4(rgb, Color.a);

    which passes the Color alpha component straight through to the output FragColor alpha ...

  • Page - 413

    reflection. As the angle widens, the cosine moves toward 0.0, indicating less reflected light. Fortunately, if our vectors are normalized (having a length of 1.0), these cosines are computed with a simple dot product, as shown in Example 7.3. The surface normal will be interpolated between vertices, though it could also come from a texture map or an analytic computation. The far ...
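    The diffuse term described here reduces to a clamped dot product. A CPU-side sketch (illustrative helper, not shader code), assuming both vectors are normalized:

```c
#include <assert.h>

/* Sketch of the Lambertian diffuse term: the cosine between the
 * normalized surface normal n and light direction l, computed as a
 * dot product and clamped at zero for back-facing surfaces. */
static float lambert_diffuse(const float n[3], const float l[3])
{
    float d = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
    return d > 0.0f ? d : 0.0f;
}
```

    A light directly overhead an upward-facing surface gives full intensity; a light from below gives none.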

  • Page - 414

    in vec4 VertexColor;
    in vec3 VertexNormal;   // we now need a surface normal
    in vec4 VertexPosition;

    out vec4 Color;
    out vec3 Normal;        // interpolate the normalized surface normal

    void main()
    {
        Color = VertexColor;

        // transform the normal, without perspective, and normalize it
        Normal = normalize(NormalMatrix * VertexNormal);
        gl_Position = MVPMatrix * VertexPosition;
    }

    -------------------------- Fragment Shader ----------------------------
    ...

  • Page - 415

        // don't modulate the underlying color with reflected light,
        // only with scattered light
        vec3 rgb = min(Color.rgb * scatteredLight + reflectedLight,
                       vec3(1.0));
        FragColor = vec4(rgb, Color.a);
    }

    A couple more notes about this example. First, in this example, we used a scalar Strength to allow independent adjustment of the brightness of the specular reflection relative to the ...

  • Page - 416

    The additional calculations needed for a point light over a directional light show up in the first few lines of the fragment shader in Example 7.4. The first step is to compute the light direction vector from the surface to the light position. We then compute light distance by using the length() function. Next, we normalize the light direction vector so we can use it in read more..

  • Page - 417

    uniform float LinearAttenuation;
    uniform float QuadraticAttenuation;

    in vec4 Color;
    in vec3 Normal;
    in vec4 Position;

    out vec4 FragColor;

    void main()
    {
        // find the direction and distance of the light,
        // which changes fragment to fragment for a local light
        vec3 lightDirection = LightPosition - vec3(Position);
        float lightDistance = length(lightDirection);

        // normalize the light direction vector, so
        // ...
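    The constant/linear/quadratic attenuation used in this shader is a simple function of the light distance. A CPU-side sketch (an illustration of the formula, not the shader itself):

```c
#include <assert.h>

/* Sketch of distance attenuation:
 * 1 / (constant + linear * d + quadratic * d^2). */
static float attenuation(float d, float kc, float kl, float kq)
{
    return 1.0f / (kc + kl * d + kq * d * d);
}
```

    With constant 1, linear 0.5, and quadratic 0.25, a light at distance 2 is attenuated to 1/3 of its intensity.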

  • Page - 418

    Spotlights

    In stage and cinema, spotlights project a strong beam of light that illuminates a well-defined area. The illuminated area can be further shaped through the use of flaps or shutters on the sides of the light. OpenGL includes light attributes that simulate a simple type of spotlight. Whereas point lights are modeled as sending light equally in all directions, OpenGL ...

  • Page - 419

    in vec4 VertexColor;
    in vec3 VertexNormal;
    in vec4 VertexPosition;

    out vec4 Color;
    out vec3 Normal;
    out vec4 Position;

    void main()
    {
        Color = VertexColor;
        Normal = normalize(NormalMatrix * VertexNormal);
        Position = MVMatrix * VertexPosition;
        gl_Position = MVPMatrix * VertexPosition;
    }

    -------------------------- Fragment Shader ----------------------------
    // Fragment shader computing a spotlight's effect
    #version ...

  • Page - 420

        // how close are we to being in the spot?
        float spotCos = dot(lightDirection, -ConeDirection);

        // attenuate more, based on spot-relative position
        if (spotCos < SpotCosCutoff)
            attenuation = 0.0;
        else
            attenuation *= pow(spotCos, SpotExponent);

        vec3 halfVector = normalize(lightDirection + EyeDirection);

        float diffuse = max(0.0, dot(Normal, lightDirection));
        float specular = max(0.0, dot(Normal, halfVector));
        ...

  • Page - 421

    in the vertex shader, but not too far apart that the lighting vectors (surface normal, light direction, etc.) point in notably different directions. Example 7.6 goes back to the point-light code (from Example 7.4) and moves some lighting calculations to the vertex shader. Example 7.6 Point-light Source Lighting in the Vertex Shader --------------------------- Vertex Shader read more..

  • Page - 422

        gl_Position = MVPMatrix * VertexPosition;
    }

    -------------------------- Fragment Shader ----------------------------
    // Fragment shader with point-light calculations done in vertex shader
    #version 330 core

    uniform vec3 Ambient;
    uniform vec3 LightColor;
    // uniform vec3 LightPosition;  // no longer need this
    uniform float Shininess;
    uniform float Strength;

    in vec4 Color;
    in vec3 Normal;
    // in vec4 Position;  // ...

  • Page - 423

    lighting artifacts that betray a surface’s tessellation to the viewer. This is especially obvious for coarse tessellations and specular highlights. When surface normals are interpolated and then consumed in the fragment shader, we get variants of Phong shading. This is not to be confused with the Phong reflection model, which is essentially what this entire section on classic read more..

  • Page - 424

    In this example, we are using a couple of Booleans, isLocal and isSpot to select what kind of light is represented. If you end up with lots of different light types to choose from, this would be better done as an int going through a switch statement. This structure also includes an ambient color contribution. Earlier, we used a global Ambient assumed to represent all read more..

  • Page - 425

        Position = MVMatrix * VertexPosition;
        gl_Position = MVPMatrix * VertexPosition;
    }

    -------------------------- Fragment Shader ----------------------------
    // Fragment shader for multiple lights.
    #version 330 core

    struct LightProperties {
        bool isEnabled;
        bool isLocal;
        bool isSpot;
        vec3 ambient;
        vec3 color;
        vec3 position;
        vec3 halfVector;
        vec3 coneDirection;
        float spotCosCutoff;
        float spotExponent;
        float constantAttenuation;
        float ...

  • Page - 426

        if (Lights[light].isLocal) {
            lightDirection = lightDirection - vec3(Position);
            float lightDistance = length(lightDirection);
            lightDirection = lightDirection / lightDistance;

            attenuation = 1.0 /
                (Lights[light].constantAttenuation
                 + Lights[light].linearAttenuation    * lightDistance
                 + Lights[light].quadraticAttenuation * lightDistance
                                                      * lightDistance);

            if (Lights[light].isSpot) {
                float spotCos = dot(lightDirection, ...

  • Page - 427

    Some metals and clothes display cool-looking properties as having different underlying colors for scattered light and reflected light. It’s your choice how many of these independent colors you mix together for the effect you want to create. For example, in the method below, setting the material’s specular value to (1.0, 1.0, 1.0, 1.0) would make the model degenerate to the model read more..

  • Page - 428

    struct MaterialProperties {
        vec3 emission;
        vec3 ambient;
        vec3 diffuse;
        vec3 specular;
        float shininess;
    };

    // a set of materials to select between, per shader invocation
    const int NumMaterials = 14;
    uniform MaterialProperties Material[NumMaterials];

    flat in int MatIndex; // input material index from vertex shader
    . . .
    void main()
    {
        . . .
        // Accumulate all the lights' effects
        scatteredLight += ...

  • Page - 429

    to do this. Here we chose to double the array and use even indexes for the front and odd indexes for the back. This is likely faster than having two separate arrays. If the properties are extensive and mostly the same, it might be more efficient to just expand MaterialProperties with the one or two differing properties.

    Example 7.11 Front and Back Material Properties

    struct ...

  • Page - 430

    Lighting Coordinate Systems

    To make any sense, all the normal, direction, and position coordinates used in a lighting calculation must come from the same coordinate system. If light-position coordinates come after model-view transforms but before perspective projection, so should the surface coordinates that will be compared against them. In this typical case, both are in eye space. That ...

  • Page - 431

    red ball. These nearby objects then reflect a redder ambient light than objects further from the ball. We look at some techniques for addressing this in ‘‘Advanced Lighting Models’’ on Page 384. Other techniques for adding in this realism, loosely referred to as global illumination, are outside the scope of this book. A glowing object or very bright object might also have read more..

  • Page - 432

    traditional computer graphics illumination model attempts to account for this phenomenon through an ambient light term. However, this ambient light term is usually applied equally across an object or an entire scene. The result is a flat and unrealistic look for areas of the scene that are not affected by direct illumination. Another problem with the traditional illumination model is ...


Figure 7.2 A sphere illuminated using the hemisphere lighting model

In Figure 7.2, a point on the top of the sphere (the black ‘‘x’’) receives illumination only from the upper hemisphere (i.e., the sky color). A point on the bottom of the sphere (the white ‘‘x’’) receives illumination only from the lower hemisphere (i.e., the ground color). A point right on


curves is similar. One is the mirror of the other, but the area under the curves is the same. This general equivalency is good enough for the effect we’re after, and the shader is simpler and will execute faster as well.

Figure 7.3 Analytic hemisphere lighting function (Compares the actual analytic function for hemisphere


    Figure 7.4 Lighting model comparison (A comparison of some of the lighting models discussed in this chapter. The model uses a base color of white, RGB =(1.0, 1.0, 1.0), to emphasize areas of light and shadow. (A) uses a directional light above and to the right of the model. (B) uses a directional light directly above the model. These two images illustrate the difficulties read more..


out vec3 Color;

void main()
{
    vec3 position = vec3(MVMatrix * VertexPosition);
    vec3 tnorm = normalize(NormalMatrix * VertexNormal);
    vec3 lightVec = normalize(LightPosition - position);
    float costheta = dot(tnorm, lightVec);
    float a = costheta * 0.5 + 0.5;
    Color = mix(GroundColor, SkyColor, a);
    gl_Position = MVPMatrix * VertexPosition;
}

One of the issues with this model is that it doesn’t


    lighting in a scene. These images are high dynamic range (HDR) images that represent each color component with a 32-bit floating-point value. Such images can represent a much greater range of intensity values than can 8-bit-per-component images. For another, he makes available a tool called HDRShop that manipulates and transforms these environment maps. Through links to his various read more..


Figure 7.5 Light probe image (A light-probe image of Old Town Square, Fort Collins, Colorado. (3Dlabs, Inc.))

Figure 7.6 Lat-long map (An equirectangular (or lat-long) texture map of Old Town Square, Fort Collins, Colorado. (3Dlabs, Inc.))

Advanced Lighting Models 391


    Figure 7.7 Cube map (A cube-map version of the Old Town Square light probe image. (3Dlabs, Inc.)) We can simulate other types of objects if we modify the environment maps before they are used. A point on the surface that reflects light in a diffuse fashion reflects light from all the light sources that are in the hemisphere in the direction of the surface normal at that read more..


    Again, HDRShop has exactly what we need. We can use HDRShop to create a lat-long image from our original light probe image. We can then use a command built into HDRShop that performs the necessary convolution. This operation can be time consuming, because at each texel in the image, the contributions from half of the other texels in the image must be considered. Luckily, we read more..


    Figure 7.8 Effects of diffuse and specular environment maps (A variety of effects using the Old Town Square diffuse and specular en- vironment maps shown in Figure 7.6. Left: BaseColor set to (1.0, 1.0, 1.0), SpecularPercent is 0, and DiffusePercent is 1.0. Middle: BaseColor is set to (0, 0, 0), SpecularPercent is set to 1.0, and DiffusePercent is set to 0. Right: BaseColor is read more..


uniform vec3 BaseColor;
uniform float SpecularPercent;
uniform float DiffusePercent;

uniform samplerCube SpecularEnvMap;
uniform samplerCube DiffuseEnvMap;

in vec3 ReflectDir;
in vec3 Normal;

out vec4 FragColor;

void main()
{
    // Look up environment map values in cube maps
    vec3 diffuseColor =
        vec3(texture(DiffuseEnvMap, normalize(Normal)));
    vec3 specularColor =
        vec3(texture(SpecularEnvMap, normalize(ReflectDir)));

    // Add


    illumination for games and has applications in computer vision and other areas as well. Spherical harmonics provides a frequency space representation of an image over a sphere. It is analogous to the Fourier transform on the line or circle. This representation of the image is continuous and rotationally invariant. Using this representation for a light probe image, Ramamoorthi and read more..


Table 7.1 Spherical Harmonic Coefficients for Light Probe Images (RGB values per image)

Coefficient   Old Town Square    Grace Cathedral    Eucalyptus Grove   St. Peter’s Basilica   Uffizi Gallery
L00           .87 .88 .86        .79 .44 .54        .38 .43 .45        .36 .26 .23            .32 .31 .35
L1m1          .18 .25 .31        .39 .35 .60        .29 .36 .41        .18 .14 .13            .37 .37 .43
L10           .03 .04 .04        −.34 −.18 −.27     .04 .03 .01        −.02 −.01 .00          .00 .00


    image in the pre-processing phase. The x, y, and z values are the coordinates of the normalized surface normal at the point that is to be shaded. Unlike low dynamic range images (e.g., 8 bits per color component) that have an implicit minimum value of 0 and an implicit maximum value of 255, HDR images represented with a floating-point value for each color component don’t read more..


in vec4 VertexPosition;
in vec3 VertexNormal;

out vec3 DiffuseColor;

void main()
{
    vec3 tnorm = normalize(NormalMatrix * VertexNormal);

    DiffuseColor = C1 * L22 * (tnorm.x * tnorm.x - tnorm.y * tnorm.y)
                 + C3 * L20 * tnorm.z * tnorm.z
                 + C4 * L00
                 - C5 * L20
                 + 2.0 * C1 * L2m2 * tnorm.x * tnorm.y
                 + 2.0 * C1 * L21 * tnorm.x * tnorm.z
                 + 2.0 * C1 * L2m1 * tnorm.y * tnorm.z


    Figure 7.9 Spherical harmonics lighting (Lighting using the coefficients from Table 7.1. From the left: Old Town Square, Grace Cathedral, Galileo’s Tomb, Campus Sunset, and St. Peter’s Basilica. (3Dlabs, Inc.)) The trade-offs in using image-based lighting versus procedurally defined lights are similar to the trade-offs between using stored textures versus procedural textures. Image-based read more..


The condensed two-pass description is as follows:

• Render the scene from the point of view of the light source. It doesn’t matter what the scene looks like; you only want the depth values. Create a shadow map by attaching a depth texture to a framebuffer object and rendering depth directly into it.
• Render the scene from the point of view of the viewer. Project


// Attach the depth texture to it
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                     depth_texture, 0);

// Disable color rendering as there are no color attachments
glDrawBuffer(GL_NONE);

In Example 7.15, the depth texture is created and allocated using the GL_DEPTH_COMPONENT32 internal format. This creates a texture that is capable of being used as the depth buffer for rendering


glUniformMatrix4fv(render_light_uniforms.MVPMatrix, 1, GL_FALSE,
                   light_projection_matrix * light_view_matrix *
                   scene_model_matrix);

In Example 7.16, we set the light’s position using a function of time (t) and point it towards the origin. This will cause the shadows to move around. FRUSTUM_DEPTH is set to the maximum depth over which the light will have influence, and represents the far plane of the


    At this point we are ready to render the scene into the depth texture we created earlier. We need to bind the framebuffer object with the depth texture attachment and set the viewport to the depth texture size. Then we clear the depth buffer (which is actually our depth texture now) and draw the scene. Example 7.18 contains the code to do this. Example 7.18 Rendering the read more..


    Figure 7.10 Depth rendering (Depths are rendered from the light’s position. Within rendered objects, closer points have smaller depths and show up darker.) Using a Shadow Map Now that we have the depth for the scene rendered from the light’s point of view we can render the scene with our regular shaders and use the resulting depth texture to produce shadows as part of our read more..


The code to set all these matrices up is given in Example 7.19.

Example 7.19 Matrix Calculations for Shadow Map Rendering

mat4 scene_model_matrix = rotate(t * 720.0f, Y);
mat4 scene_view_matrix = translate(0.0f, 0.0f, -300.0f);
mat4 scene_projection_matrix = frustum(-1.0f, 1.0f, -aspect, aspect,
                                       1.0f, FRUSTUM_DEPTH);
mat4 scale_bias_matrix = mat4(vec4(0.5f, 0.0f, 0.0f, 0.0f),
                              vec4(0.0f, 0.5f, 0.0f,


vertex.normal = mat3(view_matrix * model_matrix) * normal;
gl_Position = clip_pos;
}

Finally, the fragment shader performs lighting calculations for the scene. If the point is considered to be illuminated by the light, the light’s contribution is included in the final lighting calculation; otherwise, only ambient light is applied. The shader given in Example 7.21 performs these


Don’t worry about the complexity of the lighting calculations in this shader. The important part of the algorithm is the use of the sampler2DShadow sampler type and the textureProj function. The sampler2DShadow sampler is a special type of 2D texture that, when sampled, returns either 1.0 if the sampled texel satisfies the comparison test for the texture or 0.0 if it


    Figure 7.11 Final rendering of shadow map That wraps up shadow mapping. There are many other techniques, including enhancements to shadow mapping, and we encourage you to explore on your own. Shadow Mapping 409 read more..




Chapter 8
Procedural Texturing

Chapter Objectives

After reading this chapter, you’ll be able to do the following:

• Texture a surface without using texture look-ups; instead, texture a surface using a shader that computes the texture procedurally.
• Antialias a procedurally generated texture.
• Light a surface using a bump map.
• Use noise to modulate shapes and textures to


    Generally, this chapter will cover using computation in shaders to supply quality versions of what might normally come from large texture maps, complex geometry, or expensive multisampling. However, accessing textures won’t be forbidden. We’ll still occasionally use them as side tables to drive the calculations performed in the shaders. This chapter contains the following major sections: read more..


    shader. When deciding whether to write a procedural texture shader or one that uses stored textures, keep in mind some of the main advantages of procedural texture shaders. • Textures generated procedurally have very low memory requirements compared with stored textures. The only primary representation of the texture is in the algorithm defined by the code in the procedural texture read more..


    • Performing the algorithm embodied by a procedural texture shader at each location on an object can take longer than accessing a stored texture. • Procedural texture shaders can have serious aliasing artifacts that can be difficult to overcome. Today’s graphics hardware has built-in capabilities for antialiasing stored textures (e.g., filtering methods and mipmaps). • Because of read more..


    The object in Figure 8.1 is a partial torus rendered with a stripe shader. The stripe shader and the application in which it is shown were both developed in 2002 by LightWork Design, a company that develops soft- ware to provide photorealistic views of objects created with commercial CAD/CAM packages. The application developed by LightWork Design contains a graphical user interface read more..


background color, scale, and stripe width must be passed to the fragment shader so that our procedural stripe computation can be performed at each fragment.

Stripes Vertex Shader

The vertex shader for our stripe effect is shown in Example 8.1.

Example 8.1 Vertex Shader for Drawing Stripes

#version 330 core

uniform vec3 LightPosition;
uniform vec3 LightColor;
uniform vec3 EyePosition;
uniform


As we mentioned, the values for doing the lighting computation (LightPosition, LightColor, EyePosition, Specular, Ambient, and Kd) are all passed in by the application as uniform variables. The purpose of this shader is to compute DiffuseColor and SpecularColor, two out variables that will be interpolated across each primitive and made available to the fragment shader at each


    value for Fuzz is 0.1. It can be adjusted as the object changes size to prevent excessive blurriness at high magnification levels or aliasing at low magnification levels. It shouldn’t really be set to a value higher than 0.5 (maximum blurriness of stripe edges). The first step in this shader is to multiply the incoming t texture coordinate by the stripe scale factor and read more..


    Now all that remains to be done is to apply the diffuse and specular lighting effects computed by the vertex shader and supply an alpha value of 1.0 to produce our final fragment color. By modifying the five basic parameters of our fragment shader, we can create a fairly interesting number of variations of our stripe pattern, using the same shader. Figure 8.2 Stripes read more..


Figure 8.3 Brick patterns (A flat polygon, a sphere, and a torus rendered with the brick shaders.)

Bricks Vertex Shader

Let’s dive right in with the vertex shader, shown in Example 8.3. It has little to do with drawing brick, but does compute how the brick will be lit. If you wish, read through it, and if you’ve internalized the beginning of Chapter 7 as well as the


float spec = 0.0;

if (diffuse > 0.0)
{
    spec = max(dot(reflectVec, viewVec), 0.0);
    spec = pow(spec, 16.0);
}

LightIntensity = DiffuseContribution * diffuse +
                 SpecularContribution * spec;
MCposition = MCvertex.xy;
gl_Position = MVPMatrix * MCvertex;
}

Bricks Fragment Shader

The fragment shader contains the core algorithm to make the brick pattern. It is provided in Example 8.4, and we will


    The colors to make the brick and mortar are selected by the application and sent in as BrickColor and MortarColor . The size of the brick pattern uses two independent components for width and height and is also sent by the application, in BrickSize . Finally, the application selects what percentage of the pattern will be brick, in BrickPct , with the remaining being mortar. read more..


    animations, Luxo Jr. This shader is quite specialized. It shades any surface as long as it’s a sphere. The reason is that the fragment shader exploits the following property of the sphere: The surface normal for any point on the surface points in the same direction as the vector from the center of the sphere to that point on the surface. This property is used to read more..


InOrOutInit        -3.0
FWidth              0.005
StarColor           0.6, 0.0, 0.0, 1.0
StripeColor         0.0, 0.3, 0.6, 1.0
BaseColor           0.6, 0.5, 0.0, 1.0
BallCenter          0.0, 0.0, 0.0, 1.0
LightDir            0.57735, 0.57735, 0.57735, 0.0
HVector             0.32506, 0.32506, 0.88808, 0.0
SpecularColor       1.0, 1.0, 1.0, 1.0
SpecularExponent  200.0
Ka                  0.3
Kd                  0.7
Ks                  0.4

Vertex Shader

The fragment shader is the workhorse for this shader duo, so the vertex shader


Fragment Shader

The toy ball fragment shader is a little bit longer than some of the previous examples, so we build it up a few lines of code at a time and illustrate some intermediate results. The definitions for the local variables that are used in the toy ball fragment shader are as follows:

vec3 normal;   // Analytically computed normal
vec4 pShade;   // Point in shader


values. But we can take a little better advantage of the parallel nature of the underlying graphics hardware if we do things a bit differently. You’ll see how in a minute. First, we compute the distance between pShade and the first four half-spaces by using the built-in dot-product function:

distance[0] = dot(p, HalfSpace[0]);
distance[1] = dot(p, HalfSpace[1]);
distance[2] = dot(p,


Figure 8.4 Visualizing the results of the half-space distance calculations (courtesy of AMD)

distance.x = dot(pShade, HalfSpace[4]);
distance.y = StripeWidth - abs(pShade.z);
distance.xy = smoothstep(-FWidth, FWidth, distance.xy);
inorout += distance.x;

(In this case, we’re performing a smooth-step operation only on the x and y components.) The value for inorout is now in the range [−3, 2].


    Figure 8.5 Intermediate results from the toy ball shader (In (A), the procedurally defined star pattern is displayed. In (B), the stripe is added. In (C), diffuse lighting is applied. In (D), the analytically defined nor- mal is used to apply a specular highlight. (Courtesy of ATI Research, Inc.)) The diffuse part of the lighting equation is computed with these three lines of read more..


    Figure 8.6 Intermediate results from ‘‘in’’ or ‘‘out’’ computation (Surface points that are ‘‘in’’ with respect to all five half-planes are shown in white, and points that are ‘‘in’’ with respect to four half-planes are shown in gray (A). The value of inorout is clamped to the range [0, 1] to produce the result shown in (B). (Courtesy of AMD.)) Notice read more..


uniform vec4 SpecularColor;
uniform float SpecularExponent;
uniform float Ka;
uniform float Kd;
uniform float Ks;

in vec4 ECPosition;        // surface position in eye coordinates
in vec3 OCPosition;        // surface position in object coordinates
flat in vec4 ECBallCenter; // ball center in eye coordinates

out vec4 FragColor;

void main()
{
    vec3 normal;   // Analytically computed normal
    vec4 pShade;   // Point in shader


surfColor *= intensity;

// Per-fragment specular lighting
intensity = clamp(dot(HVector.xyz, normal), 0.0, 1.0);
intensity = Ks * pow(intensity, SpecularExponent);
surfColor.rgb += SpecularColor.rgb * intensity;

FragColor = surfColor;
}

Lattice

Here’s a little bit of a gimmick. In this example, we show how not to draw the object procedurally, looking at how the discard


    of the lattice. The fractional part of this scaled texture-coordinate value is computed to provide a number in the range [0, 1]. These values are compared with the threshold values that have been provided. If both values exceed the threshold, the fragment is discarded. Otherwise, we do a simple lighting calculation and render the fragment. In Figure 8.7, the threshold values were read more..


or exclude geometry or to add bumps or grooves. Additional procedural texturing effects are illustrated in the rest of this chapter. In particular, ‘‘Noise’’ shows how an irregular function (noise) can achieve a wide range of procedural texturing effects. Procedural textures are mathematically precise, are easy to parameterize, and don’t require large amounts of texture memory,


    fragment shader instead of by the vertex shader where it is often handled. Again, this points out one of the advantages of the programmability that is available through the OpenGL Shading Language. We are free to perform whatever operations are necessary, in either the vertex shader or the fragment shader. We don’t need to be bound to the fixed functionality ideas of where read more..


    What we need is a transformation matrix that transforms each incoming vertex into surface-local coordinates (i.e., incoming vertex (x, y, z ) is transformed to (0, 0, 0)). We need to construct this transformation matrix at each vertex. Then, at each vertex, we use the surface-local transformation matrix to transform both the light direction and the viewing direction. In this way, read more..


    across the primitive, and the interpolated vectors are used in the frag- ment shader to compute the reflection with the procedurally perturbed normal. Application Setup For our procedural bump map shader to work properly, the application must send a vertex position, a surface normal, and a tangent vector in the plane of the surface being rendered. The application passes the tangent read more..


    Case 1: Consistent tangents Case 2: Inconsistent tangents Case 1: Surface-local space for vertex 1 Case 2: Surface-local space for vertex 1 Case 1: Surface-local space for vertex 2 Case 2: Surface-local space for vertex 2 Case 1: Small interpolation between light vectors Case 2: Large interpolation between light vectors 2x y 2y x L1 L2 2x y 2y x L1 L2 L1 B1 B2 T2 T1 L2 L1 B1 B2 T2 read more..


Remember that OpenGL does not need to be sent a binormal vertex attribute, only a normal vector and a tangent vector. So we don’t compute the binormal in the application; rather, we have the vertex shader compute it automatically. (Simple computation is typically faster than memory access or transfer.)

Vertex Shader

The vertex shader for our procedural bump-map shader is shown in


v.y = dot(EyeDir, b);
v.z = dot(EyeDir, n);
EyeDir = normalize(v);

gl_Position = MVPMatrix * MCVertex;
}

Fragment Shader

The fragment shader for doing procedural bump mapping is shown in Example 8.10. A couple of the characteristics of the bump pattern are parameterized by being declared as uniform variables, namely, BumpDensity (how many bumps per unit area) and BumpSize (how wide each


    Next, we compare d to BumpSize to see if we’re in a bump or not. If we’re not, we set our perturbation vector to 0 and our normalization factor to 1.0. The lighting computation happens in the next few lines. We compute our normalized perturbation vector by multiplying through with the normalization factor f . The diffuse and specular reflection values are computed in the read more..


    The results from the procedural bump map shader are shown applied to two objects, a simple box and a torus, in Figure 8.9. The texture coordinates are used as the basis for positioning the bumps, and because the texture coordinates go from 0.0 to 1.0 four times around the diameter of the torus, the bumps look much closer together on that object. Figure 8.9 Simple box and read more..


    texture memory as 8-bit per component textures, and performance might be reduced. Figure 8.10 Normal mapping (A normal map (left) and the rendered result on a simple box and a sphere. (3Dlabs, Inc.)) The vertex program is identical to the one described in ‘‘Bump Mapping’’. The fragment shader is almost the same, except that instead of computing the perturbed normal read more..


    scene vary at a high spatial frequency with respect to the samples, the samples can’t accurately reproduce the scene; they are hit and miss on interesting features. A periodic pattern needs to be sampled at at least twice the frequency of the pattern itself; otherwise the image will break down when it has a pattern changing faster than every two samples, causing moiré read more..


point sampling and is illustrated in Figure 8.11 (B). The result is ugly aliasing artifacts for edges that don’t line up naturally with the sampling grid (see Figure 8.11 (C)). It actually depends on your display device technology whether pixels are more like overlapping circles (CRT) or collections of smaller red, green, and blue subpixels (LCD), but the artifacts are


    For instance, if you know that a particular object will always be a certain size in the final rendered image, you can design a shader that looks good while rendering that object at that size. This is the assumption behind some of the shaders presented previously in this book. The smoothstep(), mix(), and clamp() functions are handy functions to use to avoid sharp transitions read more..


    doesn’t eliminate aliasing, your result is still apt to exhibit signs of aliasing, albeit at a higher frequency (less visibly) than before. Supersampling is illustrated in Figure 8.12. Each of the pixels is rendered by sampling at four locations rather than at one. The average of the four samples is used as the value for the pixel. This averaging provides a better result, read more..


    Antialiasing High Frequencies Aliasing does not occur until we attempt to represent a continuous image with discrete samples. This conversion occurs during rasterization. There are only two choices: either don’t have high-frequency detail in the image to render, or somehow deal with undersampling of high-frequency detail. Since the former is almost never desirable due to viewing with a read more..


    Figure 8.13 Using the s texture coordinate to create stripes on a sphere (In (A), the s texture coordinate is used directly as the intensity (gray) value. In (B), a modulus function creates a sawtooth function. In (C), the abso- lute value function turns the sawtooth function into a triangle function. (Courtesy of Bert Freudenberg, University of Magdeburg, 2002.)) This isn’t read more..


    Figure 8.14 Antialiasing the stripe pattern (We can see that the square wave produced by the step function produces aliasing artifacts (A). The smoothstep() function with a fixed-width filter produces too much blurring near the equator but not enough at the pole (B). An adaptive approach provides reasonable antialiasing in both regions (C). (Courtesy of Bert Freudenberg, University of read more..


    smoothing filter adaptively so that transition can be appropriate at all scales in screen space. This requires a measurement of how rapidly the function we’re interested in is changing at a particular position in screen space. Fortunately, GLSL provides a built-in function that can give us the rate of change (derivative) of any parameter in screen space. The function dFdx() gives read more..


    The two methods of computing the gradient are compared in Figure 8.15. As you can see, there is little visible difference. Because the value of the gradient was quite small for the function being evaluated on this object, the values were scaled so that they would be visible. Figure 8.15 Visualizing the gradient (In (A), the magnitude of the gradient vector is used as the read more..


in float V;              // generic varying
in float LightIntensity;

out vec4 FragColor;

void main()
{
    float sawtooth = fract(V * Frequency);
    float triangle = abs(2.0 * sawtooth - 1.0);
    float dp = length(vec2(dFdx(V), dFdy(V)));
    float edge = dp * Frequency * 2.0;
    float square = smoothstep(0.5 - edge, 0.5 + edge, triangle);
    vec3 color = mix(Color0, Color1, square);
    FragColor = vec4(color, 1.0);
    FragColor.rgb


    Analytic Integration The weighted average of a function over a specified interval is called a convolution. The values that do the weighting are called the convolution kernel or the convolution filter. In some cases, we can reduce or eliminate aliasing by determining the convolution of a function ahead of time and then sampling the convolved function rather than the original read more..


    integral stays constant in this region. At 1, the function jumps back to 1.0, so the integral increases until the function reaches BrickPct.x + 1. At this point, the integral changes to a slope of 0 again, and this pattern of ramps and plateaus continues. Figure 8.17 Periodic step function (The periodic step function, or pulse train, that defines the horizontal com- ponent of read more..


    We perform antialiasing by determining the value of the integral over the area of the filter, and we do that by evaluating the integral at the edges of the filter and subtracting the two values. The integral for this function consists of two parts: the sum of the area for all the pulses that have been fully completed before the edge we are considering and the area of read more..


    The result is divided by the area of the filter (a box filter is assumed in this case) to obtain the average value for the function in the selected interval. Antialiased Brick Fragment Shader Now we can put all this to work to build better bricks. We replace the simple point sampling technique with analytic integration. The resulting shader is shown in Example 8.12. The read more..


position = MCPosition / BrickSize;

// Adjust every other row by an offset of half a brick
if (fract(position.y * 0.5) > 0.5)
    position.x += 0.5;

// Calculate filter size
fw = fwidth(position);

// Perform filtering by integrating the 2D pulse made by the
// brick pattern over the filter width and height
useBrick = (Integral(position + fw, BrickPct, MortarPct) -
            Integral(position,


    Figure 8.20 Checkerboard pattern (Rendered with the antialiased checkerboard shader. On the left, the filter width is set to 0, so aliasing occurs. On the right, the filter width is computed using the fwidth() function.) a smooth interpolation is performed between the computed color and the average color. Example 8.13 Source Code for an Antialiased Checkerboard Fragment Shader #version 330 read more..


// Determine the width of the projection of one pixel into
// s-t space
vec2 fw = fwidth(TexCoord);

// Determine the amount of fuzziness
vec2 fuzz = fw * Frequency * 2.0;

float fuzzMax = max(fuzz.s, fuzz.t);

// Determine the position in the checkerboard pattern
vec2 checkPos = fract(TexCoord * Frequency);

if (fuzzMax < 0.5)
{
    // If the filter width is small enough,
    // compute


    Noise In computer graphics, it’s easy to make things look good. By definition, geometry is drawn and rendered precisely. However, when realism is a goal, perfection isn’t always such a good thing. Real-world objects have dents and dings and scuffs. They show wear and tear. Computer graphics artists have to work hard to make a perfectly defined bowling pin look like it has read more..


• Rendering man-made materials (stucco, asphalt, cement, etc.)
• Adding imperfections to perfect models (rust, dirt, smudges, dents, etc.)
• Adding imperfections to perfect patterns (wiggles, bumps, color variations, etc.)
• Adding imperfections to time periods (time between blinks, amount of change between successive frames, etc.)
• Adding imperfections to motion (wobbles, jitters,


and zooming in to smaller and smaller scales still shows only smooth variation.

• It is a function that is repeatable across time (i.e., it generates the same value each time it is presented with the same input).
• It has a well-defined range of output values (usually the range is [−1, 1] or [0, 1]).
• It is a function whose small-scale form is roughly


    interpolating between these points, as shown in Figure 8.22. The function is repeatable in that, for a given input value, it always returns the same output value. Figure 8.22 A continuous 1D noise function A key choice to be made in this type of noise function is the method used to interpolate between successive points. Linear interpolation is not good enough because it is read more..


Figure 8.23 Varying the frequency and the amplitude of the noise function (frequency = 4, amplitude = 1.0; frequency = 8, amplitude = 0.5; frequency = 16, amplitude = 0.25; frequency = 32, amplitude = 0.125; frequency = 64, amplitude = 0.0625)

of noise is greater than twice the frequency of sampling (e.g., pixel spacing), you really do start getting random sample values that

  • Page - 512

Figure 8.24 Summing noise functions (The result of summing noise functions of different amplitude and frequency: the sum of 2, 3, 4, and 5 octaves.)

... RenderMan, and it is also intended to be used for implementations of the noise function built into GLSL. Lots of other noise functions have been defined, and there are many ways to vary the ...

... multiplier, such as 2.21, that is not an integer value. This frequency multiplier is called the lacunarity of the function. The word comes from the Latin word lacuna, which means gap. Using a value larger than 2 allows us to build up more ''variety'' more quickly (e.g., by summing fewer octaves to achieve the same apparent visual complexity). Similarly, it is not ...

Figure 8.25 Basic 2D noise, at frequencies 4, 8, 16, and 32 (contrast enhanced)

Figure 8.26 Summed noise, at 1, 2, 3, and 4 octaves (contrast enhanced)

The first image in Figure 8.26 is exactly the same as the first image in Figure 8.25. The second image in Figure 8.26 is the sum of the first image in Figure 8.26 plus half of the second image in Figure 8.25 shifted ...

... and animate it in a realistic way. With a 4D noise function, you can create a 3D object like a planet, and use the fourth dimension to watch it evolve in ''fits and starts''.

Using Noise in the OpenGL Shading Language

You can include noise in a shader in any of the following three ways:

1. Use the GLSL built-in noise functions.
2. Write your own noise function in GLSL.
3. Use ...

Example 8.14 C Function to Generate a 3D Noise Texture

int noise3DTexSize = 128;
GLuint noise3DTexName = 0;
GLubyte *noise3DTexPtr;

void make3DNoiseTexture(void)
{
    int f, i, j, k, inc;
    int startFrequency = 4;
    int numOctaves = 4;
    double ni[3];
    double inci, incj, inck;
    int frequency = startFrequency;
    GLubyte *ptr;
    double amp = 0.5;

    if ((noise3DTexPtr = (GLubyte *) malloc(noise3DTexSize * noise3DTexSize ...

This function computes noise values for four octaves of noise and stores them in a 3D RGBA texture of size 128 × 128 × 128. This code also assumes that each component of the texture is stored as an 8-bit integer value. The first octave has a frequency of 4 and an amplitude of 0.5. In the innermost part of the loop, we call the noise3 function to generate a ...

Example 8.15 A Function for Activating the 3D Noise Texture

void init3DNoiseTexture()
{
    glGenTextures(1, &noise3DTexName);
    glActiveTexture(GL_TEXTURE6);
    glBindTexture(GL_TEXTURE_3D, noise3DTexName);
    glTexParameterf(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, ...

The advantages of using a texture map to implement the noise function are as follows:

• Because the noise function is computed by the application, the application has total control of this function and can ensure matching behavior on every hardware platform.
• You can store four noise values (i.e., one each for the R, G, B, and A values of the texture) at each ...

Vertex Shader

The code shown in Example 8.16 is the vertex shader that we use for the four noise fragment shaders that follow. It is fairly simple because it really only needs to accomplish three things.

• As in all vertex shaders, our vertex shader transforms the incoming vertex value and stores it in the built-in special variable gl_Position.
• Using the incoming ...

Fragment Shader

After we've computed a noise texture and used OpenGL calls to download it to the graphics card, we can use a fairly simple fragment shader together with the vertex shader described in the previous section to make an interesting ''cloudy sky'' effect (see Example 8.17). This shader results in something that looks like the sky on a mostly cloudy day. You ...

void main()
{
    vec4 noisevec = texture(Noise, MCposition);

    float intensity = (noisevec[0] + noisevec[1] +
                       noisevec[2] + noisevec[3] + 0.03125) * 1.5;

    vec3 color = mix(SkyColor, CloudColor, intensity) * LightIntensity;
    FragColor = vec4(color, 1.0);
}

Figure 8.27 Teapots rendered with noise shaders (Clockwise from upper left: a cloud shader that sums four octaves of noise and uses a blue-to-white ...
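The shader's arithmetic can be checked on the CPU with a small C mirror; mixd reproduces GLSL's mix(a, b, t) = a*(1 - t) + b*t. The function names here are ours, not from the book.

```c
/* Sum the four octave values packed into one texel, add the bias,
   and scale: exactly the intensity term the cloud shader computes. */
double cloud_intensity(const double noisevec[4])
{
    return (noisevec[0] + noisevec[1] +
            noisevec[2] + noisevec[3] + 0.03125) * 1.5;
}

/* GLSL-style mix(); note it extrapolates when t falls outside [0, 1],
   just as the shader's intensity can exceed 1. */
double mixd(double a, double b, double t)
{
    return a * (1.0 - t) + b * t;
}
```

For example, texel components of 0.5, 0.25, 0.125, and 0.0625 (the octave amplitudes themselves) produce an intensity of (0.96875) * 1.5 = 1.453125, which pushes the blend past CloudColor toward pure white.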

... so this type of noise can be used to simulate various things like flames or lava. The two-dimensional appearance of this type of noise is shown in Figure 8.28.

Figure 8.28 Absolute value noise or ''turbulence''

Sun Surface Shader

We can achieve an effect that looks like a pit of hot molten lava or the surface of the sun by using the same vertex shader as the ...

Example 8.18 Sun Surface Fragment Shader

#version 330 core

in float LightIntensity;
in vec3 MCposition;

uniform sampler3D Noise;
uniform vec3 Color1;        // (0.8, 0.7, 0.0)
uniform vec3 Color2;        // (0.6, 0.1, 0.0)
uniform float NoiseScale;   // 1.2

out vec4 FragColor;

void main()
{
    vec4 noisevec = texture(Noise, MCposition * NoiseScale);

    float intensity = abs(noisevec[0] - 0.25) +
                      abs(noisevec[1] - 0.125) + ...

void main()
{
    vec4 noisevec = texture(Noise, MCposition);

    float intensity = abs(noisevec[0] - 0.25) +
                      abs(noisevec[1] - 0.125) +
                      abs(noisevec[2] - 0.0625) +
                      abs(noisevec[3] - 0.03125);

    float sineval = sin(MCposition.y * 6.0 + intensity * 12.0) * 0.5 + 0.5;

    vec3 color = mix(VeinColor, MarbleColor, sineval) * LightIntensity;
    FragColor = vec4(color, 1.0);
}

Granite

With noise, it's also easy just ...

... model for simulating the appearance of wood. We can adapt their approach to create wood shaders in GLSL. Following are the basic ideas behind the wood fragment shader shown in Example 8.21:

• Wood is composed of light and dark areas alternating in concentric cylinders surrounding a central axis.
• Noise is added to warp the cylinders to create a more natural-looking pattern. ...

Example 8.21 Fragment Shader for Wood

#version 330 core

uniform sampler3D Noise;
uniform vec3 LightWood;
uniform vec3 DarkWood;
uniform float RingFreq;
uniform float LightGrains;
uniform float DarkGrains;
uniform float GrainThreshold;
uniform vec3 NoiseScale;
uniform float Noisiness;
uniform float GrainScale;

in float LightIntensity;
in vec3 MCposition;

out vec4 FragColor;

void main()
{
    vec3 noisevec = vec3(texture(Noise, ...

Our tree is assumed to be a series of concentric rings of alternating light wood and dark wood. To give some interest to our grain pattern, we add the noise vector to our object position. This has the effect of adding our low-frequency (first octave) noise to the x coordinate of the position and the third-octave noise to the z coordinate (the y coordinate won't be ...

Figure 8.29 A bust of Beethoven rendered with the wood shader (3Dlabs, Inc.)

... color by adding to it a value we computed by multiplying the LightWood color, the LightGrains color, and our modified noise value. Conversely, if the value of r is greater than GrainThreshold, we modify our current color by subtracting from it a value we computed by multiplying the DarkWood color, the ...

Noise Summary

This section introduced noise, an incredibly useful function for adding irregularity to procedural shaders. After a brief description of the mathematical definition of this function, we used it as the basis for shaders that simulated clouds, turbulent flow, marble, granite, and wood. There is a noise function available as a built-in function in some implementations of ...

... shader antialiasing in terms of the RenderMan shading language, and much of the discussion is germane to the OpenGL Shading Language as well. Darwyn Peachey has a similar discussion in Texturing & Modeling: A Procedural Approach, Third Edition, by David Ebert et al. (2002). Bert Freudenberg developed a GLSL shader to do adaptive antialiasing and presented this work at the ...

Chapter 9 Tessellation Shaders

Chapter Objectives

After reading this chapter, you'll be able to do the following:

• Understand the differences between tessellation shaders and vertex shaders.
• Identify the phases of processing that occur when using tessellation shaders.
• Recognize the various tessellation domains and know which one best matches the type of geometry you need to ...

This chapter introduces OpenGL's tessellation shader stages. It has the following major sections:

• ''Tessellation Shaders'' provides an overview of how tessellation shaders work in OpenGL.
• ''Tessellation Patches'' introduces tessellation's rendering primitive, the patch.
• ''Tessellation Control Shaders'' explains the operation and purpose of the first tessellation ...

... using tessellation coordinates and sends them to the rasterizer, or on for more processing by a geometry shader (which we describe in Chapter 10, ''Geometry Shaders''). As we describe OpenGL's process of tessellation, we'll start at the beginning with describing patches in ''Tessellation Patches'' on Page 487, then move on to describe the tessellation control shader's ...

void glPatchParameteri(GLenum pname, GLint value);

Specifies the number of vertices in a patch using value. pname must be set to GL_PATCH_VERTICES. A GL_INVALID_VALUE error is generated if value is less than or equal to zero, or greater than GL_MAX_PATCH_VERTICES. The default number of vertices for a patch is three. If the number of vertices for a patch is less than value, the patch is ...

• Specify the tessellation level factors that control the operation of the primitive generator. These are special tessellation control shader variables called gl_TessLevelInner and gl_TessLevelOuter, and they are implicitly declared in your tessellation control shader.

We'll discuss each of these actions in turn.

Generating Output-Patch Vertices

Tessellation control shaders use the vertices specified ...

Example 9.2 Passing Through Tessellation Control Shader Patch Vertices

#version 420 core

layout (vertices = 4) out;

void main()
{
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // and then set tessellation levels
}

Tessellation Control Shader Variables

The gl_in array is actually an array of structures, with each element defined as:

in gl_PerVertex {
    vec4 gl_Position;
    float ...

If you have additional per-vertex attribute values, either for input or output, these need to be declared as either in or out arrays in your tessellation control shader. An input array needs to be sized to the input-patch size, or it can be declared unsized, and OpenGL will appropriately allocate space for all of its values. Similarly, per-vertex output attributes, which ...

... inner- and outer-tessellation levels. For instance, if we were to set the tessellation level factors to the following values, OpenGL would tessellate the quad domain as illustrated in Figure 9.1.

Figure 9.1 Quad tessellation (A tessellation of a quad ...

Notice that the outer-tessellation level values correspond to the number of segments for each edge around the perimeter, while the inner-tessellation levels specify how many ''regions'' are in the horizontal and vertical directions in the interior of the domain. Also shown in Figure 9.1 is a possible triangularization of the domain,1 shown using the dashed lines. Likewise, the ...

... have the property that a + b + c = 1. Think of a, b, or c as weights for each individual triangle vertex.

Figure 9.2 Isoline tessellation (A tessellation of an isolines domain using the tessellation levels from Example 9.4.)

As with any of the other domains, the generated tessellation coordinates are a function of the ...
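The barycentric weighting can be exercised in a few lines of C; the struct and function names here are ours, not the book's:

```c
typedef struct { double x, y, z; } vec3d;

/* A triangle-domain tessellation coordinate (a, b, c), with
   a + b + c = 1, weights the three corner positions to produce
   the generated vertex. */
vec3d bary_blend(vec3d v0, vec3d v1, vec3d v2,
                 double a, double b, double c)
{
    vec3d p = { a * v0.x + b * v1.x + c * v2.x,
                a * v0.y + b * v1.y + c * v2.y,
                a * v0.z + b * v1.z + c * v2.z };
    return p;
}
```

Setting one weight to 1 and the others to 0 recovers a corner exactly, and equal weights of 1/3 yield the triangle's centroid.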

Figure 9.3 Triangle tessellation (A tessellation of a triangular domain using the tessellation levels from Example 9.5.)

As with the other domains, the outer-tessellation levels control the subdivision of the perimeter of the triangle and the inner-tessellation level controls how the interior is ...

Figure 9.4 Even and odd tessellation (Examples of how even and odd inner-tessellation levels affect triangular tessellation: odd inner-tessellation levels create a small triangle in the center of the triangular tessellation domain, while even levels create a single tessellation coordinate in the center.)

... tessellation level factors using ...

... emits, and is responsible for determining the position of the vertex derived from the tessellation coordinate. As we'll see, tessellation evaluation shaders look similar to vertex shaders in transforming vertices into screen positions (unless the tessellation evaluation shader's data is going to be further processed by a geometry shader). The first step in configuring a tessellation ...

Specifying the Spacing of Tessellation Coordinates

Additionally, we can control how fractional values for the outer-tessellation levels are used in determining the tessellation coordinate generation for the perimeter edges. (Inner-tessellation levels are affected by these options.) Table 9.3 describes the three spacing options available, where max represents an OpenGL implementation's maximum ...

... shader in the gl_in variable, which, when combined with the tessellation coordinates, can be used to generate the output vertex's position. Tessellation coordinates are provided to the shader in the variable gl_TessCoord. In Example 9.6, we use a combination of equal-spaced quads to render a simple patch. In this case, the tessellation coordinates are used to color the surface, and ...

Additionally, the following scalar values, described in Table 9.4, are provided for determining which primitive is being processed and for computing the position of the output vertex.

Table 9.4 Tessellation Control Shader Input Variables

    gl_PrimitiveID        Primitive index for the current input patch
    gl_PatchVerticesIn    Number of vertices in the input patch, which is the dimension of gl_in ...

Processing Patch Input Vertices

Given the information from our patches, we can easily construct the tessellation control shader for our application, which is shown in Example 9.8.

Example 9.8 Tessellation Control Shader for Teapot Example

#version 420 core

layout (vertices = 16) out;

void main()
{
    gl_TessLevelInner[0] = 4;
    gl_TessLevelInner[1] = 4;
    gl_TessLevelOuter[0] = 4;
    gl_TessLevelOuter[1] = ...

Figure 9.5 The tessellated patches of the teapot

... with p being the final vertex position, vij the input control point at index (i, j) in our input patch (both of which are vec4s in GLSL), and B being two scaling functions. While it might not seem like it, we can easily map the formula to a tessellation evaluation shader, as shown in Example 9.9. In the following ...

{
    vec4 p = vec4(0.0);
    float u = gl_TessCoord.x;
    float v = gl_TessCoord.y;

    for (int j = 0; j < 4; ++j) {
        for (int i = 0; i < 4; ++i) {
            p += B(i, u) * B(j, v) * gl_in[4*j+i].gl_Position;
        }
    }

    gl_Position = P * MV * p;
}

Our B function is one of the Bernstein polynomials, an entire family of mathematical functions. Each one returns a scalar value. ...

While that discussion involved more mathematics than most of the other techniques we've described in the book, it is representative of what you will encounter when working with tessellated surfaces. While a full treatment of the mathematics of surfaces is outside the scope of this text, copious resources are available that describe the required techniques.

Additional Tessellation Techniques

In this ...

    for (int i = 0; i < 4; ++i) {
        gl_TessLevelOuter[i] = tessLOD;
    }

    tessLOD = clamp(0.5 * tessLOD, 0.0, gl_MaxTessGenLevel);
    gl_TessLevelInner[0] = tessLOD;
    gl_TessLevelInner[1] = tessLOD;

    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}

Example 9.11 is a very rudimentary method for computing a patch's level of detail. In particular, each perimeter edge is tessellated the ...
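The exact distance-to-level mapping that produces tessLOD is elided in this extract, so the following C sketch is purely illustrative: a hypothetical linear falloff from the implementation's maximum level down to 1.

```c
/* Hypothetical LOD mapping (not the book's formula): full tessellation
   at distance 0, falling linearly to 1 at max_distance, clamped to the
   implementation's limit. */
double tess_lod(double distance, double max_distance, double max_level)
{
    double lod = max_level * (1.0 - distance / max_distance);
    if (lod < 1.0) lod = 1.0;
    if (lod > max_level) lod = max_level;
    return lod;
}
```

Any monotonically decreasing mapping works; the important property is that nearby patches receive high levels and distant ones fall back toward a single segment per edge.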

... process. We would modify the tessellation control shader to implement the following:

Example 9.12 Specifying Tessellation Level Factors Using Perimeter Edge Centers

struct EdgeCenters {
    vec4 edgeCenter[4];
};

uniform vec3 EyePosition;
uniform EdgeCenters patch[];

void main()
{
    for (int i = 0; i < 4; ++i) {
        float d = distance(patch[gl_PrimitiveID].edgeCenter[i],
                           vec4(EyePosition, 1.0));
        const float ...

... order of accumulation of mathematical operations in the tessellation evaluation shader must also match. Depending upon how the tessellation evaluation shader generates the final positions of the mesh's vertices, you may need to reorder the processing of vertices in the tessellation evaluation shader. A common approach to this problem is to recognize the output-patch vertices that contribute to ...

... described in Chapter 6, ''Textures''. In fact, there's really not much to say, other than you would likely use the tessellation coordinate provided to the tessellation evaluation shader in some manner to sample a texture map containing displacement information. Adding displacement mapping to the teapot from Example 9.9 would require adding two lines to the tessellation evaluation ...

Chapter 10 Geometry Shaders

Chapter Objectives

After reading this chapter, you'll be able to do the following:

• Create and use geometry shaders to further process geometry within the OpenGL pipeline.
• Create additional geometric primitives using a geometry shader.
• Use geometry shaders in combination with transform feedback to generate multiple streams of geometric data.
• ...

In this chapter, we introduce an entirely new shader stage---the geometry shader. The geometry shader sits logically right before primitive assembly and fragment shading. It receives as its input complete primitives as a collection of vertices, and these inputs are represented as arrays. Typically, the inputs are provided by the vertex shader. However, when tessellation is active, the ...

... geometry shader, pass GL_GEOMETRY_SHADER as the shader type parameter to glCreateShader(). The shader source is passed as normal using the glShaderSource() function, and then the shader is compiled using glCompileShader(). Multiple geometry shaders may be attached to a single program object, and when that program is linked, the attached geometry shaders will be linked into an executable ...

void main()
{
    int n;

    // Loop over the input vertices
    for (n = 0; n < gl_in.length(); n++) {
        // Copy the input position to the output
        gl_Position = gl_in[n].gl_Position;

        // Emit the vertex
        EmitVertex();
    }

    // End the primitive. This is not strictly necessary
    // and is only here for illustrative purposes.
    EndPrimitive();
}

This shader simply copies its input into its output. You ...

Table 10.1 Geometry Shader Primitive Types and Accepted Drawing Modes

    points                  GL_POINTS, GL_PATCHES(1)
    lines                   GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP, GL_PATCHES(1)
    triangles               GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_PATCHES(1)
    lines_adjacency(2)      GL_LINES_ADJACENCY, GL_LINE_STRIP_ADJACENCY
    triangles_adjacency(2)  GL_TRIANGLES_ADJACENCY, GL_TRIANGLE_STRIP_ADJACENCY

... terminating ...

... end of the shader), any incomplete primitives will simply be discarded. That is, if the shader produces a triangle strip with only two vertices or if it produces a line strip with only one vertex, the extra vertices making up the partial strip will be thrown away.

Geometry Shader Inputs and Outputs

The inputs and outputs of the geometry shader are specified using layout ...

... array, the number of elements in the gl_in array can be found using the .length() method. Returning to our example geometry shader, we see a loop:

// Loop over the input vertices
for (n = 0; n < gl_in.length(); n++) {
    ...
}

The loop runs over the elements of the gl_in array, whose length is dependent on the input primitive type declared at the top of the shader. In ...
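The correspondence between the input layout qualifier and gl_in.length() can be tabulated in C; this helper is a convenience of ours, not an OpenGL API.

```c
#include <string.h>

/* Size of the gl_in[] array for each geometry shader input
   primitive type (the layouts of Table 10.1). */
int gs_input_vertices(const char *layout_qualifier)
{
    if (strcmp(layout_qualifier, "points") == 0)              return 1;
    if (strcmp(layout_qualifier, "lines") == 0)               return 2;
    if (strcmp(layout_qualifier, "triangles") == 0)           return 3;
    if (strcmp(layout_qualifier, "lines_adjacency") == 0)     return 4;
    if (strcmp(layout_qualifier, "triangles_adjacency") == 0) return 6;
    return -1;   /* unknown qualifier */
}
```

Because these sizes are fixed per layout qualifier, gl_in.length() is a compile-time constant inside the shader.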

The last two input primitive types represent adjacency primitives, which are special primitives that are accepted by the geometry shader. They have special meaning and interpretation when no geometry shader is present (which will be described shortly), but in most cases where a geometry shader is present they can be considered to be simple collections of four or six vertices, and it ...

First, in the vertex shader:

out VS_GS_INTERFACE
{
    vec4 position;
    vec3 normal;
    vec4 color;
    vec2 tex_coord[4];
} vs_out;

Now, in the geometry shader:

in VS_GS_INTERFACE
{
    vec4 position;
    vec3 normal;
    vec4 color;
    vec2 tex_coord[4];
} gs_in[];

Now we have declared the output of the vertex shader as vs_out using an interface block, which is matched to gs_in[] in the ...

These primitives have four and six vertices, respectively, and allow adjacency information---information about adjacent primitives or edges---to be passed into the geometry shader. Lines with adjacency information are generated by using the GL_LINES_ADJACENCY or GL_LINE_STRIP_ADJACENCY primitive mode in a draw command such as glDrawArrays(). Likewise, triangles with adjacency information are produced ...

... vertices. Figure 10.2 below demonstrates this. In Figure 10.2, the first primitive passed to the geometry shader is made up from vertices A, B, C, and D; the second from B, C, D, and E; the third from C, D, E, and F; and so on.

Figure 10.2 Line-strip adjacency sequence (Vertex sequence for GL_LINE_STRIP_ADJACENCY primitives.)

The lines_adjacency ...
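The vertex sequence of Figure 10.2 can be stated as index arithmetic and checked in C (the helper names are ours):

```c
/* For GL_LINE_STRIP_ADJACENCY, primitive p (0-based) is built from
   vertices p..p+3: the middle two form the line segment, while the
   outer two carry adjacency information. */
void line_strip_adjacency_indices(int p, int idx[4])
{
    for (int i = 0; i < 4; ++i)
        idx[i] = p + i;
}

/* n vertices yield n - 3 primitives (none for fewer than 4). */
int line_strip_adjacency_count(int n)
{
    return n >= 4 ? n - 3 : 0;
}
```

With the eight vertices A through H of Figure 10.2, this gives five primitives, matching the figure.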

Triangles with Adjacency

Like the lines with adjacency primitive types, the triangles_adjacency input primitive type is designed to allow triangles with adjacency information to be passed into a geometry shader. Each triangles_adjacency primitive is constructed from six vertices, and so gl_in and the other geometry shader inputs become six-element arrays. There are also two primitive modes ...

... are still used to construct each primitive passed to the geometry shader. The first primitive is made from the first six vertices in the enabled arrays, and then a new primitive is constructed for each vertex, reusing the previous five.

Figure 10.4 Triangle-strip adjacency layout (Vertex layout for GL_TRIANGLE_STRIP_ADJACENCY primitives.)

If the ...

Figure 10.5 Triangle-strip adjacency sequence (Vertex sequence for GL_TRIANGLE_STRIP_ADJACENCY primitives.)

Triangle 3 will be made up of vertices E, G, and I, and the adjacency vertices will be C, K, and H. This pattern repeats until the end of the strip, where triangle 6 is made from vertices M, K, and O, and the adjacency vertices are I, ...

... one that it's processing in the mesh. For triangles, the extra vertex is often the third vertex of a triangle sharing an edge (and therefore two vertices) with the current primitive. This vertex likely already exists in the mesh. If indexed vertices are used, then no additional vertex data is required to represent this---only additional indices in the element buffer. In many ...

... other vertices, that doesn't really matter, as those undefined values will never be used. To specify which vertex is to be used as the provoking vertex, you can call glProvokingVertex() with the desired mode. The default is GL_LAST_VERTEX_CONVENTION, which means that flat-shaded interpolants will be taken from the last vertex in each primitive. However, you can specify that they ...

... already---it is available in the fragment shader to identify the primitive to which the fragment belongs. Because the geometry shader may produce a variable number of output primitives (or none at all), it is not possible for the system to generate gl_PrimitiveID automatically. Instead, the value that would have been generated if no geometry shader were present is passed as an ...

... is possibly the simplest geometry shader that actually does anything. However, Example 10.5 contains a perfectly legal geometry shader.

Example 10.5 A Geometry Shader that Drops Everything

#version 330 core

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

void main()
{
    /* Do nothing */
}

However, this isn't particularly useful---it doesn't produce any output primitives, and using ...

Geometry Amplification

As you have read, it is possible for a geometry shader to output a different number of primitives than it accepts as input. So far we have looked at a simple pass-through geometry shader and at a shader that selectively culls geometry. Now we will look at a shader that produces more primitives on its output than it accepts on its input. This ...

... (shells) and the depth of the fur. The geometry shader produces the fur shells by displacing the incoming vertices along their normals and essentially producing multiple copies of the incoming geometry. As the shells are rendered, the fragment shader uses a fur texture to selectively blend and ultimately discard pixels that are not part of a hair. The geometry shader is shown ...

vec3 n = vertex_in[i].normal;

// Copy it to the output for use in the fragment shader
vertex_out.normal = n;

// Copy the texture coordinate too - we'll need that to
// fetch from the fur texture
vertex_out.tex_coord = vertex_in[i].tex_coord;

// Fur "strength" reduces linearly along the length of
// the hairs
vertex_out.fur_strength = 1.0 - d;

// This is the core - ...

// The fur texture
uniform sampler2D fur_texture;

// Color of the fur. Silvery gray by default...
uniform vec4 fur_color = vec4(0.8, 0.8, 0.9, 1.0);

// Input from the geometry shader
in GS_FS_VERTEX
{
    vec3 normal;
    vec2 tex_coord;
    flat float fur_strength;
} fragment_in;

void main()
{
    // Fetch from the fur texture. We'll only use the alpha channel
    // here, but we could easily have a ...

... density and distribution to be controlled programmatically. The current depth of the shell being rendered is passed from the geometry shader into the fragment shader. The fragment shader uses this, along with the contents of the fur texture, to determine how far along the hair the fragment being rendered is. This information is used to calculate the fragment's color and opacity, ...

... the individual slices that make up the shells. This means that we need a lot of shells (and thus a lot of amplification in the geometry shader) to produce a visually compelling result and hide this artifact. This can be detrimental to performance. When fur shells are used, we will generally also generate fur fins. Fins are additional primitives emitted perpendicular to the ...

... and rasterization, the geometry shader is capable of producing other, ancillary streams of vertex information that can be captured using transform feedback. By combining the geometry shader's ability to produce a variable number of vertices at its output and its ability to send those vertices to any one of several output streams, some sophisticated sorting, bucketing, and ...

flat out float electron;

// Output stream declarations have no effect on input
// declarations; elephant is just a regular input
in vec2 elephant;

// It's possible to go back to a previously
// defined stream
layout (stream = 0) out;

// baz joins its cousins foo and bar in stream 0
out vec4 baz;

// And then jump right to stream 3, skipping stream 2
// altogether
layout ...

... easier to read. Now that we have defined which outputs belong to which streams, we need to direct output vertices to one or more of those streams. As with a regular, single-stream geometry shader, vertices are emitted and primitives are ended programmatically using special built-in GLSL functions. When multiple output streams are active, the function to emit vertices on a specific ...

... across the calls to EmitStreamVertex(). This is incorrect, and on some OpenGL implementations, the values of proton, electron, iron, and copper will become undefined after the first call to EmitStreamVertex(). Such a shader should be written as shown in Example 10.12.

Example 10.12 Corrected Emission of Vertices into Multiple Streams

// Set up and emit outputs for stream 0
foo = ...

... Examples 10.9 and 10.10, we will record the variables for the first stream (foo, bar, and baz) into the buffer object bound to the first transform feedback buffer binding point, the variables for the second stream (proton and electron) into the buffer bound to the second binding point, and finally the variables associated with stream 3 (iron and copper) into the buffer ...

... be used in subsequent rendering. Because the vertex shader is a simple, one-in, one-out pipeline stage, it is known up front how many vertices the vertex shader will generate. Assuming that the transform feedback buffer is large enough to hold all of the output data, the number of vertices stored in the transform feedback buffer is simply the number of vertices processed by ...

... and

void glEndQueryIndexed(GLenum target, GLuint index);

Ends the active query on the indexed query target point specified by target and index. Here, target is set to either GL_PRIMITIVES_GENERATED or GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, index is the index of the primitive query binding point on which to execute the query, and id is the name of a query object that was previously ...

... glDrawTransformFeedback() and glDrawTransformFeedbackStream() are supplied. The prototypes of these functions are as follows:

void glDrawTransformFeedback(GLenum mode, GLuint id);
void glDrawTransformFeedbackStream(GLenum mode, GLuint id,
                                   GLuint stream);

Draw primitives as if glDrawArrays() had been called with mode set as specified, first set to zero, and count set to the number of primitives captured ...

... glDrawTransformFeedbackStreamInstanced() are provided. Their prototypes are as follows:

void glDrawTransformFeedbackInstanced(GLenum mode, GLuint id,
                                      GLsizei instancecount);
void glDrawTransformFeedbackStreamInstanced(GLenum mode, GLuint id,
                                            GLuint stream,
                                            GLsizei instancecount);

Draw primitives as if glDrawArraysInstanced() had been called with first set to zero, count set to the number of primitives captured by ...

    void main() { vs_normal = (model_matrix * vec4 (normal, 0.0)).xyz; gl_Position = model_matrix * position; } Vertices enter the geometry shader shown in Example 10.15 in view space. This shader takes the incoming stream of primitives, calculates a per-face normal, and then uses the sign of the X component of the normal to determine whether the triangle is left-facing or right-facing. read more..

  • Page - 590

    for (i = 0; i < gl_in.length(); i++) { // Transform to clip space lf_position = projection_matrix * (gl_in[i].gl_Position - vec4 (30.0, 0.0, 0.0, 0.0)); // Copy the incoming normal to the output stream lf_normal = vs_normal[i]; // Emit the vertex EmitStreamVertex(0); } // Calling EndStreamPrimitive is not strictly necessary as // these are points EndStreamPrimitive(0); } // Otherwise, it’s read more..

  • Page - 591

    "lf_position", "lf_normal" }; glTransformFeedbackVaryings(sort_prog, 5, varyings, GL_INTERLEAVED_ATTRIBS); Notice that the output of the geometry shader for stream zero and stream one are identical. The same data is written to the selected stream regardless of whether the polygon is left- or right-facing. In the first pass, all of the vertex data recorded into the transform read more..

  • Page - 592

    feedback object to manage transform feedback data and primitive counts. The code to set all this up is given in Example 10.18 below. Example 10.18 OpenGL Setup Code for Geometry Shader Sorting // Create a pair of vertex array objects and buffer objects // to store the intermediate data. glGenVertexArrays(2, vao); glGenBuffers(2, vbo); // Create a transform feedback object upon which read more..

  • Page - 593

back-facing polygons and performs no rasterization. The second and third passes are essentially identical in this example, although a completely different shading algorithm could be used in each. These passes actually render the sorted geometry as if it were supplied by the application.

[Figure: schematic of the sorting architecture, with labels: input geometry, sorting geometry shader, real vertex shader, object-space vertices, view-space vertices, pass-through.]

  • Page - 594

    pass. Likewise, in the third pass we draw the right-facing geometry by using glDrawTransformFeedbackStream() with stream one. Example 10.19 Rendering Loop for Geometry Shader Sorting // First pass - start with the "sorting" program object. glUseProgram(sort_prog); // Set up projection and model-view matrices mat4 p(frustum(-1.0f, 1.0f, aspect, -aspect, 1.0f, 5000.0f)); mat4 m; m = read more..

  • Page - 595

    glBindVertexArray(vao[0]); glDrawTransformFeedbackStream(GL_TRIANGLES, xfb, 0); // Now draw stream 1, which contains right facing polygons. glUniform4fv(0, 1, colors[1]); glBindVertexArray(vao[1]); glDrawTransformFeedbackStream(GL_TRIANGLES, xfb, 1); The output of the program shown in Example 10.19 is shown in Figure 10.9. While this is not the most exciting program ever written, it does demonstrate the read more..

  • Page - 596

    Geometry Shader Instancing One type of instancing has already been covered in Chapter 3. In this first type of instancing, functions like glDrawArraysInstanced() or glDrawElementsInstanced() are used to simply run the whole OpenGL pipeline on a set of input data multiple times. This results in the vertex shader running several times on all of the input vertices, with the same vertex read more..

  • Page - 597

geometry shader may reach a much higher amplification level, as with a noninstanced geometry shader, any amplification performed must be limited to the maximum number of output vertices supported by the implementation. By combining API-level instancing with geometry shader instancing and amplification in the geometry shader, it is possible to essentially nest three levels of geometry in

  • Page - 598

void glViewportIndexedf(GLuint index, GLfloat x, GLfloat y, GLfloat w, GLfloat h);
void glViewportIndexedfv(GLuint index, const GLfloat *v);
void glDepthRangeIndexed(GLuint index, GLclampd n, GLclampd f);

Sets the bounds of a specific viewport. glViewportIndexedf() sets the bounds of the viewport determined by index to the rectangle whose lower-left corner is at (x, y) and whose width and height are w and h, respectively.

  • Page - 599

    An example use case is to specify multiple viewports within a single framebuffer (e.g., a top, side, and front view in a 3D modeling application) and use the geometry shader to render the same input vertex data into each of the viewports. This can be performed using any of the techniques discussed previously. For example, the geometry shader could perform a simple loop and read more..

  • Page - 600

    { for (int i = 0; i < gl_in.length(); i++) { // Set the viewport index for every vertex. gl_ViewportIndex = gl_InvocationID; // Color comes from the "colors" array, also // indexed by gl_InvocationID. gs_color = colors[gl_InvocationID]; // Normal is transformed using the model matrix. // Note that this assumes that there is no // shearing in the model matrix. gs_normal = read more..

  • Page - 601

    rotation(360.0f * t * float (i + 1), X) * rotation(360.0f * t * float (i + 2), Y) * rotation(360.0f * t * float (5 - i), Z) * translation(0.0f, -80.0f, 0.0f)); } glUniformMatrix4fv(model_matrix_pos, 4, GL_FALSE, m[0]); Notice in Example 10.22 how glUniformMatrix4fv() is used to set the complete array of four matrix uniforms with a single function call. In the window resize read more..

  • Page - 602

    Figure 10.10 Output of the viewport-array example As with glDepthRangeArrayv() and glViewportArrayv(), there is an array form of glScissorIndexed(), which sets multiple scissor rectangles simultaneously. Its prototype is as follows: void glScissorArrayv(GLuint first, GLsizei count, const GLint * v); Sets the bounds of multiple scissor rectangles with a single command. first contains the index of read more..

  • Page - 603

    viewports and scissor rectangles allows for some combinatorial use. For example, you could specify four viewports and four scissor rectangles, producing 16 possible combinations of viewport and scissor rectangles, which can be indexed in the geometry shader independently. Layered Rendering When rendering into a framebuffer object, it is possible to use a 2D array texture as a color read more..

  • Page - 604

    A different array texture can be attached to each of the framebuffer’s color attachments (GL_COLOR_ATTACHMENTi, where i is the index of the color attachment). It is also possible to create a 2D array texture with a format of GL_DEPTH_COMPONENT, GL_DEPTH_STENCIL, or GL_STENCIL_INDEX and attach it to GL_DEPTH_ATTACHMENT, GL_STENCIL_ATTACHMENT, or GL_DEPTH_STENCIL_ATTACHMENT. This will allow the read more..

  • Page - 605

    vec3 normal; } vertex_in[]; out GS_FS_VERTEX { vec4 color; vec3 normal; } vertex_out; uniform mat4 projection_matrix; uniform int output_slices; void main() { int i, j; mat4 slice_matrix; float alpha = 0.0; float delta = float (output_slices - 1) * 0.5 / 3.1415927; for (j = 0; j < output_slices; ++j) { float s = sin(alpha); float c = cos(alpha); slice_matrix = mat4 (vec4(c, 0.0, -s, 0.0), vec4 read more..

  • Page - 606

    In this particular example, a simple loop is used to amplify the incoming geometry. This is sufficient when the number of layers in the framebuffer attachment is relatively small---less than one third of the maximum number of output vertices allowed by the implementation in a geometry shader. When a larger number of array slices must be rendered, instanced rendering or even read more..

  • Page - 607

    even change the types of primitives. The geometry shader can be used for user-controlled culling, geometric transformations, and even sorting algorithms. It provides access to features such as multiple viewports and rendering into texture arrays, three-dimensional textures, and cube maps. The geometry shader can be instanced, which when combined with its other features is an extremely read more..

  • Page - 608

To produce geometry, do the following:

• Use EmitVertex() or EmitStreamVertex(<stream>) to produce vertices and EndPrimitive() or EndStreamPrimitive(<stream>) to break apart long output strips (remember, geometry shaders can produce only points, line strips, or triangle strips).

The special inputs and outputs available to geometry shaders are as follows:

• gl_in[] ---an input array

  • Page - 609

    the vertex shader. If independent triangles are rendered, calculating the values of flat interpolated attributes in the vertex shader will result in that computation being performed for vertices that are not the provoking vertex for the primitive. Moving that work to the geometry shader allows it to be performed only once and then the value (which should be stored in local read more..

  • Page - 610

Chapter 11
Memory

Chapter Objectives

After reading this chapter, you’ll be able to do the following:

• Read from and write to memory from shaders.
• Perform simple mathematical operations directly on memory from shaders.
• Synchronize and communicate between different shader invocations.

  • Page - 611

Everything in the OpenGL pipeline thus far has essentially been free from side effects. That is, the pipeline is constructed from a sequence of stages, either programmable (such as the vertex and fragment shaders) or fixed function (such as the tessellation engine) with well-defined inputs and outputs (such as vertex attributes or color outputs to a framebuffer). Although it has

  • Page - 612

which they represent. The OpenGL Shading Language image types are shown in Table 11.1.

Table 11.1 Generic Image Types in GLSL

    image1D         Floating-Point 1D
    image2D         Floating-Point 2D
    image3D         Floating-Point 3D
    imageCube       Floating-Point Cube Map
    image2DRect     Floating-Point Rectangle
    image1DArray    Floating-Point 1D Array
    image2DArray    Floating-Point 2D Array
    imageBuffer     Floating-Point Buffer
    image2DMS       Floating-Point 2D Multisample

  • Page - 613

Table 11.1 (continued) Generic Image Types in GLSL

    uimage2DArray     Unsigned Integer 2D Array
    uimageBuffer      Unsigned Integer Buffer
    uimage2DMS        Unsigned Integer 2D Multisample
    uimage2DMSArray   Unsigned Integer 2D Multisample Array
    uimageCubeArray   Unsigned Integer Cube-Map Array

Notice that most of the GLSL sampler types have an analog as an image type. The primary differences between a sampler

  • Page - 614

Table 11.2 (continued) Image Format Qualifiers

    rgba16          GL_RGBA16
    rgb10_a2        GL_RGB10_A2
    rgba8           GL_RGBA8
    rg16            GL_RG16
    rg8             GL_RG8
    r16             GL_R16
    r8              GL_R8
    rgba16_snorm    GL_RGBA16_SNORM
    rgba8_snorm     GL_RGBA8_SNORM
    rg16_snorm      GL_RG16_SNORM
    rg8_snorm       GL_RG8_SNORM
    r16_snorm       GL_R16_SNORM
    r8_snorm        GL_R8_SNORM
    rgba32i         GL_RGBA32I
    rgba16i         GL_RGBA16I
    rgba8i          GL_RGBA8I
    rg32i           GL_RG32I
    rg16i           GL_RG16I
    rg8i            GL_RG8I
    r32i            GL_R32I

  • Page - 615

    The image format qualifier is provided as part of the image variable declaration and must be used when declaring an image variable that will be used to read from an image. It is optional if the image will only ever be written to (see the explanation of writeonly below for more details). The image format qualifier used in the declaration of such variables (if present) must read more..

  • Page - 616

    reinterpreted as the type specified in the shader. For example, reading from a texture with the GL_R32F internal format using an image variable declared as r32ui will return an unsigned integer whose bit-pattern represents the floating-point data stored in the texture. The maximum number of image uniforms that may be used in a single shader stage may be determined by querying the read more..

  • Page - 617

binding to 0. The number of image units supported by the OpenGL implementation may be determined by retrieving the value of GL_MAX_IMAGE_UNITS. A single layer of a texture object must be bound to an image unit before it can be accessed in a shader. To do this, call glBindImageTexture(), whose prototype is as follows:

void glBindImageTexture(GLuint unit, GLuint texture, GLint level, GLboolean layered, GLint layer, GLenum access, GLenum format);

  • Page - 618

    Example 11.2 Creating, Allocating, and Binding a Texture to an Image Unit GLuint tex; // Generate a new name for our texture glGenTextures(1, &tex); // Bind it to the regular 2D texture target to create it glBindTexture(GL_TEXTURE_2D, tex); // Allocate immutable storage for the texture glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, 512, 512); // Unbind it from the 2D texture target read more..

  • Page - 619

Example 11.3 Creating and Binding a Buffer Texture to an Image Unit

GLuint tex, buf;
// Generate a name for the buffer object, bind it to the
// GL_TEXTURE_BUFFER target, and allocate 4K for the buffer
glGenBuffers(1, &buf);
glBindBuffer(GL_TEXTURE_BUFFER, buf);
glBufferData(GL_TEXTURE_BUFFER, 4096, NULL, GL_DYNAMIC_COPY);
// Generate a new name for our texture
glGenTextures(1, &tex);
// Bind

  • Page - 620

The imageLoad() functions operate similarly to texelFetch(), which is used to directly read texels from textures without any filtering applied. In order to store into images, the imageStore() function may be used. imageStore() returns no value and is defined as follows:

void imageStore(writeonly gimage1D image, int P, gvec4 data);
void imageStore(writeonly gimage2D image, ivec2 P, gvec4 data);

  • Page - 621

    ivec3 imageSize(gimage3D image); ivec3 imageSize(gimage2DArray image); ivec3 imageSize(gimage2DMSArray image); Return the dimensions of the image. For arrayed images, the last component of the return value will hold the size of the array. Cube images return only the dimensions of one face and the number of cubes in the cube-map array, if arrayed. Example 11.4 shows a simple but complete read more..

  • Page - 622

    Figure 11.1 Output of the simple load-store shader As can be seen in Figure 11.1, two copies of the output geometry have been rendered---one in the left half of the image and the other in the right half of the image. The data in the resulting texture was explicitly placed with the shader of Example 11.4. While this may seem like a minor accomplishment, it actually read more..

  • Page - 623

    Shader Storage Buffer Objects Reading data from and writing data to memory using image variables works well for simple cases where large arrays of homogeneous data are needed, or where the data is naturally image-based (such as the output of OpenGL rendering or where the shader is writing into an OpenGL texture). However, in some cases, large blocks of structured data may be read more..

  • Page - 624

    Example 11.6 Creating a Buffer and Using it for Shader Storage GLuint buf; // Generate the buffer, bind it to create it and declare storage glGenBuffers(1, &buf); glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf); glBufferData(GL_SHADER_STORAGE_BUFFER, 8192, NULL, GL_DYNAMIC_COPY); // Now bind the buffer to the zeroth GL_SHADER_STORAGE_BUFFER // binding point glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf); read more..

  • Page - 625

    Atomic Operations and Synchronization Now that you have seen how shaders may read and write arbitrary locations in textures (through built-in functions) and buffers (through direct memory access), it is important to understand how these accesses can be controlled such that simultaneous operations to the same memory location do not destroy each other’s effects. In this section, you read more..

  • Page - 626

uint count = imageLoad(overdraw_count, ivec2(gl_FragCoord.xy)).x;
// Add one
count = count + 1;
// Write it back to the image
imageStore(overdraw_count, ivec2(gl_FragCoord.xy), uvec4(count));
}

The shader in Example 11.8 attempts to count overdraw in a scene. It does so by storing the current overdraw count for each pixel in an image. Whenever a fragment is shaded, the current overdraw count

  • Page - 627

    location, at time 1, it increments it, and at time 2, it writes the value back to memory. The value in memory (shown in the right-most column) is now 1 as expected. Starting at time 3, the second invocation of the shader (fragment 1) executes the same sequence of operations---load, increment, and write, over three time steps. The value in memory at the end of time Step read more..

  • Page - 628

    The reason for the corruption seen in this example is that the increment operations performed by the shader are not atomic with respect to each other. That is, they do not operate as a single, indivisible operation but rather as a sequence of independent operations that may be interrupted or may overlap with the processing performed by other shader invocations accessing the read more..

  • Page - 629

    The result of executing the shader shown in Example 11.9 is shown in Figure 11.4. As you can see, the output is much cleaner. Figure 11.4 Output of the atomic overdraw counter shader imageAtomicAdd is one of many atomic built-in functions in GLSL. These functions include addition and subtraction, logical operations, and comparison and exchange operations. The complete list of GLSL read more..

  • Page - 630

int imageAtomicExchange(IMAGE_PARAMS mem, int data);
uint imageAtomicCompSwap(IMAGE_PARAMS mem, uint compare, uint data);
int imageAtomicCompSwap(IMAGE_PARAMS mem, int compare, int data);

imageAtomicAdd, imageAtomicMin, and imageAtomicMax perform an atomic addition, minimum, and maximum operation between data and the contents of the specified image at the specified coordinates, respectively. imageAtomicAnd,

  • Page - 631

    supported in atomic operations. Each atomic function returns the value that was previously in memory at the specified location. If this value is not required by the shader, it may be safely ignored. Shader compilers may then perform data-flow analysis and eliminate unnecessary memory reads if it is advantageous to do so. As an example, the equivalent code for imageAtomicAdd is read more..

  • Page - 632

Example 11.12 Equivalent Code for imageAtomicExchange and imageAtomicCompSwap

// THIS FUNCTION OPERATES ATOMICALLY
uint imageAtomicExchange(uimage2D image, ivec2 P, uint data)
{
    uint val = imageLoad(image, P).x;
    imageStore(image, P, uvec4(data));
    return val;
}

// THIS FUNCTION OPERATES ATOMICALLY
uint imageAtomicCompSwap(uimage2D image, ivec2 P, uint compare, uint data)
{
    uint val = imageLoad(image, P).x;
    if (compare == val)
    {

  • Page - 633

    void takeLock(ivec2 pos) { int lock_available; do { // Take the lock - the value in lock_image is 0 if the lock // is not already taken. If so, then it is overwritten with // 1 otherwise it is left alone. The function returns the value // that was originally in memory - 0 if the lock was not taken, // 1 if it was. We terminate the loop when we see that the read more..

  • Page - 634

    The code shown in Example 11.13 implements a simple per-pixel mutex using the imageAtomicCompSwap function. To do this, it compares the value already in memory to zero (the third parameter to imageAtomicCompSwap ). If they are equal (i.e., if the current value in memory is zero), it writes the new value (one, here) into memory. imageAtomicCompSwap then returns the value that was read more..

  • Page - 635

    uint atomicAdd(inout uint mem, uint data); int atomicAdd(inout int mem, int data); uint atomicMin(inout uint mem, uint data); int atomicMin(inout int mem, int data); uint atomicMax(inout uint mem, uint data); int atomicMax(inout int mem, int data); uint atomicAnd(inout uint mem, uint data); int atomicAnd(inout int mem, int data); uint atomicOr(inout uint mem, uint data); int atomicOr(inout int read more..

  • Page - 636

    Sync Objects OpenGL operates in a client-server model, where a server operates asynchronously to the client. Originally, this allowed the user’s terminal to render high-performance graphics and for the application to run on a server in a remote location. This was an extension of the X protocol, which was always designed with remote rendering and network operations in mind. In read more..

  • Page - 637

    void glGetSynciv(GLsync sync, GLenum pname, GLsizei bufSize, GLsizei *length, GLint *values); Retrieves the properties of a sync object. sync specifies a handle to the sync object from which to read the property specified by pname. bufSize is the size in bytes of the buffer whose address is given in values. length is the address of an integer variable that will receive the read more..

  • Page - 638

    • GL_ALREADY_SIGNALED is returned if sync was already signaled when the call to glClientWaitSync() was made. • GL_TIMEOUT_EXPIRED is returned if sync did not enter the signaled state before nanoseconds nanoseconds passed. • GL_CONDITION_SATISFIED is returned if sync was not signaled when the call to glClientWaitSync() was made, but became signaled before nanoseconds nanoseconds elapsed. read more..

  • Page - 639

Example 11.14 Example Use of a Sync Object

// This will be our sync object.
GLsync s;
// Bind a vertex array and draw a bunch of geometry
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 30000);
// Now create a fence that will become signaled when the
// above drawing command has completed
s = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
// Map the uniform buffer that’s in use by the above draw
void

  • Page - 640

    context (the one which you want to wait on) and then call glWaitSync() in the destination context (the one that will do the waiting). The prototype for glWaitSync() is as follows: void glWaitSync(GLsync sync, GLbitfield flags, GLuint64 timeout); Causes the server to wait for the sync object indicated by sync to become signaled. flags is not used and must be set to zero. read more..

  • Page - 641

Example 11.15 Basic Spin-Loop Waiting on Memory

#version 420 core

// Image that we’ll read from in the loop
layout (r32ui) uniform uimageBuffer my_image;

void waitForImageToBeNonZero()
{
    uint val;
    do {
        // (Re-)read from the image at a fixed location.
        val = imageLoad(my_image, 0).x;
        // Loop until the value is nonzero
    } while (val == 0);
}

In Example 11.15, the function waitForImageToBeNonZero

  • Page - 642

    As may be obvious, each call to the optimized version of waitForImageToBeNonZero in Example 11.16 will either read a nonzero value from the image and return immediately or enter an infinite loop---quite possibly crashing or hanging the graphics hardware. In order to avoid this situation, the volatile keyword must be used when declaring the image uniform to instruct the compiler to read more..

  • Page - 643

    In such cases, writes to one image do not affect the contents of any other image. The compiler can therefore be more aggressive about making optimizations that might otherwise be unsafe. Note that by default, the compiler assumes that aliasing of external buffers is possible and is less likely to perform optimizations that may break otherwise well-formed code. (Note GLSL assumes read more..

  • Page - 644


  • Page - 645

    cache altogether, and the second, to bypass level-1 caches and place data in level-2 caches while ensuring that any work that needs to share data is run only in that cache’s shader processor group. Other GPUs may have ways of keeping the level-2 caches coherent. This type of decision is generally made by the OpenGL driver, but a requirement to do so is given in the read more..

  • Page - 646

    image variable declared as writeonly will generate an error. Note that atomic operations implicitly perform a read operation as part of their read-modify-write cycle and so are not allowed on readonly or writeonly image variables. Memory Barriers Now that we understand how to control compiler optimizations using the volatile and restrict keywords and control caching behavior using the read more..

  • Page - 647

{
    val = imageLoad(i, 0).x;
} while (val != gl_PrimitiveID);

// At this point, we can load data from another global image
vec4 frag = imageLoad(my_image, ivec2(gl_FragCoord.xy));
// Operate on it...
frag *= 0.1234;
frag = pow(frag, vec4(2.2));
// Write it back to memory
imageStore(my_image, ivec2(gl_FragCoord.xy), frag);
// Now, we’re about to signal that we’re done with processing
// the pixel. We

  • Page - 648

    To ensure that our modified image contents are written back to memory before other shader invocations start into the body of the function, we use a call to memoryBarrier between updates of the color image and the primitive counter to enforce ordering. We then insert another barrier after the primitive counter update to ensure that other shader invocations see our update. This read more..

  • Page - 649

    • GL_TEXTURE_FETCH_BARRIER_BIT specifies that any fetch from a texture issued after the barrier should reflect data written to the texture by commands issued before the barrier. • GL_SHADER_IMAGE_ACCESS_BARRIER_BIT specifies that data read from an image variable in shaders executed by commands after the barrier should reflect data written into those images by commands issued before the read more..

  • Page - 650

    In addition to the flags listed above, the special value GL_ALL_BARRIER_BITS may be used to specify that all caches be flushed or invalidated and all pending operations be finished before proceeding. This value is included to allow additional bits to be added to the accepted set by future versions of OpenGL or by extensions in a forward compatible manner. The extension read more..

  • Page - 651

    synchronization will depend on the barriers in order to perform cache control and ordering functions. Controlling Early Fragment Test Optimizations The OpenGL pipeline is defined to perform fragment shading followed by depth and stencil tests before writing to the framebuffer. This is almost always the desired behavior---certainly when a fragment shader writes to gl_FragDepth . However, read more..

  • Page - 652

    High Performance Atomic Counters The OpenGL Shading Language also supports a dedicated, high-performance set of atomic counters. However, to motivate their use, we will start with the ones already introduced; that is, the large suite of functions that perform atomic operations on the content of images, as described in ‘‘Atomic Operations on Images’’ on Page 578. These functions read more..

  • Page - 653

    counts in the buffer---the first being the count of all fragments whose red channel is greater than its green channel and the second being the count of all other fragments. Obviously, the sum is the total number of fragments that executed this shader and is what would have been generated by an occlusion query. This type of operation is fairly common---counting events by read more..

  • Page - 654

    Notice the two new uniforms declared at the top of Example 11.22, red_texels and green_texels . They are declared with the type atomic_uint and are atomic counter uniforms. The values of atomic counters may be reset to particular values and their contents read by the application. To provide this functionality, atomic counters are backed by buffer objects bound to the read more..

  • Page - 655

    atomic counter buffer but the value reported for GL_MAX_COMBINED_ATOMIC_COUNTER_BUFFERS is 2, the program will fail to link. Note: Note that while these limits are queryable, it is only required that an OpenGL implementation support atomic counters in the fragment shader---at least one atomic counter buffer binding and 8 atomic counters are supported in the fragment shader, and all other read more..

  • Page - 656

Example

The following section includes an example of the types of effects and techniques that can be implemented using the functionality described in this chapter.

Order-Independent Transparency

Order-independent transparency is a technique where blending operations are carried out in a manner such that rasterization order is not important. The fixed function blending provided by OpenGL through

  • Page - 657

    the same buffer image, the resulting linked lists are interleaved and each pixel has its own head pointer, stored in a 2D image that is the size of the framebuffer. The head pointer is updated using atomic operations---items are always appended at the head of the image and use of an atomic exchange operation ensures that multiple shader invocations attempting to append to the read more..

  • Page - 658

To summarize, what is required for this algorithm to function are

• A buffer large enough to hold all of the fragments that might be rasterized.
• An atomic counter to serve as an allocator for records within the linked list.
• A 2D image the size of the framebuffer that will be used to store the head pointer for each pixel’s linked list of fragments.

  • Page - 659

    0, // No border GL_RED_INTEGER, // Single channel GL_UNSIGNED_INT, // Unsigned int NULL); // No data... yet // We will need to re-initialize the head pointer each frame. The // easiest way to do this is probably to copy from a PBO. We’ll // create that here... GLuint head_pointer_initializer; glGenBuffers(1, &head_pointer_initializer); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, head_pointer_initializer); read more..

  • Page - 660

    drawn, the head pointer and atomic counter buffers must be initialized to known values; otherwise, our shader will continue to append to the structures created in the previous frame. The code for this is given in Example 11.25. Example 11.25 Per-Frame Reset for Order-Independent Transparency // First, clear the head-pointer 2D image with known values. Bind it // to the GL_TEXTURE_2D read more..

  • Page - 661

Example 11.26 Appending Fragments to Linked List for Later Sorting

#version 420 core

// Turn on early fragment testing
layout (early_fragment_tests) in;

// This is the atomic counter used to allocate items in the
// linked list
layout (binding = 0, offset = 0) uniform atomic_uint index_counter;

// Linked list 1D buffer
layout (binding = 0, rgba32ui) uniform uimageBuffer list_buffer;

// Head

  • Page - 662

// item.y = color
item.y = packUnorm4x8(frag_color);
// item.z = depth (gl_FragCoord.z)
item.z = floatBitsToUint(gl_FragCoord.z);
// item.w = unused (so far...)
item.w = 0;
// Write the data into the buffer at the right location
imageStore(list_buffer, index, item);
}

The shader shown in Example 11.26 appends fragments into the per-pixel linked list using atomic counters, general purpose

  • Page - 663


  • Page - 664

    walk the per-pixel lists and blend all of the fragments together in order to build the final output colors. To do this, we render a full-screen quad using a fragment shader that will read from the list corresponding to its output pixel, sort all of the fragments into order, and then perform the blending operations of our choice. Because the number of fragments per-pixel is read more..

  • Page - 665

    // output color output_color = calculate_final_color(frag_count); } Example 11.27 makes use of three functions. First, build_local_fragment_list traverses the linked list of fragments corresponding to the current pixel and places all of the fragments into the fragments[] array. The code for this function is shown in Example 11.28. Notice that the size of the per-pixel fragment array is read more..

  • Page - 666

    After the local array of fragments has been built, it is sorted in order of depth using the sort_fragment_list function shown in Example 11.29. This function implements a simple bubble-sort algorithm. While this is a very simple algorithm and is not well suited for sorting large amounts of data, because the number of items is very low, the cost of the function is still read more..

  • Page - 667

// Function for calculating the final output color. Walks the
// fragments[] array and blends each pixel on top of each other
vec4 calculate_final_color(int frag_count)
{
    // Initialize the final color output
    vec4 final_color = vec4(0.0);
    // For each fragment in the array...
    for (int i = 0; i < frag_count; i++)
    {
        // The color is stored packed into the .y channel of the
        //


Figure 11.8 Result of order-independent transparency: incorrect order on left; correct order on right.

Chapter 12
Compute Shaders

Chapter Objectives

After reading this chapter, you'll be able to do the following:

• Create, compile, and link compute shaders.
• Launch compute shaders, which operate on buffers, images, and counters.
• Allow compute shader invocations to communicate with each other and to synchronize their execution.

Compute shaders run in a completely separate stage of the GPU from the rest of the graphics pipeline. They allow an application to make use of the power of the GPU for general-purpose work that may or may not be related to graphics. Compute shaders have access to many of the same resources as graphics shaders, but have more control over their application flow and how

glAttachShader(). These programs are linked as normal by using glLinkProgram(). Compute shaders are written in GLSL, and in general any functionality accessible to normal graphics shaders (for example, vertex, geometry, or fragment shaders) is available. Obviously, this excludes graphics pipeline functionality such as the geometry shaders' EmitVertex() or EndPrimitive(), or the similarly

Figure 12.1 Schematic of a compute workload (a global workgroup composed of local workgroups, each of which consists of individual invocations)

workgroup is defined in the compute shader source code using an input layout qualifier. The global workgroup size is measured as an integer
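
The relationship sketched in Figure 12.1 is purely arithmetic: an invocation's global position is its workgroup's position scaled by the local workgroup size, plus its position within that workgroup. A small C sketch of that derivation (uvec3 defined locally for illustration):

```c
#include <assert.h>

typedef struct { unsigned x, y, z; } uvec3;

/* gl_GlobalInvocationID is derived in GLSL as:
 *   gl_GlobalInvocationID =
 *       gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID
 * computed per component. */
uvec3 global_invocation_id(uvec3 work_group_id,
                           uvec3 work_group_size,
                           uvec3 local_invocation_id)
{
    uvec3 g;
    g.x = work_group_id.x * work_group_size.x + local_invocation_id.x;
    g.y = work_group_id.y * work_group_size.y + local_invocation_id.y;
    g.z = work_group_id.z * work_group_size.z + local_invocation_id.z;
    return g;
}
```

For example, with a 4 × 4 × 1 local workgroup size, invocation (3, 0, 0) of workgroup (1, 2, 0) has global ID (7, 8, 0).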


Although the simple shader of Example 12.1 does nothing, it is a valid compute shader and will compile, link, and execute on an OpenGL implementation. To create a compute shader, simply call glCreateShader() with type set to GL_COMPUTE_SHADER, set the shader's source code with glShaderSource() and compile it as normal. Then, attach the shader to a program and call glLinkProgram().

void glDispatchCompute(GLuint num_groups_x, GLuint num_groups_y,
                       GLuint num_groups_z);

Dispatch compute workgroups in three dimensions. num_groups_x, num_groups_y, and num_groups_z specify the number of workgroups to launch in the X, Y, and Z dimensions, respectively. Each parameter must be greater than zero and less than or equal to the corresponding element of the implementation-dependent constant

The data in the buffer bound to the GL_DISPATCH_INDIRECT_BUFFER binding could come from anywhere---including another compute shader. As such, the graphics processor can be made to feed work to itself by writing the parameters for a dispatch (or draws) into a buffer object. Example 12.3 shows an example of dispatching compute workloads using glDispatchComputeIndirect().

Example 12.3 Dispatching

compute shader (which would have been set using a layout qualifier in the source of the compute shader), call glGetProgramiv() with pname set to GL_COMPUTE_WORK_GROUP_SIZE and param set to the address of an array of three integers. The three elements of the array will be filled with the size of the local workgroup in the X, Y, and Z dimensions, in that

• gl_LocalInvocationID is the location of the current invocation of a compute shader within the local workgroup. It will range from uvec3(0) to gl_WorkGroupSize - uvec3(1).

• gl_WorkGroupID is the location of the current local workgroup within the larger global workgroup. This variable will range from uvec3(0) to gl_NumWorkGroups - uvec3(1).

• gl_GlobalInvocationID is derived

the data image at the location given by the global invocation ID. The resulting image shows the relationship between the global and local invocation IDs and clearly shows the rectangular local workgroup size specified in the compute shader (in this case, 32 × 16 work items). The resulting image is shown in Figure 12.2.

Figure 12.2 Relationship of global and local invocation ID

Communication

The shared keyword is used to declare variables in shaders in a similar manner to other keywords such as uniform, in, or out. Some example declarations using the shared keyword are shown in Example 12.6.

Example 12.6 Example of Shared Variable Declarations

// A single shared unsigned integer
shared uint foo;

// A shared array of vectors
shared vec4 bar[128];

// A

Because it is expected that variables declared as shared will be stored inside the graphics processor in dedicated high-performance resources, and because those resources may be limited, it is possible to query the combined maximum size of all shared variables that can be accessed by a single compute program. To retrieve this limit, call glGetIntegerv() with pname set to

invocation has completed the corresponding write to that variable. To ensure this, you can write to the variable in the source invocation, and then in both invocations execute the barrier() function. When the destination invocation returns from the barrier() call, it can be sure that the source invocation has also executed the function (and therefore completed the write to the

Use of memory barriers is not necessary to ensure the observed order of memory transactions within a single shader invocation. Reading the value of a variable in a particular invocation of a shader will always return the value most recently written to that variable, even if the compiler reordered the transactions behind the scenes. One final function, groupMemoryBarrier(), is effectively

number of attractors, each with a position and a mass. The mass of each particle is also considered to be the same. Each particle is considered to be gravitationally attracted to the attractors. The force exerted on the particle by each of the attractors is used to update the velocity of the particle by integrating over time. The positions and masses of the attractors are

    pos.w -= 0.0001 * dt;

    // For each attractor...
    for (i = 0; i < 4; i++)
    {
        // Calculate force and update velocity accordingly
        vec3 dist = (attractor[i].xyz - pos.xyz);
        vel.xyz += dt * dt * attractor[i].w *
                   normalize(dist) / (dot(dist, dist) + 10.0);
    }

    // If the particle expires, reset it
    if (pos.w <= 0.0)
    {
        pos.xyz = -pos.xyz * 0.01;
        vel.xyz *= 0.01;
        pos.w += 1.0f;
    }

    //


for (i = 0; i < PARTICLE_COUNT; i++)
{
    positions[i] = vmath::vec4(random_vector(-10.0f, 10.0f),
                               random_float());
}
glUnmapBuffer(GL_ARRAY_BUFFER);

// Initialization of the velocity buffer - also filled with random vectors
glBindBuffer(GL_ARRAY_BUFFER, velocity_buffer);
glBufferData(GL_ARRAY_BUFFER,
             PARTICLE_COUNT * sizeof(vmath::vec4),
             NULL, GL_DYNAMIC_COPY);
vmath::vec4 * velocities = (vmath::vec4 *)

Figure 12.3 Output of the physical simulation program as simple points

In the fragment shader for rendering the points, we first use the age of the point (which is stored in its w component) to fade the point from red hot to cool blue as it gets older. Also, we turn on additive blending by enabling GL_BLEND and setting both the source and destination factors to GL_ONE.

In our rendering loop, the positions and masses of the attractors are updated before we dispatch the compute shader over the buffers containing the positions and velocities. Then, having issued a memory barrier to ensure that the writes performed by the compute shader have completed, we render the particles as points. This loop is shown in Example 12.10.

Example 12.10 Particle

    vmath::translate(0.0f, 0.0f, -60.0f) *
    vmath::rotate(time * 1000.0f, vmath::vec3(0.0f, 1.0f, 0.0f));

// Clear, select the rendering program, and draw the particles
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(render_prog);
glUniformMatrix4fv(0, 1, GL_FALSE, mvp);
glBindVertexArray(render_vao);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDrawArrays(GL_POINTS, 0, PARTICLE_COUNT);

Finally, the result

two-dimensional image by applying it first in the horizontal dimension and then again in the vertical dimension. The actual kernel is a central difference kernel, [ -1 0 1 ]. To implement this kernel, each invocation of the compute shader produces a single pixel in the output image. It must read from the input image and subtract the samples to either side of the target pixel. Of
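
Ignoring the shared-memory staging for a moment, the arithmetic of the kernel itself is small. A C sketch of applying [ -1 0 1 ] along one scanline, clamping at the edges the way the shader clamps its shared-array indices with min()/max():

```c
#include <assert.h>

/* Apply the 1-D central-difference kernel [-1 0 1] to a scanline of n
 * samples: out[x] = in[x+1] - in[x-1], with both neighbor indices
 * clamped to the valid range [0, n-1] at the image edges. */
void central_difference(const float *in, float *out, int n)
{
    for (int x = 0; x < n; x++) {
        int left  = (x - 1 < 0)     ? 0     : x - 1;
        int right = (x + 1 > n - 1) ? n - 1 : x + 1;
        out[x] = in[right] - in[left];
    }
}
```

Running this once over each row and then once over each column of the transposed intermediate reproduces the two-pass structure of the compute-shader example.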


    barrier();

    // Compute our result and write it back to the image
    vec4 result = scanline[min(pos.x + 1, 1023)] -
                  scanline[max(pos.x - 1, 0)];
    imageStore(output_image, pos.yx, result);
}

The image processing shader of Example 12.11 uses a one-dimensional local workgroup size of 1024 pixels (which is the largest workgroup size that is guaranteed to be supported by an OpenGL implementation).

glDispatchCompute(1, 1024, 1);

// Issue a memory barrier between the passes
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

// Now bind the intermediate image as input and the final
// image for output
glBindImageTexture(0, intermediate_image, 0, GL_FALSE, 0,
                   GL_READ_ONLY, GL_RGBA32F);
glBindImageTexture(1, output_image, 0, GL_FALSE, 0,
                   GL_WRITE_ONLY, GL_RGBA32F);

// Dispatch the vertical pass

Figure 12.5 Image processing (Input image (top) and resulting output image (bottom), generated by the image-processing compute-shader example.)

Figure 12.6 Image processing artifacts (Output of the image processing example, without barriers, showing artifacts.)

Chapter Summary

In this chapter, you have read an introduction to compute shaders. As they are not tied to a specific part of the traditional graphics pipeline and have no fixed intended use, the amount that could be written about compute shaders is enormous.

• Make the program current with glUseProgram().
• Launch compute workloads with glDispatchCompute() or glDispatchComputeIndirect().

In your compute shader:

• Specify the local workgroup size using the local_size_x, local_size_y, and local_size_z input layout qualifiers.
• Read and write memory using buffer or image variables, or by updating the values of atomic counters.

The special

Use Barriers

Remember to insert control flow and memory barriers before attempting to communicate between compute shader invocations. If you leave out memory barriers, you open your application up to the effects of race conditions. It may appear to work on one machine but could produce corrupted data on others.

Use Shared Variables

Make effective use of shared variables. Try to

Appendix A
Basics of GLUT: The OpenGL Utility Toolkit

In this text, we used the OpenGL Utility Toolkit (GLUT) as a simple, cross-platform application framework to simplify our examples. The two versions of GLUT in circulation are as follows:

• Freeglut, written by Pawel W. Olszta with contributions from Andreas Umbach and Steve Baker, is the most up-to-date version and the one

In this appendix, we explain those steps and expand on other options that the GLUT library makes available. For complete details on Freeglut (which is the version we recommend), please visit their Web site. This appendix contains the following major sections:

• ‘‘Initializing and Creating a Window’’
• ‘‘Accessing Functions’’
• ‘‘Handling

void glutInitDisplayMode(unsigned int mode);

Specifies a display mode for windows created when glutCreateWindow() is called. You can specify that the window have an associated depth or stencil buffer, or be an RGB or RGBA window. The mode argument is a bitwise OR combination of GLUT_RGB, GLUT_RGBA, GLUT_DOUBLE, GLUT_ALPHA, GLUT_DEPTH, GLUT_STENCIL, GLUT_MULTISAMPLE, or GLUT_STEREO. Additionally, if

GLUT_FORWARD_COMPATIBLE to specify a testing context for forward application compatibility.

void glutInitWindowSize(int width, int height);
void glutInitWindowPosition(int x, int y);

Request windows created by glutCreateWindow() to have an initial size and position. The arguments (x, y) indicate the location of a corner of the window, relative to the entire display. The parameters width and

void (*GLUTproc)() glutGetProcAddress(const char *procName);

Retrieves the function address associated with procName or returns NULL if procName names a function that's not supported in the OpenGL implementation.

Note: In our examples, you may notice we don't use glutGetProcAddress() explicitly. Instead, we use GLEW, the OpenGL Extension Wrangler library, which further abstracts away all of

transformation. The glutReshapeFunc() callback is the right place to make those updates.

void glutReshapeFunc(void (*func)(int width, int height));

Specifies the function that's called whenever the window is resized or moved. The argument func is a pointer to a function that expects two arguments, the new width and height of the window. Typically, func calls glViewport(), so that the

Processing mouse input events is more varied. For instance, does your application require that a mouse button be pressed for the application to respond to events? Or are you interested only in knowing whether the mouse is moving, regardless of the button state? There are different mouse-event processing routines for these situations.

void glutMouseFunc(void (*func)(int button, int state,

input-processing callback. When GLUT detects that there are no more input events, it will call the display callback (the one set with glutDisplayFunc()).

void glutPostRedisplay(void);

Marks the current window as needing to be redrawn. At the next opportunity, the callback function registered by glutDisplayFunc() will be called.

Managing a Background Process

You can specify a function to be

Appendix B
OpenGL ES and WebGL

While the OpenGL API is great for many computer graphics applications, under certain circumstances it may not be the best solution, which is why the OpenGL API has spawned two other APIs. The first is OpenGL ES, where the ‘‘ES’’ stands for ‘‘Embedded Systems’’; it was crafted from the ‘‘desktop’’ version of OpenGL for use

OpenGL ES

OpenGL ES was developed to meet the needs of early embedded devices like mobile phones and set-top boxes. The original version, OpenGL ES Version 1.0, was derived from OpenGL Version 1.3 and was quickly expanded to OpenGL ES Version 1.1, which is based on OpenGL Version 1.5 and was released in April of 2007. This version reached much popularity in original mobile

Example B.1 An Example of Creating an OpenGL ES Version 2.0 Rendering Context

EGLBoolean initializeWindow(EGLNativeWindow nativeWindow)
{
    const EGLint configAttribs[] = {
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_RED_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE, 8,
        EGL_DEPTH_SIZE, 24,
        EGL_NONE
    };

    const EGLint contextAttribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2,
        EGL_NONE
    };

    EGLDisplay dpy;
    dpy =

WebGL

WebGL takes OpenGL (or, specifically, OpenGL ES Version 2.0) to the Internet by adding high-performance 3D rendering to HTML5's Canvas element. Virtually all functions from OpenGL ES Version 2.0 are available in their exact form, except for small changes necessitated by its JavaScript interface. This section provides a brief introduction to WebGL through a simple

convenient to include this JavaScript file in your WebGL applications. It includes the package WebGLUtils and its method setupWebGL(), which makes it easy to enable WebGL on an HTML5 Canvas. Example B.3 expands on the previous example and handles setting up a WebGL context that works in all supported Web browsers. The return value from setupWebGL() is a JavaScript object

false otherwise, which we use to emit an error message. Assuming WebGL is available, we set up some WebGL state, and clear the window---to red now. Once WebGL takes over the Canvas, all of its contents are controlled by WebGL. Now that we know WebGL is supported, we'll expand our example by initializing the required shaders, setting up vertex buffers, and finally

<script id="vertex-shader" type="x-shader/x-vertex">
attribute vec4 vPos;
attribute vec2 vTexCoord;

uniform float uFrame; // Frame number

varying vec2 texCoord;

void main()
{
    float angle = radians(uFrame);
    float c = cos(angle);
    float s = sin(angle);

    mat4 m = mat4(1.0);
    m[0][0] = c;
    m[0][1] = s;
    m[1][1] = c;
    m[1][0] = -s;

    texCoord = vTexCoord;
    gl_Position = m * vPos;
}

it InitShaders(), since there are no files to load; shaders are defined in the HTML source for the page. In order to organize our code better, we created a JavaScript file named InitShaders.js to store the code.

Example B.5 Our WebGL Shader Loader: InitShaders.js

//
// InitShaders.js
//

function InitShaders(gl, vertexShaderId, fragmentShaderId)
{
    var vertShdr;
    var fragShdr;

    var vertElem =

    gl.attachShader(program, fragShdr);
    gl.linkProgram(program);

    if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        var msg = "Shader program failed to link." +
                  "The error log is:" +
                  "<pre>" + gl.getProgramInfoLog(program) + "</pre>";
        alert(msg);
        return -1;
    }

    return program;
}

While InitShaders() is JavaScript, most of it should look recognizable. The major difference

You first allocate and populate (both of which you can do in a single operation) a typed array to store your vertex data. After that, setting up your VBOs is identical to what you've done in OpenGL. We show our initialization in Example B.7.

Example B.7 Initializing Vertex Buffers in WebGL

var vertices = {}; = new Float32Array(
    [ -0.5, -0.5,
       0.5, -0.5,
       0.5,  0.5,

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, texture);

    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB,
                  gl.UNSIGNED_BYTE, image);
    gl.generateMipmap(gl.TEXTURE_2D);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER,
                     gl.NEAREST_MIPMAP_LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
}

The code sequence in configureTexture should look very similar to what

}

function init()
{
    canvas = document.getElementById("gl-canvas");

    gl = WebGLUtils.setupWebGL(canvas);
    if (!gl) { alert("WebGL isn’t available"); }

    gl.viewport(0, 0, canvas.width, canvas.height);
    gl.clearColor(1.0, 0.0, 0.0, 1.0);

    //
    // Load shaders and initialize attribute buffers
    //
    var program = InitShaders(gl, "vertex-shader", "fragment-shader");
    gl.useProgram(program);

    var

        render();
    }
    image.src = "OpenGL-logo.png";

    gl.activeTexture(gl.TEXTURE0);
    var uTexture = gl.getUniformLocation(program, "uTexture");
    gl.uniform1i(uTexture, 0);

    uFrame = gl.getUniformLocation(program, "uFrame");
}

var frameNumber = 0;

function render()
{
    gl.uniform1f(uFrame, frameNumber++);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.drawArrays(gl.TRIANGLE_FAN, 0, 4);
    window.requestAnimFrame(render,

Appendix C
Built-in GLSL Variables and Functions

The OpenGL Shading Language has a small number of built-in variables, a set of constants, and a large collection of built-in functions. This appendix describes each of these, in the following major sections:

• ‘‘Built-in Variables’’ lists the variables, first showing the declarations for all stages, followed by the description of

Built-in Variables

Each programmable stage has a different set of built-in variables, though there is some overlap. We'll first show all the built-in variable declarations in ‘‘Built-in Variable Declarations’’ and then describe each one in ‘‘Built-in Variable Descriptions’’.

Built-in Variable Declarations

Vertex Shader Built-in Variables

in int gl_VertexID;
in int gl_InstanceID;

out

in vec3 gl_TessCoord;
patch in float gl_TessLevelOuter[4];
patch in float gl_TessLevelInner[2];

out gl_PerVertex {
    vec4 gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
};

Geometry Shader Built-in Variables

in gl_PerVertex {
    vec4 gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
} gl_in[];

in int gl_PrimitiveIDIn;
in int gl_InvocationID;

out gl_PerVertex {
    vec4 gl_Position;
    float gl_PointSize;
    float

// work group and invocation IDs
in uvec3 gl_WorkGroupID;
in uvec3 gl_LocalInvocationID;

// derived variables
in uvec3 gl_GlobalInvocationID;
in uint gl_LocalInvocationIndex;

All Shaders' Built-in State Variables

struct gl_DepthRangeParameters {
    float near;
    float far;
    float diff;
};
uniform gl_DepthRangeParameters gl_DepthRange;

uniform int gl_NumSamples;

Built-in Variable Descriptions

The descriptions of each of the

shaders, it reads the values written in the previous shader stage. In a fragment shader, the gl_ClipDistance[] array contains linearly interpolated values for the vertex values written by a shader to the gl_ClipDistance[] vertex output variable. Only elements in this array that have clipping enabled will have defined values.

gl_DepthRange

The structure gl_DepthRange contains the locations of

One use of this is to emulate two-sided lighting by selecting one of two colors calculated by a vertex or geometry shader.

gl_GlobalInvocationID

The compute shader input gl_GlobalInvocationID contains the global index of the current work item. This value uniquely identifies this invocation from all other invocations across all local and global work groups initiated by the current

to select a cube-map face and a layer. Setting gl_Layer to the value layer × 6 + face will render to face face of the cube defined in layer layer. The face values are listed in Table C.1.

Table C.1 Cube-Map Face Targets

Face Value   Face Target
0            TEXTURE_CUBE_MAP_POSITIVE_X
1            TEXTURE_CUBE_MAP_NEGATIVE_X
2            TEXTURE_CUBE_MAP_POSITIVE_Y
3            TEXTURE_CUBE_MAP_NEGATIVE_Y
4            TEXTURE_CUBE_MAP_POSITIVE_Z
5            TEXTURE_CUBE_MAP_NEGATIVE_Z

useful for uniquely identifying a region of shared memory within the local work group for this invocation to use. It is computed as follows:

gl_LocalInvocationIndex =
    gl_LocalInvocationID.z * gl_WorkGroupSize.x * gl_WorkGroupSize.y +
    gl_LocalInvocationID.y * gl_WorkGroupSize.x +
    gl_LocalInvocationID.x;

gl_NumSamples

The uniform input gl_NumSamples to all stages contains the total number of
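
The flattening formula above can be checked on the CPU. A small C sketch (uvec3 defined locally for illustration) that maps a three-dimensional local invocation ID to its flat index:

```c
#include <assert.h>

typedef struct { unsigned x, y, z; } uvec3;

/* Flatten gl_LocalInvocationID into gl_LocalInvocationIndex using the
 * formula from the text: z varies slowest, then y, then x. */
unsigned local_invocation_index(uvec3 id, uvec3 size)
{
    return id.z * size.x * size.y
         + id.y * size.x
         + id.x;
}
```

With a 4 × 2 × 2 local workgroup, the indices run from 0 for invocation (0, 0, 0) to 15 for invocation (3, 1, 1), covering each of the 16 invocations exactly once.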


gl_Position

As an output variable, gl_Position is intended for writing the homogeneous vertex position. This value will be used by primitive assembly, clipping, culling, and other fixed functionality operations, if present, that operate on primitives after vertex processing has occurred. Its value is undefined after the vertex processing stage if the vertex shader executable does not write

framebuffer. Any use of this variable in a fragment shader causes the entire shader to be evaluated per-sample.

gl_SampleMask

The fragment output array gl_SampleMask[] sets the sample mask for the fragment being processed. Coverage for the current fragment will become the logical AND of the coverage mask and the output gl_SampleMask. This array must be sized in the fragment shader

position of the vertex being processed by the shader relative to the primitive being tessellated. Its values will obey the properties

gl_TessCoord.x == 1.0 - (1.0 - gl_TessCoord.x)
gl_TessCoord.y == 1.0 - (1.0 - gl_TessCoord.y)
gl_TessCoord.z == 1.0 - (1.0 - gl_TessCoord.z)

to aid in replicating subdivision computations.

gl_TessLevelOuter and gl_TessLevelInner

The input variables

of gl_ViewportIndex is undefined for executions of the shader that take that path. As a fragment shader input, gl_ViewportIndex will have the same value that was written to the output variable gl_ViewportIndex in the geometry stage. If the geometry stage does not dynamically assign to gl_ViewportIndex, the value of gl_ViewportIndex in the fragment shader will be undefined. If the

const int gl_MaxComputeImageUniforms = 8;
const int gl_MaxComputeAtomicCounters = 8;
const int gl_MaxComputeAtomicCounterBuffers = 1;
const int gl_MaxVertexAttribs = 16;
const int gl_MaxVertexUniformComponents = 1024;
const int gl_MaxVaryingComponents = 60;
const int gl_MaxVertexOutputComponents = 64;
const int gl_MaxGeometryInputComponents = 64;
const int gl_MaxGeometryOutputComponents = 128;
const int

const int gl_MaxVertexAtomicCounters = 0;
const int gl_MaxTessControlAtomicCounters = 0;
const int gl_MaxTessEvaluationAtomicCounters = 0;
const int gl_MaxGeometryAtomicCounters = 0;
const int gl_MaxFragmentAtomicCounters = 8;
const int gl_MaxCombinedAtomicCounters = 8;
const int gl_MaxAtomicCounterBindings = 1;
const int gl_MaxVertexAtomicCounterBuffers = 0;
const int gl_MaxTessControlAtomicCounterBuffers = 0;
const int

• ‘‘Shader Invocation Control Functions’’
• ‘‘Shader Memory Control Functions’’

Listing all the prototypes for all the GLSL built-in functions would fill this entire book. Instead, we use some generic notations that represent multiple types. These are listed in Table C.2, and allow a single prototype listing below to represent multiple actual prototypes.

Table C.2 Notation

great counterexample, where each component of the result is affected by all the components of the input.

Angle and Trigonometry Functions

Function parameters specified as angle are assumed to be in units of radians.

genType radians(genType degrees);

Converts degrees to radians: (π / 180) · degrees.

genType degrees(genType radians);

Converts radians to degrees: (180 / π) · radians.

genType sin(genType angle);

genType asin(genType x);

Arc sine. Returns an angle whose sine is x. The range of values returned by this function is −π/2 to π/2, inclusive. Results are undefined if x > 1 or x < −1.

genType acos(genType x);

Arc cosine. Returns an angle whose cosine is x. The range of values returned by this function is 0 to π, inclusive. Results are undefined if x > 1 or

genType tanh(genType x);

Returns the hyperbolic tangent function, sinh(x) / cosh(x).

genType asinh(genType x);

Arc hyperbolic sine; returns the inverse of sinh.

genType acosh(genType x);

Arc hyperbolic cosine; returns the nonnegative inverse of cosh. Results are undefined if x < 1.

genType atanh(genType x);

Arc hyperbolic tangent; returns the inverse of tanh. Results are undefined if x ≥

genType log(genType x);

Returns the natural logarithm of x, i.e., returns the value y that satisfies the equation x = e^y. Results are undefined if x ≤ 0.

genType exp2(genType x);

Returns 2 raised to the x power: 2^x.

genType log2(genType x);

Returns the base 2 logarithm of x, i.e., returns the value y that satisfies the equation x = 2^y. Results are undefined if x ≤ 0.

Common Functions

genType abs(genType x);
genIType abs(genIType x);
genDType abs(genDType x);

Return x if x ≥ 0; otherwise return −x.

genType sign(genType x);
genIType sign(genIType x);
genDType sign(genDType x);

Return 1.0 if x > 0, 0.0 if x = 0, or −1.0 if x < 0.

genType floor(genType x);
genDType floor(genDType x);

Return a value equal to the nearest integer that is less than

genType roundEven(genType x);
genDType roundEven(genDType x);

Return a value equal to the nearest integer to x. A fractional part of 0.5 will round toward the nearest even integer. (Both 3.5 and 4.5 for x will return 4.0.)

genType ceil(genType x);
genDType ceil(genDType x);

Return a value equal to the nearest integer that is greater than or equal to x.

genType fract(genType x);

genDType min(genDType x, genDType y);
genDType min(genDType x, double y);
genIType min(genIType x, genIType y);
genIType min(genIType x, int y);
genUType min(genUType x, genUType y);
genUType min(genUType x, uint y);

Return y if y < x; otherwise return x.

genType max(genType x, genType y);
genType max(genType x, float y);
genDType max(genDType x, genDType y);
genDType max(genDType x,

genType mix(genType x, genType y, genType a);
genType mix(genType x, genType y, float a);
genDType mix(genDType x, genDType y, genDType a);
genDType mix(genDType x, genDType y, double a);

Return the linear blend of x and y, i.e., x(1 − a) + ya.

genType mix(genType x, genType y, genBType a);
genDType mix(genDType x, genDType y, genBType a);

Select which vector each returned

Return 0.0 if x ≤ edge0 and 1.0 if x ≥ edge1, and performs smooth Hermite interpolation between 0 and 1 when edge0 < x < edge1. This is useful in cases where you would want a threshold function with a smooth transition. This is equivalent to:

genType t;
t = clamp((x - edge0) / (edge1 - edge0), 0, 1);
return t * t * (3 - 2 * t);

(And similarly for
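
The reference implementation above translates directly to C. A scalar sketch (helper names are illustrative, not part of GLSL):

```c
#include <assert.h>

/* Clamp x to the range [lo, hi]. */
static float clampf(float x, float lo, float hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Scalar version of GLSL smoothstep(), following the reference
 * implementation given in the text: normalize x into [0, 1] between
 * the two edges, then apply the Hermite polynomial 3t^2 - 2t^3. */
float smoothstepf(float edge0, float edge1, float x)
{
    float t = clampf((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}
```

Unlike step(), the result approaches 0 and 1 with zero slope at the edges, which is what makes the transition look smooth.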


genType fma(genType a, genType b, genType c);
genDType fma(genDType a, genDType b, genDType c);

Compute and return a × b + c. In uses where the return value is eventually consumed by a variable declared as precise:

• fma is considered a single operation, whereas the expression a × b + c consumed by a variable declared precise is considered two operations.

• The precision

Floating-Point Pack and Unpack Functions

These functions do not operate component-wise; rather, they operate as described in each case.

uint packUnorm2x16(vec2 v);
uint packSnorm2x16(vec2 v);
uint packUnorm4x8(vec4 v);
uint packSnorm4x8(vec4 v);

First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values. Then, the results are packed into the returned 32-bit
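
The unsigned-normalized case can be sketched in C. This emulation of packUnorm4x8 clamps each component to [0, 1], converts it to an 8-bit integer, and packs the first component into the least-significant byte (the rounding mode shown is round-to-nearest; an implementation is permitted some latitude here):

```c
#include <assert.h>
#include <stdint.h>

/* Convert a float in [0, 1] (clamping if outside) to an 8-bit
 * unsigned-normalized value, rounding to nearest. */
static uint8_t to_unorm8(float v)
{
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    return (uint8_t)(v * 255.0f + 0.5f);
}

/* CPU-side sketch of GLSL packUnorm4x8: the first component lands in
 * the least-significant byte of the 32-bit result. */
uint32_t pack_unorm_4x8(float x, float y, float z, float w)
{
    return (uint32_t)to_unorm8(x)
         | (uint32_t)to_unorm8(y) << 8
         | (uint32_t)to_unorm8(z) << 16
         | (uint32_t)to_unorm8(w) << 24;
}
```

This is the same packing the order-independent-transparency example in Chapter 11 relies on when it stores a fragment's color in a single uint channel.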


double packDouble2x32(uvec2 v);

Returns a double-precision value obtained by packing the components of v into a 64-bit value. If an IEEE 754 Inf or NaN is created, it will not signal, and the resulting floating-point value is unspecified. Otherwise, the bit-level representation of v is preserved. The first vector component specifies the 32 least significant bits; the second component

Geometric Functions

These operate on vectors as vectors, not component-wise.

float length(genType x);
double length(genDType x);

Return the length of vector x, i.e., sqrt(x[0]^2 + x[1]^2 + ···).

float distance(genType p0, genType p1);
double distance(genDType p0, genDType p1);

Return the distance between p0 and p1: length(p0 − p1).

float dot(genType x, genType y);
double dot(genDType x,

genType normalize(genType x);
genDType normalize(genDType x);

Return a vector in the same direction as x but with a length of 1.

genType faceforward(genType N, genType I, genType Nref);
genDType faceforward(genDType N, genDType I, genDType Nref);

    if (dot(Nref, I) < 0.0)
        return N;
    else
        return -N;

genType reflect(genType I, genType N);
genDType reflect(genDType I, genDType N);

For the
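
reflect computes I − 2 · dot(N, I) · N, which assumes N is already normalized. A C sketch for the three-component case (vec3 and helper names defined locally for illustration):

```c
#include <assert.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* reflect(I, N) = I - 2 * dot(N, I) * N. N must be normalized for the
 * result to be the mirror of I about the plane with normal N. */
vec3 reflect3(vec3 i, vec3 n)
{
    float d = 2.0f * dot3(n, i);
    vec3 r = { i.x - d * n.x,
               i.y - d * n.y,
               i.z - d * n.z };
    return r;
}
```

For example, a ray heading down and to the right, (1, −1, 0), reflected off a floor with normal (0, 1, 0), bounces up to (1, 1, 0).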

  • Page - 749

Matrix Functions

For each of the following built-in matrix functions, there is both a single-precision floating-point version, where all arguments and return values are single precision, and a double-precision floating-point version, where all arguments and return values are double precision. Only the single-precision floating-point version is shown.

mat matrixCompMult(mat x, mat y);

Multiply matrix x by matrix y component-wise, i.e., result[i][j] is the scalar product of x[i][j] and y[i][j]. Note: to get linear-algebraic matrix multiplication, use the multiply operator (*).

mat2x3 transpose(mat3x2 m);
mat3x2 transpose(mat2x3 m);
mat2x4 transpose(mat4x2 m);
mat4x2 transpose(mat2x4 m);
mat3x4 transpose(mat4x3 m);
mat4x3 transpose(mat3x4 m);

Return a matrix that is the transpose of m. The input matrix m is not modified.

float determinant(mat2 m);
float determinant(mat3 m);
float determinant(mat4 m);

Return the determinant of m.

mat2 inverse(mat2 m);
mat3 inverse(mat3 m);
mat4 inverse(mat4 m);

Return a matrix that is the inverse of m. The input matrix m is not modified. The values in the returned matrix are undefined if m is singular or poorly conditioned (nearly singular).

bvec lessThan(vec x, vec y);
bvec lessThan(ivec x, ivec y);
bvec lessThan(uvec x, uvec y);

Return the component-wise compare of x < y.

bvec lessThanEqual(vec x, vec y);
bvec lessThanEqual(ivec x, ivec y);
bvec lessThanEqual(uvec x, uvec y);

Return the component-wise compare of x ≤ y.

bvec greaterThan(vec x, vec y);
bvec greaterThan(ivec x, ivec y);
bvec greaterThan(uvec x, uvec y);

Return the component-wise compare of x > y.

bvec notEqual(vec x, vec y);
bvec notEqual(ivec x, ivec y);
bvec notEqual(uvec x, uvec y);
bvec notEqual(bvec x, bvec y);

Return the component-wise compare of x ≠ y.

bool any(bvec x);

Returns true if any component of x is true.

bool all(bvec x);

Returns true only if all components of x are true.

bvec not(bvec x);

Returns the component-wise logical complement of x.

Integer Functions

genUType usubBorrow(genUType x, genUType y, out genUType borrow);

Subtracts the 32-bit unsigned integer y from x, returning the difference if nonnegative, or 2³² plus the difference otherwise. The value borrow is set to 0 if x ≥ y, or to 1 otherwise.

void umulExtended(genUType x, genUType y, out genUType msb, out genUType lsb);
void imulExtended(genIType x, genIType y, out genIType msb, out genIType lsb);

Multiplies 32-bit integers x and y, producing a 64-bit result. The 32 least significant bits are returned in lsb; the 32 most significant bits are returned in msb.

The result will have bits [offset, offset + bits − 1] taken from bits [0, bits − 1] of insert, and all other bits taken directly from the corresponding bits of base. If bits is zero, the result will simply be base. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.

Texture Functions

Texture lookup functions are available in all shading stages. However, level of detail is implicitly computed only for fragment shaders, so that OpenGL can automatically perform mipmap filtering. Other shading stages use a base level of detail of zero or use the texture directly if it is not mipmapped. Texture functions that require implicit derivatives are undefined within nonuniform control flow.

For Cube forms, the direction of P is used to select the face on which to do a two-dimensional texture lookup. For Array forms, the array layer used will be

max(0, min(d − 1, floor(layer + 0.5)))

where d is the depth of the texture array and layer comes from the component indicated in the tables below. For depth-stencil textures, the sampler type should match the component being accessed as set through the OpenGL API.

ivec2 textureSize(gsampler2DRect sampler);
ivec2 textureSize(sampler2DRectShadow sampler);
ivec2 textureSize(gsampler1DArray sampler, int lod);
ivec3 textureSize(gsampler2DArray sampler, int lod);
ivec2 textureSize(sampler1DArrayShadow sampler, int lod);
ivec3 textureSize(sampler2DArrayShadow sampler, int lod);
int textureSize(gsamplerBuffer sampler);
ivec2 textureSize(gsampler2DMS sampler);
ivec3 textureSize(gsampler2DMSArray sampler);

Return the dimensions of level lod (if present) of the texture bound to sampler. The components of the return value are filled in, in order, with the width, height, and depth of the texture.

int textureQueryLevels(gsampler1D sampler);
int textureQueryLevels(gsampler2D sampler);
int textureQueryLevels(gsampler3D sampler);
int textureQueryLevels(gsamplerCube sampler);
int textureQueryLevels(gsampler1DArray sampler);
int textureQueryLevels(gsampler2DArray sampler);
int textureQueryLevels(gsamplerCubeArray sampler);
int textureQueryLevels(sampler1DShadow sampler);
int textureQueryLevels(sampler2DShadow sampler);
int textureQueryLevels(samplerCubeShadow sampler);
int textureQueryLevels(sampler1DArrayShadow sampler);
int textureQueryLevels(sampler2DArrayShadow sampler);
int textureQueryLevels(samplerCubeArrayShadow sampler);

Return the number of mipmap levels accessible in the texture associated with sampler.

Use the texture coordinate P to do a texture lookup in the texture currently bound to sampler. For shadow forms: when compare is present, it is used as Dref and the array layer comes from P.w. When compare is not present, the last component of P is used as Dref and the array layer comes from the second-to-last component of P. (The second component of P is unused for 1D shadow lookups.)

Do a texture lookup as in texture but with explicit level of detail; lod specifies λ_base and sets the partial derivatives as follows:

∂u/∂x = 0,  ∂v/∂x = 0,  ∂w/∂x = 0
∂u/∂y = 0,  ∂v/∂y = 0,  ∂w/∂y = 0

gvec4 textureOffset(gsampler1D sampler, float P, int offset [, float bias]);
gvec4 textureOffset(gsampler2D sampler, vec2 P, ivec2 offset [, float bias]);
gvec4 textureOffset(gsampler3D sampler, vec3 P, ivec3 offset [, float bias]);

Note that offset does not apply to the layer coordinate for texture arrays. Note also that texel offsets are not supported for cube maps.

gvec4 texelFetch(gsampler1D sampler, int P, int lod);
gvec4 texelFetch(gsampler2D sampler, ivec2 P, int lod);
gvec4 texelFetch(gsampler3D sampler, ivec3 P, int lod);
gvec4 texelFetch(gsampler2DRect sampler, ivec2 P);
gvec4 texelFetch(gsampler1DArray sampler, ivec2 P, int lod);
gvec4 texelFetch(gsampler2DArray sampler, ivec3 P, int lod);
gvec4 texelFetch(gsamplerBuffer sampler, int P);
gvec4 texelFetch(gsampler2DMS sampler, ivec2 P, int sample);
gvec4 texelFetch(gsampler2DMSArray sampler, ivec3 P, int sample);

Use integer texture coordinate P to look up a single texel from the level of detail lod of the texture bound to sampler.

gvec4 textureProjOffset(gsampler1D sampler, vec2 P, int offset [, float bias]);
gvec4 textureProjOffset(gsampler1D sampler, vec4 P, int offset [, float bias]);
gvec4 textureProjOffset(gsampler2D sampler, vec3 P, ivec2 offset [, float bias]);
gvec4 textureProjOffset(gsampler2D sampler, vec4 P, ivec2 offset [, float bias]);
gvec4 textureProjOffset(gsampler3D sampler, vec4 P, ivec3 offset [, float bias]);
gvec4 textureProjOffset(gsampler2DRect sampler, vec3 P, ivec2 offset);
gvec4 textureProjOffset(gsampler2DRect sampler, vec4 P, ivec2 offset);
float textureProjOffset(sampler2DRectShadow sampler, vec4 P, ivec2 offset);
float textureProjOffset(sampler1DShadow sampler, vec4 P, int offset [, float bias]);
float textureProjOffset(sampler2DShadow sampler, vec4 P, ivec2 offset [, float bias]);

Do a projective texture lookup as described in textureProj, offset by offset as described in textureOffset.

Do an offset texture lookup with explicit level of detail. See textureLod and textureOffset.

gvec4 textureProjLod(gsampler1D sampler, vec2 P, float lod);
gvec4 textureProjLod(gsampler1D sampler, vec4 P, float lod);
gvec4 textureProjLod(gsampler2D sampler, vec3 P, float lod);
gvec4 textureProjLod(gsampler2D sampler, vec4 P, float lod);
gvec4 textureProjLod(gsampler3D sampler, vec4 P, float lod);
float textureProjLod(sampler1DShadow sampler, vec4 P, float lod);
float textureProjLod(sampler2DShadow sampler, vec4 P, float lod);

Do a projective texture lookup with explicit level of detail. See textureProj and textureLod.

gvec4 textureGrad(gsampler1D sampler, float P, float dPdx, float dPdy);
gvec4 textureGrad(gsampler2D sampler, vec2 P, vec2 dPdx, vec2 dPdy);
gvec4 textureGrad(gsampler3D sampler, vec3 P, vec3 dPdx, vec3 dPdy);
gvec4 textureGrad(gsamplerCube sampler, vec3 P, vec3 dPdx, vec3 dPdy);
gvec4 textureGrad(gsampler2DRect sampler, vec2 P, vec2 dPdx, vec2 dPdy);
float textureGrad(sampler2DRectShadow sampler, vec3 P, vec2 dPdx, vec2 dPdy);

Do a texture lookup as in texture but with explicit gradients. The partial derivatives of P are with respect to window x and window y.

For a multidimensional texture, set

∂s/∂x = ∂P.s/∂x,  ∂t/∂x = ∂P.t/∂x,  ∂r/∂x = ∂P.p/∂x
∂s/∂y = ∂P.s/∂y,  ∂t/∂y = ∂P.t/∂y,  ∂r/∂y = ∂P.p/∂y

For the cube version, the partial derivatives of P are assumed to be in the coordinate system used before texture coordinates are projected onto the appropriate cube face.

gvec4 textureGradOffset(gsampler1D sampler, float P, float dPdx, float dPdy, int offset);
gvec4 textureGradOffset(gsampler2D sampler, vec2 P, vec2 dPdx, vec2 dPdy, ivec2 offset);
gvec4 textureGradOffset(gsampler3D sampler, vec3 P, vec3 dPdx, vec3 dPdy, ivec3 offset);

Do a texture lookup with both explicit gradients and offset, as described in textureGrad and textureOffset.

gvec4 textureProjGrad(gsampler1D sampler, vec2 P, float dPdx, float dPdy);
gvec4 textureProjGrad(gsampler1D sampler, vec4 P, float dPdx, float dPdy);
gvec4 textureProjGrad(gsampler2D sampler, vec3 P, vec2 dPdx, vec2 dPdy);
gvec4 textureProjGrad(gsampler2D sampler, vec4 P, vec2 dPdx, vec2 dPdy);
gvec4 textureProjGrad(gsampler3D sampler, vec4 P, vec3 dPdx, vec3 dPdy);
gvec4 textureProjGrad(gsampler2DRect sampler, vec3 P, vec2 dPdx, vec2 dPdy);
gvec4 textureProjGrad(gsampler2DRect sampler, vec4 P, vec2 dPdx, vec2 dPdy);
float textureProjGrad(sampler2DRectShadow sampler, vec4 P, vec2 dPdx, vec2 dPdy);
float textureProjGrad(sampler1DShadow sampler, vec4 P, float dPdx, float dPdy);
float textureProjGrad(sampler2DShadow sampler, vec4 P, vec2 dPdx, vec2 dPdy);

Do a texture lookup both projectively, as described in textureProj, and with explicit gradients, as described in textureGrad.

float textureProjGradOffset(sampler2DRectShadow sampler, vec4 P, vec2 dPdx, vec2 dPdy, ivec2 offset);
gvec4 textureProjGradOffset(gsampler3D sampler, vec4 P, vec3 dPdx, vec3 dPdy, ivec3 offset);
float textureProjGradOffset(sampler1DShadow sampler, vec4 P, float dPdx, float dPdy, int offset);
float textureProjGradOffset(sampler2DShadow sampler, vec4 P, vec2 dPdx, vec2 dPdy, ivec2 offset);

Do a texture lookup projectively and with explicit gradients, as described in textureProjGrad, as well as with offset, as described in textureOffset.

gvec4 textureGather(gsampler2D sampler, vec2 P [, int comp]);
gvec4 textureGather(gsampler2DArray sampler, vec3 P [, int comp]);
gvec4 textureGather(gsamplerCube sampler, vec3 P [, int comp]);
gvec4 textureGather(gsamplerCubeArray sampler, vec4 P [, int comp]);
gvec4 textureGather(gsampler2DRect sampler, vec2 P [, int comp]);
vec4 textureGather(sampler2DShadow sampler, vec2 P, float refZ);
vec4 textureGather(sampler2DArrayShadow sampler, vec3 P, float refZ);
vec4 textureGather(samplerCubeShadow sampler, vec3 P, float refZ);
vec4 textureGather(samplerCubeArrayShadow sampler, vec4 P, float refZ);
vec4 textureGather(sampler2DRectShadow sampler, vec2 P, float refZ);

Return in a vec4 the selected component comp (default 0) of each of the four texels that would be used for bilinear filtering at P. For shadow forms, each returned component is instead the result of comparing the corresponding texel's depth value with refZ.

vec4 textureGatherOffset(sampler2DShadow sampler, vec2 P, float refZ, ivec2 offset);
vec4 textureGatherOffset(sampler2DArrayShadow sampler, vec3 P, float refZ, ivec2 offset);
vec4 textureGatherOffset(sampler2DRectShadow sampler, vec2 P, float refZ, ivec2 offset);

Perform a texture gather operation as in textureGather, offset by offset as described in textureOffset, except that offset can be variable (nonconstant) and the implementation-dependent minimum and maximum offset values are given by GL_MIN_PROGRAM_TEXTURE_GATHER_OFFSET and GL_MAX_PROGRAM_TEXTURE_GATHER_OFFSET.

with respect to other forms of access to the counter, or that they are serialized when applied to separate counters. Such cases would require additional use of fences, barriers, or other forms of synchronization if atomicity or serialization is desired. The value returned by an atomic-counter function is the value of the atomic counter, read and (for the increment and decrement functions) updated in a single atomic operation.

the new value to memory, and return the original value read. The contents of the memory being updated by the atomic operation are guaranteed not to be modified by any other assignment or atomic memory function in any shader invocation between the time the original value is read and the time the new value is written. Atomic memory functions are supported only for a limited set of variables; a shader will fail to compile if the mem argument of an atomic memory function does not correspond to a buffer or shared variable.

uint atomicOr(inout uint mem, uint data);
int atomicOr(inout int mem, int data);

Compute a new value by performing a bit-wise or of the value of data and the contents of mem.

uint atomicXor(inout uint mem, uint data);
int atomicXor(inout int mem, int data);

Compute a new value by performing a bit-wise exclusive or of the value of data and the contents of mem.

uint atomicExchange(inout uint mem, uint data);
int atomicExchange(inout int mem, int data);

Compute a new value by simply copying the value of data.

Loads and stores support float, integer, and unsigned integer types. The IMAGE_PARAMS in the prototypes below is a placeholder representing 33 separate functions, each for a different type of image variable. The IMAGE_PARAMS placeholder is replaced by one of the following parameter lists:

• gimage1D image, int P
• gimage2D image, ivec2 P
• gimage3D image, ivec3 P
• gimage2DRect image, ivec2 P
• gimageCube image, ivec3 P
• gimageBuffer image, int P
• gimage1DArray image, ivec2 P
• gimage2DArray image, ivec3 P
• gimageCubeArray image, ivec3 P
• gimage2DMS image, ivec2 P, int sample
• gimage2DMSArray image, ivec3 P, int sample

int imageSize(gimage1D image);
ivec2 imageSize(gimage2D image);
ivec3 imageSize(gimage3D image);
ivec2 imageSize(gimageCube image);
ivec3 imageSize(gimageCubeArray image);
ivec2 imageSize(gimage2DRect image);
ivec2 imageSize(gimage1DArray image);
ivec3 imageSize(gimage2DArray image);
int imageSize(gimageBuffer image);
ivec2 imageSize(gimage2DMS image);
ivec3 imageSize(gimage2DMSArray image);

Return the dimensions of the image or images bound to image. For arrayed images, the last component of the return value holds the number of layers.

uint imageAtomicMin(IMAGE_PARAMS, uint data);
int imageAtomicMin(IMAGE_PARAMS, int data);

Compute a new value by taking the minimum of the value of data and the contents of the selected texel.

uint imageAtomicMax(IMAGE_PARAMS, uint data);
int imageAtomicMax(IMAGE_PARAMS, int data);

Compute a new value by taking the maximum of the value of data and the contents of the selected texel.

uint imageAtomicAnd(IMAGE_PARAMS, uint data);
int imageAtomicAnd(IMAGE_PARAMS, int data);

Compute a new value by performing a bit-wise and of the value of data and the contents of the selected texel.

uint imageAtomicCompSwap(IMAGE_PARAMS, uint compare, uint data);
int imageAtomicCompSwap(IMAGE_PARAMS, int compare, int data);

Compare the value of compare and the contents of the selected texel. If the values are equal, the new value is given by data; otherwise, it is taken from the original value loaded from the texel.

Fragment Processing Functions

Fragment processing functions are available only in fragment shaders.

Interpolation Functions

Built-in interpolation functions are available to compute an interpolated value of a fragment shader input variable at a shader-specified (x, y) location. A separate (x, y) location may be used for each invocation of the built-in function, and those locations may differ from the default (x, y) location used to produce the default value of the input.

vec3 interpolateAtOffset(vec3 interpolant, vec2 offset);
vec4 interpolateAtOffset(vec4 interpolant, vec2 offset);

Return the value of the input interpolant variable sampled at an offset from the center of the pixel specified by offset. The two floating-point components of offset give the offset in pixels in the x and y directions, respectively. An offset of (0, 0) identifies the center of the pixel.

float noise1(genType x);

Returns a 1D noise value based on the input value x.

vec2 noise2(genType x);

Returns a 2D noise value based on the input value x.

vec3 noise3(genType x);

Returns a 3D noise value based on the input value x.

vec4 noise4(genType x);

Returns a 4D noise value based on the input value x.

Geometry Shader Functions

These functions are available only in geometry shaders.

primitive for each stream is automatically completed. It is not necessary to call EndStreamPrimitive if the geometry shader writes only a single primitive. Multiple output streams are supported only if the output primitive type is declared to be points. A program will fail to link if it contains a geometry shader calling EmitStreamVertex or EndStreamPrimitive if its output primitive type is not points.

When multiple output streams are supported, this is equivalent to calling EndStreamPrimitive(0).

Shader Invocation Control Functions

The shader invocation control function is available only in tessellation control shaders and compute shaders. It is used to control the relative execution order of multiple shader invocations used to process a patch (in the case of tessellation control shaders) or a local workgroup (in the case of compute shaders).

void memoryBarrier();

Controls the ordering of memory transactions issued by a single shader invocation.

void memoryBarrierAtomicCounter();

Controls the ordering of accesses to atomic-counter variables issued by a single shader invocation.

void memoryBarrierBuffer();

Controls the ordering of memory transactions to buffer variables issued within a single shader invocation.

void memoryBarrierShared();

Controls the ordering of memory transactions to shared variables issued within a single shader invocation.

all reads and writes previously performed by the caller that accessed selected variable types, and then return with no other effect. The built-in functions memoryBarrierAtomicCounter, memoryBarrierBuffer, memoryBarrierImage, and memoryBarrierShared wait for the completion of accesses to atomic counter, buffer, image, and shared variables, respectively. The built-in functions memoryBarrier and groupMemoryBarrier wait for the completion of accesses to all of the above variable types.

Appendix D

State Variables

This appendix lists the queryable OpenGL state variables, their default values, and the commands for obtaining the values of these variables. It contains the following major sections:

• ''The Query Commands''
• ''OpenGL State Variables''

The Query Commands

In addition to the basic commands, such as glGetIntegerv() and glIsEnabled(), that obtain the values of simple state variables, there are other specialized commands that return more complex state variables. The prototypes for these specialized commands are listed here. Some of these routines, such as glGetError() and glGetString(), were discussed in more detail in earlier chapters.

void glGetActiveUniformBlockiv(GLuint program, GLuint uniformBlockIndex, GLenum pname, GLint *params);
void glGetActiveUniformBlockName(GLuint program, GLuint uniformBlockIndex, GLsizei bufSize, GLsizei *length, GLchar *uniformBlockName);
void glGetActiveUniformName(GLuint program, GLuint uniformIndex, GLsizei bufSize, GLsizei *length, GLchar *uniformName);
void glGetActiveUniformsiv(GLuint program, GLsizei uniformCount, const GLuint *uniformIndices, GLenum pname, GLint *params);

GLuint glGetDebugMessageLog(GLuint count, GLsizei bufSize, GLenum *sources, GLenum *types, GLuint *ids, GLenum *severities, GLsizei *lengths, GLchar *messageLog);
void glGetDoublev(GLenum pname, GLdouble *params);
void glGetDoublei_v(GLenum target, GLuint index, GLdouble *data);
GLenum glGetError(void);
void glGetFloatv(GLenum pname, GLfloat *params);
void glGetFloati_v(GLenum target, GLuint index, GLfloat *data);

void glGetInternalformati64v(GLenum target, GLenum internalformat, GLenum pname, GLsizei bufSize, GLint64 *params);
void glGetMultisamplefv(GLenum pname, GLuint index, GLfloat *val);
void glGetObjectLabel(GLenum identifier, GLuint name, GLsizei bufSize, GLsizei *length, GLchar *label);
void glGetObjectPtrLabel(const void *ptr, GLsizei bufSize, GLsizei *length, GLchar *label);
void glGetPointerv(GLenum pname, GLvoid **params);

GLint glGetProgramResourceLocationIndex(GLuint program, GLenum programInterface, const GLchar *name);
void glGetProgramResourceName(GLuint program, GLenum programInterface, GLuint index, GLsizei bufSize, GLsizei *length, GLchar *name);
void glGetProgramResourceiv(GLuint program, GLenum programInterface, GLuint index, GLsizei propCount, const GLenum *props, GLsizei bufSize, GLsizei *length, GLint *params);

void glGetSamplerParameterIiv(GLuint sampler, GLenum pname, GLint *params);
void glGetSamplerParameterIuiv(GLuint sampler, GLenum pname, GLuint *params);
void glGetShaderInfoLog(GLuint shader, GLsizei bufSize, GLsizei *length, GLchar *infoLog);
void glGetShaderiv(GLuint shader, GLenum pname, GLint *params);
void glGetShaderPrecisionFormat(GLenum shadertype, GLenum precisiontype, GLint *range, GLint *precision);

void glGetTexParameterfv(GLenum target, GLenum pname, GLfloat *params);
void glGetTexParameteriv(GLenum target, GLenum pname, GLint *params);
void glGetTexParameterIiv(GLenum target, GLenum pname, GLint *params);
void glGetTexParameterIuiv(GLenum target, GLenum pname, GLuint *params);
void glGetTransformFeedbackVarying(GLuint program, GLuint index, GLsizei bufSize, GLsizei *length, GLsizei *size, GLenum *type, GLchar *name);

void glGetVertexAttribdv(GLuint index, GLenum pname, GLdouble *params);
void glGetVertexAttribfv(GLuint index, GLenum pname, GLfloat *params);
void glGetVertexAttribiv(GLuint index, GLenum pname, GLint *params);
void glGetVertexAttribIiv(GLuint index, GLenum pname, GLint *params);
void glGetVertexAttribIuiv(GLuint index, GLenum pname, GLuint *params);
void glGetVertexAttribLdv(GLuint index, GLenum pname, GLdouble *params);

Current Values and Associated Data

Table D.1 Current Values and Associated Data

GL_PATCH_VERTICES: Number of vertices in an input patch. Initial: 3. Get: glGetIntegerv()
GL_PATCH_DEFAULT_OUTER_LEVEL: Default outer tessellation level when not using a tessellation control shader. Initial: (1.0, 1.0, 1.0, 1.0). Get: glGetFloatv()
GL_PATCH_DEFAULT_INNER_LEVEL: Default inner tessellation level when not using a tessellation control shader. Initial: (1.0, 1.0). Get: glGetFloatv()

Vertex Array Object State

Table D.2 State Variables for Vertex Array Objects

GL_VERTEX_ATTRIB_ARRAY_ENABLED: Vertex attribute array enable. Initial: GL_FALSE. Get: glGetVertexAttribiv()
GL_VERTEX_ATTRIB_ARRAY_SIZE: Vertex attribute array size. Initial: 4. Get: glGetVertexAttribiv()
GL_VERTEX_ATTRIB_ARRAY_STRIDE: Vertex attribute array stride. Initial: 0. Get: glGetVertexAttribiv()
GL_VERTEX_ATTRIB_ARRAY_TYPE: Vertex attribute array type. Initial: GL_FLOAT. Get: glGetVertexAttribiv()
GL_VERTEX_ATTRIB_ARRAY_POINTER: Vertex attribute array pointer. Initial: NULL. Get: glGetVertexAttribPointerv()
GL_LABEL: Debug label. Initial: empty string. Get: glGetObjectLabel()
GL_ELEMENT_ARRAY_BUFFER_BINDING: Element array buffer binding. Initial: 0. Get: glGetIntegerv()
GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING: Attribute array buffer binding. Initial: 0. Get: glGetVertexAttribiv()

Vertex Array Data

Table D.3 State Variables for Vertex Array Data (Not Stored in a Vertex Array Object)

GL_ARRAY_BUFFER_BINDING: Current buffer binding. Initial: 0. Get: glGetIntegerv()
GL_DRAW_INDIRECT_BUFFER_BINDING: Indirect command buffer binding. Initial: 0. Get: glGetIntegerv()
GL_VERTEX_ARRAY_BINDING: Current vertex array object binding. Initial: 0. Get: glGetIntegerv()
GL_PRIMITIVE_RESTART: Primitive restart enable. Initial: GL_FALSE. Get: glIsEnabled()

Buffer Object State

Table D.4 State Variables for Buffer Objects

GL_BUFFER_SIZE: Buffer data size. Initial: 0. Get: glGetBufferParameteri64v()
GL_BUFFER_USAGE: Buffer usage pattern. Initial: GL_STATIC_DRAW. Get: glGetBufferParameteriv()
GL_BUFFER_ACCESS: Buffer access flag. Initial: GL_READ_WRITE. Get: glGetBufferParameteriv()
GL_BUFFER_ACCESS_FLAGS: Extended buffer access flag. Initial: 0. Get: glGetBufferParameteriv()
GL_BUFFER_MAPPED: Buffer map flag. Initial: GL_FALSE. Get: glGetBufferParameteriv()

Transformation State

Table D.5 Transformation State Variables

GL_VIEWPORT: Viewport origin and extent. Initial: (0, 0, width, height), where width and height are the dimensions of the window that OpenGL will render into. Get: glGetFloati_v()
GL_DEPTH_RANGE: Depth range near and far. Initial: 0, 1. Get: glGetDoublei_v()
GL_CLIP_DISTANCEi: ith user clipping plane enabled. Initial: GL_FALSE. Get: glIsEnabledi()

Coloring State

Table D.6 State Variables for Controlling Coloring

GL_CLAMP_READ_COLOR: Read color clamping. Initial: GL_FIXED_ONLY. Get: glGetIntegerv()
GL_PROVOKING_VERTEX: Provoking vertex convention. Initial: GL_LAST_VERTEX_CONVENTION. Get: glGetIntegerv()

Rasterization State

Table D.7 State Variables for Controlling Rasterization

GL_RASTERIZER_DISCARD: Discard primitives before rasterization. Initial: GL_FALSE. Get: glIsEnabled()
GL_POINT_SIZE: Point size. Initial: 1.0. Get: glGetFloatv()
GL_POINT_FADE_THRESHOLD_SIZE: Threshold for alpha attenuation. Initial: 1.0. Get: glGetFloatv()
GL_POINT_SPRITE_COORD_ORIGIN: Origin orientation for point sprites. Initial: GL_UPPER_LEFT. Get: glGetIntegerv()
GL_POLYGON_MODE: Polygon rasterization mode (front and back). Initial: GL_FILL. Get: glGetIntegerv()
GL_POLYGON_OFFSET_FACTOR: Polygon offset factor. Initial: 0. Get: glGetFloatv()
GL_POLYGON_OFFSET_UNITS: Polygon offset units. Initial: 0. Get: glGetFloatv()
GL_POLYGON_OFFSET_POINT: Polygon offset enable for GL_POINT mode rasterization. Initial: GL_FALSE. Get: glIsEnabled()

Multisampling

Table D.8 State Variables for Multisampling

GL_MULTISAMPLE: Multisample rasterization. Initial: GL_TRUE. Get: glIsEnabled()
GL_SAMPLE_ALPHA_TO_COVERAGE: Modify coverage from alpha. Initial: GL_FALSE. Get: glIsEnabled()
GL_SAMPLE_ALPHA_TO_ONE: Set alpha to maximum. Initial: GL_FALSE. Get: glIsEnabled()
GL_SAMPLE_COVERAGE: Mask to modify coverage. Initial: GL_FALSE. Get: glIsEnabled()
GL_SAMPLE_COVERAGE_VALUE: Coverage mask value. Initial: 1. Get: glGetFloatv()

Textures

Table D.9 State Variables for Texture Units

GL_TEXTURE_xD: True if xD texturing is enabled; x is 1, 2, or 3. Initial: GL_FALSE. Get: glIsEnabled()
GL_TEXTURE_CUBE_MAP: True if cube-map texturing is enabled. Initial: GL_FALSE. Get: glIsEnabled()
GL_TEXTURE_BINDING_xD: Texture object bound to GL_TEXTURE_xD. Initial: 0. Get: glGetIntegerv()
GL_TEXTURE_BINDING_1D_ARRAY: Texture object bound to GL_TEXTURE_1D_ARRAY. Initial: 0. Get: glGetIntegerv()
GL_TEXTURE_BINDING_2D_MULTISAMPLE: Texture object bound to GL_TEXTURE_2D_MULTISAMPLE. Initial: 0. Get: glGetIntegerv()
GL_TEXTURE_BINDING_2D_MULTISAMPLE_ARRAY: Texture object bound to GL_TEXTURE_2D_MULTISAMPLE_ARRAY. Initial: 0. Get: glGetIntegerv()
GL_SAMPLER_BINDING: Sampler object bound to active texture unit. Initial: 0. Get: glGetIntegerv()
GL_TEXTURE_xD: xD texture image at level-of-detail i. Initial: ---. Get: glGetTexImage()
GL_TEXTURE_CUBE_MAP_NEGATIVE_X: −x face cube-map texture image at level-of-detail i. Initial: ---. Get: glGetTexImage()
GL_TEXTURE_CUBE_MAP_POSITIVE_Y: +y face cube-map texture image at level-of-detail i. Initial: ---. Get: glGetTexImage()
GL_TEXTURE_CUBE_MAP_NEGATIVE_Y: −y face cube-map texture image at level-of-detail i. Initial: ---. Get: glGetTexImage()

Table D.10 State Variables for Texture Objects

GL_TEXTURE_SWIZZLE_R: Red component swizzle. Initial: GL_RED. Get: glGetTexParameter*()
GL_TEXTURE_SWIZZLE_G: Green component swizzle. Initial: GL_GREEN. Get: glGetTexParameter*()
GL_TEXTURE_SWIZZLE_B: Blue component swizzle. Initial: GL_BLUE. Get: glGetTexParameter*()
GL_TEXTURE_SWIZZLE_A: Alpha component swizzle. Initial: GL_ALPHA. Get: glGetTexParameter*()
GL_TEXTURE_BORDER_COLOR: Border color. Initial: (0.0, 0.0, 0.0, 0.0). Get: glGetTexParameter*()
GL_TEXTURE_WRAP_R: Texcoord r wrap mode (3D textures only). Initial: GL_REPEAT. Get: glGetTexParameter*()
GL_TEXTURE_MIN_LOD: Minimum level of detail. Initial: −1000. Get: glGetTexParameterfv()
GL_TEXTURE_MAX_LOD: Maximum level of detail. Initial: 1000. Get: glGetTexParameterfv()
GL_TEXTURE_BASE_LEVEL: Base texture array. Initial: 0. Get: glGetTexParameterfv()
GL_TEXTURE_MAX_LEVEL: Maximum texture array level. Initial: 1000. Get: glGetTexParameterfv()
GL_IMAGE_FORMAT_COMPATIBILITY_TYPE: Compatibility rules for texture use with image units. Initial: implementation-dependent selection from GL_IMAGE_FORMAT_COMPATIBILITY_BY_SIZE or GL_IMAGE_FORMAT_COMPATIBILITY_BY_CLASS. Get: glGetTexParameteriv()
GL_TEXTURE_IMMUTABLE_LEVELS: Number of texture storage levels. Initial: 0. Get: glGetTexParameteriv()

Table D.11 State Variables for Texture Images

GL_TEXTURE_WIDTH: Specified width. Initial: 0. Get: glGetTexLevelParameter*()
GL_TEXTURE_HEIGHT: Specified height (2D/3D). Initial: 0. Get: glGetTexLevelParameter*()
GL_TEXTURE_DEPTH: Specified depth (3D). Initial: 0. Get: glGetTexLevelParameter*()
GL_TEXTURE_SAMPLES: Number of samples per texel. Initial: 0. Get: glGetTexLevelParameter*()
GL_TEXTURE_FIXED_SAMPLE_LOCATIONS: Whether the texture uses a fixed sample pattern. Initial: GL_TRUE. Get: glGetTexLevelParameter*()
GL_TEXTURE_x_TYPE: Component type (x is GL_RED, GL_GREEN, GL_BLUE, GL_ALPHA, or GL_DEPTH). Initial: GL_NONE. Get: glGetTexLevelParameter*()
GL_TEXTURE_COMPRESSED: True if image has a compressed internal format. Initial: GL_FALSE. Get: glGetTexLevelParameter*()
GL_TEXTURE_COMPRESSED_IMAGE_SIZE: Size (in GLubytes) of compressed image. Initial: 0. Get: glGetTexLevelParameter*()

Table D.12 State Variables per Texture Sampler Object

GL_TEXTURE_BORDER_COLOR: Border color. Initial: (0.0, 0.0, 0.0, 0.0). Get: glGetSamplerParameter*()
GL_TEXTURE_COMPARE_FUNC: Comparison function. Initial: GL_LEQUAL. Get: glGetSamplerParameteriv()
GL_TEXTURE_COMPARE_MODE: Comparison mode. Initial: GL_NONE. Get: glGetSamplerParameteriv()
GL_TEXTURE_LOD_BIAS: Texture level-of-detail bias. Initial: 0.0. Get: glGetSamplerParameterfv()
GL_TEXTURE_WRAP_S: Texcoord s wrap mode. Initial: GL_REPEAT, or GL_CLAMP_TO_EDGE for rectangle textures. Get: glGetSamplerParameter*()
GL_TEXTURE_WRAP_T: Texcoord t wrap mode (2D, 3D, cube-map textures only). Initial: GL_REPEAT, or GL_CLAMP_TO_EDGE for rectangle textures. Get: glGetSamplerParameter*()
GL_TEXTURE_WRAP_R: Texcoord r wrap mode (3D textures only). Initial: GL_REPEAT. Get: glGetSamplerParameter*()

Texture Environment

Table D.13 State Variables for Texture Environment and Generation

GL_ACTIVE_TEXTURE: Active texture unit. Initial: GL_TEXTURE0. Get: glGetIntegerv()

Pixel Operations

Table D.14 State Variables for Pixel Operations

GL_SCISSOR_TEST: Scissoring enabled. Initial: GL_FALSE. Get: glIsEnabledi()
GL_SCISSOR_BOX: Scissor box. Initial: (0, 0, width, height), where width and height are the dimensions of the window that OpenGL will render into. Get: glGetIntegeri_v()
GL_STENCIL_TEST: Stenciling enabled. Initial: GL_FALSE. Get: glIsEnabled()
GL_STENCIL_FUNC: Front stencil function. Initial: GL_ALWAYS. Get: glGetIntegerv()
GL_STENCIL_PASS_DEPTH_FAIL: Front stencil depth-buffer fail action. Initial: GL_KEEP. Get: glGetIntegerv()
GL_STENCIL_PASS_DEPTH_PASS: Front stencil depth-buffer pass action. Initial: GL_KEEP. Get: glGetIntegerv()
GL_STENCIL_BACK_FUNC: Back stencil function. Initial: GL_ALWAYS. Get: glGetIntegerv()
GL_STENCIL_BACK_VALUE_MASK: Back stencil mask. Initial: 2^s − 1, where s is the number of bits in the stencil buffer. Get: glGetIntegerv()
GL_BLEND: Blending enabled for draw buffer i. Initial: GL_FALSE. Get: glIsEnabledi()
GL_BLEND_SRC_RGB: Blending source RGB function for draw buffer i. Initial: GL_ONE. Get: glGetIntegeri_v()
GL_BLEND_SRC_ALPHA: Blending source A function for draw buffer i. Initial: GL_ONE. Get: glGetIntegeri_v()
GL_BLEND_DST_RGB: Blending destination RGB function for draw buffer i. Initial: GL_ZERO. Get: glGetIntegeri_v()

Framebuffer Controls

Table D.15 State Variables Controlling Framebuffer Access and Values

GL_COLOR_WRITEMASK: Color write enables (R, G, B, A) for draw buffer i. Initial: (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE). Get: glGetBooleani_v()
GL_DEPTH_WRITEMASK: Depth buffer enabled for writing. Initial: GL_TRUE. Get: glGetBooleanv()
GL_STENCIL_WRITEMASK: Front stencil buffer writemask. Initial: all 1s. Get: glGetIntegerv()

Framebuffer State

Table D.16 State Variables for Framebuffers per Target

GL_DRAW_FRAMEBUFFER_BINDING: Framebuffer object bound to GL_DRAW_FRAMEBUFFER. Initial: 0. Get: glGetIntegerv()
GL_READ_FRAMEBUFFER_BINDING: Framebuffer object bound to GL_READ_FRAMEBUFFER. Initial: 0. Get: glGetIntegerv()

Table D.17 State Variables for Framebuffer Objects

GL_DRAW_BUFFERi: Draw buffer selected for color output i. Initial: GL_BACK if there is a back buffer, otherwise GL_FRONT, unless there is no default framebuffer, then GL_NONE; GL_COLOR_ATTACHMENT0 for framebuffer-object fragment color zero, otherwise GL_NONE. Get: glGetIntegerv()
GL_READ_BUFFER: Read source for pixel reads. Get: glGetIntegerv()

Table D.18 State Variables for Framebuffer Attachments

GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE: Type of image attached to framebuffer attachment point. Initial: GL_NONE. Get: glGetFramebufferAttachmentParameteriv()
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME: Name of object attached to framebuffer attachment point. Initial: 0. Get: glGetFramebufferAttachmentParameteriv()
GL_FRAMEBUFFER_ATTACHMENT_LAYERED: Framebuffer attachment is layered. Initial: GL_FALSE. Get: glGetFramebufferAttachmentParameteriv()
GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING: Encoding of components in the attached image. Initial: ---. Get: glGetFramebufferAttachmentParameteriv()
GL_FRAMEBUFFER_ATTACHMENT_COMPONENT_TYPE: Data type of components in the attached image. Get: glGetFramebufferAttachmentParameteriv()

Renderbuffer State

Table D.19 Renderbuffer State

GL_RENDERBUFFER_BINDING: Renderbuffer object bound to GL_RENDERBUFFER. Initial: 0. Get: glGetIntegerv()

Table D.20 State Variables per Renderbuffer Object (columns: State Variable, Description, Initial Value, Get Command)

GL_RENDERBUFFER_WIDTH Width of renderbuffer 0 glGetRenderbufferParameteriv() GL_RENDERBUFFER_HEIGHT Height of renderbuffer 0 glGetRenderbufferParameteriv() GL_RENDERBUFFER_INTERNAL_FORMAT Internal format of renderbuffer GL_RGBA glGetRenderbufferParameteriv() GL_RENDERBUFFER_RED_SIZE Size in bits of ...

Table D.20 (continued) State Variables per Renderbuffer Object

GL_RENDERBUFFER_DEPTH_SIZE Size in bits of renderbuffer image’s depth component 0 glGetRenderbufferParameteriv() GL_RENDERBUFFER_STENCIL_SIZE Size in bits of renderbuffer image’s stencil component 0 glGetRenderbufferParameteriv() GL_RENDERBUFFER_SAMPLES Number of samples 0 glGetRenderbufferParameteriv() ...

Pixel State

Table D.21 State Variables Controlling Pixel Transfers (columns: State Variable, Description, Initial Value, Get Command)

GL_UNPACK_SWAP_BYTES Value of GL_UNPACK_SWAP_BYTES GL_FALSE glGetBooleanv() GL_UNPACK_LSB_FIRST Value of GL_UNPACK_LSB_FIRST GL_FALSE glGetBooleanv() GL_UNPACK_IMAGE_HEIGHT Value of GL_UNPACK_IMAGE_HEIGHT 0 glGetIntegerv() GL_UNPACK_SKIP_IMAGES Value of GL_UNPACK_SKIP_IMAGES 0 glGetIntegerv() GL_UNPACK_ROW_LENGTH Value ...

Table D.21 (continued) State Variables Controlling Pixel Transfers

GL_UNPACK_COMPRESSED_BLOCK_HEIGHT Value of GL_UNPACK_COMPRESSED_BLOCK_HEIGHT 0 glGetIntegerv() GL_UNPACK_COMPRESSED_BLOCK_DEPTH Value of GL_UNPACK_COMPRESSED_BLOCK_DEPTH 0 glGetIntegerv() GL_UNPACK_COMPRESSED_BLOCK_SIZE Value of GL_UNPACK_COMPRESSED_BLOCK_SIZE 0 glGetIntegerv() GL_PIXEL_UNPACK_BUFFER_BINDING Pixel unpack ...

Table D.21 (continued) State Variables Controlling Pixel Transfers

GL_PACK_SKIP_ROWS Value of GL_PACK_SKIP_ROWS 0 glGetIntegerv() GL_PACK_SKIP_PIXELS Value of GL_PACK_SKIP_PIXELS 0 glGetIntegerv() GL_PACK_ALIGNMENT Value of GL_PACK_ALIGNMENT 4 glGetIntegerv() GL_PACK_COMPRESSED_BLOCK_WIDTH Value of GL_PACK_COMPRESSED_BLOCK_WIDTH 0 glGetIntegerv() GL_PACK_COMPRESSED_BLOCK_HEIGHT Value of ...

Shader Object State

Table D.22 State Variables for Shader Objects (columns: State Variable, Description, Initial Value, Get Command)

GL_SHADER_TYPE Type of shader (vertex, geometry, or fragment) --- glGetShaderiv() GL_DELETE_STATUS Shader flagged for deletion GL_FALSE glGetShaderiv() GL_COMPILE_STATUS Last compile succeeded GL_FALSE glGetShaderiv() Info log for shader objects empty string glGetShaderInfoLog() GL_INFO_LOG_LENGTH Length ...

Shader Program Pipeline Object State

Table D.23 State Variables for Program Pipeline Object State (columns: State Variable, Description, Initial Value, Get Command)

GL_ACTIVE_PROGRAM Program object updated by Uniform* when PPO bound 0 glGetProgramPipelineiv() GL_VERTEX_SHADER Name of current vertex shader program object 0 glGetProgramPipelineiv() GL_GEOMETRY_SHADER Name of current geometry shader program object 0 ...

Shader Program Object State

Table D.24 State Variables for Shader Program Objects (columns: State Variable, Description, Initial Value, Get Command)

GL_CURRENT_PROGRAM Name of current program object 0 glGetIntegerv() GL_PROGRAM_PIPELINE_BINDING Current program pipeline object binding 0 glGetIntegerv() GL_PROGRAM_SEPARABLE Program object capable of being bound for separate pipeline stages GL_FALSE glGetProgramiv() GL_DELETE_STATUS ...

Table D.24 (continued) State Variables for Shader Program Objects

GL_PROGRAM_BINARY_RETRIEVABLE_HINT Retrievable binary hint enabled GL_FALSE glGetProgramiv() Binary representation of program --- glGetProgramBinary() GL_COMPUTE_WORK_GROUP_SIZE Local work group size of a linked compute program 0, ... glGetProgramiv() GL_LABEL Debug label empty string glGetObjectLabel() ...

Table D.24 (continued) State Variables for Shader Program Objects

Type of active attribute --- glGetActiveAttrib() Name of active attribute empty string glGetActiveAttrib() GL_ACTIVE_ATTRIBUTE_MAX_LENGTH Maximum active attribute name length 0 glGetProgramiv() GL_GEOMETRY_VERTICES_OUT Maximum number of output vertices 0 glGetProgramiv() GL_GEOMETRY_INPUT_TYPE Primitive ...

Table D.24 (continued) State Variables for Shader Program Objects

Type of each transform feedback output variable --- glGetTransformFeedbackVarying() Name of each transform feedback output variable --- glGetTransformFeedbackVarying() GL_UNIFORM_BUFFER_BINDING Uniform buffer object bound to the context for buffer object manipulation 0 glGetIntegerv() ...

Table D.24 (continued) State Variables for Shader Program Objects

GL_UNIFORM_NAME_LENGTH Uniform name length --- glGetActiveUniformsiv() GL_UNIFORM_BLOCK_INDEX Uniform block index --- glGetActiveUniformsiv() GL_UNIFORM_OFFSET Uniform buffer offset --- glGetActiveUniformsiv() GL_UNIFORM_ARRAY_STRIDE Uniform buffer array stride --- glGetActiveUniformsiv() GL_UNIFORM_MATRIX_STRIDE ...

Table D.24 (continued) State Variables for Shader Program Objects

GL_UNIFORM_BLOCK_REFERENCED_BY_VERTEX_SHADER True if uniform block is actively referenced by the vertex stage GL_FALSE glGetActiveUniformBlockiv() GL_UNIFORM_BLOCK_REFERENCED_BY_TESS_CONTROL_SHADER True if uniform block is actively referenced by tessellation control stage GL_FALSE glGetActiveUniform ...

Table D.24 (continued) State Variables for Shader Program Objects

GL_TESS_GEN_MODE Base primitive type for tessellation primitive generator GL_QUADS glGetProgramiv() GL_TESS_GEN_SPACING Spacing of tessellation primitive generator edge subdivision GL_EQUAL glGetProgramiv() GL_TESS_GEN_VERTEX_ORDER Order of vertices in primitives generated by tessellation primitive generator ...

Table D.24 (continued) State Variables for Shader Program Objects

GL_ACTIVE_SUBROUTINE_MAX_LENGTH Maximum subroutine name length 0 glGetProgramStageiv() GL_NUM_COMPATIBLE_SUBROUTINES Number of subroutines compatible with a subroutine uniform --- glGetActiveSubroutineUniformiv() GL_COMPATIBLE_SUBROUTINES List of subroutines compatible with a subroutine uniform --- ...

Table D.24 (continued) State Variables for Shader Program Objects

GL_ATOMIC_COUNTER_BUFFER_BINDING Binding point associated with an active atomic-counter buffer --- glGetActiveAtomicCounterBufferiv() GL_ATOMIC_COUNTER_BUFFER_DATA_SIZE Minimum size required by an active atomic-counter buffer --- glGetActiveAtomicCounterBufferiv() GL_ATOMIC_COUNTER_BUFFER_ ...

Table D.24 (continued) State Variables for Shader Program Objects

GL_ATOMIC_COUNTER_BUFFER_REFERENCED_BY_TESS_EVALUATION_SHADER Active atomic-counter buffer has a counter used by tessellation evaluation shaders GL_FALSE glGetActiveAtomicCounterBufferiv() GL_ATOMIC_COUNTER_BUFFER_REFERENCED_BY_GEOMETRY_SHADER Active atomic-counter buffer has a counter used by geometry ...

Program Interface State

Table D.25 State Variables for Program Interfaces (columns: State Variable, Description, Initial Value, Get Command)

GL_ACTIVE_RESOURCES Number of active resources on a program interface 0 glGetProgramInterfaceiv() GL_MAX_NAME_LENGTH Maximum name length for active resources 0 glGetProgramInterfaceiv() GL_MAX_NUM_ACTIVE_VARIABLES Maximum number of active variables for active resources 0 glGetProgram ...

Program Object Resource State

Table D.26 State Variables for Program Object Resources (columns: State Variable, Description, Initial Value, Get Command)

GL_NAME_LENGTH Length of active resource name --- glGetProgramResourceiv() GL_TYPE Active resource type --- glGetProgramResourceiv() GL_ARRAY_SIZE Active resource array size --- glGetProgramResourceiv() GL_OFFSET Active resource offset in memory --- glGetProgramResourceiv() ...

Table D.26 (continued) State Variables for Program Object Resources

GL_ATOMIC_COUNTER_BUFFER_INDEX Index of atomic-counter buffer owning resource --- glGetProgramResourceiv() GL_BUFFER_BINDING Buffer binding assigned to active resource --- glGetProgramResourceiv() GL_BUFFER_DATA_SIZE Minimum buffer data size required for resource --- glGetProgramResourceiv() ...

Table D.26 (continued) State Variables for Program Object Resources

GL_REFERENCED_BY_GEOMETRY_SHADER Active resource used by geometry shader --- glGetProgramResourceiv() GL_REFERENCED_BY_FRAGMENT_SHADER Active resource used by fragment shader --- glGetProgramResourceiv() GL_REFERENCED_BY_COMPUTE_SHADER Active resource used by compute shader --- glGetProgramResourceiv() ...

Vertex and Geometry Shader State

Table D.27 State Variables for Vertex and Geometry Shader State (columns: State Variable, Description, Initial Value, Get Command)

GL_CURRENT_VERTEX_ATTRIB Current generic vertex attribute values 0.0, 0.0, 0.0, 1.0 glGetVertexAttribfv() GL_PROGRAM_POINT_SIZE Point size mode GL_FALSE glIsEnabled()

Query Object State

Table D.28 State Variables for Query Objects (columns: State Variable, Description, Initial Value, ...)

Image State

Table D.29 State Variables per Image Unit (columns: State Variable, Description, Initial Value, Get Command)

GL_IMAGE_BINDING_NAME Name of bound texture object 0 glGetIntegeri_v() GL_IMAGE_BINDING_LEVEL Level of bound texture object 0 glGetIntegeri_v() GL_IMAGE_BINDING_LAYERED Texture object bound with multiple layers GL_FALSE glGetBooleani_v() GL_IMAGE_BINDING_LAYER Layer of bound texture, if not layered 0 ...

Transform Feedback State

Table D.30 State Variables for Transform Feedback (columns: State Variable, Description, Initial Value, Get Command)

GL_TRANSFORM_FEEDBACK_BUFFER_BINDING Buffer object bound to generic bind point for transform feedback 0 glGetIntegerv() GL_TRANSFORM_FEEDBACK_BUFFER_BINDING Buffer object bound to each transform feedback attribute stream 0 glGetIntegeri_v() GL_TRANSFORM_FEEDBACK_BUFFER_START Start offset of binding ...

Atomic Counter State

Table D.31 State Variables for Atomic Counters (columns: State Variable, Description, Initial Value, Get Command)

GL_ATOMIC_COUNTER_BUFFER_BINDING Current value of generic atomic-counter buffer binding 0 glGetIntegerv() GL_ATOMIC_COUNTER_BUFFER_BINDING Buffer object bound to each atomic counter buffer binding point 0 glGetIntegeri_v() GL_ATOMIC_COUNTER_BUFFER_START Start offset of binding range for each atomic ...

Shader Storage Buffer State

Table D.32 State Variables for Shader Storage Buffers (columns: State Variable, Description, Initial Value, Get Command)

GL_SHADER_STORAGE_BUFFER_BINDING Current value of generic shader storage buffer binding 0 glGetIntegerv() GL_SHADER_STORAGE_BUFFER_BINDING Buffer object bound to each shader storage buffer binding point 0 glGetIntegeri_v() GL_SHADER_STORAGE_BUFFER_START Start offset of binding range ...

Sync Object State

Table D.33 State Variables for Sync Objects (columns: State Variable, Description, Initial Value, Get Command)

GL_OBJECT_TYPE Type of sync object GL_SYNC_FENCE glGetSynciv() GL_SYNC_STATUS Sync object status GL_UNSIGNALED glGetSynciv() GL_SYNC_CONDITION Sync object condition GL_SYNC_GPU_COMMANDS_COMPLETE glGetSynciv() GL_SYNC_FLAGS Sync object flags 0 glGetSynciv() GL_LABEL Debug label empty string glGetObjectLabel()

Hints

Table D.34 Hints (columns: State Variable, Description, Initial Value, Get Command)

GL_LINE_SMOOTH_HINT Line smooth hint GL_DONT_CARE glGetIntegerv() GL_POLYGON_SMOOTH_HINT Polygon smooth hint GL_DONT_CARE glGetIntegerv() GL_TEXTURE_COMPRESSION_HINT Texture compression quality hint GL_DONT_CARE glGetIntegerv() GL_FRAGMENT_SHADER_DERIVATIVE_HINT Fragment shader derivative accuracy hint GL_DONT_CARE glGetIntegerv()

Compute Dispatch State

Table ...

Implementation-Dependent Values

Table D.36 State Variables Based on Implementation-Dependent Values (columns: State Variable, Description, Initial Value, Get Command)

GL_MAX_CLIP_DISTANCES Maximum number of user clipping planes 8 glGetIntegerv() GL_SUBPIXEL_BITS Number of bits of subpixel precision in screen xw and yw 4 glGetIntegerv() GL_IMPLEMENTATION_COLOR_READ_TYPE Implementation preferred pixel type GL_UNSIGNED_BYTE glGetIntegerv() ...

Table D.36 (continued) State Variables Based on Implementation-Dependent Values

GL_MAX_RENDERBUFFER_SIZE Maximum width and height of renderbuffers 16384 glGetIntegerv() GL_MAX_VIEWPORT_DIMS Maximum viewport dimensions Implementation-dependent maximum values glGetFloatv() GL_MAX_VIEWPORTS Maximum number of active viewports 16 glGetIntegerv() GL_VIEWPORT_SUBPIXEL_BITS Number of ...

Table D.36 (continued) State Variables Based on Implementation-Dependent Values

GL_ALIASED_LINE_WIDTH_RANGE Range (low to high) of aliased line widths 1, 1 glGetFloatv() GL_SMOOTH_LINE_WIDTH_RANGE Range (low to high) of antialiased line widths 1, 1 glGetFloatv() GL_SMOOTH_LINE_WIDTH_GRANULARITY Antialiased line width granularity --- glGetFloatv() GL_MAX_ELEMENTS_INDICES ...

Table D.36 (continued) State Variables Based on Implementation-Dependent Values

GL_MAX_TEXTURE_BUFFER_SIZE Number of addressable texels for buffer textures 65536 glGetIntegerv() GL_MAX_RECTANGLE_TEXTURE_SIZE Maximum width and height of rectangular textures 16384 glGetIntegerv() GL_PROGRAM_BINARY_FORMATS Enumerated program binary formats N/A glGetIntegerv() ...

Table D.36 (continued) State Variables Based on Implementation-Dependent Values

GL_MINOR_VERSION Minor version number supported --- glGetIntegerv() GL_CONTEXT_FLAGS Context full/forward-compatible flag --- glGetIntegerv() GL_EXTENSIONS Supported individual extension names --- glGetStringi() GL_NUM_EXTENSIONS Number of individual extension names --- glGetIntegerv() ...

Table D.36 (continued) State Variables Based on Implementation-Dependent Values

GL_MAX_VERTEX_UNIFORM_VECTORS Number of vectors for vertex shader uniform variables 256 glGetIntegerv() GL_MAX_VERTEX_UNIFORM_BLOCKS Maximum number of vertex uniform buffers per program 14 glGetIntegerv() GL_MAX_VERTEX_OUTPUT_COMPONENTS Maximum number of components of outputs written by a ...

Tessellation Shader Implementation-Dependent Limits

Table D.37 State Variables for Implementation-Dependent Tessellation Shader Values (columns: State Variable, Description, Initial Value, Get Command)

GL_MAX_TESS_GEN_LEVEL Maximum level supported by tessellation primitive generator 64 glGetIntegerv() GL_MAX_PATCH_VERTICES Maximum patch size 32 glGetIntegerv() GL_MAX_TESS_CONTROL_UNIFORM_COMPONENTS Number of words for tessellation control ...

Table D.37 (continued) State Variables for Implementation-Dependent Tessellation Shader Values

GL_MAX_TESS_CONTROL_INPUT_COMPONENTS Number of components for tessellation-control shader per-vertex inputs 128 glGetIntegerv() GL_MAX_TESS_CONTROL_UNIFORM_BLOCKS Number of supported uniform blocks for tessellation-control shader 14 glGetIntegerv() GL_MAX_TESS_CONTROL_ATOMIC_COUNTER_BUFFERS ...

Table D.37 (continued) State Variables for Implementation-Dependent Tessellation Shader Values

GL_MAX_TESS_EVALUATION_OUTPUT_COMPONENTS Number of components for tessellation-evaluation shader per-vertex outputs 128 glGetIntegerv() GL_MAX_TESS_EVALUATION_INPUT_COMPONENTS Number of components for tessellation-evaluation shader per-vertex inputs 128 glGetIntegerv() GL_MAX_TESS_EVALUATION_ ...

Geometry Shader Implementation-Dependent Limits

Table D.38 State Variables for Implementation-Dependent Geometry Shader Values (columns: State Variable, Description, Initial Value, Get Command)

GL_MAX_GEOMETRY_UNIFORM_COMPONENTS Number of components for geometry shader uniform variables 512 glGetIntegerv() GL_MAX_GEOMETRY_UNIFORM_BLOCKS Maximum number of geometry uniform buffers per program 14 glGetIntegerv() GL_MAX_GEOMETRY_INPUT_COMPONENTS ...

Table D.38 (continued) State Variables for Implementation-Dependent Geometry Shader Values

GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS Maximum number of total components (all vertices) of active outputs that a geometry shader can emit 1024 glGetIntegerv() GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS Number of texture image units accessible by a geometry shader 16 ...

Fragment Shader Implementation-Dependent Limits

Table D.39 State Variables for Implementation-Dependent Fragment Shader Values (columns: State Variable, Description, Initial Value, Get Command)

GL_MAX_FRAGMENT_UNIFORM_COMPONENTS Number of components for fragment shader uniform variables 1024 glGetIntegerv() GL_MAX_FRAGMENT_UNIFORM_VECTORS Number of vectors for fragment shader uniform variables 256 glGetIntegerv() GL_MAX_FRAGMENT_UNIFORM_BLOCKS ...

Implementation-Dependent Compute Shader Limits

Table D.40 State Variables for Implementation-Dependent Compute Shader Limits (columns: State Variable, Description, Initial Value, Get Command)

GL_MAX_COMPUTE_WORK_GROUP_COUNT Maximum number of work groups that may be dispatched by a single dispatch command (per dimension) 65535 glGetIntegeri_v() GL_MAX_COMPUTE_WORK_GROUP_SIZE Maximum local size of a compute work group (per ...

Table D.40 (continued) State Variables for Implementation-Dependent Compute Shader Limits

GL_MAX_COMPUTE_ATOMIC_COUNTERS Number of atomic counters accessed by a compute shader 8 glGetIntegerv() GL_MAX_COMPUTE_SHARED_MEMORY_SIZE Maximum total storage size of all variables declared as shared in all compute shaders linked into a single program object 32768 ...

Implementation-Dependent Shader Limits

Table D.41 State Variables for Implementation-Dependent Shader Limits (columns: State Variable, Description, Initial Value, Get Command)

GL_MIN_PROGRAM_TEXEL_OFFSET Minimum texel offset allowed in lookup −8 glGetIntegerv() GL_MAX_PROGRAM_TEXEL_OFFSET Maximum texel offset allowed in lookup 7 glGetIntegerv() GL_MAX_UNIFORM_BUFFER_BINDINGS Maximum number of uniform buffer binding points on the ...

Table D.41 (continued) State Variables for Implementation-Dependent Shader Limits

GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS Total number of texture units accessible by the GL 96 glGetIntegerv() GL_MAX_SUBROUTINES Maximum number of subroutines per shader stage 256 glGetIntegerv() GL_MAX_SUBROUTINE_UNIFORM_LOCATIONS Maximum number of subroutine uniform locations per stage ...

Table D.41 (continued) State Variables for Implementation-Dependent Shader Limits

GL_MAX_SHADER_STORAGE_BUFFER_BINDINGS Maximum number of shader storage buffer bindings 8 glGetIntegerv() GL_MAX_SHADER_STORAGE_BLOCK_SIZE Maximum size of a shader storage block 2^24 glGetInteger64v() GL_MAX_COMBINED_SHADER_STORAGE_BLOCKS Maximum number of shader storage buffers accessed ...

Table D.41 (continued) State Variables for Implementation-Dependent Shader Limits

GL_MAX_TESS_CONTROL_IMAGE_UNIFORMS Number of image variables in tessellation control shaders 0 glGetIntegerv() GL_MAX_TESS_EVALUATION_IMAGE_UNIFORMS Number of image variables in tessellation evaluation shaders 0 glGetIntegerv() GL_MAX_GEOMETRY_IMAGE_UNIFORMS Number of image variables in ...

Table D.41 (continued) State Variables for Implementation-Dependent Shader Limits

GL_MAX_COMBINED_TESS_CONTROL_UNIFORM_COMPONENTS Number of words for tessellation-control shader uniform variables in all uniform blocks (including default) Implementation dependent glGetIntegerv() GL_MAX_COMBINED_TESS_EVALUATION_UNIFORM_COMPONENTS Number of words for tessellation-evaluation ...

Implementation-Dependent Debug Output State

Table D.42 State Variables for Debug Output State (columns: State Variable, Description, Initial Value, Get Command)

GL_MAX_DEBUG_MESSAGE_LENGTH The maximum length of a debug message string, including its null terminator 1 glGetIntegerv() GL_MAX_DEBUG_LOGGED_MESSAGES The maximum number of messages stored in the debug message log 1 glGetIntegerv() GL_MAX_DEBUG_GROUP_STACK_DEPTH Maximum ...

Implementation-Dependent Values

Table D.43 Implementation-Dependent Values (columns: State Variable, Description, Initial Value, Get Command)

GL_MAX_SAMPLE_MASK_WORDS Maximum number of sample mask words 1 glGetIntegerv() GL_MAX_SAMPLES Maximum number of samples supported for all noninteger formats 4 glGetIntegerv() GL_MAX_COLOR_TEXTURE_SAMPLES Maximum number of samples supported for all color formats in a multisample texture 1 ...

Table D.43 (continued) Implementation-Dependent Values

GL_QUERY_COUNTER_BITS Asynchronous query counter bits Implementation dependent glGetQueryiv() GL_MAX_SERVER_WAIT_TIMEOUT Maximum glWaitSync() timeout interval 0 glGetInteger64v() GL_MIN_FRAGMENT_INTERPOLATION_OFFSET Furthest negative offset for interpolateAtOffset() −0.5 glGetFloatv() GL_MAX_FRAGMENT_INTERPOLATION_OFFSET Furthest ...

Internal Format-Dependent Values

Table D.44 Internal Format-Dependent Values (columns: State Variable, Description, Initial Value, Get Command)

GL_SAMPLES Supported sample counts Implementation dependent glGetInternalformativ() GL_NUM_SAMPLE_COUNTS Number of supported sample counts 1 glGetInternalformativ()

Implementation-Dependent Transform Feedback Limits

Table D.45 Implementation-Dependent Transform Feedback Limits ...

Framebuffer-Dependent Values

Table D.46 Framebuffer-Dependent Values (columns: State Variable, Description, Initial Value, Get Command)

GL_DOUBLEBUFFER True if front and back buffers exist --- glGetBooleanv() GL_STEREO True if left and right buffers exist --- glGetBooleanv() GL_SAMPLE_BUFFERS Number of multisample buffers 0 glGetIntegerv() GL_SAMPLES Coverage mask size 0 glGetIntegerv() GL_SAMPLE_POSITION Explicit sample positions --- ...


Appendix E
Homogeneous Coordinates and Transformation Matrices

This appendix presents a brief discussion of homogeneous coordinates, stated in a different way than Chapter 5, "Viewing Transformations, Clipping, and Feedback". It also summarizes the forms of the transformation matrices used for rotation, scaling, translation, perspective, and orthographic projection discussed in detail in Chapter ...

Homogeneous Coordinates

OpenGL commands usually deal with two- and three-dimensional vertices, but in fact all are treated internally as three-dimensional homogeneous vertices comprising four coordinates. Every column vector (x, y, z, w)^T represents a homogeneous vertex if at least one of its elements is nonzero.

After transformation, all transformed vertices are clipped so that x, y, and z are in the range [−w, w] (assuming w > 0). Note that this range corresponds in Euclidean space to [−1.0, 1.0].

Transforming Normals

Normal vectors aren't transformed in the same way as vertices or position vectors are. Mathematically, it's better to think of normal vectors not as vectors, ...

Translation

T = \begin{bmatrix} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad \text{and} \qquad
T^{-1} = \begin{bmatrix} 1 & 0 & 0 & -x \\ 0 & 1 & 0 & -y \\ 0 & 0 & 1 & -z \\ 0 & 0 & 0 & 1 \end{bmatrix}

Scaling

S = \begin{bmatrix} x & 0 & 0 & 0 \\ 0 & y & 0 & 0 \\ 0 & 0 & z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad \text{and} \qquad
S^{-1} = \begin{bmatrix} 1/x & 0 & 0 & 0 \\ 0 & 1/y & 0 & 0 \\ 0 & 0 & 1/z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

...

Then

R = \begin{bmatrix} m & m & m & 0 \\ m & m & m & 0 \\ m & m & m & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

where each m represents an element of M, the 3 × 3 matrix defined on the preceding page. The R matrix is always defined. If x = y = z = 0, then R is the identity matrix. You can obtain the inverse of R, R^{-1}, by substituting −θ for θ, or by transposition. ...

Perspective Projection

P = \begin{bmatrix} \dfrac{2n}{r-l} & 0 & \dfrac{r+l}{r-l} & 0 \\ 0 & \dfrac{2n}{t-b} & \dfrac{t+b}{t-b} & 0 \\ 0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{bmatrix}

P is defined as long as l ≠ r, t ≠ b, and n ≠ f.

Orthographic Projection

P = \begin{bmatrix} \dfrac{2}{r-l} & 0 & 0 & -\dfrac{r+l}{r-l} \\ 0 & \dfrac{2}{t-b} & 0 & -\dfrac{t+b}{t-b} \\ 0 & 0 & \dfrac{-2}{f-n} & -\dfrac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{bmatrix}

P is defined as long as l ≠ r, t ≠ b, and n ≠ f.

Appendix F
OpenGL and Window Systems

OpenGL is available on many different platforms and works with many different window systems. It is designed to complement window systems, not duplicate their functionality. Therefore, OpenGL performs geometric and image rendering in two and three dimensions, but it does not manage windows or handle input events. However, the basic definitions of most ...

Accessing New OpenGL Functions

OpenGL changes all the time. The manufacturers of OpenGL graphics hardware add new extensions, and the OpenGL Architecture Review Board approves those extensions and merges them into the core of OpenGL. Since each manufacturer needs to be able to update its version of OpenGL, the header files (like glcorearb.h) and the library you use to compile ...

It's not sufficient in all cases merely to retrieve a function pointer to determine whether functionality is present. To verify that the extension defining that functionality is available, check the extension string (see "The Query Commands" on Page 738 in Appendix D, "State Variables", for details).

GLEW: The OpenGL Extension Wrangler

To simplify both verifying ...

    glewInit(); // Initialize the GLEW library

    if (GLEW_VERSION_3_0) {
        // We know that OpenGL version 3.0 is supported, which is the
        // minimum version of OpenGL supporting vertex array objects
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
    }

    // Initialize and bind all other vertex arrays
    ...
}

GLX: OpenGL Extension for the X Window System

In the X Window System, OpenGL rendering is ...

The X Visual is an important data structure for maintaining pixel format information about an OpenGL window. A variable of data type XVisualInfo keeps track of pixel format information, including pixel type (RGBA or color-index), single- or double-buffering, resolution of colors, and presence of depth, stencil, and accumulation buffers. The standard X Visuals (for example, PseudoColor and ...

calling glXCreateWindow(), which returns a GLXWindow. Similarly for a GLXPixmap: first create an X Pixmap with a pixel depth that matches the GLXFBConfig, then use that Pixmap when calling glXCreatePixmap() to create a GLXPixmap. A GLXPbuffer does not require an X Window or an X Pixmap; just call glXCreatePbuffer() with the appropriate GLXFBConfig.

Note: If you are using GLX 1.2 or ...

can draw into one of the current drawable surfaces and read pixel data from the other drawable. In many situations, the draw and read drawables refer to the same GLXDrawable surface. glXGetCurrentContext() returns the current context. You can also obtain the current draw drawable with glXGetCurrentDrawable(), the current read drawable with glXGetCurrentReadDrawable(), and the current X ...

guarantees that previously issued X rendering calls are executed before any OpenGL calls made after glXWaitX().

Swapping Buffers

For drawables that are double-buffered, the front and back buffers can be exchanged by calling glXSwapBuffers(). An implicit glFlush() is done as part of this routine.

Using an X Font

A shortcut for using X fonts in OpenGL is provided with the command ...

Obtain available GLX framebuffer configurations:

    GLXFBConfig *glXGetFBConfigs(Display *dpy, int screen, int *nelements);
    GLXFBConfig *glXChooseFBConfig(Display *dpy, int screen, const int *attribList, int *nelements);

Query a GLX framebuffer configuration for attribute or X Visual information:

    int glXGetFBConfigAttrib(Display *dpy, GLXFBConfig config, int attribute, int *value);
    XVisualInfo *...

    GLXContext glXCreateContextAttribsARB(Display *dpy, GLXFBConfig config, GLXContext shareList, Bool direct, const int *attribs);
    Bool glXMakeContextCurrent(Display *dpy, GLXDrawable drawable, GLXDrawable read, GLXContext context);
    void glXCopyContext(Display *dpy, GLXContext source, GLXContext dest, unsigned long mask);
    Bool glXIsDirect(Display *dpy, GLXContext context);
    GLXContext glXGetCurrentContext(void);
    Display *...

Clean up drawables:

    void glXDestroyWindow(Display *dpy, GLXWindow win);
    void glXDestroyPixmap(Display *dpy, GLXPixmap pixmap);
    void glXDestroyPbuffer(Display *dpy, GLXPbuffer pbuffer);

Deprecated GLX Prototypes

The following routines have been deprecated in GLX 1.3. If you are using GLX 1.2 or a predecessor, you may need to use several of these routines.

Obtain the desired visual:

    XVisualInfo *...

For Win32/WGL, the PIXELFORMATDESCRIPTOR is the key data structure for maintaining pixel format information about the OpenGL window. A variable of data type PIXELFORMATDESCRIPTOR keeps track of pixel information, including pixel type (RGBA or color-index); single- or double-buffering; resolution of colors; and presence of depth, stencil, and accumulation buffers. To get more information about ...

context current, use wglMakeCurrent(); wglGetCurrentContext() returns the current context. You can also obtain the current device context with wglGetCurrentDC(). You can copy some OpenGL state variables from one context to another with wglCopyContext() or make two contexts share the same display lists and texture objects with wglShareLists(). When you're finished with a particular context, ...

layers, use wglRealizeLayerPalette(), which maps palette entries from a given color-index layer plane into the physical palette or initializes the palette of an RGBA layer plane. wglGetLayerPaletteEntries() and wglSetLayerPaletteEntries() are used to query and set the entries in palettes of layer planes.

Using a Bitmap or Outline Font

WGL has two routines, wglUseFontBitmaps() and ...

Controlling Rendering

Manage or query an OpenGL rendering context:

    HGLRC wglCreateContext(HDC hdc);
    HGLRC wglCreateContextAttribsARB(HDC hdc, HGLRC hShareContext, const int *attribList);
    HGLRC wglCreateLayerContext(HDC hdc, int iLayerPlane);
    BOOL wglShareLists(HGLRC hglrc1, HGLRC hglrc2);
    BOOL wglDeleteContext(HGLRC hglrc);
    BOOL wglCopyContext(HGLRC hglrcSource, HGLRC hglrcDest, UINT mask);
    BOOL wglMakeCurrent(HDC hdc, ...

Exchange front and back buffers:

    BOOL SwapBuffers(HDC hdc);
    BOOL wglSwapLayerBuffers(HDC hdc, UINT fuPlanes);

Find a color palette for overlay or underlay layers:

    int wglGetLayerPaletteEntries(HDC hdc, int iLayerPlane, int iStart, int cEntries, CONST COLORREF *pcr);
    int wglSetLayerPaletteEntries(HDC hdc, int iLayerPlane, int iStart, int cEntries, CONST COLORREF *pcr);
    BOOL wglRealizeLayerPalette(HDC hdc, int ...

    Mac OS X also makes an implementation of the X Window System available as a client application, which supports GLX as described previously. We’ll begin by describing CGL, and then discuss the NSOpenGL classes, as they use CGL to support OpenGL. Note: The current window-system framework of Mac OS X is called "Cocoa", and it supports the topics described in the…

    Controlling Rendering

    Like the other binding APIs, CGL has routines for controlling OpenGL’s interaction with the windowing system, including managing OpenGL contexts.

    Managing an OpenGL Rendering Context

    After selecting a pixel format, create an OpenGL context by calling CGLCreateContext() using the returned CGLPixelFormatObj object. Once the context is created, you need to call…

    CGLError CGLCreateContext(CGLPixelFormatObj pix, CGLContextObj share, CGLContextObj *ctx);
    CGLContextObj CGLRetainContext(CGLContextObj ctx);
    void CGLReleaseContext(CGLContextObj ctx);
    GLuint CGLGetContextRetainCount(CGLContextObj ctx);
    CGLError CGLDestroyContext(CGLContextObj ctx);
    CGLContextObj CGLGetCurrentContext(void);
    CGLError CGLSetCurrentContext(CGLContextObj ctx);

    Getting and Setting Context Options:

    CGLError CGLEnable(CGLContextObj…

    CGLError CGLSetGlobalOption(CGLGlobalOption pname, const GLint *params);
    CGLError CGLGetGlobalOption(CGLGlobalOption pname, GLint *params);
    void CGLGetVersion(GLint *majorvers, GLint *minorvers);

    Retrieve rendering information:

    CGLError CGLDescribeRenderer(CGLRendererInfoObj rend, GLint rend_num, CGLRendererProperty prop, GLint *value);
    CGLError CGLDestroyRendererInfo(CGLRendererInfoObj rend);
    CGLError CGLQueryRendererInfo(GLuint…

    Once your view is created, you can proceed with OpenGL rendering.

    Accessing OpenGL Functions

    Compared with the GLX and WGL OpenGL implementations, Mac OS X provides full-linkage capabilities with its OpenGL framework. That is, there’s no need (or supported functions) for retrieving function pointers to OpenGL extensions. All supported entry points are exported.

    The NSOpenGL Classes…


    Appendix G
    Floating-Point Formats for Textures, Framebuffers, and Renderbuffers

    This appendix describes the floating-point formats used for pixel storage in framebuffers and renderbuffers, and texel storage in textures. It has the following major sections:

    • "Reduced-Precision Floating-Point Values"
    • "16-bit Floating-Point Values"
    • "10- and 11-bit Unsigned Floating-Point…

    Reduced-Precision Floating-Point Values

    In addition to the normal 32-bit single-precision floating-point values you usually use when you declare a GLfloat in your application, OpenGL supports reduced-precision floating-point representations for storing data more compactly than its 32-bit representation. In many instances, your floating-point data may not require the entire dynamic range of a…

    GLushort F32toF16(GLfloat val)
    {
        GLuint f32 = (*(GLuint *) &val);
        GLushort f16 = 0;
        /* Decode IEEE 754 little-endian 32-bit floating-point value */
        int sign = (f32 >> 16) & 0x8000;
        /* Map exponent to the range [-127,128] */
        int exponent = ((f32 >> 23) & 0xff) - 127;
        int mantissa = f32 & 0x007fffff;
        if (exponent == 128) {  /* Infinity or NaN */
            f16 =…
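Since the listing above is cut off, here is a compact, self-contained sketch of the same 16-bit conversion that can be compiled and tested. This is our own simplified variant, not the book's exact routine: the function names f32_to_f16 and f16_to_f32 are ours, the encoder truncates the mantissa rather than rounding, and values too small for a normal half-precision number are flushed to signed zero.

```c
#include <stdint.h>
#include <string.h>

/* Simplified float -> half conversion: truncates (no rounding) and
   flushes values too small for a normal half to signed zero. */
uint16_t f32_to_f16(float val)
{
    uint32_t f32;
    memcpy(&f32, &val, sizeof f32);               /* portable type pun */

    uint16_t sign     = (f32 >> 16) & 0x8000;     /* sign moves to bit 15 */
    int      exponent = ((f32 >> 23) & 0xff) - 127;  /* unbias exponent  */
    uint32_t mantissa = f32 & 0x007fffff;

    if (exponent == 128)                          /* Inf or NaN */
        return sign | 0x7c00 | (mantissa ? 0x0200 : 0);
    if (exponent > 15)                            /* too large: Inf */
        return sign | 0x7c00;
    if (exponent >= -14)                          /* normal half range */
        return sign | ((exponent + 15) << 10) | (uint16_t)(mantissa >> 13);
    return sign;                                  /* too small: flush to zero */
}

/* Decode a half back to float (handles normals, Inf/NaN, subnormals). */
float f16_to_f32(uint16_t h)
{
    uint32_t sign     = (uint32_t)(h & 0x8000) << 16;
    int      exponent = (h >> 10) & 0x1f;
    uint32_t mantissa = h & 0x03ff;
    uint32_t bits;

    if (exponent == 31)                           /* Inf or NaN */
        bits = sign | 0x7f800000 | (mantissa << 13);
    else if (exponent == 0 && mantissa == 0)      /* signed zero */
        bits = sign;
    else if (exponent == 0) {                     /* subnormal half */
        float f = (float)mantissa / (1 << 24);    /* mantissa * 2^-24 */
        memcpy(&bits, &f, sizeof bits);
        bits |= sign;
    } else                                        /* normal half */
        bits = sign | ((uint32_t)(exponent - 15 + 127) << 23) | (mantissa << 13);

    float out;
    memcpy(&out, &bits, sizeof out);
    return out;
}
```

Values exactly representable in half precision, such as 1.0 or 0.5, round-trip through these two functions unchanged.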

        …if (mantissa != 0) {
            const GLfloat scale = 1.0 / (1 << 24);
            f32.f = scale * mantissa;
        }
    } else if (exponent == 31) {
        f32.ui = sign | F32_INFINITY | mantissa;
    } else {
        GLfloat scale, decimal;
        exponent -= 15;
        if (exponent < 0) {
            scale = 1.0 / (1 << -exponent);
        } else {
            scale = 1 << exponent;
        }
        decimal = 1.0 + (float) mantissa / (1 << 10);
        …

    …GLuint f32 = (*(GLuint *) &val);
    GLushort uf11 = 0;
    /* Decode little-endian 32-bit floating-point value */
    int sign = (f32 >> 16) & 0x8000;
    /* Map exponent to the range [-127,128] */
    int exponent = ((f32 >> 23) & 0xff) - 127;
    int mantissa = f32 & 0x007fffff;
    if (sign) return 0;
    if (exponent == 128) {  /* Infinity or NaN */
        uf11 = UF11_MAX_EXPONENT;
        if…
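The 11-bit listing is also truncated, so here is a compilable sketch of the complete encoding path under the same simplifying assumptions as before (our own function name f32_to_uf11; the mantissa is truncated, and results smaller than the smallest normal value are flushed to zero). Because the format has no sign bit, negative inputs clamp to zero:

```c
#include <stdint.h>
#include <string.h>

#define UF11_EXPONENT_BIAS 15
#define UF11_MANTISSA_BITS 6

/* Simplified float -> 11-bit unsigned float (5-bit exponent, 6-bit
   mantissa, no sign bit). Truncates the mantissa and flushes values
   below the smallest normal to zero. */
uint16_t f32_to_uf11(float val)
{
    uint32_t f32;
    memcpy(&f32, &val, sizeof f32);

    if (f32 & 0x80000000u)                 /* negative: clamp to zero */
        return 0;

    int      exponent = ((f32 >> 23) & 0xff) - 127;
    uint32_t mantissa = f32 & 0x007fffff;

    if (exponent == 128)                   /* Inf or NaN */
        return 0x07c0 | (mantissa ? 1 : 0);
    if (exponent > 15)                     /* overflow: +Inf */
        return 0x07c0;
    if (exponent >= -14)                   /* normal value */
        return (uint16_t)(((exponent + UF11_EXPONENT_BIAS) << UF11_MANTISSA_BITS)
                          | (mantissa >> (23 - UF11_MANTISSA_BITS)));
    return 0;                              /* too small: flush to zero */
}
```

For example, 1.0 encodes as exponent 15 with a zero mantissa (bit pattern 15 << 6), and any negative input produces 0.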

        …exponent -= 15;
        if (exponent < 0) {
            scale = 1.0 / (1 << -exponent);
        } else {
            scale = 1 << exponent;
        }
        decimal = 1.0 + (float) mantissa / 64;
        f32.f = scale * decimal;
    }
    return f32.f;
    }

    For completeness, we present similar routines for converting 10-bit unsigned floating-point values.

    #define UF10_EXPONENT_BIAS 15
    #define UF10_EXPONENT_BITS 0x1F
    #define UF10_EXPONENT_SHIFT 5
    …

        …return uf10;
    }

    #define F32_INFINITY 0x7f800000

    GLfloat UF10toF32(GLushort val)
    {
        union {
            GLfloat f;
            GLuint ui;
        } f32;
        int exponent = (val & 0x03e0) >> UF10_EXPONENT_SHIFT;
        int mantissa = (val & 0x001f);
        f32.f = 0.0;
        if (exponent == 0) {
            if (mantissa != 0) {
                const GLfloat scale = 1.0 / (1 << 19);
                f32.f = scale * mantissa;
            }
        } else if (exponent == 31) {
            f32.ui =…


    Appendix H
    Debugging and Profiling OpenGL

    This appendix describes the facilities provided by debug contexts, which can greatly assist you in finding errors in your programs and in getting the best possible performance from OpenGL. This appendix contains the following major sections:

    • "Creating a Debug Context" explains how to create OpenGL contexts in debug mode, enabling…

    Creating a Debug Context

    To get the most from OpenGL’s debugging facilities, it is necessary to create a debug context, which implies that you need control over the flags and parameters used to create the context. Context creation is a platform-specific task that is often handled by a wrapper layer such as GLUT. Modern implementations of GLUT (such as FreeGLUT, which we…

    Note that due to some nastiness in the design of WGL, it is not possible to use any WGL extensions without first creating a context. This is because wglGetProcAddress() will return NULL if no context is current at the time it is called. This means that you will need to create a context with wglCreateContext() first, make it current, get the address of the…

    …and you’re no longer debugging, you should turn the debug context off, as some of the debugging features supported by OpenGL may come at a performance cost. Once your application is debugged and working correctly, you don’t really need to use a debug context, and it’s best to avoid this potential performance loss in a shipping application.

    Debug Output

    The primary feature of…

        printf("Debug Message: SOURCE(0x%04X),"
               "TYPE(0x%04X),"
               "ID(0x%08X),"
               "SEVERITY(0x%04X), \"%s\"\n",
               source, type, id, severity, message);
    }

    void glDebugMessageCallback(DEBUGPROC callback, void* userParam);

    Sets the current debug message callback function pointer to the value specified in callback. This function will be called when the implementation needs to notify the…

    • GL_DEBUG_SOURCE_WINDOW_SYSTEM indicates that the message originates from the window system (e.g., WGL, GLX, or EGL).
    • GL_DEBUG_SOURCE_SHADER_COMPILER indicates that the message is generated by the shader compiler.
    • GL_DEBUG_SOURCE_THIRD_PARTY indicates that the message is generated by a third-party source such as a utility library, middleware, or tool.
    • GL_DEBUG_SOURCE_APPLICATION indicates…

    • GL_DEBUG_TYPE_OTHER is used when the type of the debug message does not fall into any of the above categories.

    In addition to a source and a type, each debug message has a severity associated with it. Again, these may be used for filtering or otherwise directing output. For example, an application may choose to log all messages but cause a break into a debugger in…

    …go. In other implementations, the OpenGL driver might run in multiple threads that could be behind the application in processing order. The debug output is often generated when parameters are validated, or even cross-validated against each other, and this can happen some time after the actual error has occurred from the application’s perspective. The net result is that the debug…

                          …0, NULL,      // No identifiers
                          GL_FALSE);

    // Enable a couple of messages by identifiers
    static const GLuint messages[] = { 0x1234, 0x1337 };

    glDebugMessageControl(GL_DONT_CARE,  // Don't care about origin
                          GL_DONT_CARE,  // Don't care about type
                          GL_DONT_CARE,  // Don't care about severity
                          2, messages,   // 2 ids in "messages"
                          GL_TRUE);

    Application-Generated Messages

    There are two sources of…

    Example H.5 Sending Application-Generated Debug Messages

    // Create a debug context and make it current
    MakeContextCurrent(CreateDebugContext());

    // Get some information about the context
    const GLchar * vendor = (const GLchar *)glGetString(GL_VENDOR);
    const GLchar * renderer = (const GLchar *)glGetString(GL_RENDERER);
    const GLchar * version = (const GLchar *)glGetString(GL_VERSION);

    // Assemble a message…

    …need to turn certain categories of messages on and off and to restore the debug log to its original state, you would need to query the current state of the debug context to determine whether certain types of messages are enabled or disabled. Rather than trying to implement all of this yourself, you can rely on OpenGL’s debug groups, which form a stack-based system of…

    …than this number of debug groups onto the stack, then glPushDebugGroup() will generate a GL_STACK_OVERFLOW error. Likewise, if you try to pop an item from an empty stack, then glPopDebugGroup() will generate a GL_STACK_UNDERFLOW error.

    Naming Objects

    When OpenGL generates debugging messages, it will sometimes make reference to objects such as textures, buffers, or framebuffers. In a…

    void glGetObjectLabel(GLenum identifier, GLuint name, GLsizei bufsize, GLsizei *length, GLchar *label);
    void glGetObjectPtrLabel(void *ptr, GLsizei bufsize, GLsizei *length, GLchar *label);

    glGetObjectLabel() and glGetObjectPtrLabel() retrieve the labels that have previously been assigned to objects by the glObjectLabel() or glObjectPtrLabel() functions, respectively. For glGetObjectLabel(), name and…

    Profiling

    Once your application is close to its final state, you may wish to turn your attention to performance tuning. One of the most important aspects of performance tuning is not the modifications you make to your code to make it run faster, but the measurements and experiments you make to determine what you should do to your code to achieve your desired performance…

    Figure H.1 AMD’s GPUPerfStudio2 profiling Unigine Heaven 3.0
    Figure H.2 Screenshot of Unigine Heaven 3.0

    …able to measure the amount of time various parts of the OpenGL pipeline (such as the texture processor, tessellation engine, blending unit, etc.) spend on individual commands. The tool will tell you if you are making too many draw commands, which ones are the most expensive,…

    …the GPU spends its time on for each one. Profiling tools such as GPUPerfStudio 2 are an invaluable resource for performance tuning and debugging OpenGL applications.

    In-Application Profiling

    It is possible for your application to measure its own performance. A naïve approach is to simply measure the amount of time taken for a particular piece of code to execute by reading the…
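For illustration, that naïve CPU-side measurement might look like the following sketch (time_with_clock and busy_work are hypothetical helpers of ours, using the standard C clock() timer). Note that it measures only CPU time on the calling thread; it says nothing about when queued OpenGL commands actually execute on the GPU, which is exactly the gap that GPU timer queries fill:

```c
#include <time.h>

/* Naive CPU-side timing: read the processor clock before and after
   calling fn, store fn's result through *result, and return the
   elapsed CPU time in seconds. */
double time_with_clock(double (*fn)(void), double *result)
{
    clock_t start = clock();
    *result = fn();
    clock_t end = clock();
    return (double)(end - start) / CLOCKS_PER_SEC;
}

/* Stand-in for the code being measured: a million-term harmonic sum. */
double busy_work(void)
{
    double acc = 0.0;
    for (int i = 1; i <= 1000000; ++i)
        acc += 1.0 / i;
    return acc;
}
```

The elapsed value is only as precise as clock() on the platform, and pipelined GPU work that the timed code merely enqueued is not included.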

    …queries to last a very long time (such as the duration of several tens or hundreds of frames), you might want to use glGetQueryObjectui64v(), which retrieves the result as a 64-bit number. An example of using an elapsed time query is shown in Example H.6.

    Example H.6 Using an Elapsed Time Query

    GLuint timer_query;
    GLuint nanoseconds;

    // Generate the timer query
    glGenQueries(1,…

    void glQueryCounter(GLuint id, GLenum target);

    Issues a timestamp query into the OpenGL command queue using the query object whose name is id. target must be GL_TIMESTAMP.

    When glQueryCounter() is called, OpenGL inserts a command into the GPU’s queue to record its current time into the query object as soon as it comes across it. It may still take some time to get to the…


    Appendix I
    Buffer Object Layouts

    This appendix describes ways to deterministically lay out buffers that are shared between multiple readers or writers. It has the following major sections:

    • "Using Standard Layout Qualifiers"
    • "The std140 Layout Rules"
    • "The std430 Layout Rules"

    Using Standard Layout Qualifiers

    When you group a number of variables in a uniform buffer or shader storage buffer and want to read or write their values outside a shader, you need to know the offset of each one. You can query these offsets, but for large collections of uniforms this process requires many queries and is cumbersome. As an alternative, the standard layout…

    Table I.1 (continued) std140 Layout Rules

    Variable Type: an array of scalars or vectors
    Variable Size and Alignment: The size of each element in the array will be the size of the element type, rounded up to a multiple of the size of a vec4. This is also the array’s alignment. The array’s size will be this rounded-up element size times the number of elements in the…
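The std140 rules can be turned into a small offset calculator. The sketch below is our own illustration (covering only float scalars and vectors, not matrices, arrays, or structures): it places the members of a hypothetical block declared as layout(std140) uniform Example { float a; vec3 b; vec2 c; }; at offsets 0, 16, and 32.

```c
#include <stddef.h>

/* std140 base alignment for a float vector with n components:
   a scalar aligns to 4 bytes (N), a vec2 to 8 (2N), and a vec3 or
   vec4 to 16 (4N), because a vec3 is padded to vec4 alignment. */
size_t std140_alignment(int components)
{
    switch (components) {
    case 1:  return 4;
    case 2:  return 8;
    case 3:                 /* vec3 aligns like vec4 */
    case 4:  return 16;
    default: return 0;
    }
}

/* Round offset up to the next multiple of align. */
size_t align_up(size_t offset, size_t align)
{
    return (offset + align - 1) / align * align;
}

/* Place one member: return its std140 offset and advance *cursor
   past the member's raw data (4 bytes per float component). */
size_t std140_place(size_t *cursor, int components)
{
    size_t offset = align_up(*cursor, std140_alignment(components));
    *cursor = offset + 4u * (size_t)components;
    return offset;
}
```

Walking the hypothetical block through std140_place() yields 0 for the float, 16 for the vec3 (aligned up from 4), and 32 for the vec2 (aligned up from 28); these offsets should match what a query of GL_UNIFORM_OFFSET reports for an std140 block.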

    Table I.2 (continued) std430 Layout Rules

    Variable Type: an array of scalars or vectors
    Variable Size and Alignment: The size of each element in the array will be the size of the element type, where three-component vectors are not rounded up to the size of four-component vectors. This is also the array’s alignment. The array’s size will be the element’s size times the…

    Glossary

    affine transformation A transformation that preserves straight lines and the ratio of distances of points lying on lines.

    aliasing Artifacts created by undersampling a scene, typically caused by assigning one point sample per pixel, where there are edges or patterns in the scene of higher frequency than the pixels. This results in jagged edges (jaggies), moiré patterns, and…

    antialiasing Rendering techniques that reduce aliasing. These techniques include sampling at a higher frequency, assigning pixel colors based on the fraction of the pixel’s area covered by the primitive being rendered, removing high-frequency components in the scene, and integrating or averaging the area of the scene covered by a pixel, as in area sampling. See antialiasing. API See read more..


    binding an object Attaching an object to the OpenGL context, commonly through a function that starts with the word bind, such as glBindTexture(), glBindBuffer(), or glBindSampler().

    binomial coefficient The coefficients of the terms in the expansion of the polynomial (1 + x)^n. Binomial coefficients are often described using the notation C(n, k), where C(n, k) = n! / (k! (n − k)!), where n!…

    bump mapping Broadly, this is adding the appearance of bumps through lighting effects even though the surface being rendered is flat. This is commonly done using a normal map to light a flat surface as if it were shaped as dictated by the normal map, giving lighting as if bumps existed on the surface, even though there is no geometry describing the bumps. byte swapping The read more..


    compatibility profile The profile of OpenGL that still supports all legacy functionality. It is primarily intended to allow the continued development of older applications. See also core profile. components Individual scalar values in a color or direction vector. They can be integer or floating-point values. Usually, for colors, a component value of zero represents the minimum value or read more..


    control texture A texture that tells the shader where an effect should be done, or that otherwise controls how and where an effect is done, rather than simply being an image. This is likely to be a single- component texture. convex A polygon is convex if no straight line in the plane of the polygon intersects the polygon’s edge more than twice. convex hull The smallest read more..


    debug context An OpenGL context that automatically reports errors to simplify debugging of OpenGL applications. decal A method of calculating color values during texture application, where the texture colors replace the fragment colors or, if alpha blending is enabled, the texture colors are blended with the fragment colors, using only the alpha value. default framebuffer The framebuffer read more..


    diffuse Diffuse lighting and reflection account for the direction of a light source. The intensity of light striking a surface varies with the angle between the orientation of the object and the direction of the light source. A diffuse material scatters that light evenly in all directions. directional light source See infinite light source. displacement mapping Use of a texture or read more..


    event loop In event-based applications, the event loop is a loop in the program that continuously checks for the arrival of new events and decides how to handle them. exponent Part of a floating-point number, the power of two to which the mantissa is raised after normalization. eye coordinates The coordinate system that follows transformation by the model-view matrix, and precedes read more..


    fonts Groups of graphical character representations generally used to display strings of text. The characters may be roman letters, mathematical symbols, Asian ideograms, Egyptian hieroglyphics, and so on. fractional Brownian motion A procedural-texturing technique to produce randomized noise textures. fragment Fragments are generated by the rasterization of primitives. Each fragment corresponds to a read more..


    front facing The classification of a polygon’s vertex ordering. When the screen-space projection of a polygon’s vertices is oriented such that traveling around the vertices in the order they were submitted to OpenGL results in a counterclockwise traversal (by definition, glFrontFace() controls which faces are front facing). frustum The view volume warped by perspective division. function read more..


    GPGPU The short name for General-Purpose computing on GPUs, which is the field of techniques attempting to do general computation (algorithms that you would normally execute on a CPU) on graphics processors. GPU graphics processing unit gradient noise Another name for Perlin noise. gradient vector A vector directed along the directional-derivative of a function. graphics processing The tasks read more..


    image plane Another name for the clipping plane of the viewing frustum that is closest to the eye. The geometry of the scene is projected onto the image plane, and displayed in the application’s window. image-based lighting An illumination technique that uses an image of the light falling on an object to illuminate the object, as compared to directly computing the illumation read more..


    a single instance of the shader when instancing is turned on. In compute shaders, a single invocation is created for each work item. IRIS GL Silicon Graphics’ proprietary graphics library, developed from 1982 through 1992. OpenGL was designed with IRIS GL as a starting point. jaggies Artifacts of aliased rendering. The edges of primitives that are rendered with aliasing are jagged, read more..


    local workgroup The local scope of a workgroup that has access to the same set of shared local variables. logical operation Boolean mathematical operations between the incoming fragment’s RGBA color or color-index values and the RGBA color or color-index values already stored at the corresponding location in the framebuffer. Examples of logical operations include AND, OR, XOR, NAND, and read more..


    modulate A method of calculating color values during texture application by which the texture and the fragment colors are combined. monitor The device that displays the image in the framebuffer. multifractal A procedural-texturing technique that varies the fractal dimension of the noise function based on an object’s location. multisampling The process of generating or producing multiple read more..


    normal vector See normal. normalize To change the length of a vector to have some canonical form, usually to have a length 1.0. The GLSL built-in normalize does this. To normalize a normal vector, divide each of the components by the square root of the sum of their squares. Then, if the normal is thought of as a vector from the origin to the point (nx , ny , nz ), read more..


    orthographic Nonperspective (or parallel) projection, as in some engineering drawings, with no foreshortening. output-patch vertex A vertex generated by the tessellation control shader. These vertices generally form the control mesh of a patch. overloading As in C++, creating multiple functions with the same name but with different parameters, allowing a compiler to generate different signatures read more..


    ping-pong buffers A GPGPU technique of writing values to a buffer (usually a texture map) that is immediately rebound as a texture map to be read from to do a subsequent computation. Effectively, you can consider the buffer written-to, and subsequently read-from as being a collection of temporary values. Ping-ponging buffers is usually done using framebuffer objects. pixel Short for read more..


    the bulk of the resources to create the desired effect come from computation rather from a stored image. procedural texture shader A shader that helps perform procedural shading. procedural texturing See procedural shading. programmable blending The blending of colors under shader control, as compared to OpenGL’s fixed-function blending operations. programmable graphics pipeline The mode of read more..


    rasterizer The fixed function unit that converts a primitive (point, line, or triangle) into a sequence of fragments ready for shading. The raste- rizer performs rasterization. ray tracing A family of algorithms that produce images or other outputs by calculating the path of rays through media. rectangle A quadrilateral whose alternate edges are parallel to each other in object read more..


    RGBA mode An OpenGL context is in RGBA mode if its color buffers store red, green, blue, and alpha color components, rather than color indices. sample A subpixel entity used for multisampled antialiasing. A pixel can store color (and potentially depth and stencil) data for multiple samples. Before the final pixel is rendered on the screen, samples are resolved into the final pixel read more..


    shader program A set of instructions written in a graphics shading language (the OpenGL Shading Language, also called GLSL) that control the processing of graphics primitives. shader stage A logical part of the shading pipeline that executes a particular type of shader. A shader stage may not be a physically separate execution unit in the OpenGL implementation; for example, a hardware read more..


    slice An element of an array texture. smooth shading See Gouraud shading. source-blending factor The coefficient associated with the source color (i.e., the color output from the fragment shader) used in blending computations. specular Specular lighting and reflection incorporate reflection off shiny objects and the position of the viewer. Maximum specular reflectance occurs when the angle read more..


    stereo Enhanced three-dimensional perception of a rendered image by computing separate images for each eye. Stereo requires special hardware, such as two synchronized monitors or special glasses, to alternate viewed frames for each eye. Some implementations of OpenGL support stereo by including both left and right buffers for color data. stipple A one- or two-dimensional binary pattern that read more..


    per-patch parameters for consumption by the tessellation evaluation shader. tessellation coordinates The generated barycentric coordinates within the tessellation domain produced by the fixed-function tessellator and provided to the tessellation evaluation shader. tessellation domain The domain over which a high-order-surface is tessellated. This includes quad, triangle, and isoline domains. tessellation read more..


    texture object A named cache that stores texture data, such as the image array, associated mipmaps, and associated texture parameter values: width, height, border width, internal format, resolution of components, minification and magnification filters, wrapping modes, border color, and texture priority. texture sampler A variable used in a shader to sample from a texture. texture streaming A read more..


    undersampling Choosing pixel colors to display by point sampling at intervals further apart than the detail in the scene to render. More formally, it is sampling at less than double the frequency of the highest frequencies present in the scene. Point sampling always under samples edges, since edges are step functions containing arbitrarily high frequencies. This results in aliasing. read more..


    vertex winding The order of vertices that will be used to determine whether a polygon is front facing or back facing.

    view volume The volume in clip coordinates whose coordinates satisfy the three conditions −w < x < w, −w < y < w, and −w < z < w. Geometric primitives that extend outside this volume are clipped.

    viewing model The conceptual model used…

    work item A single item of work within a workgroup. Also known as an invocation. workgroup A group of work items that collectively operate on data. See also global workgroup and local workgroup. X Window System A window system used by many of the machines on which OpenGL is implemented. GLX is the name of the OpenGL extension to the X Window System. (See Appendix F.) read more..


    Index abs(), 692 acos(), 689 acosh(), 690 adjacency primitives, 511, 516--523 aliasing, 178, 442 all(), 705 alpha, 25, 143, 166 alpha value, 166 ambient light, 361, 363 amplification geometry, 527 analytic integration, 452 anisotropic filtering, 330 antialiasing, 153, 442--459 any(), 705 application programming interface, 2 area sampling, 453 array textures, 262 arrays, 44 asin(), 689 asinh(), 690 atan(), read more..


    buffer objects, 11 buffer ping-ponging, 181 built-in variables compute shader, 630 geometry shader, 561 bump map, 441 bump mapping, 433--442 byte swapping, 289 cache, 596 coherency, 596 hierarchy, 596 callback function, 868 callback(), 869 cascading style sheet, 662 ceil(), 693 CGL CGLChoosePixelFormat(), 851 CGLCreateContext(), 852 CGLDescribeRenderer(), 851 CGLDestroyPixelFormat(), 851 CGLDestroyRendererInfo(), 851 read more..


    dFdx(), 729 dFdy(), 729 diffuse light, 361 directional light, 365 dispatch, 627 indirect, 628 displacement mapping, 487, 507 display, 4 display callback, 15 display(), 8, 9, 15, 18, 28--30, 651 distance(), 700 dithering, 171 dot(), 700 double buffering, 146 dual-source blending, 168 dynamically uniform, 297 edge detection, 643 emission, 384 emissive lighting, 380 EmitStreamVertex(), 733 EmitVertex(), 733 read more..


    glActiveSampler(), 294 glActiveShaderProgram(), 83 glActiveTexture(), 265, 294, 303--305, 571 glAttachShader(), 74, 625, 647 glBeginConditionalRender(), 176, 177 glBeginQuery(), 173, 174, 881, 882 glBeginQueryIndexed(), 538 glBeginTransformFeedback(), 250--252 glBind*(), 18, 181 glBindAttribLocation(), 436 glBindBuffer(), 18--20, 63, 69, 94, 242, 891 glBindBufferBase(), 63, 64, 241, 242 glBindBufferRange(), 63, 64, read more..


    glCopyTexSubImage2D(), 282 glCopyTexSubImage3D(), 282 glcorearb.h, 9, 836 glCreateProgram(), 73, 76, 878 glCreateShader(), 72, 76, 510, 511, 624, 627, 647, 878 glCreateShaderProgramv(), 81 glCullFace(), 91, 92, 160 glDebugMessageCallback(), 868, 869 glDebugMessageControl(), 872 glDebugMessageInsert(), 874 glDeleteBuffers(), 20 glDeleteFramebuffers(), 182, 183 glDeleteProgram(), 75, 76 glDeleteProgramPipelines(), 82 read more..


    glEndTransformFeedback(), 252, 540 GLEW glewInit(), 9, 15, 837 glew.h, 837 glext.h, 9, 836, 837 glFenceSync(), 589, 591--593 glFinish(), 31, 106, 841, 847 glFlush(), 30, 31, 590, 842 glFlushMappedBufferRange(), 105--107 glFramebufferParameteri(), 183 glFramebufferRenderbuffer(), 187, 188 glFramebufferTexture(), 351, 352 glFramebufferTexture1D(), 351, 352 glFramebufferTexture2D(), 351, 352 glFramebufferTexture3D(), 351, read more..


    glGetMultisamplefv(), 154, 741, 827 glGetObjectLabel(), 741, 748, 750, 761, 765, 772, 777, 781, 782, 784, 797, 799, 802, 877, 878 glGetObjectPtrLabel(), 741, 877, 878 glGetPointerv(), 741 glGetProgramBinary(), 741, 784 glGetProgramInfoLog(), 75, 741, 783 glGetProgramInterfaceiv(), 741 glGetProgramiv(), 74, 629, 630, 741, 783--786, 788--790 glGetProgramPipelineInfoLog(), 741 glGetProgramPipelineiv(), 741 read more..


    745, 749, 751, 753--756, 767--769, 797, 827 glIsEnabledi(), 197, 767, 769 glIsFramebuffer(), 182, 183 glIsProgram(), 76 glIsQuery(), 174 glIsRenderbuffer(), 184 glIsSampler(), 293 glIsShader(), 76 glIsSync(), 592 glIsTexture(), 265, 266 glIsTransformFeedback(), 240 glIsVertexArray(), 18 glLineWidth(), 88 glLinkProgram(), 47, 61, 64, 74, 245, 537, 625, 627, 647 glLogicOp(), 172 glMapBuffer(), 61, 101--104, 132, read more..


    GLSL continued buffer blocks, 69 buffer, 46, 49, 60, 62, 636 centroid, 730 clamp(), 445 column_major, 62 const, 46, 54 constructors, 39 control flow, 52 conversions, 39 defined, 57, 58 dFdx(), 450, 459 dFdy(), 450, 459 discard, 88 dmat4, 111 do-while loop, 52 double, 39, 40, 49, 111 dvec2, 111 dvec3, 111 dvec4, 111 extensions, 59 false, 39, 695, 696 flat, 424, 730 float, 39, 40, 49, read more..


    GLSL continued triangles, 497 true, 39, 695, 696, 705 uint, 40, 49, 110 uniform block, 61--69 uniform, 37, 46, 47, 49, 60, 62, 235, 363, 366, 368, 380 uvec2, 110 uvec3, 110 uvec4, 110 vec2, 113 vec3, 113, 115, 452, 480 vec4, 37, 113, 115, 425, 426, 502, 577, 893 component names, 42 vectors, 40, 42 while loop, 52 glStencilFunc(), 148, 159 glStencilFuncSeparate(), 159, 160 read more..


    GLUT continued glutIdleFunc(), 658 glutInit(), 15, 652 glutInitContextFlags(), 652, 866 glutInitContextProfile(), 15, 23, 652 glutInitContextVersion(), 15, 652 glutInitDisplayMode(), 15, 181, 652 glutInitWindowPosition(), 652 glutInitWindowSize(), 15, 652 glutKeyboardFunc(), 656 glutKeyboardUpFunc(), 656 glutMainLoop(), 16, 654, 658 glutPassiveMotionEvent(), 657 glutPostRedisplay(), 655, 657 glutReshapeFunc(), 656 read more..


    gradient vector, 450 gradient, 450 graphics processing unit, 4 greaterThan(), 704 greaterThanEqual(), 704 groupMemoryBarrier(), 735 half space, 89 halo, 384 hemispherical lighting, 384 hidden-line removal, 164 hidden-surface removal, 145 homogeneous clip coordinates, 209 homogeneous coordinate, 206 homogeneous coordinates, 209, 215, 216 image processing, 642 image, xli image-based lighting, 389 imageAtomicAdd(), read more..


logical operation, 171
lossless compression, 326
lossy compression, 326
low-pass filtering, 449
luminance, 43
Mac OS X
  NSOpenGLContext(), 854
  NSOpenGLPixelFormat(), 854
  NSOpenGLView(), 854
magnification, 329
mantissa, 274
material, 70
matrix, xlii, 214
  column major, 234
  multiplication, 214
  OpenGL, 232
  row major, 234
matrixCompMult(), 702
max(), 694
memory
  read only, 598
memoryBarrier(), 735, …


pack, 97
packDouble2x32(), 699
packHalf2x16(), 699
packSnorm2x16(), 698
packSnorm4x8(), 698
packUnorm2x16(), 698
packUnorm4x8(), 698
pass-through shader, 12, 501
patch, 12, 485, 486
performance profiling, 879
Perlin noise, 460
perspective correction, 730
perspective division, 213
perspective projection, 207, 210, 227
Phong reflection model, 376
Phong shading, 376
pixel, 4
point fade threshold, 350
point, …


  …vertex, 35
shader plumbing, 8
shader program, 8
shader stage, 4
shader storage buffer object, 69
shader storage buffer, 46
shader variable, 23
shading, xliii
shadow coordinates, 406
shadow map, 400
shadow mapping, 400
shadow sampler, 317
shadow texture, 402
shared exponent, 274
shared variables, 633
sign(), 692
sin(), 688
sinh(), 689
sky box, 312
slice, 261, 262
smoothstep(), 695
sorting, 616, …


texelFetch(), 321, 714
texelFetchOffset(), 714
texels, 261
texture
  array, 262
  buffer, 319--321, 572
  compressed, 326--329
  cube map, 559
  gathering texels, 345
  immutable storage, 357
  proxy, 276, 277
  rectangle, 263
  target, 263
  unit, 262
  view, 321--325
  writing to, 574
texture comparison mode, 402
texture coordinates, 153, 261
texture map, 14, 149
texture mapping, 8, 149
texture object, 261
texture sampler, …


uniform block, 61--69, 629
uniform buffer object, 61
uniform variable, 46
unit square, 491
universe
  end of, 882
unpackDouble2x32(), 699
unpackHalf2x16(), 699
unpackSnorm2x16(), 698
unpackSnorm4x8(), 698
unpackUnorm2x16(), 698
unpackUnorm4x8(), 698
user clipping, 238
usubBorrow(), 706
Utah teapot, 500
vector, 7, 214
vertex shader, 4, 35
vertex winding, 497
vertex, 11
vertex-array object, 17
vertex-attribute, …

