OpenGL 4.0 Shading Language Cookbook

This book is for OpenGL programmers who would like to take advantage of the modern features of GLSL 4.0 to create real-time, three-dimensional graphics.


David Wolff


340 Pages


English

PDF Format

9.15 MB

Game Development



  • 19 Feb 2015
  • Page - 1


  • Page - 2

    OpenGL 4.0 Shading Language Cookbook: Over 60 highly focused, practical recipes to maximize your use of the OpenGL Shading Language. David Wolff. BIRMINGHAM - MUMBAI

  • Page - 3

    OpenGL 4.0 Shading Language Cookbook. Copyright © 2011 Packt Publishing. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in …

  • Page - 4

    Credits. Author: David Wolff. Reviewers: Martin Christen, Nicolas Delalondre, Markus Pabst, Brandon Whitley. Acquisition Editor: Usha Iyer. Development Editor: Chris Rodrigues. Technical Editors: Kavita Iyer, Azharuddin Sheikh. Copy Editor: Neha Shetty. Project Coordinator: Srimoyee Ghoshal. Proofreader: Bernadette Watkins. Indexer: Hemangini Bari. Graphics: Nilesh Mohite, Valentina J. D'silva. Production Coordinators: Kruthika Bangera …

  • Page - 5

    About the Author. David Wolff is an associate professor in the Computer Science and Computer Engineering Department at Pacific Lutheran University (PLU), with a PhD in Physics from Oregon State University. He has a passion for computer graphics and the intersection between art and science. He has been teaching computer graphics to undergraduates at PLU for over 10 years, using OpenGL. Special thanks to Brandon Whitley for interesting discussions …

  • Page - 6

    About the Reviewers. Martin Christen graduated with a Computer Science degree. Today, he is a senior research associate at the Institute of Geomatics Engineering of the University of Applied Sciences Northwestern Switzerland (FHNW). He is the lead developer of the open source virtual globe engine OpenWebGlobe (http://www.openwebglobe.org). … computer game development. His main research interests are …

  • Page - 7

    Brandon Whitley worked for four years as a graphics programmer for Zipper Interactive, a Sony Computer Entertainment Worldwide Studio. He earned his Master's degree in Computer Science from Georgia Institute of Technology. While obtaining his … of this book to pursue a career in computer graphics. Brandon is currently a graphics programmer at Bungie, creators of the Halo …

  • Page - 8

    www.PacktPub.com. You might want to visit www.PacktPub.com for support files and downloads related to your book. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at service@packtpub.com for more details. At www.PacktPub.com, you can also read a collection of …

  • Page - 9


  • Page - 10

    Preface 1
    Chapter 1: Getting Started with GLSL 4.0 5
      Introduction 6
      Using the GLEW Library to access the latest OpenGL functionality 8
      Using the GLM library for mathematics 10
      Determining the GLSL and OpenGL version 13
      Compiling a shader 15
      Linking a shader program 18
      Sending data to a shader using per-vertex attributes and vertex buffer objects 22
      Getting a list of …

  • Page - 11

    Table of Contents (continued)
      Using the halfway vector for improved performance 91
      Simulating a spotlight 94
      Creating a cartoon shading effect 97
      Simulating fog 100
    Chapter 4: Using Textures 105
      Introduction 105
      Applying a 2D texture 106
      Applying multiple textures 111
      Using alpha maps to discard pixels 114
      Using normal maps 116
      Simulating refraction with cube maps 130
      Image-based lighting 135 …

  • Page - 12

    Table of Contents (continued)
    Chapter 8: Using Noise in Shaders 263
      Introduction 263
      Creating a noise texture using libnoise 265
      Creating a seamless noise texture 269
      Creating a cloud-like effect 272
      Creating a wood grain effect 275
      Creating a disintegration effect 279
      Creating a paint-spatter effect 281
      Creating a night-vision effect 284
    Chapter 9: Animation and Particles 289
      Introduction 289 …

  • Page - 13


  • Page - 14

    … to programmers interested in creating modern, interactive, graphical programs. It allows us to harness the power of modern Graphics Processing Units (GPUs) in a straightforward way by providing a simple, yet powerful, language and API. The OpenGL 4.0 Shading Language Cookbook will provide easy-to-follow examples that start by walking you through the theory and background …

  • Page - 15

    Chapter 6, Using Geometry and Tessellation Shaders, provides a series of examples to introduce you to the new and powerful segments of the shader pipeline. It provides some examples of geometry shaders, and discusses how to use tessellation shaders to dynamically render geometry at different levels of detail. Chapter 7, Shadows, provides several recipes …

  • Page - 16

    A block of code is set as follows:

        #version 400
        in vec3 LightIntensity;
        layout( location = 0 ) out vec4 FragColor;

        void main() {
            FragColor = vec4(LightIntensity, 1.0);
        }

    When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

        QGLFormat format;
        format.setVersion(4,0); …

  • Page - 17

    Customer support. Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase. You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you. Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do …

  • Page - 18

    Chapter 1: Getting Started with GLSL 4.0. In this chapter, we will cover:
      • Using the GLEW library to access the latest OpenGL functionality
      • Using the GLM library for mathematics
      • Determining the GLSL and OpenGL version
      • Compiling a shader
      • Linking a shader program
      • Sending data to a shader using per-vertex attributes and vertex buffer objects
      • Getting a list of active …

  • Page - 19

    Introduction. The OpenGL Shading Language (GLSL) Version 4.0 brings unprecedented power and flexibility … It allows us to harness the power of modern Graphics Processing Units (GPUs) in a straightforward way … The first step towards using the OpenGL Shading Language version 4.0 is to create a program that utilizes the latest version of the OpenGL API. GLSL programs don't stand …

  • Page - 20

    Number of shader processors by card model:
      GeForce GT 430: 96
      GeForce GTS 450: 192
      GeForce GTX 480: 480
    Shader programs are intended to replace parts of the OpenGL architecture referred to as the fixed-function pipeline. When we, as programmers, wanted to implement more advanced or realistic … than it really was. The advent of GLSL helped by providing us with the ability to replace this …

  • Page - 21

    the following code:

        QGLFormat format;
        format.setVersion(4,0);
        format.setProfile(QGLFormat::CoreProfile);
        QGLWidget *myWidget = new QGLWidget(format);

    Downloading the example code: … purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support …

    Using the GLEW Library to access the latest OpenGL functionality. The OpenGL ABI …

  • Page - 22

    How to do it... To start using GLEW in your project, use the following steps:
    1. Make sure that, at the top of your code, you include the glew.h header before you include any other OpenGL headers:

        #include <GL/glew.h>
        #include <GL/gl.h>
        #include <GL/glu.h>

    2. In your program code, somewhere just after the GL context is created (typically in an initialization function), and before …
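    Step 2 is cut off before the call itself; a minimal initialization sketch, using the standard GLEW API, is:

        GLenum err = glewInit();
        if( GLEW_OK != err ) {
            // glewInit failed; report the reason
            fprintf( stderr, "Error initializing GLEW: %s\n", glewGetErrorString(err) );
        }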

  • Page - 23

    You can also check for the availability of extensions by checking the status of some GLEW global variables that use a particular naming convention. For example, to check for the availability of ARB_vertex_program, use something like the following:

        if ( ! GLEW_ARB_vertex_program ) {
            fprintf(stderr, "ARB_vertex_program is missing!\n");
            …
        }

  • Page - 24

    How to do it... To start using GLM, include the core header (the first include in the following code snippet) and headers for any extensions. We'll include the matrix transform extension, and the transform2 extension.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtx/transform2.hpp>

    The GLM classes are then available in the glm namespace. The following is an example of how you might go about making …
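    The example itself does not survive the cut; a typical usage sketch (ours, consistent with the text; note that book-era GLM took angles in degrees, while current GLM expects radians):

        glm::vec4 position( 1.0f, 0.0f, 0.0f, 1.0f );
        glm::mat4 view = glm::lookAt( glm::vec3(0.0f, 0.0f, 2.0f),   // eye
                                      glm::vec3(0.0f, 0.0f, 0.0f),   // look-at point
                                      glm::vec3(0.0f, 1.0f, 0.0f) ); // up
        glm::mat4 model = glm::rotate( glm::mat4(1.0f), glm::radians(90.0f),
                                       glm::vec3(0.0f, 1.0f, 0.0f) );
        glm::vec4 transformed = view * model * position;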

  • Page - 25

    … with additional features that go beyond what you can do in GLSL. If you are familiar with GLSL, GLM should be easy and natural to use. Swizzle operators (selecting components using commands like foo.x, foo.xxy, and so on) are disabled by default in GLM. You can selectively enable them by defining GLM_SWIZZLE before including the main GLM header. …

  • Page - 26

    Determining the GLSL and OpenGL version. In order to support a wide range of systems, it is essential to be able to query for the supported OpenGL and GLSL version of the current driver. It is quite simple to do so, and there are two main functions involved: glGetString and glGetIntegerv. How to do it... The code shown below will print the …
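    The listing is cut off here; a sketch of the two queries the text names (standard OpenGL calls):

        const GLubyte *renderer    = glGetString( GL_RENDERER );
        const GLubyte *vendor      = glGetString( GL_VENDOR );
        const GLubyte *version     = glGetString( GL_VERSION );
        const GLubyte *glslVersion = glGetString( GL_SHADING_LANGUAGE_VERSION );

        GLint major, minor;
        glGetIntegerv( GL_MAJOR_VERSION, &major );
        glGetIntegerv( GL_MINOR_VERSION, &minor );

        printf("GL Vendor    : %s\n", vendor);
        printf("GL Renderer  : %s\n", renderer);
        printf("GL Version   : %s (%d.%d)\n", version, major, minor);
        printf("GLSL Version : %s\n", glslVersion);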

  • Page - 27

    The queries for GL_VENDOR and GL_RENDERER provide additional information about the OpenGL driver. The call glGetString(GL_VENDOR) returns the company responsible for the OpenGL implementation. The call to glGetString(GL_RENDERER) provides the name of the renderer (for example, "… 5600 Series"). Note that both of these do not vary from release to release, so they can be …

  • Page - 28

    The GLSL compiler is built into the OpenGL library, and shaders can only be compiled within the context of a running OpenGL program. There is currently no external tool for pre-compiling GLSL shaders and/or shader programs. Recently, OpenGL 4.1 added the ability to save compiled shader programs to a file, enabling OpenGL programs to avoid the overhead of compilation by loading pre-compiled shader programs. Compiling a shader …

  • Page - 29

    Next, we'll need to build a basic shell for an OpenGL program using any standard windowing toolkit. Examples of cross-platform toolkits include GLUT, FLTK, Qt, and wxWidgets. Throughout this text, I'll make the assumption that you can create a basic OpenGL program with your favorite toolkit. Virtually all toolkits have a hook for an …

  • Page - 30

            GLsizei written;
            glGetShaderInfoLog(vertShader, logLen, &written, log);
            fprintf(stderr, "Shader log:\n%s", log);
            free(log);
        }
    }

    How it works... We create the shader object by calling glCreateShader. The argument is the type of shader, and can be one of the following: GL_VERTEX_SHADER, GL_FRAGMENT_SHADER, GL_GEOMETRY_SHADER, GL_TESS_EVALUATION_SHADER, or …
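    Only fragments of the compile sequence survive in these excerpts; a minimal end-to-end sketch of the pattern the recipe describes (variable names ours; shaderCode is a null-terminated GLSL source string):

        GLuint vertShader = glCreateShader( GL_VERTEX_SHADER );
        const GLchar *codeArray[] = { shaderCode };
        glShaderSource( vertShader, 1, codeArray, NULL );

        glCompileShader( vertShader );

        GLint result;
        glGetShaderiv( vertShader, GL_COMPILE_STATUS, &result );
        if( GL_FALSE == result ) {
            // Query GL_INFO_LOG_LENGTH and call glGetShaderInfoLog as shown above
        }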

  • Page - 31

    If the compile status is GL_FALSE, then we can query for the shader log, which will provide details about the failure. We get the length of the log by calling glGetShaderiv again with a value of GL_INFO_LOG_LENGTH. This provides the length of the log in the variable logLen, including the null termination character. We then allocate space for the log, and retrieve the log by calling glGetShaderInfoLog …

  • Page - 32

    Getting ready. For this recipe we'll assume that you've already compiled two shader objects whose handles are stored in the variables vertShader and fragShader. For this and a few other recipes in this chapter, we'll use the following source code for the fragment shader:

        #version 400
        in vec3 Color;
        out vec4 FragColor;

        void main() {
            FragColor = vec4(Color, …

  • Page - 33

    3. Link the program.

        glLinkProgram( programHandle );

    4. Verify the link status.

        GLint status;
        glGetProgramiv( programHandle, GL_LINK_STATUS, &status );
        if( GL_FALSE == status ) {
            fprintf( stderr, "Failed to link shader program!\n" );
            GLint logLen;
            glGetProgramiv(programHandle, GL_INFO_LOG_LENGTH, …

  • Page - 34

    We check the status of the link by calling glGetProgramiv. Similar to glGetShaderiv, glGetProgramiv allows us to query various attributes of the shader program. In this case, we ask for the status of the link by providing GL_LINK_STATUS as the second argument. The status is returned in the location pointed to by the third argument, in this …
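    The recipe's final step does not appear in these excerpts; installing a successfully linked program into the OpenGL pipeline is a single call:

        glUseProgram( programHandle );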

  • Page - 35

    If a program is no longer needed, it can be deleted from the OpenGL memory by calling glDeleteProgram, providing the program handle as the only argument. This invalidates the handle and frees the memory used by the program. Note that if the program object is currently in use, it will not be deleted until it is no longer in use. The deletion of a shader program detaches the shader …

  • Page - 36

    Of course, the data for this attribute must be supplied by the OpenGL program. To do so, we make use of vertex buffer objects. The buffer object contains the values for the input attribute, and in the main OpenGL program we make the connection between the buffer and the input attribute; OpenGL then pulls data for the input attribute from the buffer for each invocation of …

  • Page - 37

    It also has one output variable named Color, which is sent to the fragment shader. In this case, Color is just an unchanged copy of VertexColor. Also, note that the attribute VertexPosition is simply expanded and passed along to the built-in output variable gl_Position for further processing. The fragment shader (basic.frag):

        #version …

  • Page - 38

        float colorData[] = {
            1.0f, 0.0f, 0.0f,
            0.0f, 1.0f, 0.0f,
            0.0f, 0.0f, 1.0f };

        // Create the buffer objects
        GLuint vboHandles[2];
        glGenBuffers(2, vboHandles);
        GLuint positionBufferHandle = vboHandles[0];
        GLuint colorBufferHandle = vboHandles[1];

        // Populate the position buffer
        glBindBuffer(GL_ARRAY_BUFFER, positionBufferHandle);
        glBufferData(GL_ARRAY_BUFFER, 9 …

  • Page - 39

    How it works... Vertex attributes are the input variables to our vertex shader. In the vertex shader above, our two attributes are VertexPosition and VertexColor. Since we can give these variables any name we like, OpenGL provides a way to refer to vertex attributes in the OpenGL program by associating each (active) input …

  • Page - 40

    Now that we have set up our buffer objects, we tie them together into a vertex array object (VAO). The VAO contains information about the connections between the data in our buffers and the input vertex attributes. We create a VAO using the function glGenVertexArrays. This gives us a handle to our new object, which we store in the (global) …
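    The code the excerpt breaks away from follows this pattern (handle names ours; the buffers are those created above):

        GLuint vaoHandle;
        glGenVertexArrays( 1, &vaoHandle );
        glBindVertexArray( vaoHandle );

        // Enable the vertex attribute arrays
        glEnableVertexAttribArray(0);  // vertex position
        glEnableVertexAttribArray(1);  // vertex color

        // Map index 0 to the position buffer, index 1 to the color buffer
        glBindBuffer(GL_ARRAY_BUFFER, positionBufferHandle);
        glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 0, (GLubyte *)NULL );

        glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
        glVertexAttribPointer( 1, 3, GL_FLOAT, GL_FALSE, 0, (GLubyte *)NULL );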

  • Page - 41

    To summarize, rendering with vertex buffer objects (VBOs) involves the following steps:
    1. Define the mappings between attribute indexes and shader input variables by calling glBindAttribLocation (before linking).
    2. Create and populate the buffer objects for each attribute.
    3. Create and define the vertex array object by calling glVertexAttribPointer while the appropriate buffer is bound.
    4. When rendering, bind to the vertex array object and …

  • Page - 42

    This would tell the linker to bind the output variable FragColor to color number 0, avoiding the need to call glBindFragDataLocation within our OpenGL program. It is often the case that we need to step through our vertex arrays in a non-linear fashion. In other words, we may want to "jump around" the data rather than just moving through it …

  • Page - 43

    However, it may be preferable to let the linker create the mappings automatically and query for them after program linking is complete. In this recipe, we'll see a simple example that prints all the active attributes and their indices. Getting ready. Start with an OpenGL program that compiles and links a shader pair. You could use the …

  • Page - 44

    How it works... We start by querying for the number of active attributes by calling glGetProgramiv with the argument GL_ACTIVE_ATTRIBUTES. The result is stored in nAttribs. Next, we query for the length of the longest attribute name (GL_ACTIVE_ATTRIBUTE_MAX_LENGTH) and store the result in maxLength. This includes the null terminating character, so we use that …
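    The loop itself is cut off in these excerpts; a sketch of the query pattern the text describes (names ours):

        GLint nAttribs, maxLength;
        glGetProgramiv( programHandle, GL_ACTIVE_ATTRIBUTES, &nAttribs );
        glGetProgramiv( programHandle, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxLength );

        GLchar *name = (GLchar *) malloc( maxLength );
        GLsizei written;
        GLint size, location;
        GLenum type;
        for( GLint i = 0; i < nAttribs; ++i ) {
            glGetActiveAttrib( programHandle, i, maxLength, &written, &size, &type, name );
            location = glGetAttribLocation( programHandle, name );
            printf(" %-8d | %s\n", location, name);
        }
        free(name);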

  • Page - 45

    Within a shader, uniform variables are read-only. Their values can only be changed from outside the shader, via the OpenGL API. However, they can be initialized within the shader by assigning them a constant value in the declaration. Uniform variables can appear in any shader within a shader program, and are always used as input …

  • Page - 46

    Note the variable RotationMatrix is declared using the uniform qualifier; we'll provide the data for this variable from the OpenGL program. The RotationMatrix is used to transform VertexPosition before assigning it to the default output position variable gl_Position. We'll use the same fragment shader as in previous recipes.

        #version 400
        in vec3 Color;
        layout (location = 0) out vec4 FragColor;

        void main() { …

  • Page - 47

    How it works... Setting the value of a uniform variable involves querying for the location of the variable, and then assigning a value to that location using one of the glUniform functions. In this example, we start by clearing the color buffer, then creating a rotation matrix using GLM. Next, we query for the location of the uniform variable by calling glGetUniformLocation. This function takes the handle to the shader program object and the name of the uniform variable …
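    The assignment itself does not survive the cut; a sketch of the pattern, assuming GLM and the RotationMatrix uniform above:

        glm::mat4 rotationMatrix = glm::rotate( glm::mat4(1.0f),
                                                glm::radians(angle),
                                                glm::vec3(0.0f, 0.0f, 1.0f) );
        GLint location = glGetUniformLocation( programHandle, "RotationMatrix" );
        if( location >= 0 ) {
            // GL_FALSE = no transpose needed; GLM matrices are column-major
            glUniformMatrix4fv( location, 1, GL_FALSE, &rotationMatrix[0][0] );
        }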

  • Page - 48

    See also:
      • Compiling a shader
      • Linking a shader program
      • Sending data to a shader using per-vertex attributes and vertex buffer objects
    While it is a simple process to query for the location of an individual uniform variable, there may be instances where it can be useful to generate a list of all active uniform variables. For example, one …

  • Page - 49

        GLsizei written;
        GLenum type;
        printf(" Location | Name\n");
        printf("------------------------------------------------\n");
        for( int i = 0; i < nUniforms; ++i ) {
            glGetActiveUniform( programHandle, i, maxLen, &written, &size, &type, name );
            location = glGetUniformLocation(programHandle, name);
            …

  • Page - 50

    If your program involves multiple shader programs that use the same uniform variables, one has to manage the variables separately for each program. Uniform locations are generated when a program is linked, so the locations of the uniforms may change from one program to the next. The data for those uniforms may have to be re-generated and applied to …

  • Page - 51

    For this recipe, we'll use a simple example to demonstrate the use of uniform buffer objects and uniform blocks. We'll draw a quad (two triangles) with texture coordinates, and use our fragment shader to fill the quad with a circular "blob" shape; at its edge, it gradually fades to the background color, as shown in the following image. Getting ready. Start with an OpenGL program that draws two …

  • Page - 52

        uniform BlobSettings {
            vec4 InnerColor;
            vec4 OuterColor;
            float RadiusInner;
            float RadiusOuter;
        };

        void main() {
            float dx = TexCoord.x - 0.5;
            float dy = TexCoord.y - 0.5;
            float dist = sqrt(dx * dx + dy * dy);
            FragColor = mix( InnerColor, OuterColor,
                             smoothstep( RadiusInner, …

  • Page - 53

        glGetActiveUniformBlockiv(programHandle, blockIndex,
                                  GL_UNIFORM_BLOCK_DATA_SIZE, &blockSize);
        GLubyte *blockBuffer = (GLubyte *) malloc(blockSize);

    Next, we query for the offset of each variable within the block.

        // Query for the offsets of each block variable
        const GLchar *names[] = { "InnerColor", "OuterColor", …

  • Page - 54

    How it works... Phew! This seems like a lot of work! However, the real advantage comes when using multiple programs where the same buffer object can be used for each program. Let's take a look at each step individually. First, we get the index of the uniform block by calling glGetUniformBlockIndex, then we query for the size of the block by …
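    The remaining steps are cut off in these excerpts; they create the buffer object, copy the assembled block data into it, and bind it to the block's index. A sketch (handle name ours):

        GLuint uboHandle;
        glGenBuffers( 1, &uboHandle );
        glBindBuffer( GL_UNIFORM_BUFFER, uboHandle );
        glBufferData( GL_UNIFORM_BUFFER, blockSize, blockBuffer, GL_DYNAMIC_DRAW );

        // Bind the buffer object to the uniform block
        glBindBufferBase( GL_UNIFORM_BUFFER, blockIndex, uboHandle );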

  • Page - 55

    A uniform block can have an optional instance name. For example, with our BlobSettings block, we could have used the instance name Blob, as shown here:

        uniform BlobSettings {
            vec4 InnerColor;
            vec4 OuterColor;
            float RadiusInner;
            float RadiusOuter;
        } Blob;

    In this case, the variables within the block are accessed via the instance name. For example:

        FragColor = mix( Blob.InnerColor, Blob.OuterColor, …

  • Page - 56

    … row_major and column_major … within the uniform block. … row_major and shared:

        layout( row_major, shared ) uniform BlobSettings {
            …
        };

    See also: Sending data to a shader using uniform variables. If you are using C++, it can be very convenient to create classes to encapsulate some of the OpenGL objects. A prime example is the shader program object. In …

  • Page - 57

        int  getUniformLocation( const char * name );
        bool fileExists( const string & fileName );

    public:
        GLSLProgram();

        bool compileShaderFromFile( const char * fileName,
                                    GLSLShader::GLSLShaderType type );
        bool compileShaderFromString( const string & source,
                                      GLSLShader::GLSLShaderType type );
        bool …

  • Page - 58

    The two private functions are utilities used by other public functions. The getUniformLocation function is used by the setUniform functions to find the location of a uniform variable, and the fileExists function is used by compileShaderFromFile to … The constructor simply initializes linked to false, handle to zero, and logString to the empty string. The variable handle …

  • Page - 59

                prog.log().c_str());
            exit(1);
        }
        if( ! prog.compileShaderFromFile("myshader.frag",
                                         GLSLShader::FRAGMENT)) {
            printf("Fragment shader failed to compile!\n%s",
                   prog.log().c_str());
            exit(1);
        }
        // Possibly call …

  • Page - 60

    Chapter 2: The Basics of GLSL Shaders. In this chapter, we will cover:
      • Implementing diffuse, per-vertex shading with a single point light source
      • Implementing per-vertex ambient, diffuse, and specular (ADS) shading
      • Using functions in shaders
      • Implementing two-sided shading
      • Using subroutines to select shader functionality
      • Discarding fragments to create a perforated look
    Introduction: … ability into the …

  • Page - 61

    Shaders are implemented using the OpenGL Shading Language (GLSL). GLSL is syntactically similar to C, which should make it easier for experienced OpenGL programmers to learn. Due to the nature of this text, I won't present a thorough introduction to GLSL here. Instead, if you're new to GLSL, reading through these recipes should help …

  • Page - 62

    The vertex shader is executed once for each vertex, possibly in parallel. The data corresponding to the vertex position must be transformed into clip coordinates and assigned to the output variable gl_Position. The vertex shader can send other information down the pipeline using shader output variables. For example, the vertex shader might also compute the color associated …

  • Page - 63

    The algorithms presented within this chapter are largely unoptimized; I present them this way for clarity. We'll look at a few optimization techniques at the end of some recipes, and some more in the next chapter. One of the simplest shading techniques is to assume that the surface exhibits purely diffuse reflection; that is, it scatters incoming light equally in all directions. Incoming light …

  • Page - 64

    The amount of incoming light (or radiance) that reaches the surface is partially dependent on the orientation of the surface with respect to the light source. The physics of the situation tells us that the amount of radiation that reaches a point on a surface is maximal when the light arrives along the direction of the normal vector, and zero when the light is perpendicular to the normal.
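    The diffuse equation itself falls between excerpts; the standard Lambertian form, confirmed by the shader code below (s is the direction towards the light, n the normal, K_d the diffuse reflectivity, L_d the light intensity), is:

        L = L_d \, K_d \, \max(s \cdot n, 0)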

  • Page - 65

    … equations should be treated as component-wise operations, applied to each of the three components separately. Luckily, the GLSL will make this nearly transparent because the needed operators will operate component-wise on vector variables. Getting ready. Start with an OpenGL application that provides the vertex position in attribute location 0, and the vertex normal in attribute location 1 …

  • Page - 66

        vec3 s = normalize(vec3(LightPosition - eyeCoords));

        // The diffuse shading equation
        LightIntensity = Ld * Kd * max( dot( s, tnorm ), 0.0 );

        // Convert position to clip coordinates and pass along
        gl_Position = MVP * vec4(VertexPosition,1.0);
    }

    2. Use the following code for the fragment shader.

        #version 400
        in vec3 …

  • Page - 67

    Next, we compute the scattered light intensity using the equation described above and store the result in the output variable LightIntensity. Note the use of the max function here. If the dot product is less than zero, then the angle between the normal vector and the light direction is greater than 90 degrees. This means that the surface faces away from the light and should receive no direct illumination …

  • Page - 68

    … similar to the one presented here. It models the light-surface interaction as a combination of three components: ambient, diffuse, and specular. The ambient component is intended to model light that arrives uniformly from all directions. The diffuse component was discussed in the previous recipe, … the specular component models the shininess of the surface … components together can model a nice (but limited) …

  • Page - 69

    The diffuse component models a rough surface that scatters light in all directions (see Implementing diffuse per-vertex shading with a single point light source). The intensity of the outgoing light depends on the angle between the surface normal and the vector towards the light source. The specular component is used for modeling the shininess of the surface …

  • Page - 70

    … r, and to fall off quickly as the viewer moves further away from alignment with r. This can be modeled using the cosine of the angle between v and r raised to some power (f). (Recall that the dot product is proportional to the cosine of the angle between the vectors involved.) The larger the power, the faster the value drops towards zero …
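    The specular equation falls between excerpts; the form implied by this description and by the shader code below is:

        I_s = L_s \, K_s \, (\max(r \cdot v, 0))^f

    where K_s is the specular reflectivity, L_s the specular light intensity, and f the shininess exponent.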

  • Page - 71

    Getting ready. In the OpenGL application, provide the vertex position in location 0 and the vertex normal in location 1. The parameters of the shading equation are uniform variables in the vertex shader and their values must be set from the OpenGL application.

  • Page - 72

        vec3 r = reflect( -s, tnorm );
        vec3 ambient = Light.La * Material.Ka;
        float sDotN = max( dot(s,tnorm), 0.0 );
        vec3 diffuse = Light.Ld * Material.Kd * sDotN;
        vec3 spec = vec3(0.0);
        if( sDotN > 0.0 )
            spec = Light.Ls * Material.Ks *
                   pow( max( dot(r,v), 0.0 …

  • Page - 73

    The ambient component is computed and stored in the variable ambient. The dot product of s and n is computed next. As in the preceding recipe, we use the built-in function max to limit the range of values to between zero and one. The result is stored in the variable named sDotN, and is used to compute the diffuse component.

  • Page - 74

    We can avoid the extra normalization needed to compute the vector towards the viewer (v) by using a so-called non-local viewer. Instead of computing the direction towards the origin, we simply use the constant vector (0, 0, 1) for all vertices. This is similar to assuming that the viewer is infinitely far from the scene; the visual results are very similar, often visually …

  • Page - 75

    See also:
      • Chapter 3, Using a directional light source
      • Chapter 3, Per-fragment shading
      • Chapter 3, Using the halfway vector for improved performance
    GLSL supports functions that are syntactically similar to C functions. However, the calling conventions are somewhat different. In this example, we'll revisit the ADS shader using functions to help provide …

  • Page - 76

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 ProjectionMatrix;
        uniform mat4 MVP;

        void getEyeSpace( out vec3 norm, out vec4 position ) {
            norm = normalize( NormalMatrix * VertexNormal);
            position = ModelViewMatrix * vec4(VertexPosition,1.0);
        }

        vec3 phongModel( vec4 position, vec3 norm ) {
            vec3 s = normalize(vec3(Light.Position - …

  • Page - 77

    3. Compile and link both shaders within the OpenGL application, and install the shader program prior to rendering. How it works... In GLSL functions, the evaluation strategy is "call by value-return" (also called "call by copy-restore"). Parameters may be qualified with in, out, or inout. Arguments corresponding to input parameters (those qualified with in or inout) are copied into the parameter variable at call time, and output parameters (those qualified with out or inout) are copied back to the corresponding argument …
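    A tiny illustration of the qualifiers (our own example, not from the book):

        // 'value' is copied in; 'doubled' is copied back to the caller on return
        void doubleIt( in float value, out float doubled ) {
            doubled = value * 2.0;
        }

        // 'accum' is copied in and copied back out again
        void addTo( inout float accum, in float amount ) {
            accum += amount;
        }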

  • Page - 78

    It should be noted that when passing arrays or structures to functions, they are passed by value. If a large array or structure is passed, it can incur a large copy operation which may not be desired. It would be a better choice to declare these variables in the global scope. See also: Implementing per-vertex ambient, diffuse, and specular (ADS) shading. When …

  • Page - 79

    How to do it... To implement a shader pair that uses the ADS shading model with two-sided lighting, use the following code: 1. Use the following code for the vertex shader:

        #version 400
        layout (location = 0) in vec3 VertexPosition;
        layout (location = 1) in vec3 VertexNormal;

        out vec3 FrontColor;
        out vec3 BackColor;

        struct LightInfo { …

  • Page - 80

            BackColor = phongModel( eyeCoords, -tnorm );
            gl_Position = MVP * vec4(VertexPosition,1.0);
        }

    2. Use the following for the fragment shader:

        #version 400
        in vec3 FrontColor;
        in vec3 BackColor;
        layout( location = 0 ) out vec4 FragColor;

        void main() {
            if( gl_FrontFacing ) {
                FragColor = vec4(FrontColor, 1.0);
            } else { …

  • Page - 81

    In the fragment shader, we determine which color to apply based on the value of the built-in variable gl_FrontFacing. This is a Boolean value that indicates whether the fragment is part of a front or back facing polygon. Note that this determination is based on the winding of the polygon, and not the normal vector. (A polygon is …

  • Page - 82

    See also: Implementing per-vertex ambient, diffuse, and specular (ADS) shading. Per-vertex shading involves computation of the shading model at each vertex and associating the result (a color) with that vertex. The colors are then interpolated across the face of the polygon to produce a smooth shading effect. This is also referred to as Gouraud shading. In earlier …

  • Page - 83

    How to do it... 1. Use the same vertex shader as in the ADS example provided earlier. Change the output variable LightIntensity as follows:

        #version 400
        layout (location = 0) in vec3 VertexPosition;
        layout (location = 1) in vec3 VertexNormal;

        flat out vec3 LightIntensity;

        // the rest is identical to the ADS shader…

    2. Use the following code …

  • Page - 84

    See also: Implementing per-vertex ambient, diffuse, and specular (ADS) shading. In GLSL, a subroutine is a mechanism for binding a function call to one of a set of possible function definitions, much like function pointers in C. A uniform variable serves as the pointer and is used to invoke the function. The value of this variable can be set from the OpenGL side, thereby binding it to one of a …

  • Page - 85

    In the following image, we see an example of a rendering that was created using subroutines. The teapot on the left is rendered with the full ADS shading model, and the teapot on the right is rendered with diffuse shading only. A subroutine is used to switch between shader functionality. Getting ready. As with previous recipes, …

  • Page - 86

            vec4 Position;  // Light position in eye coords.
            vec3 La;        // Ambient light intensity
            vec3 Ld;        // Diffuse light intensity
            vec3 Ls;        // Specular light intensity
        };
        uniform LightInfo Light;

        struct MaterialInfo {
            vec3 Ka;  // Ambient reflectivity
            vec3 Kd;  // …

  • Page - 87

            getEyeSpace(eyeNorm, eyePosition);

            // Evaluate the shading equation. This will call one of
            // the functions: diffuseOnly or phongModel.
            LightIntensity = shadeModel( eyePosition, eyeNorm );

            gl_Position = MVP * vec4(VertexPosition,1.0);
        }

    2. Use the following code for the fragment shader:

        #version 400
        in vec3 …

  • Page - 88

    After creating the new subroutine type, we declare a uniform variable of that type named shadeModel.

        subroutine uniform shadeModelType shadeModel;

    This variable serves as our function pointer and will be assigned to one of the two possible functions in the OpenGL application.

        subroutine ( shadeModelType )

    This indicates that the function matches the subroutine type, …

  • Page - 89

    … We didn't query for the index before calling glUniformSubroutinesuiv! The reason that this code works is that we are relying on the fact that OpenGL will always number the indexes of the subroutines consecutively starting at zero. If we had multiple subroutine uniforms, we could (and should) query for their indexes using glGetSubroutineIndex …
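    A sketch of the explicit, query-based pattern (function names taken from the shader above; standard OpenGL 4.0 calls):

        GLuint adsIndex = glGetSubroutineIndex( programHandle,
                                                GL_VERTEX_SHADER, "phongModel" );
        GLuint diffuseIndex = glGetSubroutineIndex( programHandle,
                                                    GL_VERTEX_SHADER, "diffuseOnly" );

        // Select full ADS shading for the next draw call
        glUniformSubroutinesuiv( GL_VERTEX_SHADER, 1, &adsIndex );
        // ... render ...

        // Switch to diffuse-only shading
        glUniformSubroutinesuiv( GL_VERTEX_SHADER, 1, &diffuseIndex );
        // ... render ...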

  • Page - 90

    Getting ready. The vertex position, normal, and texture coordinates must be provided to the vertex shader from the OpenGL application. The position should be provided at location 0, the normal at location 1, and the texture coordinates at location 2. As in previous examples, the lighting parameters must be set from the OpenGL application via the appropriate …

  • Page - 91

        };
        uniform LightInfo Light;

        struct MaterialInfo {
            vec3 Ka;         // Ambient reflectivity
            vec3 Kd;         // Diffuse reflectivity
            vec3 Ks;         // Specular reflectivity
            float Shininess; // Specular shininess factor
        };
        uniform MaterialInfo Material;

        uniform mat4 …

  • Page - 92

        in vec2 TexCoord;
        layout( location = 0 ) out vec4 FragColor;

        void main() {
            const float scale = 15.0;
            bvec2 toDiscard = greaterThan( fract(TexCoord * scale),
                                           vec2(0.2,0.2) );
            if( all(toDiscard) )
                discard;
            if( gl_FrontFacing ) …

  • Page - 93

    If both components of the vector toDiscard are true, then the fragment lies within the inside of each lattice frame, and therefore we wish to discard this fragment. We can use the built-in function all to help with this check. The function all will return true if all of the components of the parameter vector are true. If the …

  • Page - 94

    Chapter 3: Lighting, Shading Effects, and Optimizations. In this chapter, we will cover:
      • Shading with multiple positional lights
      • Shading with a directional light source
      • Using per-fragment shading for improved realism
      • Using the halfway vector for improved performance
      • Simulating a spotlight
      • Creating a cartoon shading effect
      • Simulating fog
    Introduction. In Chapter 2, we covered a number of techniques for implementing …

  • Page - 95

    When shading with multiple light sources, we need to evaluate the shading equation for each light. The natural choice is to create uniform arrays to store the position and intensity of each light. We'll use an array of structures so that we can store the values for multiple lights within a single uniform variable. …

  • Page - 96

            vec4 Position;  // Light position in eye coords.
            vec3 Intensity; // Light intensity
        };
        uniform LightInfo lights[5];

        // Material parameters
        uniform vec3 Kd;         // Diffuse reflectivity
        uniform vec3 Ka;         // Ambient reflectivity
        uniform vec3 Ks;         // Specular reflectivity
        uniform float Shininess; …

  • Page - 97

            FragColor = vec4(Color, 1.0);
        }

    3. In the OpenGL application, set the values for the lights array in the vertex shader. For each light, use something similar to the following code. This example uses the C++ shader program class (prog is a GLSLProgram object).

        prog.setUniform("lights[0].Intensity", vec3(0.0f,0.8f,0.8f) ); …

  • Page - 98

    Of course, we are ignoring the fact that, in reality, the intensity of the light decreases with the square of the distance from the source. However, it is not uncommon to ignore this aspect for directional light sources. If we are using a directional light source, the direction towards the source is the same for every surface point, which also saves computation because we no longer need to …

  • Page - 99

    How to do it... To create a shader program that implements ADS shading using a directional light source, use the following code: 1. Use the following vertex shader:

        #version 400
        layout (location = 0) in vec3 VertexPosition;
        layout (location = 1) in vec3 VertexNormal;

        out vec3 Color;

        uniform vec4 LightPosition;
        uniform vec3 …

  • Page - 100

        gl_Position = MVP * vec4(VertexPosition,1.0);
    }

    2. Use the same simple fragment shader from the previous recipe:

        #version 400
        in vec3 Color;
        layout( location = 0 ) out vec4 FragColor;

        void main() {
            FragColor = vec4(Color, 1.0);
        }

    How it works... Within the vertex shader, the fourth coordinate of the uniform variable LightPosition is used to determine whether to treat the variable as a direction or as a position …
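    The test the text describes typically looks like this in the vertex shader (a sketch; eyeCoords is the vertex position in eye space):

        vec3 s;
        if( LightPosition.w == 0.0 )
            s = normalize( vec3(LightPosition) );             // directional: xyz is the direction
        else
            s = normalize( vec3(LightPosition - eyeCoords) ); // positional: direction towards light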

  • Page - 101

    When the shading equation is evaluated within the vertex shader (as we have done in previous recipes), we end up with a color associated with each vertex. That color is then interpolated across the face, and the fragment shader assigns that interpolated color to the output fragment. As mentioned previously (Chapter 2), …

  • Page - 102

    Getting ready. Set up your OpenGL program with the vertex position in attribute location zero, and the normal in location one. Your OpenGL application must also provide the values for the uniform variables Ka, Kd, Ks, Shininess, LightPosition, and LightIntensity. The latter two are the position of the light in eye coordinates, and the intensity of the light source, …

  • Page - 103

        uniform float Shininess;  // Specular shininess factor

        layout( location = 0 ) out vec4 FragColor;

        vec3 ads( ) {
            vec3 n = normalize( Normal );
            vec3 s = normalize( vec3(LightPosition) - Position );
            vec3 v = normalize(vec3(-Position));
            vec3 r = reflect( -s, n );
            return LightIntensity * ( …

  • Page - 104

    See also: Chapter 2, Implementing per-vertex ambient, diffuse, and specular (ADS) shading. As covered in the recipe Implementing per-vertex ambient, diffuse, and specular (ADS) shading in Chapter 2, the specular term in the ADS shading equation involves the dot product of the reflection vector (r) and the direction towards the viewer (v), where r is the reflection of the light direction (s) about the normal vector (n). This equation is implemented …

  • Page - 105

    The following picture shows the relative positions of the halfway vector and the others. We could then replace the dot product in the equation for the specular component with the dot product of h and n. Computing h requires fewer operations than it takes to compute r, so we should expect some performance benefit. … (r) and the vector …
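    The halfway vector's definition falls between excerpts; consistent with the code below, it is the normalized sum of the direction towards the viewer and the direction towards the light source:

        h = \frac{v + s}{\lVert v + s \rVert}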

  • Page - 106

            LightIntensity * (Ka +
                Kd * max( dot(s, Normal), 0.0 ) +
                Ks * pow(max(dot(h,n),0.0), Shininess ) );
        }

    How it works... We compute the halfway vector by summing the direction towards the viewer (v), and the direction towards the light source (s), and normalizing the result. The value for …

  • Page - 107

    … spotlights. In such a configuration, the light was restricted to a cone, the apex of which was located at the light source. Additionally, the light was attenuated so that it was maximal along the axis of the cone and decreased towards the outside edges. This allowed us to create light sources that had a similar visual effect to a real spotlight. The following image shows …

  • Page - 108

    … The light from a spotlight is considered to be strongest along the axis of the cone, and decreases as you move towards the edges. Getting ready. Start with the same vertex shader from the recipe Using per-fragment shading for improved realism. Your program must set the uniforms for that vertex shader as well as the fragment shader shown below.

  • Page - 109

            vec3 h = normalize( v + s );
            return ambient +
                   spotFactor * Spot.intensity * (
                       Kd * max( dot(s, Normal), 0.0 ) +
                       Ks * pow(max(dot(h,Normal), 0.0),Shininess));
        } else {
            return ambient;
        }
    }

    void main() {
        FragColor = vec4(adsWithSpotlight(), 1.0);
    }

    How it works... …

  • Page - 110

    The function adsWithSpotlight computes the standard ambient, diffuse, and specular (ADS) shading equation, using a vector from the surface location to the spotlight's position (s). Next, the spotlight's direction is normalized and stored within spotDir. The angle between spotDir and the negation of s is then computed and stored in the variable angle. The variable cutoff stores the value of …
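    The code that this description refers to typically follows this pattern (a sketch, assuming a Spot uniform struct with position, direction, cutoff in degrees, and exponent members):

        vec3 s = normalize( vec3(Spot.position) - Position );
        vec3 spotDir = normalize( Spot.direction );
        float angle = acos( dot(-s, spotDir) );
        float cutoff = radians( clamp( Spot.cutoff, 0.0, 90.0 ) );
        if( angle < cutoff ) {
            float spotFactor = pow( dot(-s, spotDir), Spot.exponent );
            // ... scale the diffuse and specular terms by spotFactor,
            // as in the listing above
        }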

  • Page - 111

    The basic effect is to have large areas of constant color with sharp transitions between them. This simulates the way that an artist might shade an object using strokes of a pen or brush. The following image shows an example of a teapot and torus rendered with toon shading. The technique presented here involves computing …

  • Page - 112

    How to do it... To create a shader program that produces a toon shading effect, use the following fragment shader:

        #version 400
        in vec3 Position;
        in vec3 Normal;

        struct LightInfo {
            vec4 position;
            vec3 intensity;
        };
        uniform LightInfo Light;

        uniform vec3 Kd;  // Diffuse reflectivity
        uniform vec3 Ka;  // Ambient …

  • Page - 113

    The function toonShade first computes s, the vector towards the light source. Next, we compute the cosine term of the diffuse component by evaluating the dot product of s and Normal. The next line quantizes that value in the following way. Since the two vectors are normalized, and we have removed negative values with the max …
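    The quantization line the excerpt cuts off typically looks like this (names ours; levels is an integer uniform and scaleFactor = 1.0 / levels):

        float cosine = max( dot( s, Normal ), 0.0 );
        // floor() snaps the cosine to one of 'levels' discrete bands
        vec3 diffuse = Kd * floor( cosine * levels ) * scaleFactor;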

  • Page - 114

    The fog factor is computed as

        f = \frac{d_{max} - |z|}{d_{max} - d_{min}}

    clamped to the range [0, 1]. In the preceding equation, d_min is the distance from the eye where the fog is minimal (no fog contribution), and d_max is the distance where the fog color obscures all other colors in the scene. The variable z represents the distance from the eye. The value f is the fog factor. A fog factor of zero represents 100% fog, and a factor of one …

  • Page - 115

    How to do it... To create a shader that produces a fog-like effect, use the following code for the fragment shader.

        #version 400
        in vec3 Position;
        in vec3 Normal;

        struct LightInfo {
            vec4 position;
            vec3 intensity;
        };
        uniform LightInfo Light;

        struct FogInfo {
            float maxDist;
            float minDist;
            vec3 color;
        };
        uniform …

  • Page - 116

        vec3 shadeColor = ads();
        vec3 color = mix( Fog.color, shadeColor, fogFactor );
        FragColor = vec4(color, 1.0);
    }

    How it works... In this shader, the ads function is almost exactly the same as the one used in the recipe Using the halfway vector for improved performance. The differences are only in the choice of variable names. The part of this …

  • Page - 117

    In the above code, we used the absolute value of the z coordinate as the distance from the camera. This may cause the fog to look a bit unrealistic in certain situations. To compute a more precise distance, we could replace the line:

        float dist = abs( Position.z );

    with the following:

        float dist = length( Position.xyz );

  • Page - 118

    Chapter 4: Using Textures. In this chapter, we will cover:
      • Applying a 2D texture
      • Applying multiple textures
      • Using alpha maps to discard pixels
      • Using normal maps
      • Simulating refraction with cube maps
      • Image-based lighting
      • Applying a projected texture
      • Rendering to a texture
    Introduction. Textures are an important and fundamental aspect of real-time rendering in general, and OpenGL in particular …

  • Page - 119

    In this chapter, we'll look at some basic and advanced texturing techniques. We'll start with the basics, just applying color textures, and move on to using textures as normal maps and for simulating refraction. We'll see an example of projecting a texture onto several objects in a scene, similar to the way that a slide projector projects an image. Finally, …

  • Page - 120

    How to do it... To render a simple shape with a 2D texture, use the following steps: 1. In your initialization of the OpenGL application, use the following code to load the texture. (The following makes use of the Qt libraries, and assumes that the handle to the shader program is stored in programHandle.)

        // Load texture file
        const char * texName = …

  • Page - 121

        void main() {
            TexCoord = VertexTexCoord;
            Normal = normalize( NormalMatrix * VertexNormal);
            Position = vec3( ModelViewMatrix * vec4(VertexPosition,1.0) );
            gl_Position = MVP * vec4(VertexPosition,1.0);
        }

    3. Use the following code for the fragment shader:

        #version 400
        in vec3 Position;
        in vec3 …

  • Page - 122

    How it works... … the texture data to OpenGL memory, and initialize the sampler variable within the GLSL programming environment. As I prefer to use the Qt libraries, this example uses the classes QGLWidget and QImage to assist with the process. The QImage constructor takes the name of the texture file; the QImage object is immediately passed to the static method convertToGLFormat, …

  • Page - 123

    The vertex shader is very similar to the one used in previous examples except for the addition of the texture coordinate input variable VertexTexCoord, which is bound to attribute location 2. Its value is simply passed along to the fragment shader by assigning it to the shader output variable TexCoord. As just stated, we need to provide …

  • Page - 124

    The application of multiple textures to a surface can be used to create a wide variety of effects. The base layer texture might represent the "clean" surface and the second layer could provide additional detail such as shadow, blemishes, roughness, or damage. In many games, so-called light maps are applied as an additional texture layer to provide …

  • Page - 125

    How to do it... To render objects with multiple textures, use the following steps: 1. In the initialization section of your OpenGL program, load the two images into texture memory in the same way as indicated in the previous recipe Applying a 2D texture. Make sure that the brick texture is loaded into texture unit 0 and the moss texture is in texture unit 1 …

  • Page - 126

        if( uniloc >= 0 )
            glUniform1i(uniloc, 1);

    2. Use the vertex shader from the previous recipe Applying a 2D texture.
    3. Starting with the fragment shader from the recipe Applying a 2D texture, replace the declaration of the sampler variable Tex1 with the following code:

        uniform sampler2D BrickTex;
        uniform sampler2D MossTex;

    4. Replace the main function in …

  • Page - 127

    In this case, the moss color would be the source color, and the brick color would be the destination color. Finally, we multiply the result of the mix function by the ambient and diffuse components of the shading model. There's more... In this example, we mixed the two texture colors together using the alpha value of the second texture. This is just one of …

  • Page - 128

    If we create a texture map that has an alpha channel, we can use the value of the alpha channel to determine whether or not the fragment should be discarded. If the alpha value is below a certain value, then the pixel is discarded. As this will allow the viewer to see within the object, possibly making some back faces visible, we'll need to …

  • Page - 129

        } else {
            FragColor = vec4(phongModel(Position,-Normal),1.0) * baseColor;
        }
    }

    How it works... Within the main function of the fragment shader, we access the base color texture, and store the result in baseColor. We access the alpha map texture and store the …

  • Page - 130

    A normal map is a texture in which the data stored within the texture is interpreted as normal vectors instead of colors. The normal vectors are typically encoded into the RGB information of the normal map such that the red channel contains the x coordinate, the green channel contains the y, and the blue channel contains the z coordinate. The normal map …

  • Page - 131

    Normal maps are interpreted as vectors in tangent space (also called the object local coordinate system). In the tangent coordinate system, the origin is located at the surface point and the normal to the surface is aligned with the z axis (0, 0, 1). Therefore, the x and y axes are at a tangent to the surface. The following image shows …

  • Page - 132

    The transformation into tangent space is done with the matrix whose rows are the tangent (t), binormal (b), and normal (n) vectors:

        S = M_{tbn} \, P

    In the preceding equation, S is the point in tangent space and P is the point in eye coordinates. In order to apply this transformation within the vertex shader, the OpenGL program must provide the basis vectors along with the vertex position. The usual situation is to provide the normal vector (n) and the tangent vector (t). If the tangent vector is provided, the binormal vector can be computed as the cross product of the normal and tangent vectors …

  • Page - 133

        layout (location = 2) in vec2 VertexTexCoord;
        layout (location = 3) in vec4 VertexTangent;

        struct LightInfo {
            vec4 Position;  // Light position in eye coords.
            vec3 Intensity; // A,D,S intensity
        };
        uniform LightInfo Light;

        out vec3 LightDir;
        out vec2 TexCoord;
        out vec3 ViewDir;

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 ProjectionMatrix; …

  • Page - 134

    2. Use the following code for the fragment shader:

        #version 400
        in vec3 LightDir;
        in vec2 TexCoord;
        in vec3 ViewDir;

        uniform sampler2D ColorTex;
        uniform sampler2D NormalMapTex;

        struct LightInfo {
            vec4 Position;  // Light position in eye coords.
            vec3 Intensity; // A,D,S intensity
        };
        uniform LightInfo Light;

        struct MaterialInfo {
            vec3 Ka; …

  • Page - 135

    How it works... The vertex shader starts by transforming the vertex normal and the tangent vectors into eye coordinates by multiplying by the normal matrix (and re-normalizing). The binormal vector is then computed as the cross product of the normal and tangent vectors. The result is multiplied by the w coordinate of the vertex tangent vector, …

  • Page - 136

    … mirror-like surface such as chrome). In order to do so, we need a texture that is representative of the environment surrounding the object. This general technique is known as environment mapping. In general, environment mapping involves creating a texture that is representative of the environment and mapping it … A cube map is one of the more common varieties of textures used in …

  • Page - 137

    Truth be told, the conversion between the 3-dimensional texture coordinate used to access the cube map, and the 2-dimensional texture coordinate used to access the individual face image, is somewhat complicated. It can be non-intuitive and confusing. A very good explanation can be found on NVIDIA's developer website: http://developer.nvidia.com/ …

  • Page - 138

    1. Load the six images of the cube map into a single texture target using the following code within the main OpenGL program:

        glActiveTexture(GL_TEXTURE0);
        GLuint texID;
        glGenTextures(1, &texID);
        glBindTexture(GL_TEXTURE_CUBE_MAP, texID);
        const char * suffixes[] = { "posx", "negx", "posy", …

  • Page - 139

    2. Use the following code for the vertex shader:

        #version 400
        layout (location = 0) in vec3 VertexPosition;
        layout (location = 1) in vec3 VertexNormal;
        layout (location = 2) in vec2 VertexTexCoord;

        out vec3 ReflectDir;  // The direction of the reflected ray

        uniform bool DrawSkyBox;  // Are we drawing the sky box?
        uniform vec3 WorldCameraPosition;
        uniform mat4 …

  • Page - 140

        // Access the cube map texture
        vec4 cubeMapColor = texture(CubeMapTex,ReflectDir);
        if( DrawSkyBox )
            FragColor = cubeMapColor;
        else
            FragColor = mix(MaterialColor, cubeMapColor, ReflectFactor);

    4. In the render portion of the OpenGL program, set the …

  • Page - 141

    … not change as the camera moves within the scene. In the else branch within the main function, we start by converting the position to world coordinates and storing it in worldPos. We then do the same for the normal, storing the result in worldNorm. Note that the ModelMatrix is used to transform the vertex normal. It is important when doing …

  • Page - 142

    If we are drawing the sky box, we simply use the color unchanged. However, if we are not drawing the sky box, then we'll mix the sky box color with some material color. This allows us to provide some slight "tint" to the object. The amount of tint is adjusted by the variable ReflectFactor. The following image shows the result with two different values of ReflectFactor; the one on the right uses a value …

  • Page - 143

    Objects that are transparent cause the light rays that pass through them to bend slightly at the interface between the object and the surrounding environment. This effect is called refraction. When rendering transparent objects, we simulate that effect by using an environment map, and mapping the environment onto the object in such a way as to mimic …

  • Page - 144

    Getting ready. Set up your OpenGL program to provide the vertex position in attribute location 0 and the vertex normal in attribute location 1. As with the previous recipe, we'll need to provide the model matrix in the uniform variable ModelMatrix. Load the cube map using the technique shown in the previous recipe. Place it in texture unit zero, and …

  • Page - 145

        {
            if( DrawSkyBox ) {
                ReflectDir = VertexPosition;
            } else {
                vec3 worldPos = vec3( ModelMatrix * vec4(VertexPosition,1.0) );
                vec3 worldNorm = vec3(ModelMatrix * …

  • Page - 146

    3. In the render portion of the OpenGL program, set the uniform DrawSkyBox to true, and then draw a cube surrounding the entire scene, centered at the origin. This will become the sky box. Following that, set DrawSkyBox to false, and draw the object(s) within the scene. How it works... Both shaders are quite similar to the shaders in the previous …

  • Page - 147

    There are a number of things about the technique that could be improved to provide more realistic looking results. In reality, the amount of light that is reflected rather than refracted depends on the angle of incidence of the incoming light. For example, when looking at the surface of a lake from the shore, much of the light is reflected … The relationship is described by the Fresnel equations (after Augustin-Jean Fresnel), which depend on the angle of incidence, the polarization of the light, and the ratio of the indices of refraction. …
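    The excerpt breaks off here; in shader practice, the full Fresnel equations are commonly replaced by Schlick's approximation (our addition, not shown in the excerpt):

        F(\theta) \approx F_0 + (1 - F_0)\,(1 - \cos\theta)^5,
        \qquad F_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2

    where \theta is the angle of incidence and n_1, n_2 are the indices of refraction on either side of the interface.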

  • Page - 148

    Environment maps are images of the surrounding environment for a scene. Embedded within the environment map is information about the lighting environment. For example, an environment map may contain an interior scene with several windows, each of which is lit with a bright source from outside. It would substantially enhance the realism of the scene if the objects …

  • Page - 149

    In the preceding equation, r_i is …, I_i is the intensity of the ith light source, and v is the direction towards the viewer. As with the diffuse environment map, we can store the result of this sum in a texel in the specular environment map corresponding to the value of v, where v is the vector used to access the texel. To evaluate this …

  • Page - 150

        uniform samplerCube DiffuseMap;  // The diffuse env. map
        uniform samplerCube SpecMap;     // The specular env. map
        uniform bool DrawSkyBox;

        struct MaterialInfo {
            vec3 BaseColor;
            float DiffuseFactor;
            float SpecFactor;
        };
        uniform MaterialInfo Material;

        layout( location = 0 ) out vec4 FragColor;

        void main() {
            // Access the irradiance maps …

  • Page - 151

There's more...

This technique produces very realistic results, although it may require some trial and error to get the settings to look good. The mixing of colors within the fragment shader is just a rough approximation. As with previous recipes, this suffers from the drawback that the environment is treated as if it were distant, so the lighting doesn't change as the object moves. Also, if the positions of the lights in …

To project a texture onto a surface, all we need to do is determine the texture coordinates based on the relative position of the surface location and the source of the projection (the "slide projector"). An easy way to do this is to think of the projector as a camera located at the projector's position, with a view matrix (V) that transforms points into the projector's frame of reference, and a projection matrix (P) that converts the view frustum …

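Using the GLM library, the full projector matrix might be assembled as follows. This is a sketch with illustrative values; note that older versions of GLM (contemporary with this book) expect the field of view in degrees:

    glm::vec3 projPos(2.0f, 5.0f, 5.0f);   // projector location (example)
    glm::vec3 projAt(-2.0f, -4.0f, 0.0f);  // point the projector aims at
    glm::vec3 projUp(0.0f, 1.0f, 0.0f);
    glm::mat4 projView = glm::lookAt(projPos, projAt, projUp);
    glm::mat4 projProj = glm::perspective(30.0f, 1.0f, 0.2f, 1000.0f);
    // Bias: scale and translate from [-1,1] to [0,1]
    glm::mat4 bias = glm::translate(glm::mat4(1.0f), glm::vec3(0.5f)) *
                     glm::scale(glm::mat4(1.0f), glm::vec3(0.5f));
    glm::mat4 projectorMatrix = bias * projProj * projView;
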
How to do it...

To apply a projected texture to a scene, use the following steps:

1. In the OpenGL application, load the texture into texture unit zero. While the texture object is bound to the GL_TEXTURE_2D target, use the following code to set the texture's settings:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, …

uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;

void main()
{
    vec4 pos4 = vec4(VertexPosition,1.0);
    EyeNormal = normalize(NormalMatrix * VertexNormal);
    EyePosition = ModelViewMatrix * pos4;
    ProjTexCoord = ProjectorMatrix * (ModelMatrix * pos4);
    gl_Position = MVP * pos4;
}

4. Use the following code for the …

    return ambient + diffuse + spec;
}

void main()
{
    vec3 color = phongModel(vec3(EyePosition), EyeNormal);
    vec4 projTexColor = vec4(0.0);
    if( ProjTexCoord.z > 0.0 )
        projTexColor = textureProj(ProjectorTex, ProjTexCoord);
    FragColor = vec4(color,1.0) + projTexColor * 0.5;
}

How it works...

When loading the texture into the …

The function textureProj is designed for accessing textures with coordinates that have been projected. It will divide the coordinates of the second argument by its last coordinate before accessing the texture. In our case, that is exactly what we want. We mentioned earlier that after transforming by the projector's matrix, we will be left with …

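In other words, for a (non-shadow) sampler2D and a vec4 coordinate, the call textureProj(ProjectorTex, ProjTexCoord) behaves like performing the perspective division by hand:

    texture( ProjectorTex, ProjTexCoord.xy / ProjTexCoord.w );
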
…the introduction of framebuffer objects (FBOs). We can create a separate rendering target buffer (the FBO), attach our texture to that FBO, and render to the FBO in exactly the same way that we would render to the default framebuffer. All that is required is to swap in the FBO, and swap it out when we are done. Basically, the process …

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA,
             GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Bind the texture to the FBO
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, …

glUniform1i(loc, 1);

// Set up the projection matrix and view matrix
// appropriately for the scene to be rendered to the texture here.
// (Don't forget to match the aspect ratio of the viewport.)
…
renderTextureScene();

// Unbind the texture's FBO (back to the default FB)
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, width, height);  // Viewport for the main window

// Use the …

As we want to render to the FBO with depth testing, we also need to attach a depth buffer. The next few lines of code create the depth buffer. The function glGenRenderbuffers creates a renderbuffer object, and glRenderbufferStorage allocates space for the renderbuffer. The second argument to glRenderbufferStorage indicates the internal format for the …

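A minimal sketch of that depth buffer setup, assuming a 512 x 512 render target to match the color texture above:

    GLuint depthBuf;
    glGenRenderbuffers(1, &depthBuf);
    glBindRenderbuffer(GL_RENDERBUFFER, depthBuf);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 512, 512);
    // Attach the depth buffer to the FBO
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthBuf);
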
There's more...

As FBOs have multiple color attachment points, we can have several output targets from our fragment shaders. Note that so far, all of our fragment shaders have had only a single output variable assigned to location zero. Hence, we set up our FBO so that its texture corresponds to color attachment zero. In later recipes, we'll …

5
Image Processing and Screen Space Techniques

In this chapter, we will cover:

Creating a "bloom" effect
Using gamma correction to improve image quality
Using multisample anti-aliasing
Using deferred shading

Introduction

In this chapter, we focus on techniques that work directly with the pixels in a framebuffer. These techniques typically involve multiple passes. An …

The ability to render to a texture, combined with the power of the fragment shader, opens up a huge range of possibilities. We can implement image processing techniques such as brightness, contrast, saturation, and sharpness by applying an additional process in the fragment shader prior to output. We can apply convolution …

…to transform a pixel by replacing it with the sum of the products between the values of nearby pixels and a set of pre-determined weights. As a simple example, consider the following filter and block of pixels:

Filter:     Pixels:
1 0 1       25 26 27
0 2 0       28 29 30
1 0 1       31 32 33

The values of the pixels could represent gray-scale intensity or the value of one of the RGB …

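For the center pixel (value 29), applying the filter above amounts to summing the element-wise products:

    1*25 + 0*26 + 1*27 +
    0*28 + 2*29 + 0*30 +
    1*31 + 0*32 + 1*33  = 174

This assumes the filter is applied without normalization; in practice the result is often divided by the sum of the weights (6, in this case).
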
One of the simplest convolution-based techniques for edge detection is the so-called Sobel operator. The Sobel operator is designed to approximate the gradient of the image intensity using two 3x3 filters, one for the vertical and one for the horizontal component of the gradient. We can then use the magnitude of the gradient as our edge trigger. When the magnitude of the gradient is …

EdgeThreshold: The minimum value of g squared required to be considered "on an edge"
RenderTex: The texture associated with the FBO

Any other uniforms associated with the shading model should also be set from the OpenGL application.

How to do it...

To implement the edge detection filter, use the following steps:

1. Use the following code for the vertex shader:

#version 400
layout (location = …

uniform int Width;   // The pixel width
uniform int Height;  // The pixel height

// This subroutine is used for selecting the functionality
// of pass1 and pass2.
subroutine vec4 RenderPassType();
subroutine uniform RenderPassType RenderPass;

// Other uniform variables for the Phong reflection …

                       TexCoord + vec2(dx, dy) ).rgb);
    float s12 = luma(texture( RenderTex,
                       TexCoord + vec2(dx, 0.0) ).rgb);
    float s22 = luma(texture( RenderTex,
                       TexCoord + vec2(dx, -dy) ).rgb);

    float sx = s00 + 2 * s10 + s20 - (s02 + 2 * …

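The luma function used above converts an RGB color to a brightness value. A sketch of how it might be defined, using the standard Rec. 709 luminance weights:

    float luma( vec3 color ) {
        return dot( color, vec3(0.2126, 0.7152, 0.0722) );
    }
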
In the second pass, we select the subroutine function pass2, and render only a single quad that covers the entire screen. The purpose of this is to invoke the fragment shader once for every pixel in the image. In the pass2 function, we retrieve the values of the eight neighboring texels and compute their brightness by calling the luma function. The results are …

…can be useful in many different situations where the goal is to reduce the amount of high-frequency detail. For example, a blur applied prior to the edge detection pass may improve the results by reducing the amount of high-frequency noise. Blurring replaces each pixel's value with a weighted sum of the values of nearby pixels. The weights typically decrease with the distance from the pixel (in 2D screen space) so that pixels that are far away contribute less …

The following images show a portion of an image before (left) and after (right) the Gaussian blur operation.

To apply a Gaussian blur, for each pixel we need to compute the weighted sum of all pixels in the image, scaled by the value of the Gaussian function at that pixel (where the x and y coordinates of each …

Where the one-dimensional Gaussian function is given by the following equation:

G(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2 / (2\sigma^2)}

So if C_{ij} is the color of the pixel at pixel location (i,j), the sum that we need to compute is given by the following equation:

C'_{ij} = \sum_{m}\sum_{n} G(m)\, G(n)\, C_{i+m,\, j+n}

This can be re-written using the fact that the two-dimensional Gaussian is a product of two one-dimensional Gaussians: we first compute the sum over j (the …

…third pass, we'll apply the horizontal sum to the texture from the second pass, and send the results to the default framebuffer.

Getting ready

Set up two framebuffer objects (see Chapter 4, Rendering to a texture), and two corresponding textures. We'll draw a full-screen quad so that the fragment shader executes once for each pixel. As with the previous recipe, we'll use a subroutine to select the functionality …

layout( location = 0 ) out vec4 FragColor;

uniform float PixOffset[5] = float[](0.0,1.0,2.0,3.0,4.0);
uniform float Weight[5];

vec3 phongModel( vec3 pos, vec3 norm )
{
    // The code for the Phong reflection model goes here…
}

subroutine (RenderPassType)
vec4 pass1()
{
    return vec4(phongModel( Position, Normal ),1.0);
}

subroutine( RenderPassType )
vec4 pass2()
{
    …

    return sum;
}

void main()
{
    // This will call either pass1(), pass2(), or pass3()
    FragColor = RenderPass();
}

3. In the OpenGL application, compute the Gaussian weights for the offsets found in the uniform variable PixOffset, and store the results in the array Weight. You could use the following code to do so:

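The listing is truncated here; the following sketch matches the description, assuming a helper gauss that evaluates the 1D Gaussian for a given variance sigma2, with the weights normalized so that the full mirrored kernel sums to one (names and the value of sigma2 are illustrative):

    float gauss( float x, float sigma2 ) {
        const double PI = 3.14159265358979;
        double coeff = 1.0 / sqrt(2.0 * PI * sigma2);
        return (float)(coeff * exp( -(x * x) / (2.0 * sigma2) ));
    }

    float weights[5], sum;
    float sigma2 = 4.0f;
    weights[0] = gauss(0.0f, sigma2);
    sum = weights[0];
    for( int i = 1; i < 5; i++ ) {
        weights[i] = gauss(float(i), sigma2);
        sum += 2 * weights[i];   // each offset is used on both sides
    }
    for( int i = 0; i < 5; i++ )
        weights[i] /= sum;       // set these as the Weight[i] uniforms
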
Use the following steps for pass #3:

1. Deselect the framebuffer (revert to the default), and clear the color buffer.
2. Select the pass3 subroutine function.
3. Bind the texture from pass #2 to texture unit zero.
4. Draw a full-screen quad.

How it works...

In the preceding code for computing the Gaussian weights (code segment 3), the function named gauss …

Of course, we can also adapt the preceding technique to blur a larger range of texels by increasing the size of the arrays Weight and PixOffset and re-computing the weights, and/or we could use different values of sigma2 to vary the shape of the Gaussian.

See also
Chapter 4, Rendering to a texture
Chapter 2, …

With traditional framebuffers, we typically store each color component using eight bits, allowing each component to take on an integer value between 0 and 255. An HDR framebuffer might use more bits per pixel to store a higher dynamic range of values. For example, an HDR buffer might use 16 or 32 bits per component. Despite the fact that HDR produces higher quality …

Getting ready

…will be used for the original render; the second and third will be used for the two passes of the Gaussian blur operation. In the fragment shader, we'll access the original render via the variable RenderTex, and the two stages of the Gaussian blur will be accessed via BlurTex. The uniform variable LumThresh is …

// See Chapter 2 for the ADS shading model code
vec3 phongModel( vec3 pos, vec3 norm ) { … }

// The render pass
subroutine (RenderPassType)
vec4 pass1()
{
    return vec4(phongModel( Position, Normal ),1.0);
}

// Pass to extract the bright parts
subroutine( RenderPassType )
vec4 pass2()
{
    vec4 val = texture(RenderTex, TexCoord);
    return val * clamp( …

                 vec2(PixOffset[i],0.0) * dx ) * Weight[i];
        sum += texture( BlurTex, TexCoord -
                 vec2(PixOffset[i],0.0) * dx ) * Weight[i];
    }
    return val + sum;
}

void main()
{
    // This will call pass1(), pass2(), pass3(), or …

2. Scale the result from step 1, immediately above, by the factor 1.0 / (1.0 - LumThresh). This should produce a value that ranges from zero to one (the luma function returns a value between zero and one for in-gamut colors).
3. We scale the pixel (val) by the result from step 2.

The scale factor that is used here, 1/(1 - LumThresh), may …

Using gamma correction to improve image quality

It is common for many books about OpenGL and 3D graphics to somewhat neglect the subject of gamma correction. Lighting and shading calculations are performed, and the results are sent directly to the output framebuffer. However, this can produce results that don't quite end up looking the way we might expect they should. This may be due to …

In order to compensate for this non-linear response, we can apply gamma correction before sending our results to the output framebuffer. Gamma correction involves raising the pixel intensities to a power that will compensate for the monitor's non-linear response to achieve a perceived result that appears linear. Raising the linear-space values to the power of 1/gamma …

How to do it...

Adding gamma correction to an OpenGL program can be as simple as carrying out the following steps:

1. Set up a uniform variable named Gamma and set it to an appropriate value for your system.
2. Use the following code or something similar in a fragment shader:

vec3 color = lightingModel( … );

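The correction itself is a single pow call. A sketch of how step 2 might conclude, assuming the lighting result is held in color:

    FragColor = vec4( pow( color, vec3(1.0/Gamma) ), 1.0 );
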
Anti-aliasing is the technique of removing or reducing the visual impact of aliasing artifacts that are present whenever high-resolution or continuous information is presented at a lower resolution. In real-time graphics, aliasing often reveals itself in the jagged appearance of polygon edges, or the visual distortion of textures that have a high degree of …

The following image on the right shows the results when multisample anti-aliasing is used. The inset image is a zoomed portion of the inside edge of a torus. On the left, the torus is rendered without MSAA. The right-hand image shows the results with MSAA enabled.

OpenGL has supported multisampling for some time now, and it …

format.setSampleBuffers(true);
format.setSamples(4);
QGLWidget *glView = new QGLWidget(format);

2. To determine whether multisample buffers are available, and how many samples per pixel are actually being used, you can use the following code (or something similar):

GLint bufs, samples;
glGetIntegerv(GL_SAMPLE_BUFFERS, &bufs);
glGetIntegerv(GL_SAMPLES, &samples);
printf("MSAA: buffers = …

Before we can get into how sample and centroid work, we need a bit of background. Let's consider the way that polygon edges are handled without multisampling. A fragment is determined to be inside or outside of a polygon by determining where the center of that pixel lies. If the center is within the polygon, the …

Now, here's the important point. The values of the fragment shader's input variables are normally interpolated to the center of the pixel, rather than to the location of any particular sample. In other words, the value that is used by the fragment shader is determined by interpolating to the location of the fragment's center, which may lie outside the …

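The centroid qualifier addresses this by forcing the interpolated value to be evaluated at a point that lies within both the pixel and the primitive. It must be applied to the output of the previous stage and to the corresponding fragment shader input. A minimal sketch:

    // In the vertex (or geometry) shader:
    centroid out vec2 TexCoord;

    // In the fragment shader:
    centroid in vec2 TexCoord;
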
This shader is designed to color the polygon black unless the s component of the texture coordinate is greater than one. In that case, the fragment gets a yellow color. If we render a square with texture coordinates that range from zero to one in each direction, we may get the results shown in the following …

This, of course, requires that the fragment shader be executed once for each sample. This will produce the most accurate results, but the performance hit may not be worthwhile, especially if the visual results produced by centroid (or the default) are good enough.

Using deferred shading

Deferred shading is a technique that involves postponing the lighting/shading step to …

Getting ready

The g-buffer will contain three textures for storing the position, normal, and diffuse color. There are three uniform variables that correspond to these three textures: PositionTex, NormalTex, and ColorTex; these textures should be assigned to texture units 0, 1, and 2, respectively. Likewise, the vertex shader …

glGenTextures(1, &normTex);
glBindTexture(GL_TEXTURE_2D, normTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// The …

void main()
{
    Normal = normalize( NormalMatrix * VertexNormal );
    Position = vec3( ModelViewMatrix * vec4(VertexPosition,1.0) );
    TexCoord = VertexTexCoord;
    gl_Position = MVP * vec4(VertexPosition,1.0);
}

3. Use the following code for the fragment shader:

#version 400
struct …

    // Store position, normal, and diffuse color in g-buffer
    PositionData = Position;
    NormalData = Normal;
    ColorData = Material.Kd;
}

subroutine(RenderPassType)
void pass2()
{
    // Retrieve position, normal and color information from
    // the g-buffer textures
    vec3 pos = vec3( texture( PositionTex, TexCoord ) );
    vec3 norm …

The three textures are then attached to the framebuffer at color attachments 0, 1, and 2 using glFramebufferTexture2D. They are then connected to the fragment shader's output variables with the call to glDrawBuffers:

glDrawBuffers(4, drawBuffers);

The array drawBuffers indicates the relationship between the framebuffer's components and …

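A sketch of how the array might be declared, assuming the fragment shader's g-buffer outputs are bound to locations 1 through 3 (with location 0 unused in the first pass; the exact layout is an assumption):

    GLenum drawBuffers[] = { GL_NONE, GL_COLOR_ATTACHMENT0,
                             GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(4, drawBuffers);
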
One important drawback of deferred shading is that hardware-enabled multisample anti-aliasing (as discussed in the recipe Using multisample anti-aliasing) doesn't work at all. The multiple samples would be needed in the second pass, because that's where the shading takes place. However, in the second pass, we only have geometry information for a single sample …

6
Using Geometry and Tessellation Shaders

In this chapter, we will cover:

Point sprites with the geometry shader
Drawing a wireframe on top of a shaded mesh
Drawing silhouette lines using the geometry shader
Tessellating a curve
Tessellating a 2D quad
Tessellating a 3D surface
Tessellating based on depth

Introduction

Tessellation and geometry shaders are relatively new additions to …

…program includes geometry and tessellation shaders. The tessellation portion of the shader pipeline includes two stages: the tessellation control shader (TCS) and the tessellation evaluation shader (TES). The geometry shader follows the tessellation stages and precedes the fragment shader. The tessellation shaders and geometry shader …

The geometry shader can output zero, one, or more primitives. Those primitives need not be the same kind of primitive that was received by the geometry shader. However, the GS cannot output more than one primitive type. For example, a GS could receive a triangle and output several line segments as a line strip. Or a GS could receive a triangle and …

glPatchParameteri( GL_PATCH_VERTICES, numPatchVerts );

A very common application of this is when the patch primitive consists of a set of control points. However, there is no reason why the information within the patch primitive couldn't be used for other purposes. The patch primitive is never actually rendered. Instead, the patch is used as …

The number of vertices in the input patch need not be the same as the number of vertices in the output patch, although that will be the case in all of the examples in this chapter.

The TES is executed once for each parameter-space vertex that is generated by the TPG. Its main job is to determine the position of the vertex (possibly along with other information, such as …

The following diagram illustrates this concept when quad tessellation is used. (The figure shows the patch being processed by the TES: tessellation coordinates (u, v) in parameter space are mapped to output vertices.)

If all of this seems confusing, start with the recipe Tessellating a curve, and work your way through the following recipes. In Tessellating a curve, we'll go through a basic example …

The following screenshot shows a group of point sprites. Each sprite is rendered as a point primitive. The quad and texture coordinates are generated automatically (within the geometry shader) and aligned to face the camera.

OpenGL already has built-in support for point sprites in the GL_POINTS rendering mode. When rendering point primitives using this mode, …

If desired, we could generate other shapes such as hexagons, or we could rotate the quads before they are output from the geometry shader. The possibilities are endless. Note, however, that OpenGL's built-in point sprite support is highly optimized and is likely to be faster in general.

Before jumping directly into the code, let's take a look at some of the mathematics. In …

How to do it...

To create a shader program that can be used to render point primitives as quads, use the following steps:

1. Use the following code for the vertex shader:

#version 400
layout (location = 0) in vec3 VertexPosition;

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;

void main()
{
    gl_Position = …

    TexCoord = vec2(1.0,1.0);
    EmitVertex();

    EndPrimitive();
}

3. Use the following code for the fragment shader:

#version 400

in vec2 TexCoord;  // From the geometry shader
uniform sampler2D SpriteTex;
layout( location = 0 ) out vec4 FragColor;

void main()
{
    FragColor = texture(SpriteTex, TexCoord);
}

4. Within the …

The gl_in variable is an array of structs. Each struct contains the following fields: gl_Position, gl_PointSize, and gl_ClipDistance[]. In this example, we are only interested in gl_Position. However, the others can be set in the vertex shader to provide additional information to the geometry shader.

Within the main function of the geometry shader, we produce …

Drawing a wireframe on top of a shaded mesh

The preceding recipe demonstrated the use of a geometry shader to produce a different variety of primitive than it received. Geometry shaders can also be used to provide additional information to later stages. They are quite well suited to do so because they have access to all of the vertices of the primitive …

To compute the distance from a fragment to the edge, we use the following technique. In the geometry shader, we compute the minimum distance from each vertex to the opposite edge (also called the triangle altitude). In the following diagram, the desired distances are ha, hb, and hc. We can compute these altitudes using the interior angles of the triangle, which can be determined using the law of cosines. For example, to find ha, we use the interior …

We will calculate all of this in screen space. That is, we'll transform the vertices to screen space within the geometry shader before computing the altitudes. Since we are working in screen space, there's no need (and it would be incorrect) to interpolate the values in a perspective-correct manner. So we need to be …

uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;

void main()
{
    VNormal = normalize( NormalMatrix * VertexNormal );
    VPosition = vec3(ModelViewMatrix * vec4(VertexPosition,1.0));
    gl_Position = MVP * vec4(VertexPosition,1.0);
}

2. Use the following geometry shader:

#version 400
layout( …

    GNormal = VNormal[0];
    GPosition = VPosition[0];
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3( 0, hb, 0 );
    GNormal = VNormal[1];
    GPosition = VPosition[1];
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3( 0, 0, hc );
    …

    float mixVal = smoothstep( Line.Width - 1,
                               Line.Width + 1, d );

    // Mix the surface color with the line color
    FragColor = mix( Line.Color, color, mixVal );
}

How it works...

The vertex shader is pretty simple. It passes the normal and position along to the …

Once we have the three altitudes, we set GEdgeDistance appropriately for the first vertex, and pass along GNormal, GPosition, and gl_Position by calling EmitVertex(), which emits the per-vertex output variables. We then proceed similarly for the other two vertices of the triangle, finishing the polygon by calling EndPrimitive().

In the fragment shader, we start by evaluating the basic shading model and storing the resulting …

This technique was originally published in an NVIDIA whitepaper in 2007 (Solid Wireframe, NVIDIA Whitepaper WP-03014-001_v01, available at developer.nvidia.com). The whitepaper was listed as a Direct3D example, but of course our implementation here is provided in OpenGL.

See also the recipe Implementing per-vertex ambient, diffuse, and specular (ADS) shading in Chapter 2, and the other geometry shader …

One of the most important features of the geometry shader is that it allows us to provide additional vertex information beyond just the primitive being rendered. When geometry shaders were introduced into OpenGL, several additional primitive rendering modes were also introduced. These "adjacency" modes allow additional vertex …

When a mesh is rendered with adjacency information, the geometry shader has access to all six vertices associated with a particular triangle. We can then use the adjacent triangles to determine whether or not a triangle edge is part of the silhouette of the object. The basic assumption is that an edge is a silhouette edge if the triangle is front-facing and the corresponding adjacent triangle is back-facing.

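A triangle's facing can be determined in screen space from the sign of the z component of the cross product of two of its edges. A sketch of such a helper (a similar function is assumed by the geometry shader below):

    bool isFrontFacing( vec3 a, vec3 b, vec3 c )
    {
        return ((b.x - a.x) * (c.y - a.y) -
                (b.y - a.y) * (c.x - a.x)) > 0.0;
    }
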
How to do it...

To create a shader program that utilizes the geometry shader to render silhouette edges, use the following steps:

1. Use the following vertex shader:

#version 400
layout (location = 0 ) in vec3 VertexPosition;
layout (location = 1 ) in vec3 VertexNormal;

out vec3 VNormal;
out vec3 VPosition;

uniform mat4 …

void emitEdgeQuad( vec3 e0, vec3 e1 )
{
    vec2 ext = PctExtend * (e1.xy - e0.xy);
    vec2 v = normalize(e1.xy - e0.xy);
    vec2 n = vec2(-v.y, v.x) * EdgeWidth;

    // Emit the quad
    GIsEdge = true;  // This is part of the sil. edge
    gl_Position = vec4( e0.xy - ext, e0.z, 1.0 );
    EmitVertex();
    …

    GNormal = VNormal[2];
    GPosition = VPosition[2];
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    GNormal = VNormal[4];
    GPosition = VPosition[4];
    gl_Position = gl_in[4].gl_Position;
    EmitVertex();

    EndPrimitive();
}

3. Use the following fragment shader:

#version 400

//*** Light and …

How it works...

The vertex shader is a simple "pass-through" shader. It converts the vertex position and normal to camera coordinates and sends them along via VPosition and VNormal. These will be used for shading within the fragment shader and will be passed along (or ignored) by the geometry shader. The position is also converted to clip …

Next, the variable v is assigned to the normalized vector from e0 to e1. The variable n gets a vector that is perpendicular to v (in 2D, this can be achieved by swapping the x and y coordinates and negating the new x coordinate). This is just a counter-clockwise 90-degree rotation in 2D. We scale the vector n …

The preceding image shows the feathering of a silhouette edge. The gaps between the edge quads can sometimes be visible, but in practice they haven't been very distracting in my experience.

A second issue is related to depth testing. If an edge polygon extends into another area of the mesh, it can be clipped due to the depth test. The following is an example: the edge polygon should …

In this recipe, we'll take a look at the basics of tessellation shaders by drawing a cubic Bezier curve. A cubic Bezier curve is defined by four control points: the first and last define the endpoints of the curve, and the middle points guide the shape of the curve, but do not necessarily lie on the curve itself. The curve is evaluated by interpolating the control points using a set of blending functions. The blending functions determine how much each control point contributes to the curve for a given position along the curve. For Bezier curves, the …

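For cubic Bezier curves, the blending functions are the cubic Bernstein polynomials:

    B_0(t) = (1-t)^3, \quad B_1(t) = 3t(1-t)^2, \quad B_2(t) = 3t^2(1-t), \quad B_3(t) = t^3

A point on the curve is then given by P(t) = \sum_{i=0}^{3} B_i(t)\, P_i, where P_0 through P_3 are the control points.
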
As stated in the introduction to this chapter, the tessellation functionality within OpenGL involves two shader stages: the tessellation control shader (TCS) and the tessellation evaluation shader (TES). In this example, we'll set the tessellation levels within the TCS, and evaluate the Bezier curve at each particular vertex location within the TES.

The following screenshot shows the output of this example for three different tessellation levels. The left image …

Getting ready

The following are the important uniform variables for this example:

NumSegments: The number of line segments to be produced
NumStrips: The number of isolines to be produced. For this example, this should be set to one
LineColor: The color for the resulting line strip

Set the uniform variables within the main …

3. Use the following as the tessellation evaluation shader:

#version 400
layout( isolines ) in;

uniform mat4 MVP;  // projection * view * model

void main()
{
    // The tessellation u coordinate
    float u = gl_TessCoord.x;

    // The patch vertices (control points)
    vec3 p0 = gl_in[0].gl_Position.xyz;
    vec3 p1 = gl_in[1].gl_Position.xyz;
    …

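The listing is truncated above; a sketch of how the evaluation might conclude, assuming the remaining control points p2 and p3 are read the same way (the Bernstein evaluation follows the formulas given earlier):

    // Evaluate the cubic Bernstein polynomials at u
    float b0 = (1.0 - u) * (1.0 - u) * (1.0 - u);
    float b1 = 3.0 * u * (1.0 - u) * (1.0 - u);
    float b2 = 3.0 * u * u * (1.0 - u);
    float b3 = u * u * u;

    // The point on the curve
    vec3 p = p0*b0 + p1*b1 + p2*b2 + p3*b3;
    gl_Position = MVP * vec4(p, 1.0);
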
6. Render the four control points as a patch primitive within the OpenGL application's render function:

glDrawArrays(GL_PATCHES, 0, 4);

How it works...

The vertex shader is just a "pass-through" shader. It sends the vertex position along to the next stage without modification. In the TCS, the number of output vertices is defined by the layout directive:

layout (vertices = 4) out;

Note that this is not the same as the number …

At the time of writing, there was inconsistency between drivers when it came to the interpretation of gl_TessLevelOuter for isolines. On ATI Catalyst drivers, gl_TessLevelOuter[0] determines the number of segments in each isoline; however, on the NVIDIA drivers that I've tested, it is gl_TessLevelOuter[1], which differs from the diagram. Hopefully, this issue will be worked out soon, but for now, be aware that you may …

One of the best ways to understand OpenGL's hardware tessellation is to visualize the tessellation of a 2D quad. When linear interpolation is used, the triangles that are produced are directly related to the tessellation coordinates (u,v) that are produced by the tessellation primitive generator. It can be extremely helpful …

…gl_TessLevelOuter and gl_TessLevelInner. For example, gl_TessLevelInner[0] corresponds to IL0, gl_TessLevelOuter[2] corresponds to OL2, and so on. If we draw a patch primitive that consists of a single quad (four vertices) and use linear interpolation, the triangles that result can help us to understand how OpenGL does quad tessellation. The following diagram …

Before jumping into the code, let's discuss linear interpolation. If the four corners of the quad are known, then the position of any point within the quad can be determined by linearly interpolating the four corners with respect to the parameters u and v. We'll let the tessellation primitive generator create a set of vertices with appropriate parametric coordinates, and we'll determine the corresponding positions …

void main()
{
    // Pass along the vertex position unmodified
    gl_out[gl_InvocationID].gl_Position =
        gl_in[gl_InvocationID].gl_Position;

    gl_TessLevelOuter[0] = float(Outer);
    gl_TessLevelOuter[1] = float(Outer);
    gl_TessLevelOuter[2] = float(Outer);
    gl_TessLevelOuter[3] = float(Outer);

    gl_TessLevelInner[0] = float(Inner);
    …

uniform vec4 QuadColor;

noperspective in vec3 EdgeDistance;  // From geometry shader

layout ( location = 0 ) out vec4 FragColor;

float edgeMix()
{
    // ** insert code here to determine how much of the edge
    // color to include (see recipe "Drawing a wireframe on
    // top of a shaded mesh"). **
}

void main()
{
    …

The main function in the TES starts by retrieving the parametric coordinates for this vertex by accessing the variable gl_TessCoord. Then we move on to read the positions of the four vertices in the patch from the gl_in array. We store them in temporary variables to be used in the interpolation calculation. The built-in output variable …

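A sketch of the interpolation itself, assuming p0 through p3 hold the corner positions in counter-clockwise order (the corner ordering is an assumption):

    float u = gl_TessCoord.x;
    float v = gl_TessCoord.y;

    vec4 pos = p0 * (1.0 - u) * (1.0 - v) +
               p1 * u * (1.0 - v) +
               p2 * u * v +
               p3 * (1.0 - u) * v;
    gl_Position = MVP * pos;
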
…by a set of 16 control points (laid out in a 4x4 grid) P_{ij}, with i and j ranging from 0 to 3, then the parametric Bezier surface is given by the following equation:

P(u,v) = \sum_{i=0}^{3} \sum_{j=0}^{3} B_i(u)\, B_j(v)\, P_{ij}

The Bs in the above equation are the cubic Bernstein polynomials (see the previous recipe, Tessellating a curve). We also need to compute the …

How to do it...

To create a shader program that creates Bezier patches from input patches of 16 control points, use the following steps:

1. Use the vertex shader from Tessellating a 2D quad.

2. Use the following code for the tessellation control shader:

#version 400
layout( vertices=16 ) out;

uniform int TessLevel;

void main()
{
    // Pass along the …

    // Derivatives
    db[0] = -3.0 * t1 * t1;
    db[1] = -6.0 * t * t1 + 3.0 * t12;
    db[2] = -3.0 * t * t + 6.0 * t * t1;
    db[3] = 3.0 * t * t;
}

void main()
{
    float u = gl_TessCoord.x;
    float v = gl_TessCoord.y;

    // The sixteen control points
    …

    vec4 du = p00*dbu[0]*bv[0] + p01*dbu[0]*bv[1] +
              p02*dbu[0]*bv[2] + p03*dbu[0]*bv[3] +
              p10*dbu[1]*bv[0] + p11*dbu[1]*bv[1] +
              p12*dbu[1]*bv[2] + p13*dbu[1]*bv[3] +
              p20*dbu[2]*bv[0] + p21*dbu[2]*bv[1] +
              p22*dbu[2]*bv[2] + p23*dbu[2]*bv[3] +
              p30*dbu[3]*bv[0] + p31*dbu[3]*bv[1] +
              p32*dbu[3]*bv[2] + …

The tessellation evaluation shader starts by using a layout directive to indicate the type of tessellation to be used. As we are tessellating a 4x4 Bezier surface patch, quad tessellation makes the most sense. The function basisFunctions evaluates the Bernstein polynomials and their derivatives for a given value of the parameter …

When tessellation shaders are used, the tessellation level is what determines the geometric complexity of the object. As the tessellation levels can be set within the tessellation control shader, it is a simple matter to vary the tessellation levels with respect to the distance from the camera. In this example, we'll vary the tessellation levels linearly (with …

How to do it...

To create a shader program that varies the tessellation level based on the depth, use the following steps:

1. Use the vertex shader and tessellation evaluation shader from the recipe Tessellating a 3D surface.

2. Use the following code for the tessellation control shader:

#version 400
layout( vertices=16 ) out;
…

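A sketch of how the TCS might compute the level, assuming uniforms MinTessLevel, MaxTessLevel, MinDepth, and MaxDepth (the last two names are taken from the description below; the others are assumptions):

    vec4 p = ModelViewMatrix * gl_in[gl_InvocationID].gl_Position;

    // "depth" is 0.0 at MinDepth and 1.0 at MaxDepth
    float depth = clamp( (abs(p.z) - MinDepth) /
                         (MaxDepth - MinDepth), 0.0, 1.0 );
    float tessLevel = mix( MaxTessLevel, MinTessLevel, depth );
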
How it works...

The TCS takes the position and converts it to camera coordinates, storing the result in the variable p. The absolute value of the z coordinate is then scaled and clamped such that the result is between zero and one. If the z coordinate is equal to MaxDepth, the value of depth will be 1.0; if it is equal to MinDepth, …

7
Shadows

In this chapter, we will cover:

Rendering shadows with shadow maps
Anti-aliasing shadow edges with PCF
Creating soft shadow edges with random sampling
Improving realism with pre-baked ambient occlusion

Introduction

Shadows add a great deal of realism to a scene. Without shadows, it can be easy to misjudge the relative location of objects, and the …

…ambient occlusion (AO). Ambient occlusion is a technique that takes into account light attenuation due to occlusion. In other words, with AO, the parts of the scene that are "sheltered" (occluded) by nearby objects, such as corners or creases, receive less contribution from the shading model and appear to be shadowed. Ambient occlusion is a technique …

The following image shows an example of shadows produced by the basic shadow mapping technique.

Let's look at each step of the algorithm in a bit more detail. The first step is to create the shadow map by rendering the scene as if the camera is located at the position of the light source and is oriented towards the shadow-casting objects. We set up a projection matrix such that the view …

The following images represent an example of the basic shadow mapping setup. The left image shows the light's position and its associated perspective frustum. The right-hand image shows the corresponding shadow map. The grey-scale intensities in the shadow map correspond to the depth values (darker is closer).

Once we have created the shadow map and …

These are called clip coordinates because the built-in clipping functionality operates within this coordinate space. Points within the perspective (or orthographic) frustum are transformed by the projection matrix to the (homogeneous) space that is contained within a cube centered at the origin, with side length of 2. This space is called the canonical viewing volume. The term "homogeneous" means that these coordinates …

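The bias matrix that the following paragraph describes is the standard one that maps the range [-1,1] to [0,1] in each of x, y, and z:

    B = \begin{pmatrix}
          0.5 & 0   & 0   & 0.5 \\
          0   & 0.5 & 0   & 0.5 \\
          0   & 0   & 0.5 & 0.5 \\
          0   & 0   & 0   & 1
        \end{pmatrix}
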
This matrix will scale and translate our coordinates such that the x, y, and z components range from 0 to 1 (after perspective division) for points within the light's frustum. Now, combining the bias matrix with the light's view (V_l) and projection (P_l) matrices, we have the following equation for converting positions in world coordinates (W) to shadow map coordinates:

Q = B\, P_l\, V_l\, W

…

How to do it...

To create an OpenGL application that creates shadows using the shadow mapping technique, use the following steps. We'll start by setting up an FBO to contain the shadow map texture, and then move on to the required shader code:

1. In the main OpenGL program, set up an FBO with a depth buffer only. Declare a GLuint variable …

GLenum drawBuffers[] = {GL_NONE};
glDrawBuffers(1, drawBuffers);

// Revert to the default framebuffer for now
glBindFramebuffer(GL_FRAMEBUFFER, 0);

2. Use the following code for the vertex shader:

#version 400
layout (location=0) in vec3 VertexPosition;
layout (location=1) in vec3 VertexNormal;

out vec3 Normal;
out vec3 Position;

// Coordinate to be used for shadow map lookup
out …

{
    // Compute only the diffuse and specular components of
    // the Phong shading model.
}

subroutine void RenderPassType();
subroutine uniform RenderPassType RenderPass;

subroutine (RenderPassType)
void shadeWithShadow()
{
    vec3 ambient = …;  // compute ambient component here
    vec3 diffAndSpec = phongModelDiffAndSpec();

    // Do the shadow-map look-up
    float …

Pass 2

1. Select the viewport, view, and projection matrices appropriate for the scene.
2. Bind to the default framebuffer.
3. Disable culling (or switch to back-face culling).
4. Select the subroutine function shadeWithShadow.
5. Draw the scene.

How it works...

The first thing we do is set up the framebuffer object (FBO) for our shadow map texture. The FBO contains only a single texture connected …

The next few lines create and set up the FBO. The shadow map texture is attached to the FBO as the depth attachment with the function glFramebufferTexture2D. For more details about FBOs, check out the recipe in Chapter 4, Rendering to a texture.

The vertex shader is fairly simple. It converts the vertex position and normal to camera coordinates …

When rendering the shadow map, note that we culled the front faces. This is to avoid the speckled "shadow acne" artifacts described below. Note that this only works if our mesh is completely closed. If back faces are exposed, you may need to use another technique (such as glPolygonOffset) to avoid this. I'll talk a bit more about this in the next section.

There's more...

There's a vast amount of information …

When creating the shadow map, we only rendered back faces. This is because of the fact that if we were to render front faces, points on certain faces would have nearly the same depth as the corresponding shadow map texel, resulting in speckled patterns of shadow on surfaces that should be completely lit. The following image shows an example of this effect. Since the majority of faces that cause this issue are those that are …

…Proceedings, Volume 21, Number 4, July 1987). The concept involved transforming the fragment's extents into shadow space, sampling several locations within that region, and computing the percentage that is closer than the depth of the fragment. The result is then used to attenuate the shading, blurring the shadow's edges.

A common variant of the PCF algorithm involves just sampling a constant …

Getting ready

Start with the shaders and FBO presented in the previous recipe, Rendering shadows with shadow maps. We'll just make a few minor changes to the code presented there.

How to do it...

To add the PCF technique to the shadow mapping algorithm, we'll just make a few changes to the shaders from the recipe Rendering shadows with shadow maps:

How it works...

The first change enables linear filtering on the shadow map texture. When linear filtering is enabled on a depth texture with comparison mode active, the OpenGL driver can repeat the depth comparison on the four nearby texels within the texture. The results of the four comparisons will be averaged and returned.

Within the fragment shader, we use the textureProjOffset function to sample the four texels (diagonally) surrounding the texel nearest to ShadowCoord. The third argument is the offset. It …

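A minimal sketch of the four-sample loop, assuming ShadowMap is declared as a sampler2DShadow (note that the offsets must be constant expressions):

    float sum = 0.0;
    sum += textureProjOffset( ShadowMap, ShadowCoord, ivec2(-1,-1) );
    sum += textureProjOffset( ShadowMap, ShadowCoord, ivec2(-1, 1) );
    sum += textureProjOffset( ShadowMap, ShadowCoord, ivec2( 1, 1) );
    sum += textureProjOffset( ShadowMap, ShadowCoord, ivec2( 1,-1) );
    float shadow = sum * 0.25;
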
The basic shadow mapping algorithm combined with PCF can produce shadows with soft edges. However, if we desire blurred edges that are substantially wide (to approximate true soft shadows), then a large number of samples are required. Additionally, there is a good deal of wasted effort when shading fragments that lie in the center of large shadows, or …

Additionally, we vary the sample locations through a set of precomputed sample patterns. We compute random sample offsets and store them in a texture prior to rendering. Then, in the fragment shader, we look up the set of offsets and use them to vary the fragment's position in the shadow map. The results are then averaged together in a similar manner to the basic PCF algorithm.

Radius: The blur radius in pixels divided by the size of the shadow map texture (assuming a square shadow map). This could be considered the "softness" of the shadow.

How to do it...

To modify the shadow mapping algorithm to use this random sampling technique, use the following steps. We'll build the offset texture within the main …

            data[cell+0] = sqrtf(v.y) * cosf(TWOPI*v.x);
            data[cell+1] = sqrtf(v.y) * sinf(TWOPI*v.x);
            data[cell+2] = sqrtf(v.w) * cosf(TWOPI*v.z);
            data[cell+3] = sqrtf(v.w) * sinf(TWOPI*v.z);
        }
    }
}

glActiveTexture(GL_TEXTURE1);
GLuint texID;
glGenTextures(1, …

int samplesDiv2 = int(OffsetTexSize.z);
vec4 sc = ShadowCoord;

for( int i = 0; i < 4; i++ ) {
    offsetCoord.z = i;
    vec4 offsets = texelFetch(OffsetTex, offsetCoord, 0) *
                   Radius * ShadowCoord.w;

    sc.xy = ShadowCoord.xy + offsets.xy;
    sum += textureProj(ShadowMap, sc);
    …

How it works...

The function buildOffsetTex creates our three-dimensional texture of random offsets. The first parameter, texSize, defines the width and height of the texture; for the preceding images, I used a value of 8. The second and third parameters, samplesU and samplesV, define the number of samples in each direction, respectively, for a total of 32 samples. The u and v directions are arbitrary axes that are used to define a grid of samplesU x samplesV sample positions. Each sample is randomly "jittered" from its …

Of course, we also pack the samples in such a way that a single texel contains two samples. This is not strictly necessary, but is done to conserve memory space. However, it does make the code a bit more complex.

Within the fragment shader, we start by computing the ambient component of the shading model separately from the diffuse and …

It should also be noted that this blurring of the edges may not be desired for all shadow edges. For example, edges that are directly adjacent to the occluder that is creating the shadow should not be blurred. These may not always be visible, but can become so in certain situations, such as when the occluder is a narrow object. The effect is to …

Tracing all of these rays can be a very time-consuming process and is somewhat impractical for real-time rendering, although approximations can be computed in screen space using the depth buffer (we'll look at more on this later). The good news is that ambient occlusion accessibility factors can be precomputed, and are independent of the position of the light source. The precomputed values will be valid as long as the object is not deformed …

The top image is the ambient occlusion texture. The bottom left shows the mesh with diffuse shading only. The bottom right is diffuse lighting attenuated with the accessibility factors from the AO texture. Compare the image on the left with the one on the right: including ambient occlusion gives the object a much more realistic look, and provides additional depth to the shading.

There are many tools available to …

{
    Position = vec3( ModelViewMatrix * vec4(VertexPosition,1.0) );
    Normal = NormalMatrix * VertexNormal;
    TexCoord = TexCoord0;
    gl_Position = MVP * vec4(VertexPosition,1.0);
}

2. Use the following code for the fragment shader:

#version 400

// Declare any uniforms needed for the Phong shading model.

in vec3 …

There's more...

As mentioned some time back, precomputing the AO accessibility factors works quite well under certain circumstances. The object needs to be static (non-deformable), and the occlusion factors shouldn't be dependent on other objects that may move relative to the object. If either of these considerations does not hold, then the AO values can become …

8
Using Noise in Shaders

In this chapter, we will cover:

Creating a noise texture using libnoise
Creating a seamless noise texture
Creating a cloud-like effect
Creating a wood grain effect
Creating a disintegration effect
Creating a paint-spatter effect
Creating a night-vision effect

Introduction

It's easy to use shaders to create a smooth-looking surface, but …

All of the preceding effects have qualities that are random in nature. Therefore, you might imagine that we could generate them by simply using random data. However, random data such as the kind that is generated from a pseudorandom number generator is not very useful in computer graphics. There are two main reasons: First, we need …

Many books use a 3D rather than a 2D noise texture, to provide another dimension of noise that is available to the shaders. To keep things simple, and to focus on using surface texture coordinates, I've chosen to use a 2D noise texture in the recipes within this chapter. If desired, it should be straightforward to extend these recipes to …

…and decreasing amplitudes. Each function is referred to as an octave. The libnoise library can generate Perlin noise as a sum of any number of octaves. The more octaves involved, the more variation in the generated noise. Summed noise involving higher octaves will have more high-frequency detail. The following images show Perlin noise generated with one, two, three, and four octaves, from …

for( int oct = 0; oct < 4; oct++ ) {
    perlinNoise.SetOctaveCount(oct+1);
    for( int i = 0; i < width; i++ ) {
        for( int j = 0; j < height; j++ ) {
            double x = xFactor * i;
            double y = yFactor * j;
            double z = 0.0;
            float val = (float)perlinNoise.GetValue(x,y,z);
            …

How it works...

The libnoise library is based on the concept of noise modules. To generate Perlin noise, we start by creating a Perlin noise module by declaring an instance of the Perlin module named perlinNoise:

noise::module::Perlin perlinNoise;

The SetFrequency function sets the frequency of the first octave of noise generated by the module. Each successive octave doubles the frequency (by default) and decreases the amplitude by one half …

There's more...

You should feel free to change various parts of this code and see what happens to the result. Try using a different "slice" of the 3D noise space, or use a different persistence value by using the SetPersistence function. The libnoise library provides plenty of options and additional modules that you can use to modify the results.

…noise function's space. The value that we store in the texture at point A will be the linear interpolation of the raw noise values at A, B, C, and D. The interpolation is based on the position of A within the boundaries of the texture: e represents the horizontal distance of A from the left boundary, and d …

Getting ready

For this recipe, we'll start with the code from the previous recipe, Creating a noise texture using libnoise. You'll need to install the libnoise library and enter the code from that recipe. The following code also makes use of the GLM library, so that will need to be installed as well (see Chapter 1, Using the GLM library for mathematics). …

How it works...

Within the main loop, we sample the noise function at the texture location (a) and at the three other locations (b, c, and d) that are offset from a. We compute one minus the percentage along the horizontal extent and store the result in xmix. One minus the percentage along the vertical extent is stored in ymix. We then …

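A sketch of the blend, assuming a, b, c, and d hold the four raw noise samples as floats (which corner each weight applies to depends on how the offsets were taken, so treat the orientation as an assumption):

    float v1 = glm::mix( a, b, xmix );       // blend along one horizontal edge
    float v2 = glm::mix( c, d, xmix );       // blend along the other
    float result = glm::mix( v1, v2, ymix ); // blend vertically
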
The left-hand image tiles the noise values once in the horizontal direction. The right-hand image tiles the noise values in the vertical direction. The center image does not tile.

Getting ready

Set up your program to generate a seamless noise texture and make it available to the shaders through the uniform variable NoiseTex. There are two uniforms in the …

    float t = (cos( noise.g * PI ) + 1.0) / 2.0;
    vec4 color = mix( SkyColor, CloudColor, t );
    FragColor = vec4( color.rgb, 1.0 );
}

How it works...

We start by retrieving the noise value from the noise texture (variable noise). The green channel contains two-octave noise, so we use the value stored in that channel …

To create the look of wood, we can start by creating a virtual "log", with perfectly cylindrical growth rings. Then we'll take a slice of the log, and perturb the growth rings using noise from our noise texture.

The following image illustrates our virtual "log". It is aligned with the y-axis, and extends infinitely in all directions. Each ring is given a darker color …

…before doing so, we'll perturb that distance based on a value from the noise texture. The result has a general look that is similar to real wood. The following image shows an example:

Getting ready

Set up your program to generate a noise texture and make it available to the shaders through the uniform variable NoiseTex. There are three …

in vec2 TexCoord;
layout ( location = 0 ) out vec4 FragColor;

void main()
{
    // Transform the texture coordinates to define the
    // "slice" of the log.
    vec4 cyl = Slice * vec4( TexCoord.st, 0.0, 1.0 );

    // The distance from the log's y axis.
    float dist = length(cyl.xz);

    // Perturb the distance using the …

Next, the distance from the y-axis is determined by using the built-in length function (length(cyl.xz)). This will be used to determine how close we are to a growth ring. The color will be a light wood color if we are between growth rings, and a dark color when we are close to a growth ring. However, before determining the …

The following image shows an example of the results.

See also
Creating a noise texture using libnoise

Creating a disintegration effect

It is straightforward to use the GLSL discard keyword in combination with noise to simulate erosion or decay. We can simply discard fragments that correspond to a noise value that is above or below a certain threshold. The following image shows a teapot …

Create a seamless noise texture (see Creating a seamless noise texture), and place it in the appropriate texture channel. Set the following uniform variables in your program:

NoiseTex: The noise texture
LowThreshold: Fragments are discarded if the noise value is below this value
HighThreshold: Fragments are discarded if the noise value is above this value

How to do it...

To create a …

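The heart of the effect is only a few lines of fragment shader code. A sketch, using the uniforms listed above (the noise is assumed to be in the texture's alpha channel, as described in the next section):

    vec4 noise = texture( NoiseTex, TexCoord );
    if( noise.a < LowThreshold || noise.a > HighThreshold )
        discard;
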
How it works...

The fragment shader starts by retrieving a noise value from the noise texture (NoiseTex), and storing the result in the variable noise. We want noise that has a large amount of high-frequency fluctuation, so we use the four-octave noise stored in the alpha channel (noise.a). We then discard the fragment if the noise value is below LowThreshold or above HighThreshold. As the discard keyword causes the execution of the …

Getting ready

Start with a basic setup for rendering using the Phong shading model (or whatever model you prefer). Include texture coordinates and pass them along to the fragment shader. Set the following uniforms:

PaintColor: The color of the paint spatters
Threshold: The minimum noise value where a spatter will appear

Create a noise texture with high-frequency noise …

in vec3 Normal;
in vec2 TexCoord;

// The paint-spatter uniforms
uniform vec3 PaintColor = vec3(1.0);
uniform float Threshold = 0.65;

layout ( location = 0 ) out vec4 FragColor;

vec3 phongModel(vec3 kd)
{
    // Evaluate the Phong shading model using kd as the diffuse
    // reflectivity.
}

void main()
{
    vec4 noise = texture( NoiseTex, TexCoord );
    vec3 color …

Noise can be useful to simulate static or other kinds of electronic interference. This recipe is a fun example of that. We'll create the look of night-vision goggles with some noise thrown in to simulate some random static in the signal. Just for fun, we'll also outline the scene in the classic "binocular" view. The following image …

Create a noise texture with high-frequency noise, and make it available to the shader via NoiseTex. The texture associated with the FBO (containing the rendered scene) should be made available via RenderTex.

How to do it...

To create a shader program that generates a night-vision effect, use the following steps:

1. Set up your vertex shader to pass along the position, normal, and texture …

    Using Noise in Shaders 286 vec4 color = texture(RenderTex, TexCoord); float green = luminance( color.rgb ); float dist1 = length(gl_FragCoord.xy – vec2(Width/4.0, Height/2.0)); float dist2 = length(gl_FragCoord.xy – vec2(3.0 * Width/4.0, Height/2.0)); if( dist1 read more..


In the second pass, the pass2 function is executed. We start by retrieving a noise value (noise) and the rendered color from the first pass (color). Then we compute the luminance value for the color and store that result in the variable green. This will eventually become the green component of the final color. The next step involves determining whether or not the fragment is inside the "binocular" lenses. We compute the distance to the center of each lens (dist1 and dist2), and set green to zero for any fragment that lies outside both.


9
Animation and Particles

In this chapter, we will cover:
Animating a surface with vertex displacement
Creating a particle fountain
Creating a particle system using transform feedback
Creating a particle system using instanced particles
Simulating fire with particles
Simulating smoke with particles

Introduction
Shaders provide us with the ability to leverage the massively parallel architectures of today's modern graphics processors.


In this chapter, we'll look at several examples of animation within shaders, focusing mostly on particle systems. The first recipe, Animating with vertex displacement, demonstrates animation by transforming the vertex positions of an object based on a time-dependent function. In the recipe Creating a particle fountain, we create a simple particle system under constant acceleration. In the recipes that follow, we extend that particle system using transform feedback and instanced rendering, and adapt it to simulate fire and smoke.


Alternatively, we could use a noise texture to animate the surface based on a random function. (See Chapter 8 for details on noise textures.) Before we jump into the code, let's take a look at the mathematics that we'll need. We'll transform the y-coordinate of the surface as a function of the current time and the modeling x-coordinate. To do so, we'll use the basic plane wave equation:

y = A sin( K ( x - V t ) )

where A is the wave's amplitude, K is the wavenumber (2π divided by the wavelength), V is the wave's velocity, and t is the current time. These quantities correspond to the uniform variables Amp, K, Velocity, and Time listed below.


K: The wavenumber (2π divided by the wavelength)
Velocity: The wave's velocity
Amp: The wave's amplitude

Set up your program to provide appropriate uniform variables for your chosen shading model.

How to do it...
Use the following code for the vertex shader:

#version 400

layout (location = 0) in vec3 VertexPosition;

out vec4 Position;
out vec3 Normal;

uniform float Time;      // The animation time
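The listing continues with the wave uniforms and the main function. What follows is a sketch of how that body might look, assuming the uniforms above plus the usual matrix uniforms (MVP, ModelViewMatrix, NormalMatrix) used throughout the book; the normal comes from the analytic derivative of the wave equation:

    // Wave parameters
    uniform float K;         // Wavenumber
    uniform float Velocity;  // Wave's velocity
    uniform float Amp;       // Wave's amplitude

    uniform mat4 ModelViewMatrix;
    uniform mat3 NormalMatrix;
    uniform mat4 MVP;

    void main() {
        vec4 pos = vec4(VertexPosition, 1.0);

        // Translate the y-coordinate using the wave equation.
        float u = K * (pos.x - Velocity * Time);
        pos.y = Amp * sin( u );

        // The normal follows from the slope dy/dx = Amp * K * cos(u).
        vec3 n = normalize(vec3(-Amp * K * cos( u ), 1.0, 0.0));

        Position = ModelViewMatrix * pos;
        Normal = NormalMatrix * n;
        gl_Position = MVP * pos;
    }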


Create a fragment shader that computes the fragment color based on the variables Position and Normal using whatever shading model you choose (see Chapter 3, Implementing per-fragment shading).

How it works...
The vertex shader takes the position of the vertex and updates the y-coordinate using the wave equation discussed previously. The variable pos starts out as a copy of the input variable VertexPosition, and its y-coordinate is then replaced with the displaced value.


During the lifetime of a particle, it is animated according to a set of rules. These rules often account for things such as gravity, wind, friction, or other factors. The particle may also change shape or transparency during its lifetime. Once the particle has reached a certain age (or position), it is considered to be "dead" and can be recycled or removed from the system.


We'll render each particle as a textured point sprite (using GL_POINTS). It is easy to apply a texture to a point sprite because OpenGL will automatically generate texture coordinates and make them available to the fragment shader via the built-in variable gl_PointCoord. We'll also reduce the alpha of the point sprite linearly with the age of the particle, so that each particle fades out as it "dies".


To pick vectors from within our cone, we utilize spherical coordinates. The value of theta determines the angle between the center of the cone and the outer edge. The value of phi defines the rotation about the cone's axis and is chosen randomly between zero and 2π for a given theta. For more on spherical coordinates, grab your favorite math book. Once a direction is chosen, the vector is scaled to have a random magnitude between 1.25 and 1.5, which becomes the particle's initial speed. A sketch of this computation follows.
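Here is one way the application side might compute a single initial velocity; a sketch using GLM, where the cone half-angle (PI/6) and the randFloat() helper (returning a value in [0,1)) are assumptions:

    #include <glm/glm.hpp>
    #include <cmath>
    #include <cstdlib>

    // Hypothetical helper: uniform random float in [0,1).
    float randFloat() { return rand() / (float)RAND_MAX; }

    glm::vec3 randomInitialVelocity() {
        const float PI = 3.14159265f;
        float theta = glm::mix(0.0f, PI / 6.0f, randFloat()); // angle from the cone's axis
        float phi   = glm::mix(0.0f, 2.0f * PI, randFloat()); // rotation about the axis

        // Convert spherical coordinates to a Cartesian direction
        // (the cone opens along the +y axis).
        glm::vec3 v(sinf(theta) * cosf(phi),
                    cosf(theta),
                    sinf(theta) * sinf(phi));

        // Scale to a random speed between 1.25 and 1.5.
        return glm::normalize(v) * glm::mix(1.25f, 1.5f, randFloat());
    }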


You will also want to choose a reasonable size for each point sprite. For example, the following line sets it to 10 pixels:

glPointSize(10.0f);

How to do it...
Use the following code for the vertex shader:

#version 400

// Initial velocity and start time
layout (location = 0) in vec3 VertexInitVel;
layout (location = 1) in float StartTime;

out float Transp;   // Transparency of the particle
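As a sketch of how the shader's main function might implement the constant-acceleration motion (the uniforms Time, Gravity, ParticleLifetime, and MVP are assumptions consistent with the How it works discussion below):

    uniform float Time;                            // Animation time
    uniform vec3 Gravity = vec3(0.0, -0.05, 0.0);  // Constant acceleration
    uniform float ParticleLifetime;                // Maximum particle age
    uniform mat4 MVP;

    void main() {
        // The particle sits (invisibly) at the origin until its birth.
        vec3 pos = vec3(0.0);
        Transp = 0.0;

        if( Time > StartTime ) {
            float t = Time - StartTime;
            if( t < ParticleLifetime ) {
                // Position under constant acceleration:
                // p = v0 * t + 0.5 * g * t^2
                pos = VertexInitVel * t + 0.5 * Gravity * t * t;
                // Fade linearly with age.
                Transp = 1.0 - t / ParticleLifetime;
            }
        }
        gl_Position = MVP * vec4(pos, 1.0);
    }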


void main()
{
    FragColor = texture(ParticleTex, gl_PointCoord);
    FragColor.a *= Transp;
}

How it works...
The vertex shader receives the particle's initial velocity (VertexInitVel) and start time (StartTime) in its two input attributes. The variable Time stores the amount of time that has elapsed since the beginning of the animation. The output variable Transp carries the particle's transparency, which decreases linearly with its age, to the fragment shader.


We could also create a better indication of distance by varying the size of the particles within the vertex shader using the built-in variable gl_PointSize. Alternatively, we could use the geometry shader as outlined in Chapter 6, Point Sprites with the Geometry Shader, to draw the point sprites as actual quads. One drawback of this system is that particles cannot be recycled easily. When a particle dies, it is simply rendered as fully transparent; the next recipe addresses this by updating particle state each frame with transform feedback.


In this example, we'll implement the same particle system from the previous recipe (Creating a particle fountain), this time making use of transform feedback. Instead of using an equation that describes the particle's motion for all time, we'll update the particle positions incrementally, solving the equations of motion based on the forces involved at each frame.


[Figure: ping-ponging between buffer sets A and B. In the update pass (no rasterization), the vertex shader reads from A and writes to B via transform feedback; in the render pass, the vertex shader reads from B and feeds the fragment shader.]

In the next frame of animation, we repeat the same process, swapping the two buffers.

Transform feedback allows us to define a set of shader output variables that are to be written to a designated buffer (or set of buffers). There are several steps involved that will be demonstrated below, but the basic idea is as follows. Just before the shader program is linked, we define the relationship between buffers and shader output variables using the function glTransformFeedbackVaryings.


Set up the first vertex array to associate the A position buffer with vertex attribute zero, the A velocity buffer with vertex attribute one, the start time buffer with vertex attribute two, and the initial velocity buffer with vertex attribute three. The second vertex array should be set up in the same way using the B buffers and the same initial velocity buffer. In the following code, the handles to the two vertex arrays will be accessed via the GLuint array named particleArray.


Similar to vertex arrays, transform feedback objects store the buffer bindings for the GL_TRANSFORM_FEEDBACK_BUFFER binding point so that they can be reset quickly at a later time. In the preceding code, we create two transform feedback objects, and store their handles in the array named feedback. For the first object, we bind posBuf[0] to index 0, velBuf[0] to index 1, and startTime[0] to index 2 of the binding point; the second object binds the corresponding B buffers in the same way. A sketch of this setup follows.
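As a sketch of that setup (the buffer handle arrays posBuf, velBuf, and startTime are assumed names following the discussion above):

    GLuint feedback[2];
    glGenTransformFeedbacks(2, feedback);

    // Transform feedback 0: captures into the A buffers.
    glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedback[0]);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, posBuf[0]);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, velBuf[0]);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 2, startTime[0]);

    // Transform feedback 1: captures into the B buffers.
    glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedback[1]);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, posBuf[1]);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, velBuf[1]);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 2, startTime[1]);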


Within the update subroutine in the vertex shader, the particle's state is advanced as follows:

    Velocity = VertexVelocity;
    StartTime = VertexStartTime;
    if( Time >= StartTime ) {
        float age = Time - StartTime;
        if( age > ParticleLifetime ) {
            // The particle is past its lifetime, recycle.
            Position = vec3(0.0);
            Velocity = VertexInitialVelocity;
            StartTime = Time;
        } else {
            // The particle is alive: advance it one time step (H)
            // using Euler integration under the acceleration Accel.
            Position += Velocity * H;
            Velocity += Accel * H;
        }
    }


After compiling the shader program, but before linking, use the following code to set up the connection between vertex shader output variables and output buffers:

const char * outputNames[] = { "Position", "Velocity", "StartTime" };
glTransformFeedbackVaryings(progHandle, 3, outputNames, GL_SEPARATE_ATTRIBS);


How it works...
There's quite a bit here to sort through. Let's start with the vertex shader. The vertex shader is broken up into two subroutine functions. The update function is used during the first pass, and updates the position, velocity, and (when recycling) start time of the particle. The render function is used during the second pass. It computes the transparency based on the age of the particle and sends the position and transparency along to the fragment shader.


The last code segment above describes how you might implement the render function within the main OpenGL program. In this example, there are two important GLuint arrays: feedback and particleArray. They are each of size two and contain the handles to the two transform feedback objects and the two vertex array objects, respectively. The variable drawBuf is just an integer (0 or 1) used to alternate between the two sets of buffers from frame to frame.
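To make the ping-ponging concrete, here is a sketch of such a render function; the subroutine index variables updateSub and renderSub, and the particle count nParticles, are assumed names:

    void render() {
        // Update pass: run the update subroutine with rasterization
        // disabled, capturing the outputs via transform feedback.
        glUniformSubroutinesuiv(GL_VERTEX_SHADER, 1, &updateSub);
        glEnable(GL_RASTERIZER_DISCARD);
        glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedback[drawBuf]);
        glBeginTransformFeedback(GL_POINTS);
        glBindVertexArray(particleArray[1 - drawBuf]);
        glDrawArrays(GL_POINTS, 0, nParticles);
        glEndTransformFeedback();
        glDisable(GL_RASTERIZER_DISCARD);

        // Render pass: draw the particles from the freshly written buffers.
        glUniformSubroutinesuiv(GL_VERTEX_SHADER, 1, &renderSub);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glBindVertexArray(particleArray[drawBuf]);
        glDrawArrays(GL_POINTS, 0, nParticles);

        // Swap the buffer sets for the next frame.
        drawBuf = 1 - drawBuf;
    }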


It is often useful to determine how many primitives were written during transform feedback. For example, if a geometry shader was active, the number of primitives written could be different from the number of primitives that were sent down the pipeline. OpenGL provides a way to query for this information using query objects. To do so, start by creating a query object, then wrap the transform feedback pass in a begin/end query pair, as sketched below.
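A sketch of that query, under the assumption that it wraps the glBeginTransformFeedback/glEndTransformFeedback calls of the update pass:

    GLuint query;
    glGenQueries(1, &query);

    // Count the primitives written while transform feedback is active.
    glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
    // ... update pass: glBeginTransformFeedback / draw / glEndTransformFeedback ...
    glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);

    // Retrieve the result (this call blocks until it is available).
    GLuint primsWritten;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &primsWritten);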


In this example, we'll modify the particle system introduced in the previous recipes. Rather than using point sprites, we'll render a more complex object in the place of each particle. The following image shows an example where each particle is rendered as a shaded torus.

Using instanced rendering is simply a matter of calling one of the instanced draw functions, such as glDrawArraysInstanced or glDrawElementsInstanced, and providing the number of instances to draw.
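For example, drawing the torus mesh once per particle might look like the following sketch (torusVAO, nElements, and nParticles are assumed names):

    // Draw nParticles instances of the indexed torus mesh.
    glBindVertexArray(torusVAO);
    glDrawElementsInstanced(GL_TRIANGLES, nElements, GL_UNSIGNED_INT,
                            0, nParticles);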


Getting ready
Start with a particle system as described in Creating a particle fountain. We'll just make a few modifications. You could start with the transform feedback version if desired, but to keep things simple, we'll use the more basic particle system. It should be straightforward to adapt this example to the transform feedback-based system. When setting up the vertex array object for your particles, add the per-particle attributes (the initial velocity and start time), and mark them as instanced attributes with a divisor of one so that they advance once per instance rather than once per vertex; a sketch follows.
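A sketch of how the two instanced attributes might be added to the torus's vertex array object (the buffer names initVelBuf and startTimeBuf are assumptions; the attribute locations match the shader below):

    glBindVertexArray(torusVAO);

    // Per-instance initial velocity at location 3.
    glBindBuffer(GL_ARRAY_BUFFER, initVelBuf);
    glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(3);
    glVertexAttribDivisor(3, 1);   // advance once per instance

    // Per-instance start time at location 4.
    glBindBuffer(GL_ARRAY_BUFFER, startTimeBuf);
    glVertexAttribPointer(4, 1, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(4);
    glVertexAttribDivisor(4, 1);

    glBindVertexArray(0);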


layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;
layout (location = 2) in vec3 VertexTexCoord;
layout (location = 3) in vec3 VertexInitialVelocity;
layout (location = 4) in float StartTime;

out vec3 Position;
out vec3 Normal;

Within the main function, update the position of the vertex by translating it using the equation of motion, based on the particle's age; a sketch appears below.
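A sketch of what that main function might look like, assuming the uniforms Time, Gravity, and ParticleLifetime, plus the standard matrices, from the earlier recipes:

    void main() {
        vec3 pos = VertexPosition;
        if( Time > StartTime ) {
            float t = Time - StartTime;
            if( t < ParticleLifetime ) {
                // Translate the whole mesh along the particle trajectory.
                pos += VertexInitialVelocity * t + 0.5 * Gravity * t * t;
            }
        }
        Position = (ModelViewMatrix * vec4(pos, 1.0)).xyz;
        Normal = NormalMatrix * VertexNormal;
        gl_Position = MVP * vec4(pos, 1.0);
    }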


See also
Creating a particle fountain
Creating a particle system using transform feedback

Simulating fire with particles
To create an effect that roughly resembles fire, we only need to make a few changes to our basic particle system. Since fire rises, we don't worry about a downward gravitational acceleration. In fact, we'll actually use a slight upward acceleration, and we'll distribute the initial particle positions along a horizontal line rather than emitting everything from a single point. Of course, we'll need to use a particle texture that has the red and orange colors of flame. The following image shows an example of the running particle system. While the result is not a physically accurate flame, it gives a reasonable impression of fire.


Set the uniform variable ParticleLifetime to about four seconds. Load the fire-colored particle texture into the first texture channel, and set the uniform ParticleTex to zero. Use a point size of about 50.0.

How to do it...
When setting up the initial positions for your particles, instead of using the origin for all particles, use a random x location, as in the sketch that follows.
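A hedged reconstruction of that initialization code, assuming nParticles particles, the randFloat() helper from the earlier sketch, and a position buffer handle named initPosBuf:

    GLfloat *data = new GLfloat[nParticles * 3];
    for( unsigned int i = 0; i < nParticles; i++ ) {
        // Random x between -2.0 and 2.0; y and z start at zero.
        data[3*i]     = glm::mix(-2.0f, 2.0f, randFloat());
        data[3*i + 1] = 0.0f;
        data[3*i + 2] = 0.0f;
    }
    glBindBuffer(GL_ARRAY_BUFFER, initPosBuf);
    glBufferData(GL_ARRAY_BUFFER, nParticles * 3 * sizeof(GLfloat),
                 data, GL_STATIC_DRAW);
    delete [] data;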


How it works...
We randomly distribute the x-coordinate of the initial positions between -2.0 and 2.0 for all of the particles, and set the initial velocities to have a y-coordinate between 0.1 and 0.5. Since the acceleration has only a y-component, each particle moves only along a straight, vertical line in the y direction. The x-coordinate remains fixed at its initial value for the particle's entire lifetime.


The texture for each particle is a very light "smudge" of grey or black color. To make the particles grow over time, we'll make use of the GL_PROGRAM_POINT_SIZE functionality in OpenGL, which allows us to modify the point size within the vertex shader.

Getting ready
Start with the basic particle system presented in the recipe Creating a particle fountain.


In the main OpenGL application, before rendering your particles, make sure to enable GL_PROGRAM_POINT_SIZE:

glEnable(GL_PROGRAM_POINT_SIZE);

How it works...
The render subroutine function sets the built-in variable gl_PointSize to a value between MinParticleSize and MaxParticleSize, determined by the age of the particle. This causes the size of the particles to grow as they evolve through the system.
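The size computation inside the render subroutine might look like the following sketch; MinParticleSize and MaxParticleSize are the uniforms named in the text, while agePct is an assumed local variable:

    // Interpolate the point size by the particle's normalized age.
    float agePct = age / ParticleLifetime;
    gl_PointSize = mix( MinParticleSize, MaxParticleSize, agePct );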


