| author | Venugopal Shivashankar <venugopal.shivashankar@digia.com> | 2013-02-08 17:25:29 +0100 |
|---|---|---|
| committer | Venugopal Shivashankar <venugopal.shivashankar@digia.com> | 2013-02-27 14:00:39 +0100 |
| commit | de97f693f23b1a6d6e411685fe4ace7d6563719c | |
| tree | fa8d648c68e2a93a19676709bd075015059c4616 /openGL_tutorial | |
| parent | 625972f263fce22f65ae258b1e11bee3b49a4770 | |
Updated the OpenGL learning guide
Task-number: QTBUG-28831
Change-Id: Ibc6d1d937b0a7651221f403e66dc389126d28c4b
Reviewed-by: Jerome Pasion <jerome.pasion@digia.com>
Diffstat (limited to 'openGL_tutorial')
| -rw-r--r-- | openGL_tutorial/about.rst | 5 |
| -rw-r--r-- | openGL_tutorial/conclusion.rst | 20 |
| -rw-r--r-- | openGL_tutorial/conf.py | 2 |
| -rw-r--r-- | openGL_tutorial/introduction.rst | 292 |
| -rw-r--r-- | openGL_tutorial/usingOpenGL.rst | 103 |
5 files changed, 209 insertions, 213 deletions
diff --git a/openGL_tutorial/about.rst b/openGL_tutorial/about.rst
index 56d066e..7ca0933 100644
--- a/openGL_tutorial/about.rst
+++ b/openGL_tutorial/about.rst
@@ -15,7 +15,7 @@ About this Tutorial
Why Would You Want to Read this Guide?
--------------------------------------
-This tutorial provides a basic introduction to OpenGL and 3D computer graphics in general. It shows how to make use of Qt and its OpenGL related classes to create 3D graphics by using OpenGL's programmable pipeline. The tutorial provides many examples that go through the basic features of OpenGL programming like rendering, texture mapping, lighting etc. By the end of the tutorial, you will have a good understanding about how OpenGL works and you will also be able to write custom shader programs.
+This tutorial provides a basic introduction to OpenGL and 3D computer graphics. It shows how to make use of Qt and its OpenGL-related classes to create 3D graphics by using OpenGL's programmable pipeline. The tutorial provides many examples that demonstrate the basic features of OpenGL programming, such as rendering, texture mapping, and lighting. By the end of the tutorial, you will have a good understanding of how OpenGL works and you will also be able to write custom shader programs.

.. image:: images/opengl-example-lighting.png
@@ -35,7 +35,8 @@ The guide is available in the following formats:
    :download:`PDF <qtopengltutorial/OpenGLTutorial.pdf>`
    :download:`ePub <qtopengltutorial/OpenGLTutorial.epub>` for ebook readers. Further details can be found `here <http://en.wikipedia.org/wiki/EPUB#Software_reading_systems>`_.
-    :download:`Qt Help <qtopengltutorial/OpenGLTutorial.qch>` for Qt Assistant and Qt Creator. In Qt Assistant, in the :qt:`Preferences Dialog <assistant-details.html#preferences-dialog>` under the `Documentation` tab (in a collapsible menu for Mac users), you click on the `Add` button in order to add this guide in .qch format. We do the same in Qt Creator under the `Options` dialog in the `Help` section. Here you can add this guide in the `Documentation` tab.
+
+    :download:`Qt Help <qtopengltutorial/OpenGLTutorial.qch>` for Qt Assistant and Qt Creator. In Qt Assistant, in the :qt:`Preferences Dialog <assistant-details.html#preferences-dialog>` under the `Documentation` tab (in a collapsible menu for Mac users), click the `Add` button to add this guide in .qch format. Do the same in Qt Creator: in the `Options` dialog, under the `Help` section, you can add this guide in the `Documentation` tab.

License
diff --git a/openGL_tutorial/conclusion.rst b/openGL_tutorial/conclusion.rst
index ab66994..b207f55 100644
--- a/openGL_tutorial/conclusion.rst
+++ b/openGL_tutorial/conclusion.rst
@@ -12,15 +12,19 @@
Conclusion & Further Reading
============================
- We hope you liked this tutorial and that we have made you even more curious about OpenGL and 3D programming. If you want to delve deeper into OpenGL, you should definitively think about getting a good book dedicated to this topic. `The OpenGL homepage <http://www.opengl.org>`_ lists quite a few recommendations. If you are looking for an even higher level approach, you may consider taking a look at `Qt/3D <http://doc.qt.nokia.com/qt3d-snapshot>`_ and/or `QtQuick3D <http://doc.qt.nokia.com/qt-quick3d-snapshot>`_.
+ We hope you liked this tutorial and that we have made you even more curious about OpenGL and 3D programming. If you want to delve deeper into OpenGL, you should definitely consider getting a good book dedicated to this topic.
`The OpenGL homepage <http://www.opengl.org>`_ lists quite a few recommendations. If you are looking for an even higher level approach, you may consider taking a look at `Qt/3D <http://doc-snapshot.qt-project.org/qt3d-1.0>`_ and/or `QtQuick3D <http://doc.qt.digia.com/qt-quick3d-snapshot>`_.
- Since OpenGL is able to compute a lot of information really fast, you may have thoughts about using it for more than just computer graphics. A framework based on this approach is called `OpenCL` (which is also managed by the Khronos Group Inc.). There even is a Qt Module for this framework. It is called `QtOpenCL <http://doc.qt.nokia.com/opencl-snapshot>`_.
+As OpenGL is able to compute a lot of information really fast, you may consider using it for more than just computer graphics. A framework based on this approach is called `OpenCL` (which is also managed by the Khronos Group Inc.). There is even a Qt module for this framework, called `QtOpenCL <http://doc.qt.digia.com/opencl-snapshot/index.html>`_.
-Links:
+References:
- `http://www.opengl.org` - The OpenGL homepage
- `http://www.khronos.org/opengl` - The Khronos Group Inc. homepage regarding OpenGL
- `http://www.khronos.org/opengles` - The Khronos Group Inc. homepage regarding OpenGL ES
- `http://doc.qt.nokia.com/qt3d-snapshot` - Qt/3D Reference Documentation
- `http://doc.qt.nokia.com/qt-quick3d-snapshot` - QtQuick3D Reference Documentation
+ * `http://www.opengl.org` - The OpenGL homepage
+
+ * `http://www.khronos.org/opengl` - The Khronos Group Inc. homepage regarding OpenGL
+
+ * `http://www.khronos.org/opengles` - The Khronos Group Inc. homepage regarding OpenGL ES
+
+ * `http://doc-snapshot.qt-project.org/qt3d-1.0` - Qt/3D Reference Documentation
+
+ * `http://doc.qt.digia.com/qt-quick3d-snapshot` - QtQuick3D Reference Documentation
diff --git a/openGL_tutorial/conf.py b/openGL_tutorial/conf.py
index bf926bc..75a5814 100644
--- a/openGL_tutorial/conf.py
+++ b/openGL_tutorial/conf.py
@@ -28,7 +28,7 @@ htmlhelp_basename = 'OpenGLTutorial'
latex_documents = [
    ('index', 'OpenGLTutorial.tex', u'OpenGL Tutorial',
-     u'Nokia, Qt Learning', 'manual'),
+     u'Digia, Qt Learning', 'manual'),
]
# -- Options for Epub output ---------------------------------------------------
diff --git a/openGL_tutorial/introduction.rst b/openGL_tutorial/introduction.rst
index 140b5d9..65afb40 100644
--- a/openGL_tutorial/introduction.rst
+++ b/openGL_tutorial/introduction.rst
@@ -12,17 +12,17 @@
Introduction
============
-This tutorial provides a basic introduction to OpenGL and 3D computer graphics in general. It shows how to make use of Qt and its OpenGL related classes to create 3D graphics.
+This tutorial provides a basic introduction to OpenGL and 3D computer graphics. It shows how to make use of Qt and its OpenGL-related classes to create 3D graphics.

We will use the core features of OpenGL 3.0 / 2.0 ES and all following versions, which means that we will be utilizing OpenGL's programmable rendering pipeline to write our own shaders with the OpenGL Shading Language (GLSL) / OpenGL ES Shading Language (GLSL / ES).
-Chapter one gives an introduction to 3D computer graphics and the OpenGL API including the OpenGL Shading Language (GLSL) / OpenGL ES Shading Language (GLSL / ES). If you are already familiar with this topic and only want to see how to use OpenGL in your Qt programs, you can skip this introductory chapter and continue on to chapter two.
+Chapter one gives an introduction to 3D computer graphics and the OpenGL API including the OpenGL Shading Language (GLSL) / OpenGL ES Shading Language (GLSL / ES). If you are already familiar with this topic and only want to see how to use OpenGL in your Qt programs, you can skip this introductory chapter and move on to chapter two.
-In chapter two, we present examples which utilize the information covered in chapter one and show how to use OpenGL together with Qt's OpenGL related functionality.
+In chapter two, we present examples which utilize the information covered in chapter one and show how to use OpenGL together with Qt's OpenGL-related functionality.
-At the end of this tutorial you find some references and links which may come in handy especially when working through the examples. Please note that this tutorial is meant to get you started with this topic and can not go into the same depth as a decent OpenGL dedicated book. Also note that Qt's OpenGL-related classes do a lot of work for you by hiding some of the details which you would encounter if you wrote your programs using only OpenGL's API.
+At the end of this tutorial, you will find some references and links that may come in handy, especially when working through the examples. Note that this tutorial is meant to get you started with this topic and cannot go into the same depth as a book dedicated to OpenGL would. Also note that Qt's OpenGL-related classes make your life easier by hiding some of the details that you would encounter if you wrote your programs using only the OpenGL API.
-In the example part, we will use Qt's high level functionality whenever possible and only briefly name the differences. So if you intend to get a complete understanding of how to use native, you should additionally consult a tutorial or book dedicated to this topic.
+In the example part, we will use Qt's high-level functionality whenever possible and only briefly point out the differences. So if you intend to get a complete understanding of how to use native OpenGL, you should additionally consult a tutorial or book dedicated to this topic.

What's OpenGL
--------------
@@ -33,15 +33,15 @@ scalable, cross-language and cross-platform specification that defines a uniform
interface to the computer's graphics accelerator. It guarantees a set of basic capabilities and allows
vendors to implement their own extensions.
-OpenGL is a low level API which requires the programmer to tell it the exact steps needed to
+OpenGL is a low-level API which requires the programmer to tell it the exact steps needed to
render a scene. You cannot just describe a scene and have it displayed on your monitor. It is
-up to you to specify geometry primitives in a 3D space, apply coloring and lighting effects
+up to you to specify geometry primitives in a 3D space, apply coloring and lighting effects,
and render the objects onto the screen. While this requires some knowledge of computer
graphics, it also gives you a lot of freedom to invent your own algorithms and create a
-variety of new graphics effects.
+variety of new graphical effects.

The sole purpose of OpenGL is to render computer graphics. It does not provide any
-functionality for window management or for handling events like user input. This is what
+functionality for window management or for handling events such as user input. This is what
we use Qt for.

@@ -52,29 +52,29 @@ The geometry of three dimensional objects is described by an arrangement of very
basic building blocks (primitives) such as single points, lines or triangles.
Triangles are the most common ones as they are used to approximate the surface of objects. Each surface can be split up into small planar triangles. While this works well on edged
-objects, smooth objects like spheres will look jagged. Of course you could use more
+objects, smooth objects such as spheres will look jagged. Of course you could use more
triangles to improve the approximation, but this comes at the cost of performance as more
-triangles will need to be processed by your graphics card. Instead of simply increasing the
+triangles will have to be processed by your graphics card. Instead of simply increasing the
polygon count, you should always consider additional techniques such as improving the
lighting algorithm or adapting the level of detail.

.. image:: images/opengl-sphere.png
   :align: center

-To define the spatial properties of your objects, you set up a list of points, lines and/or
+To define the spatial properties of your objects, you set up a list of points, lines, and/or
triangles. Each primitive in turn is specified by the position of its corners (a vertex / vertices). Thus it is necessary to have a basic understanding of how to define points in space and to manipulate them efficiently. But we will brush up our linear algebra knowledge in a moment.
-To see those objects, you need to apply coloring to your primitives. Color values are often
+To see the objects, you must apply coloring to your primitives. Color values are often
defined for each primitive (for each vertex to be precise) and used to paint or fill in with
-color. For more realistic applications, images (called textures) are placed on top of them. The
-appearance can be further adapted according to material properties or lighting.
+color. For more realistic applications, images (called textures) are placed on top of the objects.
+The appearance can be further adapted according to material properties or lighting.

So how do we actually get our scene displayed on the screen?
-Since the computer screen is a two dimensional device, we need to project our objects onto a
-plane. This plane is then mapped to a region on our screen called the viewport*. To
+As the computer screen is a two dimensional device, we need to project our objects onto a
+plane. This plane is then mapped to a region on our screen called the *viewport*. To
understand this process illustratively, imagine that you are standing in front of the window and sketching the outlines of the objects you see outside onto the glass without moving your head. The drawing on the glass then represents a two dimensional projection of your
@@ -95,6 +95,7 @@ What we just introduced is called `perspective projection`. It has a viewing vol
of a frustum and adds the illusion that distant objects appear smaller than closer objects of the same size. This greatly contributes to realism, and therefore, is used for simulations, games and VR (virtual reality) applications.
+
The other type is called `orthographic projection`. Orthographic projections are specified by
a rectangular viewing volume. Every two objects that have the same size also have the same
size in the projection regardless of its distance from the viewer. This is often used in CAD
@@ -108,7 +109,7 @@ size in the projection regardless of its distance from the viewer.
This is often used in CAD applications to view and evaluate objects from different sides.
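A minimal sketch of how both projection types can be set up with Qt's `QMatrix4x4` class, which the examples later in this guide rely on; the field of view, aspect ratio, and viewing-volume values here are arbitrary assumptions:

.. code-block:: cpp

    #include <QMatrix4x4>

    void setUpProjections()
    {
        QMatrix4x4 projection;

        // Perspective projection: a frustum-shaped viewing volume, so
        // distant objects appear smaller (60 degree vertical field of view,
        // 4:3 aspect ratio, near/far clipping planes at 0.1 and 1000).
        projection.setToIdentity();
        projection.perspective(60.0f, 4.0f / 3.0f, 0.1f, 1000.0f);

        // Orthographic projection: a rectangular viewing volume, so the
        // projected size is independent of the distance to the viewer.
        projection.setToIdentity();
        projection.ortho(-2.0f, 2.0f, -1.5f, 1.5f, 0.1f, 1000.0f);
    }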
A Short Recapitulation of Linear Algebra
----------------------------------------
-Since it is essential to have a basic understanding of linear algebra when writing OpenGL programs, this chapter will briefly state the most important concepts involved. Although we will mostly have Qt do the math, it is still good to know, what is going on in the background.
+As it is essential to have a basic understanding of linear algebra when writing OpenGL programs, this chapter will briefly state the most important concepts involved. Although we will mostly let Qt do the math, it is still good to know what is going on in the background.

The location of a 3D point in relation to an arbitrary coordinate system is identified by its x-, y- and z-coordinates. This set of values is also called a `vector`. When used to describe primitives, it is called a `vertex`.
@@ -133,7 +134,8 @@ Scaling means multiplying the vertices by the desired ratio (here named `s`).
   :align: center

Rotating, stretching, shearing, or reflecting is more complicated and is achieved by multiplying the vertices by a transformation matrix (here named `T`).
-A matrix is basically a table of coefficients that, when multiplied by a vector, yields a new vector with each element that is a linear combination of the multiplied vector's elements.
+A matrix is basically a table of coefficients that, when multiplied by a vector, yields a new
+vector in which each element is a linear combination of the multiplied vector's elements.

.. image:: images/opengl-formula-matrix.png
   :align: center
@@ -183,7 +185,7 @@ Coordinate Systems & Frame Concept
How can we use this knowledge of linear algebra to put a three dimensional scene on screen? In this tutorial, we will use the most widely used concept called the `frame concept`. This pattern allows us to easily manage objects and viewers (including their positions and orientations) as well as the projection that we want to apply.
-Imagine two coordinate systems: `A` and `B`. Coordinate system `B` originates from coordinate system `A` via a translation and a rotation that can be described by the matrix
+Imagine two coordinate systems: `A` and `B`. Coordinate system `B` originates from coordinate system `A` via a translation and a rotation that can be described by the following matrix:

.. image:: images/opengl-formula-t.png
   :align: center
@@ -198,7 +200,7 @@ in coordinate system `B`, the corresponding coordinates of point

.. image:: images/opengl-formula-pa-calculation.png
   :align: center
-can be calculated.
+can be calculated:

.. image:: images/opengl-formula-pa.png
   :align: center
@@ -224,11 +226,11 @@ Another matrix that is often used is the `model-view-projection matrix`. It is t
The definition of these matrices has various advantages:
- In the design phase, every object's model (i.e. its set of vertices) can be specified in relation to an arbitrary coordinate system (for example its center point)
+ * In the design phase, every object's model (i.e. its set of vertices) can be specified in relation to an arbitrary coordinate system (for example its center point).
- The transformation process is divided into small steps, which as such are quite illustrative
+ * The transformation process is divided into small steps, which are quite illustrative.
- All the used transformation matrices can be calculated, stored and combined efficiently
+ * All the used transformation matrices can be calculated, stored, and combined efficiently.
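To make the frame concept above concrete, here is a hedged sketch with Qt's `QMatrix4x4` and `QVector3D`; the names `mMatrix`, `vMatrix`, and `pMatrix` are placeholders for the model, view, and projection matrices:

.. code-block:: cpp

    #include <QMatrix4x4>
    #include <QVector3D>

    void frameConceptSketch(const QMatrix4x4 &mMatrix,
                            const QMatrix4x4 &vMatrix,
                            const QMatrix4x4 &pMatrix)
    {
        // Frame B originates from frame A via a translation and a rotation.
        QMatrix4x4 t;                       // the matrix called T above
        t.translate(1.0f, 0.0f, 0.0f);      // B's origin expressed in A
        t.rotate(45.0f, 0.0f, 0.0f, 1.0f);  // B is rotated around A's z axis

        QVector3D pB(0.0f, 1.0f, 0.0f);     // a point given in frame B
        QVector3D pA = t.map(pB);           // the same point expressed in A

        // The model-view-projection matrix is the product of the parts:
        QMatrix4x4 mvpMatrix = pMatrix * vMatrix * mMatrix;
    }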
.. image:: images/opengl-transformation-pipeline.png
   :align: center

@@ -239,24 +241,24 @@ The figure above illustrates the steps that are required to yield proper screen
The OpenGL Rendering Pipeline
-----------------------------
-The OpenGL rendering pipeline is a high level model which describes the basic steps that OpenGL takes to render a picture on the screen. As the word `pipeline` suggests, all operations are applied in a particular order. Very simply put, the rendering pipeline has a state, takes some inputs and returns an image to the screen.
+The OpenGL rendering pipeline is a high-level model that describes the basic steps that OpenGL takes to render a picture on the screen. As the word `pipeline` suggests, all operations are applied in a particular order. Put simply, the rendering pipeline has a state, takes some inputs, and returns an image to the screen.
-The state of the rendering pipeline affects the behavior of its functions. As it is not practical to set options every time we want to draw something, we can set parameters beforehand. These parameters are then used in all subsequent function calls. For example, once you've defined a background color, that color is used to clear the screen until you change it to something else. You can also turn distinct capabilities like depth testing or multisampling on and off. Therefore, to draw an overlay image on top of your screen, you would first draw the scene with depth testing enabled, then disable depth testing, and after that, draw the overlay elements, which will then always be rendered on top of the screen regardless of their distance from the viewer.
+The state of the rendering pipeline affects the behavior of its functions. As it is not practical to set options every time we want to draw something, we can set parameters beforehand. These parameters are then used in all subsequent function calls. For example, once you've defined a background color, that color is used to clear the screen until you change it to something else. You can also turn distinct capabilities such as depth testing or multisampling on and off. Therefore, to draw an overlay image on top of your screen, you would first draw the scene with depth testing enabled, then disable depth testing and draw the overlay elements, which will then always be rendered on top of the screen regardless of their distance from the viewer.

The inputs to the pipeline can be provided as single values or arrays. Most of the time these values will represent vertex positions, surface normals, textures, texture coordinates or color values. The output of the rendering pipeline is the image that is displayed on the screen or written into memory. Such a memory segment is then called a framebuffer.
-The figure below shows a simplified version of the pipeline. The elements that are not relevant to this tutorial were omitted (such as tesselation, geometry shading and transform feedback).
+The figure below shows a simplified version of the pipeline. The elements that are not relevant to this tutorial were omitted (such as tessellation, geometry shading, and transform feedback).
-The main program that resides inside the computer's memory, and is executed by the CPU, is displayed in the left column. The steps executed on the graphics card are listed in the column on the right.
+The main program, which resides inside the computer's memory and is executed by the CPU, is displayed in the left column. The steps executed on the graphics card are listed in the column on the right.
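The overlay technique described above maps to only a handful of calls. A sketch, assuming `drawScene()` and `drawOverlay()` are hypothetical helpers that issue the actual draw calls:

.. code-block:: cpp

    void drawScene();    // hypothetical helpers that issue
    void drawOverlay();  // the actual draw calls

    void renderFrame()
    {
        // State set once stays in effect for all subsequent calls.
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glEnable(GL_DEPTH_TEST);   // overlapping primitives are resolved
        drawScene();               // by their distance to the viewer

        glDisable(GL_DEPTH_TEST);  // the overlay ignores that distance
        drawOverlay();             // and is always rendered on top
    }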
.. image:: images/opengl-rendering-pipeline.png
   :align: center

-The graphics card has its own memory and a GPU just like a small, but powerful computer that is highly specialized in processing 3D data. Programs that run on the GPU are called shaders. Both the host computer and the graphics card can work independently. To take full advantage of hardware acceleration, you should keep both of them busy at the same time.
+The graphics card has its own memory and a GPU, just like a small but powerful computer that is highly specialized in processing 3D data. Programs that run on the GPU are called shaders. Both the host computer and the graphics card can work independently, and you should keep both of them busy at the same time to take full advantage of hardware acceleration.
-During `vertex specification`, the ordered list of vertices that gets streamed to the next step is set up. This data can either be sent by the program that is executed on the CPU one vertex after the other or read from GPU memory directly using buffer objects. However, repeatedly getting data via the system bus should be avoided whenever possible since it is faster for the graphics card to access its own memory.
+During `vertex specification`, the ordered list of vertices that gets streamed to the next step is set up. This data can either be sent by the program that is executed on the CPU one vertex after the other or read from GPU memory directly using buffer objects. However, repeatedly getting data via the system bus should be avoided whenever possible, as it is faster for the graphics card to access its own memory.

The `vertex shader` processes data on a per vertex basis. It receives this stream of vertices along with additional attributes like associated texture coordinates or color values, and data such as the model-view-projection matrix. Its typical task is to transform vertices and to apply the projection matrix. Besides its interface to the immediately following stage, the vertex shader can also pass data to the fragment shader directly.
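A minimal vertex shader of the kind just described might look as follows. This is a sketch embedded as a C++ string; the input names `mvpMatrix` and `vertex` match the ones used in the examples later in this guide:

.. code-block:: cpp

    const char *vertexShaderSource =
        "uniform mat4 mvpMatrix;\n"  // per-rendering input: the MVP matrix
        "attribute vec4 vertex;\n"   // per-vertex input from the vertex stream
        "void main()\n"
        "{\n"
        "    gl_Position = mvpMatrix * vertex;\n"  // transform and project
        "}\n";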
Because there are usually a lot of fragments in between a few vertices, values sent by the vertex shader are generally interpolated. Whenever possible, computational intensive calculations should be implemented in the vertex instead of in the fragment shader as there are usually many more fragments to compute than vertices. -The final stage, `per-sample operations`, applies several tests to decide which fragments should actually be written to the framebuffer (depth test, masking etc). After this, blending occurs and the final image is stored in the framebuffer. +The final stage, `per-sample operations`, applies several tests to decide which fragments should actually be written to the framebuffer (depth test, masking, and so on). After this, blending occurs and the final image is stored in the framebuffer. OpenGL API ---------- -This chapter will explain the conventions used in OpenGL. Although we will try to use Qt's abstraction to the OpenGL API whenever possible, we will still need to call some of its functions directly. The examples will introduce you to the required functions. +This chapter will explain the conventions used in OpenGL. Although we will try to use Qt's abstraction to the OpenGL API wherever possible, we will still need to call some of the OpenGL functions directly. The examples will introduce you to the required functions. The OpenGL API uses its own data types to improve portability and readability. These types are guaranteed to hava a minimum range and precision on every platform. -.. list-table:: - :widths: 20 80 - :header-rows: 1 - :stub-columns: 0 - - - Type - - Description - - *GLenum* - - Indicates that one of OpenGL's preprocessor definitions is expected. - - *GLboolean* - - Used for boolean values. - - *GLbitfield* - - Used for bitfields. - - *GLvoid* - - sed to pass pointers. - - *GLbyte* - - 1-byte signed integer. - - *GLshort* - - GLshort 2-byte signed integer. - - *GLint* - - 4-byte signed integer. - - *GLubyte* - - 1-byte unsigned integer. - - *GLushort* - - 2-byte unsigned integer. - - *GLuint* - - 4-byte unsigned integer. - - *GLsizei* - - Used for sizes. - - *GLfloat* - - Single precision floating point number. - - *GLclampf* - - Single precision floating point number ranging from 0 to 1. - - *GLdouble* - - Double precision floating point number. - - *GLclampd* - - Double precision floating point number ranging from 0 to 1. - -OpenGL's various preprocessor definitions are prefixed with GL_*. Its functions begin with *gl*. - -A function that triggers the rendering process for example is declared as void glDrawArrays(GLenum mode, GLint first, GLsizei count)*. + .. list-table:: + :widths: 20 80 + :header-rows: 1 + :stub-columns: 0 + + * - Type + - Description + * - *GLenum* + - Indicates that one of OpenGL's preprocessor definitions is expected. + * - *GLboolean* + - Used for boolean values. + * - *GLbitfield* + - Used for bitfields. + * - *GLvoid* + - sed to pass pointers. + * - *GLbyte* + - 1-byte signed integer. + * - *GLshort* + - GLshort 2-byte signed integer. + * - *GLint* + - 4-byte signed integer. + * - *GLubyte* + - 1-byte unsigned integer. + * - *GLushort* + - 2-byte unsigned integer. + * - *GLuint* + - 4-byte unsigned integer. + * - *GLsizei* + - Used for sizes. + * - *GLfloat* + - Single precision floating point number. + * - *GLclampf* + - Single precision floating point number ranging from 0 to 1. + * - *GLdouble* + - Double precision floating point number. + * - *GLclampd* + - Double precision floating point number ranging from 0 to 1. 
The OpenGL Shading language
---------------------------
-As we have already learned, programming shaders is one of the core requirements when using OpenGL. Shader programs are written in a high level language called `The OpenGL Shading Language (GLSL)`, which is a language very similar to C. To install a shader program, the shader source code has to be sent to the graphics card as a string, where the program then needs to be compiled and linked.
+As we have already learned, programming shaders is one of the core requirements when using OpenGL. Shader programs are written in a high-level language called `The OpenGL Shading Language (GLSL)`, which is a language very similar to C. To install a shader program, the shader source code has to be sent to the graphics card as a string, where the program then needs to be compiled and linked.

The language specifies various types suited to its needs.

@@ -332,97 +333,96 @@ The language specifies various types suited to its needs.
   :header-rows: 1
   :stub-columns: 0

-   - Type
-   - Description
-   - *void*
-   - No `function return` value or `empty parameter` list.
-   - *float*
-   - Floating point value.
-   - *int*
-   - Signed integer.
-   - *bool*
-   - Boolean value.
-   - *vec2, vec3, vec4*
-   - Floating point vector.
-   - *ivec2, ivec3, ivec4*
-   - Signed integer vector.
-   - *bvec2, bvec3, bvec4*
-   - Boolean vector.
-   - *mat2, mat3, mat4*
-   - 2x2, 3x3, 4x4 floating point matrix.
-   - *sampler2D*
-   - Access a 2D texture.
-
-   - *samplerCube*
-   - Access cube mapped texture.
+   * - Type
+     - Description
+   * - *void*
+     - No `function return` value or `empty parameter` list.
+   * - *float*
+     - Floating point value.
+   * - *int*
+     - Signed integer.
+   * - *bool*
+     - Boolean value.
+   * - *vec2, vec3, vec4*
+     - Floating point vector.
+   * - *ivec2, ivec3, ivec4*
+     - Signed integer vector.
+   * - *bvec2, bvec3, bvec4*
+     - Boolean vector.
+   * - *mat2, mat3, mat4*
+     - 2x2, 3x3, 4x4 floating point matrix.
+   * - *sampler2D*
+     - Access a 2D texture.
+   * - *samplerCube*
+     - Access cube mapped texture.

All these types may be combined using a C-like structure or array.
-To access the elements of a vector or a matrix, square brackets "[]" can be used (e.g. vector[index] = value* and *matrix[column][row] = value;*). In addition to this, the vector's named components are accessible by using the field selector operator "." (e.g. *vector.x = xValue* and *vector.xy = vec2(xValue, yValue)*). The names *(x, y, z, w)* are used for positions. *(r, g, b, a)* and *(s, t, p, q)* are used to address color values and texture coordinates respectively.
+To access the elements of a vector or a matrix, square brackets "[]" can be used (for example, *vector[index] = value* and *matrix[column][row] = value;*). In addition to this, the vector's named components are accessible by using the field selector operator, "." (for example, *vector.x = xValue* and *vector.xy = vec2(xValue, yValue)*). The names *(x, y, z, w)* are used for positions. *(r, g, b, a)* and *(s, t, p, q)* are used to address color values and texture coordinates respectively.

To define the linkage between different shaders as well as between shaders and the application, GLSL provides variables with extra functionality by using storage qualifiers.
These storage qualifiers need to be written before the type name during declaration.

-.. list-table::
-   :widths: 30 70
-   :header-rows: 1
-   :stub-columns: 0
-
-   - Storage Qualifier
-   - Description
-   - *none*
-   - (default) Normal variable
-   - *const*
-   - Compile-time constant
-   - *attribute*
-   - Linkage between a vertex shader and OpenGL for per-vertex data.Since the vertex shader is executed once for every vertex, this read-only value holds a new value every time it runs. It is used to pass vertices to the vertex shader for example.
-   - *uniform*
-   - Linkage between a shader and OpenGL for per-rendering data. This read-only value does not change across the the whole rendering process. It issued to pass the model-view-projection matrix for example since this parameter does not change for one object.
-   - *varying*
-   - Linkage between the vertex shader and the fragment shader for interpolated data. This variable is used to pass values calculated in the vertex shader to the fragment shader. For this to work, the variables need to share the same name the in both shaders. Since there are usually a lot of fragments in between a few vertices, the data calculated by the vertex shader is (by default) interpolated. Such variables are often used as texture coordinates or lighting calculations.
+ .. list-table::
+    :widths: 40 60
+    :header-rows: 1
+    :stub-columns: 0
+
+    * - Storage Qualifier
+      - Description
+    * - *none*
+      - (default) Normal variable
+    * - *const*
+      - Compile-time constant
+    * - *attribute*
+      - Linkage between a vertex shader and OpenGL for per-vertex data. As the vertex shader is executed once for every vertex, this read-only value holds a new value every time it runs. It is used to pass vertices to the vertex shader, for example.
+    * - *uniform*
+      - Linkage between a shader and OpenGL for per-rendering data. This read-only value does not change across the whole rendering process. It is used to pass the model-view-projection matrix, for example, as this parameter does not change for one object.
+    * - *varying*
+      - Linkage between the vertex shader and the fragment shader for interpolated data. This variable is used to pass values calculated in the vertex shader to the fragment shader. For this to work, the variables need to share the same name in both shaders. As there are usually a lot of fragments in between a few vertices, the data calculated by the vertex shader is (by default) interpolated. Such variables are often used for texture coordinates or lighting calculations.

-To send data from the vertex shader to the fragment shader, the `out` variable of the vertex shader and the `in` variable of the fragment shader need to share the same name. Since there are usually a lot of fragments in between a few vertices, the data calculated by the vertex shader is by default interpolated in a perspective correct manner. To enforce this behavior, the additional qualifier smooth* can be written before *in*. To use linear interpolation, the *noperspective* qualifier can be set. Interpolation can be completely disabled by using *flat*. Then, for all the fragments in between a primitive, the value output by the first vertex of this primitive is used.
+To send data from the vertex shader to the fragment shader, the `out` variable of the vertex shader and the `in` variable of the fragment shader need to share the same name. As there are usually a lot of fragments in between a few vertices, the data calculated by the vertex shader is by default interpolated in a perspective-correct manner.
To enforce this behavior, the additional qualifier *smooth* can be written before *in*. To use linear interpolation, the *noperspective* qualifier can be set. Interpolation can be completely disabled by using *flat*, which results in using the value output by the first vertex of the primitive for all the fragments within that primitive.

This kind of variable is commonly called a `varying`, due to this interpolation and because in earlier versions of OpenGL this shader-to-shader linkage was achieved using a variable qualifier called *varying* instead of *in* and *out*.

Several built-in variables are used for communication with the pipeline. We will use the following:

-.. list-table::
-   :widths: 30 70
-   :header-rows: 1
-   :stub-columns: 0
-
-   - Variable Name
-   - Description
-   - *vec4 gl_Position*
-   - The rasterization step needs to know the position of the transformed vertex. Therefore, the vertex shader needs to set this variable to the calculated value.
-   - *vec4 gl_FragColor*
-   - This variable defines the fragment's RGBA color that will eventually be written to the frame buffer. This value can be set by the fragment shader.
+ .. list-table::
+    :widths: 40 60
+    :header-rows: 1
+    :stub-columns: 0
+
+    * - Variable Name
+      - Description
+    * - *vec4 gl_Position*
+      - The rasterization step needs to know the position of the transformed vertex. Therefore, the vertex shader needs to set this variable to the calculated value.
+    * - *vec4 gl_FragColor*
+      - This variable defines the fragment's RGBA color that will eventually be written to the frame buffer. This value can be set by the fragment shader.

-When using multiple variable qualifiers, the order is <storage qualifier> <precision qualifier> <type> <name>*.
+When using multiple variable qualifiers, the order is *<storage qualifier> <precision qualifier> <type> <name>*.

-Just like in C, every GLSL program's entry point is the main()* function, but you are also allowed to declare your own functions. Functions in GLSL work quite differently than those in C. They do not have a return value. Instead, values are returned using a calling convention called `value-return`. For this purpose, GLSL uses parameter qualifiers, which need to be written before the variable type during function declaration. These qualifiers specify if and when values are exchanged between a function and its caller.
+Just like in C, every GLSL program's entry point is the *main()* function, but you are also allowed to declare your own functions. Functions in GLSL work quite differently than those in C. They do not have a return value. Instead, values are returned using a calling convention called `value-return`. For this purpose, GLSL uses parameter qualifiers, which must be written before the variable type during function declaration. These qualifiers specify if and when values are exchanged between a function and its caller.

-.. list-table::
-   :widths: 30 70
-   :header-rows: 1
-   :stub-columns: 0
-
-   - Parameter qualifier
-   - Description
-   - in
-   - (default) On entry, the variable is initialized to the value passed by the caller.
-   - out
-   - On return, the value of this variable is written into the variable passed by the caller. The variable is not initialized.
-   - inout
-   - A combination of in and out. The variable is both initialized and returned.
+ .. list-table::
+    :widths: 40 60
+    :header-rows: 1
+    :stub-columns: 0
+
+    * - Parameter qualifier
+      - Description
+    * - in
+      - (default) On entry, the variable is initialized to the value passed by the caller.
+    * - out
+      - On return, the value of this variable is written into the variable passed by the caller. The variable is not initialized.
+    * - inout
+      - A combination of in and out. The variable is both initialized and returned.
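A small sketch of the value-return convention, with the GLSL code embedded as a C++ string; `scaleAndBias` is a hypothetical function showing how an *out* parameter carries the result back to the caller:

.. code-block:: cpp

    const char *glslFunctionSnippet =
        "void scaleAndBias(in float value, in float scale, out float result)\n"
        "{\n"
        "    result = value * scale + 0.5;  // written back to the caller\n"
        "}\n";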
-There are actually many more qualifiers, but listing all of them goes beyond the scope of this tutorial.
+There are actually many more qualifiers, but listing all of them is beyond the scope of this tutorial.

- The language also offers control structures like if*, *switch*, *for*, *while*, and *do while*, including *break* and *return*. Additionally, in the fragment shader, you can call *discard* to exit the fragment shader and have that fragment ignored by the rest of the pipeline.
+ The language also offers control structures such as *if*, *switch*, *for*, *while*, and *do while*, including *break* and *return*. Additionally, in the fragment shader, you can call *discard* to exit the fragment shader and have that fragment ignored by the rest of the pipeline.

- GLSL also uses several preprocessor directives. The most notable one that you should use in all of your programs is #version* followed by the three digits of the language version you want to use (e.g. *#version 330* for version 3.3). By default, OpenGL assumes version 1.1, which might not always be what you want.
+ GLSL also uses several preprocessor directives. The most notable one that you should use in all of your programs is *#version*, followed by the three digits of the language version you want to use (e.g. *#version 330* for version 3.3). By default, OpenGL assumes version 1.1, which might not always be what you want.

Although GLSL is very similar to C, there are still some restrictions you should be aware of:

@@ -434,7 +434,7 @@ There are actually many more qualifiers, but listing all of them goes beyond the
Array indexing is only possible with constant indices.
- Type casting is only possible using constructors (e.g. *myFloat = float(myInt);*).
+ Type casting is only possible using constructors (for example, *myFloat = float(myInt);*).

.. note:: The scene you want to render may be so complex that it has thousands of vertices and millions of fragments. This is why modern graphics cards are equipped with several stream processing units, each of which executes one vertex shader or fragment shader at a time. Because all vertices and fragments are processed in parallel, there is no way for the shader to query the properties of another vertex or fragment.
diff --git a/openGL_tutorial/usingOpenGL.rst b/openGL_tutorial/usingOpenGL.rst
index a6ef0d4..9e9ed89 100644
--- a/openGL_tutorial/usingOpenGL.rst
+++ b/openGL_tutorial/usingOpenGL.rst
@@ -12,14 +12,14 @@
Using OpenGL in your Qt Application
===================================
-Qt provides a widget called `QGLWidget` for rendering OpenGL Graphics, which enables you to easily integrate OpenGL into your Qt application. It is subclassed and used like any other QWidget and is cross platform. You usually reimplement the following three virtual methods:
+Qt provides a widget called `QGLWidget` for rendering OpenGL graphics, which enables you to easily integrate OpenGL into your Qt application. It is subclassed and used like any other QWidget and is
+cross-platform. You usually reimplement the following three virtual methods:

- - `QGLWidget::initializeGL()` - sets up the OpenGL rendering context. It is called once before the first time `QGLWidget::resizeGL()` or `QGLWidget::paintGL()` are called.
+ `QGLWidget::initializeGL()` - sets up the OpenGL rendering context. It is called once before the `QGLWidget::resizeGL()` or `QGLWidget::paintGL()` function is called for the first time.

`QGLWidget::resizeGL()` - gets called whenever the `QGLWidget` is resized, and after initialization. This method is generally used for setting up the viewport and the projection.

`QGLWidget::paintGL()` - renders the OpenGL scene. It is comparable to `QWidget::paint()`.

-Qt also offers a cross platform abstraction for shader programs called `QGLShaderProgram`. This class facilitates the process of compiling and linking the shader programs as well as switching between different shaders.
+Qt also offers a cross-platform abstraction for shader programs called `QGLShaderProgram`. This class facilitates the process of compiling and linking the shader programs as well as switching between different shaders.

.. note:: You might need to adapt the versions set in the example source codes to those supported by your system.

@@ -28,8 +28,7 @@
Hello OpenGL
------------
We are beginning with a small `Hello World` example that will have our graphics card render a simple triangle. For this purpose we subclass `QGLWidget` in order to obtain an OpenGL rendering context and write a simple vertex and fragment shader.
-This example will confirm if we have set up our development environment properly and if OpenGL is running on our target system.
-
+This example confirms whether we have set up our development environment properly.

.. image:: images/opengl-example-hello-opengl.png

@@ -54,18 +53,17 @@ We want the widget to be a subclass of `QGLWidget`. Because we might later be us
To call the usual OpenGL rendering commands, we reimplement the three virtual functions `GLWidget::initializeGL()`, `QGLWidget::resizeGL()`, and `QGLWidget::paintGL()`.
-We also need some member variables. `pMatrix` is a `QMatrix4x4` that keeps the projection part of the transformation pipeline. To manage the shaders, we will use a `QGLShaderProgram` we named `shaderProgram`. `vertices` is a `QVector` made of `QVector3Ds` that stores the triangle's vertices. Although the vertex shader will expect us to send homogeneous coordinates, we can use 3D vectors, because the OpenGL pipeline will automatically set the fourth coordinate to the default value of 1.
+We also need some member variables. `pMatrix` is a `QMatrix4x4` that keeps the projection part of the transformation pipeline. To manage the shaders, we use a `QGLShaderProgram` named `shaderProgram`. `vertices` is a `QVector` made of `QVector3Ds` that stores the triangle's vertices. Although the vertex shader will expect us to send homogeneous coordinates, we can use 3D vectors, because the OpenGL pipeline automatically sets the fourth coordinate to the default value of 1.

.. literalinclude:: src/examples/hello-opengl/glwidget.h
   :language: cpp
   :start-after: //! [0]
   :end-before: //! [0]
-
Now that we have defined our widget, we can finally talk about the implementation. The constructor's initializer list calls `QGLWidget's` constructor passing a `QGLFormat` object.
-This can be used to set the capabilities of the OpenGL rendering context such as double buffering or multisampling. We are fine with the default values so we could as well have omitted the `QLFormat`. Qt will try to acquire a rendering context as close as possible to what we want.
+This can be used to set the capabilities of the OpenGL rendering context such as double buffering or multisampling.
We are fine with the default values so we could just as well have omitted the `QGLFormat`. Qt tries to acquire a rendering context as close as possible to what we want.

Then we reimplement `QWidget::sizeHint()` to set a reasonable default size for the widget.

@@ -78,8 +76,7 @@ Then we reimplement `QWidget::sizeHint()` to set a reasonable default size for t
The `QGLWidget::initializeGL()` method gets called once when the OpenGL context is created. We use this function to set the behavior of the rendering context and to build the shader programs.
-
-If we want to render 3D images, we need to enable depth testing. This is one of the tests that can be performed during the per-sample-operations stage. It will cause OpenGL to only display the fragments nearest to the camera when primitives overlap. Although we do not need this capability since we only want to show a plane triangle, we will need this setting in our other examples. If you've omitted this statement, you might see objects in the back popping through objects in the front depending on the order the primitives are rendered.
+If we want to render 3D images, we need to enable depth testing. This is one of the tests that can be performed during the per-sample-operations stage. It will cause OpenGL to only display the fragments nearest to the camera when primitives overlap. Although we do not need this capability here, as we only want to show a plain triangle, we will need this setting in our other examples. If you omit this statement, you might see objects in the back popping through objects in the front depending on the order the primitives are rendered.

Deactivating this capability is useful if you want to draw an overlay image on top of the screen.

As an easy way to significantly improve the performance of a 3D application, we also enable face culling. This tells OpenGL to only render primitives that show their front side. The front side is defined by the order of the triangle's vertices. You can tell what side of the triangle you are seeing by looking at its corners. If the triangle's corners are specified in a counterclockwise order, this means that the front of the triangle is the side facing you. For all triangles that are not facing the camera, the fragment processing stage can be omitted.

@@ -90,7 +87,7 @@ The specified color will then be used in all subsequent calls to `glClear(GLbitf
In the following section we are setting up the shaders. We pass the source codes of the shaders to the QGLShaderProgram, compile and link them, and bind the program to the current OpenGL rendering context.
-Shader programs need to be supplied as source codes. We can use `QGLShaderProgram::addShaderFromSourceFile()` to have Qt handle the compilation. This function compiles the source code as the specified shader type and adds it to the shader program. If an error occurs, the function returns `false`, and we can access the compilation errors and warnings using `QGLShaderProgram::log()`.
+Shader programs need to be supplied as source codes. We can use `QGLShaderProgram::addShaderFromSourceFile()` to let Qt handle the compilation. This function compiles the source code as the specified shader type and adds it to the shader program. If an error occurs, the function returns `false`, and we can access the compilation errors and warnings using `QGLShaderProgram::log()`. Errors will be automatically printed to the standard error output if we run the program in debug mode.
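A sketch of that compile step with explicit error reporting, assuming a `shaderProgram` member like the one declared earlier and placeholder resource paths for the two shader files:

.. code-block:: cpp

    // initializeGL(), excerpt: compile both shaders and print any
    // compilation errors and warnings ourselves.
    if (!shaderProgram.addShaderFromSourceFile(QGLShader::Vertex,
                                               ":/vertexShader.vsh")) {
        qDebug() << shaderProgram.log();
    }
    if (!shaderProgram.addShaderFromSourceFile(QGLShader::Fragment,
                                               ":/fragmentShader.fsh")) {
        qDebug() << shaderProgram.log();
    }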
After the compilation, we still need to link the programs using `QGLShaderProgram::link()`. We can again check for errors and access the errors and warnings using `QGLShaderProgram::log()`.

@@ -102,7 +99,6 @@ Binding and releasing a program can be done several times during the rendering p
which means several vertex and fragment shaders can be used for different objects in the scene. We will therefore use these functions in the `QGLWidget::paintGL()` function.
-
Last but not least, we set up the triangles' vertices. Note that we've defined the triangle with the front side pointing to the positive z direction. Having face culling enabled, we can then see this object if we look at it from viewer positions with a z value greater than this object's z value.

.. literalinclude:: src/examples/hello-opengl/glwidget.cpp
   :language: cpp
   :start-after: //! [1]
   :end-before: //! [1]

Now let's take a look at the shaders we will use in this example. The vertex shader only calculates the final projection of each vertex by multiplying the vertex with the model-view-projection matrix.
-It needs to read two input variables. The first input is the model-view-projection matrix. It is a 4x4 matrix, that only changes once per object and is therefore declared as a `uniform mat4`. We've named it `mvpMatrix`. The second variable is the actual vertex that the shader is processing. As the shader reads a new value every time it is executed, the vertex variable needs to be declared as an `attribute vec4`. We've named this variable `vertex`.
+It needs to read two input variables. The first input is the model-view-projection matrix. It is a 4x4 matrix that changes once per object and is therefore declared as a `uniform mat4`. We've named it `mvpMatrix`. The second variable is the actual vertex that the shader is processing. As the shader reads a new value every time it is executed, the vertex variable needs to be declared as an `attribute vec4`. We've named this variable `vertex`.

In the `main()` function, we simply calculate the resulting position that is sent to the rasterization stage using built-in matrix-vector multiplication.

@@ -141,7 +137,7 @@ The `main()` function then sets the built-in `gl_FragColor` output variable to t
The reimplemented `QGLWidget::resizeGL()` method is called whenever the widget is resized. This is why we use this function to set up the projection matrix and the viewport.
-After we had checked the widget's height to prevent a division by zero, we set it to a matrix that does the perspective projection. Luckily we do not have to calculate it ourselves. We can use one of the many useful methods of `QMatrix4x4`, namely `QMatrix4x4::perspective()`, which does exactly what we need. This method multiplies its `QMatrix4x4` instance with a projection matrix that is specified by the angle of the field of view, its aspect ratio and the clipping regions of the near and far planes. The matrix we get using this function resembles the projection of a camera that is sitting in the origin of the world coordinate system looking towards the world's negative z direction with the world's x axis pointing to the right side and the y axis pointing upwards. The fact that this function alters its instance explains the need to first initialize it to an identity matrix (a matrix that, if it is applied as a transformation, does not change a vector at all).
+After we have checked the widget's height to prevent a division by zero, we set the matrix to one that does the perspective projection. Luckily we do not have to calculate it ourselves. We can use one of the many useful methods of `QMatrix4x4`, namely `QMatrix4x4::perspective()`, which does exactly what we need. This method multiplies its `QMatrix4x4` instance with a projection matrix that is specified by the angle of the field of view, its aspect ratio and the clipping regions of the near and far planes. The matrix we get using this function resembles the projection of a camera that is sitting in the origin of the world coordinate system looking towards the world's negative z direction with the world's x axis pointing to the right side and the y axis pointing upwards. The fact that this function alters its instance explains the need to first initialize it to an identity matrix (a matrix that doesn't change the vector when applied as a transformation).
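Put together, the reimplemented method might look like this sketch, using the `pMatrix` member introduced earlier; the field of view and clipping plane values are assumptions:

.. code-block:: cpp

    void GLWidget::resizeGL(int width, int height)
    {
        if (height == 0) {
            height = 1;  // prevent a division by zero below
        }

        pMatrix.setToIdentity();  // start from the identity matrix
        pMatrix.perspective(60.0, (float) width / (float) height, 0.001, 1000);

        glViewport(0, 0, width, height);  // map the projection to the widget
    }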
Next we set up the OpenGL viewport. The viewport defines the region of the widget that the result of the projection is mapped to. This mapping transforms the normalized coordinates on the aforementioned camera's film to pixel coordinates within the `QGLWidget`. To avoid distortion, the aspect ratio of the viewport should match the aspect ratio of the projection.

@@ -155,15 +151,14 @@ Finally, we have OpenGL draw the triangle in the `QGLWidget::paintGL()` method.
The first thing we do is clear the screen using `glClear(GLbitfield mask)`. If this OpenGL function is called with the `GL_COLOR_BUFFER_BIT` set, it fills the color buffer with the color set by `glClearColor(GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha)`. Setting the `GL_DEPTH_BUFFER_BIT` tells OpenGL to clear the depth buffer, which is used for the depth test and stores the distance of rendered pixels. We usually need to clear both buffers, and therefore, we set both bits.
-
As we already know, the model-view-projection matrix that is used by the vertex shader is a concatenation of the model matrix, the view matrix and the projection matrix. Just like for the projection matrix, we also use the `QMatrix4x4` class to handle the other two transformations. Although we do not want to use them in this basic example, we already introduce them here to clarify their use. We use them to calculate the model-view-projection matrix, but leave them initialized to the identity matrix. This means we do not move or rotate the triangle's frame and also leave the camera unchanged, located in the origin of the world coordinate system.
-The rendering can now be triggered by calling the OpenGL function `glDrawArrays(GLenum mode, GLint first, GLsizei count)`. But before we can do that, we need to bind the shaders and hand over all the uniforms and attributes they need.
+The rendering can now be triggered by calling the OpenGL function `glDrawArrays(GLenum mode, GLint first, GLsizei count)`. But before we can do that, we need to bind the shaders and hand over all the uniforms and attributes they need.
-In native OpenGL the programmer would first have to query the id (called `location`) of each input variable using the verbatim variable name as it is typed in the shader source code
+In native OpenGL, the programmer would first have to query the id (called `location`) of each input variable using the verbatim variable name as it is typed in the shader source code,
and then set its value using this id and a type OpenGL understands.

`QGLShaderProgram` instead offers a huge set of overloaded functions for this purpose which allow you to address an input variable using either its `location` or its name. These functions can also
@@ -171,20 +166,20 @@
automatically convert the variable type from Qt types to OpenGL types.

We set the uniform values for both shaders using `QGLShaderProgram::setUniformValue()` by passing its name. The vertex shader's uniform `Matrix` is calculated by multiplying its
-three components. The color of the triangle is set using a `QColor` instance that will automatically be converted to a `vec4` for us.
+three components. The color of the triangle is set using a `QColor` instance that is automatically converted to a `vec4` for us.

To tell OpenGL where to find the stream of vertices, we call QGLShaderProgram::setAttributeArray() and pass the QVector::constData() pointer. Setting attribute arrays works in the same way as setting uniform values, but there's one difference:
-we additionally need to explicitly enable the attribute array using
-QGLShaderProgram::enableAttributeArray(). If we did not do this, OpenGL would assume
+we must explicitly enable the attribute array using
+QGLShaderProgram::enableAttributeArray(). If we do not do this, OpenGL will assume
that we've assigned a single value instead of an array.

Finally we call `glDrawArrays(GLenum mode, GLint first, GLsizei count)` to do the rendering. It is used to start rendering a sequence of geometry primitives using the current
-configuration. We pass `GL_TRIANGLES` as the first parameter to tell OpenGL that each of the three vertices form a triangle. The second parameter specifies the starting index within the attribute arrays and the third parameter is the number of indices to be rendered.
+configuration. We pass `GL_TRIANGLES` as the first parameter to tell OpenGL that every three vertices form a triangle. The second parameter specifies the starting index within the attribute arrays, and the third parameter is the number of indices to be rendered.

Note that if you later want to draw more than one object, you only need to repeat all of the steps (except for clearing the screen, of course) you took in this method for each new object.
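The steps just described, gathered into one hedged sketch of `paintGL()`; the member names follow the earlier declarations, and the uniform and attribute names must match those in the shader source:

.. code-block:: cpp

    void GLWidget::paintGL()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        QMatrix4x4 mMatrix;  // model matrix, left as identity
        QMatrix4x4 vMatrix;  // view matrix, left as identity

        shaderProgram.bind();
        shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
        shaderProgram.setUniformValue("color", QColor(Qt::white));

        shaderProgram.setAttributeArray("vertex", vertices.constData());
        shaderProgram.enableAttributeArray("vertex");

        glDrawArrays(GL_TRIANGLES, 0, vertices.size());

        shaderProgram.disableAttributeArray("vertex");
        shaderProgram.release();
    }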
@@ -194,8 +189,7 @@ steps (except for clearing the screen, of course) you took in this method for ea
   :start-after: //! [3]
   :end-before: //! [3]
-You should now see a white triangle on black background after compiling and running this program.
-
+You should see a white triangle on a black background after compiling and running this program.

Rendering in 3D
---------------

@@ -215,7 +209,7 @@ the scene shall change if we turn the mouse's scroll wheel.
For this functionality, we reimplement `QWidget::mousePressEvent(),
-QWidget::mouseMoveEvent()`, and `QWidget::wheelEvent()`. The new member variables `alpha, beta` and `distance` hold the parameters of the view point, and `lastMousePosition` helps us track mouse movement.
+QWidget::mouseMoveEvent()`, and `QWidget::wheelEvent()`. The new member variables `alpha, beta`, and `distance` hold the parameters of the view point, and `lastMousePosition` helps us track mouse movement.

.. code-block:: cpp

@@ -273,8 +267,8 @@ These three parameters need to be initialized in the constructor and, to account
In the `QWidget::mousePressEvent()`, we store the mouse pointer's initial position to be able to track the movement.
.. literalinclude:: src/examples/rendering-in-3d/glwidget.cpp
   :language: cpp

@@ -312,21 +306,20 @@ In order to finish this example, we only need to change our list of vertices to

 }

If you now compile and run this program, you will see a white cube that can be rotated
-using the mouse. Since each of its six sides is painted in the same plane color, depth is not very visible. We will work on this in the next example.
+using the mouse. As each of its six sides is painted in the same plain color, depth is hardly visible. We will work on this in the next example.

Coloring
--------

In this example, we want to color each side of the cube in a different color to enhance the illusion of three dimensionality. To achieve this, we will extend our shaders in a way that
-will allow us to specify a single color for each vertex and use the interpolation of varyings to generate the fragment's colors.
-This example will show you how to communicate data from the vertex shader over to the
+allows us to specify a single color for each vertex and use the interpolation of varyings to generate the fragment's colors.
+This example shows you how to communicate data from the vertex shader over to the
fragment shader.

.. image:: images/opengl-example-coloring.png

-
-.. Note:: The source code related to this section is located in `examples/coloring/` directory
+.. Note:: The source code related to this section is located in the `examples/coloring/` directory

To tell the shaders about the colors, we specify a color value for each vertex as an attribute array for the vertex shader. So on each run of the shader, it will read a new value for both the vertex attribute and the color attribute.

@@ -348,7 +341,7 @@ In the fragment shader's main function, we set the `gl_FragColor` variable to th

   :start-after: //! [0]
   :end-before: //! [0]

-Of course we still need to imploy a new structure to store the color values and send them to
+Of course we still need to use a new structure to store the color values and send them to
the shaders in the `QGLWidget::paintGL()` method. But this should be very straightforward, as we have already done all of this for the `vertices` attribute array in just the same manner.

@@ -433,31 +426,30 @@ shaders, but an array of any data you want. However, in this example we will use

.. image:: images/opengl-example-texture-mapping.png

-.. Note:: The source code related to this section is located in `examples/texture-mapping/` directory
+.. Note:: The source code related to this section is located in the `examples/texture-mapping/` directory

-In order to map a texture to a primitive, we have to specify so-called texture coordinates* that tell OpenGL which image coordinate is to be pinned to which vertex. Texture coordinates are instances of `vec2` that are normalized to a range between 0 and 1. The origin of the texture coordinate system is in the lower left of an image, having the first axis pointing to the right side and the second axis pointing upwards (i.e. the lower left corner of an image is at `(0, 0)` and the upper right corner is at `(1, 1)`. Coordinate values higher than 1 are also allowed, causing the texture to wrap around by default.
+In order to map a texture to a primitive, we have to specify so-called *texture coordinates* that tell OpenGL which image coordinate is to be pinned to which vertex. Texture coordinates are instances of `vec2` that are normalized to a range between 0 and 1. The origin of the texture coordinate system is in the lower left of an image, with the first axis pointing to the right side and the second axis pointing upwards (i.e. the lower left corner of an image is at `(0, 0)` and the upper right corner is at `(1, 1)`). Coordinate values higher than 1 are also allowed, causing the texture to wrap around by default.
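For instance, the full image could be mapped onto one quadratic face made of two triangles like this (an illustrative sketch, with `textureCoordinates` as a hypothetical `QVector<QVector2D>`):

.. code-block:: cpp

    // Pin the image's corners to the six vertices of one face:
    QVector<QVector2D> textureCoordinates;
    textureCoordinates << QVector2D(0, 0) << QVector2D(1, 0) << QVector2D(1, 1)  // first triangle
                       << QVector2D(1, 1) << QVector2D(0, 1) << QVector2D(0, 0); // second triangle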
The textures themselves are OpenGL objects stored in the graphics card's memory. They are
-created using `glGenTextures(GLsizei n, GLuint texture)` and deleted again with a call to `glDelereTextures(GLsizei n, const GLuint *texture)`. To identify textures, each
-texture is assigned a texture ID during its creation. As with shader programs, they need to be bound to `glBindTexture(GLenum target, GLuint texture)` before they can be configured
+created using `glGenTextures(GLsizei n, GLuint *textures)` and deleted again with a call to `glDeleteTextures(GLsizei n, const GLuint *textures)`. To identify textures, each
+texture is assigned a texture ID during its creation. As with shader programs, they must be bound with `glBindTexture(GLenum target, GLuint texture)` before they can be configured
and filled with data.

We can use Qt's `QGLWidget::bindTexture()` to create the texture object. Normally we would have to make sure that the image data is in a particular format, according to the configuration of the texture object, but luckily `QGLWidget::bindTexture()`
-also takes care of that.
+can take care of this.

OpenGL allows us to have several textures accessible to the shaders at the same time. For this purpose, OpenGL uses so-called *texture units*. So before we can use a texture, we need to bind it to one of the texture units identified by the enum `GL_TEXTUREi` (with `i` ranging from 0 to `GL_MAX_COMBINED_TEXTURE_UNITS - 1`). To do this, we call `glActiveTexture(GLenum texture)` and bind the texture using `glBindTexture(GLenum target, GLuint texture)`.

-Because we also need to call `glBindTexture(GLenum target, GLuint texture)` if we want
-to add new textures or modify them, and binding a texture overwrites the the current active
-texture unit, you should set the active texture unit to an invalid unit after setting it by calling `glActiveTexture(0)`. This way several texture units can be configured at the same time. Note that texture units need to be used in an ascending order beginning with `GL_TEXTURE0`.
+To add new textures or modify existing ones, we have to call `glBindTexture(GLenum target, GLuint texture)`, which overwrites the current active
+texture unit. So you should set the active texture unit to an invalid unit after setting it by calling `glActiveTexture(0)`. This way several texture units can be configured at the same time. Note that texture units must be used in an ascending order beginning with `GL_TEXTURE0`.

To access a texture in a shader to actually render it, we use the `texture2D(sampler2D sampler, vec2 coord)` function to query the color value at a certain texture coordinate. This function takes two parameters. The first parameter is of the type `sampler2D` and refers to a texture unit. The second parameter is the texture coordinate that we want to access. To read from the texture unit `i` denoted by the enum `GL_TEXTUREi`, we have to pass the `GLuint` `i` as the uniform value.
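Putting these calls together, preparing a texture for the shader might look like this sketch (assuming a member `texture` that holds the texture ID and a `sampler2D` uniform named `texture`, in line with the examples discussed here):

.. code-block:: cpp

    shaderProgram.setUniformValue("texture", 0); // the sampler2D reads from texture unit 0

    glActiveTexture(GL_TEXTURE0);                // select texture unit 0
    glBindTexture(GL_TEXTURE_2D, texture);       // attach our texture object to it
    glActiveTexture(0);                          // deselect the unit, as described above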
With all of this theory, we are now able to make our cube textured.

-We replace the `vec4` color attribute and the corresponding varying with a `vec2` variable for the texture coordinates and forward this value to the fragment shader.
+We replace the `vec4` color attribute and the corresponding varying with a `vec2` variable for the texture coordinates, and forward this value to the fragment shader.

.. literalinclude:: src/examples/texture-mapping/vertexShader.vsh
   :language: cpp

@@ -474,7 +466,7 @@ the right color value. The uniform `texture` of the type `sampler2D` chooses the

   :end-before: //! [0]

In the `GlWidget` class declaration, we replace the previously used `colors` member with a
-`QVector` made of `QVector2Ds` for the texture coordinates and add a member variable to
+`QVector` of `QVector2D` elements for the texture coordinates, and add a member variable to
hold the texture object ID.

.. code-block:: cpp

@@ -559,7 +551,7 @@ Qt only defines the functionality required by its own OpenGL-related classes. `g

Several utility libraries exist to ease the definition of these functions (for example, GLEW and GLEE). We will define `glActiveTexture(GLenum texture)` and `GL_TEXTUREi` manually.

-First we include the `glext.h` header file to set the missing enums and a few typedefs which will help us keep the code readable (since the version shipped with your compiler might be outdated, you may need to get the latest version from `the OpenGL homepage <http://www.opengl.org>`_. Next we declare the function pointer, which we will use to call `glActiveTexture(GLenum texture)` using the included typedefs. To avoid confusing the linker, we use a different name than `glActiveTexture` and define a pre-processor macro which replaces calls to `glActiveTexture(GLenum texture)` with our own function:
+First we include the `glext.h` header file to get the missing enums and a few typedefs that help us keep the code readable (as the version shipped with your compiler might be outdated, you may need to get the latest version from `the OpenGL homepage <http://www.opengl.org>`_). Next we declare the function pointer, which we will use to call `glActiveTexture(GLenum texture)`, using the included typedefs. To avoid confusing the linker, we use a different name than `glActiveTexture` and define a pre-processor macro that replaces calls to `glActiveTexture(GLenum texture)` with our own function:

.. literalinclude:: src/examples/texture-mapping/glwidget.cpp
   :language: cpp
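The pattern behind that code is roughly the following (a condensed sketch rather than the verbatim contents of `glwidget.cpp`; the pointer name is illustrative):

.. code-block:: cpp

    #include "glext.h"

    // Declare the function pointer under a different name than glActiveTexture
    // and redirect calls to it via a pre-processor macro.
    PFNGLACTIVETEXTUREPROC pGlActiveTexture = NULL;
    #define glActiveTexture pGlActiveTexture

    // Later, e.g. in GlWidget::initializeGL(), resolve the real function:
    pGlActiveTexture = (PFNGLACTIVETEXTUREPROC) context()->getProcAddress("glActiveTexture");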
@@ -591,17 +583,17 @@ The ability to write your own shader programs gives you the power to set up the

lighting effect that best suits your needs. This may range from very basic and time-saving approaches to high-quality ray tracing algorithms.

In this chapter, we will implement a technique called *Phong shading*, which is a popular baseline shading method for many rendering applications. For each pixel on the surface of an object, we will calculate the color intensity based on the position and color of the light source as well as the object's texture and its material properties.

-To show the results, we will display the cube with a light source circling above it. The light source will be marked by a pyramid which we will render using the per-vertex color shader of one of the previous examples.
+To show the results, we will display the cube with a light source circling above it. The light source will be marked by a pyramid, which we will render using the per-vertex color shader of one of the previous examples.
So in this example, you will also see how to render a scene with multiple objects and different shader programs.

.. image:: images/opengl-example-lighting.png

-.. Note:: The source code related to this section is located in `examples/lighting/` directory
+.. Note:: The source code related to this section is located in the `examples/lighting/` directory

Because we use two different objects and two different shader programs, we added prefixes to the names. The cube is rendered using the `lightingShaderProgram`, for which we need an

@@ -634,23 +626,22 @@ To track the position of the light source, we introduced a new member variable t

The Phong reflection model assumes that the light reflected off an object (i.e. what you actually see) consists of three components: diffuse reflection of rough surfaces, specular
-highlights of glossy surfaces and an ambient term that sums up the small amounts of light
+highlights of glossy surfaces, and an ambient term that sums up the small amounts of light
that get scattered about the entire scene.

For each light source in the scene, we define :math:`i_d` and :math:`i_s` as the intensities (RGB values) of the diffuse and the specular components. :math:`i_a` is defined as the ambient lighting component.

-For each kind of surface (whether glossy, flat etc), we define the following parameters: :math:`k_d` and :math:`k_s` set the ratio of reflection of the diffuse and specular component, :math:`k_a` sets the ratio of the reflection of the ambient term respectively and :math:`\alpha` is a shininess constant that controls the size of the specular highlights.
+For each kind of surface (whether glossy or flat), we define the following parameters: :math:`k_d` and :math:`k_s` set the ratios of reflection of the diffuse and specular components, :math:`k_a` sets the ratio of reflection of the ambient term, and :math:`\alpha` is a shininess constant that controls the size of the specular highlights.

The equation for computing the illumination of each surface point (fragment) is:

.. image:: images/opengl-formula-phong.png
   :align: center

-:math:`\hat{L}_m` is the normalized direction vector pointing from the fragment to the light source, :math:`\hat{N}` is the surface normal of this fragment, :math:`\hat{R}_m` is the direction of the light reflected at this point and :math:`\hat{V}` points from the fragment towards the viewer of the scene.
+:math:`\hat{L}_m` is the normalized direction vector pointing from the fragment to the light source, :math:`\hat{N}` is the surface normal of this fragment, :math:`\hat{R}_m` is the direction of the light reflected at this point, and :math:`\hat{V}` points from the fragment towards the viewer of the scene.
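Written out with the symbols defined above, the formula in the image is the standard form of the Phong illumination equation:

.. math::

    I_p = k_a i_a + \sum_{m \in \text{lights}} \left( k_d (\hat{L}_m \cdot \hat{N}) \, i_{m,d} + k_s (\hat{R}_m \cdot \hat{V})^{\alpha} \, i_{m,s} \right)

Here, :math:`I_p` is the resulting illumination of the fragment, and the sum runs over all light sources in the scene; in our example there is only one.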
To obtain the vectors mentioned above, we calculate them for each vertex in the vertex shader and tell OpenGL to pass them as interpolated values to the fragment shader. In the fragment shader, we finally set the illumination of each point and combine it with the color value of the texture.

- So in addition to passing vertex positions and the model-view-projection matrix to get the fragment's position, we also need to pass the surface normal of each vertex.

To calculate the transformed cube's :math:`\hat{L}_m` and :math:`\hat{V}`, we need to know the model-view part of the transformation. To calculate the transformed :math:`\hat{N}`, we need to apply a matrix that transforms the surface normals. This extra matrix is needed because we only want the normals to be rotated according to the model-view matrix, but not to be translated.

This is the vertex shader's source code:

@@ -744,7 +735,7 @@ After clearing the screen and calculating the view matrix (which is the same for

objects) in the `GlWidget::paintGL()` method, we first render the cube using the lighting shaders, and then we render the spotlight using the coloring shader.

Because we want to keep the cube's origin aligned with the world's origin, we leave the
-model matrix `mMatrix` set to an identity matrix. Then we calculate the model-view matrix,
+model matrix (`mMatrix`) set to an identity matrix. Then we calculate the model-view matrix,
which we also need to send to the lighting vertex shader, and extract the normal matrix with Qt's `QMatrix4x4::normalMatrix()` method. As we have already stated, this matrix will transform the surface normals of our cube from model coordinates into viewer coordinates. After that, we

@@ -757,7 +748,7 @@ Next we render the spotlight.

Because we want to move the spotlight to the same place as the light source, we need to modify its model matrix. First we restore the identity matrix (actually we did not modify the model matrix before, so it is still set to the identity matrix anyway). Then we move the
-spotlight to the light sources position. Now we still want to rotate it since it looks nicer if it faces our cube. We therefore apply two rotation matrices on top. Because the pyramid that represents our lightspot is still to big to fit into our scene nicely, we scale it down to a tenth of its original size.
+spotlight to the light source's position. Now we still want to rotate it, as it looks nicer if it faces our cube. We therefore apply two rotation matrices on top. Because the pyramid that represents our spotlight is still too big to fit into our scene nicely, we scale it down to a tenth of its original size.
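Expressed in `QMatrix4x4` calls, this sequence of transformations might look as follows (a sketch with illustrative variable names such as `lightPosition` and `lightAngle`, not the example's exact code):

.. code-block:: cpp

    mMatrix.setToIdentity();             // restore the identity matrix
    mMatrix.translate(lightPosition);    // move the pyramid to the light source
    mMatrix.rotate(lightAngle, 0, 1, 0); // first rotation: follow the circling light
    mMatrix.rotate(45, 1, 0, 0);         // second rotation: tilt it towards the cube
    mMatrix.scale(0.1);                  // shrink it to a tenth of its size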
Now we follow the usual rendering procedure again, this time using the `coloringShaderProgram` and the spotlight data. Thanks to depth testing, the new object will be integrated seamlessly into our existing scene.

@@ -914,7 +905,7 @@ In the `GlWidget::initializeGL()` method, we first need to create the buffer obj

Then, as with textures, we need to bind it to the rendering context to make it active using `QGLBuffer::bind()`.

-After this we call `QGLBuffer::allocate()` to allocate the amount of memory we will need to store our vertices, normals, and texture coordinates. This function expects the number of bytes to reserve as a parameter. Using this method, we could also directly specify a pointer to the data which we want to be copied, but since we want to arrange several datasets one after the other, we do the copying in the next few lines. Allocating memory also makes us responsible for freeing this space when it's not needed anymore by using `QGLBuffer::destroy()`. Qt will do this for us if the `QGLBuffer` object is destroyed.
+After this, we call `QGLBuffer::allocate()` to allocate the amount of memory we need to store our vertices, normals, and texture coordinates. This function expects the number of bytes to reserve as a parameter. Using this method, we could also directly specify a pointer to the data that we want to be copied, but as we want to arrange several datasets one after the other, we do the copying in the next few lines. Allocating memory also makes us responsible for freeing this space when it is no longer needed by calling `QGLBuffer::destroy()`. Qt will do this for us when the `QGLBuffer` object is destroyed.

Uploading data to the graphics card is done using `QGLBuffer::write()`. It takes an offset (in bytes) from the beginning of the buffer object, a pointer to the data in system memory to read from, and the number of bytes to copy. First we copy the cube's vertices. Then we append its surface normals and the texture coordinates. Note that because OpenGL uses `GLfloat` values for its computations, we need to consider the size of the `GLfloat` type when specifying memory offsets and sizes. Then we unbind the buffer object using `QGLBuffer::release()`.

@@ -958,7 +949,7 @@ We do the same for the spotlight object.

-Just in case you're interested, this is how the creation of buffer objects would work if we did not use Qt's `QGLBuffer` class for this purpose: We would call `void glGenBuffers(GLsizei n, GLuint buffers)` to request `n` numbers of buffer objects with their ids stored in `buffers`. Next we would bind the buffer using `void glBindBuffer(enum target, uint bufferName)`, where we would also specify the buffer's type. Then we would use `void glBufferData(enum target, sizeiptr size, const void *data, enum usage)` to upload the data. The enum called `usage` specifies the way the buffer is used by the main program running on the CPU (e.g. write-only, read-only, copy-only) as well as the frequency of the buffer's usage, in order to support optimizations. `void glDeleteBuffers(GLsizei n, const GLuint *buffers)` is the OpenGL API function to delete buffers and free their memory.
+Just in case you're interested, this is how the creation of buffer objects would work if we did not use Qt's `QGLBuffer` class for this purpose: We would call `void glGenBuffers(GLsizei n, GLuint *buffers)` to request `n` buffer objects with their IDs stored in `buffers`. Next we would bind the buffer using `void glBindBuffer(GLenum target, GLuint buffer)`, where we would also specify the buffer's type. Then we would use `void glBufferData(GLenum target, GLsizeiptr size, const void *data, GLenum usage)` to upload the data. The `usage` parameter specifies the way the buffer is used by the main program running on the CPU (for example, write-only, read-only, and copy-only) as well as the frequency of the buffer's usage, in order to support optimizations. `void glDeleteBuffers(GLsizei n, const GLuint *buffers)` is the OpenGL API function to delete buffers and free their memory.

To have OpenGL use our vertex buffer objects as the source of its vertex attributes, we need to set them up differently in the `GlWidget::paintGL()` method.
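Such a setup might look roughly like the following sketch; it assumes the buffer layout described above (vertices, then normals, then texture coordinates), with `numVertices` as a hypothetical vertex count and illustrative attribute names:

.. code-block:: cpp

    vertexBuffer.bind();

    // The buffer stores vertices, then normals, then texture coordinates,
    // so each attribute starts at a different byte offset.
    int offset = 0;
    lightingShaderProgram.setAttributeBuffer("vertex", GL_FLOAT, offset, 3, 0);
    lightingShaderProgram.enableAttributeArray("vertex");

    offset += numVertices * 3 * sizeof(GLfloat);
    lightingShaderProgram.setAttributeBuffer("normal", GL_FLOAT, offset, 3, 0);
    lightingShaderProgram.enableAttributeArray("normal");

    offset += numVertices * 3 * sizeof(GLfloat);
    lightingShaderProgram.setAttributeBuffer("textureCoordinate", GL_FLOAT, offset, 2, 0);
    lightingShaderProgram.enableAttributeArray("textureCoordinate");

    vertexBuffer.release();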
