diff --git a/README.md b/README.md
index 18117b45..c8a61ddb 100644
--- a/README.md
+++ b/README.md
@@ -16,6 +16,11 @@ The book itself is also hosted in [GitHub](https://github.com/lwjglgamedev/vulka
 
 The source code of the samples of this book is in [GitHub](https://github.com/lwjglgamedev/vulkanbook/tree/master/booksamples).
 
+> [!NOTE]
+> A new version is on the way; you can already check its source code in this [branch](https://github.com/lwjglgamedev/vulkanbook/tree/test/booksamples).
+> It uses dynamic rendering, contains major changes in several areas such as materials, descriptor sets and overall architecture, and adds new examples for ray tracing.
+> The code will still change and the documentation has not been started yet, but it may be worth having a look at it.
+
 ## EPUB version
 
 An EPUB verion is automatically gerenated in [GitHub](https://github.com/lwjglgamedev/vulkanbook/tree/master/bookcontents/vulkanbook.epub).
diff --git a/bookcontents/chapter-01/chapter-01.md b/bookcontents/chapter-01/chapter-01.md
index 68665d83..f9d1e420 100644
--- a/bookcontents/chapter-01/chapter-01.md
+++ b/bookcontents/chapter-01/chapter-01.md
@@ -10,8 +10,8 @@ You will see something similar in any other application independently of the spe
 
 The base requirements to run the samples of this book are:
 
-- [Java version 15](https://jdk.java.net/15/) or higher.
-- Maven 3.6.X or higher to build the samples.
+- [Java version 17](https://jdk.java.net/17/) or higher.
+- Maven 3.9.X or higher to build the samples.
 Building the samples with maven will create a jar file, under the target folder, and the required folders with the dependencies and the resources. You can execute them from the command line just by using `java -jar `.
 - Using an IDE is optional.
diff --git a/bookcontents/chapter-02/chapter-02.md b/bookcontents/chapter-02/chapter-02.md
index e1b60bee..558110de 100644
--- a/bookcontents/chapter-02/chapter-02.md
+++ b/bookcontents/chapter-02/chapter-02.md
@@ -26,7 +26,7 @@ So let's start by coding the constructor, which starts like this:
 public class Instance {
     ...
     public Instance(boolean validate) {
-        LOGGER.debug("Creating Vulkan instance");
+        Logger.debug("Creating Vulkan instance");
         try (MemoryStack stack = MemoryStack.stackPush()) {
         ...
     }
diff --git a/bookcontents/chapter-06/chapter-06.md b/bookcontents/chapter-06/chapter-06.md
index 4b9366bd..66def261 100644
--- a/bookcontents/chapter-06/chapter-06.md
+++ b/bookcontents/chapter-06/chapter-06.md
@@ -533,11 +533,11 @@ public class VulkanModel {
     ...
 }
 ```
 
-We firs define a copy region, by filling up a `VkBufferCopy` buffer, which will have the whole size of the staging buffer. Then we record the copy command, the `vkCmdCopyBuffer` function.
+We first define a copy region by filling up a `VkBufferCopy` buffer, which will cover the whole size of the staging buffer. Then we record the copy command by calling the `vkCmdCopyBuffer` function.
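+
+As a hedged illustration of that copy (a minimal sketch, not the exact code of the repository: `cmd`, `srcBuffer` and `dstBuffer` stand for the command buffer and the staging/destination buffer wrappers used in this chapter, and their accessors are assumptions here; the usual static imports of `org.lwjgl.vulkan.VK10` are assumed):
+
+```java
+try (MemoryStack stack = MemoryStack.stackPush()) {
+    // A single region that spans the whole staging buffer
+    VkBufferCopy.Buffer copyRegion = VkBufferCopy.calloc(1, stack)
+            .srcOffset(0)
+            .dstOffset(0)
+            .size(srcBuffer.getRequestedSize());
+    vkCmdCopyBuffer(cmd.getVkCommandBuffer(), srcBuffer.getBuffer(), dstBuffer.getBuffer(), copyRegion);
+}
+```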
 
 ## Graphics pipeline overview
 
-A graphics pipeline is a model which describes the sets required to render a scene into a screen. In Vulkan this is modeled using a data structure. This structure defines several parameters to control the certain steps (fixed steps) allowing setting up programs (called shaders) to control the execution of other steps (programmable steps). The following picture depicts Vulkan graphics pipeline.
+A graphics pipeline is a model which describes the steps required to render a scene into a screen. In Vulkan this is modeled using a data structure. This structure defines several parameters to control certain steps (fixed steps) while allowing us to set up programs (called shaders) to control the execution of other steps (programmable steps). The following picture depicts the Vulkan graphics pipeline.
 
 ![Graphics pipeline](rc06-yuml-01.svg)
@@ -550,7 +550,7 @@ Description of the stages (NOTE: graphics pipeline in Vulkan can also work in me
 - Fragment shader: Processes the fragments from the rasterization stage determining the values that will be written into the frame buffer output attachments. This is also a programmable stage which usually outputs the color for each pixel.
 - Blending: Controls how different fragments can be mixed over the same pixel handling aspects such as transparencies and color mixing.
 
-One important topic to understand when working with Vulkan pipelines is that they are almost immutable. Unlike OpenGL, we can't modify at run time the properties of a graphics pipeline. Almost any change that we want to make implies the creation of a new pipeline. In OpenGL it is common to modify ay runtime certain parameters that control how transparencies are handled (blending) or if the depth-testing is enabled. We can modify those parameters at run time with no restrictions. (The reality is that under the hood, our driver is switching between pipelines definitions that meet those settings). In Vulkan, however, we will need to define multiple pipelines if we ant to change those settings while rendering ans switch between them manually.
+One important topic to understand when working with Vulkan pipelines is that they are almost immutable. Unlike OpenGL, we can't modify at run time the properties of a graphics pipeline. Almost any change that we want to make implies the creation of a new pipeline. In OpenGL it is common to modify at run time certain parameters that control how transparencies are handled (blending) or if the depth-testing is enabled. We can modify those parameters at run time with no restrictions. (The reality is that, under the hood, our driver is switching between pipeline definitions that meet those settings). In Vulkan, however, we will need to define multiple pipelines if we want to change those settings while rendering and switch between them manually.
 
 ## Shaders
@@ -684,7 +684,7 @@ public class ShaderCompiler {
 }
 ```
 
-The method receives, through the `glsShaderFile` parameter, the path to the GLSL file and the type of shader. In this case, the `shaderType` parameter should be one of the defined by the `org.lwjgl.util.shaderc.Shaderc` class. This method checks if the GLSL file has changed (by comparing the date of the SPIR-V file vs the GLSL file) and compiles it by calling the `compileShader` method and writes the result to a file constructed with the same path bad adding the `.spv` extension.
+The method receives, through the `glsShaderFile` parameter, the path to the GLSL file and the type of shader. In this case, the `shaderType` parameter should be one of the constants defined by the `org.lwjgl.util.shaderc.Shaderc` class. This method checks if the GLSL file has changed (by comparing the date of the SPIR-V file vs the GLSL file), compiles it by calling the `compileShader` method and writes the result to a file constructed with the same path but adding the `.spv` extension.
 
 The `compileShader` method just invokes the `shaderc_result_get_compilation_status` from the `Shaderc` compiler binding provided by LWJGL.
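+
+As a rough, hedged sketch of the compilation step described in this section (using the LWJGL `Shaderc` bindings; `glslSource` and the shader kind are placeholders, not the actual fields of `ShaderCompiler`):
+
+```java
+long compiler = Shaderc.shaderc_compiler_initialize();
+long result = Shaderc.shaderc_compile_into_spv(compiler, glslSource, Shaderc.shaderc_glsl_vertex_shader,
+        "shader.vert", "main", MemoryUtil.NULL);
+if (Shaderc.shaderc_result_get_compilation_status(result) != Shaderc.shaderc_compilation_status_success) {
+    throw new RuntimeException("Shader compilation error: " + Shaderc.shaderc_result_get_error_message(result));
+}
+ByteBuffer spirv = Shaderc.shaderc_result_get_bytes(result);
+// ... write the SPIR-V bytes to the .spv file, then release the result and the compiler
+Shaderc.shaderc_result_release(result);
+Shaderc.shaderc_compiler_release(compiler);
+```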
diff --git a/bookcontents/chapter-08/chapter-08.md b/bookcontents/chapter-08/chapter-08.md
index 51cd0a0a..107ee293 100644
--- a/bookcontents/chapter-08/chapter-08.md
+++ b/bookcontents/chapter-08/chapter-08.md
@@ -105,7 +105,7 @@ public class ModelLoader {
 }
 ```
 
-We first check if the path to the 3D model and the texture directory exist. After that, we import the 3D model by invoking the `aiImportFile` Assimp function which will return an `AIScene` structure. Then, we use the `AIScene` structure to load the 3D models materials. We get the total number of materials allocating as many structures of `AIMaterial` as needed. The material will hold information related to the textures and colors for each mesh. For each of the materials we extract the values that we will need by calling the `processMaterial` method. The next step is to load the meshes data by calling the `processMesh` method. As in the case of materials, we get the total number of meshes that the `AIScene` contains and allocate as many `AIMesh` structure as needed. Once we have finished processing the model we just release the `AIScene` and return the array of the meshes present in the model. Let's analyze first the `processMaterial` method:
+We first check that the path to the 3D model and the texture directory both exist. After that, we import the 3D model by invoking the `aiImportFile` Assimp function which will return an `AIScene` structure. Then, we use the `AIScene` structure to load the 3D model's materials. We get the total number of materials, allocating as many `AIMaterial` structures as needed. The material will hold information related to the textures and colors for each mesh. For each of the materials we extract the values that we will need by calling the `processMaterial` method. The next step is to load the meshes data by calling the `processMesh` method. As in the case of materials, we get the total number of meshes that the `AIScene` contains and allocate as many `AIMesh` structures as needed. Once we have finished processing the model we just release the `AIScene` and return the array of the meshes present in the model. Let's analyze first the `processMaterial` method:
 
 ```java
 public class ModelLoader {
@@ -198,7 +198,7 @@ public class ModelData {
     ...
 ```
 
-Going back to the `ModelLoader`class, the remaining methods are quite simple, we just extract the position and texture coordinates and the indices:
+Going back to the `ModelLoader` class, the remaining methods are quite simple: we just extract the positions, the texture coordinates and the indices:
 
 ```java
 public class ModelLoader {
@@ -398,7 +398,7 @@ public class Texture {
 }
 ```
 
-In order for Vulkan to correctly use the image, we need to transition it to the correct layout an copy the staging buffer contents to the image.This is done in the `recordTextureTransition` method.
+In order for Vulkan to correctly use the image, we need to transition it to the correct layout and copy the staging buffer contents to the image. This is done in the `recordTextureTransition` method.
 
 ```java
 public class Texture {
@@ -503,7 +503,7 @@ In the second case, the second `if` condition will be executed. We use the `VK_A
 
 Once the conditions has been set we record the image pipeline barrier by invoking the `vkCmdPipelineBarrier` function.
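+
+As an illustrative sketch of one of those transitions (undefined to transfer destination; `cmdHandle` and `image` are placeholders and error checks are omitted, so this is not the literal body of `recordTextureTransition`):
+
+```java
+try (MemoryStack stack = MemoryStack.stackPush()) {
+    VkImageMemoryBarrier.Buffer barrier = VkImageMemoryBarrier.calloc(1, stack)
+            .sType(VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER)
+            .oldLayout(VK_IMAGE_LAYOUT_UNDEFINED)
+            .newLayout(VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL)
+            .srcQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED)
+            .dstQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED)
+            .image(image)
+            .srcAccessMask(0)
+            .dstAccessMask(VK_ACCESS_TRANSFER_WRITE_BIT);
+    barrier.subresourceRange()
+            .aspectMask(VK_IMAGE_ASPECT_COLOR_BIT)
+            .baseMipLevel(0)
+            .levelCount(1)
+            .baseArrayLayer(0)
+            .layerCount(1);
+    // After this barrier the whole image is writable by transfer operations
+    vkCmdPipelineBarrier(cmdHandle, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
+            0, null, null, barrier);
+}
+```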
 
-The only missing method in t he `Texture` class is the `recordCopyBuffer`:
+The only missing method in the `Texture` class is the `recordCopyBuffer`:
 
 ```java
 public class Texture {
@@ -590,7 +590,7 @@ public class EngineProperties {
 }
 ```
 
-Back to the `TextureCache` class, the rest of the methods are the classical `cleanup` method to free the images and the `getTexture` method to be able to retrieve one already created `Texture` using is file path and through their position.
+Back to the `TextureCache` class, the rest of the methods are the classical `cleanup` method, to free the images, and the `getTexture` method, to retrieve an already created `Texture` using its file path or its position.
 
 ```java
 public class TextureCache {
@@ -833,7 +833,7 @@ public class DescriptorPool {
 }
 ```
 
-The format of each descriptor set must me defined by a descriptor set layout. The layout will be something very dependent on the specific data structures that we will use in our shaders. However, we will create an abstract class to avoid repeating the cleanup method and to store its handle:
+The format of each descriptor set must be defined by a descriptor set layout. The layout will be something very dependent on the specific data structures that we will use in our shaders. However, we will create an abstract class to avoid repeating the cleanup method and to store its handle:
 
 ```java
 package org.vulkanb.eng.graph.vk;
@@ -993,7 +993,7 @@ public class TextureSampler {
 In order to create a sampler, we need to invoke the `vkCreateSampler` function which requires a `VkSamplerCreateInfo` structure, defined by the following fields:
 
 - `sType`: The type of the structure: `VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO`.
-- `magFilter` and `minFilter`: control how magnification and magnification filter work while performing a texture lookup. In this case, we are using a `VK_FILTER_LINEAR` filter, which is the value for a liner filter (for a 2D texture, it combines for values of four pixels weighted). You can use `VK_FILTER_NEAREST` to pickup just the closest value in the lookup or `VK_FILTER_CUBIC_EXT` to apply cubic filtering (it uses 16 values for 2D textures).
+- `magFilter` and `minFilter`: control how the magnification and minification filters work while performing a texture lookup. In this case, we are using `VK_FILTER_LINEAR`, which selects a linear filter (for a 2D texture, it combines the values of four pixels, weighted). You can use `VK_FILTER_NEAREST` to pick up just the closest value in the lookup or `VK_FILTER_CUBIC_EXT` to apply cubic filtering (it uses 16 values for 2D textures). A creation sketch using these fields is shown after this list.
 - `addressModeU`, `addressModeV` and `addressModeW`: This will control what will be returned for a texture lookup when the coordinates lay out of the texture size. The `U`, `V` and `W` refer to the `x`, `y` and `z` axis (for 3D images). In this case, we specify the `VK_SAMPLER_ADDRESS_MODE_REPEAT` which means that the texture is repeated endlessly over all the axis. There are some other values such as `VK_SAMPLER_ADDRESS_MODE_MIRRORED_REPEAT` or `VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE` which are similar as the ones used in OpenGL.
 - `borderColor`: This sets the color for the border that will be used for texture lookups beyond bounds when `VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER` is used in the `addressModeX` attributes.
 - `unnormalizedCoordinates`: Texture coordinates cover the [0, 1] range. When this parameter is set to `true` the coordinates will cover the ranges [0, width], [0, height].
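+
+Putting the fields above together, sampler creation could look roughly like this (a hedged sketch: error checking is omitted and `device.getVkDevice()` is a hypothetical accessor on the logical device wrapper, not necessarily the repository's exact API):
+
+```java
+try (MemoryStack stack = MemoryStack.stackPush()) {
+    VkSamplerCreateInfo samplerInfo = VkSamplerCreateInfo.calloc(stack)
+            .sType(VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO)
+            .magFilter(VK_FILTER_LINEAR)
+            .minFilter(VK_FILTER_LINEAR)
+            .addressModeU(VK_SAMPLER_ADDRESS_MODE_REPEAT)
+            .addressModeV(VK_SAMPLER_ADDRESS_MODE_REPEAT)
+            .addressModeW(VK_SAMPLER_ADDRESS_MODE_REPEAT)
+            .borderColor(VK_BORDER_COLOR_INT_OPAQUE_BLACK)
+            .unnormalizedCoordinates(false);
+    LongBuffer lp = stack.mallocLong(1);
+    vkCreateSampler(device.getVkDevice(), samplerInfo, null, lp);
+    long vkSampler = lp.get(0);
+}
+```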
@@ -1175,7 +1175,7 @@ public abstract class DescriptorSet {
     ...
 }
 ```
 
-The code is similar as the descriptor set used for textures, with the following exception, ee use a `VkDescriptorBufferInfo` to link the descriptor with the buffer that will hold the descriptor set values. Now we can create another class, specifically for uniforms, which extends the `SimpleDescriptorSet` class, named `UniformDescriptorSet`:
+The code is similar to the descriptor set used for textures, with the following exception: we use a `VkDescriptorBufferInfo` to link the descriptor with the buffer that will hold the descriptor set values. Now we can create another class, specifically for uniforms, which extends the `SimpleDescriptorSet` class, named `UniformDescriptorSet`:
 
 ```java
 public abstract class DescriptorSet {
     ...
diff --git a/bookcontents/chapter-12/chapter-12.md b/bookcontents/chapter-12/chapter-12.md
index d2e5f1a4..0f5efe6b 100644
--- a/bookcontents/chapter-12/chapter-12.md
+++ b/bookcontents/chapter-12/chapter-12.md
@@ -1,4 +1,4 @@
-# Chapter 12 - Vulkan Memory Allocator and specialization constants
+# Chapter 12 - Vulkan Memory Allocator and storage buffers
 
 This will be a short chapter where we will introduce the VMA library which will help us with Vulkan memory allocation. Additionally, we will also introduce storage buffers.
diff --git a/bookcontents/chapter-13/chapter-13.md b/bookcontents/chapter-13/chapter-13.md
index 77af94f9..c34c83c5 100644
--- a/bookcontents/chapter-13/chapter-13.md
+++ b/bookcontents/chapter-13/chapter-13.md
@@ -1277,9 +1277,9 @@ public class LightSpecConstants {
 }
 ```
 
-First, we create a buffer that will hold the specialization constants data, which will be the number of cascade shadows, if we will use PCF, the value of shadow bias and teh debug flag. We need to create one `VkSpecializationMapEntry` for each specialization constant. The `VkSpecializationMapEntry` defines the numerical identifier used by the constant, the size of the data and the offset in the buffer that holds the data for all the constants. With all that information, we create the `VkSpecializationInfo` structure.
+First, we create a buffer that will hold the specialization constants data, which will be the number of shadow cascades, whether we will use PCF, the shadow bias value and the debug flag. We need to create one `VkSpecializationMapEntry` for each specialization constant. The `VkSpecializationMapEntry` defines the numerical identifier used by the constant, the size of the data and the offset in the buffer that holds the data for all the constants. With all that information, we create the `VkSpecializationInfo` structure.
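+
+A simplified sketch of that setup (only two constants with invented variable names; the real class packs the four values listed above, and LWJGL derives the count/size fields from the buffers passed in):
+
+```java
+ByteBuffer data = MemoryUtil.memAlloc(Integer.BYTES * 2);
+data.putInt(numCascades);    // matches layout (constant_id = 0) in the shader
+data.putInt(usePcf ? 1 : 0); // matches layout (constant_id = 1) in the shader
+data.flip();
+
+VkSpecializationMapEntry.Buffer entries = VkSpecializationMapEntry.calloc(2);
+entries.get(0).constantID(0).size(Integer.BYTES).offset(0);
+entries.get(1).constantID(1).size(Integer.BYTES).offset(Integer.BYTES);
+
+VkSpecializationInfo specInfo = VkSpecializationInfo.calloc()
+        .pData(data)
+        .pMapEntries(entries);
+```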
 
-Now we can examine the changes in the `LightingRenderActivity` class. First, we will create an atribute to hold an instance for the `LightSpecConstants` class which will be created in the constructor. Also, we need a uniform that will hold the inverse projection and view matrices. Previously, we had just one buffer, because it only contained the inverse projection matrix. Since this did not change between frames we just needed one buffer. However, now, it will store also the inverse view matrix. That matrix can change between frame, so to avoid modifying the buffer while rendering, we will have as many buffers as swap chain images. We will need also new buffers, and descriptor sets for the cascade shadow splits data. We will not update that uniform in the constructor, but while recording the commands, therefore the constructor has been changed (no `Scene` instance as a parameter) and the `updateInvProjMatrix` method has been removed. The previous attributes `invProjBuffer` and `invProjMatrixDescriptorSet` have been removed. We need also new uniforms for the data of the cascade splits projection view uniforms and cascade instances). In the `cleanup` method, we just need to free those resources.
+Now we can examine the changes in the `LightingRenderActivity` class. First, we will create an attribute to hold an instance of the `LightSpecConstants` class, which will be created in the constructor. Also, we need a uniform that will hold the inverse projection and view matrices. Previously, we had just one buffer, because it only contained the inverse projection matrix. Since this did not change between frames we just needed one buffer. However, now, it will also store the inverse view matrix. That matrix can change between frames, so to avoid modifying the buffer while rendering, we will have as many buffers as swap chain images. We will also need new buffers and descriptor sets for the cascade shadow splits data. We will not update that uniform in the constructor, but while recording the commands; therefore the constructor has been changed (no `Scene` instance as a parameter) and the `updateInvProjMatrix` method has been removed. The previous attributes `invProjBuffer` and `invProjMatrixDescriptorSet` have been removed. We also need new uniforms for the cascade splits data (projection view uniforms and cascade instances). In the `cleanup` method, we just need to free those resources.
 
 ```java
 public class LightingRenderActivity {
diff --git a/bookcontents/chapter-14/chapter-14.md b/bookcontents/chapter-14/chapter-14.md
index 3199d259..bb181a75 100644
--- a/bookcontents/chapter-14/chapter-14.md
+++ b/bookcontents/chapter-14/chapter-14.md
@@ -10,7 +10,7 @@ You can find the complete source code for this chapter [here](../../booksamples/
 
 In skeletal animation the way a model is transformed to play an animation is defined by its underlying skeleton. A skeleton is nothing more than a hierarchy of special points called joints. In addition to that, the final position of each joint is affected by the position of their parents. For instance, think of a wrist: the position of a wrist is modified if a character moves the elbow and also if it moves the shoulder.
 
-Joints do not need to represent a physical bone or articulation: they are artifacts that allow the creatives to model an animation (we may use sometimes the terms bone and joint to refer to the same ting). The models still have vertices that define the different positions, but, in skeletal animation, vertices are drawn based on the position of the joints they are related to and modulated by a set of weights. If we draw a model using just the vertices, without taking into consideration the joints, we would get a 3D model in what is called the bind pose. Each animation is divided into key frames which basically describes the transformations that should be applied to each joint. By changing those transformations, changing those key frames, along time, we are able to animate the model. Those transformations are based on 4x4 matrices which model the displacement and rotation of each joint according to the hierarchy (basically each joint must accumulate the transformations defined by its parents).
+Joints do not need to represent a physical bone or articulation: they are artifacts that allow the creatives to model an animation (we may sometimes use the terms bone and joint to refer to the same thing). The models still have vertices that define the different positions, but, in skeletal animation, vertices are drawn based on the position of the joints they are related to and modulated by a set of weights. If we draw a model using just the vertices, without taking into consideration the joints, we would get a 3D model in what is called the bind pose. Each animation is divided into key frames which basically describe the transformations that should be applied to each joint. By changing those transformations across the key frames over time, we are able to animate the model. Those transformations are based on 4x4 matrices which model the displacement and rotation of each joint according to the hierarchy (basically each joint must accumulate the transformations defined by its parents).
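+
+To make the accumulation idea concrete, here is a tiny, hypothetical sketch using JOML (the `Joint` type and its accessors are invented for illustration; they are not classes of the book's code base):
+
+```java
+// A joint's final matrix is its parent's accumulated matrix multiplied by its
+// own local transform for the current key frame; children inherit the result.
+void accumulate(Joint joint, Matrix4f parentTransform, Map<String, Matrix4f> jointMatrices) {
+    Matrix4f global = new Matrix4f(parentTransform).mul(joint.localTransform());
+    jointMatrices.put(joint.name(), global);
+    for (Joint child : joint.children()) {
+        accumulate(child, global, jointMatrices);
+    }
+}
+```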
 
 If you are reading this, you might probably already know the fundamentals of skeletal animations. The purpose of this chapter is not to explain this in detail but to show an example on how this can be implemented using Vulkan with compute shaders. If you need all the details of skeletal animations you can check this [excellent tutorial](http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html).
@@ -55,7 +55,7 @@ public class ModelData {
 ```
 
 The new `animMeshDataList` attribute is the equivalent of the `meshDataList` one. That list will contain an entry for each mesh storing the relevant data for animated models. In this case, that data is grouped under the `AnimMeshData` and contains two arrays that will contain the weights that will modulate the transformations applied to the joints related to each vertex (related by their identifier in the hierarchy). That data is common to all the animations supported by the model, since it is related to the model structure itself, its skeleton. The `animationsList` attribute holds the list of animations defined for a model. An animation is described by the `Animation` record and consists on a name the duration of the animation (in milliseconds) and the data of the key frames that compose the animation. Key frame data is defined by the `AnimatedFrame` record which contains the transformation matrices for each of the model joints for that specific frame. Therefore, in order to load animated models we just need to get the additional structural data for mesh (weights and the joints they apply to) and the transformation matrices for each of those joints per animation key frame.
 
-After that we need to modify the `Entity` class to add new attributes to control its animation state to pause / resume the animation, to select the proper animation and to select a specific key frame):
+After that we need to modify the `Entity` class to add new attributes to control its animation state (to pause / resume the animation, to select the proper animation and to select a specific key frame):
 
 ```java
 public class Entity {
@@ -161,7 +161,7 @@ public class ModelLoader {
     ...
 }
 ```
 
-As you can see we are using a new flag: `aiProcess_LimitBoneWeights` that limits the number of bones simultaneously affecting a single vertex to a maximum value (the default maximum values is `4`). The `loadModel` method version that automatically sets the flags receives an extra parameter which indicates if this is an animated model or not. We use that parameter to avoid setting the `aiProcess_PreTransformVertices` for animated models. That flag performs some transformation over the data loaded so the model is placed in the origin and the coordinates are corrected. We cannot use this flag if the model uses animations because it will remove that information. In the `loadModel` method version that actually performs the loading tasks, we have added, at the end, code to load animation data. We first load the skeleton structure (the bones hierarchy) and the weights associated to each vertex. With tha information, we construct `ModelData.AnimMeshData` instances (one per Mesh). After that, we retrieve the different animations and construct the transformation data per key frame.
+As you can see we are using a new flag: `aiProcess_LimitBoneWeights`, which limits the number of bones simultaneously affecting a single vertex to a maximum value (the default maximum value is `4`). The `loadModel` method version that automatically sets the flags receives an extra parameter which indicates if this is an animated model or not. We use that parameter to avoid setting the `aiProcess_PreTransformVertices` flag for animated models. That flag performs some transformation over the loaded data so the model is placed at the origin and the coordinates are corrected. We cannot use this flag if the model uses animations because it would remove that information. In the `loadModel` method version that actually performs the loading tasks, we have added, at the end, code to load animation data. We first load the skeleton structure (the bones hierarchy) and the weights associated to each vertex. With that information, we construct `ModelData.AnimMeshData` instances (one per Mesh). After that, we retrieve the different animations and construct the transformation data per key frame.
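+
+A hedged sketch of how those flags can be combined when importing (illustrative only; the exact flag set and variable names in the repository may differ):
+
+```java
+int flags = aiProcess_Triangulate | aiProcess_GenSmoothNormals | aiProcess_JoinIdenticalVertices
+        | aiProcess_CalcTangentSpace | aiProcess_LimitBoneWeights;
+if (!animation) {
+    // Only safe for static models: it bakes the node transformations into the vertices,
+    // which would discard the data needed to animate the model.
+    flags |= aiProcess_PreTransformVertices;
+}
+AIScene aiScene = aiImportFile(modelPath, flags);
+if (aiScene == null) {
+    throw new RuntimeException("Error loading model [" + modelPath + "]");
+}
+```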
We use that parameter to avoid setting the `aiProcess_PreTransformVertices` for animated models. That flag performs some transformation over the data loaded so the model is placed in the origin and the coordinates are corrected. We cannot use this flag if the model uses animations because it will remove that information. In the `loadModel` method version that actually performs the loading tasks, we have added, at the end, code to load animation data. We first load the skeleton structure (the bones hierarchy) and the weights associated to each vertex. With tha information, we construct `ModelData.AnimMeshData` instances (one per Mesh). After that, we retrieve the different animations and construct the transformation data per key frame. +As you can see we are using a new flag: `aiProcess_LimitBoneWeights` that limits the number of bones simultaneously affecting a single vertex to a maximum value (the default maximum values is `4`). The `loadModel` method version that automatically sets the flags receives an extra parameter which indicates if this is an animated model or not. We use that parameter to avoid setting the `aiProcess_PreTransformVertices` for animated models. That flag performs some transformation over the data loaded so the model is placed in the origin and the coordinates are corrected. We cannot use this flag if the model uses animations because it will remove that information. In the `loadModel` method version that actually performs the loading tasks, we have added, at the end, code to load animation data. We first load the skeleton structure (the bones hierarchy) and the weights associated to each vertex. With that information, we construct `ModelData.AnimMeshData` instances (one per Mesh). After that, we retrieve the different animations and construct the transformation data per key frame. The `processBones` method is defined like this: ```java @@ -639,7 +639,7 @@ Prior to jumping to the code, it is necessary to briefly describe compute shader As mentioned above, a key topic of compute shaders is how many times they should be invoked and how the work load is distributed. Compute shaders define the concept of work groups, which are a collection of of shader invocations that can be executed, potentially, in parallel. Work groups are three dimensional, so they will be defined by the triplet `(Wx, Wy, Wz)`, where each of those components must be equal to or greater than `1`. A compute shader will execute in total `Wx*Wy*Wz` work groups. Work groups have also a size, named local size. Therefore, we can define local size as another triplet `(Lx, Ly, Lz)`. The total number of times a compute shader will be invoked will be the product `Wx*Lx*Wy*Ly*Wz*Lz`. The reason behind specifying these using three dimension parameters is because some data is handled in a more convenient way using 2D or 3D dimensions. You can think for example in a image transformation computation, we would be probably using the data of an image pixel and their neighbor pixels. We could organize the work using 2D computation parameters. In addition to that, work done inside a work group, can share same variables and resources, which may be required when processing 2D or 3D data. Inside the computer shader we will have access to pre-built variables that will identify the invocation we are in so we can properly access the data slice that we want to work with according to our needs. 
 
-In order to support the execution of commands that will go through the compute pipeline, we need first to define a new class named `ComputePipeline` to support the creation of that type of pipelines. Compute pipelines are much simpler than graphics pipelines. Graphics pipelines have a set of fixed and programable stages while the compute pipeline has a single programmable compute shader stage. So let's go with it:
+In order to support the execution of commands that will go through the compute pipeline, we first need to define a new class named `ComputePipeline` to support the creation of that type of pipeline. Compute pipelines are much simpler than graphics pipelines. Graphics pipelines have a set of fixed and programmable stages while the compute pipeline has a single programmable compute shader stage. So let's go with it:
 
 ```java
 public class ComputePipeline {
@@ -996,7 +996,7 @@ public class AnimationComputeActivity {
     ...
 }
 ```
 
-In this methods, we first discard the models that do not contain animations. For each of the models that contain animations, we create a descriptor set that will hold an array of matrices with the transformation matrices associated to the joints of the model. Those matrices change for each animation frame, so for a model, we will have as many arrays (ans therefore as many descriptors) as animation frames the model has. We will pass that data to the compute shader as uniforms so we use a `UniformDescriptorSet` per frame that will contain that array of matrices. For each mesh of the model we will need at least, two storage buffers, the first one will hold the data for the bind position (position, texture coordinates, normal, tangent and bitangent). That data is composed by 14 floats (4 bytes each) and will be transformed according to the weights and joint matrices to generate the animation. The second storage buffer will contain the weights associated to each vertex (a vertex will have 4 weights that will modulate the bind position using the joint transformation matrices. Each opf those weights will be associated to a joint index). Therefore we need to create two storage descriptor sets per mesh. We combine that information in the `MeshDescriptorSets` record. That record also defines a paramater named `groupSize`, let's explain now what is this parameter for. As mentioned previously, compute shaders invocations are organized in work groups (`Wx`, `Wy` and `Wz`) which have a local size (`Lx`, `Ly` and `Lz`). In our specific case, we will be organizing the work using just one dimension, so the `Wy`, `Wz`, `Ly` and `Lz` values will be set to `1`. The local size is defined in the shader code, and, as we will see later on, we will use a value of `32` for `Lx`. Therefore, the number of times the compute shader will be executed will be equal to `Wx*Lx`. Because of that, we need to divide the total number of vertices, for a mesh, per the local size value (`32`) in order to properly set up the `Wx` value, which is what defines the `groupSize` parameter. Finally, we store the joint matrices descriptor sets and the storage descriptor sets in a map using the model identifier as the key. This will be used later on when rendering. To summarize, this method creates the required descriptor sets that are common to all the entities which use this animated model.
+In this method, we first discard the models that do not contain animations. For each of the models that contain animations, we create a descriptor set that will hold an array of matrices with the transformation matrices associated to the joints of the model. Those matrices change for each animation frame, so for a model, we will have as many arrays (and therefore as many descriptors) as animation frames the model has. We will pass that data to the compute shader as uniforms, so we use a `UniformDescriptorSet` per frame that will contain that array of matrices. For each mesh of the model we will need, at least, two storage buffers: the first one will hold the data for the bind position (position, texture coordinates, normal, tangent and bitangent). That data is composed of 14 floats (4 bytes each) and will be transformed according to the weights and joint matrices to generate the animation. The second storage buffer will contain the weights associated to each vertex (a vertex will have 4 weights that will modulate the bind position using the joint transformation matrices; each of those weights will be associated to a joint index). Therefore we need to create two storage descriptor sets per mesh. We combine that information in the `MeshDescriptorSets` record. That record also defines a parameter named `groupSize`; let's explain now what this parameter is for. As mentioned previously, compute shader invocations are organized in work groups (`Wx`, `Wy` and `Wz`) which have a local size (`Lx`, `Ly` and `Lz`). In our specific case, we will be organizing the work using just one dimension, so the `Wy`, `Wz`, `Ly` and `Lz` values will be set to `1`. The local size is defined in the shader code, and, as we will see later on, we will use a value of `32` for `Lx`. Therefore, the number of times the compute shader will be executed will be equal to `Wx*Lx`. Because of that, we need to divide the total number of vertices of a mesh by the local size value (`32`) in order to properly set up the `Wx` value, which is what defines the `groupSize` parameter. Finally, we store the joint matrices descriptor sets and the storage descriptor sets in a map using the model identifier as the key. This will be used later on when rendering. To summarize, this method creates the required descriptor sets that are common to all the entities which use this animated model.
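+
+As a sketch of what one of those storage-buffer bindings boils down to (raw Vulkan calls rather than the book's descriptor set abstractions; a single binding visible to the compute stage, error checks omitted and `device.getVkDevice()` assumed as before):
+
+```java
+try (MemoryStack stack = MemoryStack.stackPush()) {
+    VkDescriptorSetLayoutBinding.Buffer binding = VkDescriptorSetLayoutBinding.calloc(1, stack)
+            .binding(0)
+            .descriptorType(VK_DESCRIPTOR_TYPE_STORAGE_BUFFER)
+            .descriptorCount(1)
+            .stageFlags(VK_SHADER_STAGE_COMPUTE_BIT);
+    VkDescriptorSetLayoutCreateInfo layoutInfo = VkDescriptorSetLayoutCreateInfo.calloc(stack)
+            .sType(VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO)
+            .pBindings(binding);
+    LongBuffer lp = stack.mallocLong(1);
+    vkCreateDescriptorSetLayout(device.getVkDevice(), layoutInfo, null, lp);
+    long storageLayout = lp.get(0);
+}
+```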
 
 The records mentioned before are defined as inner classes:
 
 ```java
diff --git a/bookcontents/chapter-17/chapter-17.md b/bookcontents/chapter-17/chapter-17.md
index 48694421..9e70e598 100644
--- a/bookcontents/chapter-17/chapter-17.md
+++ b/bookcontents/chapter-17/chapter-17.md
@@ -274,7 +274,7 @@ public class SoundManager {
 }
 ```
 
-This class holds references to the ```SoundBuffer``` and ```SoundSource``` instances to track and later cleanup them properly. SoundBuffers and SoundSources are stored in in a ```Map``` so they can be retrieved by an identifier. Although a `SoundSource` will be bound always only toa a single `SoundBuffer` we do not need to create a sound soruce for ach possible sound. In fact, we can have a few sound sources depending on their characteristics, for example if they are relative or not, or their position, and change the buffer which they are bound to dynamically. The constructor initializes the OpenAL subsystem:
+This class holds references to the ```SoundBuffer``` and ```SoundSource``` instances to track them and later clean them up properly. SoundBuffers and SoundSources are stored in a ```Map``` so they can be retrieved by an identifier. Although a `SoundSource` will always be bound to a single `SoundBuffer`, we do not need to create a sound source for each possible sound. In fact, we can have a few sound sources depending on their characteristics, for example if they are relative or not, or their position, and change the buffer which they are bound to dynamically. The constructor initializes the OpenAL subsystem (a condensed sketch follows the list below):
 
 * Opens the default device.
 * Create the capabilities for that device.
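+
+A condensed, hedged sketch of that initialization sequence with the LWJGL OpenAL bindings (error handling omitted):
+
+```java
+long device = alcOpenDevice((ByteBuffer) null);              // open the default device
+ALCCapabilities deviceCaps = ALC.createCapabilities(device); // create the capabilities for that device
+long context = alcCreateContext(device, (IntBuffer) null);
+alcMakeContextCurrent(context);
+AL.createCapabilities(deviceCaps);
+```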
diff --git a/bookcontents/vulkanbook.epub b/bookcontents/vulkanbook.epub
index 0b757045..4891477e 100644
Binary files a/bookcontents/vulkanbook.epub and b/bookcontents/vulkanbook.epub differ