Obstruction handling and dissolve shader techniques in Trifox

Hi, I'm Brecht Lecluyse, one of the developers at Glowfish Interactive currently working on Trifox. Welcome to part 2 of Dissolving The World: Obstruction handling and dissolve shader techniques in Trifox.

Quick recap

In case you missed part 1, you can find it right here:

In part 1 we defined the specific problem we are trying to solve: how do we deal with objects that obstruct the view of our main character during gameplay? I went over the approach we took in coming up with a solution that works well for the game, and took a closer look at some common shader techniques such as world space texturing and the creation of a simple texture based dissolve shader. In this part I will go over how we improved the dissolve quality and how we use this simple technique throughout Trifox to create all kinds of fun visual effects (including the dynamic camera obstruction handling) and even some cool gameplay mechanics.

Part 1 ended by highlighting some of the flaws that revealed themselves when using the texture based dissolve shader technique. Namely, the dissolve can be very chaotic near the transition edges, and because we are using a texture as the basis for the dissolve, we can't get too close without clearly noticing pixelated transitions. We also cannot guarantee a nice continuation of the noise texture across large surface angle differences and between intersecting objects.

The only way around this problem is to completely eliminate the use of a texture. Instead, we use math to generate procedural 3D noise that takes the world space coordinates as its main input parameter. (Each point in 3D space is assigned a single value that forms a gradient with each surrounding point in space.) In other words, we end up with a 3D texture that consists of gradient volumes with values that smoothly transition between 0 and 1. (The image above has the dissolve percentage set to around 50% to better visualize the 3D noise volumes.)

The use of procedural noise enables us to scale and zoom as much as we want without any loss of quality. The dissolve is also seamless across all surfaces, no matter how they are aligned, as no UV coordinates are needed. Below you can see the two solutions side by side (texture vs procedural). Notice how the fully procedural technique feels incredibly smooth compared to the texture technique. Add in an emissive border and you end up with a fun magical disintegration effect. This gave us exactly what we were looking for.

Involving the screen coordinates

The next step: how do we use this to dynamically deal with the camera obstruction problem? In our initial requirements for the obstruction handling (see part 1) we mentioned that the player should still feel aware of the objects that are being hidden. Meaning that, if possible, we still want to see parts of the object we are trying to hide, as long as it doesn't hinder our view.

To achieve this we introduced screen space coordinates into the shader. That way we can determine how far the surface we are trying to render is removed from the center of the screen (the focal point) in screen space. Usually screen space coordinates are used for post-processing effects, but in our case we use this information to influence the base shading of the objects in question.

You can access the screen position by adding the following to the shader input struct: float4 screenPos. This gives you access to the screen position in the same way as we gained access to the world position. Below you can see a simple screen space radial gradient being calculated and applied to the objects in the scene. The screen position directly influences the color value of the rendered geometry. The result is an image that feels as if you are shining a spotlight onto the center of your screen.
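To make the procedural noise idea concrete, here is a minimal HLSL sketch of world-space 3D value noise driving a dissolve. This is an illustration of the general technique, not the exact shader used in Trifox; the hash function and the `_NoiseScale`/`_DissolveAmount` parameters are assumptions for the example.

```hlsl
// Illustrative 3D value noise dissolve (a sketch, not the Trifox shader).
// Each lattice point in 3D space gets a pseudo-random value in [0,1];
// interpolating between the eight surrounding corner values produces the
// smooth gradient volumes described in the article.

float hash3D(float3 p)
{
    // Cheap sine-based hash: fine for a visual effect, not for anything
    // needing statistical quality.
    return frac(sin(dot(p, float3(12.9898, 78.233, 45.164))) * 43758.5453);
}

float valueNoise3D(float3 p)
{
    float3 i = floor(p);                 // cell corner
    float3 f = frac(p);                  // position inside the cell
    f = f * f * (3.0 - 2.0 * f);         // smoothstep fade for soft blending

    // Trilinearly interpolate the hashed values of the cell's 8 corners.
    float n000 = hash3D(i + float3(0, 0, 0));
    float n100 = hash3D(i + float3(1, 0, 0));
    float n010 = hash3D(i + float3(0, 1, 0));
    float n110 = hash3D(i + float3(1, 1, 0));
    float n001 = hash3D(i + float3(0, 0, 1));
    float n101 = hash3D(i + float3(1, 0, 1));
    float n011 = hash3D(i + float3(0, 1, 1));
    float n111 = hash3D(i + float3(1, 1, 1));

    float nx00 = lerp(n000, n100, f.x);
    float nx10 = lerp(n010, n110, f.x);
    float nx01 = lerp(n001, n101, f.x);
    float nx11 = lerp(n011, n111, f.x);
    return lerp(lerp(nx00, nx10, f.y), lerp(nx01, nx11, f.y), f.z);
}

// Called from the surface/fragment function. worldPos comes from the
// shader input struct; noiseScale and dissolveAmount are assumed
// material properties.
void applyDissolve(float3 worldPos, float noiseScale, float dissolveAmount)
{
    float n = valueNoise3D(worldPos * noiseScale);
    clip(n - dissolveAmount); // discard fragments below the threshold
}
```

Because the noise is a pure function of world position, it is seamless across intersecting objects and can be rescaled freely, which is exactly what the texture-based approach could not guarantee.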
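The screen space radial gradient can be sketched as a small Unity-style surface shader. `screenPos` in the `Input` struct is the documented way Unity exposes the screen position; the shader name, `_Radius` property, and the exact falloff are illustrative assumptions, not the shader the game ships with.

```hlsl
// Sketch of a screen space radial gradient in a Unity surface shader.
// Fragments near the screen center are bright, fading toward the edges,
// like a spotlight aimed at the middle of the screen.
Shader "Custom/ScreenSpaceSpotlight" // hypothetical name
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
        _Radius ("Spotlight Radius", Range(0.01, 1.0)) = 0.5
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert

        struct Input
        {
            float4 screenPos; // Unity fills this with the screen position
        };

        fixed4 _Color;
        float _Radius;

        void surf(Input IN, inout SurfaceOutput o)
        {
            // Perspective divide gives normalized [0,1] screen coordinates.
            float2 uv = IN.screenPos.xy / IN.screenPos.w;

            // Offset from the screen center (the focal point), corrected
            // for aspect ratio so the gradient stays circular.
            float2 fromCenter = uv - 0.5;
            fromCenter.x *= _ScreenParams.x / _ScreenParams.y;
            float dist = length(fromCenter);

            // 1 at the center, falling off to 0 at _Radius.
            float spotlight = saturate(1.0 - dist / _Radius);
            o.Albedo = _Color.rgb * spotlight;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```

Swapping the albedo tint for a dissolve threshold is then a small step: feed `dist` into the clip test instead of the color, and surfaces dissolve more aggressively the closer they sit to the focal point.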