Advanced Effects

Are you an experienced programmer with a problem concerning the engine, shaders, or advanced effects? Here you'll get answers.
No questions about C++ programming or topics that are already answered in the tutorials!
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

From this I can see that the Normals are Updated and Unique for each instance..

Image
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

Here I can see how Irrlicht seems to try to apply a single array of TAN BIN Pairs to different meshes..
(by what criteria I don't know)
(you won't see them SWAP, but they do)

Image
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

The only way I can get UNIQUE "TAN BIN Pairs" is to load the same object twice..
(the two files may not have identical names, though)

Image
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

This is what the two objects should look like, but
the only way I could achieve it was to load the same object twice,
which is obviously not a solution..

Image
It's not the "Kiss of Death" for my project but pretty close..

Ciao! I'd like to hear from you!
devsh
Competition winner
Posts: 2057
Joined: Tue Dec 09, 2008 6:00 pm
Location: UK
Contact:

Re: Advanced Effects

Post by devsh »

Version 0.2.2 of my engine is out, if you're interested.
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

That would be the natural next step..
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

While I have an instance of a skinned mesh that calls itself unique but isn't, I shall keep looking for
a good workaround till the day Irrlicht gets this sorted out..
For now I'll pretend that they are unique, because my code is doing its part and should
suddenly start working as expected once the problem is solved..

Anyhow, while my system is what it is, I shall have to carry on in 32-bit..
Nothing is ever lost while learning..
omaremad
Competition winner
Posts: 1027
Joined: Fri Jul 15, 2005 11:30 pm
Location: Cairo,Egypt

Re: Advanced Effects

Post by omaremad »

Lazy solution:

Examine the smgr->drawAll() code.

You might be able to move the skinned mesh generation from the OnAnimate() phase to the render() phase. That way a shared mesh can be updated just before rendering and then dumped, which essentially creates hidden temporary copies that only stay alive long enough for rendering. A custom scene node loosely based on CAnimatedMeshSceneNode might do the trick: make a mesh copy in render(), do the skinning, then delete it. Delaying the skinning does mean your bounding boxes will be invalid for the OnRegisterSceneNode() phase, which does the culling after OnAnimate().

It's a hack, but up-to-date vertex-based bounding boxes are very expensive and wasteful anyway. A big fixed bounding box, based on the frame where the mesh is biggest, is ideal.
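Very roughly, the skeleton of such a node might look like the sketch below. This is untested and only shows the structure: the class name CLateSkinNode, the BigBox extents and the CurrentFrame member are placeholders, and the actual skinning/tangent update is left out.

Code: Select all

 // Hedged sketch: a node that leaves the shared IAnimatedMesh alone until
 // render(), then pulls its own frame just before drawing. Error handling,
 // animation timing and the tangent recalculation itself are omitted.
 #include <irrlicht.h>
 using namespace irr;
 
 class CLateSkinNode : public scene::ISceneNode
 {
 public:
     CLateSkinNode(scene::IAnimatedMesh* sharedMesh, scene::ISceneNode* parent,
                   scene::ISceneManager* mgr, s32 id = -1)
         : scene::ISceneNode(parent, mgr, id), SharedMesh(sharedMesh), CurrentFrame(0.f)
     {
         // Big fixed box covering the largest animation frame (hand tuned),
         // so culling never needs the freshly skinned vertices.
         BigBox = core::aabbox3d<f32>(-10.f, -10.f, -10.f, 10.f, 10.f, 10.f);
     }
 
     virtual void OnRegisterSceneNode()
     {
         if (IsVisible)
             SceneManager->registerNodeForRendering(this);
         ISceneNode::OnRegisterSceneNode();
     }
 
     virtual void render()
     {
         // Only now ask the shared mesh for this instance's pose, so each node
         // works with its own temporary per-frame geometry (this is the point
         // where a per-instance tangent update could also happen).
         scene::IMesh* frame = SharedMesh->getMesh((s32)CurrentFrame);
         video::IVideoDriver* driver = SceneManager->getVideoDriver();
         driver->setTransform(video::ETS_WORLD, AbsoluteTransformation);
         for (u32 i = 0; i < frame->getMeshBufferCount(); ++i)
         {
             scene::IMeshBuffer* mb = frame->getMeshBuffer(i);
             driver->setMaterial(mb->getMaterial());
             driver->drawMeshBuffer(mb);
         }
     }
 
     virtual const core::aabbox3d<f32>& getBoundingBox() const { return BigBox; }
 
 private:
     scene::IAnimatedMesh* SharedMesh;
     core::aabbox3d<f32> BigBox;
     f32 CurrentFrame;
 };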
"Irrlicht is obese"

If you want modern rendering techniques learn how to make them or go to the engine next door =p
omaremad
Competition winner
Posts: 1027
Joined: Fri Jul 15, 2005 11:30 pm
Location: Cairo,Egypt

Re: Advanced Effects

Post by omaremad »

HINT: if you compute the tangents on the mesh that's grabbed by smgr->getMesh(), you are working on a central mesh from the mesh cache, which is shared by all animated meshes with the same filename. Don't use mesh cache references; you are getting cached, shared meshes. Do the tangents at another stage or on another reference.

I think you are getting the tangents of the last mesh that gets updated from the cache.
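For a static mesh the rule would look roughly like the snippet below (a sketch only; "myModel.b3d" is a placeholder filename, and for a skinned mesh the same "copy first, then add tangents" rule has to be applied to whatever per-instance geometry you end up with):

Code: Select all

 // Hedged sketch: build the tangent mesh from a private copy, never on the
 // cache entry returned by getMesh(), which is shared between all instances.
 #include <irrlicht.h>
 using namespace irr;
 
 scene::IMeshSceneNode* addTangentInstance(scene::ISceneManager* smgr,
                                           const io::path& file) // e.g. "myModel.b3d"
 {
     scene::IAnimatedMesh* cached = smgr->getMesh(file); // shared mesh-cache entry
     if (!cached)
         return 0;
 
     // createMeshWithTangents() builds a NEW mesh (with tangent vertices) from
     // the given frame, so the cached mesh itself stays untouched and shared.
     scene::IMeshManipulator* manip = smgr->getMeshManipulator();
     scene::IMesh* tangentCopy = manip->createMeshWithTangents(cached->getMesh(0));
 
     scene::IMeshSceneNode* node = smgr->addMeshSceneNode(tangentCopy);
     tangentCopy->drop(); // the scene node grabbed it; release our reference
     return node;
 }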
"Irrlicht is obese"

If you want modern rendering techniques learn how to make them or go to the engine next door =p
omaremad
Competition winner
Posts: 1027
Joined: Fri Jul 15, 2005 11:30 pm
Location: Cairo,Egypt

Re: Advanced Effects

Post by omaremad »

You can also preserve the bounding box from the last frame, updating it in the render() function.
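Continuing the rough CLateSkinNode sketch from earlier (still only a sketch, member names are placeholders), that could mean something like:

Code: Select all

 // Hedged continuation of the earlier sketch: remember the box of whatever
 // frame was last drawn and hand that (one frame stale) box to the culler.
 virtual void render()
 {
     scene::IMesh* frame = SharedMesh->getMesh((s32)CurrentFrame);
     LastFrameBox = frame->getBoundingBox(); // keep it for next frame's culling
     // ... draw the mesh buffers as before ...
 }
 
 virtual const core::aabbox3d<f32>& getBoundingBox() const
 {
     return LastFrameBox; // one frame behind, but cheap and usually close enough
 }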
"Irrlicht is obese"

If you want modern rendering techniques learn how to make them or go to the engine next door =p
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

Omaremad:
Don't use mesh cache references; you are getting cached, shared meshes. Do the tangents at another stage or on another reference.
Thanks! I think this is the clue..
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

Trying out Screen Space Reflection..
I'll post the project once it looks presentable.
Image
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

Here is a still messy shader..

Code: Select all

 
 
 // Credit to the original author.. (Thanks to Martins Upitis for sharing)
 // This comes from "https://drive.google.com/file/d/1q2mf_xvHcoxfDXRZYI_GILWQhU0CbvoN/view"
 // "Sebastian Mestre"
 // Re-written for Irrlicht by Vectrotek..
 
 // The original version of this was written for Blender, which at the time had no way of feeding the View Normals buffer
 // to the shader. It acquired these normals with a clever little algorithm that scanned the depth buffer.
 // Here we simply feed the normal information via a pre-rendered normal buffer.
 // We also use the "" Render Target format to get a good Depth..
 
 #version 120
 // ======================================== ==== == = 
 uniform sampler2D Image01;      // DEPTH.. 32 BIT FLOAT IN RED CHANNEL
 uniform sampler2D Image02;      // ORIGINAL RENDER..
 uniform sampler2D Image03;      // RANGE COMPRESSED VIEW-SPACE NORMALS..
 
 uniform int       SSRMode;      // 0 is BLINN; 1 is MESTRE; 2 is GGX; (look these up)
 uniform int       SSRRayCount;
 
 // The idea is to tweak these values during development and then have them hardcoded at release..
 
 uniform float     SSRGloss;               // To be keyed by an image eventually.. (was "SurfaceGloss")
 uniform float     SSRReflectionLevel;     // Also to be keyed by an image.. (was "SurfaceReflectiveness")
 uniform float     SSRRayMarchStepSize;    // Step size for ray marching.. (was "RayMarchStepSizeIn")
 uniform int       SSRRayMarchStepCount;   // Max steps for ray marching.. (was "maxstepsIn")
 uniform float     SSRStepScaleStart;      // Initial scale of the step size for ray marching.. (was "startScaleIn")
 uniform float     SSRMaxPenetration;      // Thickness of the world.. ORIG: 0.5..
 
 uniform float     BufferWidth;
 uniform float     BufferHeight;
 uniform float     CameraNEAR;
 uniform float     CameraFAR;
 
 // uniform vec3      CamPosXYZ;    // No use for this yet..
 // ======================================== ==== == = 
 
 
 int EARLY_EXIT = 0;
 
 
 float   fov       = 75.0;                       // MUST MATCH CAMERA (perhaps also a uniform from app)
 
 // TO BE INCORPORATED..
 float   SSRFieldOfView       = 75.0;
 float   SSRStepScaleIncrement = 0.5;
 
 
 
 const   float pi  = 3.14159265359;
 vec2    texCoord  = gl_TexCoord[0].st;
 vec3    skycolor  = vec3( 0.0 ,  0.0 ,  0.0 );   // use the horizon color under world properties, fallback when reflections fail
 vec3    grncolor  = vec3( 0.0 ,  0.0 ,  0.0 );   // Not used here by the looks of it..
 // Tweak these to your liking -- each comes with advantages and disadvantages..
 float AspectRatio      = BufferWidth / BufferHeight; // See how a uniform can be assigned to a declared variable outside of the main function..
 float FovRatio         = 90.0 / SSRFieldOfView;
 float DepthLinearised;
 // ================================== ==== === == =
 float UnPackDepth_002(vec3 RGBDepth) {return RGBDepth.x + (RGBDepth.y / 255.0) + (RGBDepth.z / 65535.0);}
 // ================================== ==== === == =
 float GetTrueDepth(vec2 coord) 
  {return (UnPackDepth_002(texture2D(Image01, coord).xyz) + (1.0 / (CameraFAR - CameraNEAR))*CameraNEAR)*(CameraFAR - CameraNEAR);
   // THIS PROVES THAT WE HAVE A SUCCESSFUL 32 BIT RED CHANNEL RENDERED TO! (big game changer.. no more pack and unpack)..
   // float TrueDepthJACKED = LinearisedAsReadRED * CameraFAR - LinearisedAsReadRED * CameraNEAR + CameraNEAR;
   // return TrueDepthJACKED;
  }
 // ================================== ==== === == =
 vec3 GetViewSpacePosFromCoord(vec2 coord) 
  {vec3 pos = vec3((coord.s * 2.0 - 1.0) / FovRatio, (coord.t * 2.0 - 1.0) / AspectRatio / FovRatio, 1.0);
   return (pos * GetTrueDepth(coord));
  }
 // ================================== ==== === == =
 // Is this even worth the performance?  BRING THIS IN AS A BUFFER RENDERED TO BY A PLAIN VIEW NORMAL GEOMETRY RENDERER..
 // IRRLICHT: We replace this by simply loading a Range Compressed Version of the normals in a buffer.
 vec3 GetViewNormalViaDepth(vec2 coord) 
  {float pW  = 1.0 / BufferWidth;
   float pH  = 1.0 / BufferHeight;
   vec3  p1  = GetViewSpacePosFromCoord(coord + vec2(pW, 0.0)).xyz;
   vec3  p2  = GetViewSpacePosFromCoord(coord + vec2(0.0, pH)).xyz;
   vec3  p3  = GetViewSpacePosFromCoord(coord + vec2(-pW, 0.0)).xyz;
   vec3  p4  = GetViewSpacePosFromCoord(coord + vec2(0.0, -pH)).xyz;
   vec3  vP  = GetViewSpacePosFromCoord(coord);
   vec3  dx  = vP - p1;
   vec3  dy  = p2 - vP;
   vec3  dx2 = p3 - vP;
   vec3  dy2 = vP - p4;
   if (length(dx2) < length(dx) && coord.x - pW >= 0.0 || coord.x + pW > 1.0) 
    {dx = dx2;}
   if (length(dy2) < length(dy) && coord.y - pH >= 0.0 || coord.y + pH > 1.0) 
    {dy = dy2;}
  return normalize(cross(dx, dy));
 }
 // ================================== ==== === == =
 vec2 ScreenCoordFromPos(vec3 Position) 
  {vec3 Normal = Position / Position.z;
   vec2 ScreenCoord = vec2((Normal.x * FovRatio + 1.0) / 2.0, (Normal.y * FovRatio * AspectRatio + 1.0) / 2.0);
   return ScreenCoord;
  }
 // ================================== ==== === == =
 vec2 SnapToPixel(vec2 coordin) 
  {vec2 coord;
   coord.x = (floor(coordin.x *  BufferWidth) + 0.5) /  BufferWidth;
   coord.y = (floor(coordin.y * BufferHeight) + 0.5) / BufferHeight;
   return coord;
  }
 // ================================== ==== === == =
 // Halton low-discrepancy series generator. Maybe replace with something more efficient later?
 float Halton(int i, int b) 
  {float f = 1.0;
   float r = 0.0;
   while (i > 0)
    {f /= float(b);
     r += f * mod(float(i), float(b));
     i /= b;
    }
   return r;
  }
 // ================================== ==== === == =
 // ALL ABOUT GLOSS..
 vec3 DistortWithHalton(vec3 vec, vec3 ref, int i, float n) 
  {vec3 z = vec;
   vec3 y = cross(z, ref);
   vec3 x = cross(z, y);
   float ran1 = mod(Halton(i, 2) + ref.x * 167.0, 1.0);
   float ran2 = mod(Halton(i, 3) + ref.y * 167.0, 1.0);
   // Assumes an isotropic surface..
   float phi = ran2 * pi * 2.0;
   float theta;
   // int SSRMode = 1;  // SSRMode == SSRModeIn  // Shouldn't assign Uniforms..
   // - Blinn - 
   if (SSRMode == 0) {theta = acos(pow(ran1, 1.0 / (n + 2.0)));}
   // - Mestre - 
   if (SSRMode == 1) {theta = log(ran1 / (1.0 - ran1)) / n;}
   // - GGX - 
   if (SSRMode == 2) {theta =  acos(sqrt((1.0 - ran1) / ((SSRGloss * SSRGloss * SSRGloss * SSRGloss - 1.0) * (ran1) + 1.0)));}
   // The standard form of GGX uses roughness^2 and not roughness^4, but a quadratic scale is preferred by many..
   float xc = sin(theta) * cos(phi);
   float yc = sin(theta) * sin(phi);
   float zc = cos(theta);
   vec3 outdir = xc * x + yc * y + zc * z; // output direction (named so it doesn't shadow the built-in mod())..
   if (dot(outdir, vec) < 0.0) 
    {outdir = reflect(outdir, vec);
    }
   return outdir;
  }
 // ================================== ==== === == =
 vec4 LINEARtoSRGB(vec4 ColourRGBA) // Approximate gamma encode (linear -> sRGB) for the final output..
  {return pow(ColourRGBA, vec4(1.0 / 2.2));
  }
 // ================================== ==== === == =
 vec4 SRGBtoLINEAR(vec4 ColourRGBA) // Approximate gamma decode (sRGB -> linear), so the reflection blend happens in linear space..
  {return pow(ColourRGBA, vec4(2.2));
  }
 // ================================== ==== === == =
 // See "C:\____DATA_0031_XXX\001_CODE\0548_ADV_49_REBIRTH_NEW_TESTS_ADVANCED\__DESK__\__USEFUL__MATH__\Schlicks_approximation.pdf"
 float schlick(float Reflectance, vec3 Normal, vec3 ViewSpacePos)
  {return Reflectance + (1.0 - Reflectance) * pow(1.0 - dot(-ViewSpacePos, Normal), 5.0);
  }
 // ================================== ==== === == =
 vec3 raymarch(vec3 Position, vec3 RayDirection) 
  {// See how the initial normalisation makes it possible for us to set the step distance of the ray..
   RayDirection = normalize(RayDirection) * SSRRayMarchStepSize; 
   float stepScale = SSRStepScaleStart;
 
   for (int steps = 0; steps < SSRRayMarchStepCount; steps++) 
    {vec3 deltapos = RayDirection * stepScale * Position.z; // Remember that we're working in "View Space", so we multiply by Z..
     Position += deltapos;
     vec2 AcquiredScreenCoord = ScreenCoordFromPos(Position);
     bool OUT_OF_BOUNDS = false; // OUT OF BOUNDS..
     // It means.. If the given Screen Coord is below min x or y or above max x or y, then it is out of bounds..
     // Remember that this AcquiredScreenCoord is generated by "ScreenCoordFromPos(Position)".
     // The same applies for the Z Position.
     OUT_OF_BOUNDS = OUT_OF_BOUNDS || (AcquiredScreenCoord.x < 0.0) || (AcquiredScreenCoord.x > 1.0); // X 
     OUT_OF_BOUNDS = OUT_OF_BOUNDS || (AcquiredScreenCoord.y < 0.0) || (AcquiredScreenCoord.y > 1.0); // Y 
     OUT_OF_BOUNDS = OUT_OF_BOUNDS || (Position.z >  CameraFAR ) || (Position.z <  CameraNEAR); // Z 
     if (OUT_OF_BOUNDS) 
      {//EARLY_EXIT = 1;
       return vec3(0.0);
      }
      AcquiredScreenCoord = SnapToPixel(AcquiredScreenCoord);
     float Penetration = length(Position) - length(GetViewSpacePosFromCoord(AcquiredScreenCoord));
     if (Penetration > 0.0) 
      {if (stepScale > 1.0) {Position -= deltapos; stepScale *= 0.5; }  // SSRSTEPSCALEINCREMENT
       else 
       if (Penetration < SSRMaxPenetration) {return Position;}
      }
    }
   return vec3(0.0);
  }
 // ================================== ==== === == =
 vec4 glossyReflection(vec3 Position, vec3 Normal, vec3 View, int RayCount) 
  {vec4 radiance = vec4(0.0);
   vec4 irradiance = vec4(0.0);
   //float SurfaceGlossXXX = 0.15;
   float blinnExponent = pow(2.0, 15.0 * (1.0 - SSRGloss));
   for (int i = 0; i < RayCount; i++) 
    {vec3 middle      = DistortWithHalton(Normal, View, i + 1, blinnExponent);
     vec3 omega       = reflect(View, middle);
     vec3 collision   = raymarch(Position, omega);
    // if (EARLY_EXIT == 1) {gl_FragColor = vec4(1.0,1.0,1.0,1.0);  }
     vec2 screenCoord = ScreenCoordFromPos(collision);
     irradiance = SRGBtoLINEAR(texture2D(Image02, screenCoord));
     float backamount = max(abs(screenCoord.x - 0.5), abs(screenCoord.y - 0.5));
     backamount = pow(backamount * 2.0, 5.0) * 1.5 - 0.25;
     if (collision.z == 0.0) 
      {backamount = 1.0;
      }
     radiance += mix(irradiance, SRGBtoLINEAR(vec4(skycolor, 1.0)) * (pi/2.0), backamount) / float(RayCount);
    }
   return radiance;
  }
 
 // ================================== ==== === == =
 
 void main() 
  {
   //float SurfaceReflectivenessXXX = 0.25;
   // int  SSRRayCount = 1;
   vec3 position = GetViewSpacePosFromCoord(texCoord);
 
   // THE VIEW NORMAL SHOULD COME FROM AN IMAGE.. (done)
   // VERTEX: "GL_0640_V_GEO_RGB_NORMAL_WORLD_NON_CLIPPED.glsl"  & FRAGMENT "GL_0007_F_GEO_RGB_RC_NORMAL_CLIPPED_WV.glsl"..
 
  vec3 normal;
 
 
  vec3 RCnormalViewSpace   = texture2D(Image03, texCoord).xyz;      // NOT NORMALISED EVER WHEN IN RANGE COMPRESSED FORM!!
  vec3 URCNormal = normalize ((RCnormalViewSpace.xyz - 0.5) * 2.0); // Must normalise! (fixes funny orb!!)
 
   normal = URCNormal;  // USE THE CLEAN NORMALS IN THE BUFFER  (higher quality)..
  //  - OR -
  // normal   = GetViewNormalViaDepth(texCoord); // ORIGINAL: USE NORMALS CALCULATED FROM DEPTH (lower quality)..
 
      
 
   
   // Placeholders for flipping the normal axes if needed (currently no-ops).. IS THIS THE BEST PLACE??
   normal.x *= 1.0;
   normal.y *= 1.0;
   normal.z *= 1.0;
   
   vec3 view = normalize(position);
   float reflectivity = schlick(SSRReflectionLevel, normal, view);   // fragment shading data..
   vec4 image = texture2D(Image02, texCoord);// fragment color data..
   vec4 direct = SRGBtoLINEAR(image);
   vec4 reflection = glossyReflection(position, normal, view, SSRRayCount);
 
   // reflection.xyz *= reflection.xyz;
 
 
   //if (BufferWidth == 800.0) {gl_FragColor = vec4 (1,1,1,1); return; }
   //Depth24BitPacked = texture2D(Image01, texCoord).xyz;
   //DepthLinearised = UnPackDepth_002(Depth24BitPacked) ;
   gl_FragColor = LINEARtoSRGB(mix(direct * 1, reflection * 1, reflectivity)); 
   // gl_FragColor = vec4 (0.5 * normal.x + 0.5, 0.5 * normal.y + 0.5, 0.5 * -normal.z + 0.5, 1.0);
    // gl_FragColor = vec4 (normal.x, normal.y , normal.z , 1.0);
  }
 
// RANGE COMPRESSED..
  // TheNormal = 0.5 * Normal.xyz + 0.5;
 
// EXPANDED (UNCOMPRESSED)..
  // TheNormal = (RCNormalIN.xyz - 0.5) * 2.0;
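
For completeness: the uniforms above get fed from the application side through Irrlicht's high-level shader material callback. A rough sketch of that part (assuming Irrlicht 1.8; the tuning values and the class name SSRCallBack are just placeholders):

Code: Select all

 // Hedged sketch of the app side feeding the SSR uniforms (Irrlicht 1.8 API)..
 #include <irrlicht.h>
 using namespace irr;
 
 class SSRCallBack : public video::IShaderConstantSetCallBack
 {
 public:
     virtual void OnSetConstants(video::IMaterialRendererServices* services, s32 userData)
     {
         // Texture units the three input buffers are bound to..
         s32 tex0 = 0, tex1 = 1, tex2 = 2;
         services->setPixelShaderConstant("Image01", &tex0, 1); // depth
         services->setPixelShaderConstant("Image02", &tex1, 1); // original render
         services->setPixelShaderConstant("Image03", &tex2, 1); // view-space normals
 
         s32 mode = 2, rayCount = 4, stepCount = 60;            // placeholder tuning values
         f32 gloss = 0.85f, reflection = 0.5f, stepSize = 0.05f;
         f32 stepScale = 8.0f, maxPen = 0.5f;
         f32 width = 1280.0f, height = 720.0f, camNear = 1.0f, camFar = 1000.0f;
 
         services->setPixelShaderConstant("SSRMode", &mode, 1);
         services->setPixelShaderConstant("SSRRayCount", &rayCount, 1);
         services->setPixelShaderConstant("SSRRayMarchStepCount", &stepCount, 1);
         services->setPixelShaderConstant("SSRGloss", &gloss, 1);
         services->setPixelShaderConstant("SSRReflectionLevel", &reflection, 1);
         services->setPixelShaderConstant("SSRRayMarchStepSize", &stepSize, 1);
         services->setPixelShaderConstant("SSRStepScaleStart", &stepScale, 1);
         services->setPixelShaderConstant("SSRMaxPenetration", &maxPen, 1);
         services->setPixelShaderConstant("BufferWidth", &width, 1);
         services->setPixelShaderConstant("BufferHeight", &height, 1);
         services->setPixelShaderConstant("CameraNEAR", &camNear, 1);
         services->setPixelShaderConstant("CameraFAR", &camFar, 1);
     }
 };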
 
 
 
 
 
devsh
Competition winner
Posts: 2057
Joined: Tue Dec 09, 2008 6:00 pm
Location: UK
Contact:

Re: Advanced Effects

Post by devsh »

I have two suggestions:
1) Construct custom mip-maps of the depth buffer using the MIN filter to obtain an implicit quadtree of nearest depth to camera to accelerate your ray marching
2) Do a screen-space blur postprocess on just the reflections component to take out the grid pattern/noise

A further idea would be a deep G-Buffer (essentially depth-peeling the scene once) of 2 layers or more, to fill in the gaps when a ray shoots behind an object and we have missing data.
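To make the first suggestion concrete: each mip level of the depth buffer keeps the minimum of the 2x2 texels below it, so any level tells you the nearest depth over a whole screen tile and the ray can safely skip tile by tile. Below is a CPU-side illustration of just that MIN reduction (conceptual only; in practice you build the chain on the GPU with successive half-resolution passes, and the names here are placeholders):

Code: Select all

 // Conceptual CPU-side illustration of a MIN-filtered depth mip chain
 // (an implicit quadtree of nearest-depth-to-camera per screen tile).
 #include <algorithm>
 #include <vector>
 
 std::vector<std::vector<float> > buildMinDepthMips(const std::vector<float>& level0,
                                                    int width, int height)
 {
     std::vector<std::vector<float> > mips;
     mips.push_back(level0);
     while (width > 1 && height > 1)
     {
         const std::vector<float>& prev = mips.back();
         const int w = width / 2, h = height / 2;
         std::vector<float> next(w * h);
         for (int y = 0; y < h; ++y)
             for (int x = 0; x < w; ++x)
             {
                 // Each coarse texel keeps the NEAREST (minimum) of its 2x2 children.
                 const float d00 = prev[(2 * y    ) * width + (2 * x    )];
                 const float d10 = prev[(2 * y    ) * width + (2 * x + 1)];
                 const float d01 = prev[(2 * y + 1) * width + (2 * x    )];
                 const float d11 = prev[(2 * y + 1) * width + (2 * x + 1)];
                 next[y * w + x] = std::min(std::min(d00, d10), std::min(d01, d11));
             }
         mips.push_back(next);
         width = w;
         height = h;
     }
     return mips; // mips[k] is the depth buffer at 1/2^k resolution, MIN-reduced
 }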
Vectrotek
Competition winner
Posts: 1087
Joined: Sat May 02, 2015 5:05 pm

Re: Advanced Effects

Post by Vectrotek »

devsh: You're a genius!
After all, I didn't even know about the different render target formats before you told me!
I'll look into your suggestions!