WIP: implement colorspace support (sRGB, etc.) #1543
Conversation
Here is an arachnid2 build, to be used with that branch: …

I need help to understand why the creep is so bright: …

Here is the creep material: …
I guess it should be blended with the lightgrid. Models lit with the lightgrid don't look that wrong.
What happens if you use overbright clamping? That looks like a light factor multiplication bug.
I also don't really get why you want to store everything in sRGB and do extra conversions (which include …
How does that interact with overbright? Does it define some extended sRGB curve that can operate on the range [0, 2^overbrightBits]? Or does it divide by 2^n first and then use standard sRGB?
Which ones are those?
On the other hand, it would be kinda weird if stuff like …
Answers to @VReaperV:
The same happens if I disable overbright to begin with. Overbright clamping is not a fix, nor a feature, and precomputed overbright (and then clamping) cannot be used here anyway, so it is forcibly disabled when such a map is loaded.
Because this is the industry standard. The industry standard here is DarkPlaces/Xonotic: it was the DarkPlaces/Xonotic people who implemented the sRGB stuff in q3map2 (I had nothing to implement in q3map2), and the de-facto test maps for the feature are Xonotic maps. Xonotic went the way of also converting the lightmap to sRGB as a storage-precision trick, likely because it improves storage precision while retaining compatibility with in-BSP lightmaps (an 8-bit-per-channel array with fixed-size dimensions), and therefore without breaking the BSP format. We in Unvanquished discourage the use of in-BSP lightmaps, but thanks to that trick the DarkPlaces/Xonotic people made it possible for mappers to just add the single …

I would like to see an HDR lightmap implemented; multiple implementations of it exist in some q3map2 forks, most of them dead. But this is for the future. The first step is to achieve what DarkPlaces/Xonotic already does, which is the easiest target because that's their standard pipeline and we have a standard test bed for it in the form of their own assets. This branch already has some code to decide whether to convert textures and lightmaps separately, on purpose for later HDR work that would store lightmaps as linear HDR while having q3map2 still do the conversions when raytracing (q3map2 already has separate options too). But right now, the best we can do is the same as DarkPlaces/Xonotic: it is the least effort.

So I don't want to “store everything in sRGB”; I want to do the least effort, have a test bed and another software as reference, so I do it the way DarkPlaces/Xonotic does.
Replying to @slipher:
Actually, thank you for asking, because I forgot that this patch initially targeted my overbright implementation (which multiplied at the end of the render pipeline, after the light is blended), and I then had to port it to the new overbright implementation (which multiplies before blending the light). Here is the pseudo-code with overbright:
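Something like this (a hedged reconstruction, not the literal patch: the variable names are illustrative, while convertFromSRGB/convertToSRGB are the conversions named in the PR description below):

```glsl
vec3 BlendOverbright( vec3 textureColor, vec3 lightColor, float overbrightFactor )
{
	// Linearize both inputs, apply the overbright factor to the light
	// before blending, then convert the result back to sRGB for display.
	vec3 texel = convertFromSRGB( textureColor );
	vec3 light = convertFromSRGB( lightColor ) * overbrightFactor;
	return convertToSRGB( texel * light );
}
```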
This would also work, because of multiplication commutativity:
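Again as a hedged sketch, with the same illustrative names:

```glsl
vec3 BlendOverbrightAfter( vec3 textureColor, vec3 lightColor, float overbrightFactor )
{
	// Same result by commutativity: the factor multiplies the blended
	// product instead, but still in linear space, before the final conversion.
	vec3 blended = convertFromSRGB( textureColor ) * convertFromSRGB( lightColor );
	return convertToSRGB( blended * overbrightFactor );
}
```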
The first one is what is implemented (simpler). The very important thing to remember is that ALL computations should be done in linear space.
This legacy Doom 3 syntax: …

Or this legacy XreaL syntax: …

But actually, by re-reading …
Yes, that's a problem. We may just assume that they are in linear space. Old maps are expected to mix different color spaces, but we may decide that new maps should use linear space, for example. One thing to know is that the q3map2 …
Here is an example of such material:
We better assume … When … Edit: This material is a good example of where people would just use a color picker and then write the values in sRGB space.
Also, something very bold currently done in my implementation is that the specular map is assumed to be in sRGB space, which is of course wrong. Specular maps should be in linear space. But people were making specular maps long before anyone cared about computing light in linear space. Many specular maps may have just been loosely sampled from textures, and therefore done in sRGB space. And anyway, they were tested with engines doing the wrong computations.

Here is the same branch with specular maps assumed to be in sRGB space: …

Now the same branch with specular maps assumed to be in linear space: …

See how in the first image the egg looks correct and the overmind not that bad, and how in the second image the egg looks wrong while the overmind looks better. So it looks like wrongly assuming a specular map is in sRGB space is less problematic than wrongly assuming a specular map is in linear space; that's why, as a proof of concept, I started with that intentionally wrong assumption. But I guess I will implement some material keyword to tell the format (like we already do with normal maps), so we can assume specular maps are in linear space as they should be, and configure the wrong ones, like the egg one, to be in sRGB space. Actually, this kind of material configuration keyword would need a redesign if we want to use OpenGL sRGB formats instead of explicit GLSL code, as the …

This wrong assumption has nothing to do with the creep being too bright anyway.
The creep is too bright because it is not linearized (so it gets delinearized at the end, which is like applying gamma 2.2 to it). It is not linearized because the texture is loaded before the map is loaded, so it is not yet known that it should be linearized.
In the cgame, …
Moving the call to:

```cpp
// load configs after initializing particles and trails since it registers some
CG_UpdateLoadingStep( LOAD_CONFIGS );
```

It is probably because … Actually, I'd better move it to the very beginning, before loading particles and trails.
This doesn't require changing the image data itself, right? Just some …
Yes. |
What if you just go through all non-UI shaders after loading a map and change those values then?
I did that instead: …
The … On the other hand, …
I added code to disable bloom when not using a map in linear space (so bloom will be disabled with all maps produced until today). I also modified rim-lighting to multiply with linearized factors when computing in linear space; I don't know if it's right, but it looks OK.
```cpp
@@ -4757,6 +4761,15 @@ static bool ParseShader( const char *_text )
	return true;
}

static int packLinearizeTexture( bool linearizeColorMap, bool linearizeMaterialMap, bool linearizeLightMap )
{
	/* HACK: emulate three-bits bitfield …
```
What's with the bit tests written in a way that's very hard to read instead of just & 0x2 or whatever? The u_ColorModulate stuff uses normal bitwise operations.
As far as I know, bitwise operators are only supported in GLSL starting with GLSL 1.30, so OpenGL 3.0.
I don't know how far back you want to go, but GLSL 1.20 definitely supports bit-wise operators.
OK thanks, GLSL 1.20 is fine (OpenGL 2.1), so let's use them then.
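As an illustration, the unpacking could then look like this (a sketch only: the u_LinearizeTexture name comes from the patch, but the uniform type and the bit assignments are my assumptions):

```glsl
// Packed flags, set from packLinearizeTexture() on the C++ side.
uniform int u_LinearizeTexture;

// Plain bitwise tests, one bit per flag.
bool linearizeColorMap()    { return ( u_LinearizeTexture & 1 ) != 0; }
bool linearizeMaterialMap() { return ( u_LinearizeTexture & 2 ) != 0; }
bool linearizeLightMap()    { return ( u_LinearizeTexture & 4 ) != 0; }
```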
So this doesn't do anything different unless you have a map compiled with sRGB? I'm afraid it would be a nightmare to design any non-map graphical assets (player and buildable models, weapon effects, etc.) if they have to work with 2 very different blending modes. Is there any way we could get linear blending to work with existing maps? Like, could we render all the BSP stuff with the old-fashioned rendering pipeline, run an srgb2linear shader over the whole screen, then render everything else in linear blending mode? Of course this wouldn't work for translucent surfaces in the map.
It is intended to render legacy maps as they were rendered before.
Why? When a map is using the linear blending mode, any texture is linearized and then delinearized, so for example with a fullbright light it should produce the same result: with light = 1.0, convertToSRGB( convertFromSRGB( texture ) * 1.0 ) gives back the texture unchanged. Player, weapon and buildable models use the map light (light grid) in either case. It just happens that with the old way, the pre-computed light attenuation and things like that are not correct: you can get a very dark shadow in a very lit room in a non-physically-correct way. Basically, with the old way some contrast is too strong to be real.
The whole data used for linear blending is computed by q3map2 when raytracing the lightmaps/lightgrid, before the assets are released. So no. Here the engine just blends the data computed by q3map2. Existing maps have broken pre-computed data that we can't fix afterward. What this branch does in the engine with compatible maps is just to make sure the data is in the right colorspace when blending texture and precomputed light; the precomputed light is already generated, with either incorrect or correct equations, by q3map2. The engine just makes sure the data is correctly processed when q3map2 used the correct equations. When q3map2 didn't use the correct equations, the engine just does it “as before” for backward-compatibility purposes. The only thing we really have to care about is real-time light, because there we do the job of q3map2 but in real time, hence this thread: …
```glsl
vec3 high = pow((color + 0.055f) * (1.0f / 1.055f), vec3(2.4f));
color = (yes * low) + (no * high);
#elif defined(SRGB_BOOL)
```
Why the 3 different approaches doing the same thing?
```glsl
color = pow(color, vec3(gamma));
#elif defined(SRGB_NAIVE)
// (((c) <= 0.04045f) ? (c) * (1.0f / 12.92f) : (float)pow(((c) + 0.055f)*(1.0f/1.055f), 2.4f))
```
The thresholds used don't match this comment.
```glsl
float gamma = 2.2;
color = pow(color, vec3(1/gamma));
#elif defined(SRGB_NAIVE)
// (((c) < 0.0031308f) ? (c) * 12.92f : 1.055f * (float)pow((c), 1.0f/2.4f) - 0.055f)
```
Shouldn't this be `(((c) <= 0.0031308f) ? (c) * 12.92f : 1.055f * (float)pow((c), 1.0f/2.4f) - 0.055f)`? I'd also drop the () around c and around the whole formula in these comments, to reduce parentheses noise and make them more readable.
```diff
 #if defined(USE_GRID_LIGHTING) || defined(USE_GRID_DELUXE_MAPPING)
-void ReadLightGrid( in vec4 texel, in float lightFactor, out vec3 ambientColor, out vec3 lightColor ) {
+void ReadLightGrid( in vec4 texel, in float lightFactor, in bool linearizeLightMap, out vec3 ambientColor, out vec3 lightColor) {
```
linearizeLightMap is unused.
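Presumably it is meant to be consumed along these lines (a sketch of the intent, not the patch's code; convertFromSRGB is assumed to be the decode helper used elsewhere in the branch):

```glsl
// Decode the light grid texel to linear space when the map requires it,
// before deriving the ambient and directed colors from it.
if ( linearizeLightMap ) {
	texel.rgb = convertFromSRGB( texel.rgb );
}
```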
```cpp
@@ -2806,6 +2806,36 @@ class u_ShadowTexelSize :
	}
};

class u_LinearizeTexture :
```
As I said in #1034, this should go into u_ColorModulateColorGen instead of creating multiple bit-fields.
Yep, I can investigate that later; I'm currently focusing on getting everything rendering correctly first (this patch was started 6 years ago, so other things in the engine have changed, and it's possible to port that code to other mechanisms).
I don't think you can count on anything working both ways except single-stage opaque shaders. Anything with translucency (there is a lot of that with flames, explosions, projectiles, etc.) will be affected, and any multi-stage shader will be affected by blending changes.
What about …
The purpose of this change is to achieve linear blending; this is a continuation of:
Some context:
Textures are in sRGB space; lightmaps are usually in linear space. So the computation for applying a lightmap to a texture and rendering it is:
convertToSRGB( convertFromSRGB( texture ) * light )
q3map2 has a trick to also store lightmaps in sRGB space; this makes it possible to bias the storage and allocate more precision to some wanted values (it makes blacks less banded, if I'm right). When this q3map2 option is used, the computation for applying a lightmap to a texture and rendering it is:
convertToSRGB( convertFromSRGB( texture ) * convertFromSRGB( light ) )
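For reference, these are the standard piecewise sRGB curves; written out explicitly in GLSL they would look like this (a sketch: the function names match the formulas above, but the branch's actual helpers may differ):

```glsl
// sRGB -> linear (decode), per the sRGB standard.
vec3 convertFromSRGB( vec3 color )
{
	// yes is 1.0 for components in the low, linear segment of the curve.
	vec3 yes = step( color, vec3( 0.04045 ) );
	vec3 low = color * ( 1.0 / 12.92 );
	vec3 high = pow( ( color + 0.055 ) * ( 1.0 / 1.055 ), vec3( 2.4 ) );
	return ( yes * low ) + ( ( vec3( 1.0 ) - yes ) * high );
}

// linear -> sRGB (encode), the inverse of the above.
vec3 convertToSRGB( vec3 color )
{
	vec3 yes = step( color, vec3( 0.0031308 ) );
	vec3 low = color * 12.92;
	vec3 high = 1.055 * pow( color, vec3( 1.0 / 2.4 ) ) - 0.055;
	return ( yes * low ) + ( ( vec3( 1.0 ) - yes ) * high );
}
```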
OpenGL provides features to store images while declaring whether they are in linear or sRGB space. When those features are used, OpenGL automatically converts the colors to linear space when sampling them, and converts them back to sRGB when displaying.

I still want to start with a fully explicit implementation where we do the conversions ourselves, because it makes it easier to control what happens at every step with our current engine design. It's very unlikely that our engine runs on graphics cards that don't support sRGB OpenGL image formats (even my 23-year-old Radeon 2700 Pro supports them, if I'm right). Though, our current engine design may not make it easy to use an implicit OpenGL implementation:

There can be other shortcomings like that, but there can also be things that require some in-engine conversions. For example, the rgbgen colors are assumed to be in sRGB space, because if people copy RGB values from a color picker in an image editor like GIMP or Photoshop, those values are in sRGB. So we need to convert those colors back to linear in the C++ engine code.

Migrating to OpenGL sRGB features is something we can postpone to the future: it will reduce the amount of GLSL code and may limit precision loss (so, more performance and less color imprecision), but we don't require it to get a viable product. And we'd better get proper colorspace management as soon as possible.
My effort to achieve proper colorspace management is now 6 years old.
@sweet235 wrote today in chat:
So we better merge this as soon as possible.
Once this is merged, there should not be strong changes to expect. For example, if one day we implement HDR rendering, it will change neither the colorspace nor the blend computations; it will just add more precision.

This effort is also required (but not sufficient) to get true PBR.

It is assumed that once this is merged, the light computation formulæ will be stable (except for PBR, which is still work-in-progress).