
WIP: implement colorspace support (sRGB, etc.) #1543

Draft · illwieckz wants to merge 7 commits into master from illwieckz/srgb-glsl

Conversation

@illwieckz (Member) commented Feb 3, 2025

The purpose of this change is to achieve linear blending; this is a continuation of:

Some context:

  • Everything should be displayed in sRGB space.
  • Everything should be computed in linear space.

Textures are in sRGB space, while lightmaps are usually in linear space. So the computation for applying a lightmap to a texture and rendering it is:

  • convertToSRGB( convertFromSRGB( texture ) * light )

Q3map2 has a trick to also store lightmaps in sRGB space; this makes it possible to bias the storage and allocate more precision to some wanted values (it makes blacks less banded, if I'm right). When this Q3map2 option is used, the computation for applying a lightmap to a texture and rendering it is:

  • convertToSRGB( convertFromSRGB( texture ) * convertFromSRGB( light ) )
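A minimal per-channel C++ sketch of both computations (the sRGB transfer functions are the standard piecewise ones; the blend helper names, apart from convertFromSRGB/convertToSRGB used in the formulas above, are illustrative):

```cpp
#include <cmath>

// Standard sRGB electro-optical transfer function (per channel).
float convertFromSRGB( float c )
{
	return c <= 0.04045f ? c / 12.92f : std::pow( ( c + 0.055f ) / 1.055f, 2.4f );
}

// Inverse: encode a linear value back to sRGB.
float convertToSRGB( float c )
{
	return c <= 0.0031308f ? c * 12.92f : 1.055f * std::pow( c, 1.0f / 2.4f ) - 0.055f;
}

// Linear lightmap: only the texture needs decoding.
float blendLinearLight( float texture, float light )
{
	return convertToSRGB( convertFromSRGB( texture ) * light );
}

// q3map2 -sRGB lightmap: the lightmap needs decoding too.
float blendSRGBLight( float texture, float light )
{
	return convertToSRGB( convertFromSRGB( texture ) * convertFromSRGB( light ) );
}
```

With a fully white light (light = 1.0), both variants return the texture value unchanged, which is the expected no-op.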

OpenGL provides features to declare whether a stored image is in linear or sRGB space. When those features are used, OpenGL automatically converts the colors to linear space when sampling them, and converts them back to sRGB when displaying.

I still want to start with a fully explicit implementation where we do the conversions ourselves, because it makes it easier to control what is happening at every step with our current engine design. It's very unlikely that our engine runs on a graphics card that doesn't support sRGB image OpenGL formats (even my 23-year-old Radeon 9700 Pro supports them, if I'm right). Still, our current engine design may not make it easy to use an implicit OpenGL implementation:

  • When we load a light-style lightmap, we may not yet know that it is a lightmap, so we may not know which format it is in at the moment we need to (when uploading it to the GPU).
  • When we load an image, we may not yet know what kind of image it is. For example, a normal map is always in linear space, but some legacy shader keywords make it possible to first load and upload the image to GPU memory, and only tell whether it is a normal map after the OpenGL format has already been selected.

There can be other shortcomings like that, but there can also be things that require some in-engine conversions. For example, the rgbgen colors are assumed to be in sRGB space, because if people copy RGB values from a color picker in an image editor like GIMP or Photoshop, those values are in sRGB. So we need to convert those colors back to linear in the C++ engine code.

Migrating to OpenGL sRGB features is something we can postpone to the future; it will reduce the amount of GLSL code and may limit precision loss (so, more performance and less color imprecision), but we don't require it to get a viable product. And we'd better get proper colorspace management as soon as possible.

My effort to achieve proper colorspace management is now 6 years old.

@sweet235 wrote today in chat:

<Sweet> since almost a year, i cannot decide on what specularity to use for the atcshd texture package
<Sweet> because there is always a change coming up

So we'd better merge this as soon as possible.

Once this is merged, there should not be strong changes to expect. For example, if one day we implement HDR rendering, it will change neither the colorspace nor the blend computations; it will just add more precision.

This effort is also required (but not enough) to get true PBR.

It is assumed that once this is merged, the light computation formulæ will be stable (except for PBR, which is still work-in-progress).

@illwieckz illwieckz added A-Renderer T-Feature-Request Proposed new feature labels Feb 3, 2025
@illwieckz illwieckz marked this pull request as draft February 3, 2025 19:48
@illwieckz (Member Author)

Here is an arachnid2 build, to be used with that branch:

I need help to understand why the creep is so bright:

unvanquished_2025-02-03_192748_000

Here is the creep material:

gfx/buildables/creep/creep
{
	polygonoffset
	twoSided
	imageMinDimension 16
	{
		clamp
		diffuseMap gfx/buildables/creep/creep_d
		specularMap gfx/buildables/creep/creep_s
		normalMap gfx/buildables/creep/creep_n
		blendfunc blend
		rgbGen identity
		alphaGen vertex
		alphaFunc GE128
	}
}

I guess it should be blended with the lightgrid. Models lit with the lightgrid don't look that wrong.

@VReaperV (Contributor) commented Feb 3, 2025

Here is an arachnid2 build, to be used with that branch:

I need help to understand why the creep is so bright:

[…]

I guess it should be blended with the lightgrid. Models lit with the lightgrid don't look that wrong.

What happens if you use overbright clamping? That looks like a light factor multiplication bug.

@VReaperV (Contributor) commented Feb 3, 2025

I also don't really get why you want to store everything in sRGB and do extra conversions (which include pow) back and forth instead of just using more precision for textures, which should be enough.

@slipher (Member) commented Feb 4, 2025

Q3map2 has a trick to also store lightmaps in sRGB space; this makes it possible to bias the storage and allocate more precision to some wanted values (it makes blacks less banded, if I'm right). When this Q3map2 option is used, the computation for applying a lightmap to a texture and rendering it is:

* `convertToSRGB( convertFromSRGB( texture ) * convertFromSRGB( light ) )`

How does that interact with overbright? Does it define some extended sRGB curve that can operate on the range [0, 2^overbrightBits]? Or does it divide by 2^n first and then use standard sRGB?

  • some legacy shader keywords make it possible to first load and upload the image to GPU memory, and only tell whether it is a normal map after the OpenGL format has already been selected.

Which ones are those?

For example the rgbgen colors are assumed to be in sRGB space, because if people copy the RGB values from a color picker in an image editor like GIMP or Photoshop, those values are in sRGB.

On the other hand, it would be kinda weird if stuff like rgbgen wave is not in linear space. But we could do it differently depending on which function is used. And even have an rgbgenColorspace override if needed.

@illwieckz (Member Author)

Answers to @VReaperV:

What happens if you use overbright clamping? That looks like a light factor multiplication bug.

The same happens if I disable overbright to begin with. Overbright clamping is neither a fix nor a feature, and precomputed overbright (and then clamping) cannot be used here anyway, so it is forcibly disabled when such a map is loaded.

I also don't really get why you want to store everything in sRGB and do extra conversions (which include pow) back and forth instead of just using more precision for textures, which should be enough.

Because this is the industry standard. The industry standard here is DarkPlaces/Xonotic: it was the DarkPlaces/Xonotic people who implemented the sRGB support in q3map2 (I had nothing to implement in q3map2), and the de-facto test maps for the feature are Xonotic maps. Xonotic also went the way of converting the lightmap to sRGB as a storage-precision trick, likely because it makes it possible to bias the storage precision while retaining compatibility with in-BSP lightmaps (an 8-bit-per-channel array with fixed dimensions), and therefore without breaking the BSP format.

We at Unvanquished discourage using in-BSP lightmaps, but thanks to that trick the DarkPlaces/Xonotic people made it possible for mappers to just add the single -sRGB q3map2 command-line option to their existing compile commands (whether using internal lightmaps or not) to get proper colorspace conversion when raytracing the lightmap, bias the lightmap for precision in wanted areas, and keep Q3 BSP format compatibility, all at once.

I would like to see HDR lightmaps implemented; there exist multiple implementations in some q3map2 forks, most of them dead. But this is for the future.

The first step is to achieve what DarkPlaces/Xonotic already does, which is the easiest target because that's their standard pipeline and we have a standard test bed for it in the form of their own assets.

This branch already has some code to decide whether to convert textures and lightmaps separately, on purpose for later HDR work that would store lightmaps as linear HDR while q3map2 still does the conversions when raytracing (q3map2 already has separate options too). But right now, the best we can do is the same as DarkPlaces/Xonotic: it is the least effort.

So I don't want to “store everything in sRGB”; I want to do the least effort, have a test bed and another piece of software as a reference, so I do it the way DarkPlaces/Xonotic does.

@illwieckz illwieckz force-pushed the illwieckz/srgb-glsl branch from 85ae75e to eeb0ce4 Compare February 4, 2025 12:12
@illwieckz (Member Author) commented Feb 4, 2025

Replying to @slipher:

Q3map2 has a trick to also store lightmaps in sRGB space; this makes it possible to bias the storage and allocate more precision to some wanted values (it makes blacks less banded, if I'm right). When this Q3map2 option is used, the computation for applying a lightmap to a texture and rendering it is:

* `convertToSRGB( convertFromSRGB( texture ) * convertFromSRGB( light ) )`

How does that interact with overbright? Does it define some extended sRGB curve that can operate on the range [0, 2^overbrightBits]? Or does it divide by 2^n first and then use standard sRGB?

Actually, thank you for asking, because I forgot that this patch initially targeted my overbright implementation (which multiplied at the end of the render pipeline, after the light is blended), and I then had to port it to the new overbright implementation (which multiplies before blending the light).

Here is the pseudo-code with overbright:

  • convertToSRGB( convertFromSRGB( texture ) * overbright( convertFromSRGB( light ) ) )

This would also work, because of multiplication commutativity:

  • convertToSRGB( overbright( convertFromSRGB( texture ) * convertFromSRGB( light ) ) )

The first one is what is implemented (simpler).

The very important thing to remember is that ALL computations should be done in linear space.
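Since overbright here is just a scalar multiply in linear space, the two orderings above agree. A self-contained numeric sketch (the shift-based overbright() is an illustrative assumption, not the engine's actual code; fromSRGB/toSRGB are the standard transfer functions):

```cpp
#include <cmath>

float fromSRGB( float c )
{
	return c <= 0.04045f ? c / 12.92f : std::pow( ( c + 0.055f ) / 1.055f, 2.4f );
}

float toSRGB( float c )
{
	return c <= 0.0031308f ? c * 12.92f : 1.055f * std::pow( c, 1.0f / 2.4f ) - 0.055f;
}

// Hypothetical overbright: scale by 2^overbrightBits in linear space.
float overbright( float c, int overbrightBits )
{
	return c * float( 1 << overbrightBits );
}

// Ordering 1: overbright applied to the light only (what is implemented).
float blendA( float texture, float light, int bits )
{
	return toSRGB( fromSRGB( texture ) * overbright( fromSRGB( light ), bits ) );
}

// Ordering 2: overbright applied to the product; same result.
float blendB( float texture, float light, int bits )
{
	return toSRGB( overbright( fromSRGB( texture ) * fromSRGB( light ), bits ) );
}
```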

  • some legacy shader keywords make it possible to first load and upload the image to GPU memory, and only tell whether it is a normal map after the OpenGL format has already been selected.

Which ones are those?

This legacy Doom 3 syntax:

textures/castle/brick
{
	{
		map textures/castle/brick_d
		blend diffuseMap
	}
	{
		map textures/castle/brick_n
		blend normalMap
	}
}

Or this legacy XreaL syntax:

textures/castle/brick
{
	{
		map textures/castle/brick_d
		stage diffuseMap
	}
	{
		map textures/castle/brick_n
		stage normalMap
	}
}

But actually, by re-reading tr_shader.cpp, it looks like parsing map textures/castle/brick_n only calls ParseMap(), and the call to LoadMap() may be delayed until after stage normalMap has been parsed.

For example the rgbgen colors are assumed to be in sRGB space, because if people copy the RGB values from a color picker in an image editor like GIMP or Photoshop, those values are in sRGB.

On the other hand, it would be kinda weird if stuff like rgbgen wave is not in linear space. But we could do it differently depending on which function is used. And even have an rgbgenColorspace override if needed.

Yes, that's a problem. We may just assume they are in linear space. Old maps are expected to mix different color spaces, but we may decide that new maps use linear space, for example. One thing to know is that the q3map2 -sRGB option also implies -sRGBcolor, which means material lines like q3map_lightRGB are converted from sRGB, so it would be weird if, in the same material, the rgbGen lines were considered linear…

@illwieckz (Member Author) commented Feb 4, 2025

Here is an example of such material:

textures/shared_vega/squarelight01_blue_1500
{
	qer_editorImage textures/shared_vega_src/squarelight01_blue_p

	// blue #73C0D7
	q3map_lightRGB .6078 .7529 .8431
	q3map_surfacelight 1500

	{
		diffuseMap textures/shared_vega_src/squarelight01_d
		normalMap textures/shared_vega_src/squarelight01_n
		normalFormat -X -Y Z
		specularMap textures/shared_vega_src/squarelight01_s
	}
	{
		map textures/shared_vega_src/squarelight01_a
		blend add
		red .6078
		green .7529
		blue .8431
	}
}

We'd better assume q3map_lightRGB and red, green, blue are in the same colorspace.

When -sRGB or -sRGBcolor is used as a q3map2 option, q3map_lightRGB values are converted from sRGB.

Edit: this material is a good example of a case where people would just use a color picker and then write the values in sRGB space.
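The workflow described here (pick a color, paste the 8-bit values into the material) can be sketched as follows; the helper names are illustrative, and the decode step shows what a -sRGBcolor-style conversion does before any lighting math:

```cpp
#include <cmath>

// Normalize an 8-bit color-picker channel to [0, 1]; the result is still sRGB-encoded.
float channelToFloat( int value )
{
	return value / 255.0f;
}

// Decode the sRGB-encoded value to linear space (standard sRGB transfer function),
// as a -sRGBcolor-style option would do before using the value in lighting math.
float sRGBToLinear( float c )
{
	return c <= 0.04045f ? c / 12.92f : std::pow( ( c + 0.055f ) / 1.055f, 2.4f );
}
```

For example, an 8-bit green channel of 192 normalizes to 192 / 255 ≈ 0.7529, matching the q3map_lightRGB green value above; only the decoded value is suitable for linear-space lighting math.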

@illwieckz (Member Author) commented Feb 4, 2025

Also, something very bold currently done in my implementation is that the specular map is assumed to be in sRGB; of course that's wrong. Specular maps should be in linear space.

But people have been making specular maps since long before anyone cared about computing lights in linear space. Many specular maps may have just been loosely sampled from textures, and thus made in sRGB space. And anyway, they were tested with engines doing the wrong computations.

Here is the same branch with specular maps assumed to be in sRGB space:

unvanquished_2025-02-04_135240_000

Now the same branch with specular maps assumed to be in linear space:

unvanquished_2025-02-04_135406_000

See how in the first image the egg looks correct and the overmind not that bad.

See also how in the second image the egg looks wrong and the overmind looks better.

So it looks like wrongly assuming a specular map is in sRGB space is less problematic than wrongly assuming it is in linear space; that's why, as a proof of concept, I started with that intentionally wrong assumption.

But I guess I will implement a material keyword to tell the format (like we already do for normal maps), so we can assume specular maps are in linear space as they should be, and configure the wrong ones, like the egg one, as sRGB.

Actually, this kind of material configuration keyword would need a redesign if we want to use OpenGL sRGB formats instead of explicit GLSL code, as the specularMap path/to/image keyword loads the image before the stage has finished parsing, if I'm right.

This wrong assumption has nothing to do with the creep being too bright anyway.

@illwieckz (Member Author)

So, if I implement a special material keyword to tell that the specular map is in sRGB colorspace and use it for the egg, while using linear space for specular maps by default, here is how that looks:

unvanquished_2025-02-04_162805_000

@illwieckz (Member Author)

But then some specular maps are too strong. I guess some of our texture packs have specular maps in sRGB.

unvanquished_2025-02-04_163040_000

unvanquished_2025-02-04_163054_000

So we will have to add such a keyword to whole texture packs.

@illwieckz (Member Author) commented Feb 4, 2025

The creep is too bright because it is not linearized (so it gets delinearized at the end; it's like applying gamma 2.2 to it). It is not linearized because the texture is loaded before the map, so at upload time it is not yet known that it should be linearized.

Debug: Uploading image gfx/buildables/creep/creep_n (2048×2048, 1 layers, 0xde1 type, 0x8dbd format) 
…
Debug: ----- RE_LoadWorldMap( maps/plat23.bsp ) ----- 

@illwieckz (Member Author)

In the cgame, trap_R_LoadWorldMap() is called before trap_R_RegisterShader( "gfx/buildables/creep/creep", RSF_DEFAULT ), but maybe trap_R_LoadWorldMap() is asynchronous…

@illwieckz (Member Author) commented Feb 4, 2025

Moving the call to trap_R_LoadWorldMap() before this in cgame fixes the creep bug:

 	// load configs after initializing particles and trails since it registers some
	CG_UpdateLoadingStep( LOAD_CONFIGS );

It is probably because configs/missiles/lockblob.model.cfg or configs/missiles/slowblob.model.cfg loads the creep shader before we attempt to load it explicitly.

Actually, I'd better move it to the very beginning, before loading particles and trails.

@VReaperV (Contributor) commented Feb 4, 2025

The creep is too bright because it is not linearized (so it gets delinearized at the end; it's like applying gamma 2.2 to it). It is not linearized because the texture is loaded before the map, so at upload time it is not yet known that it should be linearized.

Debug: Uploading image gfx/buildables/creep/creep_n (2048×2048, 1 layers, 0xde1 type, 0x8dbd format) 
…
Debug: ----- RE_LoadWorldMap( maps/plat23.bsp ) ----- 

This doesn't require changing the image data itself, right? Just some shaderStage_t data?

@illwieckz (Member Author)

This doesn't require changing the image data itself, right? Just some shaderStage_t data?

Yes.

@VReaperV (Contributor) commented Feb 4, 2025

This doesn't require changing the image data itself, right? Just some shaderStage_t data?

Yes.

What if you just go through all non-UI shaders after loading a map and change those values then?

@illwieckz (Member Author)

The r_rimLighting feature produces wrong results in linear space. In some way I'm not surprised, as it was likely created for engines not using linear space.

On the other hand, r_bloom works out of the box in linear space. Maybe its values were meant for an engine in linear space but were then broken in our engine, with that brokenness cancelled out by the clamping bug; in other words, maybe it was just never adjusted to fit the non-linear space.

@illwieckz illwieckz force-pushed the illwieckz/srgb-glsl branch from eeb0ce4 to 1117112 Compare February 5, 2025 13:00
@illwieckz (Member Author)

I added code to disable bloom when not using a map in linear space (so bloom will be disabled with all maps produced until today).

I also modified rim-lighting to multiply by linearized factors when computing in linear space. I don't know if it's right, but it looks OK.

src/engine/renderer/tr_init.cpp
@@ -4757,6 +4761,15 @@ static bool ParseShader( const char *_text )
return true;
}

static int packLinearizeTexture( bool linearizeColorMap, bool linearizeMaterialMap, bool linearizeLightMap )
{
/* HACK: emulate three-bits bitfield
Member

What's with the bit tests written in a way that's very hard to read, instead of just & 0x2 or whatever? The u_ColorModulate stuff uses normal bitwise operations.

Member Author

As far as I know, bitwise operators are only supported in GLSL starting with GLSL 1.30, i.e. OpenGL 3.0.

Contributor

I don't know how far back you want to go, but GLSL 1.20 definitely supports bit-wise operators.

Member Author

OK thanks, GLSL 1.20 is fine (OpenGL 2.1), so let's use them then.
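As a reference for that change, here is a sketch of the pack/unpack pair with plain bitwise operations (illustrative only; the actual packLinearizeTexture and GLSL code may differ):

```cpp
// Pack three booleans into the low bits of an int (sketch of the idea
// behind packLinearizeTexture; the real engine code may differ).
int packLinearizeTexture( bool linearizeColorMap, bool linearizeMaterialMap, bool linearizeLightMap )
{
	return ( linearizeColorMap ? 1 : 0 )
		| ( linearizeMaterialMap ? 1 << 1 : 0 )
		| ( linearizeLightMap ? 1 << 2 : 0 );
}

// On the shader side the same test would be written with bitwise operators, e.g.:
//   bool linearizeMaterialMap = ( u_LinearizeTexture & 2 ) != 0;
bool unpackBit( int packed, int bit )
{
	return ( packed & ( 1 << bit ) ) != 0;
}
```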

@slipher (Member) commented Feb 7, 2025

So this doesn't do anything different unless you have a map compiled with sRGB? I'm afraid it would be a nightmare to design any non-map graphical assets (player and buildable models, weapon effects, etc.) if they have to work with 2 very different blending modes. Is there any way we could get linear blending to work with existing maps?

Like could we render all the BSP stuff with the old-fashioned rendering pipeline, run an srgb2linear shader over the whole screen, then render everything else in linear blending mode? Of course this wouldn't work for translucent surfaces in the map.

@illwieckz (Member Author) commented Feb 7, 2025

So this doesn't do anything different unless you have a map compiled with sRGB?

It is meant to render legacy maps the way they were rendered before.

I'm afraid it would be a nightmare to design any non-map graphical assets (player and buildable models, weapon effects, etc.) if they have to work with 2 very different blending modes.

Why? When a map uses the linear blending mode, every texture is linearized and delinearized, so for example with a fullbright light it should produce the same result. Player, weapon and buildable models use the map light (light grid) in either case. It just happens that with the old way, the precomputed light attenuation and things like that are not correct: you can get a very dark shadow in a very lit room in a non-physically-correct way. Basically, with the old way some contrast is too strong to be real.

Is there any way we could get linear blending to work with existing maps?

All the data used for linear blending is produced by q3map2 when raytracing the lightmaps/lightgrid, before the assets are released. So no: here the engine just blends the data computed by q3map2, and existing maps have broken precomputed data we can't fix afterward.

What this branch does in-engine with compatible maps is just make sure the data is in the right colorspace when blending the texture and the precomputed light; the precomputed light was already generated by q3map2 with either incorrect or correct equations. The engine just makes sure the data is correctly processed when q3map2 used the correct equations. When q3map2 didn't, the engine renders “as before” for backward-compatibility purposes.

The only thing we really have to care about is real-time light, because there we do the job of q3map2 but in real time, hence this thread:

vec3 high = pow((color + 0.055f) * (1.0f / 1.055f), vec3(2.4f));

color = (yes * low) + (no * high);
#elif defined(SRGB_BOOL)
Contributor

Why the 3 different approaches doing the same thing?

color = pow(color, vec3(gamma));

#elif defined(SRGB_NAIVE)
// (((c) <= 0.04045f) ? (c) * (1.0f / 12.92f) : (float)pow(((c) + 0.055f)*(1.0f/1.055f), 2.4f))
Contributor

The thresholds used don't match this comment.

float gamma = 2.2;
color = pow(color, vec3(1/gamma));
#elif defined(SRGB_NAIVE)
// (((c) < 0.0031308f) ? (c) * 12.92f : 1.055f * (float)pow((c), 1.0f/2.4f) - 0.055f)
Contributor

Shouldn't this be (((c) <= 0.0031308f) ? (c) * 12.92f : 1.055f * (float)pow((c), 1.0f/2.4f) - 0.055f)?

I'd also drop the () around c and around the whole formula in these comments, to reduce parentheses noise and make them more readable.

#if defined(USE_GRID_LIGHTING) || defined(USE_GRID_DELUXE_MAPPING)
void ReadLightGrid( in vec4 texel, in float lightFactor, out vec3 ambientColor, out vec3 lightColor ) {
void ReadLightGrid( in vec4 texel, in float lightFactor, in bool linearizeLightMap, out vec3 ambientColor, out vec3 lightColor) {
Contributor

linearizeLightMap is unused.

@@ -2806,6 +2806,36 @@ class u_ShadowTexelSize :
}
};

class u_LinearizeTexture :
Contributor

As I said in #1034, this should go into u_ColorModulateColorGen, instead of creating multiple bit-fields.

@illwieckz (Member Author) commented Feb 7, 2025

Yep, I can investigate that later; I'm currently focusing on getting everything to render correctly first (this patch was started 6 years ago, so other things in the engine have changed, and it's possible to port that code to other mechanisms).

@slipher (Member) commented Feb 7, 2025

I'm afraid it would be a nightmare to design any non-map graphical assets (player and buildable models, weapon effects, etc.) if they have to work with 2 very different blending modes.

Why? When a map uses the linear blending mode, every texture is linearized and delinearized, so for example with a fullbright light it should produce the same result. Player, weapon and buildable models use the map light (light grid) in either case.

I don't think you can count on anything working both ways except single-stage opaque shaders. Anything with translucency (there is a lot of that with flames, explosions, projectiles etc.) will be affected, and any multi-stage shader will be affected by blending changes.

@slipher (Member) commented Feb 8, 2025

What about r_halfLambertLighting? That may be another part of the universe of hacks to work around sRGB-unaware rendering pipelines.

@slipher (Member) commented Feb 8, 2025

As an example of potential problems with non-map assets in a new color space, the rifle impact dust puff is too bright and looks bad. The shader gfx/weapons/rifle/puff has 3 stages so it may be more affected by blending differences.

unvanquished_2025-02-07_223950_000
