WebGPU: Support for custom materials; simple glTF-compliant material #717
TheBlek wants to merge 28 commits into gkjohnson:webgpu-pathtracer
Conversation
Thanks for this - with the last couple PRs there will be quite a few conflicts here. I'm not sure how easy it is, but if possible it would be nice to break this up into smaller chunks for review.

A few things on my mind as we move into the BRDF implementation - there are definitely some improvements that can be made to the handling of BRDF blending from the previous implementation. glTF, which is at least the base model I'd like to follow here, specifies the BRDF blending hierarchy, and some of the extensions like iridescence specify their terms internally, as well. It's been a long time since I've looked at the material code but it might be a good opportunity to look at that and try to match some of the glTF notation for consistency.

I'm wondering if it makes sense to try to get a lot of the material qualities and BRDFs passing some of the more complicated tests (like the furnace test) before moving on to things like next event estimation and bidirectional path tracing? Something to think about.

These aren't things I'm expecting in a first PR, to be clear - just things that are on my mind and to consider for what would come next.
Decided to split this into multiple PRs:
Let me know if you think it could/should be done differently. Once (1) merges, the other PRs will be unblocked. This PR's focus will be the actual BRDF implementation. Do you think it's better to build up the implementation over multiple iterations - first the base glTF spec, then adding extensions? Could you expand on testing? Do you usually use some other implementation (Blender?) as ground truth? I've read about the furnace test but was unable to find it in the examples. Does it need to be constructed manually?
This seems good to me. I think this is a good opportunity to build this up again and validate it against a known specification while developing.
The viewer test demo (which you've already done the leg work for migrating here) is probably the best way to test these things. For some background, the viewer test page allows for loading and rendering the litany of gltf-sample-assets models, which should cover the broad set of glTF features. For comparisons, you'll notice that there is a dropdown with other comparison options (Babylon, Stellar, etc. - some of which are path tracers as well) that are currently broken.

This set of screenshots is a good litmus test for ensuring we're covering the right features and should be a good way to catch any inconsistencies. A good long term goal would be to see if we can get our screenshots updated in the Khronos gltf-render-fidelity repo, as well, but one step at a time 😁
I forgot to mention that the material database demo can also be useful for testing against materials and Blender screenshots from physicallybased.info, which can be good for evaluating against something like Cycles.
@gkjohnson Hi, sorry for the long wait - I've been going through the math a number of times here. This PR now adds:
There is a problem with bright fireflies appearing some of the time, so I had to clamp the impact of an individual ray to 4.0, but I'm not sure they should be there in the first place. It seems we would need to port filter glossy as well. While on that, I noticed a couple of things that could be improved in the WebGL version:
```glsl
float fl = schlickFresnel( wi.z, 0.0 );
float fv = schlickFresnel( wo.z, 0.0 );
float rr = 0.5 + 2.0 * surf.roughness * fl * fl;
float retro = rr * ( fl + fv + fl * fv * ( rr - 1.0f ) );
float lambert = ( 1.0f - 0.5f * fl ) * ( 1.0f - 0.5f * fv );
```

Note the `rr` term: in the paper the retro-reflection factor is `RR = 2.0 * roughness * cos²(θd)` - it uses the half-vector angle θd rather than the Schlick weight `fl`, and has no `0.5` offset. I think it would be good to fix those issues in GLSL as well, in another PR.
This is great! Thanks for taking the time to deep dive into it -
Yeah I think this may become more clear as we continue to work on things. Eventually we'll want to support multiple BSDFs so we'll need to find a way to allow the user to associate one with their materials. The default ones should be able to be automatically mapped internally, though (e.g. Standard / PhysicalStandard -> glTF BSDF, BasicMaterial -> simplified / immediately terminated BSDF)
What do you mean by this?
Yeah fireflies are a pain but clamping is something we should avoid doing. Fireflies can happen particularly when there's a low probability path that happens to hit a particularly bright light after multiple bounces (usually after a diffuse-specular-diffuse bounce pattern). Next Event Estimation can help a bit and I expect bidirectional path tracing will help, as well, but I don't think it will get rid of them entirely. The "filterGlossyFactor" was intended to help with this. Basically it artificially increases the roughness of subsequent hit materials based on the roughness of the materials the path has already hit. Blender includes an option for this reason, as well. See the "filter glossy" option here and some more discussion here.
Happy to get it fixed in the WebGL version, as well -- I'll have to take some time later to look at the code here in more detail, but in the meantime I've added a "webgpu_furnace_test" to evaluate energy conservation, since it looks like the model had been removed from the glTF sample data. If the BSDF is implemented and blended correctly then everything in the scene should render the same as the background color. You can see this in three.js' furnace demo here. This is what it looks like right now in this PR - my understanding is this can be hard to get exactly right but it may be worth taking a look to see if it can be improved:
We could reserve some of the bits from the material index to signify which BSDF should be used. Also, with different, complex BSDFs we could explore dividing ProcessHitsKernel into multiple kernels to improve occupancy. Or sort hits by BSDF?
What I meant is that not every BSDF implementation can fit into the glTF framework. For example, OpenPBR handles coating quite differently from what glTF implies: https://academysoftwarefoundation.github.io/OpenPBR/#model/coat. It seems to alter the roughness of the base material and emulate darkening explicitly, which differs from glTF's approach of basically blending in another specular lobe.
Yeah. And the Blender page on reducing noise is where the idea for clamping samples came from. I agree that we need a filter glossy option; I will look into it a bit later. I've modified the furnace test to be able to switch between the WebGL and WebGPU versions. Added energy compensation from Frostbite's paper for Disney diffuse to conserve energy; here's what it looks like now. I will look into multiple-scattering compensation for GGX next. It should help with high-roughness metals (and hopefully dielectrics).
I understand now - I think we can tackle some of these things as they're requested or as we come to them. Depending on the features it could be that the glTF features are a subset of some of these more complex models. This may not be the case with all the parameters you're referring to, but in this case it sounds like "coat_darkening" is a [0, 1] factor that blends in the implicit darkening effect from the coat IOR. So in glTF's model the "darkening" factor would be implied to always be 0. I think the bigger issue is that these parameters are not present on three.js' material objects, which we're using as the medium for reading these parameters. With nodes, though, it may be more possible to add some of these features on top of the existing materials if we want to make a more complex definition. Problems for later, though, if they're actually needed 😅
Oh, I didn't realize Blender had that option. It sounds like a last resort in Blender, as well.
Added multiple-scattering energy compensation using Turquin's method. Only for conductors for now.
Transmissive materials need support from the light transport algorithm and will not work correctly for now. I think they can be addressed later.
@gkjohnson Should I divide this PR into multiple for better readability?