Replies: 2 comments
-
Thanks for checking this out and the kind words! A couple of answers:
There has been a lot of work done since 0.9; I highly suggest using master. We are one regression away from a release.
-
I did try to build using the current master; IIRC an internal dependency-version mismatch prevented it, though that was probably user error. If a release is near, I could try again or just wait for 0.10. The bindless stuff isn't a complete blocker yet; there are other moving parts to work on in the meantime. I just don't know of any alternative for the core RT bits. The host side would IMO benefit a lot from a 'plugin' architecture: GPU code, its interface, and maybe a simple execution graph specified in a generic way, with much of the supporting machinery left to the host implementation. I cite the audio plugin scene as evidence in favor 😄 Thank you for the quick reply.
-
Hello.
Rust GPU is a cool concept, and in trying it recently I'm surprised how much of it 'just works'.
I do have a few not-fully-formed questions and comments that don't really deserve a discussion thread or issue each. Hope that's alright.
I've placed the Rust GPU shader code in a workspace separate from the host code. This is slightly inconvenient compared to using, say, HLSL, which can be written as inline strings or emitted from reflection/macros, and therefore placed alongside the related parts of the host code.
I believe this separation is necessary (in my case, anyhow), as the CPU side is built with the mainline compiler, and Cargo cannot specify different toolchains per-target or per-package within a single workspace.
Is my understanding correct?
Has this limitation been discussed with the Rust/Cargo developers? Is there a chance it's a temporary inconvenience?
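For reference, my current workaround is a separate `shaders/` workspace carrying its own `rust-toolchain.toml`, while the host workspace stays on stable. A minimal sketch (the nightly date and component list here are placeholders; use whatever your rust-gpu version actually pins):

```toml
# shaders/rust-toolchain.toml
# The shader workspace pins the nightly its rust-gpu version requires;
# the host workspace, being separate, keeps its default stable toolchain.
[toolchain]
channel = "nightly-2023-01-01" # placeholder date — match your rust-gpu release
components = ["rust-src", "rustc-dev", "llvm-tools-preview"]
```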
Rust Analyzer features like autocomplete and error hints do not work in shader code. I understand this to be due to the nonstandard target_arch cfg and RA's inability to evaluate non-default cfg branches. Is there a workaround, or are we doomed to write shaders without assistive tooling?
SpirvBuilder appears able to produce debug info and readable SPIR-V, but only in 'single module' mode. I may switch to building in that mode if this is a concrete limitation, as reading the IR can save a ton of time when debugging and performance tuning. If it's only a matter of multi-module mode being less used and lagging a bit, that's understandable.
My first attempt at using Rust GPU involves porting a realtime hybrid path tracer from HLSL. This seems like a reasonable test case, comprising multiple raster and compute passes with varied techniques and workloads within each. The ease of sharing code between the host and device contexts is already proving invaluable.
Unfortunately I've already become stuck.
I need to turn the geometry/primitive/instance indices obtained during ray traversal into geometry and material data in order to progress the ray and shade the result. The renderer must access vertex, index, and other buffers arbitrarily, and the scene data is spread across many of them.
Is this currently possible (as of 0.9)?
Binding these naively as &[&[T]] won't compile, and I was unable to get RuntimeArray to work, though I'm not sure whether that's due to inexperience, a lack of documentation, or something else.
TypedBuffer looks related but isn't available yet, and how all of these are meant to be used together is unclear.
ByteAddressableBuffer seems intended to emulate a Buffer Reference. If it did, I could bind a single-level buffer of Instance objects, each providing the Device Address of the relevant vertex and index buffers; but constructing one seems to require a per-buffer &[u32], which would still require a nested slice binding, as far as I can tell.
I'm not able to formulate this as a single simple question, but could one of you clarify Rust GPU's bindless design: does it work currently, and if not, approximately when will it?
If so, was it intended to be 'general', i.e. applicable to graphics in addition to GPGPU, or was it to be compute-focused?
With the renewed interest in Rust CUDA, is a host API now less interesting?
I've personally found most of the community-built Vulkan/DX12-abstracting APIs to be completely terrible.
There are a few exceptions, but these are small-scale efforts that progress slowly, and there is always the concern that future advancements in graphics technology will not be supported by them, or not in a timely fashion.
Probably too off-topic for this thread, but considering how much potential there is in a Rust GPU shading language, the thought of a unified interface is appealing.
There are a few automatic differentiation crates available for Rust. Can any of them be used with Rust GPU?
Slang, which by its recent inclusion in the Vulkan SDK I take to be the 'future default' shading language, has autodiff as a standard feature.
Built-in autodiff isn't necessary here if it can be achieved through other means, but I'm unable to find an example of anyone doing so.
As for external crates and their suitability for use in shaders, what are the criteria?
Is no_std the only hard requirement, or are there others?
That's a mess, but I've already rewritten it twice. Some of it may be answerable 😃 Thanks!