Reply to Bad mesh shader performance
"intended to enable use-cases that cannot be expressed as draws (such as dynamic geometry expansion / culling)"

There is culling. Today I tested a compilation variant that uses a local variable instead of a shared-memory array. This happens to be valid for my shaders, since there is a 1-to-1 match between gl_LocalInvocationID and the vertex. Shader example: https://shader-playground.timjones.io/641b24c9f6700a03eb9f69414ebbf22b

The FPS is still roughly as bad as it was. So far it doesn't look like mesh shaders work well on M1. Could this be some sort of driver bug? I can understand something like a 5-10% performance regression, since it's a new feature, but not 200%.
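For reference, the local-variable variant described above could look roughly like this in MSL (a hedged sketch, not the linked shader; the struct layout, mesh limits, and placeholder values are assumptions for illustration):

```metal
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
};

// Hypothetical limits: up to 64 vertices / 126 triangles per mesh threadgroup.
using TriMesh = mesh<VertexOut, void, 64, 126, topology::triangle>;

[[mesh]] void mesh_main(TriMesh out,
                        uint tid [[thread_index_in_threadgroup]])
{
    // Instead of staging vertices in a threadgroup (shared) array, each
    // thread builds its vertex in a local variable and writes it directly.
    // This is only valid with a 1-to-1 invocation-to-vertex mapping.
    VertexOut v;
    v.position = float4(0.0); // placeholder; a real shader computes this
    out.set_vertex(tid, v);

    if (tid == 0) {
        out.set_primitive_count(0); // placeholder primitive count
    }
}
```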
Dec ’22
Reply to Bad mesh shader performance
"Metal-supported devices implement mesh shaders through emulation"

Very interesting, many thanks @philipturner! Unfortunately I can't really test on anything other than M1. Meanwhile I have a few new questions for @Apple on the performance side:

1. Are there any caps/properties an application can check at runtime to know that mesh shaders are emulated? (I don't really want to black-list devices.)
2. AFAIK at least one mobile vendor does not run the vertex shader natively, splitting it into a position shader plus a varying shader instead. Is it the same for M1 or not?
3. Would it make sense to reduce the varying count and rely on [[barycentric_coord]] as much as possible?

Thanks!
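As far as I know, Metal exposes no property that distinguishes native from emulated mesh shading; the closest runtime check is whether the device supports the Metal 3 family at all, which mesh shaders require. A minimal Swift sketch (the benchmarking fallback in the comment is a suggestion, not an Apple-documented technique):

```swift
import Metal

// Sketch: check for Metal 3 family support, which mesh shaders require.
// There is no public cap that says "mesh shaders are emulated here".
if let device = MTLCreateSystemDefaultDevice() {
    let supportsMetal3 = device.supportsFamily(.metal3)
    print("Metal 3 (mesh shader capable): \(supportsMetal3)")

    // Possible workaround in lieu of a cap: at startup, time a small mesh
    // dispatch against an equivalent vertex-shader path and pick the faster one.
}
```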
Jan ’23