Post · Replies · Boosts · Views · Activity

SCNParticleSystem Change Current Time
Hello everyone, I have the same question as the one posted in this thread: https://developer.apple.com/forums/thread/74073.

I've been searching for a way to change the current time of an SCNParticleSystem so that I can render it offscreen into a video I'm making. Since the video frames are rendered asynchronously and the time between frame renders isn't necessarily one frame time (1/60 of a second), the particle system doesn't look smooth in the video output, because SCNParticleSystem follows the current system time when it is rendered.

I already tried setting sceneTime, but the particle system doesn't follow the scene time since it uses the system time source. I also tried the answer provided in the other thread:

particleSys.warmupDuration = timeYouWant
particleSys.reset()

but that didn't work: every time I reset the particle system, it despawned the current particles on the screen and spawned a completely different layout of particles.

Is there a way to tell the particle system to render its particles at a specific time? Thanks in advance!
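For context, the deterministic offscreen loop being described would look roughly like the sketch below. This is only an illustration under assumptions: `scene`, `device`, `texture`, and `passDescriptor` stand in for an already-configured setup, and SCNRenderer's render(atTime:viewport:commandBuffer:passDescriptor:) is used as the offscreen entry point, though, as described above, the particle system may still ignore the supplied time and follow the system clock:

```swift
import SceneKit
import Metal

// Sketch: render each video frame at an explicit, fixed timestamp instead of
// letting SceneKit sample the system clock. Assumes `scene`, `device`,
// `texture`, and `passDescriptor` are already configured elsewhere.
func renderFrames(scene: SCNScene,
                  device: MTLDevice,
                  texture: MTLTexture,
                  passDescriptor: MTLRenderPassDescriptor,
                  frameCount: Int,
                  fps: Double = 60) {
    let renderer = SCNRenderer(device: device, options: nil)
    renderer.scene = scene
    let queue = device.makeCommandQueue()!
    let viewport = CGRect(x: 0, y: 0,
                          width: texture.width, height: texture.height)

    for frame in 0..<frameCount {
        // Deterministic per-frame time: frame index divided by frame rate,
        // so frame 30 at 60 fps is rendered at t = 0.5 s regardless of how
        // long the asynchronous encode actually takes.
        let time = Double(frame) / fps
        let commandBuffer = queue.makeCommandBuffer()!
        renderer.render(atTime: time,
                        viewport: viewport,
                        commandBuffer: commandBuffer,
                        passDescriptor: passDescriptor)
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
        // ... read `texture` back and append it to the video here
    }
}
```

The key point of the sketch is that the timestamp fed to the renderer is computed from the frame index, not from wall-clock time.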
Replies: 0 · Boosts: 0 · Views: 536 · Activity: Jan ’21
Synchronizing Metal's MTLRenderPassDescriptor Texture Read/Write on iOS
Hello! Here's my situation: I am using an MTLRenderPassDescriptor to render to a 2D color attachment, and my end goal is to render this effect as efficiently as possible.

Here's my current process:

1. I draw to the output color attachment using drawIndexedPrimitives(...) on the render encoder.
2. I then need to copy the output color attachment's color data to a separate texture so that I can sample it later. I'm using a custom tile shader that copies each imageblock to this auxiliary texture.
3. I then run a fragment shader on the output color attachment which samples the auxiliary texture at three separate offsets: once for the red value, once for the green value, and once for the blue value.

The problem: the fragment shader runs concurrently with the tile shader, so I'm not getting the desired effect; the fragment shader is sampling data that hasn't yet been written by the tile shader.

The question: how can I tell the fragment shader to wait for the tile shader to finish working on all tiles before beginning its fragment work?

Current solution: after calling dispatchThreadsPerTile(...) on the render encoder to run the tile shader, I end encoding, then use a new MTLRenderCommandEncoder for the fragment shader. This is not ideal due to the overhead and memory bandwidth cost of ending and beginning a render encoder. The debugger also tells me that these two encoders can be coalesced into one, but how?

When I use one render pass (one render command encoder), strange artifacts can be seen on the edges of the letters. (The screenshot was taken during an animation, so the artifacts are left over from the previous frame of animation.) This behavior is undesirable. When I use two render passes, the artifacts are no longer present.

Final note: I can't use MTLRenderCommandEncoder's memoryBarrier(...) method, since it isn't available on iOS.
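For concreteness, a minimal sketch of the two-encoder fallback described above, assuming the pipeline states and draw calls are set up elsewhere (they are elided here as comments; `firstPass` and `colorTexture` are placeholder names):

```swift
import Metal

// Sketch of the two-encoder fallback: end the first encoder after the tile
// dispatch, then start a second pass that loads (rather than clears) the
// color attachment. Pipeline states and draws are placeholders.
func encodeTwoPasses(commandBuffer: MTLCommandBuffer,
                     firstPass: MTLRenderPassDescriptor,
                     colorTexture: MTLTexture) {
    // Pass 1: geometry draw, then tile shader copying each imageblock
    // out to the auxiliary texture.
    let encoder1 = commandBuffer.makeRenderCommandEncoder(descriptor: firstPass)!
    // encoder1.setRenderPipelineState(geometryPipeline)
    // encoder1.drawIndexedPrimitives(...)
    // encoder1.setRenderPipelineState(tilePipeline)
    // encoder1.dispatchThreadsPerTile(MTLSize(width: 16, height: 16, depth: 1))
    encoder1.endEncoding()  // pass boundary: tile writes become visible

    // Pass 2: the chromatic-offset fragment shader samples the auxiliary
    // texture. loadAction = .load preserves pass 1's output instead of
    // clearing it at the start of the pass.
    let secondPass = MTLRenderPassDescriptor()
    secondPass.colorAttachments[0].texture = colorTexture
    secondPass.colorAttachments[0].loadAction = .load
    secondPass.colorAttachments[0].storeAction = .store
    let encoder2 = commandBuffer.makeRenderCommandEncoder(descriptor: secondPass)!
    // encoder2.setRenderPipelineState(offsetFragmentPipeline)
    // encoder2.drawPrimitives(...)
    encoder2.endEncoding()
}
```

The encoder boundary is what orders the tile-shader writes before the fragment reads here; the open question above is whether Metal offers a cheaper in-pass ordering on iOS, where memoryBarrier(...) is unavailable.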
Replies: 0 · Boosts: 0 · Views: 550 · Activity: Dec ’21
Architecture-Specific Build Settings in Xcode 13
I'm trying to use architecture-specific build settings in my Xcode framework so that I can handle Apple Silicon Macs and Intel Macs separately. I need this build setting as a macro in Metal to check whether the half data type is supported on the CPU (it's supported on Apple Silicon but not on Intel).

My current implementation uses an xcconfig file with the following line:

IS_X86_64[arch=x86_64] = 1

In my build settings I have the corresponding user-defined setting (from the xcconfig), and from it I create the macro IS_INTEL so it can be used in Metal.

Here's the problem: in theory this should work. However, running my Metal app on my Intel Mac yields 0 for IS_X86_64. The first thing I did was check whether I had set up the build setting correctly, so I replaced

IS_X86_64[arch=x86_64] = 1

with

IS_X86_64[arch=*] = 1

This worked, so I knew the problem had to be that my current architecture wasn't being represented correctly. Digging further into why, it turns out that the value of CURRENT_ARCH (which should hold the architecture the project is being built for) is undefined_arch. In the Xcode 10 release notes, Apple said the following about undefined_arch:

The new build system passes undefined_arch as the value for the ARCH environment variable when running shell script build phases. The value was previously not well defined. Any shell scripts depending on this value must behave correctly for all defined architectures being built, available via the ARCHS environment variable.

However, I'm not building my project from the shell and I don't have any shell script build phases. Does anyone know how I can fix this so that architecture-specific build settings behave as they should?
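For reference, the full shape of the setup being described is roughly the fragment below. This is a sketch, not a verified fix for the undefined_arch behavior: the explicit default value and the MTL_PREPROCESSOR_DEFINITIONS line (Xcode's "Metal Compiler - Preprocessor Definitions" build setting) are assumptions about how the macro reaches the Metal compiler.

```
// Example xcconfig (sketch): give the setting an explicit default so it is
// always defined, then override it only for the x86_64 architecture.
IS_X86_64 = 0
IS_X86_64[arch=x86_64] = 1

// Assumption: forward the value to the Metal compiler as a macro.
MTL_PREPROCESSOR_DEFINITIONS = IS_INTEL=$(IS_X86_64)
```

Then in the .metal source, `#if IS_INTEL` / `#else` / `#endif` can select between a `float` and a `half` representation for CPU-shared data. Note this still depends on the per-arch condition resolving correctly, which is exactly what undefined_arch appears to break.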
Replies: 1 · Boosts: 0 · Views: 2k · Activity: Feb ’22