Posts

Post not yet marked as solved · 1 Reply · 329 Views
I am struggling to figure out how to make a shader that animates each vertex of a model separately using noise. I watched a video on how to do this in Unity, but I think something must be different about how Reality Composer Pro handles its noise nodes?

For example, in this graph I just hooked up the noise node directly to the geometry modifier. In my output you can see the plane is adjusted per-vertex by the noise node. My goal is to animate this like waves by moving the noise. So in this graph I use time with sin to adjust the UV of the noise. This seems to change the noise node so it outputs a single value (I guess that makes sense: since I modify the UV, the result is the single value at that UV in the noise map). I then take that as the Y value and feed it back into the geometry modifier. But now it doesn't work per-vertex; it moves the whole model up and down, based on the single value coming out of the noise map.

How do I make this apply to each vertex of the model individually? This is an example of the output I want in Unity, where the plane is adjusted per-vertex by a scrolling 2D noise node.
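To spell out the difference I mean, here is a rough CPU-side Swift sketch of the two sampling strategies. The noise2D function below is just a hash-based stand-in I wrote for illustration, not RCP's actual noise node:

import Foundation
import simd

// Illustrative hash-based 2D noise, a stand-in for the graph's noise node.
func noise2D(_ p: SIMD2<Float>) -> Float {
    let i = p.rounded(.down)
    let f = p - i
    func hash(_ v: SIMD2<Float>) -> Float {
        let h = sin(v.x * 127.1 + v.y * 311.7) * 43758.5453
        return h - h.rounded(.down)
    }
    func lerp(_ a: Float, _ b: Float, _ t: Float) -> Float { a + (b - a) * t }
    let u = f * f * (3 - 2 * f)                              // smoothstep fade
    let top = lerp(hash(i), hash(i + SIMD2<Float>(1, 0)), u.x)
    let bottom = lerp(hash(i + SIMD2<Float>(0, 1)), hash(i + SIMD2<Float>(1, 1)), u.x)
    return lerp(top, bottom, u.y)
}

// Per-vertex: every vertex samples a different point of the noise field, and
// time only scrolls that field sideways, so the surface ripples like waves.
func waveHeight(vertexUV: SIMD2<Float>, time: Float) -> Float {
    noise2D(vertexUV * 4 + SIMD2<Float>(time * 0.2, 0))
}

// Single value: replacing the UV with a time-only coordinate samples one point
// of the noise for every vertex, so the whole mesh moves up and down together.
func wholeMeshHeight(time: Float) -> Float {
    noise2D(SIMD2<Float>(sin(time), sin(time)))
}

The first function is the behavior I'm after in the graph; the second is what my current node setup effectively does.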
Post not yet marked as solved · 1 Reply · 355 Views
I have been digging into learning shader graphs by watching Unity shader graph content, because lots of the same concepts apply. One thing I noticed was that in Unity, each node in the shader graph has a little preview. I don't think this exists in Reality Composer Pro, but is there any way to mimic it (like, can I hook up a node that allows me to debug the graph at that point)? If not, I'm happy to just file a feedback about it, but I thought I'd ask!
Post not yet marked as solved · 0 Replies · 383 Views
I am very new to shaders and have never used one of the large systems like Unity. However, I have started exploring visionOS programming, and that led me to create some effects for materials in Reality Composer Pro. I have been overwhelmed by the possibilities, but also kind of lost. I understand that RCP's shaders are based on MaterialX, so maybe there are tutorials on the web that cover how to create procedural effects (fire, wind, water, etc.)? I've stumbled through…but it's slow going.

Are there any good resources that talk about how to use the various nodes to create procedural effects? For example, it took me a while to figure out that the "time" node lets me animate cool color changes, especially when combined with various math and remap nodes. Just looking for some basic resources, I think. Would the shader graph tutorials for Unity apply to using RCP? Are the node types similar enough?
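For example, the time-plus-remap trick I mentioned boils down to this math, written out here as plain Swift rather than graph nodes just to make it concrete (the function names are my own, not node names):

import Foundation

// Linearly remaps x from [inLow, inHigh] to [outLow, outHigh],
// the same math a Remap node performs.
func remap(_ x: Float, _ inLow: Float, _ inHigh: Float,
           _ outLow: Float, _ outHigh: Float) -> Float {
    outLow + (x - inLow) / (inHigh - inLow) * (outHigh - outLow)
}

// A time-driven pulse: sin(time) swings between -1 and 1, and the remap
// squeezes it into 0...1 so it can drive a color channel, emission strength, etc.
func pulse(time: Float, speed: Float = 2) -> Float {
    remap(sin(time * speed), -1, 1, 0, 1)
}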
Post not yet marked as solved · 0 Replies · 185 Views
I have read about the various limitations on post-processing the camera feed (Metal rendering is only available in VR mode, there is no access to the camera feed, etc.). Just to be clear, there is currently no way for a third-party developer to do something similar to the 'Summer Light' environment, where a color filter is applied to an ImmersiveSpace in mixed mode? I am hoping I am just missing something simple. Thanks in advance.
Post not yet marked as solved · 1 Reply · 388 Views
I am new to visionOS development and am slowly working through the differences between the immersion styles to figure out how I want my app to behave.

It seems that when you use a progressive immersive space, the minimum immersion level (set via the Digital Crown) is not 0? Meaning, there is no way to go from mixed to full by using the Digital Crown. Even when I try to set it to 0 (such as in the Destination Video sample), it pops back up to around 30-40%, and I always see the background. Is this expected behavior, or are there settings that allow me to change this minimum immersion level?

Further, in the video 'Meet ARKit for spatial computing', it is stated that to get access to ARKit tracking data you must use a 'Full Space', not the 'Shared Space'. This wording is confusing to me. Is an ImmersiveSpace set to the .mixed (or .progressive) immersion style still a 'Full Space' (because it isn't in the Shared Space with other apps)? Or is ARKit only available in an ImmersiveSpace with the .full immersion style? It just feels like maybe 'full' is being used in two different ways here... Thanks in advance, -pj
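For reference, here is roughly how I am declaring the space. The app name, space id, and the placeholder RealityView content below are just illustrative, not taken from the Destination Video sample:

import SwiftUI
import RealityKit

@main
struct ImmersionDemoApp: App {
    // Currently selected style; .progressive is the one whose minimum
    // immersion level (via the Digital Crown) I am asking about.
    @State private var immersionStyle: ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "DemoSpace") {
            RealityView { content in
                // Entities get added to `content` here.
            }
        }
        // The space supports all three styles; the binding selects which
        // one is active at any given time.
        .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
    }
}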
Post marked as solved · 1 Reply · 370 Views
I am writing code to monitor the incoming audio levels on visionOS. It works properly in the simulator, but gets an error on the device. Curious if anyone has any tips. I took out some of the code so it's a bit shorter; it fails in setupAudioEngine when I try to start the engine, with this error:

Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)

Thanks in advance! Here is my code:

import AVFoundation
import Combine
import Foundation

class AudioInputMonitor: ObservableObject {
    private var audioEngine: AVAudioEngine?

    @Published var inputLevel: Float = 0

    init() {
        requestMicrophonePermission()
    }

    private func requestMicrophonePermission() {
        AVAudioApplication.requestRecordPermission { granted in
            DispatchQueue.main.async {
                if granted {
                    self.setupAudioSessionAndEngine()
                } else {
                    print("Microphone permission not granted")
                    // Handle the case where permission is not granted
                }
            }
        }
    }

    private func setupAudioSessionAndEngine() {
        do {
            // Configure the shared session for simultaneous playback and recording.
            let audioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [])
            try audioSession.setActive(true)

            self.setupAudioEngine()
        } catch {
            print("Failed to set up the audio session: \(error)")
        }
    }

    private func setupAudioEngine() {
        audioEngine = AVAudioEngine()

        guard let inputNode = audioEngine?.inputNode else {
            print("Failed to get the audio input node")
            return
        }

        // Tap the input so each incoming buffer can be analyzed for its level.
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer, _) in
            self?.analyzeAudio(buffer: buffer)
        }

        do {
            // This is the call that fails on the device with error 561145187.
            try audioEngine?.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    private func analyzeAudio(buffer: AVAudioPCMBuffer) {
        // removed to be brief
    }

    func stopMonitoring() {
        // removed to be brief
    }
}
Post marked as solved · 9 Replies · 9.3k Views
I have added an intent definition file, and added a custom intent, to an existing (rather large) project. However, if I reference the custom intent, the symbol is not found. I did the exact same procedure, but adding the intent definition to a brand new single view project, and when I reference the custom intent there, the symbol is found. It seems like my older pre-Xcode 10 project isn't generating the intent classes properly? Is there something I need to do in my older project's settings to trigger this code generation? Or perhaps it isn't supported yet in this beta? Thanks in advance! -pj
Post not yet marked as solved · 1 Reply · 995 Views
I just got a new Mac mini M1 to replace an older one that was doing my automated builds for several projects. The bot works on the older Mac mini but fails on the M1. If I go to the M1 and build directly in Xcode, it fails with the same error. However, if I choose the 'Open using Rosetta' option in 'Get Info' for Xcode, it builds fine. I realize what is likely happening is that a third-party dependency hasn't updated to the correct architecture yet, and the CORRECT solution is to fix that. But I am not in control of that dependency, so I am looking for an interim solution. Is there any way to do 'Open using Rosetta', but for an Xcode Server bot? Thanks in advance! pj
Post not yet marked as solved · 1 Reply · 1.3k Views
Does the new HTTP Traffic instrument require iOS 15 running on the device? I am attempting to profile my app and I get an error saying 'This device is lacking a required capability'.