I am fairly new to keyboard shortcuts, and I am looking to add a specific piece of functionality to my app: I want to allow the user to trigger an action by pressing the spacebar, both on iPadOS (when a hardware keyboard is attached) and on macOS. This would function similarly to how video editing programs like iMovie and Final Cut Pro work.
I have a "play" button in place, and am trying to add a modifier, like so:
Button(action: {
    self.isPlaying.toggle()
}) {
    Image(systemName: isPlaying ? "pause.fill" : "play.fill")
}
.keyboardShortcut(.space)
.help("Play timeline")
Based on the KeyboardShortcut documentation - https://developer.apple.com/documentation/swiftui/keyboardshortcut, this should be all I need to get things running. However, when building and testing my app, using the spacebar does not do anything (nor does the shortcut appear in the keyboard shortcuts list).
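One variation I have been considering, in case the default modifier is what is interfering (keyboardShortcut(_:modifiers:) defaults its modifiers parameter to .command, so a bare spacebar press may need an explicit empty modifier set), is the following sketch:
Button(action: {
    self.isPlaying.toggle()
}) {
    Image(systemName: isPlaying ? "pause.fill" : "play.fill")
}
// Pass an empty modifier set so the shortcut is the spacebar alone,
// rather than Command+Space (the default when modifiers is omitted).
.keyboardShortcut(.space, modifiers: [])
.help("Play timeline")
Even with that change I am unsure whether the spacebar is a supported key equivalent here, hence the question.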
I am trying to follow the guidance for testing a Local Experience, as listed in the Testing Your App Clip’s Launch Experience - https://developer.apple.com/documentation/app_clips/testing_your_app_clip_s_launch_experience documentation. I have successfully created my App Clip target, and can confirm that running the App Clip on my device does launch the App Clip app as I expected. Further, I can successfully test the App Clip on device, by setting the _XCAppClipURL argument in the App Clip's scheme.
I would like to test a Local Experience. The documentation states that, for testing Local Experiences:
"To test your app clip’s invocation with a local experience, you don’t need to add the Associated Domains Entitlement, make changes to the Apple App Site Association file on your web server, or create an app clip experience for testing in TestFlight."
Therefore, I should be able to configure a Local Experience with any desired domain in Settings -> Developer -> Local Experience, generate a QR code or NFC tag with that same URL, and the App Clip experience should appear. I have taken the following steps:
Built and run my App Clip on my local device.
In Settings -> Developer -> Local Experience, I have registered a new experience using a URL prefix https://somewebsite.com
Set my Bundle ID to com.mycompany.myapp.Clip, which exactly matches the Bundle Identifier, as listed in Xcode, under my App Clip target.
Generated a QR code which directs me to https://somewebsite.com
In theory, I believe I should be able to open the Camera app on my device, point the camera at the QR code, and see the App Clip experience appear. However, I am getting mixed results: 50% of the time, I receive a pop-up directing me to open https://somewebsite.com in Safari; the other 50% of the time, no banner or action occurs whatsoever.
Is this an issue anyone has faced before, or have I pursued these steps out of order?
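For what it is worth, as a sanity check I have been considering logging whichever invocation URL the App Clip actually receives, roughly like the sketch below (MyAppClip and ContentView are placeholder names for my App Clip target's app and root view):
import SwiftUI

@main
struct MyAppClip: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                // Log the invocation URL the App Clip was launched with, to
                // confirm it matches the URL registered in the Local Experience.
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    guard let url = activity.webpageURL else { return }
                    print("App Clip invoked with URL:", url)
                }
        }
    }
}

struct ContentView: View {
    var body: some View {
        Text("App Clip content")
    }
}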
Much of this question is adapted from the idea of building a SCNGeometry from an ARMeshGeometry, as indicated in this - https://developer.apple.com/forums/thread/130599?answerId=414671022#414671022 very helpful post by @gchiste.
In my app, I am creating a SCNScene with my scanned ARMeshGeometry built as SCNGeometry, and would like to apply a "texture" to the scene, replicating what the camera saw as each mesh was built. The end goal is to create a 3D model somewhat representative of the scanned environment.
My understanding of texturing (and UV maps) is quite limited, but my general thought is that I would need to create texture coordinates for each mesh, then sample the ARFrame's capturedImage to apply to the mesh.
Is there any particular documentation or general guidance one might be able to provide to create such an output?
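To make my current thinking a bit more concrete, here is a rough sketch of the direction I am imagining (makeTextureCoordinates is my own name, and the whole approach is an assumption on my part): for each vertex, transform it into world space, project it into the capturedImage using ARCamera's projectPoint(_:orientation:viewportSize:), and normalize the result into texture coordinates, where imageSize would be the capturedImage's resolution in pixels.
import ARKit
import SceneKit

// Rough sketch (assumption on my part): build a texture-coordinate source for
// one anchor's mesh by projecting each vertex into the camera image that was
// captured around the time the mesh was updated.
func makeTextureCoordinates(for meshAnchor: ARMeshAnchor,
                            camera: ARCamera,
                            imageSize: CGSize) -> SCNGeometrySource {
    let vertices = meshAnchor.geometry.vertices
    var textureCoordinates = [CGPoint]()
    textureCoordinates.reserveCapacity(vertices.count)

    for index in 0..<vertices.count {
        // Read the vertex out of the ARGeometrySource's Metal buffer...
        let pointer = vertices.buffer.contents()
            .advanced(by: vertices.offset + vertices.stride * index)
        let vertex = pointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee

        // ...transform it into world space using the anchor's transform...
        let world = meshAnchor.transform * SIMD4<Float>(vertex.0, vertex.1, vertex.2, 1)

        // ...project it into the captured image (in pixels), then normalize to 0...1.
        let projected = camera.projectPoint(SIMD3<Float>(world.x, world.y, world.z),
                                            orientation: .portrait,
                                            viewportSize: imageSize)
        textureCoordinates.append(CGPoint(x: projected.x / imageSize.width,
                                          y: projected.y / imageSize.height))
    }

    return SCNGeometrySource(textureCoordinates: textureCoordinates)
}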
I have seen this question come up a few times here on Apple Developer forums (recently noted here - https://developer.apple.com/forums/thread/655505), though I tend to find myself having a misunderstanding of what technology and steps are required to achieve a goal.
In general, my colleague and I are trying to use Apple's Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth sample project from WWDC 2020, and save the rendered point cloud as a 3D model. I've seen this achieved (there are quite a few samples of the final exports available on popular 3D modeling websites), but remain unsure how to do so.
From what I can ascertain, Model I/O seems like an ideal framework choice: create an empty MDLAsset and append an MDLObject for each point to it, finally ending up with a model ready for export.
How would one go about converting each "point" to a MDLObject to append to the MDLAsset? Or am I going down the wrong path?
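To show the rough shape of what I am imagining (this is purely a sketch on my part; exportPointCloud is a hypothetical function and points stands in for the accumulated point positions), I would expect something along the lines of a single MDLMesh whose submesh uses the .points geometry type:
import ModelIO

// Sketch (assumption on my part): wrap an array of point positions in one
// MDLMesh with a .points submesh, then export the containing MDLAsset.
func exportPointCloud(_ points: [SIMD3<Float>], to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Vertex buffer holding the raw positions.
    var positions = points
    let vertexData = Data(bytes: &positions,
                          count: positions.count * MemoryLayout<SIMD3<Float>>.stride)
    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

    // Describe a single float3 position attribute.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3,
                                                  offset: 0,
                                                  bufferIndex: 0)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)

    // One index per point, with a .points geometry type.
    var indices = (0..<UInt32(points.count)).map { $0 }
    let indexData = Data(bytes: &indices, count: indices.count * MemoryLayout<UInt32>.stride)
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)
    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: indices.count,
                             indexType: .uInt32,
                             geometryType: .points,
                             material: nil)

    let mesh = MDLMesh(vertexBuffer: vertexBuffer,
                       vertexCount: points.count,
                       descriptor: descriptor,
                       submeshes: [submesh])

    let asset = MDLAsset()
    asset.add(mesh)
    // Not every file extension supports point geometry, so it seems worth checking first.
    guard MDLAsset.canExportFileExtension(url.pathExtension) else { return }
    try asset.export(to: url)
}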
I am a bit confused about the proper usage of GeometryReader. For example, I have a SwiftUI View, like so:
var body: some View {
    VStack {
        Text("Hello, World!")
            .background(Color.red)
        Text("More Text")
            .background(Color.blue)
    }
}
This positions my VStack perfectly in the middle of the device, both horizontally and vertically. At some point, I may need to know the width of the View's frame, and therefore want to implement a GeometryReader:
var body: some View {
    GeometryReader { geometry in
        VStack {
            Text("Hello, World!")
                .background(Color.red)
            Text("More Text")
                .background(Color.blue)
        }
    }
}
While I now have access to the View's size through the GeometryProxy, my VStack has moved to the top-left corner of the device.
Why is this? Additionally, is there any way to get the size of the View without having the layout altered?
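One approach I have seen suggested, and am considering, is to tuck the GeometryReader into a background so that it only measures the VStack rather than laying it out (ExampleView and SizePreferenceKey below are my own placeholder names); a rough sketch:
import SwiftUI

// A preference key used only to pass the measured size up the view tree.
struct SizePreferenceKey: PreferenceKey {
    static var defaultValue: CGSize = .zero
    static func reduce(value: inout CGSize, nextValue: () -> CGSize) {
        value = nextValue()
    }
}

struct ExampleView: View {
    @State private var measuredSize: CGSize = .zero

    var body: some View {
        VStack {
            Text("Hello, World!")
                .background(Color.red)
            Text("More Text")
                .background(Color.blue)
        }
        // The GeometryReader fills only the background of the VStack, so the
        // VStack keeps its normal, centered layout while still being measured.
        .background(
            GeometryReader { geometry in
                Color.clear
                    .preference(key: SizePreferenceKey.self, value: geometry.size)
            }
        )
        .onPreferenceChange(SizePreferenceKey.self) { measuredSize = $0 }
    }
}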
Within my app, I have an Image to which I am applying several modifiers to create the ideal appearance (code sample below). When taking this approach, I am finding that anything that is "underneath" the Image becomes unusable.
In my case, I have a VStack with a Button and the Image. When the Image modifier of clipped() is applied, the Button becomes unusable (presumably because the Image is technically covering the button, but anything outside of the Image's frame is invisible).
Is there a means of allowing an object below a clipped Image to still be functional/receive touches?
VStack {
    Button(action: {
        print("tapped!")
    }, label: {
        Text("Tap Here")
    })
    Image(uiImage: myImage)
        .resizable()
        .aspectRatio(contentMode: .fill)
        .frame(height: 150.0)
        .clipped()
}
I can confirm that if I change the aspectRatio to .fit, the issue does not appear (but, of course, my Image does not appear as I'd like it to). Similarly, if I remove the .clipped() modifier, the issue is resolved (but, again, the Image then does not appear as I'd like it to).
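One direction I have been considering (though I am not sure it is the intended approach) is that .clipped() may only affect drawing, not hit testing, so the tappable area could be constrained to the visible frame as well; a rough sketch, with ClippedImageView as a placeholder name:
import SwiftUI

struct ClippedImageView: View {
    let myImage: UIImage

    var body: some View {
        VStack {
            Button(action: {
                print("tapped!")
            }, label: {
                Text("Tap Here")
            })
            Image(uiImage: myImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .frame(height: 150.0)
                .clipped()
                // Constrain hit testing to the clipped frame (assumption on my
                // part that the overflowing content is what blocks the Button).
                .contentShape(Rectangle())
                // Alternatively, if the Image never needs to receive touches:
                // .allowsHitTesting(false)
        }
    }
}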
I have noticed that iOS 14, macOS 11, and tvOS 14 include the ability to process video files using the new VNVideoProcessor class. I have tried to leverage this within my code, in an attempt to perform a VNTrackObjectRequest, with no success. Specifically, my observations report an INVALID value in their time range, and the confidence and detected bounding box never change.
I am setting up my code like so:
let videoProcessor = VNVideoProcessor(url: videoURL)
let asset = AVAsset(url: videoURL)
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNObservation]
    if let observation = observations.first as? VNDetectedObjectObservation {
        print("OBSERVATION:", observation)
    }
}
let inputObservation = VNDetectedObjectObservation(boundingBox: rect.boundingBox)
let request: VNTrackingRequest = VNTrackObjectRequest(detectedObjectObservation: inputObservation, completionHandler: completion)
request.trackingLevel = .accurate
do {
    try videoProcessor.add(request, withProcessingOptions: [:])
    try videoProcessor.analyze(with: CMTimeRange(start: .zero, duration: asset.duration))
} catch {
    print(error)
}
A sample output I receive in the console during observation is:
OBSERVATION: <VNDetectedObjectObservation: 0x2827ee200> 032AB694-62E2-4674-B725-18EA2804A93F requestRevision=2 confidence=1.000000 timeRange={{0/90000 = 0.000}, {INVALID}} boundingBox=[0.333333, 0.138599, 0.162479, 0.207899]
I note that the observation's time range is reported as INVALID, that the confidence is always reported as 1.000000, and that the bounding box coordinates never change. I'm unsure whether this has to do with my lack of VNVideoProcessingOption setup or something else I am doing wrong.
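In case the processing options are the missing piece, here is the shape of what I believe the configuration looks like with VNVideoProcessor.RequestProcessingOptions rather than the empty options dictionary above (this is an assumption on my part, and track(_:in:) is just a hypothetical wrapper):
import Vision
import AVFoundation

// Sketch (assumption on my part): configure a processing cadence via
// RequestProcessingOptions instead of passing an empty options dictionary.
func track(_ request: VNTrackObjectRequest, in videoURL: URL) {
    let videoProcessor = VNVideoProcessor(url: videoURL)
    let asset = AVAsset(url: videoURL)

    let options = VNVideoProcessor.RequestProcessingOptions()
    // Ask Vision to process frames at (up to) 30 fps.
    options.cadence = VNVideoProcessor.FrameRateCadence(30)

    do {
        try videoProcessor.addRequest(request, processingOptions: options)
        try videoProcessor.analyze(CMTimeRange(start: .zero, duration: asset.duration))
    } catch {
        print(error)
    }
}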
Is any documentation available for supporting the Afterburner card in third-party applications? Documentation for the Afterburner card indicates that support is available for third-party developers, but I cannot seem to find any documentation that would indicate how to take advantage of this hardware within my own video processing application. Thanks!
I'm curious if anyone has discovered a way to determine whether their Messages app is in landscape left or landscape right? I've seen this topic come up in other discussions, but have not seen a resolution. While Messages Extensions do support use of the camera and AVFoundation, I've been unable to set my video orientation, as I'd usually use UIDevice.current.orientation to determine the orientation, and Messages Extensions consistently report an unknown orientation, rather than Face Up, Face Down, Portrait, Landscape Left, Landscape Right, etc.
I've been able to use a helpful suggestion from someone here to distinguish portrait from landscape by checking the following in my viewDidLayoutSubviews():
if UIScreen.main.bounds.size.width < UIScreen.main.bounds.size.height {
    // Portrait
} else {
    // Landscape
}
This, however, results in things working well in portrait, but can result in rotated or upside down images in landscape, since I cannot determine landscape left or landscape right (or portrait upside down on iPad, for that matter). Thanks!
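One workaround I have been considering (an assumption on my part, and OrientationEstimator is just a hypothetical name) is inferring the physical orientation from Core Motion's gravity vector, since UIDevice reports an unknown orientation inside the extension:
import CoreMotion
import UIKit

// Sketch: estimate the device orientation from the gravity vector, assuming
// Core Motion updates are available inside the Messages extension.
final class OrientationEstimator {
    private let motionManager = CMMotionManager()
    private(set) var estimatedOrientation: UIDeviceOrientation = .unknown

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 0.2
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let gravity = motion?.gravity else { return }
            // When gravity.x dominates, the device is in landscape; the sign
            // distinguishes left from right (the exact mapping is worth
            // verifying on a device).
            if abs(gravity.x) > abs(gravity.y) {
                self?.estimatedOrientation = gravity.x < 0 ? .landscapeLeft : .landscapeRight
            } else {
                self?.estimatedOrientation = gravity.y < 0 ? .portrait : .portraitUpsideDown
            }
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}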