
AVCaptureSessionControlsDelegate Not Being Called From Capture App
I am looking to learn more about the new Capture Button controls for iPhone 16, and am working to adapt the AVCam sample code to support the Capture Button. While I believe I've followed the guidance in the Enhancing your app experience with the Camera Control documentation, I'm finding that although my AVCaptureControl items seem to be added to the capture session, the Capture Button never does anything, nor are any of the delegate methods called. After I configure my capture session per the setupSession() method, I call a method I added, configureCameraControls(device:):

```swift
func configureCameraControls(device: AVCaptureDevice) {
    guard captureSession.supportsControls else {
        assertionFailure("App does not support camera control.")
        return
    }

    // Set the controls delegate
    captureSession.setControlsDelegate(controlsDelegate, queue: sessionQueue)

    // Begin configuring the capture session.
    captureSession.beginConfiguration()

    // Remove previously configured controls, if any.
    for control in captureSession.controls {
        captureSession.removeControl(control)
    }

    // Add a zoom control
    let systemZoomSlider = AVCaptureSystemZoomSlider(device: device) { zoomFactor in
        // TODO
    }

    // Create a control to adjust the device's exposure bias.
    let systemBiasSlider = AVCaptureSystemExposureBiasSlider(device: device)

    // Add a custom slider
    let focusSlider = AVCaptureSlider("Focus", symbolName: "scope", in: 0...1)
    focusSlider.setActionQueue(sessionQueue) { focusValue in
        // TODO
    }

    // Iterate over the passed in controls.
    for control in [systemZoomSlider, systemBiasSlider, focusSlider] {
        // Add the control to the capture session if possible.
        if captureSession.canAddControl(control) {
            captureSession.addControl(control)
        } else {
            print("Unable to add control \(control).")
        }
    }

    // Commit the capture session configuration.
    captureSession.commitConfiguration()
}
```

I define the controls delegate like so:

```swift
final class CaptureControlsDelegate: NSObject, AVCaptureSessionControlsDelegate {

    func sessionControlsDidBecomeActive(_ session: AVCaptureSession) { }

    func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) { }

    func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) { }

    func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) { }
}
```

I instantiate this delegate earlier in my app's lifecycle and make it available to the CaptureService actor. I'm not sure if this snippet provides enough detail to get help, but I can't quite fathom why the camera/capture pipeline works, yet I get no functionality from the Capture Button and the AVCaptureSessionControlsDelegate methods are never called.
3 replies · 0 boosts · 354 views · Sep ’24
Best practices for live-streaming MV-HEVC content?
I was wondering if anyone had guidance on how to "livestream" MV-HEVC content. More specifically, I have a left and right eye view for stereoscopic content (perhaps, for example, views taken from a stereoscopic video being played through an AVPlayer). I know, based on sample code, that I can convert the stereoscopic video into an MV-HEVC file using AVAssetWriter. However, how would I take the stereoscopic video and encode it, in real time, into a stream that could then leverage the HLS Tools to deliver to clients? Is AVFoundation capable of this directly? Or is there an API within VideoToolbox that can help with this?
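For reference, here is a minimal sketch of how a VideoToolbox compression session might be configured for real-time MV-HEVC encoding. Treat it as an assumption-laden outline rather than confirmed guidance: the MV-HEVC layer-ID property key is taken from the VideoToolbox headers and requires recent OS releases, and the submission of per-eye frames (and handoff to an HLS packager) is elided.

```swift
import VideoToolbox
import CoreMedia

// Sketch: create an HEVC compression session and tag it for multiview (MV-HEVC) output.
// kVTCompressionPropertyKey_MVHEVCVideoLayerIDs is an assumption from the VideoToolbox
// headers; check availability before relying on it.
func makeStereoEncoder(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session)
    guard status == noErr, let session else { return nil }

    // Real-time encoding for streaming.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)

    // Declare two video layers (layer 0 = base/left eye, layer 1 = right eye).
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_MVHEVCVideoLayerIDs,
                         value: [0, 1] as CFArray)

    // Each frame would then be submitted with both eye buffers, and the resulting
    // sample buffers handed to an HLS segmenter/packager. Elided here.
    return session
}
```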
0 replies · 2 boosts · 438 views · Jun ’24
Suggested guidance for creating MV-HEVC video files?
After taking a look at the Deliver Video Content for Spatial Experiences session, alongside the Destination Video sample code, I'm a bit unclear on how one might go about creating stereoscopic content that can be bundled up as an MV-HEVC file and played on Vision Pro. I see the ISO Base Media File Format and Apple HEVC Stereo Video format specifications, alongside the new mvhevc1440x1440 output presets in AVFoundation, but I'm unclear what sort of camera equipment could be used to create stereoscopic content, and how one might create an MV-HEVC file using a command-line tool that leverages AVFoundation/VideoToolbox, or something like Final Cut Pro. Is guidance available on how to film and create this type of file? Thanks!
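As a point of reference, here is a small sketch of how the output preset mentioned above might drive an AVAssetWriter input for MV-HEVC. The preset name comes from the post; the tagged-buffer plumbing for appending left/right eye frames is elided, so treat this as an assumption rather than verified guidance.

```swift
import AVFoundation

// Sketch: derive MV-HEVC video settings from the mvhevc1440x1440 output settings preset
// and use them for an asset writer input. Appending the per-eye frames is elided.
func makeMVHEVCWriterInput() -> AVAssetWriterInput? {
    guard let assistant = AVOutputSettingsAssistant(preset: .mvhevc1440x1440),
          let videoSettings = assistant.videoSettings else {
        return nil
    }
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    input.expectsMediaDataInRealTime = false
    return input
}
```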
6 replies · 8 boosts · 4.5k views · Jun ’23
Messages Removing Query Parameters from App Clip Invocation URL
I currently have an app published in the App Store with an associated App Clip (the App Clip is configured in App Store Connect with a default App Clip Experience and an Advanced App Clip Experience). All debugging (whether from App Store Connect, or from Settings -> Developer -> App Clips Testing -> Diagnostics) confirms everything is configured properly; what's more, the entire App Clip invocation experience works great from Safari, a QR code, and an App Clip Code. My issue arises when sharing a link via Messages. For example, let's say I'm sharing a URL of: https://myapp.com/events?eventID=123&userID=456. If I open this URL in Safari (without Private Browsing enabled), I see the banner to launch my App Clip. If I opt to launch the App Clip from the banner, it launches with the expected invocation URL, as noted above. If I share this website via Messages to someone else on an iPhone or iPad (running iOS 14+), the expected App Clip preview appears, and the receiver has an action button to launch my App Clip. However, when they launch the App Clip, all of the path/query parameters are missing (that is, the invocation URL appears as https://myapp.com). I'm wondering if this is intended behavior, or if the path I'm following isn't supported. Ideally, I would like my users to be able to visit the full URL, with query parameters, and if they opt to share that URL via Messages, have the App Clip launch for the recipient with the query parameters still in place. Thanks!
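For anyone debugging the same behavior, here is a small sketch of how the invocation URL could be inspected inside an App Clip to confirm which query items actually arrive. The parameter names are the hypothetical ones from the example URL above, and the view content is a placeholder.

```swift
import SwiftUI

// Sketch: log the App Clip invocation URL and its query items.
// "eventID"/"userID" are hypothetical names matching the example URL above.
@main
struct MyAppClip: App {
    var body: some Scene {
        WindowGroup {
            Text("App Clip")
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    guard let url = activity.webpageURL,
                          let components = URLComponents(url: url, resolvingAgainstBaseURL: true) else {
                        return
                    }
                    // When launched from a shared Messages preview, queryItems may arrive nil.
                    let eventID = components.queryItems?.first { $0.name == "eventID" }?.value
                    let userID = components.queryItems?.first { $0.name == "userID" }?.value
                    print("Invocation URL: \(url)")
                    print("eventID: \(String(describing: eventID)), userID: \(String(describing: userID))")
                }
        }
    }
}
```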
0 replies · 0 boosts · 835 views · Apr ’23
Previewing Live Activity Views in SwiftUI Previews
I am curious if there is suggested guidance on how to create "mock" Live Activities/ActivityKit data for the sake of developing Lock Screen/Dynamic Island views in SwiftUI and taking advantage of SwiftUI Previews. For example, the Create Lock Screen view section of the Display live data with Live Activities documentation demonstrates encapsulating the LockScreenActivityView in its own SwiftUI view. However, a subview of an ActivityConfiguration is vended a generic context of type ActivityViewContext<MyActivityAttributes>, which does not seem to be able to be initialized directly. This makes it difficult to use SwiftUI Previews for building the Live Activity views. If I try to add a SwiftUI preview:

```swift
struct MyLockScreenLiveActivityView_Previews: PreviewProvider {
    static var previews: some View {
        MyLockScreenLiveActivityView(context: ...)
    }
}
```

I am unsure how I would define a context that I could pass into the preview, as trying to manually define something like let context = ActivityViewContext<MyActivityAttributes> does not yield any accessible initializers to construct a mock ActivityViewContext. I might be missing something super simple, but would love any guidance; otherwise, I'm unable to use SwiftUI Previews for building the view.
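A workaround sketch that sidesteps constructing ActivityViewContext entirely: have the context-receiving closure hand off to an inner view that takes the attributes and content state as plain values, and preview that inner view. MyActivityAttributes and its fields here are hypothetical stand-ins for the real types.

```swift
import SwiftUI
import ActivityKit

// Hypothetical attributes type, standing in for the real one.
struct MyActivityAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var progress: Double
    }
    var title: String
}

// Inner view that depends only on plain values, so it can be previewed directly.
struct MyLockScreenContentView: View {
    let attributes: MyActivityAttributes
    let state: MyActivityAttributes.ContentState

    var body: some View {
        VStack {
            Text(attributes.title)
            ProgressView(value: state.progress)
        }
    }
}

// The widget configuration would then wrap it, roughly:
//   ActivityConfiguration(for: MyActivityAttributes.self) { context in
//       MyLockScreenContentView(attributes: context.attributes, state: context.state)
//   } dynamicIsland: { ... }

struct MyLockScreenContentView_Previews: PreviewProvider {
    static var previews: some View {
        MyLockScreenContentView(
            attributes: MyActivityAttributes(title: "Sample"),
            state: MyActivityAttributes.ContentState(progress: 0.4))
    }
}
```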
1 reply · 2 boosts · 4.5k views · Sep ’22
Undefined symbols for architecture arm64 in Xcode 14
I am facing a problem with a project that builds without issue in Xcode 13.4.1 but hits a build error in Xcode 14. Specifically, this project uses CocoaPods and references a specific pod, "AdobeMobileSDK". In a sample project where I can replicate this issue, my Podfile looks like:

```ruby
target 'AdobeTestProject' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for AdobeTestProject
  pod "AdobeMobileSDK"
end
```

I receive a build error reporting:

```
Undefined symbols for architecture arm64:
  "_OBJC_CLASS_$_ADBMobile", referenced from:
      objc-class-ref in ContentView.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```

After digging around a bit, I'm noticing that in Xcode 13.4, during the linking phase, Xcode issues a command that includes -lAdobeMobile. However, in Xcode 14, this is not the case. If I manually add -lAdobeMobile to the Other Linker Flags in my project settings, I can resolve this error, but this workaround is not suitable for all use cases (in a more complex project, adding -lAdobeMobile does not rectify the issue). I recognize this is likely influenced in some way by the SDK itself, but is there any documented change in Xcode 14 that would cause this build error only in Xcode 14 and not in Xcode 13.4.1? This issue occurs regardless of whether Xcode 14 is running under Rosetta or not.
7 replies · 4 boosts · 13k views · Sep ’22
Best practices for capturing the bottom of objects?
I have been exploring the sample image capture app, as well as the command-line tool, for Object Capture. I've not yet figured out the best practices for capturing the bottom of objects. For example, I have been attempting to demonstrate this with a sneaker. Per the WWDC21-10076 session, I have been circling around my object, taking photos using the sample capture app in automatic capture mode. While this does create a 3D model, during my capture I also turn the sneaker over and capture the bottom. However, when my 3D model is created via the command-line tool, the bottom of the sneaker is always missing. Is there a particular PhotogrammetrySession.Configuration that would be ideal for also including photos of the bottom of objects? While I pause, rotate my object to show the bottom, and continue capturing, I find that the bottom of the object is nearly always missing, despite many image captures that do include it.
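For context, a rough sketch of a command-line-style reconstruction with an explicit configuration; whether unordered sample ordering actually helps with flipped captures is the open question of the post, not a confirmed fix.

```swift
import Foundation
import RealityKit

// Sketch: run a photogrammetry session over a folder of images with an explicit configuration.
// Treating samples as unordered (rather than sequential) is one knob worth trying when the
// object is flipped mid-capture; this is a guess, not confirmed guidance.
func reconstruct(imagesFolder: URL, outputModel: URL) throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .unordered
    configuration.featureSensitivity = .high

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
    let request = PhotogrammetrySession.Request.modelFile(url: outputModel, detail: .full)

    try session.process(requests: [request])

    // A real tool would iterate session.outputs to observe progress and completion.
}
```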
0 replies · 0 boosts · 899 views · Jun ’21
Determining rotation of AppClipCodeAnchor in ARKit?
With the availability of tracking AppClipCodeAnchor in ARKit on iOS/iPadOS 14.3+, I'm curious if there is a way to determine the rotation (or, more specifically, the "angle") at which the App Clip Code is detected. For example, an App Clip Code could appear on a business card, which a user might have lying flat on a table (therefore at a 0° angle). In another case, an App Clip Code could be printed and mounted to a wall, such as in a museum or a restaurant (therefore at a 90° angle). Anchoring AR experiences (especially ones built in Reality Composer) to the detected AppClipCodeAnchor results in strange behavior when the App Clip Code is at anything other than 0°, as the content appears "tethered" to the real-world App Clip Code and therefore appears unexpectedly rotated unless the rotation of the 3D content is transformed manually. When I print the details of the AppClipCodeAnchor once it is detected in my ARKit session, I can see a human-readable descriptor for the "angle" of the detected code. However, I can't seem to figure out how to determine this property from the AppClipCodeAnchor's transform. Is there an easy way to rotate 3D content to match the rotation of the scanned App Clip Code?
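A sketch of one way to reason about the placement angle from the anchor's transform: compare one of the anchor's local axes (a column of the 4x4 matrix) to world up. This is general simd/ARKit math, and it assumes the anchor's local Y axis is perpendicular to the printed code (as with image anchors), which should be verified against the anchor you actually receive.

```swift
import ARKit
import simd

// Sketch: classify whether a detected App Clip Code is lying flat or mounted on a wall
// by comparing its transform's Y axis to world up. Assumes the local Y axis is the
// code's normal, which is an assumption to verify.
func placementAngleDegrees(of anchor: ARAppClipCodeAnchor) -> Float {
    // Column 1 of the transform is the anchor's local Y axis expressed in world space.
    let column = anchor.transform.columns.1
    let codeNormal = simd_normalize(SIMD3<Float>(column.x, column.y, column.z))
    let worldUp = SIMD3<Float>(0, 1, 0)

    // ~0° when the code lies flat (normal points up), ~90° when it is on a wall.
    let cosine = max(-1, min(1, simd_dot(codeNormal, worldUp)))
    return acos(cosine) * 180 / .pi
}
```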
0 replies · 0 boosts · 656 views · Jan ’21
Adding a shortcut to the Shortcuts app without INUIAddVoiceShortcutViewController?
Many apps that I download from the App Store seem to add shortcuts to the Shortcuts app without me ever setting up a voice command. I was under the impression that to add a shortcut to the Shortcuts app, a user would need to create a voice command via INUIAddVoiceShortcutViewController, which would then add the shortcut to the Shortcuts app. This is how I am currently adding a shortcut in my app, but I am wondering how I could go about offering shortcuts in the Shortcuts app without needing to present INUIAddVoiceShortcutViewController.

```swift
let activity = NSUserActivity(activityType: "com.example.shortcut")
activity.title = "Sample Shortcut"
activity.userInfo = ["speech": "This is a sample."]
activity.isEligibleForSearch = true
activity.isEligibleForPrediction = true
activity.persistentIdentifier = "com.example.shortcut.myshortcut"

self.view.userActivity = activity
activity.becomeCurrent()

let siriShortcut = INShortcut(userActivity: activity)

// Setup view controller
let viewController = INUIAddVoiceShortcutViewController(shortcut: siriShortcut)

// Setup modal style
viewController.modalPresentationStyle = .formSheet

// Setup delegate
viewController.delegate = self

// Show view controller
DispatchQueue.main.async {
    self.present(viewController, animated: true, completion: nil)
}
```
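For comparison, here is a sketch of surfacing a shortcut without presenting INUIAddVoiceShortcutViewController: donate the activity and optionally register it as a suggestion, letting the system offer it in Siri Suggestions and the Shortcuts app. This is a general pattern, reusing the same hypothetical identifiers as above, not a guarantee of how other apps do it.

```swift
import Intents
import UIKit

// Sketch: make a shortcut discoverable without the add-voice-shortcut UI.
// The activity type and identifiers are the same hypothetical values as above.
func donateSampleShortcut(from viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.shortcut")
    activity.title = "Sample Shortcut"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true
    activity.persistentIdentifier = "com.example.shortcut.myshortcut"
    activity.suggestedInvocationPhrase = "Run my sample"

    // Making the activity current donates it; the system can then surface it
    // in Siri Suggestions and among the app's actions in the Shortcuts app.
    viewController.view.userActivity = activity
    activity.becomeCurrent()

    // Optionally, suggest it explicitly via INVoiceShortcutCenter.
    let shortcut = INShortcut(userActivity: activity)
    INVoiceShortcutCenter.shared.setShortcutSuggestions([shortcut])
}
```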
1 reply · 0 boosts · 1.2k views · Nov ’20
Improving ground shadows for objects in RealityKit?
I am currently working with RealityKit to load a USDZ model from my application's bundle. My model is added like so:

```swift
var modelLoading: Cancellable?

modelLoading = Entity.loadAsync(named: name)
    .receive(on: RunLoop.main)
    .sink(receiveCompletion: { (completion) in
        modelLoading?.cancel()
    }, receiveValue: { (model) in
        model.setScale(SIMD3(repeating: 5.0), relativeTo: nil)

        let parentEntity = ModelEntity()
        parentEntity.addChild(model)

        let entityBounds = model.visualBounds(relativeTo: parentEntity)
        parentEntity.collision = CollisionComponent(shapes: [ShapeResource.generateBox(size: entityBounds.extents).offsetBy(translation: entityBounds.center)])
        self.arView.installGestures(for: parentEntity)

        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(parentEntity)
        arView.scene.addAnchor(anchor)
    })
```

When my model is added to the scene, which works as expected, I notice that the model has no "ground shadows." This differs from viewing the same model via AR Quick Look, as well as from loading a Reality Composer project (.rcproject), which seems to add grounding shadows automatically. While I have done some research into PointLight, DirectionalLight, and SpotLight entities, I am quite a novice at 3D modeling and only seek to add a shadow just below the object, to give it a more realistic appearance on tables. Is there a methodology for achieving this?
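One approach worth sketching (an assumption, not the confirmed equivalent of AR Quick Look's grounding shadow): attach a shadow-casting directional light to the same anchor so the model casts a soft shadow onto the detected plane. The intensity and shadow values are guesses to tune by eye.

```swift
import RealityKit
import UIKit

// Sketch: add a shadow-casting directional light alongside the model so it
// appears grounded on the detected plane.
func addShadowLight(to anchor: AnchorEntity) {
    let light = DirectionalLight()
    light.light = DirectionalLightComponent(color: .white,
                                            intensity: 2000,
                                            isRealWorldProxy: false)
    light.shadow = DirectionalLightComponent.Shadow(maximumDistance: 3,
                                                    depthBias: 2)
    // Aim the light downward at roughly 45 degrees toward the anchor's origin.
    light.look(at: [0, 0, 0], from: [0, 2, 2], relativeTo: anchor)
    anchor.addChild(light)
}
```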
2 replies · 0 boosts · 1.7k views · Nov ’20
Properly projecting points with different orientations and camera positions?
Summary: I am using the Vision framework, in conjunction with AVFoundation, to detect the facial landmarks of each face in the camera feed (by way of VNDetectFaceLandmarksRequest). From there, I take the resulting observations and unproject each point into a SceneKit view (SCNView), then use those points as the vertices to draw a custom geometry that is textured with a material over each found face. Effectively, I am working to recreate how an ARFaceTrackingConfiguration functions. In general, this works as expected, but only when my device is using the front camera in landscape right orientation. When I rotate my device, or switch to the rear camera, the unprojected points no longer align with the found face the way they do in landscape right with the front camera.

Problem: When testing this code, the mesh appears properly (that is, appears affixed to a user's face), but again, only when using the front camera in landscape right. While the code runs as expected in all orientations (that is, generating the face mesh for each found face), the mesh is wildly misaligned in all other cases. My belief is that this issue stems from how I convert the face's bounding box (using VNImageRectForNormalizedRect, which I am calculating using the width/height of my SCNView, not my pixel buffer, which is typically much larger), though all modifications I have tried result in the same issue. Outside of that, I also believe this could be an issue with my SCNCamera, as I am a bit unsure how the transform/projection matrix works and whether that would be needed here.

Sample of Vision request setup:

```swift
// Setup Vision request options
var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

// Setup camera intrinsics
let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
if cameraIntrinsicData != nil {
    requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
}

// Set EXIF orientation
let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

// Setup vision request handler
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: requestHandlerOptions)

// Setup the completion handler
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNFaceObservation]

    // Draw faces
    DispatchQueue.main.async {
        drawFaceGeometry(observations: observations)
    }
}

// Setup the image request
let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

// Handle the request
do {
    try handler.perform([request])
} catch {
    print(error)
}
```

Sample of SCNView setup:

```swift
// Setup SCNView
let scnView = SCNView()
scnView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(scnView)
scnView.showsStatistics = true
NSLayoutConstraint.activate([
    scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
    scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
    scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
    scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
])

// Setup scene
let scene = SCNScene()
scnView.scene = scene

// Setup camera
let cameraNode = SCNNode()
let camera = SCNCamera()
cameraNode.camera = camera
scnView.scene?.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)

// Setup light
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light?.type = SCNLight.LightType.ambient
ambientLightNode.light?.color = UIColor.darkGray
scnView.scene?.rootNode.addChildNode(ambientLightNode)
```

Sample of "face processing":

```swift
func drawFaceGeometry(observations: [VNFaceObservation]) {
    // An array of face nodes, one SCNNode for each detected face
    var faceNode = [SCNNode]()

    // The origin point
    let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)

    // Iterate through each found face
    for observation in observations {
        // Setup a SCNNode for the face
        let face = SCNNode()

        // Setup the found bounds
        let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox, Int(self.scnView.bounds.width), Int(self.scnView.bounds.height))

        // Verify we have landmarks
        if let landmarks = observation.landmarks {
            // Landmarks are relative to and normalized within face bounds
            let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
                .scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)

            // Add all points as vertices
            var vertices = [SCNVector3]()

            // Verify we have points
            if let allPoints = landmarks.allPoints {
                // Iterate through each point
                for (index, point) in allPoints.normalizedPoints.enumerated() {
                    // Apply the transform to convert each point to the face's bounding box range
                    _ = index
                    let normalizedPoint = point.applying(affineTransform)
                    let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
                    let unprojected = sceneView.unprojectPoint(projected)
                    vertices.append(unprojected)
                }
            }

            // Setup indices
            var indices = [UInt16]()
            // Add indices
            // ... Removed for brevity ...

            // Setup texture coordinates
            var coordinates = [CGPoint]()
            // Add texture coordinates
            // ... Removed for brevity ...

            // Setup texture image
            let imageWidth = 2048.0
            let normalizedCoordinates = coordinates.map { coord -> CGPoint in
                let x = coord.x / CGFloat(imageWidth)
                let y = coord.y / CGFloat(imageWidth)
                let textureCoord = CGPoint(x: x, y: y)
                return textureCoord
            }

            // Setup sources
            let sources = SCNGeometrySource(vertices: vertices)
            let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)

            // Setup elements
            let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)

            // Setup geometry
            let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
            geometry.firstMaterial?.diffuse.contents = textureImage

            // Setup node
            let customFace = SCNNode(geometry: geometry)
            sceneView.scene?.rootNode.addChildNode(customFace)

            // Append the face to the face nodes array
            faceNode.append(face)
        }
    }

    // Iterate the face nodes and append to the scene
    for node in faceNode {
        sceneView.scene?.rootNode.addChildNode(node)
    }
}
```
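Since the misalignment only shows up off the landscape-right/front-camera path, the orientation mapping is worth pinning down first. Below is a sketch of what exifOrientationForCurrentDeviceOrientation() is commonly implemented as (with mirrored handling for the front camera); this is an assumed implementation of that helper, not the original code.

```swift
import UIKit
import ImageIO

// Sketch of an orientation helper: maps the current device orientation to the
// CGImagePropertyOrientation Vision expects, accounting for front-camera mirroring.
// This is an assumption about the helper referenced above, not the poster's code.
func exifOrientation(for deviceOrientation: UIDeviceOrientation,
                     usingFrontCamera: Bool) -> CGImagePropertyOrientation {
    switch deviceOrientation {
    case .portraitUpsideDown:
        return usingFrontCamera ? .rightMirrored : .left
    case .landscapeLeft:
        return usingFrontCamera ? .downMirrored : .up
    case .landscapeRight:
        return usingFrontCamera ? .upMirrored : .down
    case .portrait, .faceUp, .faceDown, .unknown:
        return usingFrontCamera ? .leftMirrored : .right
    @unknown default:
        return usingFrontCamera ? .leftMirrored : .right
    }
}
```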
3 replies · 0 boosts · 2.0k views · Oct ’20
Dynamic Intent Selection not Generating Handling Class for Widget?
I am trying to build my first Widget, and am following the guidance in the Making a Configurable Widget - https://developer.apple.com/documentation/widgetkit/making-a-configurable-widget article. After confirming the default Widget runs successfully, I am trying to set up my intent definition and intent handler. I have taken the following steps:

- Created a new intent definition file, with the custom intent's category set to View, eligibility for Widgets enabled, and the parameter set to a custom type with Options are provided dynamically selected.
- Created a new Intent Handler target, and set the Supported Intents class name to something relevant, such as SelectCharacterIntent.

The article implies that the newly created IntentHandler.swift file, which has an IntentHandler class, should be able to have that class conform to the protocol generated from the intent definition file, as noted: "Based on the custom intent definition file, Xcode generates a protocol, SelectCharacterIntentHandling, that the handler must conform to. Add this conformance to the declaration of the IntentHandler class." However, my project immediately reports that it Cannot find type 'SelectCharacterIntentHandling' in scope. I am unsure if I am doing something wrong, but it seems peculiar that the SelectCharacterIntentHandling protocol is supposed to exist without any indication of how it gets generated. Surely, there must be a step to take so that the protocol is generated and I can extend my IntentHandler class to support my dynamic intent. Thank you!
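For reference, a sketch of what the conformance looks like once Xcode has generated SelectCharacterIntent and its handling protocol (generation typically depends on the intent definition file's target membership including the intents extension target; that is an assumption drawn from similar setups, not from the article). GameCharacter and the method name below follow from a hypothetical "character" parameter and would differ in a real project.

```swift
import Intents

// Sketch: conforming the handler to the Xcode-generated protocol and vending
// dynamic options. SelectCharacterIntent, GameCharacter, and the provide… method
// name are all generated from the (hypothetical) intent definition described above.
class IntentHandler: INExtension, SelectCharacterIntentHandling {

    // Called for the parameter marked "Options are provided dynamically".
    func provideCharacterOptionsCollection(
        for intent: SelectCharacterIntent,
        with completion: @escaping (INObjectCollection<GameCharacter>?, Error?) -> Void
    ) {
        let characters = [
            GameCharacter(identifier: "hero", display: "Hero"),
            GameCharacter(identifier: "sidekick", display: "Sidekick")
        ]
        completion(INObjectCollection(items: characters), nil)
    }

    override func handler(for intent: INIntent) -> Any {
        return self
    }
}
```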
5 replies · 0 boosts · 3.6k views · Oct ’20
SwiftUI Text Alignment Issue with Date and WidgetKit
I am attempting to set up a Text view that shows a "timer"-based countdown. My code is like so:

```swift
VStack {
    Text(Date().addingTimeInterval(600), style: .relative)
}
```

When I preview this code in a traditional SwiftUI view, the text appears as expected: in the middle of the canvas (as there are no vertical or horizontal spacers). Conversely, when I attempt to use the same code within a Widget, I find that the text is pushed all the way to the left side of the canvas, for no apparent reason. Due to this, I have no way of centering the text. My only success in centering the text has been to embed it in an HStack with multiple spacers:

```swift
HStack {
    Spacer()
    Spacer()
    Spacer()
    Spacer()
    Text(Date().addingTimeInterval(600), style: .relative)
}
```

Is there any particular reason this would be the case? I've not found any documentation indicating that the way WidgetKit views render Text differs from traditional SwiftUI views.
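A smaller workaround sketch, assuming the behavior described above (a relative-style Text greedily claiming width inside a widget): give the text a flexible, center-aligned frame and center its multiline layout instead of stacking spacers.

```swift
import SwiftUI

// Sketch: center a relative (countdown) Text inside a widget by using a flexible
// frame with centered alignment rather than surrounding it with spacers.
struct CountdownView: View {
    var body: some View {
        Text(Date().addingTimeInterval(600), style: .relative)
            .multilineTextAlignment(.center)
            .frame(maxWidth: .infinity, alignment: .center)
    }
}
```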
5 replies · 4 boosts · 7.4k views · Oct ’20
Is the NearbyInteraction Framework available in watchOS?
As noted on the comparison page for Apple Watch - Series 6 - https://www.apple.com/watch/compare/, the U1 chip (Ultra Wideband) is a feature of the Apple Watch - Series 6. The WWDC 2020 session, Meet Nearby Interaction - https://developer.apple.com/videos/play/wwdc2020/10668/, does imply that this functionality exists on devices with a U1 chip, though the NearbyInteraction framework appears unavailable in watchOS. Can anyone confirm whether NearbyInteraction is available for watchOS?
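One way to probe this from code, as a sketch: gate on the framework's presence at compile time and on runtime support, which at least distinguishes "the SDK does not expose it" from "this device does not support it."

```swift
#if canImport(NearbyInteraction)
import NearbyInteraction

// NISession.isSupported reports whether this device's hardware supports Nearby
// Interaction (later SDKs deprecate it in favor of deviceCapabilities).
func nearbyInteractionStatus() -> String {
    NISession.isSupported ? "Supported on this device" : "Framework present, hardware unsupported"
}
#else
// The SDK being compiled against does not expose NearbyInteraction at all.
func nearbyInteractionStatus() -> String { "NearbyInteraction not available in this SDK" }
#endif
```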
4 replies · 0 boosts · 1.9k views · Sep ’20