Post | Replies | Boosts | Views | Activity

Xcode git commit showing other projects
I imported a few files into my Xcode project from other projects using drag and drop. Even though the files were copied into the new project and there are no soft links pointing to the other projects' locations, whenever I do a git commit and push, Xcode keeps offering all of those projects as commit targets rather than just the current one. There seems to be no setting to permanently remove the git dependency on the other projects. Is there any way to remove the references to the other projects?
0
0
98
2d
AVCam sample code build errors in Swift 6
The AVCam sample code by Apple fails to build under the Swift 6 language mode because of concurrency-check failures (the only modification made to that code is adding @preconcurrency to the import of AVFoundation). Here is a minimal reproducible example for one of the errors:

import Foundation
import Combine

final class Recorder {
    var writer = Writer()
    var isRecording = false

    func startRecording() {
        Task { [writer] in
            await writer.startRecording()
            print("started recording")
        }
    }

    func stopRecording() {
        Task { [writer] in
            await writer.stopRecording()
            print("stopped recording")
        }
    }

    func observeValues() {
        Task {
            for await value in await writer.$isRecording.values {
                isRecording = value
            }
        }
    }
}

actor Writer {
    @Published private(set) public var isRecording = false

    func startRecording() {
        isRecording = true
    }

    func stopRecording() {
        isRecording = false
    }
}

The function observeValues gives an error:

Non-sendable type 'Published<Bool>.Publisher' in implicitly asynchronous access to actor-isolated property '$isRecording' cannot cross actor boundary

I have tried everything to fix it, all in vain. Can someone please point out whether the architecture of the AVCam sample code is flawed, or whether there is an easy fix?
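One way to satisfy the checker, sketched below under the assumption that the observing side is main-actor isolated (as AVCam's CameraModel is): let the actor publish its state through an AsyncStream it owns, so no Combine publisher ever crosses the actor boundary. The names recordingStates and stateContinuation are invented for illustration and are not part of the AVCam sample.

import Foundation

actor Writer {
    private(set) var isRecording = false

    private let stream: AsyncStream<Bool>
    private let stateContinuation: AsyncStream<Bool>.Continuation

    init() {
        let (stream, continuation) = AsyncStream.makeStream(of: Bool.self)
        self.stream = stream
        self.stateContinuation = continuation
    }

    // Hand the stream to callers; Bool elements are Sendable, so iterating
    // the stream outside the actor passes strict concurrency checking.
    func recordingStates() -> AsyncStream<Bool> { stream }

    func startRecording() {
        isRecording = true
        stateContinuation.yield(true)
    }

    func stopRecording() {
        isRecording = false
        stateContinuation.yield(false)
    }
}

@MainActor
final class Recorder {
    let writer = Writer()
    var isRecording = false

    func observeValues() {
        Task {
            for await value in await writer.recordingStates() {
                isRecording = value   // runs on the main actor
            }
        }
    }
}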
3
0
187
2w
AVAssetWriter append audio/video streams concurrently in real-time recording setup
I see that in most of Apple's older sample code, when AVAssetWriter is used to append audio, video, and metadata samples in a real-time camera recording setup, the calls to .append(sampleBuffer) are either synchronised with an NSLock or all the samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation saying that calls to assetWriterInput.append(sampleBuffer) for different media types, such as audio and video, must not be made concurrently. Is it not valid for these methods to execute in parallel, for instance `videoSamplesAssetWriterInput.append(videoSampleBuffer)` from dispatch queue 1 and `audioSamplesAssetWriterInput.append(audioSampleBuffer)` from dispatch queue 2?
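For reference, a minimal sketch of the serialised pattern the older samples follow, not a statement of what AVFoundation guarantees about concurrent appends: every append is funnelled through one serial queue, so the writer never sees two appends at once. The class, queue, and property names are invented for illustration.

import AVFoundation

final class MovieWriter {
    private let writingQueue = DispatchQueue(label: "movie.writer.appends") // serial by default
    private let assetWriter: AVAssetWriter
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput

    init(outputURL: URL, videoSettings: [String: Any]?, audioSettings: [String: Any]?) throws {
        assetWriter = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
        audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
        videoInput.expectsMediaDataInRealTime = true
        audioInput.expectsMediaDataInRealTime = true
        assetWriter.add(videoInput)
        assetWriter.add(audioInput)
    }

    // Called from the video capture callback queue.
    func appendVideo(_ sampleBuffer: CMSampleBuffer) {
        writingQueue.async { [self] in
            if videoInput.isReadyForMoreMediaData {
                videoInput.append(sampleBuffer)
            }
        }
    }

    // Called from the audio capture callback queue.
    func appendAudio(_ sampleBuffer: CMSampleBuffer) {
        writingQueue.async { [self] in
            if audioInput.isReadyForMoreMediaData {
                audioInput.append(sampleBuffer)
            }
        }
    }
}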
1
0
246
3w
Using AsyncStream vs @Observable macro in SwiftUI (AVCam Sample Code)
I want to understand the utility of using AsyncStream now that iOS 17 has introduced the @Observable macro, which lets us directly observe changes to the value of any variable in the model (and observation tracking can happen even outside a SwiftUI view). So if I am observing a continuous stream of values, such as the download progress of a file, using AsyncStream in a SwiftUI view, the same can be observed in that view using onChange(of:initial:) of the download progress (stored as a property in the model object). I am looking for the benefits, drawbacks, and limitations of both approaches. Specifically, my question is with regard to the AVCam sample code by Apple, where a few states are observed as follows. This is done in the CameraModel class, which is attached to the SwiftUI view.

// MARK: - Internal state observations

// Set up camera's state observations.
private func observeState() {
    Task {
        // Await new thumbnails that the media library generates when saving a file.
        for await thumbnail in mediaLibrary.thumbnails.compactMap({ $0 }) {
            self.thumbnail = thumbnail
        }
    }

    Task {
        // Await new capture activity values from the capture service.
        for await activity in await captureService.$captureActivity.values {
            if activity.willCapture {
                // Flash the screen to indicate capture is starting.
                flashScreen()
            } else {
                // Forward the activity to the UI.
                captureActivity = activity
            }
        }
    }

    Task {
        // Await updates to the capabilities that the capture service advertises.
        for await capabilities in await captureService.$captureCapabilities.values {
            isHDRVideoSupported = capabilities.isHDRSupported
            cameraState.isVideoHDRSupported = capabilities.isHDRSupported
        }
    }

    Task {
        // Await updates to a person's interaction with the Camera Control HUD.
        for await isShowingFullscreenControls in await captureService.$isShowingFullscreenControls.values {
            withAnimation {
                // Prefer showing a minimized UI when capture controls enter a fullscreen appearance.
                prefersMinimizedUI = isShowingFullscreenControls
            }
        }
    }
}

If we look at the CaptureCapabilities structure, it is a small struct with two Bool members. These changes could have been observed directly by a SwiftUI view. I wonder if there is a specific advantage or reason to use AsyncStream here and continuously iterate over changes in a for loop.

/// A structure that represents the capture capabilities of `CaptureService` in
/// its current configuration.
struct CaptureCapabilities {

    let isLivePhotoCaptureSupported: Bool
    let isHDRSupported: Bool

    init(isLivePhotoCaptureSupported: Bool = false,
         isHDRSupported: Bool = false) {
        self.isLivePhotoCaptureSupported = isLivePhotoCaptureSupported
        self.isHDRSupported = isHDRSupported
    }

    static let unknown = CaptureCapabilities()
}
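For comparison, here is a minimal sketch of the @Observable alternative the question asks about; the type and property names are invented for illustration and do not reflect how AVCam is actually structured.

import SwiftUI
import Observation

// Hypothetical @Observable model: SwiftUI tracks any properties read inside
// `body`, so the view refreshes on changes without a Task / for-await loop.
@Observable @MainActor
final class CameraViewModel {
    var isHDRVideoSupported = false
    var isLivePhotoCaptureSupported = false
}

struct CapabilitiesBadge: View {
    let model: CameraViewModel

    var body: some View {
        // Reading the property registers the dependency with the Observation
        // framework; no explicit stream iteration is needed.
        Label(model.isHDRVideoSupported ? "HDR" : "SDR", systemImage: "camera")
    }
}

One trade-off worth noting: the for-await form delivers every intermediate value and can run side effects (such as flashScreen()) per value, while observation tracking only guarantees the view eventually reflects the latest value, so the two are not always interchangeable.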
0
0
171
4w
Checking authorization status of AVCaptureDevice or CLLocationManager gives runtime warnings in iOS 18
I have the following code in my ObservableObject class, and recently Xcode started flagging purple runtime issues in it (probably since iOS 18):

Issue 1: Performing I/O on the main thread can cause slow launches.
Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.

Here is the code:

@Published var cameraAuthorization: AVAuthorizationStatus
@Published var micAuthorization: AVAuthorizationStatus
@Published var photoLibAuthorization: PHAuthorizationStatus
@Published var locationAuthorization: CLAuthorizationStatus
var locationManager: CLLocationManager

override init() {
    // Issue 1: Performing I/O on the main thread can cause slow launches.
    cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
    micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)

    // Issue 1: Performing I/O on the main thread can cause slow launches.
    locationManager = CLLocationManager()
    locationAuthorization = locationManager.authorizationStatus

    super.init()

    // Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
    locationManager.delegate = self
}

And also in the route-change notification handler for AVAudioSession.routeChangeNotification:

// Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)

I wonder how checking an authorisation status can cause these issues? What is the fix here?
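One way to silence these diagnostics, sketched below with invented names and covering only the AVFoundation/Photos checks: start from placeholder values and refresh the real statuses off the main actor, then publish the results back. Whether deferring the values like this is acceptable depends on how early the app needs them; the CLLocationManager creation and delegate assignment are left out because the manager is expected to be created on a thread with a run loop.

import AVFoundation
import Photos

@MainActor
final class PermissionsModel: ObservableObject {
    // Start from .notDetermined so init performs no IPC or I/O.
    @Published var cameraAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var micAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var photoLibAuthorization: PHAuthorizationStatus = .notDetermined

    func refreshStatuses() async {
        // Hop off the main actor for the status queries, then publish back.
        let (camera, mic, photos) = await Task.detached(priority: .utility) {
            (AVCaptureDevice.authorizationStatus(for: .video),
             AVCaptureDevice.authorizationStatus(for: .audio),
             PHPhotoLibrary.authorizationStatus(for: .addOnly))
        }.value

        cameraAuthorization = camera
        micAuthorization = mic
        photoLibAuthorization = photos
    }
}

A view can kick this off with .task { await model.refreshStatuses() } so the first frame renders before the statuses arrive.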
1
0
316
Dec ’24
Tap gesture on subview disables drag gesture on superview
I have the following two views in SwiftUI. The first view, GestureTestView, has a drag gesture defined on its overlay view (call it the indicator view) and has a subview called ContentTestView that has a tap gesture attached to it. The problem is that the tap gesture in ContentTestView blocks the drag gesture on the indicator view. I have tried everything, including simultaneous gestures, but nothing seems to work because the gestures are on different views. It's easy to test by simply copying and pasting the code and running it in an Xcode preview.

import SwiftUI

struct GestureTestView: View {
    @State var indicatorOffset: CGFloat = 10.0

    var body: some View {
        ContentTestView()
            .overlay(alignment: .leading, content: {
                Capsule()
                    .fill(Color.mint.gradient)
                    .frame(width: 8, height: 60)
                    .offset(x: indicatorOffset)
                    .gesture(
                        DragGesture(minimumDistance: 0)
                            .onChanged({ value in
                                indicatorOffset = min(max(0, 10 + value.translation.width), 340)
                            })
                            .onEnded { value in
                            }
                    )
            })
    }
}

#Preview {
    GestureTestView()
}

struct ContentTestView: View {
    @State var isSelected = false

    var body: some View {
        HStack(spacing: 0) {
            ForEach(0..<8) { index in
                Rectangle()
                    .fill(index % 2 == 0 ? Color.blue : Color.red)
                    .frame(width: 40, height: 40)
            }
            .overlay {
                if isSelected {
                    RoundedRectangle(cornerRadius: 5)
                        .stroke(.yellow, lineWidth: 3.0)
                }
            }
        }
        .onTapGesture {
            isSelected.toggle()
        }
    }
}

#Preview {
    ContentTestView()
}
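A workaround worth trying, sketched below as an assumption rather than a verified fix: attach the tap with .simultaneousGesture so it does not claim touches exclusively, and give the 8-point-wide capsule a larger hit area plus a high-priority drag so it wins where the views overlap. The combined view here is only for illustration.

import SwiftUI

struct GestureWorkaroundView: View {
    @State private var indicatorOffset: CGFloat = 10
    @State private var isSelected = false

    var body: some View {
        HStack(spacing: 0) {
            ForEach(0..<8) { index in
                Rectangle()
                    .fill(index % 2 == 0 ? Color.blue : Color.red)
                    .frame(width: 40, height: 40)
            }
        }
        .opacity(isSelected ? 0.6 : 1.0)               // stand-in for the selection UI
        .simultaneousGesture(                          // instead of .onTapGesture
            TapGesture().onEnded { isSelected.toggle() }
        )
        .overlay(alignment: .leading) {
            Capsule()
                .fill(Color.mint.gradient)
                .frame(width: 8, height: 60)
                .contentShape(Rectangle().inset(by: -12)) // easier to grab than 8 pt
                .offset(x: indicatorOffset)
                .highPriorityGesture(                     // drag wins over the tap underneath
                    DragGesture(minimumDistance: 0)
                        .onChanged { value in
                            indicatorOffset = min(max(0, 10 + value.translation.width), 340)
                        }
                )
        }
    }
}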
1
0
327
Oct ’24
Selecting Metal 3.2 as language causes crash on iPhone 11 Pro (iOS 17.1.1)
Xcode 16 seems to have an issue with stitchable kernels in Core Image, which gives build errors as stated in this question. As a workaround, I selected Metal 3.2 as the Metal Language Revision in the Xcode project. It works on newer devices such as iPhone 13 Pro and above, but Metal texture creation fails on older devices such as iPhone 11 Pro. Is this a known issue, and is there a workaround? I tried setting the Metal language revision to 2.4, but the same build errors occur as reported in that question. Here is the code where the assertion failure happens on iPhone 11 Pro.

let vertexShader = library.makeFunction(name: "vertexShaderPassthru")
let fragmentShaderYUV = library.makeFunction(name: "fragmentShaderYUV")

let pipelineDescriptorYUV = MTLRenderPipelineDescriptor()
pipelineDescriptorYUV.rasterSampleCount = 1
pipelineDescriptorYUV.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptorYUV.depthAttachmentPixelFormat = .invalid
pipelineDescriptorYUV.vertexFunction = vertexShader
pipelineDescriptorYUV.fragmentFunction = fragmentShaderYUV

do {
    try pipelineStateYUV = metalDevice?.makeRenderPipelineState(descriptor: pipelineDescriptorYUV)
} catch {
    assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
    return
}
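A diagnostic sketch, reusing the function names from the snippet above: on the failing iPhone 11 Pro it helps to see whether the functions fail to load from the library at all, or whether pipeline creation itself throws, and what the underlying error says. This narrows the problem down to the metallib (e.g. a language-revision/linking mismatch) versus the pipeline setup.

import Metal

func makeYUVPipelineState(device: MTLDevice, library: MTLLibrary) -> MTLRenderPipelineState? {
    guard let vertexShader = library.makeFunction(name: "vertexShaderPassthru"),
          let fragmentShaderYUV = library.makeFunction(name: "fragmentShaderYUV") else {
        // If this triggers, the functions are missing from the compiled library
        // on that device, not from the pipeline configuration.
        assertionFailure("Functions missing from library: \(library.functionNames)")
        return nil
    }

    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.rasterSampleCount = 1
    descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    descriptor.depthAttachmentPixelFormat = .invalid
    descriptor.vertexFunction = vertexShader
    descriptor.fragmentFunction = fragmentShaderYUV

    do {
        return try device.makeRenderPipelineState(descriptor: descriptor)
    } catch {
        // Log the underlying Metal error instead of a generic message.
        assertionFailure("makeRenderPipelineState failed: \(error)")
        return nil
    }
}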
3
0
442
Sep ’24
Unable to build Core Image kernels in Xcode 16
My app is suddenly broken when I build it with Xcode 16; it seems Core Image kernel compilation is broken in Xcode 16. Answers on StackOverflow suggest linking against a downgraded version of the Core Image framework as a workaround, but I am not sure whether there is a better solution out there. FYI, I am using [[ stitchable ]] kernels, and only the projects that use stitchable kernels show the issue.

air-lld: error: symbol(s) not found for target 'air64_v26-apple-ios17.0.0'
metal: error: air-lld command failed with exit code 1 (use -v to see invocation)

/Users/Username/Camera4S-Swift/air-lld:1:1: ignoring file '/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/CoreImage.framework/CoreImage.metallib', file AIR version (2.7) is bigger than the one of the target being linked (2.6)
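One possible fallback while the toolchain issue persists, assuming the kernel can be expressed without [[ stitchable ]]: compile it as a classic Core Image Metal kernel (which needs the Core Image Metal compiler/linker flags, -fcikernel and -cikernel, in the target's build settings) and load it from the compiled metallib. The library name "default.metallib" and the function name "myColorKernel" below are placeholders, not part of the project in question.

import CoreImage

func makeFallbackKernel() throws -> CIColorKernel {
    // Load the metallib produced by the Core Image build settings.
    guard let url = Bundle.main.url(forResource: "default", withExtension: "metallib") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let data = try Data(contentsOf: url)
    return try CIColorKernel(functionName: "myColorKernel", fromMetalLibraryData: data)
}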
1
2
362
Sep ’24
SwiftUI: keep background view stationary as the main view is resized
I have implemented a sample video editing timeline using SwiftUI and am facing issues, so I am breaking the problem into chunks and posting each issue as a separate question. In the code below, I have a simple timeline built with an HStack comprising a left spacer, a right spacer (each represented as simple black color), and a trimmer UI in the middle. The trimmer resizes as the left and right handles are dragged, and the left and right spacers also adjust in width as the trimmer handles are dragged. Problem: I want to keep the background thumbnails (currently implemented as simple Rectangles filled with different colors) in the trimmer stationary as the trimmer resizes. Currently they move along as the trimmer resizes, as seen in the gif below. How do I fix it?

import SwiftUI

struct SampleTimeline: View {

    let viewWidth: CGFloat = 340 // Width of HStack container for Timeline

    @State var frameWidth: CGFloat = 280 // Width of trimmer

    var minWidth: CGFloat { 2 * chevronWidth + 10 } // Min width of trimmer

    @State private var leftViewWidth: CGFloat = 20
    @State private var rightViewWidth: CGFloat = 20

    var chevronWidth: CGFloat {
        return 24
    }

    var body: some View {
        HStack(spacing: 0) {
            Color.black
                .frame(width: leftViewWidth)
                .frame(height: 70)

            HStack(spacing: 0) {
                Image(systemName: "chevron.compact.left")
                    .frame(width: chevronWidth, height: 70)
                    .background(Color.blue)
                    .gesture(
                        DragGesture(minimumDistance: 0)
                            .onChanged({ value in
                                leftViewWidth = max(leftViewWidth + value.translation.width, 0)

                                if leftViewWidth > viewWidth - minWidth - rightViewWidth {
                                    leftViewWidth = viewWidth - minWidth - rightViewWidth
                                }

                                frameWidth = max(viewWidth - leftViewWidth - rightViewWidth, minWidth)
                            })
                            .onEnded { value in
                            }
                    )

                Spacer()

                Image(systemName: "chevron.compact.right")
                    .frame(width: chevronWidth, height: 70)
                    .background(Color.blue)
                    .gesture(
                        DragGesture(minimumDistance: 0)
                            .onChanged({ value in
                                rightViewWidth = max(rightViewWidth - value.translation.width, 0)

                                if rightViewWidth > viewWidth - minWidth - leftViewWidth {
                                    rightViewWidth = viewWidth - minWidth - leftViewWidth
                                }

                                frameWidth = max(viewWidth - leftViewWidth - rightViewWidth, minWidth)
                            })
                            .onEnded { value in
                            }
                    )
            }
            .foregroundColor(.black)
            .font(.title3.weight(.semibold))
            .background {
                HStack(spacing: 0) {
                    Rectangle().fill(Color.red)
                        .frame(width: 70, height: 60)
                    Rectangle().fill(Color.cyan)
                        .frame(width: 70, height: 60)
                    Rectangle().fill(Color.orange)
                        .frame(width: 70, height: 60)
                    Rectangle().fill(Color.brown)
                        .frame(width: 70, height: 60)
                    Rectangle().fill(Color.purple)
                        .frame(width: 70, height: 60)
                }
            }
            .frame(width: frameWidth)
            .clipped()

            Color.black
                .frame(width: rightViewWidth)
                .frame(height: 70)
        }
        .frame(width: viewWidth, alignment: .leading)
    }
}

#Preview {
    SampleTimeline()
}
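A sketch of one possible approach (an untested assumption, not a confirmed fix): anchor the thumbnail strip to the timeline rather than to the trimmer by cancelling out the trimmer's leading-edge movement, and let the existing .clipped() reveal only the window under the trimmer. The snippet below would replace the .background { ... } on the inner HStack in the code above.

// Drop-in replacement for the trimmer's .background in SampleTimeline above.
.background(alignment: .leading) {
    HStack(spacing: 0) {
        Rectangle().fill(Color.red).frame(width: 70, height: 60)
        Rectangle().fill(Color.cyan).frame(width: 70, height: 60)
        Rectangle().fill(Color.orange).frame(width: 70, height: 60)
        Rectangle().fill(Color.brown).frame(width: 70, height: 60)
        Rectangle().fill(Color.purple).frame(width: 70, height: 60)
    }
    .offset(x: -leftViewWidth)  // shift left as the trimmer's leading edge moves right
}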
1
0
248
Sep ’24
SwiftUI video editing timeline implementation
I am trying to build a video editing timeline using SwiftUI, which consists of a series of trimmers (sample code for one trimmer below). I plan to embed multiple of these trimmers in an HStack to make an editing timeline (the HStack in turn will be embedded in a ScrollView). What I want to achieve is that if I trim the end of any of these trimmers by dragging its left/right edge, the other trimmers on the timeline should move left/right to fill the gap. I understand that as the view shrinks/expands during trimming, there might be a need for spacers at the edges of the HStack whose widths are adjusted while trimming is ongoing. Right now the trimmer code uses a fixed frameWidth for the view, so the view occupies its full space even when the trimming ends move. It is not clear how to modify my code below to achieve what I want. This was pretty much possible to do in UIKit, by the way.

import SwiftUI

struct SimpleTrimmer: View {
    @State var frameWidth: CGFloat = 300
    let minWidth: CGFloat = 30

    @State private var leftOffset: CGFloat = 0
    @State private var rightOffset: CGFloat = 0
    @GestureState private var leftDragOffset: CGFloat = 0
    @GestureState private var rightDragOffset: CGFloat = 0

    private var leftAdjustment: CGFloat {
        // NSLog("Left offset \(leftOffset + leftDragOffset)")
        var adjustment = max(0, leftOffset + leftDragOffset)
        if frameWidth - rightOffset - adjustment - 60 < minWidth {
            adjustment = frameWidth - rightOffset - minWidth - 60.0
        }
        return adjustment
    }

    private var rightAdjustment: CGFloat {
        var adjustment = max(0, rightOffset - rightDragOffset)
        if frameWidth - adjustment - leftOffset - 60 < minWidth {
            adjustment = frameWidth - leftOffset - 60 - minWidth
        }
        return adjustment
    }

    var body: some View {
        HStack(spacing: 10) {
            Image(systemName: "chevron.compact.left")
                .frame(width: 30, height: 70)
                .background(Color.blue)
                .offset(x: leftAdjustment)
                .gesture(
                    DragGesture(minimumDistance: 0)
                        .updating($leftDragOffset) { value, state, trans in
                            state = value.translation.width
                        }
                        .onEnded { value in
                            var maxLeftOffset = max(0, leftOffset + value.translation.width)
                            if frameWidth - rightAdjustment - maxLeftOffset - 60 < minWidth {
                                maxLeftOffset = frameWidth - rightAdjustment - minWidth - 60
                            }
                            leftOffset = maxLeftOffset
                        }
                )

            Spacer()

            Image(systemName: "chevron.compact.right")
                .frame(width: 30, height: 70)
                .background(Color.blue)
                .offset(x: -rightAdjustment)
                .gesture(
                    DragGesture(minimumDistance: 0)
                        .updating($rightDragOffset) { value, state, trans in
                            state = value.translation.width
                        }
                        .onEnded { value in
                            var minRightOffset = max(0, rightOffset - value.translation.width)
                            if minRightOffset < leftAdjustment - 60 - minWidth {
                                minRightOffset = leftAdjustment - 60 - minWidth
                            }
                            rightOffset = minRightOffset
                        }
                )
        }
        .foregroundColor(.black)
        .font(.title3.weight(.semibold))
        .padding(.horizontal, 7)
        .padding(.vertical, 3)
        .background {
            RoundedRectangle(cornerRadius: 7)
                .fill(.yellow)
                .padding(.leading, leftAdjustment)
                .padding(.trailing, rightAdjustment)
        }
        .frame(width: frameWidth)
    }
}

#Preview {
    SimpleTrimmer()
}
0
0
382
Sep ’24
SwiftUI view body invoked in infinite loop
I am observing an infinite loop of view creation, deletion, and recreation when I move my app to the background and bring it back to the foreground. I am clueless about what is causing this repeated invocation of the view body, so I tried Self._printChanges() inside the view body. But all I get is the following in the console:

ChildView: @self, _dismiss changed.    // <--- This is the problematic view
ParentView: unchanged.                 // <--- This is the parent view

The issue has also been reported on the Apple Developer Forums, but no solution was found. What are the other options to debug this issue?
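A mitigation worth trying, based only on the "_dismiss changed" hint and therefore a guess rather than a confirmed diagnosis: stop reading @Environment(\.dismiss) in the view whose body loops and pass an explicit closure instead, so changes to the dismiss environment value no longer invalidate that view. The view names below are placeholders.

import SwiftUI

struct ParentView: View {
    @State private var showChild = false

    var body: some View {
        Button("Show child") { showChild = true }
            .sheet(isPresented: $showChild) {
                ChildView(onDismiss: { showChild = false })
            }
    }
}

struct ChildView: View {
    let onDismiss: () -> Void   // replaces @Environment(\.dismiss)

    var body: some View {
        let _ = Self._printChanges()   // keep tracing while testing
        Button("Done", action: onDismiss)
    }
}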
1
1
394
Jul ’24
Making a ScrollView-based discrete scrubber in SwiftUI
I am trying to recreate a discrete scrubber in SwiftUI with haptic feedback and snapping to the nearest integer step. I use ScrollView and LazyHStack as follows:

struct DiscreteScrubber: View {
    @State var numLines: Int = 100

    var body: some View {
        ScrollView(.horizontal, showsIndicators: false) {
            LazyHStack {
                ForEach(0..<numLines, id: \.self) { _ in
                    Rectangle().frame(width: 2, height: 10, alignment: .center)
                        .foregroundStyle(Color.red)
                    Spacer().frame(width: 10)
                }
            }
        }
    }
}

Problem: I need to add a content inset of half the frame width of the ScrollView so that the first line in the scrubber starts at the center of the view (and so does the last line), and I also need to generate haptic feedback as it scrolls. This was easy in UIKit but is not obvious in SwiftUI.
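A sketch using iOS 17 APIs, under the assumption that an iOS 17 deployment target is acceptable: half-width content margins so the first and last ticks can reach the center, view-aligned snapping, and a selection haptic whenever the centered tick changes. The spacing and state names are illustrative.

import SwiftUI

struct DiscreteScrubberSketch: View {
    let numLines = 100
    @State private var centeredTick: Int?

    var body: some View {
        GeometryReader { proxy in
            ScrollView(.horizontal, showsIndicators: false) {
                LazyHStack(spacing: 10) {
                    ForEach(0..<numLines, id: \.self) { index in
                        Rectangle()
                            .frame(width: 2, height: 10)
                            .foregroundStyle(Color.red)
                    }
                }
                .scrollTargetLayout()
            }
            // Inset the content by half the visible width so tick 0 starts centered.
            .contentMargins(.horizontal, proxy.size.width / 2, for: .scrollContent)
            .scrollTargetBehavior(.viewAligned)                  // snap to the nearest tick
            .scrollPosition(id: $centeredTick)                   // track the current tick
            .sensoryFeedback(.selection, trigger: centeredTick)  // haptic per step
        }
        .frame(height: 40)
    }
}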
0
0
547
Feb ’24
AppTransaction issues for previous purchases under VPP
Dear StoreKit engineers, I recently migrated my app from a paid model to freemium and am using AppTransaction to get the original purchase version and original purchase date, to determine whether a user already paid for the app. It seems to be working for normal App Store users, but I am now flooded with complaints from VPP users who previously purchased the app. It seems the AppTransaction history is absent for these users, and they are asked to sign in with an Apple ID. What is the solution here?
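This does not restore the missing VPP history, but a defensive sketch of the client side can at least avoid misclassifying those users: read the app transaction, and treat "cannot determine" as its own state instead of silently defaulting to "never paid". The version threshold "2.0" and the type names are placeholders.

import Foundation
import StoreKit

enum EntitlementDecision {
    case legacyPurchaser(originalVersion: String)
    case freemium
    case unknown(Error)
}

func checkLegacyEntitlement() async -> EntitlementDecision {
    do {
        let result = try await AppTransaction.shared
        if case .verified(let transaction) = result {
            // "2.0" stands in for the first freemium version of the app.
            let isLegacy = transaction.originalAppVersion
                .compare("2.0", options: .numeric) == .orderedAscending
            return isLegacy
                ? .legacyPurchaser(originalVersion: transaction.originalAppVersion)
                : .freemium
        }
        return .freemium
    } catch {
        // AppTransaction.refresh() may prompt for App Store sign-in, which is
        // exactly what the affected users report, so surface this state in the
        // UI rather than forcing the prompt or hiding paid features outright.
        return .unknown(error)
    }
}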
1
0
624
Jan ’24
SwiftUI Video Player autorotation issue
I am embedding the SwiftUI VideoPlayer in a VStack and see that the screen goes black (i.e. the content disappears even though the video player gets autorotated) when the device is rotated. The issue happens even when I use AVPlayerViewController (as a UIViewControllerRepresentable). Is this a bug, or am I doing something wrong?

var videoURL: URL
let player = AVPlayer()

var body: some View {
    VStack {
        VideoPlayer(player: player)
            .frame(maxWidth: .infinity)
            .frame(height: 300)
            .padding()
            .ignoresSafeArea()
            .background {
                Color.black
            }
            .onTapGesture {
                player.rate = player.rate == 0.0 ? 1.0 : 0.0
            }

        Spacer()
    }
    .ignoresSafeArea()
    .background(content: {
        Color.black
    })
    .onAppear {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.playback,
                                         mode: AVAudioSession.Mode.default,
                                         options: AVAudioSession.CategoryOptions.duckOthers)
        } catch {
            NSLog("Unable to set session category to playback")
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player.replaceCurrentItem(with: playerItem)
    }
}
0
0
783
Jan ’24
AVSampleBufferDisplayLayer vs CAMetalLayer for displaying HDR
I have been using MTKView to display CVPixelBuffers from the camera. I use many options to configure the color space of the MTKView/CAMetalLayer that may be needed to tone map content to the display (CAEDRMetadata, for instance). If, however, I use AVSampleBufferDisplayLayer, there are not many configuration options for color matching. I believe AVSampleBufferDisplayLayer uses pixel buffer attachments to determine the native color space of the input image and does the tone mapping automatically. Does AVSampleBufferDisplayLayer have any limitations compared to MTKView, or can both be used without any compromise on functionality?
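For concreteness, a sketch of the attachment-driven path the question describes, written as assumed typical usage rather than Apple-confirmed behavior: tag the pixel buffer with its color primaries and transfer function, wrap it in a CMSampleBuffer, and enqueue it on the layer. The BT.2020/HLG attachment values and zero timestamp are illustrative.

import AVFoundation
import CoreMedia
import CoreVideo

func enqueue(_ pixelBuffer: CVPixelBuffer, on layer: AVSampleBufferDisplayLayer) {
    // Color-space attachments the layer can use for matching / tone mapping.
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                          kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                          kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)

    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: nil,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let formatDescription else { return }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: .zero,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: nil,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: formatDescription,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    if let sampleBuffer {
        layer.enqueue(sampleBuffer)
    }
}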
0
0
776
Nov ’23