I made a new build and it is finally visible!
My understanding is as follows (to be verified); I think there are several problems:
1) High processing times on Apple's side (due to the release of iOS 14?)
2) Apple does not alert you to configuration problems (everything is green when uploading, but the build does not appear in the App Store Connect interface)... no email or message to help you understand what is happening.
My advice: carefully check the configuration of your Info.plist file!
Thank you for your reply.
In fact, I noticed a color problem that occurs after correcting the orientation of the image (the captured image is in landscape format)...
The corrected image has slightly different colors from the uncorrected version.
Here is the code I use to correct the orientation:
let pixelBufferRef = frame.capturedImage
let resolution = frame.camera.imageResolution
var image = CIImage(cvPixelBuffer: pixelBufferRef)
let viewportSize = CGSize(width: resolution.height, height: resolution.width)
let transform = frame.displayTransform(for: .portraitUpsideDown, viewportSize: viewportSize)
image = image.transformed(by: transform)
let context = CIContext()
if let imageRef = context.createCGImage(image, from: image.extent) {
    let png = context.pngRepresentation(of: image, format: .BGRA8, colorSpace: image.colorSpace!)
    try? png?.write(to: documentsURL.appending(component: "captured-image-corrected.png"))
}
Did I miss something?
Thank you!
The color spaces are identical.
Before:
<CGColorSpace 0x281da9440> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; QuickTime 'nclc' Video (1,1,6))
After:
<CGColorSpace 0x281da9440> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; QuickTime 'nclc' Video (1,1,6))
But you are absolutely right, the code I posted was wrong!
Here is a cleaner version that seems to work:
let pixelBufferRef = frame.capturedImage
let resolution = frame.camera.imageResolution
var image = CIImage(cvPixelBuffer: pixelBufferRef)
let viewportSize = CGSize(width: resolution.height, height: resolution.width)
let transform = frame.displayTransform(for: .portraitUpsideDown, viewportSize: viewportSize)
image = image.transformed(by: transform)
do {
    let context = CIContext()
    let url = getDocumentsDirectory().appending(component: "captured-image-corrected.png")
    try context.writePNGRepresentation(of: image, to: url, format: .RGBA8, colorSpace: image.colorSpace!)
} catch {
    fatalError(error.localizedDescription)
}
Does this sound better to you?
I am encountering a similar issue with code that works perfectly on all devices except the iPhone 16 Pro Max: I observe a significant, random slowdown when using AVAssetWriter.
The issue only appears at the launch of the application (e.g., when it hasn't been used for several hours).
@WindowsMEMZ: Thank you!!
@Quinn Thank you for your clear and precise answer, which helped me understand my mistake: I had misunderstood how withObservationTracking works. It observes a change to the observed property only once (I thought it kept observing continuously).
Does the following code seem correct to you? (I added the observe method)
import Foundation
import Observation

@Observable
class AsyncJob {
    var progress: Int = 0

    func run() async throws {
        for _ in 0..<3 {
            print("will do some work")
            try await Task.sleep(for: .seconds(1))
            self.progress += 1
            print("did do some work")
        }
    }
}

func observe(job: AsyncJob) {
    withObservationTracking {
        print("apply, \(job.progress)")
    } onChange: {
        observe(job: job)
    }
}

func main() async {
    do {
        let job = AsyncJob()
        observe(job: job)
        try await job.run()
    } catch {
        print(error)
    }
}

await main()
Thank you again for your response! Indeed, you are right: concurrent access management is missing.
Does this seem correct to you? (It's not easy; the subject seems complex to me, and I can't use an NSLock in an asynchronous context, or an actor in this case...) What would you recommend for an architecture of this type?
import Foundation
import Observation

@Observable
class AsyncJob: @unchecked Sendable {
    var progress: Int = 0
    let serialQueue = DispatchQueue(label: "serial.queue")

    func run() async throws {
        for _ in 0..<3 {
            try await Task.sleep(for: .seconds(1))
            serialQueue.async {
                self.progress += 1
            }
        }
    }
}

func observe(job: AsyncJob) {
    withObservationTracking {
        print("apply, \(job.progress)")
    } onChange: {
        observe(job: job)
    }
}

func main() async {
    do {
        let job = AsyncJob()
        observe(job: job)
        try await job.run()
    } catch {
        print(error)
    }
}

await main()
The topic is very interesting, and I humbly acknowledge that I haven’t grasped all the subtleties of Swift 6 (I’ll need to improve my skills on this subject).
Here’s more information: my application takes input data and generates an output file. The file generation (which is complex) is carried out on a secondary thread (old school with GCD). Compatibility with Swift Concurrency is handled through a facade using withCheckedThrowingContinuation.
The processes performed on the secondary thread (file creation) require configuration data that is used solely in a read-only way (the secondary thread does not modify this data, which is encapsulated in an object).
In short, it looks something like this (names have been simplified for the example):
Job (file creation on a secondary thread using GCD)
AsyncJob (a facade over Job to ensure compatibility with Swift Concurrency)
Configuration (data needed for the job processing)
Code example:
do {
    let configuration = Configuration(...)
    let job = AsyncJob(configuration)
    try await job.run()
} catch {
    print(error)
}
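For reference, the facade itself looks roughly like this (a simplified sketch: the completion-handler API of Job is hypothetical, standing in for my real GCD-based implementation, and the Configuration part is omitted):

```swift
import Foundation

// Hypothetical stand-in for the real GCD-based Job.
final class Job {
    private let queue = DispatchQueue(label: "job.queue")

    func run(completion: @escaping (Result<String, Error>) -> Void) {
        queue.async {
            // ... complex file generation happens here ...
            completion(.success("output.file"))
        }
    }
}

// Facade bridging the callback-based Job into Swift Concurrency.
final class AsyncJob {
    private let job = Job()

    func run() async throws -> String {
        try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<String, Error>) in
            job.run { result in
                // Resume exactly once with the job's result.
                continuation.resume(with: result)
            }
        }
    }
}
```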
To be honest, I haven’t established a specific strategy regarding concurrent access in my application, as the configuration is not modified and is only used by AsyncJob (but perhaps this is a mistake on my part?).
In light of our discussion, I’m wondering what would be the best approach if I had multiple jobs using shared data that could be modified?
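To illustrate my question, I imagine the shared, mutable data could be isolated in an actor, something like this (names are hypothetical):

```swift
// Hypothetical sketch: mutable configuration shared by several jobs,
// isolated in an actor so that all reads and writes are serialized.
actor SharedConfiguration {
    private(set) var outputDirectory: String

    init(outputDirectory: String) {
        self.outputDirectory = outputDirectory
    }

    func update(outputDirectory: String) {
        self.outputDirectory = outputDirectory
    }
}
```

Each job would then read the configuration with await, and updates coming from any task would be serialized by the actor. Is that the direction you would take?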
Thank you again for your help! Your advice will allow me to improve my architecture and my vision for Swift 6.
I really appreciate your approach, and the use of AsyncStream (which I wasn’t familiar with) seems well-suited for event transmission in an async-await context.
My application handles cancellation at the lowest level through the cancel() method of the Job object, and the facade (the AsyncJob object) exposes the same API. Cancellation is implemented with a traditional approach: the processing code block regularly checks whether the user has requested cancellation of the job.
Otherwise, I usually manage cancellation in an async/await context by keeping a reference to the task and calling task?.cancel().
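Concretely, that second approach looks something like this (a simplified sketch, with a hypothetical JobRunner type):

```swift
// Hypothetical sketch: cancellation by keeping a reference to the Task.
final class JobRunner {
    private var task: Task<Void, Error>?

    func start() {
        task = Task {
            for _ in 0..<100 {
                // Cooperatively stop if cancel() has been called.
                try Task.checkCancellation()
                try await Task.sleep(for: .milliseconds(100))
            }
        }
    }

    func cancel() {
        task?.cancel()
    }

    // Wait for the task to finish (rethrows CancellationError if cancelled).
    func wait() async throws {
        try await task?.value
    }
}
```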
What approach do you recommend?
Thank you, you've helped me a lot!!