How to improve the captured image's resolution?
To create signatures for human faces and compare their similarity, I'm using ARKit's `capturedImage` from `ARFrame`, which comes from the front-facing camera with `ARFaceTrackingConfiguration`. However, compared to using the Vision and AVFoundation frameworks, the quality of the signature analysis is significantly impacted by the `capturedImage`'s low resolution. According to `capturedDepthData`, the resolution of the `capturedImage` in ARKit is just 640x480, even though the video format is set to the highest resolution:

```swift
let configuration = ARFaceTrackingConfiguration()
if let videoFormat = ARFaceTrackingConfiguration.supportedVideoFormats.sorted(by: {
    ($0.imageResolution.width * $0.imageResolution.height) < ($1.imageResolution.width * $1.imageResolution.height)
}).last {
    configuration.videoFormat = videoFormat
}
```

I tried using `captureHighResolutionFrame` as well as changing the video format:

```swift
if let videoFormat = ARFaceTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    configuration.videoFormat = videoFormat
}
```

However, according to the documentation:

> The system delivers a high-resolution frame out-of-band, which means that it doesn't affect the other frames that the session receives at a regular interval

Because the high-resolution capture is asynchronous, the session seems to alternate between the standard captured images and the high-resolution images rather than replacing the regular ones. This is a concern because, depending on the size variations, `displayTransform` and `CGAffineTransform` must be applied differently to scale the images. Moreover, I need to be able to use the frames continuously at either 30 fps or 60 fps, as they're produced, rather than taking pictures occasionally, which is what `captureHighResolutionFrame` seems to be designed for, considering the shutter sound.

To use the captured image, I'm currently transforming it in the following way:

```swift
let image: CIImage = CIImage(cvImageBuffer: imageBuffer)
let imageSize: CGSize = CGSize(width: CVPixelBufferGetWidth(imageBuffer),
                               height: CVPixelBufferGetHeight(imageBuffer))
let normalizeTransform: CGAffineTransform = CGAffineTransform(scaleX: 1.0 / imageSize.width,
                                                              y: 1.0 / imageSize.height)
let flipTransform: CGAffineTransform = metadata.orientation.isPortrait
    ? CGAffineTransform(scaleX: -1, y: -1).translatedBy(x: -1, y: -1)
    : .identity

guard let viewPort: CGRect = face.viewPort else { return nil }
let viewPortSize: CGSize = viewPort.size

guard let displayTransform: CGAffineTransform = face.arFrame?.displayTransform(
    for: metadata.orientation,
    viewportSize: CGSize(width: viewPortSize.width, height: viewPortSize.height)
) else {
    return nil
}

let scaleX: CGFloat = viewPortSize.width
let scaleY: CGFloat = viewPortSize.height
let viewPortTransform: CGAffineTransform = CGAffineTransform(scaleX: scaleX, y: scaleY)

let scaledImage: CIImage = image
    .transformed(by: normalizeTransform
        .concatenating(flipTransform)
        .concatenating(displayTransform)
        .concatenating(viewPortTransform)
    )
    .cropped(to: viewPort)
```
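For reference, here is roughly how the out-of-band request looks on my end (a minimal sketch, assuming iOS 16's `ARSession.captureHighResolutionFrame(completion:)`; `session` stands in for the running session):

```swift
import ARKit

// Minimal sketch of requesting a high-resolution frame out-of-band.
// The resulting buffer's dimensions typically differ from the regular
// frames, which is why the displayTransform math has to branch.
func requestHighResolutionFrame(from session: ARSession) {
    session.captureHighResolutionFrame { frame, error in
        guard let frame = frame else {
            print("High-resolution capture failed: \(String(describing: error))")
            return
        }
        let buffer = frame.capturedImage
        let size = CGSize(width: CVPixelBufferGetWidth(buffer),
                          height: CVPixelBufferGetHeight(buffer))
        print("Captured high-resolution frame at \(size)")
    }
}
```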
2 replies · 0 boosts · 1.3k views · Feb ’23
dyld: Library not loaded
I'm getting what seems to be a dynamic linker error, but can't seem to resolve it:

```
dyld[541]: Library not loaded: @rpath/TCore.framework/TCore
  Referenced from: /private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/FaceSDK.framework/FaceSDK
  Reason: tried: '/private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/TCore.framework/TCore' (no such file), '/private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/FaceSDK.framework/Frameworks/TCore.framework/TCore' (no such file), '/usr/lib/swift/TCore.framework/TCore' (no such file), '/private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/TCore.framework/TCore' (no such file), '/private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/TCore.framework/TCore' (no such file), '/private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/TCore.framework/TCore' (no such file), '/usr/lib/swift/TCore.framework/TCore' (no such file), '/private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/TCore.framework/TCore' (no such file), '/private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/TCore.framework/TCore' (no such file), '/private/var/containers/Bundle/Application/EEBCD2F1-3A25-4BCE-A151-04EC09374288/NavigationApp.app/Frameworks/TCore.framework/TCore' (no such file), '/System/Library/Frameworks/TCore.framework/TCore' (no such file)
```

So far I've tried embedding and linking the framework in both the "General" and "Build Phases" tabs of the target settings. I've also tried setting Dynamic Library Install Name Base to `@rpath`, as well as setting Always Embed Swift Standard Libraries to YES, but none of these seem to work.
6 replies · 0 boosts · 3.9k views · May ’22
How does UIEvent happen?
Broadly speaking, what is the sequence of events that happens from the moment a UIEvent is created, particularly for UITouch-related events? I'm getting conflicting information from various places.

One article says:

1. UITouch/UIEvent is created
2. UIApplication.shared.sendEvent(_:) sends the event to a queue
3. The event goes directly to the view's UIGestureRecognizer or UIResponder

It doesn't mention when the hit-tests from UIWindow all the way up to the touched view happen.

A second article says:

1. An event is created
2. The hit-test is performed on the view that was touched
3. UIApplication.shared.sendEvent(_:)
4. UIWindow.sendEvent(_:)
5. UIGestureRecognizer
6. Hit-test happens again?
7. UIResponder

Another article says:

1. An event is created
2. UIApplication manages the queue of UIEvents
3. UIWindow performs the hit-test. I'm guessing this is done by UIWindow.sendEvent(_:)?
4. UIApplication.shared.sendEvent(_:) sends the event to the first responder
5. The event is handled by the gesture recognizer or the responder

Finally, a book I'm reading says:

1. An event is created
2. The event is handed to an instance of UIApplication through UIApplication.shared.sendEvent(_:)
3. UIWindow gets the event through UIWindow.sendEvent(_:)
4. UIWindow performs the hit-tests on the view hierarchy as well as the UIResponder

I'm OK with some of these resources having incomplete information. What frustrates me is that some of them give completely different orders. So far, I'm certain that instances of UIEvent and UITouch are created first and that UIGestureRecognizer and UIResponder are reached last, but that's about it. What I'm confused about is:

1. When do the hit-tests of the view hierarchy happen?
2. What exactly does UIApplication.shared.sendEvent(_:) do? The definition says "Dispatches an event to the appropriate responder objects in the app", but that means the UIEvent has to already know the touched view associated with it, which means this method has to be invoked after the hit-tests. Yet none of the above resources say that UIApplication.shared.sendEvent(_:) happens AFTER the hit-tests.
3. The definition of UIWindow.sendEvent(_:) says that it's for dispatching the event to a view (and not to a responder like UIApplication.shared.sendEvent(_:)). I'm not exactly sure what the difference is. And does this happen before or after the hit-tests?
4. What is this "queue" they keep referring to? Is it the queue for sending to the instance of UIApplication, or to the responders, or to UIWindow, or for performing hit-tests?

P.S. For some reason I'm unable to post any of the links to the articles; I keep getting "This URL can't be included in your post. Please remove it to continue".
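One way I've been trying to probe the order empirically (a hedged sketch, not an authoritative answer; the class names are hypothetical) is to subclass UIWindow and a view and log when `sendEvent(_:)`, `hitTest(_:with:)`, and `touchesBegan(_:with:)` fire relative to each other:

```swift
import UIKit

// Hypothetical logging subclasses to observe the dispatch order at runtime.
class LoggingWindow: UIWindow {
    override func sendEvent(_ event: UIEvent) {
        print("UIWindow.sendEvent(_:)")
        super.sendEvent(event)
    }
}

class LoggingView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        print("hitTest(_:with:)")
        return super.hitTest(point, with: event)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        print("touchesBegan(_:with:)")
        super.touchesBegan(touches, with: event)
    }
}
```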
0 replies · 0 boosts · 535 views · Dec ’20
What can the intrinsic content size provide that the Auto Layout anchors can't?
I'm having a difficult time understanding the practical usage of the intrinsic content size, especially the cases where you have to override the existing intrinsic size. It seems like using the widthAnchor and the heightAnchor resolves what the intrinsic content size is trying to solve. For example, if I wanted a certain intrinsic size:

```swift
class CustomView: UIView {
    override var intrinsicContentSize: CGSize {
        return CGSize(width: 200, height: 100)
    }
}
```

I could achieve the same effect by using the Auto Layout anchors:

```swift
NSLayoutConstraint.activate([
    view.heightAnchor.constraint(equalToConstant: 100),
    view.widthAnchor.constraint(equalToConstant: 200)
])
```

When a stack view requires its arranged subviews to have a size, you can also provide it with the anchors. The only two use cases where I found it helpful are, first, when you want to adjust the existing intrinsicContentSize:

```swift
class CustomView: UIView {
    override var intrinsicContentSize: CGSize {
        let originalSize = super.intrinsicContentSize
        return CGSize(width: originalSize.width + 200, height: 100)
    }
}
```

But I feel like intrinsicContentSize offers far more than simply using it as margins. The second use case is when you change the priorities with setContentCompressionResistancePriority(_:for:) or setContentHuggingPriority(_:for:) to indicate how the view responds to changes in its superview dynamically.
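For comparison, here is a sketch of the kind of case where I suspect the override earns its keep (hedged; `BadgeView` is a hypothetical name): a view whose natural size depends on its content, so a fixed width/height constraint can't express it without being updated manually on every change.

```swift
import UIKit

// Hypothetical badge view whose natural size tracks its text.
// With intrinsicContentSize, Auto Layout re-queries the size instead of
// requiring width/height constraints to be edited on each text change.
class BadgeView: UIView {
    private let label = UILabel()

    var text: String = "" {
        didSet {
            label.text = text
            // Tells Auto Layout the natural size changed.
            invalidateIntrinsicContentSize()
        }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        label.translatesAutoresizingMaskIntoConstraints = false
        addSubview(label)
        NSLayoutConstraint.activate([
            label.centerXAnchor.constraint(equalTo: centerXAnchor),
            label.centerYAnchor.constraint(equalTo: centerYAnchor)
        ])
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override var intrinsicContentSize: CGSize {
        // The label's content size plus fixed padding, recomputed on demand.
        let labelSize = label.intrinsicContentSize
        return CGSize(width: labelSize.width + 16, height: labelSize.height + 8)
    }
}
```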
1 reply · 0 boosts · 627 views · Nov ’20
How to use additionalSafeAreaInsets
I'm trying to implement additionalSafeAreaInsets, similar to the example from Apple's documentation (https://developer.apple.com/documentation/uikit/uiview/positioning_content_relative_to_the_safe_area), but the child view controller is not respecting the additional insets:

```swift
import UIKit
import PlaygroundSupport

class MyVC: UIViewController {
    var leftView: UIView!
    let secondVC = SecondVC()

    override func viewDidLoad() {
        super.viewDidLoad()
        leftView = UIView(frame: CGRect(origin: .zero, size: .init(width: 50, height: self.view.bounds.size.height)))
        leftView.backgroundColor = .blue
        self.view.addSubview(leftView)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(true)

        var newSafeArea = UIEdgeInsets()
        if let sideViewWidth = leftView?.bounds.size.width {
            newSafeArea.left += sideViewWidth
        }

        print(newSafeArea)

        addChild(secondVC)
        secondVC.view.frame = self.view.safeAreaLayoutGuide.layoutFrame
        self.view.addSubview(secondVC.view)
        secondVC.didMove(toParent: self)
        secondVC.additionalSafeAreaInsets = newSafeArea
    }
}

class SecondVC: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.backgroundColor = .yellow
    }
}

PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.liveView = MyVC()
```

The expectation is that the left column view (leftView) sits alongside the child view controller (secondVC) within the root view controller (MyVC), but the child view controller simply takes up the entire main view.
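For what it's worth, this is the direction I plan to try next (a hedged sketch of a revised `viewDidAppear` for the `MyVC` above, untested): set the insets on the *parent* so its safe area shrinks past the left column, and pin the child's view to the parent's safeAreaLayoutGuide with constraints instead of a one-shot frame assignment.

```swift
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)

    // Extend the parent's safe area past the left column.
    var newSafeArea = UIEdgeInsets()
    newSafeArea.left += leftView.bounds.width
    additionalSafeAreaInsets = newSafeArea

    addChild(secondVC)
    secondVC.view.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(secondVC.view)
    NSLayoutConstraint.activate([
        // Constraints re-resolve whenever the safe area changes,
        // unlike assigning safeAreaLayoutGuide.layoutFrame once.
        secondVC.view.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor),
        secondVC.view.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor),
        secondVC.view.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
        secondVC.view.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor)
    ])
    secondVC.didMove(toParent: self)
}
```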
13 replies · 0 boosts · 5.8k views · Nov ’20
How do you use UIEdgeInsets?
I've been trying different ways on the Playground, but the margins are not being applied. I'm trying to understand the proper way of using UIEdgeInsets (and NSDirectionalEdgeInsets). My expectation is that the superview should have margins of the specified size and the subview should stay within those margins, sort of like an invisible border. The following didn't show any margins on the superview:

```swift
import UIKit
import PlaygroundSupport

let rootView = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))
rootView.layoutMargins = UIEdgeInsets(top: 100, left: 100, bottom: 100, right: 100)

let subview = UIView()
subview.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    subview.widthAnchor.constraint(equalToConstant: 100),
    subview.heightAnchor.constraint(equalToConstant: 100)
])
subview.backgroundColor = .red
rootView.addSubview(subview)

PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.liveView = rootView
```

I've also tried:

```swift
rootView.layoutMargins.top = 100
```

or, instead of Auto Layout:

```swift
let subview = UIView(frame: CGRect(origin: .zero, size: .init(width: 100, height: 100)))
```

Needless to say, NSDirectionalEdgeInsets didn't work either:

```swift
rootView.directionalLayoutMargins = NSDirectionalEdgeInsets(top: 100, leading: 100, bottom: 100, trailing: 100)
```

I tried it with a view controller, but it was still futile:

```swift
class MyVC: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let rootView = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))
        rootView.layoutMargins = UIEdgeInsets(top: 100, left: 100, bottom: 100, right: 100)
        self.view.addSubview(rootView)

        let subview = UIView()
        subview.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            subview.widthAnchor.constraint(equalToConstant: 100),
            subview.heightAnchor.constraint(equalToConstant: 100)
        ])
        subview.backgroundColor = .red
        rootView.addSubview(subview)
    }
}
```

What am I doing wrong?
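One hedged guess I've been circling: layoutMargins don't push subviews around by themselves; they only matter when something is constrained to the superview's layoutMarginsGuide. A sketch of what I mean, reusing the Playground setup above:

```swift
import UIKit
import PlaygroundSupport

// Sketch: the margins take effect once the subview is pinned to
// the superview's layoutMarginsGuide rather than given a fixed size.
let rootView = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))
rootView.backgroundColor = .white
rootView.layoutMargins = UIEdgeInsets(top: 100, left: 100, bottom: 100, right: 100)

let subview = UIView()
subview.translatesAutoresizingMaskIntoConstraints = false
subview.backgroundColor = .red
rootView.addSubview(subview)

NSLayoutConstraint.activate([
    // Pin all four edges to the margins guide; the 100pt insets
    // become the visible "invisible border".
    subview.leadingAnchor.constraint(equalTo: rootView.layoutMarginsGuide.leadingAnchor),
    subview.trailingAnchor.constraint(equalTo: rootView.layoutMarginsGuide.trailingAnchor),
    subview.topAnchor.constraint(equalTo: rootView.layoutMarginsGuide.topAnchor),
    subview.bottomAnchor.constraint(equalTo: rootView.layoutMarginsGuide.bottomAnchor)
])

PlaygroundPage.current.liveView = rootView
```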
0 replies · 0 boosts · 954 views · Nov ’20
Difference between UIWindowScene and View Controller Scene
When an app launches, UIApplicationMain creates the UIWindowScene, which is used to instantiate the window. A view controller is then instantiated and assigned to the rootViewController property of the window. In other words: UIWindowScene -> window -> rootViewController. As far as I understand, there is usually one scene to which view controllers can be added (although you can create more than one scene). But when I look at a storyboard, I see a "View Controller Scene" per view controller. Are these scenes different from the window scene? If so, are they properties of the UIWindowScene or of the window?
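To make my mental model concrete, this is the relationship as I understand it from the standard scene-delegate boilerplate (a sketch of typical setup code, not something taken from the storyboard):

```swift
import UIKit

// One UIWindowScene hosts one (or more) windows, and each window has a
// rootViewController. The storyboard's "View Controller Scene" entries
// are what I'm trying to reconcile with this hierarchy.
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        let window = UIWindow(windowScene: windowScene)   // UIWindowScene -> window
        window.rootViewController = UIViewController()    // window -> rootViewController
        window.makeKeyAndVisible()
        self.window = window
    }
}
```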
1 reply · 0 boosts · 3.0k views · Nov ’20
What is a nil-targeted action for in Swift?
As far as I understand, a UIControl subclass maintains an internal dispatch table of target-action pairs. When a user event occurs, the control finds the appropriate target and its matching action to call. With a nil target, it does this by climbing the responder chain until it finds an object that responds to the action. Why wouldn't you specify the target? Is it so that the control becomes reusable?
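For context, this is the pattern I mean (a minimal sketch; the view controller and selector names are hypothetical):

```swift
import UIKit

// Hypothetical responder somewhere up the chain that can handle the action.
class FormViewController: UIViewController {
    @objc func submit(_ sender: UIButton) {
        print("Handled by whichever responder implements submit(_:)")
    }

    func makeSubmitButton() -> UIButton {
        let button = UIButton(type: .system)
        // target is nil: at touch time, UIKit walks the responder chain
        // upward from the button and calls submit(_:) on the first
        // responder that implements it.
        button.addTarget(nil,
                         action: #selector(FormViewController.submit(_:)),
                         for: .touchUpInside)
        return button
    }
}
```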
5 replies · 1 boost · 1.3k views · Nov ’20
The File’s Owner object of a NIB file
I have a few questions regarding the proxy objects of a NIB file, specifically the File's Owner object. The File's Owner object is a placeholder that stands in for an instance of a class outside the NIB file, which will substitute for it when the NIB is loaded.

1. From what I understand, NIB and XIB files are lazily loaded. Does this mean the placeholder (before it is substituted by the "real" class's instance) is already an instance (not a type) even before the NIB file is loaded? What is this placeholder an instance of?
2. The File's Owner object is configured with a custom class, say ViewController. Why do you have to specify the owner again when you load the NIB file?

```swift
Bundle.main.loadNibNamed("SomeNIBFile", owner: self)
```
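For reference, here's the loading pattern in context (a hedged sketch; `SomeNIBFile` and the outlet are hypothetical, assuming the XIB's File's Owner class is set to `ViewController`):

```swift
import UIKit

// Hypothetical owner class matching the File's Owner custom class
// configured in SomeNIBFile.xib.
class ViewController: UIViewController {
    // Outlet wired in Interface Builder through the File's Owner proxy.
    @IBOutlet var customView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Passing `self` as the owner is what lets the runtime connect
        // the NIB's File's Owner outlets/actions to this live instance.
        _ = Bundle.main.loadNibNamed("SomeNIBFile", owner: self, options: nil)
        view.addSubview(customView)
    }
}
```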
13 replies · 0 boosts · 2.7k views · Oct ’20
Is manually linking binary with libraries and frameworks necessary?
When you build your project, what makes a framework usable is a two-stage process: you first import the framework's headers in order to compile, and the target must link against the framework's binary so they can interact at runtime. But how am I able to use Apple's frameworks by only importing them, without linking them to my target in the Build Phases? For example, I've seen some people manually add MapKit to the "Link Binary With Libraries" section of Build Phases, but it works perfectly fine without doing so. What is the rule of thumb? Do I ever have to do this manually? Do I have to make sure to do this for third-party frameworks?
1 reply · 0 boosts · 3.5k views · Oct ’20
Questions regarding the fee for the public container of CloudKit
I'm having difficulty understanding the true cost of using the public container of CloudKit. I understand that the private container's usage counts toward the user's personal iCloud storage and the public container's toward a separate limit, but I'm curious about the cost of the public container. Virtually every website I've visited while researching this refers to Apple's calculator. This is the little information I was able to scrape together. Basically, the public container is free to use up to a certain limit, and then you get charged for anything extra. The limits are as follows:

- Asset storage: 250 MB
- Database storage: 2.5 MB
- Data transfer: 50 MB
- Requests per second: 10 per 100k users

Overage fees:

- Asset storage: $0.03/GB
- Database storage: $3.00/GB
- Data transfer: $0.10/GB
- Requests per second: $100 per 10 requests

This is what Apple's website says about these numbers:

- Limits are calculated and based on an average usage across the total number of active users per month.
- Users must have active container usage within the last 16 months.
- If your account exceeds its free limits of public storage, data transfer, or requests per second, overage charges may apply.
- Your app will not get additional free allowances by using multiple containers.

I have a few questions, and any help would be appreciated:

1. Is the 50 MB transfer limit per month or per transfer?
2. I'm not sure if I'm understanding the 2.5 MB database storage limit correctly, because that's pretty minuscule.
3. Does the user bear the cost or the developer? If the former, what happens when the limit is reached? Does iOS or iPadOS inform the user, or does the app have to notify the user like with In-App Purchase? Does the app have to inform the user up front that there is a limit? I've never seen any app warn me about a public-container usage limit. If the latter, is there a way to set up an In-App Purchase option and offer access to the public container as a premium feature?
0 replies · 0 boosts · 599 views · Sep ’20
CloudKit's public container can be accessed on wifi, but not on the cellular network
I'm attempting to write to the public container of CloudKit, but I keep getting the error message:

```
CKError 0x281ff9ec0: "Network Unavailable" (3/NSURLErrorDomain:-1009); "The Internet connection appears to be offline."
```

only when I'm accessing the container through the cellular network. When I try on a Wi-Fi network, however, it works perfectly fine and I can confirm the presence of the uploaded data on the dashboard. There is absolutely nothing wrong with the cellular connectivity of my device.

```swift
let publicCloudDatabase = CKContainer.default().publicCloudDatabase

let operation = CKModifyRecordsOperation(recordsToSave: [exampleRecord], recordIDsToDelete: nil)

let operationConfiguration = CKOperation.Configuration()
operationConfiguration.allowsCellularAccess = true
operationConfiguration.qualityOfService = .userInitiated
operation.configuration = operationConfiguration

operation.perRecordProgressBlock = { (record, progress) in
    print(progress)
}

operation.perRecordCompletionBlock = { (record, error) in
    print("Upload complete")
}

publicCloudDatabase.add(operation)

publicCloudDatabase.save(exampleRecord) { [unowned self] record, error in
    if let error = error {
        print("public cloud database error: \(error)")
    } else {
        print("Successfully uploaded to Public Cloud DB")
    }
}
```

I've tried different qualities of service, like userInteractive, but it still doesn't work. I initialize the container in AppDelegate.swift with the standard code:

```swift
lazy var persistentCloudKitContainer: NSPersistentCloudKitContainer = {
    let container = NSPersistentCloudKitContainer(name: "Example")

    guard let description = container.persistentStoreDescriptions.first else {
        fatalError("Could not retrieve a persistent store description")
    }

    let options = NSPersistentCloudKitContainerOptions(containerIdentifier: "iCloud.com.noName.Example")
    description.cloudKitContainerOptions = options

    do {
        try container.initializeCloudKitSchema()
    } catch {
        print("Initialize error: \(error)")
    }

    container.loadPersistentStores(completionHandler: { (storeDescription, error) in
        if let error = error as NSError? {
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    })
    return container
}()
```

The Security Roles on the dashboard show that the creator can both read and write.

Xcode: Version 12.0 beta 6
iOS: 14.0
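To rule out the device's cellular path itself, I've been considering a quick check with NWPathMonitor before blaming CloudKit (a diagnostic sketch, not part of the app):

```swift
import Network

// Diagnostic sketch: confirm the app actually sees a usable cellular
// path; Low Data Mode or a constrained path could also explain -1009.
let monitor = NWPathMonitor(requiredInterfaceType: .cellular)
monitor.pathUpdateHandler = { path in
    print("Cellular path status: \(path.status)")        // .satisfied means usable
    print("Is expensive: \(path.isExpensive)")           // cellular is typically expensive
    print("Constrained (Low Data Mode): \(path.isConstrained)")
}
monitor.start(queue: DispatchQueue(label: "cellular.monitor"))
```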
2 replies · 0 boosts · 1.1k views · Sep ’20