Running
xcrun xcresulttool get build-results --path ResultFile
does not always show the warnings that the build emitted. If I run the legacy command,
xcrun xcresulttool get --legacy --path ResultFile
against the same file, there are (numerous) warnings, whereas the non-legacy call above shows no warnings.
We're building a pkg with three apps in it from the command line. There is one primary app and two supporting apps. We build a folder structure inside a temp directory like the one below (some folder names replaced with generic ones):
mkdir -p ./tmp/Applications/.hiddenfolder/
mkdir -p ./tmp/Library/Application\ Support/Company/
mkdir -p ./tmp/Library/Preferences/
mkdir -p ./tmp/Library/Logs/Company/
mkdir -p ./tmp/Library/LaunchAgents/
mkdir -p ./tmp/Library/Company/
mkdir -p ./tmp/Library/LaunchDaemons/
#Grant Logs Folder Read-Write Access to All
chmod a+rw ./tmp/Library/Logs/Company/
chmod a+rw ./tmp/Library/Application\ Support/Company/
We then build and sign each app dependency and place them into the temporary folder. For each app we're calling:
xcodebuild -workspace "$PROJECT" -scheme "$TARGET" -configuration Release -derivedDataPath "$WORKING" clean build
codesign --force --deep -o runtime --entitlements "../$TARGET/$APPLICATION.entitlements" --sign "$DEVKEY" "$WORKING/Build/Products/Release/$APPLICATION.app"
cp -R "$WORKING/Build/Products/Release/$APPLICATION.app" "$DESTINATION"
The primary app is copied into ./tmp/Applications/.hiddenfolder/. The other two apps are put in ./tmp/Library/Company/ and ./tmp/Applications/.
We then create the component list, build, and notarize the final pkg using the script below:
Some definitions of the variables used below:
IDENTIFIER=com.company.pkg.app1 (not real id but an example)
ROOT=./tmp
SCRIPTS=./scripts
GUI=./pkggui
pkgbuild --analyze --identifier "$IDENTIFIER" --version "$VERSION" --root "$ROOT" --scripts "$SCRIPTS" "$NAME-tmp.plist"
/usr/libexec/PlistBuddy -c "SET 0:BundleIsRelocatable NO" "$NAME-tmp.plist"
/usr/libexec/PlistBuddy -c "SET 1:BundleIsRelocatable NO" "$NAME-tmp.plist"
/usr/libexec/PlistBuddy -c "SET 2:BundleIsRelocatable NO" "$NAME-tmp.plist"
pkgbuild --identifier "$IDENTIFIER" --version "$PKGVERSION" --root "$ROOT" --scripts "$SCRIPTS" --component-plist "$NAME-tmp.plist" "$NAME-tmp.pkg"
productbuild --synthesize --package "$NAME-tmp.pkg" distribution.xml
sed -i "" \
-e '$ i\
\ <title>App1</title>' \
-e '$ i\
\ <background file="background.icns" alignment="bottomleft" scaling="proportional" />' \
-e '$ i\
\ <welcome file="welcome.txt" />' \
-e '$ i\
\ <installation-check script="InstallationCheck()"/> \
<script> \
function InstallationCheck(prefix) { \
if (system.compareVersions(system.version.ProductVersion, '12.0') < 0) { \
my.result.message = "This update requires OS X version 12.0 or later."; \
my.result.type = "Fatal"; \
return false; \
} \
return true; \
} \
</script>' \
"distribution.xml"
productbuild --distribution distribution.xml --resources "$GUI" --package-path "./$NAME-tmp.pkg" --sign "$DEVKEY" "$NAME.pkg"
Once built and notarized, this pkg becomes the base for the installers we give to customers. For each customer we have some custom parameters we set in a plist file inside the pkg, which requires us to expand the pkg, add the plist file to /Library/Preferences/ in the expanded pkg, and then flatten and re-notarize the edited pkg.
What we're running into is that the primary app (App1 above) will intermittently disappear after installation. We check all of the files we lay down in the postinstall script, and usually it detects the app in the correct location (/Applications/.hiddenfolder/), so it appears that it is correctly laying the files down but something is removing the app after installation. A couple of times the postinstall script has detected that the app is not in the correct place and fails, but usually the install will finish and only the other two apps remain. So far we've found no logs or evidence of it being moved or deleted; it just ceases to exist after installation.
Has anyone else had this issue and found a solution?
When you launch Xcode and then open Devices and Simulators, and connect your Apple TV 4K, you have a new menu item in Settings named Developer. If you close Xcode, the item stays, but if you restart the Apple TV 4K, it's gone until the next time you open Xcode and pair it again.
Is there a way to leave it there permanently? I'm not a developer, but it's still useful to me because it has the playback HUD with lots of information about the codecs, streaming bitrate and so on, and since I'm an A/V nerd, that is quite useful to me.
I'm attempting to use NWConnection as a websocket given a NWEndpoint returned by NWBrowser, setup like:
let tcpOptions = NWProtocolTCP.Options()
tcpOptions.enableKeepalive = true
tcpOptions.keepaliveIdle = 2
let parameters = NWParameters(tls: nil, tcp: tcpOptions)
parameters.allowLocalEndpointReuse = true
parameters.includePeerToPeer = true
let options = NWProtocolWebSocket.Options()
options.autoReplyPing = true
options.skipHandshake = true
parameters.defaultProtocolStack.applicationProtocols.insert(options, at: 0)
self.connection = NWConnection(to: endpoint, using: parameters)
The initial connection does make it to the ready state, but when I first try to send a text message over the websocket, I get
nw_read_request_report [C1] Receive failed with error "Input/output error"
nw_flow_prepare_output_frames Failing the write requests [5: Input/output error]
nw_write_request_report [C1] Send failed with error "Input/output error"
immediately, and the websocket is closed.
Send code here:
let encoder = JSONEncoder()
let dataMessage = try encoder.encode(myMessage)
let messageMetadata = NWProtocolWebSocket.Metadata(opcode: .text)
let context = NWConnection.ContentContext(identifier: "send", metadata: [messageMetadata])
connection.send(content: dataMessage, contentContext: context, completion: .contentProcessed({ error in
    if let error = error {
        print(error)
    }
}))
What would typically cause the Input/output error when writing? Am I doing something obviously wrong or is there something else I can do to get additional debug information?
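For reference, the extra diagnostics I'm considering are just a state handler plus a receive loop on the same connection; a minimal sketch (addDiagnostics is a made-up name, not code from my app):

import Network

// Sketch: log state changes and read errors on the same NWConnection created above.
func addDiagnostics(to connection: NWConnection) {
    connection.stateUpdateHandler = { state in
        switch state {
        case .waiting(let error): print("waiting: \(error)")
        case .failed(let error): print("failed: \(error)")
        default: print("state: \(state)")
        }
    }

    // Keep a receive outstanding so incoming close/error frames surface somewhere.
    func receiveLoop() {
        connection.receiveMessage { data, _, isComplete, error in
            if let error = error {
                print("receive error: \(error)")
                return
            }
            if let data = data {
                print("received \(data.count) bytes, complete: \(isComplete)")
            }
            receiveLoop()
        }
    }
    receiveLoop()
}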
Thanks in advance for any help.
Hi!
I'm creating an app like this:
Using Image Tracking to set a world anchor in the real world first.
The timeline in the Reality Composer Pro scene needs to play at the same time (for the people in the same place using the app).
People using the app will see the same content, in the same position, at the same time, in the same place.
I already have the Image Tracking feature working. But the big problem is synchronization. I found Group Activities and TabletopKit to solve the problem, but I don't know if these are the right frameworks for this project.
How do I solve this problem technically?
If you have ideas, please let me know. I really need help with this.
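For what it's worth, the rough direction I've been sketching with Group Activities looks like this (SharedSceneActivity and StartTimelineMessage are made-up names, and I'm not sure this is the intended approach):

import Foundation
import GroupActivities

// Hypothetical activity shared by everyone in the same session.
struct SharedSceneActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Shared Scene"
        metadata.type = .generic
        return metadata
    }
}

// Hypothetical message telling every participant when to start the timeline.
struct StartTimelineMessage: Codable {
    let startDate: Date
}

func scheduleTimelinePlayback(at date: Date) {
    // App-specific: start the Reality Composer Pro timeline at this date.
}

func join(session: GroupSession<SharedSceneActivity>) async {
    let messenger = GroupSessionMessenger(session: session)
    session.join()

    // One participant broadcasts a start time slightly in the future...
    try? await messenger.send(StartTimelineMessage(startDate: .now.addingTimeInterval(3)))

    // ...and every participant (including the sender) schedules playback for that time.
    for await (message, _) in messenger.messages(of: StartTimelineMessage.self) {
        scheduleTimelinePlayback(at: message.startDate)
    }
}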
I'm building a custom camera screen that displays the camera image on a preview layer and then captures an image, using AVCaptureSession. When the picture is captured, I immediately load it into a UIImageView in order to display it to the user for approval.
I've actually done this many times before, but this is the first time I've tried to do it in an app that supports interface rotation. If I hold the phone in Portrait mode and capture a picture, everything works as expected.
When the user rotates the phone into Landscape orientation, I detect this and I replace the preview layer (AVCaptureVideoPreviewLayer) with a new one, specifying connection.videoRotationAngle in order to make the image appear in the right orientation. I'm a little surprised that this is necessary, and it's not a smooth transition, but that doesn't matter.
What does matter is that when I capture the image, it is in the wrong orientation. I tried rotating it myself, but this doesn't seem to make any difference. What am I doing wrong?
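For concreteness, one thing I've been considering is applying a rotation angle to the photo output's video connection at capture time as well, using the iOS 17 rotation APIs; a sketch with placeholder names (I'm not sure this is the right fix):

import AVFoundation

// Sketch: apply the capture rotation angle right before taking the photo (iOS 17+).
// The coordinator would be created once, e.g.
// AVCaptureDevice.RotationCoordinator(device: camera, previewLayer: previewLayer).
func capture(with photoOutput: AVCapturePhotoOutput,
             coordinator: AVCaptureDevice.RotationCoordinator,
             delegate: any AVCapturePhotoCaptureDelegate) {
    if let connection = photoOutput.connection(with: .video) {
        let angle = coordinator.videoRotationAngleForHorizonLevelCapture
        if connection.isVideoRotationAngleSupported(angle) {
            connection.videoRotationAngle = angle
        }
    }
    photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: delegate)
}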
Our app involves using the camera to scan barcodes or QR codes, with a working distance of about 5 cm. However, we’ve noticed variations in the focus distance of camera lenses across different iPhone models.
Currently, we mainly use two types of lenses: wide-angle and ultra-wide-angle.
• For iPhone 13 and earlier models, we use the wide-angle lens.
• For iPhone 13 Pro and later models, we use the ultra-wide-angle lens.
We are not certain if this setup is correct since we don’t have all iPhone models to test.
One user has reported focus issues on their iPhone 15.
We would like to ask if there’s a resource where we can find the minimum focus distance of different cameras in each iPhone model. This is to verify whether our current configuration is accurate.
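In case it helps, there is a minimumFocusDistance property on AVCaptureDevice (reported in millimeters, iOS 15+) that can at least be logged at runtime; a sketch:

import AVFoundation

// Sketch: log each rear camera's minimum focus distance (in millimeters).
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera],
    mediaType: .video,
    position: .back
)
for device in discovery.devices {
    // minimumFocusDistance is -1 when the distance is unknown.
    print(device.localizedName, device.minimumFocusDistance, "mm")
}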
Alternatively, if such data is not readily available, could the Apple team advise which camera should be used on various iPhone models for scenarios with a working distance of approximately 5 cm?
Thank you!
Hi all,
I’ve been facing a behavior issue with shouldAutomaticallyForwardAppearanceMethods in UITabBarController. According to Apple’s documentation, this property should default to YES, which means that the appearance lifecycle methods (like viewWillAppear and viewDidAppear) should be automatically forwarded to child view controllers.
However, in my current development environment, I’ve noticed that shouldAutomaticallyForwardAppearanceMethods returns NO by default in UITabBarController, and this is causing some issues with lifecycle management in my app. I even tested this behavior in several projects, both in Swift and Objective-C, and the result is consistent.
Here are some details about my setup:
I’m using Xcode 16.0 with iOS 16.4 Simulator.
I’ve tested the behavior in both a new UIKit project and a simple SwiftUI project that uses a UITabBarController.
Even with a clean new project, the value of shouldAutomaticallyForwardAppearanceMethods is NO by default.
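The check itself is essentially just reading the property, along the lines of this sketch:

import UIKit

final class TestTabBarController: UITabBarController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // The documentation says this defaults to true, but it logs false here.
        print("forwards appearance methods:", shouldAutomaticallyForwardAppearanceMethods)
    }
}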
This behavior contradicts the official documentation, which states that it should be YES by default. Could someone clarify if this is expected behavior in newer versions of iOS or if there is a known issue regarding this?
Any help or clarification would be greatly appreciated!
Thanks in advance!
My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior, such that video can be recorded at a range of different resolutions.
There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step.
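To make that concrete, the dimension-only selection I have in mind looks roughly like this sketch (bestFormat is a made-up name):

import AVFoundation
import CoreMedia

// Sketch: pick the smallest format whose dimensions still cover the requested size.
func bestFormat(for device: AVCaptureDevice, width: Int32, height: Int32) -> AVCaptureDevice.Format? {
    let candidates = device.formats.filter { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width >= width && dims.height >= height
    }
    // Several formats can share these dimensions while differing in other properties,
    // which is exactly where the questions below come from.
    return candidates.min { a, b in
        let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
        let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
        return da.width * da.height < db.width * db.height
    }
}

// Applying it: device.lockForConfiguration(), device.activeFormat = format, device.unlockForConfiguration().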
However, what I've observed is that there can be a range of available formats for a device at each resolution, with various differing settings. Presumably there is logic within AVCaptureSession that selects a reasonable default based on all these different settings, but since I am applying the format directly, I think I don't have a way to make use of that default logic? And it is undocumented?
Does this mean that the only way to select a format is to implement a comparison function that considers all different values of all different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator?
If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property.
Are there any guarantees about what types for formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so I could filter the formats down only to those with this setting without risking filtering out all of the supported formats.
I suspect I may be missing something obvious. Any help would be greatly appreciated!
Feature introduction: "https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/"
When I use this feature, my video player has no back action in the player.
We also did not find any method provided by the system named "addChildViewControllerAndView(form)".
Referencing this document did not work either:
"https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos"
As soon as I add this code,
let playerController = AVPlayerViewController()
// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
there is no back button, only full screen and zoom out.
So I created a Swift file and put this in it:
import Foundation
import AVFoundation
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, testing speech synthesis on macOS.")
if let voice = AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-GB.Daniel") {
    utterance.voice = voice
    print("Using voice: \(voice.name), \(voice.language)")
} else {
    print("Daniel voice not found on macOS.")
}
synthesizer.speak(utterance)
I get no speech output and this log output
Error reading languages in for local resources.
Error reading languages in for local resources.
Using voice: Daniel, en-GB
Program ended with exit code: 0
Why? And what's with "Error reading languages in for local resources."?
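One possibility, separate from those language errors: a command-line program can exit before the asynchronous speech starts. A sketch of keeping the process alive until the synthesizer reports it finished, in case that's relevant:

import AVFoundation
import Foundation

final class SpeechDelegate: NSObject, AVSpeechSynthesizerDelegate {
    var finished = false
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        finished = true
    }
}

let synthesizer = AVSpeechSynthesizer()
let speechDelegate = SpeechDelegate()
synthesizer.delegate = speechDelegate

let utterance = AVSpeechUtterance(string: "Hello, testing speech synthesis on macOS.")
synthesizer.speak(utterance)

// Keep the command-line process alive until speech has finished.
while !speechDelegate.finished {
    RunLoop.current.run(mode: .default, before: Date(timeIntervalSinceNow: 0.1))
}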
Is anyone else experiencing this error after uploading new screenshots and attempting to submit a new release for review?
Unable to Add for Review
The items below are required to start the review process:
There are still screenshot uploads in progress.
I have tried waiting several minutes after uploading the new screenshots before submission. I have also tried waiting several minutes after uploading before rearranging the images. And I have tried uploading the images one at a time. No luck any way I try it.
I also had an issue signing in this morning. It seemed like I was in a loop where it would just return me to the login screen after entering my username and password. I saw that others had reported this, too.
Thanks
I submitted a Mac Catalyst app for TestFlight and before it can be tested by external testers it requires an App Review. The iOS app passed review, but the Mac Catalyst app failed review.
The rejection reason given was that App Sandbox needed the entitlement:
"com.apple.security.network.client" to be YES / true (not false).
I do have "com.apple.security.device.bluetooth" set to YES / true.
The Developer docs for the entitlement "com.apple.security.network.client" say "Use this key to allow your sandboxed app to connect to a server process running on another machine, or on the same machine.", then go on to discuss TCP and UDP. https://developer.apple.com/documentation/security/app_sandbox
While technically a Bluetooth app connecting to another Bluetooth device puts the app in "client mode" and the device in "server mode", I think this network entitlement was intended for TCP / UDP, not Bluetooth.
The docs for the entitlement "com.apple.security.device.bluetooth" say
"A Boolean value indicating whether your app may interact with Bluetooth devices." This seems to cover all the necessary needs for Bluetooth: "your app may interact with Bluetooth devices."
Would someone at Apple familiar with the docs please clarify what entitlements are required for an app that only uses Bluetooth?
If the "com.apple.security.network.client" is required, then I believe the docs for that property should also specify Bluetooth.
Hello,
I am currently working on an app that features multiple environments in which I combine Reality Composer Pro scenes with objects managed at runtime as well as make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?
About my app:
There are two main contexts/scenes in the app that the user progresses through. The first takes place in AR and is non-interactive and driven by a timeline animation. The second is in VR and allows the user to change materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in AR context and it would be nice to use it in the VR context as well in order to provide passive haptics to the user.
If the user doesn't have access to the physical object, we make use of plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
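To make the question concrete, a rough sketch of the kind of thing we'd like to do: pin the image or plane anchor's transform as a WorldAnchor on a WorldTrackingProvider that outlives the individual immersive spaces (I'm not sure this is the right approach):

import ARKit
import simd

// Sketch: session and provider kept outside any single immersive space.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func persistAnchor(from imageAnchorTransform: simd_float4x4) async {
    do {
        try await session.run([worldTracking])
        // Pin the tracked image's pose as a world anchor both scenes can query later.
        let anchor = WorldAnchor(originFromAnchorTransform: imageAnchorTransform)
        try await worldTracking.addAnchor(anchor)
    } catch {
        print("world anchoring failed: \(error)")
    }
}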
I was able to obtain the depth map image using AVCapturePhotoOutput from the delegate method
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
I convert the depth map to kCVPixelFormatType_DepthFloat32 format and get the pixel values of the depth map using the below code
func convertDepthData(depthMap: CVPixelBuffer) -> [[Float32]] {
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    var convertedDepthMap: [[Float32]] = Array(
        repeating: Array(repeating: 0, count: width),
        count: height
    )
    CVPixelBufferLockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
    let floatBuffer = unsafeBitCast(
        CVPixelBufferGetBaseAddress(depthMap),
        to: UnsafeMutablePointer<Float32>.self
    )
    for row in 0 ..< height {
        for col in 0 ..< width {
            if floatBuffer[width * row + col].isFinite {
                convertedDepthMap[row][col] = floatBuffer[width * row + col]
            }
        }
    }
    CVPixelBufferUnlockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
    return convertedDepthMap
}
Is this the right way of accessing the depth float values from a depth map? And what is the unit? Sometimes the depth values are around 0.7 when I keep the device close to the subject, around 15 to 30 cm.
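For reference, the conversion step I mentioned above looks roughly like this sketch (assuming depthData is photo.depthData); if the source were disparity rather than depth, the raw values would be 1/meters instead of meters, which is why I convert to DepthFloat32 first:

import AVFoundation
import CoreVideo

// Sketch: make sure the buffer being read is depth (meters), not disparity (1/meters).
func depthMapInMeters(from depthData: AVDepthData) -> CVPixelBuffer {
    if depthData.depthDataType == kCVPixelFormatType_DepthFloat32 {
        return depthData.depthDataMap
    }
    return depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32).depthDataMap
}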
LSSetDefaultHandlerForURLScheme is flagged as deprecated, but it isn't clear to me (very much not a frequent macOS developer) what the alternative is.
Can anyone point me in the right direction?
Thanks.
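For anyone else landing here, the replacement I believe Apple points to is on NSWorkspace (macOS 12 and later); a sketch of what I think the call looks like (the app path and scheme are placeholders):

import AppKit

// Sketch: set the default handler for a custom URL scheme via NSWorkspace (macOS 12+).
let appURL = URL(fileURLWithPath: "/Applications/MyApp.app") // placeholder path
NSWorkspace.shared.setDefaultApplication(at: appURL, toOpenURLsWithScheme: "myscheme") { error in
    if let error = error {
        print("Failed to set default handler: \(error)")
    }
}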
I am trying to install our latest dev version from TestFlight onto our client's phone. The user can't see any screen other than Redeem Code, and there is no code provided in the build email.
What is the best way to move past the Redeem Code screen and get a working staging app on the phone? This user is also the Account Holder/admin.
I am trying to get my head around how to implement a MapKit view using UIViewRepresentable (I want the map to rotate to align with heading, which Map() can't handle yet to my knowledge). I am also playing with making my LocationManager an actor and setting up a listener. But when combined with UIViewRepresentable this seems to create a rather convoluted data flow, since the @State var of the vm needs to then be passed and bound in the UIViewRepresentable. And the listener containing "for await location in await lm.$lastLocation.values" seems at least like a code smell. That double await just feels wrong. But I am also new to Swift, so perhaps what I have here actually is a good approach?
struct MapScreen: View {
    @State private var vm = ViewModel()

    var body: some View {
        VStack {
            MapView(vm: $vm)
        }
        .task {
            vm.startWalk()
        }
    }
}

extension MapScreen {
    @Observable
    final class ViewModel {
        private var lm = LocationManager()
        private var listenerTask: Task<Void, Never>?
        var course: Double = 0.0
        var location: CLLocation?

        func startWalk() {
            Task {
                await lm.startLocationUpdates()
            }
            listenerTask = Task {
                for await location in await lm.$lastLocation.values {
                    await MainActor.run {
                        if let location {
                            withAnimation {
                                self.location = location
                                self.course = location.course
                            }
                        }
                    }
                }
            }
            Logger.map.info("started Walk")
        }
    }

    struct MapView: UIViewRepresentable {
        @Binding var vm: ViewModel

        func makeCoordinator() -> Coordinator {
            Coordinator(parent: self)
        }

        func makeUIView(context: Context) -> MKMapView {
            let view = MKMapView()
            view.delegate = context.coordinator
            view.preferredConfiguration = MKHybridMapConfiguration()
            return view
        }

        func updateUIView(_ view: MKMapView, context: Context) {
            context.coordinator.parent = self
            if let coordinate = vm.location?.coordinate {
                if view.centerCoordinate != coordinate {
                    view.centerCoordinate = coordinate
                }
            }
        }
    }

    class Coordinator: NSObject, MKMapViewDelegate {
        var parent: MapView

        init(parent: MapView) {
            self.parent = parent
        }
    }
}

actor LocationManager {
    private let clManager = CLLocationManager()
    private(set) var isAuthorized: Bool = false
    private var backgroundActivity: CLBackgroundActivitySession?
    private var updateTask: Task<Void, Never>?
    @Published var lastLocation: CLLocation?

    func startLocationUpdates() {
        updateTask = Task {
            do {
                backgroundActivity = CLBackgroundActivitySession()
                let updates = CLLocationUpdate.liveUpdates()
                for try await update in updates {
                    if let location = update.location {
                        lastLocation = location
                    }
                }
            } catch {
                Logger.location.error("\(error.localizedDescription)")
            }
        }
    }

    func stopLocationUpdates() {
        updateTask?.cancel()
        updateTask = nil
    }

    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        switch clManager.authorizationStatus {
        case .authorizedAlways, .authorizedWhenInUse:
            isAuthorized = true
            // clManager.requestLocation() // ??
        case .notDetermined:
            isAuthorized = false
            clManager.requestWhenInUseAuthorization()
        case .denied:
            isAuthorized = false
            Logger.location.error("Access Denied")
        case .restricted:
            Logger.location.error("Access Restricted")
        @unknown default:
            let statusString = clManager.authorizationStatus.rawValue
            Logger.location.warning("Unknown Access status not handled: \(statusString)")
        }
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        Logger.location.error("\(error.localizedDescription)")
    }
}
Wondering if others have encountered this issue with PSSO 2.0.
We are observing that if, after registration, a user changes their IDP password, they may be prompted for their previous password in order to unlock the Keychain. We are trying to determine if this is expected behavior or if there is a way to avoid it.
To reproduce this, the flow would be as follows:
user registers with PSSO
user logs out and logs back in with their IDP password
user is authenticated (and not prompted for previous password)
user logs out
user changes their IDP password on another machine
user logs in and is prompted to use their previous password to unlock the Keychain.
Failure to provide the previous password nukes the Keychain, which is not an outcome we want.
Any insight anyone has on this issue would be most welcome.
Thanks
Issues with the Mail app notifications.
Mail shows only 3 unread in 'All Inboxes', whereas there are 24 unread emails, and the badge icon only shows 3 as well.