SwiftData: How can I tell if a migration went through successfully?
I created two different schemas and made a small change to one of them: I added a property to the model called "version". To see whether the migration went through, I set up the migration plan to set version to "1.1.0" in willMigrate. In didMigrate, I looped through the new version of the Tags to check whether version was set, and set it if not, in case willMigrate didn't do what it was supposed to. The app built and ran successfully, but version was not set on the Tag I created in the app.

Here's the migration:

enum MigrationPlanV2: SchemaMigrationPlan {
    static var schemas: [any VersionedSchema.Type] {
        [DataSchemaV1.self, DataSchemaV2.self]
    }

    static let stage1 = MigrationStage.custom(
        fromVersion: DataSchemaV1.self,
        toVersion: DataSchemaV2.self,
        willMigrate: { context in
            let oldTags = try? context.fetch(FetchDescriptor<DataSchemaV1.Tag>())
            for old in oldTags ?? [] {
                let new = Tag(name: old.name, version: "Version 1.1.0")
                context.delete(old)
                context.insert(new)
            }
            try? context.save()
        },
        didMigrate: { context in
            let newTags = try? context.fetch(FetchDescriptor<DataSchemaV2.Tag>())
            for tag in newTags ?? [] {
                if tag.version == nil {
                    tag.version = "1.1.0"
                }
            }
        }
    )

    static var stages: [MigrationStage] {
        [stage1]
    }
}

Here's the model container:

var sharedModelContainer: ModelContainer = {
    let schema = Schema(versionedSchema: DataSchemaV2.self)
    let modelConfiguration = ModelConfiguration(schema: schema, isStoredInMemoryOnly: false)
    do {
        return try ModelContainer(
            for: schema,
            migrationPlan: MigrationPlanV2.self,
            configurations: [modelConfiguration])
    } catch {
        fatalError("Could not create ModelContainer: \(error)")
    }
}()

I ran a similar test prior to this and got the same result. It's as if the code in my willMigrate isn't running; I also had print statements in there that I never saw printed to the console. I tried to check the CloudKit console for more information, but I'm having issues with that as well (separate post). Anyway, how can I confirm that my migration was successful here?
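If it helps, one low-tech check is to inspect the migrated records right after the container comes up. This is a minimal sketch assuming the DataSchemaV2.Tag model and the sharedModelContainer shown above. Two things worth ruling out first: a custom stage only runs when an existing V1 store is opened (a fresh install creates the store at V2 and never calls willMigrate), and changes made in didMigrate may need an explicit context.save() to persist.

```swift
import SwiftData

// Verification sketch: fetch every Tag on the main context right after the
// container is created and log its version property.
@MainActor
func verifyMigration(container: ModelContainer) {
    let context = container.mainContext
    let tags = (try? context.fetch(FetchDescriptor<DataSchemaV2.Tag>())) ?? []
    for tag in tags {
        // If the custom stage ran, version should be "Version 1.1.0" (set in
        // willMigrate) or "1.1.0" (set in didMigrate, assuming those changes
        // were saved). nil suggests the stage never executed, or the store was
        // created fresh at V2.
        print("Tag \(tag.name): version = \(tag.version ?? "nil")")
    }
}

// Usage: call verifyMigration(container: sharedModelContainer) early in app startup.
```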
Replies: 0 · Boosts: 0 · Views: 62 · Activity: 9h
Control Player Camera with PS5 Controller on Vision Pro
I recently completed a freelance project where I was tasked with creating room-scale environments that could be used as AR elements. As a bonus, I suggested that these could be done to scale and repurposed for eventual viewing on Vision Pro. To illustrate, I was able to quickly create a simple immersive project in Xcode, add the USDZ file (authored in Maya, with baked lighting from Arnold) to Reality Composer Pro, and compile it for quick sending to the headset. I would then do screen recordings inside the immersive space, which the client loved to see. However, I am unable to walk around due to the boundary limitations.

My next obvious thought is: how can I set up the "player" camera so that I can control it with a PS5 controller inside AVP? In addition to Maya, I'm an Unreal Engine artist and have been waiting patiently to get any projects compiled for AVP. With the 5.5 release, I was able to get a VR Template test over to AVP, where I have rudimentary navigation control via the PS5 controller.

Ideally, I'd also love to learn how to set this up natively, so I can take simple USDZ scenes created in Maya, import them into RCP, set up a simple camera controller, and then use this to navigate my VR immersive spaces on Vision Pro. How can we go about doing this?

Part two of this question/suggestion: how would I go about controlling a rigged, animated character in AR/passthrough mode in a similar fashion? Thx!
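One thing worth knowing up front: in a visionOS immersive space there is no app-controllable camera for RealityView content (the user's head is the camera), so native controller "navigation" is usually faked by moving the scene content itself. Below is a rough sketch of that idea using the GameController framework; the entity name "Environment", the polling interval, and the speed factor are placeholders, not values from the post.

```swift
import SwiftUI
import RealityKit
import GameController

// Sketch: translate the environment's root entity with a controller's left
// thumbstick inside a visionOS immersive space, giving the impression of walking.
struct ControllerNavigationView: View {
    @State private var root = Entity()

    var body: some View {
        RealityView { content in
            // "Environment" is a placeholder scene name from Reality Composer Pro.
            if let scene = try? await Entity(named: "Environment") {
                root.addChild(scene)
            }
            content.add(root)
        }
        .task {
            // Poll the first connected controller until the view goes away.
            while !Task.isCancelled {
                if let pad = GCController.controllers().first?.extendedGamepad {
                    let dx = pad.leftThumbstick.xAxis.value
                    let dz = pad.leftThumbstick.yAxis.value
                    // Move the world opposite to the stick to simulate walking.
                    root.position -= SIMD3<Float>(dx, 0, -dz) * 0.02
                }
                try? await Task.sleep(for: .milliseconds(11))
            }
        }
    }
}
```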
Replies: 0 · Boosts: 0 · Views: 50 · Activity: 10h
copyfile causes NSPOSIXErrorDomain 12 "Cannot allocate memory" when copying symbolic link from NTFS partition
I was able to confirm with a customer of mine that calling copyfile with a source file that is a symbolic link on an NTFS partition always causes the error:

NSPOSIXErrorDomain 12 "Cannot allocate memory"

They use NTFS drivers from Paragon. They tried copying a symbolic link from NTFS to both APFS and NTFS with the same result. Is this an issue with macOS, or with the NTFS driver? Copying regular files, on the other hand, always works. Copying manually from the Finder also seems to always work, both with regular files and symbolic links, so I'm wondering how the Finder does it.

Here is the sample app that they used to reproduce the issue. The first open panel lets you select the source directory and the second one the destination directory. The variable filename holds the name of the symbolic link to be copied from the source to the destination. Apparently it's not possible to select a symbolic link directly in NSOpenPanel, as it always resolves to the linked file.

@main class AppDelegate: NSObject, NSApplicationDelegate {
    func applicationDidFinishLaunching(_ notification: Notification) {
        let openPanel = NSOpenPanel()
        openPanel.canChooseDirectories = true
        openPanel.canChooseFiles = false
        openPanel.runModal()
        let filename = "Modules"
        let source = openPanel.urls[0].appendingPathComponent(filename)
        openPanel.runModal()
        let destination = openPanel.urls[0].appendingPathComponent(filename)
        do {
            let state = copyfile_state_alloc()
            defer { copyfile_state_free(state) }
            var bsize = UInt32(16_777_216)
            if copyfile_state_set(state, UInt32(COPYFILE_STATE_BSIZE), &bsize) != 0 {
                throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
            }
            if copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CB), unsafeBitCast(copyfileCallback, to: UnsafeRawPointer.self)) != 0
                || copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CTX), unsafeBitCast(self, to: UnsafeRawPointer.self)) != 0
                || copyfile(source.path, destination.path, state, copyfile_flags_t(COPYFILE_NOFOLLOW)) != 0 {
                throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
            }
        } catch {
            let error = error as NSError
            let alert = NSAlert()
            alert.messageText = "\(error.localizedDescription)\n\(error.domain) \(error.code)"
            alert.runModal()
        }
    }

    private let copyfileCallback: copyfile_callback_t = { what, stage, state, src, dst, ctx in
        if what == COPYFILE_COPY_DATA {
            if stage == COPYFILE_ERR {
                return COPYFILE_QUIT
            }
            var size: off_t = 0
            copyfile_state_get(state, UInt32(COPYFILE_STATE_COPIED), &size)
        }
        return COPYFILE_CONTINUE
    }
}
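As a point of comparison, here is a workaround sketch that sidesteps copyfile for the symlink case by reading the link target and recreating it with FileManager. It assumes source and destination URLs like the ones in the sample above, and it doesn't explain why copyfile itself fails.

```swift
import Foundation

// Workaround sketch: copy a symbolic link by reading its target (without
// resolving it) and recreating the link at the destination.
func copySymbolicLink(from source: URL, to destination: URL) throws {
    let fm = FileManager.default
    let values = try source.resourceValues(forKeys: [.isSymbolicLinkKey])
    if values.isSymbolicLink == true {
        // Read the link's destination path, then recreate the link as-is.
        let target = try fm.destinationOfSymbolicLink(atPath: source.path)
        try fm.createSymbolicLink(atPath: destination.path, withDestinationPath: target)
    } else {
        // Regular files reportedly copy fine, so fall back to a plain copy.
        try fm.copyItem(at: source, to: destination)
    }
}
```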
Replies: 0 · Boosts: 0 · Views: 64 · Activity: 11h
App Store Connect Certificates API
Hi all, I'm using the Certificates API to create a development certificate. I want to create a Jenkins job that will give employees an option to create a certificate without giving them admin rights. I can create a new certificate without any issues, but when I try to create another certificate with a different CSR (for a different user), I get an error that a certificate already exists. Is it limited to creating only one certificate per API key? Thanks!
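For reference, the create call is a POST to /v1/certificates with the certificate type and CSR content in the request body; a rough Swift sketch follows. The JWT signing and error-body parsing are out of scope here, and whether the "already exists" limit applies per API key or per account is something the error detail in the response should spell out.

```swift
import Foundation

// Sketch of the create-certificate request. `jwt` is a placeholder for an
// already-signed App Store Connect API token; csrContent is the contents of
// the .certSigningRequest file.
func createDevelopmentCertificate(jwt: String, csrContent: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.appstoreconnect.apple.com/v1/certificates")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(jwt)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "data": [
            "type": "certificates",
            "attributes": [
                "certificateType": "DEVELOPMENT",
                "csrContent": csrContent
            ]
        ]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)
    let (data, _) = try await URLSession.shared.data(for: request)
    return data // Inspect this JSON; a failure response carries the reason for rejection.
}
```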
Replies: 0 · Boosts: 0 · Views: 64 · Activity: 11h
AR anchor shared across multiple immersive scenes
Hello, I am currently working on an app that features multiple environments in which I combine Reality Composer Pro scenes with objects managed at runtime, as well as make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?

About my app: there are two main contexts/scenes that the user progresses through. The first takes place in AR, is non-interactive, and is driven by a timeline animation. The second is in VR and allows the user to change materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well in order to provide passive haptics to the user. If the user doesn't have access to the physical object, we make use of plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
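A sketch of one approach, assuming the visionOS ARKit APIs: once the image or plane anchor has been located in the first space, mint a WorldAnchor at that transform and add it to a WorldTrackingProvider, then look the anchor up again from the second space. The class below is illustrative only; whether tracking actually survives the space transition depends on keeping the ARKitSession and provider running.

```swift
import ARKit
import RealityKit

// Sketch: persist the anchored pose as a WorldAnchor so a second immersive
// space can recover it. `imageTransform` stands in for the transform obtained
// from the image/plane anchor in the first space.
final class AnchorStore {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    private(set) var anchorID: UUID?

    func start() async throws {
        try await session.run([worldTracking])
    }

    func saveAnchor(at imageTransform: simd_float4x4) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: imageTransform)
        try await worldTracking.addAnchor(anchor)
        anchorID = anchor.id
    }

    // Call from the second space; existing anchors are replayed as updates,
    // so the saved anchor should show up here if tracking is still available.
    func restoredTransform() async -> simd_float4x4? {
        guard let anchorID else { return nil }
        for await update in worldTracking.anchorUpdates {
            if update.anchor.id == anchorID {
                return update.anchor.originFromAnchorTransform
            }
        }
        return nil
    }
}
```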
Replies: 0 · Boosts: 0 · Views: 48 · Activity: 11h
Time to Have Membership Completed?
Hello - asking a question that's probably been asked, and maybe a couple more. I bought a membership yesterday and received the email, so I'm assuming it's "approved", but when I log in to my developer account it says "PENDING". I will be the only user on my account.

I already have the app; it's been tested by another party, and all I need to do is make some minor changes. So that's done. Now I need to create a "debug or release testing distribution", but when I do that I get this error in Xcode. Is this normal until my status changes from "PENDING" to (I'm guessing) "APPROVED"?

And one further question: I am the only developer, and this app will only be used by one person, as it's specialized for a single purpose. What are the options for that person to install the app? Is there a special section on the App Store for such an app?

Thanks, Gary
Replies: 2 · Boosts: 0 · Views: 57 · Activity: 11h
sourceImageURL in imagePlaygroundSheet isn't optional
I can't shake the "I don't think I did this correctly" feeling about a change I'm making for Image Playground support.

When you create an image via an Image Playground sheet, it returns a URL pointing to where the image is temporarily stored. Just like the Image Playground app, I want the user to be able to decide to edit that image further. The Image Playground sheet lets you pass in a source URL for an image to start with, which is perfect, because I could pass in the URL of that temp image. But the URL is NOT optional. So what do I populate it with when the user is starting from scratch?

A friendly AI told me to use URL(string: "")!, but that crashes when it gets force-unwrapped. URL(string: "about:blank")! seems to work, in that it is ignored (and doesn't crash) when I have the user create the initial image (which shouldn't have a source image). This feels super clunky to me. Am I overlooking something?
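One pattern that keeps the workaround contained is to attach the sheet through a small ViewModifier and only use the source-URL variant when there is actually an image to edit. The parameter labels and the existence of a no-source overload in the sketch below are assumptions based on the post, not a confirmed signature; check the SDK headers before relying on it.

```swift
import SwiftUI
import ImagePlayground

// Sketch: branch between "start from scratch" and "edit the previous result".
struct PlaygroundLauncher: View {
    @State private var isPresented = false
    @State private var editURL: URL?   // nil means "start from scratch"

    var body: some View {
        Button("Create image") { isPresented = true }
            .modifier(PlaygroundSheet(isPresented: $isPresented, sourceImageURL: editURL) { result in
                editURL = result   // keep the temp URL so the user can edit it again later
            })
    }
}

private struct PlaygroundSheet: ViewModifier {
    @Binding var isPresented: Bool
    let sourceImageURL: URL?
    let onCompletion: (URL) -> Void

    func body(content: Content) -> some View {
        if let sourceImageURL {
            // Assumed overload that takes a starting image URL.
            content.imagePlaygroundSheet(isPresented: $isPresented,
                                         sourceImageURL: sourceImageURL,
                                         onCompletion: onCompletion)
        } else {
            // Assumed overload without a source image.
            content.imagePlaygroundSheet(isPresented: $isPresented,
                                         onCompletion: onCompletion)
        }
    }
}
```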
Replies: 0 · Boosts: 0 · Views: 52 · Activity: 11h
Family Controls Usage Data
Hi all,

For context, the Family Controls entitlement request (for the Personal Device Management category / individual use case) includes the question:

"Will your app share device or usage data beyond the individual for the individual use case, or Family Sharing for the parent/guardian use case, including through means such as screenshots, screen recordings, or server logging?"

I'm looking for clarification on how to interpret this. I originally answered Yes and was rejected, then later answered No and was accepted.

Ideally, I would like my screen time management app to allow users to opt in to social features. One simple example is opting into a leaderboard with your friends for who has the lowest screen time. If the user installed this app for themselves and chooses to share this basic data with their friends, it sounds like an ethical and unproblematic feature, but I suppose storing that data would fall under "server logging"?

If anyone has any experience with this, I would appreciate a more explicit description of the requirement above. Is what I described allowed? Thanks for reading!
Replies: 0 · Boosts: 0 · Views: 52 · Activity: 11h
Captured photos in wrong orientation
I'm building a custom camera screen that displays the camera image on a preview layer and then captures an image, using AVCaptureSession. When the picture is captured, I immediately load it into a UIImageView in order to display it to the user for approval.

I've actually done this many times before, but this is the first time I've tried to do it in an app that supports interface rotation. If I hold the phone in portrait mode and capture a picture, everything works as expected. When the user rotates the phone into landscape orientation, I detect this and replace the preview layer (AVCaptureVideoPreviewLayer) with a new one, specifying connection.videoRotationAngle in order to make the image appear in the right orientation. I'm a little surprised that this is necessary, and it's not a smooth transition, but that doesn't matter.

What does matter is that when I capture the image, it is in the wrong orientation. I tried rotating it myself, but this doesn't seem to make any difference. What am I doing wrong?
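A sketch of the iOS 17+ approach, assuming an existing photo output, capture device, and preview layer: let AVCaptureDevice.RotationCoordinator supply rotation angles and apply them to both the preview connection and the photo output connection before capturing, instead of swapping preview layers. Names here are placeholders, not code from the post.

```swift
import AVFoundation

// Sketch: keep preview and capture rotation in sync via RotationCoordinator.
final class RotationHandler {
    private let coordinator: AVCaptureDevice.RotationCoordinator
    private let photoOutput: AVCapturePhotoOutput
    private let previewLayer: AVCaptureVideoPreviewLayer

    init(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput, previewLayer: AVCaptureVideoPreviewLayer) {
        self.coordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: previewLayer)
        self.photoOutput = photoOutput
        self.previewLayer = previewLayer
    }

    // Call whenever the interface rotates and right before capturePhoto(with:delegate:).
    func applyCurrentRotation() {
        previewLayer.connection?.videoRotationAngle =
            coordinator.videoRotationAngleForHorizonLevelPreview

        let captureAngle = coordinator.videoRotationAngleForHorizonLevelCapture
        if let connection = photoOutput.connection(with: .video),
           connection.isVideoRotationAngleSupported(captureAngle) {
            // This is what makes the captured photo match what the preview showed.
            connection.videoRotationAngle = captureAngle
        }
    }
}
```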
Replies: 0 · Boosts: 0 · Views: 49 · Activity: 11h
Increased and Mismatched Audio Buffer Sizes on iOS 18 when Sound Recognition or Vocal Shortcuts Is Enabled
Description: As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes "glitchy" audio and increased latency. Additionally, when this issue occurs, AVAudioSession.setPreferredIOBufferDuration continues to return 'true' and no error is produced.

Steps to Reproduce:
1. Enable Vocal Shortcuts on a device running iOS 18. Enable at least one shortcut (e.g. Control Center).
2. Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug).
3. Build and install the example project.
4. Attach a headset and launch the application. Observe console logs showing a requested buffer size of 0.005805 (256 samples @ 48k) and an actual buffer size of 0.023220 (1104 samples @ 48k; this is regularly the resulting buffer size in all of our tests).
5. Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback).
6. Launch the application. Observe:
   - the same result as step 4
   - a mismatched hardware buffer size of 1104 and recorded frame count of 1024
   - mismatched playbackCount and recordCount
7. Quit the app and disable Vocal Shortcuts.
8. Launch the app. Observe IOBufferDuration matching the requested duration and matched buffer sizes (expected behavior).

Expected results:
- The requested IOBufferDuration is respected, or AVAudioSession returns false or produces an error.
- Input and output buffer sizes match.

Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
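For anyone reproducing this without the sample project, a small diagnostic along these lines shows the mismatch. The requested duration is taken from the report; the session category and options are assumptions.

```swift
import AVFoundation

// Diagnostic sketch: request the buffer duration from the report and log what
// the session actually grants.
func configureAndCheckBufferDuration() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, options: [.allowBluetooth])
        try session.setPreferredIOBufferDuration(0.005805) // ~256 samples, per the report
        try session.setActive(true)
    } catch {
        print("AVAudioSession configuration failed: \(error)")
    }
    // With Sound Recognition / Vocal Shortcuts enabled on iOS 18, the report
    // sees ~0.023 s here even though no error was raised above.
    print("preferred: \(session.preferredIOBufferDuration), actual: \(session.ioBufferDuration)")
}
```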
Replies: 0 · Boosts: 0 · Views: 64 · Activity: 12h
AVAudioUnitTimePitch: speeding up introduces artifacts
For an upcoming update of one of my apps, I'm facing an issue: the .rate parameter of an AVAudioUnitTimePitch allows me to slow down an audio track without any issues. Setting .rate to 0.7 or 0.8 results in almost perfect playback without changing pitch. However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I start to hear audio artifacts ("fluttering") in the output, which is not so nice (even at .overlap = 32). Intuitively, I'd have thought that speeding up the file should produce fewer artifacts than slowing it down? I've tried different sample rates (44.1 kHz and 48 kHz), but got the same result. Grateful for any input on this 🙏
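For context, here is a minimal engine graph that reproduces the setup described (file playback through AVAudioUnitTimePitch); the rate and overlap values mirror the ones in the post, and the file URL is a placeholder.

```swift
import AVFoundation

// Minimal playback graph: player -> time pitch -> main mixer.
func makeTimePitchPlayer(fileURL: URL) throws -> (AVAudioEngine, AVAudioPlayerNode, AVAudioUnitTimePitch) {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()
    timePitch.rate = 1.15      // 0.7-0.8 reportedly sounds clean; > 1.0 flutters
    timePitch.overlap = 32.0   // default is 8; higher trades CPU for quality

    engine.attach(player)
    engine.attach(timePitch)

    let file = try AVAudioFile(forReading: fileURL)
    engine.connect(player, to: timePitch, format: file.processingFormat)
    engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
    return (engine, player, timePitch)
}
```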
Replies: 0 · Boosts: 0 · Views: 46 · Activity: 12h
D3DMetal unsupported CheckFeatureSupport query 53 while running simple vulkaninfo using Mesa 24.3 Dozen (Vulkan on D3D12) driver
Hi, I wanted to test whether it's possible to use the Mesa3D Dozen driver (Vulkan on D3D12) together with D3DMetal 2b3 to get a better Vulkan driver on Wine than the default MoltenVK. This would support Vulkan Windows apps by going through D3D12Metal.

I'm using vulkan_dzn.dll, dzn_icd.x86_64.json, and dxil.dll from the x64 folder of: https://github.com/pal1000/mesa-dist-win/releases/download/24.3.0-rc1/mesa3d-24.3.0-rc1-release-msvc.7z

Running the simple vulkaninfo app like this:

wine64 vulkaninfo

I get the error:

[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53

Also, the D3DMetal Wine integration in Whisky doesn't seem to expose d3d12core.dll and d3d12.dll like the new Agility D3D12 DLLs or VKD3D do, so I get:

MESA: error: Failed to retrieve D3D12GetInterface
MESA: error: Failed to load DXCore

but it does seem to try to load the driver anyway:

WARNING: dzn is not a conformant Vulkan implementation, testing use only.

Full log:

MESA: error: Failed to retrieve D3D12GetInterface
MESA: error: Failed to load DXCore
WARNING: dzn is not a conformant Vulkan implementation, testing use only.
[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53
00bc:fixme:dcomp:DCompositionCreateDevice 0000000000000000, {c37ea93a-e7aa-450d-b16f-9746cb0407f3}, 000000000011E328.
MESA: error: Failed to load DXCore
WARNING: dzn is not a conformant Vulkan implementation, testing use only.
[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53
00bc:fixme:dcomp:DCompositionCreateDevice 0000000000000000, {c37ea93a-e7aa-450d-b16f-9746cb0407f3}, 000000000011E578.
ERROR: [Loader Message] Code 0 : setup_loader_term_phys_devs: Call to 'vkEnumeratePhysicalDevices' in ICD c:\windows\system32\.\vulkan_dzn.dll failed with error code -3
ERROR: [Loader Message] Code 0 : setup_loader_term_phys_devs: Failed to detect any valid GPUs in the current config
ERROR at C:\j\msdk0\build\Khronos-Tools\repo\vulkaninfo\vulkaninfo.h:241:vkEnumeratePhysicalDevices failed with ERROR_INITIALIZATION_FAILED
Replies: 0 · Boosts: 0 · Views: 52 · Activity: 12h
RoomCaptureSession persistence, ARSession pause broken?
Hi all,

Our app allows a user to scan a room and then save that scan on a separate view, followed by additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.

The only way I have found that meets our requirements is to save the RoomCaptureView and re-use it whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to keep a view around in something like a singleton. We are using captureSession.stop(pauseARSession: false).

Additionally, if we use the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again when we reuse the view (specifically, the instructions in the middle of the view that say "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state once an error occurs. These instructions also seem to be separate from the instructions we can get from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant to this, RoomCaptureCoachingOverlayView and ARGlyphView, but both are not public, so we can't force them to appear. We also attempted a number of other things to get these subviews to appear, such as layoutIfNeeded().

Saving the ARSession and using it in let roomCaptureView = RoomCaptureView(frame: viewBounds, arSession: arSession), where we create a new view with the same ARSession, seems much more ideal, as that solves the above issues. But we run into another problem: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-started ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described.

Is there any way around needing to use the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session without needing to re-localize via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
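For reference, here is a sketch of the shared-ARSession approach the post describes, using the RoomCaptureView(frame:arSession:) initializer and stop(pauseARSession: false) it already mentions. Whether world tracking actually survives the swap is exactly what's in question; the coordinator below is only meant to make the setup concrete.

```swift
import RoomPlan
import ARKit
import UIKit

// Sketch: recreate RoomCaptureView per scan, but keep one ARSession alive
// across scans so they can share a world map.
final class ScanCoordinator {
    let arSession = ARSession()
    private(set) var roomCaptureView: RoomCaptureView?

    func beginScan(in container: UIView) {
        // Fresh view each time, same underlying ARSession.
        let view = RoomCaptureView(frame: container.bounds, arSession: arSession)
        container.addSubview(view)
        roomCaptureView = view
        view.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func endScan() {
        // Keep the ARSession running so the next scan can reuse its tracking state.
        roomCaptureView?.captureSession.stop(pauseARSession: false)
        roomCaptureView?.removeFromSuperview()
        roomCaptureView = nil
    }
}
```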
Replies: 0 · Boosts: 2 · Views: 50 · Activity: 12h
Safari iOS 18.2 download problems
iPhone 15 Pro, iOS 18.2.

Downloaded files cannot be located anywhere in Files; they can only be accessed via Downloads in Safari. I have tried setting the download folder to various locations (iCloud, phone, Google Drive), but nothing is stored. Has an invisible cache or temp folder been introduced? If so, it is a total fail:

When press-holding any file in Safari's Downloads list, the normal file action options (Quick Look, Share, save to Files, etc.) are not available.

When tapping any file, it opens in one of several apps that has this file type associated with it, and there is no way to change the default app or disable the forced opening of an app. I tried deleting the app that opens .csv files (in this case OneDrive), and another irrelevant app opened. There seems to be a hierarchy of apps and file types, and it has no logic to it.

In Chrome, behaviour is as expected. Chrome vs. Safari screen recordings: https://shorturl.at/my3Oy
Replies: 0 · Boosts: 0 · Views: 66 · Activity: 13h
Post processing in visionOS
WWDC21 had a cool demo project with fish, with a watery, misty look ("Dive into RealityKit"). It used post processing in RealityKit, but the ARView class isn't available on visionOS. Can CompositorLayer be used instead for post processing in full immersion?
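As far as I understand it, yes in the sense that CompositorLayer hands you your own Metal render loop for a fully immersive space, so post-processing becomes a pass you write yourself; RealityKit's ARView post-process hooks don't carry over, and RealityKit content can't be rendered into that layer. A bare-bones sketch of where that loop would live follows; startRenderLoop is a hypothetical placeholder, not an API.

```swift
import SwiftUI
import CompositorServices

// Hypothetical stand-in for a Metal render loop (scene pass followed by a
// post-process pass). Real code would spin up a render thread, pull frames
// from the LayerRenderer, and encode its own passes per view.
func startRenderLoop(_ layerRenderer: LayerRenderer) {
    // Draw the scene, then run a custom post-processing pass, every frame.
}

struct FullyImmersiveMetalApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "metal") {
            // CompositorLayer provides the presentation surface for a fully
            // immersive Metal space, so any fog/mist/color-grade pass is yours to write.
            CompositorLayer { layerRenderer in
                startRenderLoop(layerRenderer)
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
```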
Replies: 0 · Boosts: 0 · Views: 45 · Activity: 14h