I'm trying to understand better how to 'navigate' around a large USD scene inside a RealityView in SwiftUI (itself in a volume on VisionOS).
With a little trial and error I have been able to understand scale and translate transforms, and I can have the USD zoom to 'presets' of different scale and translation transforms.
Separately I can also rotate an unscaled and untranslated USD, and have it rotate in place 90 degrees at a time to return to a rotation of 0 degrees.
But if I try to combine the two activities, the rotation occurs around the center of the USD, not my zoomed location.
Is there a session or sample code available that combines these activities? I think I would understand relatively quickly if I saw it in action.
Thanks for any pointers available!
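While waiting for sample code, here is one way I understand the combination could work: compose the rotation with translations so the model pivots around the zoomed point rather than its own origin. This is a sketch under my own assumptions (a RealityKit `Entity`, and a pivot expressed in the entity's parent coordinate space), not confirmed sample code:

```swift
import RealityKit
import simd

/// Rotate `entity` by `rotation` about `pivot` (a point in the entity's
/// parent coordinate space) instead of about the entity's own origin.
func rotate(_ entity: Entity, by rotation: simd_quatf, around pivot: SIMD3<Float>) {
    var transform = entity.transform
    // Move the pivot to the origin, rotate, then move back:
    // newPosition = pivot + R * (position - pivot)
    transform.translation = pivot + rotation.act(transform.translation - pivot)
    transform.rotation = rotation * transform.rotation
    entity.transform = transform
}

// Hypothetical usage: spin the model 90 degrees about the current zoom target
// (`model` and `zoomCenter` are placeholders for your own entity and point):
// rotate(model, by: simd_quatf(angle: .pi / 2, axis: [0, 1, 0]), around: zoomCenter)
```

Because the translation is applied in the parent's space, this should compose with an existing scale/translate "preset" without the rotation snapping back to the model's center.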
I am trying to verify my understanding of adding a HoverEffectComponent on entities inside a scene in RealityViews.
Inside Reality Composer Pro, I have added the required Input Target and Collision components to one entity inside a node with multiple siblings, leaving the options at their defaults. They appear to create appropriately sized bounding boxes etc. for these objects.
In my RealityView I programmatically add the HoverEffectComponents to the entities as I don't see them in RCP.
On device, this appears to "work" in the sense that when I gaze at the entity, it lights up - but so does every other entity in the scene - even those without Input Target and Collision components attached.
Because the documentation on the components is sparse I am unsure if this is behavior as designed (e.g. all entities in that node are activated) or a bug or something in between.
Has anyone encountered this and is there an appropriate way of setting these relationships up?
Thanks
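For reference, this is roughly how I scoped the programmatic step, on the assumption that only the entities carrying the Reality Composer Pro components should get the effect (a sketch, not verified against every hierarchy):

```swift
import RealityKit

/// Attach a hover effect only to entities that already carry the
/// InputTargetComponent added in Reality Composer Pro; siblings
/// without it are left alone.
func addHoverEffects(in root: Entity) {
    for child in root.children {
        if child.components.has(InputTargetComponent.self) {
            child.components.set(HoverEffectComponent())
        }
        addHoverEffects(in: child)  // recurse through the hierarchy
    }
}
```

If every entity still lights up with this in place, that would suggest the highlight is being applied at (or inherited from) a shared ancestor rather than per entity.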
I am trying to use ArchiveStream.process on macOS as outlined in the documentation here:
https://developer.apple.com/documentation/accelerate/decompressing_and_extracting_an_archived_directory
I have been able to successfully do the following:
Take objects and create Data objects
Create a UIDocument that is FileWrapper based
Compress the UIDocument as an .aar archive
Upload it to iCloud as a CKRecord
Download it as an .aar and decode it back to a directory
Decode the directory as individual data items and back to objects
The problem is that sometimes I can only download from iCloud once and have the decompression succeed; other times it may work two or three times, but it ultimately fails when calling ArchiveStream.process.
The error reported by ArchiveStream.process is simply 'ioError' but on the console I see the following:
[0xa5063c00] Truncated block header (8/16 bytes read)
[0xa503d000] NOP received
[0xa5080400] processStream
[0xa7019000] decoder failed
[0xbd008c00] istream read error
[0xbd031c00] refill buffer
[0x90008000] archive stream read error (header)
[0xc8173800] stream cancelled
The test data I am using does not change so it does not appear to be related to what I am compressing.
But I am at a loss how to prevent the error.
This is iOS 17 running on macOS (as iPad) and on iOS 17 devices.
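For reference, my decode path follows the linked sample roughly like this (the paths and the error type are placeholders). The "Truncated block header (8/16 bytes read)" line makes me suspect the downloaded Data is sometimes incomplete before it ever reaches these streams:

```swift
import AppleArchive
import System

struct ExtractError: Error {}

/// Decompress an .aar at `sourcePath` and extract its contents into
/// `destinationDirectory`, mirroring the linked Apple Archive sample.
func extractArchive(at sourcePath: FilePath, to destinationDirectory: FilePath) throws {
    guard let readStream = ArchiveByteStream.fileStream(
            path: sourcePath, mode: .readOnly, options: [],
            permissions: FilePermissions(rawValue: 0o644)),
          let decompressStream = ArchiveByteStream.decompressionStream(readingFrom: readStream),
          let decodeStream = ArchiveStream.decodeStream(readingFrom: decompressStream),
          let extractStream = ArchiveStream.extractStream(
            extractingTo: destinationDirectory, flags: [.ignoreOperationNotPermitted])
    else { throw ExtractError() }
    defer {
        try? extractStream.close()
        try? decodeStream.close()
        try? decompressStream.close()
        try? readStream.close()
    }
    // This is the call that intermittently fails with .ioError.
    _ = try ArchiveStream.process(readingFrom: decodeStream, writingTo: extractStream)
}
```

Comparing the byte count of the CKRecord asset against the original .aar before calling this might confirm whether the archive is arriving truncated.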
I am trying to understand if what I am seeing is expected behavior or not with the following UIKit components.
1. I have a view controller "A" embedded in a navigation controller (part of a multi-step flow). Large titles are active on this navigation controller.
2. In view controller "A", I have a container view that contains another view controller "B" (I want to reuse the contents of B in other flows).
3. Inside view controller "B", I have a UICollectionView using a diffable data source.
When you load view controller "A" it appears to work fine: my collection view loads data, I see a nice list, and when I scroll it, the expectation is that it scrolls inside its container and has no impact on the parent controller "A".
However, the navigation bar and title in "A" reflect the content offset of the collection view. Scroll a couple of lines and the large title turns small and centered at the top. If I turn off large titles, I still see the background color of the navigation bar change, as it would if you were scrolling a view directly inside controller "A" without the container view.
Am I supposed to be manually capturing the gesture recognizer in B and somehow preventing the gesture to bubble up to A? It seems like strange behavior to have to correct. Any suggestions?
Thanks!
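In case it's useful context, these are the two iOS 15+ knobs I'm aware of for influencing the bar. This is a sketch, and the nil behavior of setContentScrollView(_:for:) is my assumption rather than something I've verified:

```swift
import UIKit

final class AViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Option 1: pin the bar's scroll-edge appearance to an opaque
        // appearance so scrolling no longer changes the background.
        // (This does not stop the large-title collapse by itself.)
        let appearance = UINavigationBarAppearance()
        appearance.configureWithOpaqueBackground()
        navigationItem.standardAppearance = appearance
        navigationItem.scrollEdgeAppearance = appearance

        // Option 2 (iOS 15+): tell UIKit explicitly which scroll view should
        // drive the top bar, instead of letting it discover the child's
        // collection view. Whether passing nil fully disables the automatic
        // discovery is an assumption worth verifying in this hierarchy.
        setContentScrollView(nil, for: .top)
    }
}
```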
I am trying to understand the concepts between two of the CloudKit code samples:
CoreDataCloudKitDemo, which shows sync between Core Data and CloudKit
CoreDataFetchedProperty, which shows how you can keep public and private data in two Core Data configurations and join them together.
After some trial and error I created a single NSPersistentCloudKitContainer that I thought used the two separate configurations: each had its own local persistent store, the database scope was set properly for both stores, etc. But when I run the app it complains with the following:
Failed to load persistent stores: Error Domain=NSCocoaErrorDomain Code=134060 "A Core Data error occurred." UserInfo={NSLocalizedFailureReason=CloudKit integration does not allow relationships to objects that aren't sync'd. The following relationships have destination entities that aren't in the specified configuration:
EntityA: entityB - EntityB
So I went back to the model, and although I had created two separate Configurations (with EntityA in one and EntityB in the other), I had not enabled them for use with CloudKit. When I did that, the app now refuses to build.
This feels like a common scenario so I am assuming I have misconfigured something in the model. Are there any pointers that can help me correct this?
Thanks
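For context, my container setup looks roughly like this (the configuration names, store file names, and container identifier are placeholders). My understanding from the fetched-property sample is that the two entities have to be joined with a fetched property rather than a modeled relationship, since CloudKit integration disallows relationships that cross configurations:

```swift
import CoreData

// Two model configurations, one synced to the private CloudKit database
// and one to the public database, in a single container.
let container = NSPersistentCloudKitContainer(name: "Model")
let storeURL = NSPersistentContainer.defaultDirectoryURL()

let privateDescription = NSPersistentStoreDescription(
    url: storeURL.appendingPathComponent("private.sqlite"))
privateDescription.configuration = "Private"  // placeholder configuration name
let privateOptions = NSPersistentCloudKitContainerOptions(
    containerIdentifier: "iCloud.com.example.app")  // placeholder identifier
privateOptions.databaseScope = .private
privateDescription.cloudKitContainerOptions = privateOptions

let publicDescription = NSPersistentStoreDescription(
    url: storeURL.appendingPathComponent("public.sqlite"))
publicDescription.configuration = "Public"  // placeholder configuration name
let publicOptions = NSPersistentCloudKitContainerOptions(
    containerIdentifier: "iCloud.com.example.app")
publicOptions.databaseScope = .public
publicDescription.cloudKitContainerOptions = publicOptions

container.persistentStoreDescriptions = [privateDescription, publicDescription]
container.loadPersistentStores { _, error in
    if let error { fatalError("Failed to load stores: \(error)") }
}
```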
I've been trying to replicate a specific type collection view using compositional layouts, and am not fully understanding how to use background items correctly.
The effect I am trying to recreate is the 'favorites' section of the initial Maps sheet.
As the user pans the orthogonal section, the background moves along with the items, even if the number of items grows past the width of the view.
However in my code, although I attach a NSCollectionLayoutDecorationItem.background to the decorationItems of the enclosing section, the background doesn't scroll with the items.
Do I need to instead create NSCollectionLayoutSupplementaryItems and attach them to the group for this effect? If so, I am not understanding the correct process: if I create the layout group with NSCollectionLayoutGroup(layoutSize: layoutGroupSize, supplementaryItems: [item]) I can't add subitems later, and if I use NSCollectionLayoutGroup.horizontal(layoutSize: layoutGroupSize, subitems: [items]) then I can't add supplementary items.
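From what I can tell, supplementary items can only be supplied when a layout item is created, which would explain why they can't be attached after NSCollectionLayoutGroup.horizontal(layoutSize:subitems:). Here is a sketch of anchoring a background-style supplementary to the item instead ("background" is a hypothetical element kind that needs a matching registered view); unlike a section decoration item, item-anchored supplementaries scroll with the orthogonal content:

```swift
import UIKit

let itemSize = NSCollectionLayoutSize(widthDimension: .fractionalWidth(1.0),
                                      heightDimension: .fractionalHeight(1.0))

// A supplementary pinned to all edges of its container item.
let background = NSCollectionLayoutSupplementaryItem(
    layoutSize: itemSize,
    elementKind: "background",  // hypothetical element kind
    containerAnchor: NSCollectionLayoutAnchor(edges: .all))

// Supplementary items appear to be attachable only at creation time.
let item = NSCollectionLayoutItem(layoutSize: itemSize,
                                  supplementaryItems: [background])

let groupSize = NSCollectionLayoutSize(widthDimension: .estimated(300),
                                       heightDimension: .absolute(100))
let group = NSCollectionLayoutGroup.horizontal(layoutSize: groupSize, subitems: [item])
```

Whether a single continuous background spanning the whole group (rather than per item) is achievable this way is the part I have not confirmed.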
I found the Scrumdinger sample application really helpful in understanding SwiftUI, but I have a question about the transcription example.
Whether I use the "StartingProject" and work through the tutorial section, or use the "Completed" project, the speech transcription works, but only for a small number of seconds.
Is this a side effect of something else in the project? Should I expect a complete transcription of everything said when the MeetingView view is presented?
This was done on Xcode 13.4 and Xcode 14 beta 4, with iOS 15 and iOS 16 (beta 4).
Thanks for any assistance!
I'm trying to solve a behavior difference between iOS and the respective Catalyst app with regards to pulling data from an AWS Lambda.

When I run the app on macOS 10.15, the dataTaskPublisher that requests the URL completes successfully. But when running on an iOS device, the data task is cancelled with an error (-999) that I believe is related to SSL restrictions that are more fully enforced in iOS 13.

I can't seem to find a specific pointer to the correct procedure; any pointers would be greatly appreciated. Thanks!