
Rendering bug when layering transparent textures front and back
If I apply an image texture with alpha to a model created in Blender and run it in Reality Composer Pro or on visionOS, the front-to-back rendering of the transparent areas comes out wrong. Details are below. I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro. When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas: the transparent areas do not become transparent; the transparent areas become transparent together with the image behind them; and the draw order of the images becomes incorrect. Best regards.
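While this is being investigated, here is a minimal sketch, assuming two hypothetical entities innerCylinder and outerCylinder whose transparent faces draw in the wrong order. RealityKit's ModelSortGroupComponent can force an explicit draw order among grouped models; whether it resolves this particular Blender/USDC case is untested.

import RealityKit

// Assumption: innerCylinder and outerCylinder are the affected entities.
let sortGroup = ModelSortGroup(depthPass: nil)
// Lower order values are drawn earlier within the group.
innerCylinder.components.set(ModelSortGroupComponent(group: sortGroup, order: 0))
outerCylinder.components.set(ModelSortGroupComponent(group: sortGroup, order: 1))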
Replies: 1 · Boosts: 0 · Views: 322 · Nov ’24

How to create emissive (glowing) materials in visionOS
Hi, I am investigating how to achieve emissive (glowing) effects like the following in my visionOS app. https://www.hiroakit.com/archives/1432 https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/ Right now I am trying various things with Shader Graph in Reality Composer Pro, but I cannot tell from the official documentation and WWDC session videos what the individual Shader Graph nodes do or what effects their combinations produce. I am beginning to suspect that such luminous materials and expressions are simply not possible in visionOS. If there is a way to achieve this, please let me know. Thanks.
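For what it's worth, RealityKit's PhysicallyBasedMaterial does expose emissive parameters; a minimal sketch follows, assuming a hypothetical ModelEntity named glowEntity. Note that emission makes a surface self-lit but, to my knowledge, does not by itself produce a bloom halo.

import RealityKit
import UIKit

// glowEntity: ModelEntity is an assumption for illustration.
var material = PhysicallyBasedMaterial()
material.emissiveColor = .init(color: .cyan)   // self-illuminated color
material.emissiveIntensity = 2.0               // brightness multiplier
glowEntity.model?.materials = [material]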
Replies: 0 · Boosts: 0 · Views: 454 · Mar ’24

GroupActivity: Dropping activity as there is no active conversation:
Hi, I am having trouble getting SharePlay to work. When I create and run the GroupActivity sample from the tutorial below, I get the following message and the GroupActivity does not start. https://mitemmetim.medium.com/shareplay-tutorial-share-custom-data-between-ios-and-macos-a50bfecf6e64

Dropping activity as there is no active conversation: <TUMutableConversationActivityCreateSessionRequest 0x2836731c0 activityIdentifier=jp.co.1planet.sample.SharePlayTutorial.SharePlayActivity applicationContext={length = 42, bytes = 0x62706c69 73743030 d0080000 00000000 ... 00000000 00000009 } metadata=<TUConversationActivityMetadata 0x28072d380 context=CPGroupActivityGenericContext title=SharePlay Example sceneAssociationBehavior=<TUConversationActivitySceneAssociationBehavior 0x28237a740 targetContentIdentifier=(null) shouldAssociateScene=1 preferredSceneSessionRole=(null)>> UUID=3137DDE4-F5B2-46B2-9097-30DD6CAE79A3>

I tried running it on Mac and iOS, but it did not work as expected. I am also trying the approach in the following thread: https://developer.apple.com/forums/thread/683624 I am new to GroupActivities; I have added the Group Activities capability. Do I need to set anything else? Please let me know if you know of any solution to this message. For reference, I am using Xcode 15.2 Beta, iOS 17.1.1 and iOS 17.3 Beta, and macOS 14.2.1 (23C71). Best Regards.
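For context, this message usually indicates that activation was attempted without an eligible conversation (for example, an active FaceTime call) to attach the activity to. A minimal sketch of the standard activation flow, under that assumption:

import GroupActivities

struct SharePlayActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "SharePlay Example"
        meta.type = .generic
        return meta
    }
}

func startSharePlay() async {
    let activity = SharePlayActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // Succeeds only when there is an active conversation to join.
        _ = try? await activity.activate()
    case .activationDisabled, .cancelled:
        break
    @unknown default:
        break
    }
}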
Replies: 0 · Boosts: 0 · Views: 723 · Jan ’24

How to access Persona Virtual Camera features
How do I access Persona Virtual Camera features from an app? I would appreciate information about any required permissions or a simple implementation example. I know that this feature is probably only available on the Apple Vision Pro device, but it would be helpful to share information about Persona Virtual Camera, including whether or not it works with the visionOS simulator, and a solid description of how it works. If there is a page or video that explains Persona Virtual Camera well, please share that as well. Best Regards. Sadao Tokuyama https://1planet.co.jp/ https://twitter.com/tokufxug
Replies: 0 · Boosts: 0 · Views: 922 · Dec ’23

How to set the default size of a volumetric WindowGroup in SwiftUI to fit a ModelEntity loaded from USDZ
Hello. I am posting this in the hope of getting advice on what I would like to achieve: downloading a USDZ 3D model from a web server within a visionOS app and displaying it in a Shared Space volume (volumetric window) sized to fit the downloaded model. Currently, after downloading the USDZ and creating a ModelEntity from it, I use openWindow to open a volumetric WindowGroup, and the ModelEntity is added to the RealityViewContent of the RealityView in the view opened by openWindow. A USDZ downloaded this way appears in the volume on visionOS without any problems. However, the sizes of the downloaded USDZ models are not uniform, so a model may not fit in the volume. I am trying to pass an appropriate size to defaultSize via a Binding when generating the WindowGroup, but I am not sure which property of ModelEntity provides an appropriate value for defaultSize. The attached image also does not have the correct position; I would like to move the model down if possible. I would appreciate your advice on sizing and positioning the downloaded USDZ to fit the volume. Incidentally, I tried a plane-style window and found that it displayed the USDZ ModelEntity at a much larger scale than the volume does, so I have decided not to support a plane-style window. If there is any information on how to properly set the position and size of USDZ models in visionOS and RealityKit, I would appreciate that as well. Best regards. Sadao Tokuyama https://twitter.com/tokufxug https://1planet.co.jp/tech-blog/category/applevisionpro
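One workaround sketch, assuming the volume keeps a fixed defaultSize (it is set once at scene creation) and the model is scaled to fit instead: visualBounds(relativeTo:) supplies the extents in meters, and re-centering also pulls the model down into the volume.

import RealityKit

// volumeSide: the volumetric window's edge length in meters (assumed known).
func fit(_ modelEntity: ModelEntity, intoCubeOfSide volumeSide: Float) {
    let bounds = modelEntity.visualBounds(relativeTo: nil)
    let maxExtent = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
    guard maxExtent > 0 else { return }
    let scale = volumeSide / maxExtent
    modelEntity.scale = SIMD3<Float>(repeating: scale)
    // Re-center so the scaled model sits in the middle of the volume.
    modelEntity.position = -bounds.center * scale
}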
Replies: 1 · Boosts: 0 · Views: 771 · Oct ’23

How to place a 3D model in front of the user in a Full Space app
Hi, I am currently developing a Full Space app. I have a question about how to display an Entity or ModelEntity in front of the user. I want to move the Entity or ModelEntity in front of the user not only at the initial display, but also when the user takes an action such as tapping; specifically, the same placement logic as the initial display should run when a reset button is tapped. (Animation is not required.) Thanks. Sadao Tokuyama https://twitter.com/tokufxug https://www.linkedin.com/in/sadao-tokuyama/ https://1planet.co.jp/tech-blog/category/applevisionpro
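A minimal sketch of one approach, assuming an ImmersiveSpace in which an ARKitSession has already been started with session.run([worldTracking]): query the device anchor for the headset pose and place the entity along its forward (-Z) axis. The same function can be called again from the reset button's action.

import ARKit
import QuartzCore
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func placeInFrontOfUser(_ entity: Entity, distance: Float = 1.5) {
    // Requires session.run([worldTracking]) to have been called beforehand.
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
    let m = device.originFromAnchorTransform
    let devicePosition = SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
    let forward = -SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z) // -Z is the look direction
    entity.position = devicePosition + forward * distance
}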
Replies: 1 · Boosts: 0 · Views: 590 · Oct ’23

Play spatial video shot on iPhone 15 Pro in visionOS simulator
I heard that the iPhone 15 Pro and iPhone 15 Pro Max can shoot spatial video, although I understand this capability will not be available at launch. Once the iPhone 15 Pro can shoot spatial video, can that footage be played back in the visionOS simulator? And when it is played back, is it rendered in three dimensions as spatial video in the simulator? I would like to play back spatial video shot with the iPhone 15 Pro using RealityKit's VideoPlayerComponent. I am concerned that if the visionOS simulator cannot be used to verify playback of the captured spatial video, verification will take a long time, because I do not have an Apple Vision Pro device.
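For reference, the VideoPlayerComponent part is straightforward; a minimal sketch, assuming a hypothetical bundled clip named SpatialClip.mov (whether the simulator renders it stereoscopically is exactly the open question above):

import AVFoundation
import RealityKit

// SpatialClip.mov is a placeholder name for a spatial (MV-HEVC) video file.
let url = Bundle.main.url(forResource: "SpatialClip", withExtension: "mov")!
let player = AVPlayer(url: url)
let videoEntity = Entity()
videoEntity.components.set(VideoPlayerComponent(avPlayer: player))
player.play()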
Replies: 0 · Boosts: 1 · Views: 1.8k · Sep ’23

OrbitAnimation does not work.
Hi, I implemented it as shown in the link below, but it does not animate. https://developer.apple.com/videos/play/wwdc2023/10080/?time=1220 The following message was displayed: No bind target found for played animation.

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let entity = try? await ModelEntity(named: "toy_biplane_idle") {
                let bounds = entity.model!.mesh.bounds.extents
                entity.components.set(CollisionComponent(shapes: [.generateBox(size: bounds)]))
                entity.components.set(HoverEffectComponent())
                entity.components.set(InputTargetComponent())

                if let toy = try? await ModelEntity(named: "toy_drummer_idle") {
                    let orbit = OrbitAnimation(
                        name: "orbit",
                        duration: 30,
                        axis: [0, 1, 0],
                        startTransform: toy.transform,
                        bindTarget: .transform,
                        repeatMode: .repeat)
                    if let animation = try? AnimationResource.generate(with: orbit) {
                        toy.playAnimation(animation)
                    }
                    content.add(toy)
                }
                content.add(entity)
            }
        }
    }
}
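One assumption worth ruling out (not a confirmed fix): the bind target may only resolve once the entity is part of the scene, so try adding toy to the content before playing the animation.

// Assumption: add to the scene first, then play.
content.add(toy)
if let animation = try? AnimationResource.generate(with: orbit) {
    toy.playAnimation(animation)
}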
Replies: 0 · Boosts: 0 · Views: 777 · Aug ’23

How to trigger MagnifyGesture in the visionOS simulator
Hi, I have one question: how do I trigger MagnifyGesture's onChanged event in the visionOS simulator? I have tried various operations, but the onChanged event never fires. https://developer.apple.com/videos/play/wwdc2023/10111/?time=994

import SwiftUI

@main
struct WorldApp: App {
    @State private var currentStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "solar") {
            SolarSystem()
                .simultaneousGesture(MagnifyGesture()
                    .onChanged { value in
                        let scale = value.magnification
                        // Check larger thresholds first so each branch is reachable.
                        if scale > 10 {
                            currentStyle = .full
                        } else if scale > 5 {
                            currentStyle = .progressive
                        } else {
                            currentStyle = .mixed
                        }
                    }
                )
        }
        .immersionStyle(selection: $currentStyle, in: .mixed, .progressive, .full)
    }
}

Thanks.
Replies: 2 · Boosts: 0 · Views: 1.2k · Aug ’23

GeometryReader3D and Scene Phases do not work properly.
Hi. Scene phases do not issue an event when an Alert is presented. Is this a known bug? https://developer.apple.com/videos/play/wwdc2023/10111/?time=784 In the following video, the center value is obtained, but a compile error occurs because center is not found. https://developer.apple.com/videos/play/wwdc2023/10111/?time=861

GeometryReader3D { proxy in
    ZStack {
        Earth(
            earthConfiguration: model.solarEarth,
            satelliteConfiguration: [model.solarSatellite],
            moonConfiguration: model.solarMoon,
            showSun: true,
            sunAngle: model.solarSunAngle,
            animateUpdates: animateUpdates
        )
        .onTapGesture {
            if let translation = proxy.transform(in: .immersiveSpace)?.translation {
                model.solarEarth.position = Point3D(translation)
            }
        }
    }
}

Also, model.solarEarth.position is a Point3D, so this is not a plain Entity, is it? I am quite confused because the sample code is fragmented and I cannot even confirm that it works. I am not even sure whether this is a bug, so investigating and verifying it is taking me several days to a week.
Replies: 1 · Boosts: 0 · Views: 688 · Aug ’23

About GestureState<ManipulationState>
The source code in visionOS's WWDC23 session, Take SwiftUI to the next dimension, suddenly makes extensive use of GestureState, but there is no sample code that shows GestureState in full, nor any explanation of its use in the video. I cannot make further progress in understanding it without more information. https://developer.apple.com/videos/play/wwdc2023/10113/?time=969 URL of a capture of the part of the video where GestureState is used (an error occurred when uploading the image): https://imgur.com/a/ZAeWk2k Sincerely, Sadao Tokuyama https://twitter.com/tokufxug https://www.linkedin.com/in/sadao-tokuyama/
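In the meantime, a hedged sketch of the pattern the session appears to use: ManipulationState is not published, so this hypothetical struct only illustrates how @GestureState holds transient gesture data that automatically resets when the gesture ends.

import RealityKit
import Spatial
import SwiftUI

// Hypothetical stand-in for the session's unpublished ManipulationState.
struct ManipulationState {
    var rotation: Rotation3D = .identity
    var isActive = false
}

struct RotatableModel: View {
    @GestureState private var state = ManipulationState()

    var body: some View {
        Model3D(named: "toy_biplane_idle")
            .rotation3DEffect(state.rotation)
            .gesture(
                RotateGesture3D()
                    .updating($state) { value, state, _ in
                        // Written during the gesture; reverts to the initial
                        // value when the gesture finishes or is cancelled.
                        state.isActive = true
                        state.rotation = value.rotation
                    }
            )
    }
}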
Replies: 3 · Boosts: 0 · Views: 901 · Aug ’23

About metersPerUnit in USDZ
Hi, I watched the WWDC23 session video, "Create 3D models for Quick Look spatial experiences." https://developer.apple.com/videos/play/wwdc2023/10274/ In the video, I understood that the scale of models displayed using visionOS's AR Quick Look is determined by referencing the "metersPerUnit" value in USDZ files. I tried to find tools to set the "metersPerUnit" in 3D software or tools to view the "metersPerUnit" in USDZ files, but I couldn't find any. I believe adjusting the "metersPerUnit" in USDZ is crucial to achieve real-world scale when displaying models through visionOS's AR Quick Look. If anyone knows of apps or tools that can reference USDZ's "metersPerUnit" or 3D editor apps or tools that allow exporting with the "metersPerUnit" value properly reflected, I would greatly appreciate the information. Best regards. Sadao Tokuyama https://twitter.com/tokufxug https://www.linkedin.com/in/sadao-tokuyama/
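One detail that may help: metersPerUnit is layer (stage) metadata, so converting the .usdc inside the USDZ to text with usdcat (part of Pixar's USD tools, which Apple's USD Python tools also include, e.g. usdcat Model.usdc) makes it visible and editable in the file header, for example:

#usda 1.0
(
    metersPerUnit = 0.01
    upAxis = "Y"
)

Here 0.01 means one file unit is one centimeter, which Quick Look should use to derive real-world scale.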
Replies: 0 · Boosts: 0 · Views: 751 · Jul ’23