Posts

Post not yet marked as solved
5 Replies
Your question about the sample code kicked me in the right direction. The error message is a complete red herring. I was also doing some fiddling to make my peer-to-peer connection instance async/await, and in the listener setup I did something that didn't fully enable the state update handler, so I never heard the notification that the connection had come live, and I'd based a bunch of work off that. I re-worked the code back to using a closure for capturing and reacting to that state, and that sorted the issue entirely. I also learned that it's apparently acceptable to queue a single call to receive() even before the state is ready; that callback just won't be invoked until the connection is ready. I wasn't 100% sure of the constraints Network.NWConnection expects between registering send() and/or receive() callbacks and the state reported by the framework. After I backed off the idea of "wait until it's ready before doing ANYTHING" with async calls, the whole thing worked a lot more smoothly. (I had been setting up an AsyncStream with a continuation to handle the state update callbacks, but it ended up being a lot more ceremony without a whole lot of value - I do still want to wrangle something in there that's async-aware at a higher level to allow initiated connections to retry on failure, but I'm in a way better space for that now.)
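For anyone following along, here's a minimal sketch of the closure-based pattern I ended up with - the host, port, and print statements are just placeholders: react to state through stateUpdateHandler and go ahead and queue a receive before the connection reports .ready.

import Network

// Sketch only: a connection that reports state through the classic closure
// and queues a receive before it's ready (the callback fires once data can flow).
let connection = NWConnection(
    to: .hostPort(host: "example.local", port: 8080),
    using: .tcp
)

connection.stateUpdateHandler = { state in
    switch state {
    case .ready:
        print("connection is live")
    case .failed(let error):
        print("connection failed: \(error)")
    default:
        break
    }
}

// Queuing a single receive here, before .ready, is fine.
connection.receiveMessage { data, _, _, _ in
    if let data = data {
        print("received \(data.count) bytes")
    }
}

connection.start(queue: .main)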
Post not yet marked as solved
5 Replies
Thanks Quinn. Yeah, I killed the watchOS companion bits and ran it without issue. I'll go through with a fine-toothed comb and see if I can spot the (any) difference(s) in how it's set up vs anything I might have incorrectly changed. The logs from the app also show that same error - so I'm beginning to think the message "Unable to extract cached certificates from the SSL_SESSION object" is a red herring:

boringssl_session_set_peer_verification_state_from_session(448) [C1.1.1.1:2][0x106208510] Unable to extract cached certificates from the SSL_SESSION object
[C1 connected joe._tictactoe._tcp.local. tcp, tls, attribution: developer, path satisfied (Path is satisfied), viable, interface: en8] established
Post marked as solved
1 Reply
The effect is similar because the two types are closely related. RealityView is a top-level view that renders 3D content (using the RealityKit renderer) and provides RealityViewContent as "context" - a means to manipulate any of the entities: adding, removing, updating them, etc. If you use RealityView by itself, you're responsible for loading models, situating the entities so they work as you expect, and so on. Model3D, by comparison, reads as though it's meant to be the "fast path" to getting a model loaded and visible, without providing all of the open context to manipulate the RealityKit renderer and entities. Model3D doesn't expose that context, and doesn't provide a path to adjust or manipulate the models you're loading and displaying. The article Adding 3D content to your app covers both of them with snippets that might make it easier to see in context.
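To make the contrast concrete, here's a minimal visionOS-flavored sketch - the asset name "toy_robot" and the view names are placeholders of my own, not anything from the article:

import SwiftUI
import RealityKit

// Sketch only: Model3D loads and shows a model with no access to the entity graph.
struct QuickModel: View {
    var body: some View {
        Model3D(named: "toy_robot") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}

// Sketch only: RealityView hands you the content "context" so you load,
// position, and keep manipulating entities yourself.
struct FullControl: View {
    var body: some View {
        RealityView { content in
            if let entity = try? await Entity(named: "toy_robot") {
                entity.position = [0, 0, -0.5]
                content.add(entity)
            }
        } update: { _ in
            // adjust entities here in response to SwiftUI state changes
        }
    }
}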
Post not yet marked as solved
1 Reply
RealityKit "runs" on macOS, but it doesn't use the camera for any input - you have to create your own virtual camera and arrange all the positioning yourself. I made a SwiftUI view wrapped (for macOS) that lets me see RealityKit models and representations that you're welcome to dig through and take what you'd like from it: https://github.com/heckj/CameraControlARView - although if you use it as a package, make sure to use the release version, because I've got it somewhat torn apart working on adding some different camera motion modes to the original creation.
Post not yet marked as solved
3 Replies
I've been hitting the same questions and quandaries - and like @ThiloJaeggi, the only consistent pattern I've managed is recreating shader effects in Reality Composer Pro and applying them there. Annoying AF after the effort expended in Blender's shader nodes, but I do get the complications with export. Related to that, there is work pending to export Blender shader nodes (or more specifically, some subset of them) as MaterialX, but as far as I can see it's stalled in discussions of how to handle this inside Blender when it comes to their internal renderers (Eevee and Cycles), which currently don't have MaterialX support. I'm just starting to slowly increment through nodes to experiment with what can and can't be exported, as I'd really prefer to use Blender as my "DCC" tool of choice.
Post marked as solved
1 Reply
I found a solution, although I had to tweak my original data to enable it. I switched the percentiles reported out of my data structure so that they represent 1.0 - the_original_percentile, to get numbers into a log range that could be represented. Following that, the key was figuring out chartXScale and chartXAxis with a closure to tweak the values presented as labels using AxisValueLabel. For anyone else trying to make a service latency review chart and wanting to view the outliers at wacky high percentiles, here's the gist:

Chart {
    ForEach(Converter.invertedPercentileArray(histogram), id: \.0) { stat in
        LineMark(
            x: .value("percentile", stat.0),
            y: .value("value", stat.1)
        )
        // Use a curved line to join the points
        .interpolationMethod(.monotone)
    }
}
.chartXScale(
    domain: .automatic(includesZero: false, reversed: true),
    type: .log
)
.chartXAxis {
    AxisMarks(values: Converter.invertedPercentileArray(histogram).map { $0.0 }) { value in
        AxisGridLine()
        AxisValueLabel(centered: true, anchor: .top) {
            if let invertedPercentile = value.as(Double.self) {
                Text("\(((1 - invertedPercentile) * 100.0).formatted(.number.precision(.significantDigits(1...5))))")
            }
        }
    }
}

Of note: at least in Xcode Version 14.3 beta 2 (14E5207e), using reversed: true in the ScaleDomain caused the simulator to crash (filed as FB12035575). In a macOS app there wasn't any issue. Resulting chart:
Post marked as solved
1 Reply
Version 22.08 of the USD tools dropped with explicit native M1 support, although there are some quirks IF you're installing it over a version of Python that was set up and installed using Conda. I wrote a blog post with the details at https://rhonabwy.com/2022/08/16/native-support-for-usd-tools-on-an-m1-mac/, but the core of the issue is that an additional build argument is needed: PXR_PY_UNDEFINED_DYNAMIC_LOOKUP=ON, which you can pass through their build tooling:

python build_scripts/build_usd.py /opt/local/USD --build-args USD,"-DPXR_PY_UNDEFINED_DYNAMIC_LOOKUP=ON"

The details, and a very helpful explanation of why this is required, are covered in the USD GitHub issue https://github.com/PixarAnimationStudios/USD/issues/1996
Post not yet marked as solved
2 Replies
The fluid dynamics simulation stuff is well beyond what Apple provides in SceneKit (or RealityKit). The underlying topic is a deep, deep well of research, with a lot of interesting work, but most of the papers that you'll find are focused on simulations that are MUCH higher fidelity than you'll want (or need) for this kind of example. One paper I found while googling around was http://graphics.cs.cmu.edu/nsp/course/15-464/Fall09/papers/StamFluidforGames.pdf. It might be a worthwhile starting point for digging around and finding other places to read and research. In any case, I fully expect that you'll need to use Metal directly (or a series of interesting shaders that replicate the effects on the existing mesh). The visual above looks like it's a texture painted onto the existing mesh of that room, but I can't tell for sure.
Post not yet marked as solved
6 Replies
SceneKit doesn't preclude using an ECS system, but Apple's version of that setup is only built into the RealityKit framework. Stock SceneKit doesn't provide anything equivalent, instead leaving it up to however you'd like to implement any relevant gameplay/simulation logic. My "SceneKit is quirky with Swift" was mostly about the API and how it's exposed. There's zero issue with using it from Swift; the API is, however, far more C/Objective-C oriented - not at all surprising given when it was initially released. The RealityKit APIs (in comparison) feel to me like they fit a bit more smoothly into a Swift-based codebase.
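To give a sense of the built-in RealityKit ECS hooks, here's a minimal sketch - the component and system names are made up purely for illustration:

import RealityKit
import simd

// Sketch only: a made-up component and a system that acts on entities carrying it.
struct SpinComponent: Component {
    var radiansPerSecond: Float = .pi
}

struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let angle = spin.radiansPerSecond * Float(context.deltaTime)
            entity.transform.rotation = simd_quatf(angle: angle, axis: [0, 1, 0]) * entity.transform.rotation
        }
    }
}

// Register once at startup, then attach SpinComponent to any entity you want spinning:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()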
Post not yet marked as solved
6 Replies
I fully expect any answer you get from Apple would be "they're both fully supported frameworks", and beyond that it boils down to how you want to use the content. For quite a while only SceneKit had APIs for generating geometry meshes procedurally, but two years ago RealityKit quietly added API for that (although it's not really documented) - so you can do the same there. RealityKit comes with a super-easy path to making 3D content overlaying the current world (at least through the lens of an iPhone or iPad currently), but if you're just trying to display 3D content on macOS it's quite a bit crankier to deal with (although it's possible). RealityKit also comes with a presumption that you'll be coding the interactions with any 3D content leveraging an ECS pattern, which is rather "built in" at the core. The best example I've seen for learning how to procedurally assemble geometry with RealityKit is RealityGeometries (https://swiftpackageindex.com/maxxfrazer/RealityGeometries) - read through the code and you'll see how the MeshDescriptors are used to assemble things. SceneKit is a slightly older API, but in some ways much easier to get into for procedurally generated (and displayed) geometry; there's a quick sketch of that path below. There are also some libraries you can leverage (such as Euclid at https://github.com/nicklockwood/Euclid), which has been a joy for my experiments and purposes. There's quite a bit more (existing) sample content out there for SceneKit, so while the API can be a bit quirky from Swift, it's quite solid.
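As a taste of how approachable the SceneKit path is, here's a minimal sketch that builds a single triangle from vertex data - the coordinates are arbitrary:

import SceneKit

// Sketch only: one triangle assembled from a vertex source and an index element.
let vertices: [SCNVector3] = [
    SCNVector3(0, 0, 0),
    SCNVector3(1, 0, 0),
    SCNVector3(0, 1, 0)
]
let source = SCNGeometrySource(vertices: vertices)

let indices: [Int32] = [0, 1, 2]
let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)

let geometry = SCNGeometry(sources: [source], elements: [element])
let node = SCNNode(geometry: geometry)  // add to your scene's root node to display it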
Post not yet marked as solved
1 Reply
I recently stubbed out some brutally ugly UI code to render a 3D USDZ file into an animated gif - screen grabbing while orbiting the object. It's far from great or pretty, but should you want to explore and experiment, I made the bits I wrote open source: https://github.com/heckj/Film3D

I wanted the animated gif as a placeholder for documentation content in HTML formatted docs, so I only took it to the point of getting my render and leaving the UI pretty trashy - fair warning.
Post not yet marked as solved
1 Reply
There's no direct API within RealityKit to do that today. There is API to generate procedural meshes though - released last year with WWDC 21 and the RealityKit updates - although it lacks any documentation on Apple's site. There's some documentation for it embedded within the Swift-generated headers, and Maxx Frazer wrote a decent blog post about how to use MeshDescriptors, which are at the core of the API (https://maxxfrazer.medium.com/getting-started-with-realitykit-procedural-geometries-5dd9eca659ef). He also has some public Swift projects that build geometry and make good examples of how to use those APIs: https://github.com/maxxfrazer/RealityGeometries. I've been poking at the same space myself, generating meshes for Lindenmayer systems output - but I don't have anything to the extent of rendering 2D shapes into geometry using lathing or extrusion. The closest library to that I've seen is Nick Lockwood's Euclid, but it only targets SceneKit currently.
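For a quick sense of the shape of that API, here's a minimal sketch that builds a single triangle through a MeshDescriptor - the values, color, and function name are just for illustration:

import RealityKit

// Sketch only: one triangle generated from a MeshDescriptor.
func makeTriangleEntity() throws -> ModelEntity {
    var descriptor = MeshDescriptor(name: "triangle")
    descriptor.positions = MeshBuffer([
        SIMD3<Float>(0, 0, 0),
        SIMD3<Float>(0.1, 0, 0),
        SIMD3<Float>(0, 0.1, 0)
    ])
    descriptor.primitives = .triangles([0, 1, 2])

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [SimpleMaterial(color: .blue, isMetallic: false)])
}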
Post not yet marked as solved
2 Replies
Thanks @MobileTen - that's what I'd found I could do. I was just hoping there might be a path to leave the measurement itself alone and provide a unit value to the formatter itself, but apparently that isn't a thing.
Post not yet marked as solved
2 Replies
There's https://developer.apple.com/documentation/multipeerconnectivity, which works as a baseline, but the API is somewhat awkward to use and can be a bit slow to establish a full connection. It leverages both Wi-Fi and Bluetooth locally, and once established the connection is pretty decent. I wouldn't be surprised to see this either be deprecated or evolve significantly in the next year or two as actors in Swift, and more specifically distributed actors, get established in the base language and more systems can be built atop them in a pretty reasonable form.

In the past there was a GameKit mechanism that supported peer-to-peer networking as well (https://developer.apple.com/documentation/gamekit/gksession) - although it's now deprecated, so it's more for awareness than anything else. GameKit itself appears to have shifted a bit more toward internet-friends connectivity rather than a strict peer-to-peer model, but it may still be worth investigating depending on your needs: https://developer.apple.com/documentation/gamekit/connecting_players_with_their_friends_in_your_game

Beyond that, there's WebSocket support in URLSession these days, and Starscream if that's falling short - but you'd need to host your own HTTP service on some device and come up with an advertising process to let other devices know about it for peer-to-peer use. If you want to go the route of hosting an HTTP service within an iOS app, it's possible with SwiftNIO - and has been for a couple of years now - with a decent article talking about it at https://diamantidis.github.io/2019/10/27/swift-nio-server-in-an-ios-app. Perhaps most interestingly, the article references a couple of other libraries that let you do the same. Hopefully some research down that path provides interesting food for thought.
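For a sense of the URLSession WebSocket support, here's a minimal client sketch - the endpoint URL is a placeholder for whatever service you host and advertise yourself:

import Foundation

// Sketch only: the endpoint below is a placeholder.
let task = URLSession.shared.webSocketTask(with: URL(string: "ws://192.168.1.20:8080/peer")!)
task.resume()

task.send(.string("hello")) { error in
    if let error = error {
        print("send failed: \(error)")
    }
}

task.receive { result in
    switch result {
    case .success(.string(let text)):
        print("received: \(text)")
    case .success(.data(let data)):
        print("received \(data.count) bytes")
    case .failure(let error):
        print("receive failed: \(error)")
    default:
        break
    }
}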
Post marked as solved
1 Reply
Answering my own question - it's in the documentation, I just missed it earlier. Yes - use the extension file mechanism, as described in the section Arrange Nested Symbols in Extension Files of the article Adding Structure to Your Documentation Pages. I'm presuming that as you accumulate multiple of these, a good practice would be consistent naming based on the symbol they're organizing. That's my own observation, not something advised in the article. The key to not repeating details of the symbol (the overview, etc.) is the following metadata directive:

@Metadata {
    @DocumentationExtension(mergeBehavior: append)
}