The fluid dynamics simulation stuff is well beyond what Apple provides in SceneKit (or RealityKit). The underlying topic is a deep, deep well of research, with a lot of interesting work, but most of the papers that you'll find are focused on simulations that are MUCH higher fidelity than you'll want (or need) for this kind of example. One paper I found while googling around was http://graphics.cs.cmu.edu/nsp/course/15-464/Fall09/papers/StamFluidforGames.pdf. It might be a worthwhile starting point for digging around and finding other places to read and research.
In any case, I fully expect that you'll need to use Metal directly (or a series of interesting shaders that replicate the effects on the existing mesh). The visual above looks like it's a texture painted onto the existing mesh of that room, but I can't tell for sure.
SceneKit doesn't preclude using an ECS system, but Apple's version of that setup is only built into the RealityKit framework. Stock SceneKit doesn't provide anything equivalent, instead leaving it up to you to implement any relevant gameplay/simulation logic however you'd like.
My "SceneKit is quirky with Swift" was mostly about the API and how it's exposed. There's zero issue with using it with Swift, the API is, however, far more C/Objective-C oriented - not at all surprising for when it was initially released. The RealityKit API's (in comparison) feel to me like they fit a bit more smoothly into a swift-based codebase.
I fully expect any answer you get from Apple would be "they're both fully supported frameworks", so the choice really boils down to how you want to use the content. For quite a while, only SceneKit had APIs for generating geometry meshes procedurally, but two years ago RealityKit quietly added equivalent API (although it's not really documented), so you can now do the same there.
RealityKit comes with a super-easy path to making 3D content that overlays the current world (at least through the lens of an iPhone or iPad currently), but if you're just trying to display 3D content on macOS it's quite a bit crankier to deal with (although it's possible). RealityKit also comes with a presumption that you'll be coding the interactions with any 3D content using an ECS pattern, which is rather "built-in" at the core. The best example content I've seen for learning how to procedurally assemble geometry with RealityKit is RealityGeometries (https://swiftpackageindex.com/maxxfrazer/RealityGeometries) - read through the code and you'll see how MeshDescriptors are used to assemble things.
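To give a sense of that ECS pattern, here's a rough sketch of a RealityKit component and system pair. The names (SpinComponent, SpinSystem) and the spinning behavior are purely illustrative, not anything from Apple's samples:

import RealityKit
import simd

// A hypothetical component holding per-entity simulation state.
struct SpinComponent: Component {
    var radiansPerSecond: Float = 1.0
}

// A system that updates every entity carrying SpinComponent once per frame.
struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            guard let spin = entity.components[SpinComponent.self] as? SpinComponent else { continue }
            let delta = simd_quatf(angle: spin.radiansPerSecond * Float(context.deltaTime),
                                   axis: [0, 1, 0])
            entity.transform.rotation = entity.transform.rotation * delta
        }
    }
}

// Register both once, early in the app's lifecycle:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()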
SceneKit is a slightly older API, but in some ways much easier to get into for procedurally generated (and displayed) geometry. There are also some libraries you can leverage, such as Euclid (https://github.com/nicklockwood/Euclid), which has been a joy for my experiments and purposes. There's quite a bit more existing sample content out there for SceneKit, so while the API can be a bit quirky from Swift, it's quite solid.
I recently stubbed out some brutally ugly UI code to render a 3D USDZ file into an animated gif - screen grabbing while orbiting the object. It's far from great or pretty, but should you want to explore and experiment, I made the bits I wrote open source: https://github.com/heckj/Film3D
I wanted the animated gif as a placeholder for documentation content in HTML formatted docs, so I only took it to the point of getting my render and leaving the UI pretty trashy - fair warning.
There's no direct API within RealityKit to do that today.
There is API to generate procedural meshes though - released last year with WWDC 21 and the RealityKit updates, although it lacks any documentation on Apple's site. There's some documentation embedded within the Swift generated headers, though, and Maxx Fraser wrote a decent blog post about how to use MeshDescriptors, which are at the core of the API: https://maxxfrazer.medium.com/getting-started-with-realitykit-procedural-geometries-5dd9eca659ef. He also has some public Swift projects that build geometry and make good examples of how to use those APIs: https://github.com/maxxfrazer/RealityGeometries
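To give a flavor of what that looks like, here's a minimal sketch that builds a single triangle from a MeshDescriptor - the names and coordinates are my own illustration, not anything from the blog post or Apple's samples:

import RealityKit

// Describe a single triangle: three positions and one face.
var descriptor = MeshDescriptor(name: "triangle")
descriptor.positions = MeshBuffers.Positions([
    SIMD3<Float>(0, 0, 0),
    SIMD3<Float>(1, 0, 0),
    SIMD3<Float>(0, 1, 0)
])
descriptor.primitives = .triangles([0, 1, 2])

// generate(from:) accepts an array of descriptors and can throw.
if let mesh = try? MeshResource.generate(from: [descriptor]) {
    let entity = ModelEntity(mesh: mesh, materials: [SimpleMaterial()])
    // add the entity to an anchor/scene as usual
}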
I've been poking at the same space myself, generating meshes from Lindenmayer system output - but I don't have anything to the extent of rendering 2D shapes into geometry using lathing or extrusion. The closest library to that I've seen available is Nick Lockwood's Euclid, but it only targets SceneKit currently.
Thanks @MobileTen - that's what I'd found I could do. I was just hoping there might be a path to leave the measurement itself alone and pass a unit value to the formatter itself, but that apparently isn't a thing.
The full playground content in case anyone else comes exploring:
import Foundation
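// Exploring how MeasurementFormatter renders Measurement<UnitDuration> values
// across the various unitStyle and unitOptions combinations.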
//var time1 = Measurement<UnitDuration>(value: 1.23, unit: .seconds)
var time2 = Measurement<UnitDuration>(value: 6.54, unit: .milliseconds)
//var time3 = Measurement<UnitDuration>(value: 9.01, unit: .microseconds)
time2.converted(to: .picoseconds)
let times = [1, 10, 100, 1000, 10103, 2354, 83674, 182549].map {
    Measurement<UnitDuration>(value: $0, unit: .microseconds)
}
//print("\(time1), \(time2), \(time3)")
//print(time1.formatted())
//print(time2.formatted())
//print(time3.formatted())
//for t in times {
// print(t.formatted())
//}
// Static method 'list(type:width:)' requires the types 'Measurement<UnitDuration>' and 'String' be equivalent
// print(times.formatted(.list(type: .and, width: .standard)))
let f = MeasurementFormatter()

print("unitStyle: .short, unitOptions: .naturalScale")
f.unitOptions = .naturalScale
f.unitStyle = .short
for t in times {
    print(f.string(from: t))
}

print("unitStyle: .medium, unitOptions: .naturalScale")
f.unitOptions = .naturalScale
f.unitStyle = .medium
for t in times {
    print(f.string(from: t))
}

print("unitStyle: .long, unitOptions: .naturalScale")
f.unitOptions = .naturalScale
f.unitStyle = .long
for t in times {
    print(f.string(from: t))
}

print("unitStyle: .short, unitOptions: .providedUnit")
f.unitOptions = .providedUnit
f.unitStyle = .short
for t in times {
    print(f.string(from: t))
}

print("unitStyle: .medium, unitOptions: .providedUnit")
f.unitOptions = .providedUnit
f.unitStyle = .medium
for t in times {
    print(f.string(from: t))
}

print("unitStyle: .long, unitOptions: .providedUnit")
f.unitOptions = .providedUnit
f.unitStyle = .long
for t in times {
    print(f.string(from: t))
}
There's https://developer.apple.com/documentation/multipeerconnectivity that works as a baseline, but the API is somewhat awkward to use and can be a bit slow to establish a full connection. It leverages both WiFi and Bluetooth locally, and once established the connection is pretty decent. I wouldn't be surprised to see this either be deprecated or evolve significantly in the next year or two as Actors in Swift, and more specifically Distributed Actors, land in the base language and more systems can be built atop them in a reasonable fashion.
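For reference, the basic MultipeerConnectivity setup looks roughly like the following sketch - the service type and display name are placeholders, and the delegates (where all the real work happens) are elided:

import MultipeerConnectivity

let serviceType = "my-example"   // hypothetical Bonjour-style service name
let myPeerID = MCPeerID(displayName: "my-device")

// The session carries data once peers are connected; assign an MCSessionDelegate
// to receive state changes and incoming data.
let session = MCSession(peer: myPeerID, securityIdentity: nil, encryptionPreference: .required)

// One side advertises itself...
let advertiser = MCNearbyServiceAdvertiser(peer: myPeerID, discoveryInfo: nil, serviceType: serviceType)
advertiser.startAdvertisingPeer()   // MCNearbyServiceAdvertiserDelegate receives invitations

// ...and the other side browses for, and invites, peers it finds.
let browser = MCNearbyServiceBrowser(peer: myPeerID, serviceType: serviceType)
browser.startBrowsingForPeers()     // MCNearbyServiceBrowserDelegate gets foundPeer callbacks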
In the past there was a GameKit mechanism that supported peer-to-peer networking as well (https://developer.apple.com/documentation/gamekit/gksession) - although it's now deprecated, so this is more for awareness than anything else. GameKit itself appears to have shifted more toward internet-friends connectivity rather than a strict peer-to-peer model, but it may still be worth investigating depending on your needs: https://developer.apple.com/documentation/gamekit/connecting_players_with_their_friends_in_your_game
Beyond that, there's WebSocket support in URLSession these days, and Starscream if that falls short - but you'd need to host your own HTTP service on some device and come up with an advertising process to let other devices know about it for peer-to-peer. If you want to go the route of hosting an HTTP service within an iOS app, it's possible with SwiftNIO - it has been for a couple of years now, and there's a decent article about it at https://diamantidis.github.io/2019/10/27/swift-nio-server-in-an-ios-app - perhaps most interestingly, the article references a couple of other libraries that let you do the same. Hopefully some research down that path provides interesting food for thought.
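The URLSession WebSocket side is pretty compact; here's a minimal sketch (the URL is a placeholder for whatever service you end up hosting and advertising):

import Foundation

let url = URL(string: "ws://192.168.1.20:8080/socket")!   // placeholder endpoint
let task = URLSession.shared.webSocketTask(with: url)
task.resume()

// Send a text frame.
task.send(.string("hello")) { error in
    if let error = error {
        print("send failed: \(error)")
    }
}

// Receive a single frame; call receive again from the handler to keep listening.
task.receive { result in
    switch result {
    case .success(.string(let text)):
        print("received: \(text)")
    case .success(.data(let data)):
        print("received \(data.count) bytes")
    case .failure(let error):
        print("receive failed: \(error)")
    default:
        break
    }
}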
Answering my own question - it's in the documentation, I just missed it earlier. Yes - use the extension file mechanism, as described in the "Arrange Nested Symbols in Extension Files" section of the article "Adding Structure to Your Documentation Pages".
I'm presuming that as you get multiple of these, a good practice would be consistent naming based on the symbol to which they're providing the organization. That's my own observation and not advised in the article.
The key to not repeating details of the symbol (the overview, etc) is with the following metadata code:
@Metadata {
    @DocumentationExtension(mergeBehavior: append)
}
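For context, a complete extension file pairs that metadata with the symbol link and whatever curation you want to add. A minimal sketch, using a hypothetical ``MyLibrary/Renderer`` symbol, might look like:

# ``MyLibrary/Renderer``

@Metadata {
    @DocumentationExtension(mergeBehavior: append)
}

## Topics

### Creating a Renderer

- ``MyLibrary/Renderer/init(size:)``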
The snippets I want to include are more than just code snippets - I wanted to have the same content in multiple locations - akin to an "include this markdown file here" sort of marker, which I can do in other documentation systems to re-use some short content in multiple locations.
I submitted this as feedback: FB9779628, and now that DocC is open source, I'll look into the avenue of how I might enable this myself.
While I’m writing my documentation, I’d like to reference the same content - which includes a code snippet blocked out using the triple-backtick syntax as well as narrative content in regular markdown - in multiple locations within my docs.
Right now I have to copy/paste this bit into multiple areas, but I’d really prefer to have a means to “import/include” a stand-alone markdown file, that isn’t otherwise rendered into the documentation, so that I can keep a single location up to date with my code.
For code snippets in particular, there was a pitch recently on the Swift Evolution forums by Ashley Garland that related to this - the idea being that the documentation catalog could contain a specific set of Swift snippets, and the compiler (or SwiftPM in this case, I think) could/would build those to verify that everything kept working, no warnings, etc.
I just built this for Xcode 13.1 by applying the following diffs (make sure you're building for a device, not the simulator - the code doesn't compile for the simulator, which isn't mentioned in the README - FB9181536):
diff --git a/Underwater/Octopus.swift b/Underwater/Octopus.swift
index e447a32..7a99708 100644
--- a/Underwater/Octopus.swift
+++ b/Underwater/Octopus.swift
@@ -82,7 +82,7 @@ struct OctopusSystem: RealityKit.System {
let scene = context.scene
for octopus in scene.performQuery(OctopusComponent.query) {
guard octopus.isEnabled else { continue }
- guard var component = octopus.components[OctopusComponent] as? OctopusComponent else { continue }
+ guard var component = octopus.components[OctopusComponent.self] as? OctopusComponent else { continue }
guard component.settings?.octopus.fearsCamera ?? false else { return }
switch component.state {
case .hiding:
diff --git a/Underwater/Scattering.swift b/Underwater/Scattering.swift
index 0883bdb..db3aa47 100644
--- a/Underwater/Scattering.swift
+++ b/Underwater/Scattering.swift
@@ -127,8 +127,8 @@ extension Entity {
if let animation = try? AnimationResource.generate(with: FromToByAnimation<Transform>(
from: transformWithZeroScale,
to: transform,
- duration: 1.0,
- targetPath: .transform
+ duration: 1.0
+// targetPath: .transform
)) {
playAnimation(animation)
}
With the WWDC21 software updates, I noticed that RealityKit now has some additional methods on MeshResource:
generate(from:) (two overloads)
replace(with:)
(and async oriented variations of these methods)
The generate method is used in the example code BuildingAnImmersiveExperienceWithRealityKit, and it looks like it is now possible to provide the indices, normals, etc. to build up a mesh procedurally. These methods don't have any documentation on them, so I'm not 100% certain about the details, but the Swift generated headers have some relevant notes that make it appear like these might be the "RealityKit" way to procedurally create meshes.
Am I inferring correctly that these methods enable procedural generation with this year's release, or are these methods more focused elsewhere?
This session is what prompted this question - the Canvas is a single element, and I want to have multiple accessibility elements that I'm displaying within it. That may not be (yet?) possible, but I wanted to see if there were any suggestions for a pattern for creating multiple children and aligning them with specific in-view coordinate spaces, mimicking a bit of what you can do within UIKit where you specify a frame location for a child accessibility object.
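For reference, the UIKit pattern I'm describing looks roughly like this sketch - the view, labels, and frame values are all made up for illustration:

import UIKit

class ChartView: UIView {
    override var accessibilityElements: [Any]? {
        get {
            // Expose one accessibility element per drawn item, each pinned
            // to a frame expressed in this view's coordinate space.
            let bar = UIAccessibilityElement(accessibilityContainer: self)
            bar.accessibilityLabel = "First bar, value 42"
            bar.accessibilityFrameInContainerSpace = CGRect(x: 10, y: 20, width: 30, height: 120)
            return [bar]
        }
        set { }
    }
}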