I fully expect any answer you get from Apple would be "they're both fully supported frameworks", and so far the choice has boiled down to how you want to use the content. For quite a while, only SceneKit had APIs for generating geometry meshes procedurally, but two years ago RealityKit quietly added an API for this as well (MeshDescriptor, though it's not well documented), so you can now do the same there.
RealityKit comes with a super-easy path to making 3D content that overlays the real world (at least through the lens of an iPhone or iPad, currently), but if you're just trying to display 3D content on macOS it's quite a bit crankier to deal with (although it's possible). RealityKit also comes with the presumption that you'll code interactions with any 3D content using an ECS pattern, which is rather "built-in" at its core. The best example and content I've seen for learning how to procedurally assemble geometry with RealityKit is RealityGeometries (https://swiftpackageindex.com/maxxfrazer/RealityGeometries) - read through the code and you'll see how MeshDescriptors are used to assemble things.
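To give a feel for the MeshDescriptor path, here's a minimal sketch that builds a single triangle and wraps it in a ModelEntity (the positions and material are arbitrary, just for illustration):

```swift
import RealityKit

// Sketch of procedural geometry in RealityKit (2.0+): describe
// vertices, normals, and triangle indices in a MeshDescriptor,
// then generate a MeshResource from it.
var descriptor = MeshDescriptor(name: "triangle")
descriptor.positions = MeshBuffer([
    SIMD3<Float>(0, 0, 0),
    SIMD3<Float>(0.1, 0, 0),
    SIMD3<Float>(0, 0.1, 0)
])
descriptor.normals = MeshBuffer([
    SIMD3<Float>(0, 0, 1),
    SIMD3<Float>(0, 0, 1),
    SIMD3<Float>(0, 0, 1)
])
descriptor.primitives = .triangles([0, 1, 2])

let mesh = try MeshResource.generate(from: [descriptor])
let entity = ModelEntity(mesh: mesh,
                         materials: [SimpleMaterial(color: .blue, isMetallic: false)])
```

Real geometry is just more of the same: bigger position/index buffers, usually computed in a loop rather than written out by hand.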
SceneKit is a slightly older API, but in some ways much easier to get into for procedurally generating (and displaying) geometry. There are also libraries you can leverage, such as Euclid (https://github.com/nicklockwood/Euclid), which has been a joy for my experiments and purposes. There's quite a bit more existing sample content out there for SceneKit, so while the API can be a bit quirky from Swift, it's quite solid.
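For comparison, the raw SceneKit route goes through SCNGeometrySource and SCNGeometryElement. Here's the same triangle as above, again just a sketch:

```swift
import SceneKit

// The same triangle via SceneKit: pack vertices into an
// SCNGeometrySource, indices into an SCNGeometryElement, then
// combine them into an SCNGeometry attached to a node.
let vertices = [
    SCNVector3(0, 0, 0),
    SCNVector3(0.1, 0, 0),
    SCNVector3(0, 0.1, 0)
]
let source = SCNGeometrySource(vertices: vertices)
let indices: [UInt32] = [0, 1, 2]
let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
let geometry = SCNGeometry(sources: [source], elements: [element])
let node = SCNNode(geometry: geometry)
```

If you'd rather not hand-roll vertex buffers at all, Euclid wraps this up nicely - if I remember its API right, something like `SCNGeometry(Mesh.cube(size: 1))` gets you from one of its CSG-capable meshes straight to a SceneKit geometry.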