Hi,
If I understand what you want to do correctly, here's one thing to look at and consider when combining 'behavior actions'.
Matching the Time playback value between the two behaviors you wish to combine always seems to work for me, but you can also combine a second behavior with a differing time value, so long as it is a multiple of 2 of the one you wish to combine it with.
I use pulses and spins a lot, but to get differing looks in a 'looped' behavior I've made an initial behavior which plays for 44 seconds. I can then make another behavior which plays for 22 seconds (it doesn't matter what that behavior does), combine the two, and see them execute simultaneously.
Hope this helped!
Larry 'Catfish' Kuhn
Hi,
I've found the way so far is to edit your audio with an external application and save it as an 'Apple Lossless MP4'. Reality Composer will play a variety of audio file formats, like .WAV, and the way the sounds are imported into an RC scene is by adding a Behavior and choosing the type of audio playback you desire. This is where you may find it helpful to have already made the decision: 'Am I going to want to use the spatial audio capability of RC?' If so, Apple recommends you save your audio files as mono files. RC uses some elaborate panning schemes behind the scenes to fool your ears and brain into thinking sounds get closer or farther away when assigned to an object for playback in your scene.
Saving audio as a single non-stereo file seems a bit counterintuitive on the surface, but you also get file-size savings from this approach. I have not been dissatisfied with playback quality, and when using spatial audio to play a sound effect or a character's voice I have been happy with the results I've gotten from mono .MP4 files.
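If it helps to see the same idea from the code side, here is a minimal RealityKit sketch; 'robotVoice.m4a' is a placeholder file name, and the point is simply that a mono source plus the .spatial input mode is what lets the engine do its panning tricks:

```swift
import RealityKit

// A minimal sketch, assuming a mono audio file bundled with the app.
// The file name is a placeholder; .spatial asks RealityKit to pan the
// sound relative to the listener as the entity or camera moves.
func playSpatialAudio(on entity: Entity) {
    do {
        let resource = try AudioFileResource.load(named: "robotVoice.m4a",
                                                  inputMode: .spatial,
                                                  loadingStrategy: .preload,
                                                  shouldLoop: true)
        // playAudio(_:) returns a controller you can keep around to
        // pause or adjust gain later; it's ignored here for brevity.
        _ = entity.playAudio(resource)
    } catch {
        print("Audio load failed: \(error)")
    }
}
```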
Larry ‘Catfish’ Kuhn
Hi,
I began using Reality Composer on the iPad Pro about a year ago, approaching it from having taught 2D/3D graphics 20 years ago. Apple's AR is the most exciting presentation format since QuickTime. I am in much the same position you are in... I have 'content'; now how do I sew it together into an app?
Apple has solid information on making apps in 'normal product category domains'; AR is still a bit new. The difficulty of making the leap from 'content' to 'app' depends somewhat on your background with Xcode, along with the complexity and goals of your app.
You also may or may not need an 'app' in the App Store sense; I've seen discussion of using Quick Look for viewing .reality files. If all you need to do is share your AR content, exporting a Reality Composer scene as a .reality file can be a very useful way of making your content available, working the way it functions within Reality Composer's building environment.
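As a rough sketch of that Quick Look route (assuming the exported file is named Scene.reality and is bundled with the app; both are placeholders):

```swift
import UIKit
import QuickLook

// A minimal sketch: present an exported .reality file with Quick Look
// instead of building a full AR app around it.
final class RealityPreviewController: UIViewController, QLPreviewControllerDataSource {

    func showPreview() {
        let previewController = QLPreviewController()
        previewController.dataSource = self
        present(previewController, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // "Scene.reality" is a placeholder for whatever Reality Composer exported.
        let url = Bundle.main.url(forResource: "Scene", withExtension: "reality")!
        return url as NSURL
    }
}
```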
You may also wish to look at Swift Playgrounds on iOS as a way to examine whether your content fits that use model.
But for an honest-to-goodness AR-capable 'app' on the App Store, all roads lead through Xcode.
There are tidbits of relevant information out on the web, but very little exists in the way of a 'start-to-finish' (with all the nuts and bolts needed) book or compendium. There are reasons for this, I think. AR is a 'different' presentation medium for artists and programmers alike. Judging by the number of 2D games versus AR apps on the App Store, the paltry number of AR apps reflects the current 'knowledge landscape' on the subject.
There is also another aspect of this discrepancy: at the moment we are in a netherworld of non-capable devices being used by the general public. Unless you work in a more urban environment, chances are you are using older technology with plans of riding it until it breaks into silicon dust. Time will fix this, but it is another reason we don't see the development 'wave' (and the writer interest it would spawn) for AR being the same as for more familiar and less processor-intensive graphics formats.
I remember reading two things back in the '80s in ACM journals which apply to 'programmatic development' (think Xcode):
'It is easier to build a 4" mirror, then build a 6" mirror, than it is to build a 6" mirror.'
and...
’Furious activity is no substitute for understanding!’
You and I are both looking for material on 'how to build a 4" mirror'.
Best,
Larry ‘Catfish’ Kuhn
Hi,
The addition of QuickTime as a 'material' will offer new potential for RC compositions, but there is also much value in using transparency in 2D PNG images. A transparent or semi-transparent PNG can have behaviors applied to it once you've assigned the image to the picture frame or document object in the RC 'add' section (under Arts; why it's not labeled better I can't tell you!).
PNGs offer transparency for 2D images in RC, but the downside is poor file compression compared to JPEGs (which compress nicely but do not offer transparency).
I have still not had the courage to upgrade either of my iPad Pros to iOS 14 yet. I'm certain I will enjoy using some of the new RealityKit and RC features in iOS 14, but I am still quite impressed by RC's abilities in iOS 13.x. It is hands down the most capable and easiest to use of any 3D (much less AR) app on iOS that I have used in the past 3-5 years. I prefer doing content creation all on the iOS side of the fence; it fits my 'Goldilocks zone' for being productive as a device.
I've certainly learned a great deal about RC by experimenting, but you may find it a useful strategy to have a specific scene or RC composition in mind when testing specific features.
In the WWDC demo of QuickTime as an RC material, one of the things I took away was that it might pay to keep QT segments short and sized appropriately for the object they will be applied to. The fly in the demo was pretty tiny, so I don't think we will be putting 640x480 QT movies on anything just yet. Even as it stands, it's a rich feature in a very capable and friendly tool.
Best,
Larry 'Catfish' Kuhn
Hi,
If I correctly understand what you wish to do:
1) “Create pre-built models and animations for dynamic element models which can skip to particular frames in the code depending on data” — In RealityKit and Composer, one of the behavior actions which can be added is 'send notify message to Xcode'. Consider placing one of these actions at various points in a specific scene's or object's overall behavior where you wish to have Xcode 'do something' in response to being notified by the action in your 'dynamic element'. If, instead, you meant that you want Xcode to first respond to data (perhaps from a Core ML model) and, based on that data stream, then drive a RealityKit scene, object grouping, or single object entity, that too is possible, but may require more intricate coding in Xcode.
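For what it's worth, here is a hedged sketch of what that plumbing looks like on the Xcode side, based on the code Xcode generates when you add an .rcproject. 'Experience', 'Scene1', 'reachedKeyframe', and 'startSpin' are placeholders; the real property names are generated from whatever you name things in Reality Composer, so treat this as a shape rather than a recipe:

```swift
import RealityKit

// A sketch of the generated-code API shape, not drop-in code.
func setUpNotifications() throws -> Experience.Scene1 {
    let sceneAnchor = try Experience.loadScene1()

    // Runs whenever a 'Notify' action fires inside the scene's behavior.
    sceneAnchor.actions.reachedKeyframe.onAction = { entity in
        print("Notify fired from \(entity?.name ?? "unknown entity")")
        // React to your data here.
    }

    // The other direction: a 'Notification' trigger set up in Reality
    // Composer can be posted from code when your data says so.
    sceneAnchor.notifications.startSpin.post()

    return sceneAnchor
}
```

That second call is the one to reach for if the data (Core ML or otherwise) is what decides when a behavior runs.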
2) “Coordinating with a UIView or SKView subview, unprojecting points from the 3D space to 2D to align the elements” — If I understand correctly, you'd be using a UIView or SKView subview for the UI aspect of taking your existing app and going AR with it?
So the desired outcome is to take the coordinate space of a specific 3D object, or points in a 3D 'AR box', and 'pancake' them into a 2D interface? 'Unprojecting' may be the wrong term; it sounds more like a coordinate-space transform to me.
While there may be an app for that, there's at least Xcode. I do not know the 'how' of doing it, but my internal ML says it can be done without too much bloodshed.
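As a sketch of what that transform can look like in code: ARView has a project(_:) call that maps a world-space point to view coordinates (and an unproject(_:ontoPlane:) for the reverse direction, which may be why the terminology gets muddy). The function and view names below are placeholders:

```swift
import RealityKit
import UIKit

// A minimal sketch: take an entity's world-space position, ask the
// ARView for the matching 2D point, and park a UIView on top of it.
func align(label: UIView, with entity: Entity, in arView: ARView) {
    let worldPosition = entity.position(relativeTo: nil)

    // project(_:) returns nil when the point can't be mapped
    // (for example, when it is behind the camera).
    guard let screenPoint = arView.project(worldPosition) else {
        label.isHidden = true
        return
    }
    label.isHidden = false
    label.center = screenPoint
}
```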
3) “Are there other approaches to consider, especially ones that would allow me to include 3D elements?”
Your approach should usually be connected to the identifiable needs of your app's conversion to AR space. My experience has been that you'll benefit by trying multiple strategies and approaches to any given 'how do I do this in RealityKit or Composer?' Being open to applying new strategies is half the battle. AR is still a very new space for us developers to explore. A sense of curiosity is almost as important as specific detailed knowledge about this or that.
Part of the 'untold secret' of effectively using the great capabilities Apple has given us with ARKit, RealityKit-Composer-Converter, SceneKit, and SpriteKit together in the AR experience is going back and forth between the result you want and the available capabilities of the tools. By refining the fit between them, strategies, approaches, and solutions have fertile ground to emerge.
I hope this helps.
Best,
Catfish
Hi,
I believe curveddesign has a good point: the ability to use mapped movie materials will help achieve the effects and results many developers would like, perhaps just not served up the way they might normally think of applying a 'shader' file effect. The movie material is an economical way to deal with some transparency needs: apply the transparency needed on the frames of the movie, then apply the movie as a material to the object.
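For the code-minded, a hedged sketch of that movie-as-material idea using RealityKit's VideoMaterial (iOS 14 and later); 'flicker.mp4' is a placeholder, and the see-through regions only come along if the movie frames themselves carry transparency (e.g. HEVC with alpha):

```swift
import RealityKit
import AVFoundation

// A minimal sketch: play a bundled movie as the surface material of a model.
func applyMovieMaterial(to model: ModelEntity) {
    guard let url = Bundle.main.url(forResource: "flicker", withExtension: "mp4") else { return }

    let player = AVPlayer(url: url)
    let videoMaterial = VideoMaterial(avPlayer: player)

    // Replace the model's materials with the movie and start playback.
    model.model?.materials = [videoMaterial]
    player.play()
}
```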
During the past six months or so since I discovered Reality Composer (with a dash of RealityKit and Converter on the Mac) for the iPad Pro + Pencil 2, I have been consistently pleased with how good a 3D production environment it is. In general I have found that 'there's usually a way' to do what you'd like to accomplish, but it isn't always obvious from the Developer documentation how to do things. I do understand the documentation's focus is on addressing the specifics of 'the what'.
The 'thinking outside the point cloud' aspect has been important for me. More than once I have attempted this or that idea in a Reality Composer scene; I'll work on it for a while, and if I hit the wall, I stop. I put that project file away and find another file with less lofty aspirations to work on. It may take some re-reading of both Apple Developer resources and others, but within a few days to a week I can usually find at least a different approach to try.
Getting 'outside the box', for me, has often been coupled with widening my focus to include combining features and capabilities in ways not considered before. It also seems one gets there quicker by knowing when to stop working on one strategy or approach, at least long enough to scramble out of the 'box' for a bit!
Somewhere along the way out of the box, I discovered that 'sometimes Reality is the strangest Fantasy of them all!' Another good reason to widen the focus sometimes.
Best,
Catfish
At this point (6-21-20), you have to look at using resources outside of RK and RC if you know the opacity level you want for your mesh. SceneKit is a viable path to this end. If your mesh model is .OBJ or another SceneKit-importable file type, you can adjust the opacity, color, and lighting schemes, then export the mesh model as a .USDZ file. As you know, you can bring .USDZ objects directly into any RK or RC scene or project from there.
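A hedged sketch of that SceneKit detour, in case it helps; the file names and the 0.5 opacity are placeholders, and the availability of the write-to-.usdz call can depend on your OS version:

```swift
import SceneKit

// A sketch: load an importable mesh, dial in opacity, export as .usdz.
func exportFadedMesh() throws {
    guard let sourceURL = Bundle.main.url(forResource: "rock", withExtension: "obj") else { return }
    let scene = try SCNScene(url: sourceURL, options: nil)

    // Fade every node; you could target a single node by name instead.
    scene.rootNode.enumerateChildNodes { node, _ in
        node.opacity = 0.5
    }

    let outputURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("rockFaded.usdz")
    let succeeded = scene.write(to: outputURL, options: nil,
                                delegate: nil, progressHandler: nil)
    print("USDZ export succeeded: \(succeeded)")
}
```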
I agree with Max; 'object opacity' sliders in RK-RC are part of the path toward the future!
Catfish
Reality Composer just hit us (6-20-20) with new content, including a few 'glass' objects in a variety of shapes: test tubes, beakers, flasks, and more.
In my own efforts to find out where the 'Glass Factory' was, I ended up using SceneKit to create a few items (exported as .USDZ).
It worked well enough, but I prefer doing all my content creation in Reality Composer (because I can use the Pencil 2, which offers the best control of any stylus I've used in the past 35 years).
I hope Apple folds some of the PBR aspects of SceneKit into Reality (Kit and Composer) in the coming months. SceneKit has many usable features, but they need an 'interface makeover' by artists to make them as accessible as the features found in RC. I hope that's where we're headed. Going back and forth between SceneKit and RC is a bit cumbersome, but you can make glass objects in SceneKit and export them as .usdz files to incorporate into RC scenes and projects.
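And for the glass itself, a rough sketch of the kind of physically based SceneKit material I mean, to be exported the same way; the numbers are guesses you would tune by eye:

```swift
import SceneKit
import UIKit

// A sketch of a 'glass-ish' physically based material on a simple shape.
func makeGlassNode() -> SCNNode {
    let flask = SCNNode(geometry: SCNSphere(radius: 0.1))

    let glass = SCNMaterial()
    glass.lightingModel = .physicallyBased
    glass.diffuse.contents = UIColor.white
    glass.metalness.contents = 0.0
    glass.roughness.contents = 0.05
    glass.transparency = 0.3   // mostly see-through

    flask.geometry?.materials = [glass]
    return flask
}
```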
Object 'transparency' sliders would be high on my list of improvements for RC.
Catfish