The ARKit API supports simultaneous world and face tracking via the back and front cameras, but unfortunately, due to hardware limitations, the new iPad Pro 2020 is unable to use this feature (probably because the LiDAR camera draws a lot more power). This is a bit of a step back.

Here is an updated reference from the sample project's Swift source:

guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
    fatalError("This sample code requires iOS 13 / iPadOS 13, and an iOS device with a front TrueDepth camera. Note: 2020 iPads do not support user face-tracking while world tracking.")
}

There is also a forum conversation confirming that this is a hardware limitation rather than an intentional software restriction. It looks like the mobile technology is not "there yet" for both. However, has anyone confirmed whether this limitation extends to simply getting multiple video feeds, as opposed to running tracking on both cameras?

What I want to achieve: run a continuous world-tracking session in ARKit and render the rear camera feed. At the same time, get front camera data using the regular video APIs, but without doing any tracking; just process the front camera video frames with CoreML or Vision for other purposes.

The comment says "Note: 2020 iPads do not support user face-tracking while world tracking." That almost suggests the issue is related exclusively to *tracking*. Simultaneous front/back camera feed support was only introduced in 2019, I believe, and there's a new API for starting a capture session with both cameras. Since ARKit implicitly initializes one of the cameras, does this make what I want impossible?

In short, can I use ARKit to do rear-camera world tracking and simultaneously receive and process front-camera data? If so, how? May I have a code example? I hope there is a solution to this.
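To make it concrete, this is the kind of setup I have in mind. It's only a sketch; whether ARKit claiming the rear camera makes the second capture session fail at runtime is exactly what I'm asking, and the device choice and details here are just illustrative:

import ARKit
import AVFoundation

final class DualCameraController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let arSession = ARSession()            // rear camera: world tracking, rendered as usual
    let frontSession = AVCaptureSession()  // front camera: plain video frames, no tracking
    private let frameQueue = DispatchQueue(label: "front-camera-frames")

    func start() {
        // Ordinary world tracking on the rear camera.
        arSession.run(ARWorldTrackingConfiguration())

        // Regular capture pipeline on the front camera, feeding Vision/CoreML only.
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: device),
              frontSession.canAddInput(input) else { return }
        frontSession.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: frameQueue)
        if frontSession.canAddOutput(output) { frontSession.addOutput(output) }
        frontSession.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Run CoreML / Vision on the front-camera frame here; no face tracking involved.
        _ = pixelBuffer
    }
}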
I develop mainly UNIX-style command-line applications without Xcode and use plenty of open-source software that isn't necessarily Mac-specific or signed (e.g. from GitHub repositories). Given the new security enforced at the hardware level, I'm concerned about what this means for my development cycle. On Catalina, you can re-enable "download from anywhere" via the command line. Since I'm not able to get an Apple Silicon development kit, I can't test the behavior myself.
If someone knows already or is expecting a development kit, it would be great to get more information. These are some of my questions:
Will I still be able to compile code locally and run it without code-signing? Might I be able to re-enable "download from anywhere" like on Catalina, so people in my own group can run binaries I send them securely (not via the App Store)? I am also wondering whether the tagged-pointer memory shown in the WWDC presentation will be optional, or whether the idea is that it will become mandatory once Apple gets it working for user-level programs.
I currently use Homebrew to download the latest versions of Clang and other packages, again not for Xcode Mac apps but rather for command-line programs. Will I still be able to do this once Homebrew/LLVM update to build for Arm Macs?
Lastly, can I always just disable SIP at my own risk to restore full freedom, or is this unrelated?
Note that I understand all of the security risks; I just want to know about all my options. For development and research purposes, it's sometimes beneficial to enable easy unsigned development and sharing and simply be a responsible programmer and computer user.
Thanks.
Hello all. I'm developing a user-generated-content 2D drawing application for iOS (iPad) that requires a lot of dynamic drawing, texture swaps, and generally changing data, meaning that I can't allocate most of my resources up front.
The general idea of what I'm trying to do:
One thing I'd like to do is support lots of texture loading and swapping at runtime, using images fetched on-the-fly. I would prefer not to set textures over and over using setFragmentTexture to avoid all of the extra validation.
Here is what I've tried and thought of doing so far, for some context:
A way of doing what I want, I think, might involve creating a large array of texture2D representing an active set of resident textures. Each object in my world would have an associated set of indices into that array (so I could support sprite animation, for example). Then, in the shader, I could use the relevant index to access the correct texture and sample from it.
Is dynamic indexing to sample a texture even allowed?
There's a problem, however. Everything I've learned about other graphics APIs and Metal suggests that this sort of dynamic indexing might be illegal unless the index is the same for all invocations of the shader within a draw call. In Vulkan, this is called being "dynamically uniform," though there's apparently an extension that loosens this constraint. I think what I'm trying to achieve is called "bindless textures" or a form of "descriptor indexing."
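To illustrate the idea from the host side (the names are placeholders, and whether the shader may legally index with this value is the open question):

import simd

// Per-object data I have in mind. Each draw would read its materialIndex, and the
// fragment shader would sample textures[materialIndex] from the large resident array.
struct PerObjectUniforms {
    var modelMatrix: simd_float4x4
    var materialIndex: UInt32   // index into the array of resident textures
}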
Potentially use Argument Buffers?
So I looked into Argument Buffers here, which seem to support more efficient texture-setting that avoids a lot of the overhead of validation by packing everything together. From my understanding, it might also relax some constraints. I'm looking at this example: using_argument_buffers_with_resource_heaps - https://developer.apple.com/documentation/metal/buffers/using_argument_buffers_with_resource_heaps?language=objc
The example uses arrays of textures as well as heaps, but you can toggle-off the heaps (so let's ignore them for now).
Assuming I have tier-2 iOS devices (e.g. iPad mini 2019, iPad Pro 2020), there's a lot I can do with these.
If I used Argument Buffers, how would I support dynamically-added, removed, and swapped resources?
Still, there's another problem: none of the examples show how to modify which textures are set in the argument buffer. Understandably, those buffers were optimized for set-once, use-every-frame use cases, but I want to be able to change the contents during my draw loop in case I run out of texture slots, or in case I want to delete a texture outright; this is a dynamic program, after all.
On to the concrete questions:
Question 1
How do I best support this sort of bindless texturing that I'm looking to do?
Should I still use argument buffers? If so, how do I use them efficiently, considering I may have to change their contents? How *do* I change their contents? Would I just create a new argument buffer encoder and set one of the texture slots to something else, for example, or would I need to re-encode the entire thing? What are the costs?
May I have clarification and maybe some basic example code? The official documentation, as I said, does not cover my use case.
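For reference, this is roughly what I imagine for swapping a single slot, assuming the fragment function declares the texture array in an argument buffer at buffer index 0 (all names here are placeholders). I don't know whether touching one slot like this is valid or what it costs:

import Metal

// Hypothetical: overwrite one texture slot in an existing argument buffer.
func replaceTexture(_ newTexture: MTLTexture,
                    atSlot slot: Int,
                    in argumentBuffer: MTLBuffer,
                    fragmentFunction: MTLFunction) {
    let encoder = fragmentFunction.makeArgumentEncoder(bufferIndex: 0)
    encoder.setArgumentBuffer(argumentBuffer, offset: 0)
    encoder.setTexture(newTexture, index: slot)   // does this leave the other slots intact?
}

// Every referenced texture still has to be made resident each frame, e.g.:
// renderEncoder.useResource(newTexture, usage: .read)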
Question 2:
Is it even legal to sample from a texture array using a dynamic index provided in an object-specific uniform buffer? Basically, it would be the equivalent of a "material index."
Last Question:
I think I also have the option of accessing the texture from the vertex shader and passing it to the fragment shader as an output. I hadn't considered doing that before, but is there a reason this might be useful? For example, might some restrictions be lifted if I did things this way? I can imagine that because there are usually far fewer vertices than fragments, this would reduce the number of accesses into my "large buffer" of resident textures.
Thanks for your time. I'm looking forward to discussing and getting to the bottom of this!
A lab I very much wanted to attend this Friday was assigned at the absolute worst time, conflicting with a critical business meeting. Scheduling was a gamble, I know. Is there any possibility of rescheduling within a more specific time range?
Thank you for your time.
I forgot to ask this during my lab session, but I noticed iPadOS is not listed among the supported OSes on the GroupActivities documentation page.
iPadOS supports FaceTime, but is it the case that GroupActivities doesn't work on iPadOS? That would be a crying shame, since one of the examples specifically involved drawing collaboratively, and the iPad is the perfect device for that use case.
EDIT: Quick edit. "Coordinate media experiences with Group Activities" mentions iPadOS support, in which case the first page I linked might simply be missing an OS entry.
Is it possible to feed ReplayKit custom live-stream data, e.g. a CVPixelBuffer created from a Metal texture, and stream that to YouTube? My use case is to give the broadcaster hidden UI-manipulation controls that the stream audience cannot see. (Think of a DJ: no one gets to see all the DAW controls on the DJ's laptop, and no one needs to, because that's not part of the experience.)
If it's possible, might anyone be able to help me figure out the correct way to implement this? From what I can tell, ReplayKit doesn't let you send custom data, in which case, what else can be done?
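To be concrete, the part I do know how to do is getting my Metal rendering into a CVPixelBuffer, by rendering into a texture that shares memory with the buffer; what I don't know is how, or whether, ReplayKit can take that buffer instead of the screen contents. A sketch of the part I have (sizes and pixel format are just for illustration):

import CoreVideo
import Metal

// Create a CVPixelBuffer and a Metal texture that share the same memory, so whatever
// I render into the texture is already sitting in a pixel buffer I could hand to a
// streaming pipeline.
func makeSharedRenderTarget(device: MTLDevice, width: Int, height: Int)
        -> (CVPixelBuffer, MTLTexture)? {
    var cache: CVMetalTextureCache?
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache)

    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferMetalCompatibilityKey: true] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)

    guard let cache = cache, let buffer = pixelBuffer else { return nil }

    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, buffer, nil,
                                              .bgra8Unorm, width, height, 0, &cvTexture)
    guard let cvTexture = cvTexture,
          let texture = CVMetalTextureGetTexture(cvTexture) else { return nil }
    return (buffer, texture)
}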
Is there a relationship between AVAudio's hostTime (docs link) and the system uptime (docs link)?
I'd like to be able to convert between them, but I'm not sure how they're related, if at all.
Specifically, I'd like the hostTime in terms of systemUptime because several other APIs offer systemUptime timestamps.
Thank you.
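For context, this is the kind of conversion I'm hoping is valid; it assumes hostTime is in mach_absolute_time ticks on the same clock that systemUptime is based on, which is exactly what I'd like someone to confirm:

import AVFoundation
import Foundation

// Assumption to confirm: AVAudioTime's hostTime counts mach_absolute_time ticks,
// and ProcessInfo.systemUptime counts seconds since boot on the same clock.
func uptime(forHostTime hostTime: UInt64) -> TimeInterval {
    AVAudioTime.seconds(forHostTime: hostTime)
}

// Sanity check: these two values should be very close if the assumption holds.
print(uptime(forHostTime: mach_absolute_time()))
print(ProcessInfo.processInfo.systemUptime)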
I’m very interested in trying to have an iOS and watchOS device pair communicate and want to know if it’s possible for the iOS device to get the direction to the watchOS device. (I cannot try this because I don’t have an Apple Watch yet.)
I’m looking at the documentation here and am not sure how to interpret the wording: nearby interaction docs
Nearby Interaction on iOS provides a peer device’s distance and direction, whereas Nearby Interaction on watchOS provides only a peer device's distance.
I’m not sure what is considered the peer.
Let’s assume I’m communicating over a custom server and not using an iOS companion app. Is the above saying that:
A: iOS will send watchOS the distance from the iOS device to the watchOS device, and watchOS will send its distance and direction to the iOS device? (i.e. Nearby Interaction on iOS receives the distance and direction of any other device, regardless of whether it's a phone or a watch, but watchOS only gets distance.)
B: The watch receives distance and direction to the phone, and the phone receives only the distance to the watch.
C: The iOS device only gets the distance to the watchOS device, and the watchOS device only gets the distance to the iOS device, period.
May I have clarification?
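To make the question concrete, here's roughly what I'd check in the delegate, assuming a session is already running with a discovery token exchanged over my own server (a sketch only, untested, since I don't have a watch yet):

import NearbyInteraction

final class PeerTracker: NSObject, NISessionDelegate {
    let session = NISession()

    func start(with peerToken: NIDiscoveryToken) {
        session.delegate = self
        session.run(NINearbyPeerConfiguration(peerToken: peerToken))
    }

    func session(_ session: NISession, didUpdate nearbyObjects: [NINearbyObject]) {
        guard let peer = nearbyObjects.first else { return }
        // distance is Float?, direction is simd_float3?. Which of these is non-nil
        // on the iPhone vs. on the watch is exactly what options A/B/C disagree on.
        print("distance:", peer.distance ?? .nan, "direction:", peer.direction as Any)
    }
}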
A secondary question is how often and how accurately the distance and direction are calculated and sent, but first things first.
I'm looking forward to a reply; it would help very much and inform my decision to develop for watchOS. I have some neat project ideas that require option A or B to be true.
Thanks for your time!
I updated Xcode to Xcode 13 and iPadOS to 15.0.
Now my previously working application using SFSpeechRecognizer fails to start, regardless of whether I'm using on-device mode or not.
I use the delegate approach, and although the plist is set up correctly (authorization succeeds and I get the orange indicator showing the microphone is on), the delegate method speechRecognitionTask(_:didFinishSuccessfully:) is always called with false, and there is no particular error message to go along with it.
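For reference, here is a stripped-down version of my setup (paraphrased, not my exact code), in case something about it is now wrong on iPadOS 15:

import Speech
import AVFoundation

final class Transcriber: NSObject, SFSpeechRecognitionTaskDelegate {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private let audioEngine = AVAudioEngine()
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        request.requiresOnDeviceRecognition = false   // fails either way for me

        let input = audioEngine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            self?.request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer.recognitionTask(with: request, delegate: self)
    }

    // On iPadOS 15 / Xcode 13 this is always called with success == false and no error.
    func speechRecognitionTask(_ task: SFSpeechRecognitionTask,
                               didFinishSuccessfully successfully: Bool) {
        print("didFinishSuccessfully:", successfully, "error:", task.error ?? "none")
    }
}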
I also downloaded the official example from Apple's documentation pages:
SpokenWord SFSpeechRecognition example project page
Unfortunately, it also does not work anymore.
I'm working on a time-sensitive project and don't know where to go from here. How can we troubleshoot this? If it's an issue with Apple's API update or something has changed in the initial setup, I really need to know as soon as possible.
Thanks.
I am seeing that, seemingly after the macOS 12.0.1 update, the 2021 16" MacBook Pro is having widespread issues with the MagSafe charger when the machine is shut off: fast charge causes the charger to loop the connection sound over and over without actually charging.
discussion pages:
https://forums.macrumors.com/threads/2021-macbook-pro-16-magsafe-light-flashing-amber-and-power-chime-repeating-during-charging-when-off.2319925/
https://www.reddit.com/r/macbookpro/comments/qi4i9w/macbook_pro_16_m1_pro_2021_magsafe_3_charge_issue/
https://www.reddit.com/r/macbookpro/comments/qic7t7/magsafe_charging_problem_2021_16_macbook_pro_read/
Most people suspect it's a firmware/OS issue. Is Apple aware of this, and is it being worked on?
Has anyone tried this with the latest 12.1 beta as well?
Xcode 13.4 only provides an SDK for macOS 12.3, according to the release notes. Can I build for macOS 12.4 using the lower point-release SDK? I would not want to update the OS if I could not yet build for it.
Thanks.
I notice new C++23 features such as the multidimensional subscript operator overload mentioned in the Xcode beta release notes, but I don't see a way to enable C++23 in the build flags. What is the correct flag, or is C++23 unusable in Apple Clang?
I wanted to try structured logging with os_log from C++, but I found that it fails to print anything when given a format string and a variable:
eg.
#include <os/log.h>
#include <string>

void example(std::string& str)
{
    // Each call logs an entry, but the text from `str` never appears in the message.
    os_log_info(OS_LOG_DEFAULT, "%s", str.c_str());
    os_log_debug(OS_LOG_DEFAULT, "%s", str.c_str());
    os_log_error(OS_LOG_DEFAULT, "%s", str.c_str());
}
This prints a blank row in the console, with no text.
How is this meant to work with variables? As far as I can tell, it currently only works with string literals and constants.
I'm looking forward to getting this working.
In ARKit for iPad, I could 1) build a mesh on top of the real world and 2) request a people-occlusion map for use with my application, so people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or is it automatic when passthrough is enabled? If it's not possible, is this something that might have a solution in future updates as the platform develops? Being able to combine virtual content and the real world, with people able to interact with the content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
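For reference, this is roughly what I was relying on in iOS ARKit with a custom Metal renderer (shown only for context; the ARFrame-based matte generation is the part I can't find a visionOS equivalent for):

import ARKit
import Metal

// iOS ARKit setup I used for people occlusion (not visionOS).
// ARMatteGenerator consumes an ARFrame, which has no obvious visionOS counterpart.
func configurePeopleOcclusion(session: ARSession, device: MTLDevice) -> ARMatteGenerator? {
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
        return nil
    }
    let config = ARWorldTrackingConfiguration()
    config.frameSemantics.insert(.personSegmentationWithDepth)
    session.run(config)
    return ARMatteGenerator(device: device, matteResolution: .full)
}

// Per frame, in the render loop:
// let matte = matteGenerator.generateMatte(from: frame, commandBuffer: commandBuffer)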