Oh, here's one clue related to the first issue: when you turn the accessory on, you sometimes get a popup saying "Instaflow 360 detected," and if you tap on it, it launches the Insta360 app (which fixes the issue). Somehow the Insta360 app has some specialness associated with this hardware device, such that the app gets advertised in this state.
How is that accomplished? Let's have a fair playing field here! :)
You know you're in the correct state when the Instaflow 360 does the cute "nod its head at you" gesture, but like I said, it shouldn't be necessary to run their app to get back to this state after powering/unpowering the dock accessory.
Here's my own personal track() routine, since dockkit's track() doesn't meet my needs:
var errorIsLarge = false

func track(rectangles: [CGRect], fov: Double) async {
    // No observations: stop moving.
    guard !rectangles.isEmpty else {
        await setVelocity(pitch: 0, yaw: 0, roll: 0)
        return
    }

    // Union all the observation rectangles and work from the center of the result.
    let r = rectangles.reduce(CGRect.null) { $0.union($1) }

    // Horizontal offset from the image center, in [-1, 1], converted to an angle.
    // (.asDegrees / .asRadians are my own little conversion extensions on Double.)
    let xOffset = 2 * (r.midX - 0.5)
    var thetaOffset = xOffset * fov
    print("Tracking: \(thetaOffset.asDegrees) degrees off, midX = \(r.midX)")

    // More than 3 degrees off target: enter the "large error" state and start correcting.
    if abs(thetaOffset) > (3.0).asRadians {
        print("Error is large set true")
        errorIsLarge = true
    }

    // Under 3 degrees and not already correcting: tolerate the error.
    if abs(thetaOffset) < (3.0).asRadians {
        if !errorIsLarge {
            thetaOffset = 0
        }
    }

    // Under 1 degree: leave the "large error" state and stop correcting.
    if abs(thetaOffset) < (1.0).asRadians {
        errorIsLarge = false
        print("error is large is FALSE")
        thetaOffset = 0
    }

    print("Setting velocity to \(-thetaOffset * 3)")
    await setVelocity(pitch: 0, yaw: -thetaOffset * 3, roll: 0)
}
What are we doing here? Whenever we get more than 3 degrees off target, that's enough error to try to correct it. If our error is less than 3 degrees, and we're not in a "large error state", ignore the error. We want smooth tracking.
We leave being in a "large" error state when we reduce the error to less than 1 degree.
We set the velocity to the negative of 3 × thetaOffset, where thetaOffset is how far off target we are; as noted above, at times we want to tolerate small errors. The idea is that if you get close enough, stop trying to micro-correct.
In my case, I'm taking all my observation rectangles, unioning them, and taking the center. However you make observations, at the end of the day, you're going to get some absolute theta error, and that's what you deal with.
Note: I'm not trying to pitch up or down (or, god forbid, roll!) the camera. Just yaw (i.e. rotate around the vertical axis).
For a little while I tried to pass rectangles to dockAccessory.track(), but then I gave up. For whatever reason, it didn't seem to work so well.
Instead, I measure how far the center of my rectangle is from being centered in the image (in angular units), and then call setVelocity (in the "yaw" axis) with a speed that is proportional to this distance. (If you get the sign wrong, you'll figure it out... quickly.)
I observed this gave me much smoother results than letting the camera do its own tracking. I also tried to directly call setOrientation() to make the camera turn to what I think should be the exact amount, but that works poorly. Using the error of how far the center of the rectangle is from the image center, as a measure of how fast to rotate the camera, gives much smoother results.
Note: be very careful that if you don't call this routine very frequently, you have a safeguard in place to countermand the last "setVelocity" call, something like the sketch below. If you don't, and the last thing you told the camera was "rotate that way with speed X", and you never say anything again, your camera will just spin and spin and spin...
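Here's the kind of safeguard I mean, as a rough sketch only (the actor, the noteVelocityCommand name, and the 0.5-second timeout are all just my choices; it relies on the same setVelocity(pitch:yaw:roll:) wrapper the track() routine above uses):

import Foundation

actor VelocityWatchdog {
    private var lastCommand = Date.distantPast

    // Call this every time track() issues a setVelocity command.
    func noteVelocityCommand() {
        lastCommand = Date()
    }

    // Run this once; if track() goes quiet, zero the velocity so the camera
    // doesn't keep spinning on the last command it heard.
    func run(timeout: TimeInterval = 0.5) async {
        while !Task.isCancelled {
            try? await Task.sleep(nanoseconds: 100_000_000)   // check ~10x per second
            if Date().timeIntervalSince(lastCommand) > timeout {
                await setVelocity(pitch: 0, yaw: 0, roll: 0)
            }
        }
    }
}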
I'll let you know. I had actually given up hope that this would work, because I see lots of posts about people getting popups saying "You don't appear to be connected to the internet", which made it seem like if you join an accessory's hotspot, in most cases that becomes your current Wi-Fi connection, and you're going to get that message.
But, if in fact, that's only for accessories that "lie" about their capabilities, maybe it's not entirely hopeless. I guess I'll do the experiment and see what transpires.
I seem to have asked about the same requirement just yesterday:
"The camera's Wi-Fi does not have internet access; it is purely for file transfers (e.g., photos/videos) between the camera and the iOS app and to command camera to do certain actions.
The iOS device should still use mobile data for internet access while connected to the camera's Wi-Fi. The mobile data will be used to upload files to the cloud, as I have a large data plan available for internet use."
Please, if you do figure this out, let me know? (email: davidbaraff @ icloud.com )
As always, you are a life saver.
Thank you so much. I’ve started reading (and will continue), and one thing almost immediately jumped out at me:
It sort of sounds like if my app connected to my wifi-accessory, which has ZERO access to the wider internet, my iPhone is quickly going to figure out “hey, this wifi network leads nowhere”.
And because of that, attempts to make HTTP calls might very well route through WWAN (I always have cellular data on in the circumstances where I connect to this device, and depend on it) and my issue is basically solved immediately?
In which case I’ll have what I want: a direct connection (via which API, I’ll have to see) to my wifi-accessory, and regular URLSession calls to my server will just continue to work (over WWAN)?
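When I run that experiment, the check I plan to use is just an NWPathMonitor pinned to cellular, to see whether a WWAN path stays available while the phone is parked on the accessory's internet-less Wi-Fi. A minimal sketch (the queue label is arbitrary):

import Network

let cellularMonitor = NWPathMonitor(requiredInterfaceType: .cellular)
cellularMonitor.pathUpdateHandler = { path in
    // .satisfied means traffic can still go out over WWAN even while
    // Wi-Fi is associated with the accessory's network.
    print("Cellular path: \(path.status), usesCellular: \(path.usesInterfaceType(.cellular))")
}
cellularMonitor.start(queue: DispatchQueue(label: "cellular.path.monitor"))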
These snippets might be of use to you:
if let captureConnection = videoDataOutput.connection(with: .video) {
    captureConnection.isEnabled = true
    // Ask AVFoundation to attach the camera intrinsic matrix to each sample buffer.
    captureConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
}
[God almighty. Why is it so impossible to format code in this editor?]
This function pulls out the intrinsics and computes the field-of-view, but that was for something I was doing; just the intrinsics matrix here might be what you want:
import CoreMedia
import simd

nonisolated func computeFOV(_ sampleBuffer: CMSampleBuffer) -> Double? {
    // Pull the camera intrinsic matrix attachment off the sample buffer
    // (requires isCameraIntrinsicMatrixDeliveryEnabled, as in the snippet above).
    guard let camData = CMGetAttachment(sampleBuffer,
                                        key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                        attachmentModeOut: nil) as? Data else { return nil }
    let intrinsics: matrix_float3x3? = camData.withUnsafeBytes { pointer in
        if let baseAddress = pointer.baseAddress {
            return baseAddress.assumingMemoryBound(to: matrix_float3x3.self).pointee
        }
        return nil
    }
    guard let intrinsics = intrinsics else { return nil }
    let fx = intrinsics[0][0]          // focal length, in pixels
    let w = 2 * intrinsics[2][0]       // image width ≈ 2 × principal point x
    // atan(w / (2·fx)) is the half-angle from the image center to the edge,
    // which is what track() multiplies by the normalized offset.
    return Double(atan2(w, 2 * fx))
}
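For what it's worth, the call site is shaped roughly like this (a sketch, not my exact code): computeFOV runs per frame in the video data output delegate, assuming the connection snippet above has already enabled intrinsic delivery.

// Inside your AVCaptureVideoDataOutputSampleBufferDelegate:
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let fov = computeFOV(sampleBuffer) else { return }
    // ... run your detector here, then hand its rectangles plus fov to track() ...
}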
Again, sorry for the totally ****** formatting. If someone can tell me how this is supposed to work, I'm all ears. I pasted code and hit "code block" but it didn't help much.
Be sure to pass in the camera intrinsics. Rather than compute them yourself, pull them from the AVCaptureDevice.
I've seen something similar: when the zoom is at its default, it's fine, but as zoom increases, the tracking system doesn't know your view is zoomed in because the intrinsics are wrong. A small offset at low zoom becomes a big offset at higher zoom, and the system tells the accessory to rotate too much. Feedback loop.
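I haven't verified whether the delivered intrinsics already account for digital zoom; if they don't, one hypothetical hand-adjustment (my own helper, purely a sketch) is to shrink the half-angle by the zoom factor, since cropping the sensor by a factor z reduces the visible half-angle to atan(tan(halfFOV) / z):

import AVFoundation

func zoomAdjustedHalfFOV(_ halfFOV: Double, device: AVCaptureDevice) -> Double {
    let z = Double(device.videoZoomFactor)   // digital zoom crops the sensor by z
    return atan(tan(halfFOV) / z)
}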
Thank you for your reply (and, indeed all the advice and monitoring you’ve been doing for the past few years — I‘ve read most of your posts.)
One thing that I just can’t discern, from either the documentation or the discussions, is what happens when I try to send a large message over UDP through the Network framework. Given that there are no constraints on the size of a message in Network, let’s suppose I (stupidly) just send 1 megabyte of Data as a single message.
Does Network:
Say, “forget it!” (i.e. error). That’s just too big.
Break it into large packets (anything way over the MTU) and send it (which would require reassembling it?)
Break it into packets of approximately MTU size (say between 500 bytes and 1500 bytes) which again requires Network to reassemble on the other side.
Suppose I ask to send 30K as a single message:
Does Network just send this as a single large packet? (I assume “yes”)
Or does Network still break it down into more MTU sized chunks, again requiring reassembly by Network on the other side? (I assume “no”)
Last question: I understand fragmentation is “bad”. Does this mean if I opt to use Network and UDP, I should always try to break my messages up into chunks of between 500 and 1500 bytes (depending on what I believe the MTU is)?
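Here's the shape of what I assume the "safe" approach looks like, as a sketch: ask the connection for its maximum datagram size up front and never hand Network a single send bigger than that. (chunkedSend is my own helper name, not anything in the framework.)

import Network

func chunkedSend(_ data: Data, over connection: NWConnection) {
    // For a UDP connection, this is the largest single datagram the stack will accept.
    let maxSize = connection.maximumDatagramSize
    var offset = 0
    while offset < data.count {
        let end = min(offset + maxSize, data.count)
        let chunk = data.subdata(in: offset..<end)
        connection.send(content: chunk, completion: .contentProcessed({ error in
            if let error { print("send failed: \(error)") }
        }))
        offset = end
    }
}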
Thanks for the QUIC note, which I am unfamiliar with. I will read about this, and maybe that’s what I want.
—————————-
My actual scenario is trying to use my iPad to see the camera stream from my iPhone, which is located perhaps 10 to 20 feet away from the iPad, but up on a tripod (perhaps in a dockkit accessory). I want to both see what the iPhone sees and possibly control the positioning of the iPhone in the dockkit accessory.
So yes, it is streaming video, but from extremely close range. Assuming Network forms a peer-to-peer connection in places where I don’t have a Wi-Fi network (say, outside!), this would be fantastic.
This is a bit off-topic, but hoping one of you might reply. I just learned about the Network framework. In an introductory WWDC talk, they show a live-streaming video example, but the code isn’t available (sadly).
There’s a reference to “breaking the frame up into blocks” because of delivery over UDP: I assume this is because of message lengths?
At any rate, if someone can give me a quick idea of the strategy of sending video frames from device to device, over UDP (which must assume some things can get lost), I’d greatly appreciate it. I assume UDP has message length constraints in Network, but I don‘t see them mentioned.
Surely I can’t just send an entire 2K jpeg image (i.e. 1920x1080 pixels) in one UDP message. Or can I?
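My working assumption for "breaking the frame up into blocks" is a tiny homemade header per packet (frame number, chunk index, chunk count), so the receiver can reassemble a frame, or throw it away if any chunk goes missing. Purely a sketch of my own, nothing from Network:

import Foundation

struct ChunkHeader {
    var frameNumber: UInt32
    var chunkIndex: UInt16
    var chunkCount: UInt16

    var encoded: Data {
        var d = Data()
        withUnsafeBytes(of: frameNumber.bigEndian) { d.append(contentsOf: $0) }
        withUnsafeBytes(of: chunkIndex.bigEndian)  { d.append(contentsOf: $0) }
        withUnsafeBytes(of: chunkCount.bigEndian)  { d.append(contentsOf: $0) }
        return d
    }
}

// Split one encoded frame (say, a JPEG) into header-tagged payloads,
// each small enough for a single UDP datagram.
func packets(for frame: Data, frameNumber: UInt32, payloadSize: Int = 1200) -> [Data] {
    let count = (frame.count + payloadSize - 1) / payloadSize
    return (0..<count).map { i in
        let start = i * payloadSize
        let end = min(start + payloadSize, frame.count)
        let header = ChunkHeader(frameNumber: frameNumber,
                                 chunkIndex: UInt16(i),
                                 chunkCount: UInt16(count))
        return header.encoded + frame.subdata(in: start..<end)
    }
}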
OK, it's almost two years later: any signs/indications that an async API might be showing up in the Network framework? (I've just learned about this framework now, so obviously a little late to the party, and wow it looks exciting.)
But async/await makes things so easy, I was really hoping it would be part of the framework before I start coding with it.
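In the meantime, I'm assuming I'd just wrap the callback-based calls myself with continuations. A minimal sketch for receiveMessage (the receiveMessageAsync name is mine, and error handling is kept to the bare minimum):

import Network

extension NWConnection {
    func receiveMessageAsync() async throws -> Data? {
        try await withCheckedThrowingContinuation { continuation in
            self.receiveMessage { data, _, _, error in
                if let error {
                    continuation.resume(throwing: error)
                } else {
                    continuation.resume(returning: data)
                }
            }
        }
    }
}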