Thank you so much! That's perfect. (I should've provided more details, but it seems you've inferred the correct info.)
As an update: I felt I got a lot out of the chat I had today. Thanks again!
It says "feedback not found."
Is it possible to programmatically send data from the iOS device to a companion Mac app over a network, and have the Mac send the processed model back to the iOS device? I imagine such a networked solution might be a decent workaround for now.
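To be concrete about what I'm imagining, here's a rough sketch using Network.framework with Bonjour discovery; the service type, the names, and the process(_:) step are all placeholders, not anything from an actual sample:

```swift
import Network

// Placeholder for whatever model-building work happens on the Mac.
func process(_ data: Data) -> Data { data }

// ---- Mac side: advertise a Bonjour service and accept a connection. ----
// "_modelxfer._tcp" and "CompanionMac" are placeholder names for this sketch.
func startMacListener() throws -> NWListener {
    let listener = try NWListener(using: .tcp)
    listener.service = NWListener.Service(name: "CompanionMac", type: "_modelxfer._tcp")
    listener.newConnectionHandler = { connection in
        connection.start(queue: .main)
        // Receive the raw payload from the iOS device (a real version needs
        // proper message framing instead of a single fixed-size receive).
        connection.receive(minimumIncompleteLength: 1, maximumLength: 10_000_000) { data, _, _, error in
            guard let data = data, error == nil else { return }
            let processedModel = process(data)
            // Send the processed model back over the same connection.
            connection.send(content: processedModel, completion: .contentProcessed { _ in })
        }
    }
    listener.start(queue: .main)
    return listener
}

// ---- iOS side: connect to the advertised service and send the payload. ----
func sendFromiOS(_ payload: Data) -> NWConnection {
    let connection = NWConnection(
        to: .service(name: "CompanionMac", type: "_modelxfer._tcp", domain: "local", interface: nil),
        using: .tcp)
    connection.start(queue: .main)
    connection.send(content: payload, completion: .contentProcessed { _ in })
    // The reply (the processed model) would be read back with connection.receive(...).
    return connection
}
```

MultipeerConnectivity would be another option if built-in peer discovery is preferable to setting up Bonjour manually.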
Oh I see. Well, I hope the sample code is released. Do we know the data size limitations, by the way?
Do you know if it's possible to start a call in the background after the application has started, assuming we add some application logic to attempt to re-initialize at runtime?
Also, I didn't realize this application was so SwiftUI-heavy. Admittedly, I'm not sure I understand it. It looks like the drawing primitives are all SwiftUI, whereas I roll my own Metal renderer. Is it possible to send non-SwiftUI data if I make some sort of encodable/decodable representation of my user-input data?
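To clarify what I mean by an encodable/decodable representation, here's a rough sketch; the struct and field names are just guesses at what my Metal renderer would need, not anything from the sample project:

```swift
import Foundation

// A plain Codable snapshot of one user-input stroke, independent of SwiftUI.
// The type and field names are placeholders for whatever my renderer consumes.
struct StrokeMessage: Codable {
    struct Point: Codable {
        var x: Float
        var y: Float
        var pressure: Float
    }
    var strokeID: UUID
    var color: [Float]        // RGBA components in 0...1
    var points: [Point]
}

// Encode before handing the data to whatever transport the session exposes,
// then decode on the receiving side and feed it to the Metal renderer.
let message = StrokeMessage(
    strokeID: UUID(),
    color: [1, 0, 0, 1],
    points: [.init(x: 0.1, y: 0.2, pressure: 0.8)])

do {
    let encoded = try JSONEncoder().encode(message)
    let decoded = try JSONDecoder().decode(StrokeMessage.self, from: encoded)
    print("Round-tripped \(decoded.points.count) point(s)")
} catch {
    print("Codable round trip failed:", error)
}
```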
I am trying to figure out how to convert things.
Thanks for your reply.
The device has been connected to the internet via Wi-Fi for nearly 24 hours, so what you're suggesting seems a little strange to me. I have filed a bug report:
FB9649632 (Potential Major Speech Framework Bug: SFSpeechRecognizer fails to start)
I did not have an issue like this with iPadOS 14, but I assume there might've been much bigger changes for 15 considering the additional offline on-device support.
In iOS 14, Siri and dictation didn’t need to be on for speech recognition to work. Do you mean that turning either off CAUSED the issue (not fixed it)?
This seems really strange since many people don’t want the global Hey Siri functionality.
Which settings specifically are you looking at? @adadas
Do you know if this uses on-device speech recognition or the cloud version? I would assume the former, to avoid limitations.
EDIT: It seems somewhat buggy. Oftentimes a speech result will contain a duplicate of itself concatenated:
Speak "This is a test" -> result -> "This is a test This is a test."
Yes, I agree. We need the API to prompt the user to make this change, either globally, or selectively for the application.
That's unfortunate. I hope they address this. Otherwise it's unusable.
I never quite figured out how to capture data for something like this. There’s no crash log.
I'm intensely curious what the issue was. Did they accidentally put something on the main UI queue?
Then you should probably report it, since according to the patch notes they believe it's resolved.