How does Apple transfer/store Siri (WebSpeechAPI) voice data?

I'm trying to find specific information on how Apple transfers and stores the voice data sent for speech recognition in Safari as part of the WebSpeechAPI. All I keep finding are generic privacy documents that don't provide any detail. Can anyone point me toward an explanation of how customer data is used?

Side note: how were you able to get it to work in the first place? I'm running a localhost python3 server for testing, and I can't get the page to request microphone access in Safari, only in Chrome.
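
Here's roughly what I'm testing with, in case it matters. The #listen button id is just my own markup, and my understanding (unconfirmed) is that Safari wants a secure context and a user gesture before it will show the prompt:

    // Start Web Speech API recognition from a click handler.
    // Assumes a secure context (https://, or possibly http://localhost);
    // Safari exposes the API under the webkit prefix.
    const SpeechRecognitionImpl =
        (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

    const button = document.querySelector<HTMLButtonElement>('#listen');

    if (!SpeechRecognitionImpl) {
        console.log('Web Speech API not available in this browser.');
    } else {
        button?.addEventListener('click', () => {
            const recognition = new SpeechRecognitionImpl();
            recognition.lang = 'en-US';
            recognition.onresult = (event: any) => {
                console.log('Transcript:', event.results[0][0].transcript);
            };
            recognition.onerror = (event: any) => {
                console.log('Recognition error:', event.error);
            };
            recognition.start(); // the microphone prompt should appear here
        });
    }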

Safari now supports the WebSpeechAPI in the latest version; it's listed in the Safari release notes. Personally I'm using the react-speech-recognition npm package, which works in both Chrome and Safari. The caveat is that you need Siri enabled on your Mac or iPhone.
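
For reference, this is roughly how I'm wiring it up; a minimal sketch based on the package's hook API, where Dictaphone is just my own component name:

    // Minimal sketch using react-speech-recognition's hook API.
    import React from 'react';
    import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

    const Dictaphone: React.FC = () => {
        const { transcript, listening, browserSupportsSpeechRecognition } =
            useSpeechRecognition();

        // In my experience this is false in Safari until Siri is enabled.
        if (!browserSupportsSpeechRecognition) {
            return <span>This browser doesn't support speech recognition.</span>;
        }

        return (
            <div>
                <p>Microphone: {listening ? 'on' : 'off'}</p>
                <button onClick={() => SpeechRecognition.startListening()}>Start</button>
                <button onClick={() => SpeechRecognition.stopListening()}>Stop</button>
                <p>{transcript}</p>
            </div>
        );
    };

    export default Dictaphone;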
