I have a Swift app that records audio in chunks, writing multiple M4A files of roughly one minute each. I would like to go through those files and detect silence, or at least the quietest level.
While I am able to read a file into a buffer, my problem is deciphering it. Searching only turns up "audio players" rather than anything that describes the header and the data.
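For reference, here is a minimal sketch of how I am reading one chunk (the file name is a placeholder); I assume AVAudioFile decodes the M4A into PCM samples, but I am not sure what to do with the values once I have them:

```swift
import AVFoundation

// Minimal sketch: read one recorded chunk into a PCM buffer.
// "chunk001.m4a" is a placeholder for one of the recorded files.
let url = URL(fileURLWithPath: "chunk001.m4a")

do {
    let file = try AVAudioFile(forReading: url)        // decodes the M4A container
    let format = file.processingFormat                 // uncompressed float PCM
    let frameCount = AVAudioFrameCount(file.length)

    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: frameCount) else {
        fatalError("Could not allocate buffer")
    }
    try file.read(into: buffer)

    // floatChannelData?[0] should hold the raw samples for channel 0,
    // but I do not know how to turn these into a silence/level measurement.
    if let samples = buffer.floatChannelData?[0] {
        print("First sample:", samples[0], "of", buffer.frameLength, "frames")
    }
} catch {
    print("Failed to read file:", error)
}
```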
Where can I find out what to look for? Should I be converting to a WAV file instead? Even then, I cannot find a tool or a site that explains how to interpret what I am reading.
Obviously this is possible, since Siri knows when you've stopped speaking. I'm just trying to find the key.