ShazamKit looks like it would be a great way to sync content to a live performance. Could I record audio from a live performance and use that in the catalog to synchronize content for future performances? Or does the sound have to be exact? Would it be thrown off if the singer had a cold, or the rhythm guitar player felt like using a different guitar tonight?
If not, could you suggest another approach for doing something like this?
The Shazam algorithm works on exact audio matching, with a certain degree of tolerance for frequency and time differences between the reference and query audio, so it wouldn't be a good fit for recognizing live performances: a different vocal delivery, instrument tone, or tempo produces a different acoustic fingerprint, and the match would fail. A number of published papers detail techniques for recognizing live or cover versions of recorded songs; my advice would be to study those and select the most appropriate one for your use case.
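For reference, this is roughly what the custom-catalog approach looks like in ShazamKit, so you can judge its fit: you register a signature generated from a reference recording in an SHCustomCatalog, then stream microphone buffers into an SHSession. It is a sketch, not a full app; the cue title and the source of the signature data are placeholders, and a re-performed song will generally not match the stored signature for the reasons above.

```swift
import ShazamKit
import AVFAudio

// Build a custom catalog from a reference recording's signature.
// The signature data would come from SHSignatureGenerator run over
// the original recording; the media item carries your cue metadata.
func makeCatalog(from signatureData: Data) throws -> SHCustomCatalog {
    let catalog = SHCustomCatalog()
    let signature = try SHSignature(dataRepresentation: signatureData)
    let mediaItem = SHMediaItem(properties: [
        .title: "Act 1 Opening Number" // placeholder cue name
    ])
    try catalog.addReferenceSignature(signature, representing: [mediaItem])
    return catalog
}

final class Matcher: NSObject, SHSessionDelegate {
    let session: SHSession

    init(catalog: SHCustomCatalog) {
        session = SHSession(catalog: catalog)
        super.init()
        session.delegate = self
    }

    // Feed microphone buffers here, e.g. from an AVAudioEngine input tap.
    func process(buffer: AVAudioPCMBuffer, at time: AVAudioTime?) {
        session.matchStreamingBuffer(buffer, at: time)
    }

    func session(_ session: SHSession, didFind match: SHMatch) {
        // The matched media item carries the metadata registered above,
        // plus predictedCurrentMatchOffset for synchronizing content.
        if let item = match.mediaItems.first {
            print("Matched: \(item.title ?? "untitled") at offset \(item.predictedCurrentMatchOffset)")
        }
    }

    func session(_ session: SHSession,
                 didNotFindMatchFor signature: SHSignature,
                 error: Error?) {
        // Expected outcome when tonight's performance diverges
        // acoustically from the reference recording.
    }
}
```

This works well for syncing against playback of the same recording (or a show with a fixed backing track), which is the scenario ShazamKit is designed for; live re-performances call for the cover-song detection techniques mentioned above.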