I am trying to build a "Sing That Tune" game. The flow looks like this:
- The app will tell the user to sing "Row, Row, Row Your Boat."
- The user will sing "Row, Row, Row Your Boat" into the microphone.
- If the user's melody is close enough to the actual melody, the game is won.
My question: since I'm dealing with live audio that might be "correct" but not an exact match, is the better strategy to use ShazamKit with an SHCustomCatalog, or to use Create ML and sound classification? I know a Create ML model can learn the difference between a baby and a fire truck, but can it learn the difference between a good attempt and a wrong attempt at a sung melody?
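For context, here is roughly what I had in mind for the ShazamKit route: pre-build a signature from a reference recording of the melody, add it to an SHCustomCatalog, and stream microphone audio into an SHSession. This is only a sketch of my plan, not working code; the signature file name and media item title are placeholders, and it assumes microphone permission has already been granted.

```swift
import AVFoundation
import ShazamKit

// Sketch of the ShazamKit approach: match live mic audio against a
// custom catalog holding one reference signature of the target melody.
// "rowYourBoat.shazamsignature" is a placeholder for a signature file
// generated ahead of time (e.g. with SHSignatureGenerator).
final class TuneMatcher: NSObject, SHSessionDelegate {
    private let engine = AVAudioEngine()
    private var session: SHSession?

    func start() throws {
        // Load the pre-built reference signature from the app bundle.
        let signatureURL = Bundle.main.url(forResource: "rowYourBoat",
                                           withExtension: "shazamsignature")!
        let signature = try SHSignature(dataRepresentation: Data(contentsOf: signatureURL))

        // Build a custom catalog containing the reference melody.
        let catalog = SHCustomCatalog()
        try catalog.addReferenceSignature(
            signature,
            representing: [SHMediaItem(properties: [.title: "Row, Row, Row Your Boat"])]
        )

        let session = SHSession(catalog: catalog)
        session.delegate = self
        self.session = session

        // Tap the microphone (permission assumed granted) and stream
        // buffers into the session for matching.
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, time in
            self?.session?.matchStreamingBuffer(buffer, at: time)
        }
        try engine.start()
    }

    // Called when the sung audio is close enough to a catalog entry.
    func session(_ session: SHSession, didFind match: SHMatch) {
        print("Matched:", match.mediaItems.first?.title ?? "unknown")
    }

    func session(_ session: SHSession, didNotFindMatchFor signature: SHSignature, error: Error?) {
        print("No match for this attempt")
    }
}
```

My concern with this approach is whether the matching is forgiving enough for a melody sung in a different voice, key, or tempo than the reference recording.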
Thank you,
Eli