Yalp is an application for musicians: just search for a song, get its chords and separate its tracks. Additionally, you can improve your skills by learning from online bite-sized lessons.
Every month, hundreds of thousands of musicians generate millions of pageviews to play and learn music through the www.yalp.io website.
We started developing Yalp in 2016 as a side-project, enjoying the whole process. A new challenge presented itself at every step:
- Is it possible to transcribe chords directly from audio?
- What about aligning them with the song to play along?
- If one is playing to learn music, or just for fun, is it possible to recommend the right next song to play?
To answer these questions, we decided to leverage machine learning, in particular deep learning. A recurrent-convolutional architecture allows us to transcribe chords directly from MP3s, while downbeats are computed with a recurrent neural network.
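Our production system relies on trained deep models, but the classical baseline behind chord transcription is easy to sketch: fold the audio spectrum into a 12-bin chroma vector and match it against binary triad templates. The snippet below is a toy illustration of that idea, not Yalp's actual pipeline; every function name and parameter here is illustrative.

```python
import numpy as np

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def chroma_from_audio(signal, sr):
    """Fold the magnitude spectrum into 12 pitch classes."""
    windowed = signal * np.hanning(len(signal))
    mag = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    chroma = np.zeros(12)
    for f, m in zip(freqs, mag):
        if 100.0 <= f <= 2000.0:  # ignore rumble and very high partials
            midi = 69 + 12 * np.log2(f / 440.0)  # frequency -> MIDI pitch
            chroma[int(round(float(midi))) % 12] += m
    return chroma / (np.linalg.norm(chroma) + 1e-9)

def chord_templates():
    """Binary major/minor triad templates for all 12 roots."""
    templates = {}
    for root in range(12):
        for name, intervals in (("maj", (0, 4, 7)), ("min", (0, 3, 7))):
            t = np.zeros(12)
            t[[(root + i) % 12 for i in intervals]] = 1.0
            templates[f"{PITCH_CLASSES[root]}:{name}"] = t / np.linalg.norm(t)
    return templates

def estimate_chord(signal, sr):
    """Pick the template most similar to the signal's chroma."""
    chroma = chroma_from_audio(signal, sr)
    return max(chord_templates().items(),
               key=lambda kv: float(chroma @ kv[1]))[0]

# Synthesize one second of a C-major triad (C4, E4, G4) and label it.
sr = 22050
t = np.arange(sr) / sr
triad = sum(np.sin(2 * np.pi * f * t) for f in (261.63, 329.63, 392.0))
print(estimate_chord(triad, sr))  # → C:maj
```

A deep model replaces the hand-built templates with learned features and adds temporal context, which is what makes it robust on real, noisy recordings.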
Well, we never managed to stop: over time, we kept improving the quality of the transcription and adding cool new tools. Our new virtual (de)mixer splits a song into single-instrument tracks. This way, for example, you can mute the singer and sing along with your favorite band, or turn a song into a never-ending guitar solo!
What is so cool about this? Well, it works for any mp3.
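The (de)mixer itself is a neural model, but the core mechanism shared by most separators is masking: estimate which parts of the spectrum belong to each source and zero out the rest. As a toy illustration only (not our real separator), here is an ideal binary frequency mask splitting a synthetic two-tone mix; the cutoff and signals are made up for the example.

```python
import numpy as np

def separate_by_mask(mix, sr, cutoff_hz):
    """Split a mono signal into low/high components with a binary
    frequency mask -- the masking idea neural separators apply per
    time-frequency bin, reduced here to a single FFT frame."""
    spectrum = np.fft.rfft(mix)
    freqs = np.fft.rfftfreq(len(mix), d=1.0 / sr)
    low_mask = freqs < cutoff_hz
    low = np.fft.irfft(spectrum * low_mask, n=len(mix))
    high = np.fft.irfft(spectrum * ~low_mask, n=len(mix))
    return low, high

sr = 22050
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 110.0 * t)    # stand-in for a bass track
lead = np.sin(2 * np.pi * 1320.0 * t)   # stand-in for a lead track
low, high = separate_by_mask(bass + lead, sr, cutoff_hz=500.0)
```

A real separator learns a soft mask per instrument from data instead of a fixed cutoff, which is why it can pull apart sources that overlap in frequency, like voice and guitar.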
We are now focusing on user adoption and growth. On the AI side, we are working toward a fresh, very general recommendation engine, to always suggest the right song to play at the right time!