I do like a nice music-and-tech project, and this one is pretty special. Called the NSynth Super, this tablet-with-controls device is powered by a Raspberry Pi and accepts MIDI input, which it feeds through an openFrameworks app. The app takes the incoming MIDI notes and shapes the resulting sound according to whatever the user does with the controls. The sounds themselves come from “machine learning”: they are samples generated by Google’s NSynth algorithm, which the device blends and plays back. It’s an open-source project from Magenta, a research group within Google, and all the files are available on GitHub.
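To give a feel for the blending idea: the device’s touch surface lets you morph between four source sounds at once. Here’s a minimal sketch of how a bilinear blend like that could work. To be clear, this is an illustration of the general technique, not Magenta’s actual code, and the corner names and buffer format are made up for the example.

```python
# Illustrative sketch only -- not from the NSynth Super repo.
# Each corner of the touch surface holds one pre-generated sound; a touch
# position (x, y) in the unit square produces one blend weight per corner.

def corner_weights(x: float, y: float) -> dict:
    """Bilinear blend weights for the four corner sounds at touch point (x, y)."""
    if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
        raise ValueError("touch position must lie in the unit square")
    return {
        "top_left":     (1 - x) * y,
        "top_right":    x * y,
        "bottom_left":  (1 - x) * (1 - y),
        "bottom_right": x * (1 - y),
    }

def mix_samples(corners: dict, x: float, y: float) -> list:
    """Blend four equal-length sample buffers using the weights above."""
    w = corner_weights(x, y)
    length = len(next(iter(corners.values())))
    return [
        sum(w[name] * corners[name][i] for name in corners)
        for i in range(length)
    ]
```

At the centre of the square, `corner_weights(0.5, 0.5)` gives each corner an equal 0.25 share, and dragging toward a corner smoothly fades the other three out.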
Obviously, the actual production of the case is down to you, so you might end up with something closer to this:
or even one of these:
You can see a demo of Google’s version below: