Sheet music contains a lot of information. A sighted reader can choose which aspect they are interested in and disregard the rest: just reading the tune, concentrating only on the rhythm, or only on the pitch, and so on. This app aims to describe sheet music using verbal descriptions and audio, while allowing the information to be filtered for the specific requirements of the musician, which will most likely change as they progress through each phrase.
The aim of this app is to make sheet music and learning interactive. It also aims to be hands-free, using a pedal (which emulates a USB keyboard) and speech recognition.
This app uses a MIDI file and can play or describe a specific bar or note, or a range of them. The user can alter the speed, arpeggiate the chords, add a metronome, split the left and right hands, and solo or remove the top or bottom notes within a hand.
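The app's internals are not published, but as a rough sketch of what arpeggiating a chord might look like, the simultaneous notes of a chord can be offset so they sound one after another. The data model below (pitch, start time and duration in beats) and the quarter-beat spacing are illustrative assumptions, not the app's actual implementation.

```python
# Hypothetical sketch: spread a chord's notes so they sound in
# sequence (arpeggiated) rather than simultaneously.
# Each note is a (midi_pitch, start_beat, duration_beats) tuple.

def arpeggiate(chord, spacing=0.25):
    """Delay each note by a growing offset, lowest pitch first,
    shortening durations so notes still end roughly together."""
    result = []
    for i, (pitch, start, dur) in enumerate(sorted(chord, key=lambda n: n[0])):
        offset = i * spacing
        result.append((pitch, start + offset, max(dur - offset, spacing)))
    return result

# A C major triad (C4, E4, G4) starting at beat 0, one beat long:
chord = [(60, 0.0, 1.0), (64, 0.0, 1.0), (67, 0.0, 1.0)]
print(arpeggiate(chord))
```

The same note list could then be rendered to audio or fed to a MIDI output as usual; only the onset times change.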
The music can be described while it is being played: the notes are announced one bar before they are played.
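The "one bar ahead" behaviour can be sketched as a simple schedule: while bar n plays, the description of bar n+1 is spoken, with the first bar announced before playback starts. This is a minimal illustration of the idea, not the app's actual scheduling code.

```python
# Hypothetical sketch of announcing the music one bar ahead of playback.
# Each step is a (bar_to_play, bar_to_announce) pair; None means
# nothing to play (before the music starts) or nothing left to announce.

def one_bar_ahead(bars):
    """Pair each bar's playback with the announcement of the next bar."""
    steps = [(None, bars[0])]  # announce bar 1 before any playback
    for i, bar in enumerate(bars):
        announce = bars[i + 1] if i + 1 < len(bars) else None
        steps.append((bar, announce))
    return steps

print(one_bar_ahead(["bar 1", "bar 2", "bar 3"]))
```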
By default the app responds to a key press of 'b' (the key the pedal sends); alternatively, the on-screen buttons can be used. The key can be changed from Menu -> Show options.
The screen contains three buttons that simulate pedal presses. The app waits for speech input after the top button (pedal 1) is pressed; there are buttons for long press and double press too. Under those three buttons is a check box that controls which speech recognition results are shown: unchecked shows just the first result (in blue); ticked shows the other results too (in green).
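Since the pedal appears to the system as repeated key presses, distinguishing short, long and double presses comes down to timing. The sketch below shows one plausible way to classify press events; the thresholds and the event representation are illustrative assumptions, not values taken from the app.

```python
# Hypothetical sketch: classify pedal presses (seen as 'b' key
# down/up events) into short, long and double presses by timing.
# Thresholds are illustrative assumptions.

LONG_PRESS_SECS = 0.6   # held longer than this -> long press
DOUBLE_GAP_SECS = 0.4   # next press starting sooner than this -> double

def classify_presses(presses):
    """presses: ordered list of (down_time, up_time) in seconds.
    Returns a list of 'short', 'long' and 'double' events."""
    events = []
    i = 0
    while i < len(presses):
        down, up = presses[i]
        if up - down >= LONG_PRESS_SECS:
            events.append("long")
            i += 1
        elif (i + 1 < len(presses)
              and presses[i + 1][0] - up < DOUBLE_GAP_SECS):
            events.append("double")
            i += 2  # consume both presses of the pair
        else:
            events.append("short")
            i += 1
    return events
```

In practice the app would map each classified event to an action (for example, a short press to repeat the current bar), but those mappings are configurable and not assumed here.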
The button underneath, labelled Recognise Speech, also waits for speech input, but it only displays the results; they are not actually processed, so the music and description will not respond.
The rest of the screen is a scrollable area which displays the results from speech recognition (in blue and green), along with any description of the music the app provides (shown in red).
A full list of commands that speech recognition responds to can be found at http://www.marchantpeter.co.uk/talking-sheet-music.php
The way in which the app interprets speech recognition results is by no means fully developed, so it may not be very accurate.
The reason for publishing this app in such an unfinished state is to demonstrate its potential and to get feedback on whether it might be beneficial and what direction to take next.