I have a creative vision of controlling evolving FM synth sounds via voice while improvising in the generated soundscapes on guitar (using my hands). The guitar will be tuned to an A≠440 reference (the same relative intervals between notes, only shifted). I wish to sing in reference to that A≠440 guitar, have the approximate scale degree of each sung note correctly interpreted (with accuracy windows symmetric about the pitches of a Western equal-temperament scale built on the same A≠440 reference), and have the corresponding MIDI note numbers sent to an FM synth that acts on those triggers, for live performance (my primary intention is performance, not track generation).
Is this possible?
I do not currently have Dubler, so I cannot test whether it performs as desired. It seems capable of running both standalone and as a DAW plugin (please confirm). I understand its output is MIDI; I would like to know whether, when used as a DAW plugin, it judges a note's deviation from scale degrees according to scale degrees derived from the master tuning of that DAW. For example, were one to change the master tuning of Ableton Live Lite to, say, A=447 Hz, would Dubler judge a pitch of 447 Hz as "A! spot on!" or as "A, 7 Hz sharp"? I ask so that, e.g., singing at A=447 along with a guitar pitched accordingly, one would have the same leeway on either side of each note for Dubler to correctly interpret the sung scale degree as one would have playing at A=440 (the conventional reference) with the global tuning set to match.
I heard that Dubler functions like a MIDI keyboard in that it works with MIDI note numbers. Unlike a keyboard, however, Dubler must first translate sung pitch (we don't sing in MIDI integers!) into those numbers, which it then outputs. So, provided Dubler does run standalone, does it offer a way to translate frequency to MIDI output relative to an adjustable reference, as in the A=447 example above?
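To make the mapping I'm asking about concrete, here is a minimal sketch of pitch-to-MIDI conversion with an adjustable A4 reference. This is just my own illustration of the desired behaviour, not a claim about how Dubler is actually implemented:

```python
import math

def freq_to_midi(freq_hz, ref_a_hz=440.0):
    """Map a frequency to the nearest 12-TET MIDI note number,
    relative to a chosen reference frequency for A4 (MIDI note 69).

    Returns (nearest_midi_note, deviation_in_cents), where the
    deviation window is symmetric: +/-50 cents around each note.
    """
    # Exact (fractional) MIDI value under the chosen reference
    exact = 69 + 12 * math.log2(freq_hz / ref_a_hz)
    nearest = round(exact)
    cents_off = (exact - nearest) * 100
    return nearest, cents_off

# Judged against A=440, a sung 447 Hz reads as A4, roughly 27 cents sharp;
# judged against A=447, the same pitch is A4 exactly.
print(freq_to_midi(447.0, ref_a_hz=440.0))
print(freq_to_midi(447.0, ref_a_hz=447.0))
```

In other words, what I hope for is that Dubler's "windows of accuracy" are centred on scale degrees derived from the active reference tuning (the `ref_a_hz` parameter in this sketch), rather than hard-wired to A=440.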
I appreciate your taking the time and energy to read and consider this inquiry.