After creating my own IO library, iota, whose audio features are easier to interoperate with D code, I decided to finish the phase modulation synthesizer for my game engine. (Phase modulation is often sold as "frequency modulation" or "phase distortion" by some brands, with minor tweaks to the underlying math to avoid patent infringement.) It uses simplified math with fixed-length wavetables (which can also be user supplied), highly configurable envelopes, and offers up to 16-voice polyphony when all channels run in 2-operator mode (8 voices when channels are combined). It can even produce resonant waveforms with some tricks, such as modulating a sine wave with a triangle wave.
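The 2-operator idea above can be sketched in a few lines of D. This is a minimal illustration under my own assumptions (the names, normalization, and table handling here are hypothetical, not the engine's actual API): the modulator's output, scaled by a modulation index, offsets where the carrier reads from a fixed-length wavetable.

```d
import std.math : abs, sin, PI;

enum TABLE_LEN = 1024;

// Precomputed fixed-length sine wavetable.
double[TABLE_LEN] makeSineTable() {
    double[TABLE_LEN] table;
    foreach (i, ref s; table)
        s = sin(2.0 * PI * i / TABLE_LEN);
    return table;
}

// One sample of 2-operator phase modulation. Phases are normalized
// to [0, 1); the modulator's output shifts the carrier's table position.
double pmSample(const ref double[TABLE_LEN] table,
        double carrierPhase, double modPhase, double modIndex) {
    immutable modOut = table[cast(size_t)(modPhase * TABLE_LEN) % TABLE_LEN];
    double pos = carrierPhase + modIndex * modOut;
    pos -= cast(long) pos;      // wrap into (-1, 1)
    if (pos < 0) pos += 1.0;    // then into [0, 1)
    return table[cast(size_t)(pos * TABLE_LEN) % TABLE_LEN];
}
```

Swapping the modulator's table for a triangle wave is exactly the trick mentioned above for resonant-sounding waveforms; nothing else in the formula changes.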
Why write a synth when you can just play MP3 and WAV files?
- The very same teacher for whose class I made the original (and very bad) version of the engine suggested it to me at the time.
- It seemed like an easy undertaking, and in some ways it was, except that I had to work on multiple other things at the same time. The hardest part was fine-tuning the math in this kind of language.
- This way, adaptive soundtracks are easier to implement if I, or anyone else using the engine, decides to do them.
Future plans include more testing and bug fixing, implementing the functions I didn't have time for, and porting it to VST as well, since I don't have the capacity to write a fully featured DAW (someone, maybe?). The VST version would have a real polyphonic mode instead of the current one-voice-per-channel solution. Maybe I'll even create a more upmarket version that is better suited for musicians, if there's demand.
Also, I'm once again looking for members for a gamedev team, but now I have some experience, and I plan to "go commercial" with a co-op later on.