A new mindset: Sound Generation and Performance Generation should be totally separate
To build a richer and more flexible music creation ecosystem, a fundamental change needs to happen in how our "instruments" are designed.
The near future will bring many exciting capabilities in "performance" or "play" assistive technology that is far more "aware" of musical context. To apply these new capabilities well, it is imperative that they remain independent of sound generation.
These emerging assistive capabilities will enhance our ability to control sound-generating parameters in flexible ways, as much or as little as we like.
But how sounds are generated and how they are then performed must remain separate domains to offer the best user experience.
Currently, too much "performance" is baked into our sound generation. Triggering recorded musical phrases and articulations is too limiting.
Modeling-based sound generation will offer a much more flexible approach to musical expression than recorded articulations.
And musically aware assistive AI will offer a much more flexible approach to building arrangements than recorded phrases.
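To make the separation concrete, here is a minimal sketch in Python. All names (ControlEvent, SoundEngine, PlayAssist) are hypothetical and not from any existing product; the point is only that the performance layer emits neutral control events and knows nothing about how the engine underneath produces sound, so either side can evolve on its own.

```python
# Minimal sketch (hypothetical names throughout) of keeping performance
# logic and sound generation in separate domains. The performance layer
# only emits generic control events; the engine maps them to sound
# however it likes (modeled, sampled, or anything else).

from dataclasses import dataclass
from typing import Protocol

@dataclass
class ControlEvent:
    """A neutral performance gesture: what to play, not how it sounds."""
    pitch: float        # e.g. a MIDI-style note number, possibly fractional
    intensity: float    # 0.0 to 1.0, interpreted by the engine as it sees fit
    time: float         # seconds from the start of the performance

class SoundEngine(Protocol):
    """Any sound source that accepts neutral control events."""
    def render(self, events: list[ControlEvent]) -> None: ...

class PlayAssist:
    """Stand-in for context-aware performance assistance.

    It builds or reshapes a stream of ControlEvents; swapping the
    SoundEngine underneath never changes this code.
    """
    def perform(self, engine: SoundEngine, events: list[ControlEvent]) -> None:
        shaped = sorted(events, key=lambda e: e.time)  # trivial "assist" step
        engine.render(shaped)
```

In this shape, a smarter assistant only changes how the event stream is generated or reshaped, while a better sound model only changes what render does with it.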
Comments
-
Well, in some ways I agree and in other ways disagree.
But I respect your philosophy.
-
Thanks for giving it some shared thought.
So much potential for how "Play Assist" could evolve with the right framework.
-
Yeah, I agree with that.