A new mindset: Sound Generation and Performance Generation should be totally separate

nightjar (Member, Posts: 1,311, Guru)

To build a richer and more flexible music creation ecosystem, a fundamental change needs to happen in how our "instruments" are designed.

The near future will bring many exciting capabilities in "performance" or "play" assistive technology that is much more "aware" of musical context. To apply these new capabilities well, it is imperative that they remain independent of sound generation.

These emerging assistive abilities will enhance our human ability to control sound-generating parameters in flexible ways: with as much or as little assistance as we like.

But how sounds are generated and how they are then performed must be separate domains to offer the best user experience.
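The separation argued for here could be sketched as a minimal interface contract: the sound engine exposes only raw control parameters and knows nothing about phrases or articulations, while a separate performance layer maps high-level musical gestures onto those parameters. All class, method, and parameter names below are hypothetical, chosen only to illustrate the division of responsibilities.

```python
class SoundEngine:
    """Generates sound; exposes parameters, knows nothing about performance."""

    def __init__(self):
        # Illustrative parameter set; a real engine would expose its own.
        self.params = {"pitch": 60.0, "brightness": 0.5, "pressure": 0.0}

    def set_param(self, name, value):
        self.params[name] = value


class PerformanceLayer:
    """Musically aware layer; can drive any engine via its parameter API."""

    def __init__(self, engine):
        self.engine = engine

    def play_note(self, pitch, expression):
        # Map a high-level gesture onto low-level engine parameters.
        self.engine.set_param("pitch", float(pitch))
        self.engine.set_param("pressure", expression)


engine = SoundEngine()
performer = PerformanceLayer(engine)
performer.play_note(64, 0.8)
print(engine.params["pitch"], engine.params["pressure"])  # 64.0 0.8
```

Because the performance layer only talks to the engine through its parameter interface, either side can be swapped out independently: a smarter assistive performer could drive the same engine, or the same performance could drive a different engine.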

Currently, too much "performance" is baked into our sound generation. Triggering recorded musical phrases and articulations is too limiting.

Modeling for sound generation will offer a much more flexible approach to musical expression than recorded articulations.

And musically aware assistive AI will offer a much more flexible approach to building arrangements than recorded phrases.
