Audio Modeling instead of gigs and gigs of sampling...

PulseCode
PulseCode Member Posts: 132 Helper
edited October 22 in Tech Talks

So I am not sure if this is the right place for this, but I recently purchased the sax and violin from Audio Modeling | SWAM and Sound Engine Technologies, and WOW! This is the way things should be.

I am surprised that more companies aren't doing this type of development work. Instead of sampling gigs and gigs of waveforms, every articulation, etc., why not analyze the structure of an instrument and the wave signatures it produces when played, and model that, perhaps using AI?

Does Native Instruments have any plans to create instruments like that? I'd love to see more of this type of versatile instrument development.

Comments

  • nightjar
    nightjar Member Posts: 1,322 Guru
    edited April 2023

    Would be awesome to see some modeling added to the catalog!

    And to have an expansion of NKS that standardizes the parameters for describing the physical realm and the articulations of interaction within it... and to develop a new generation of controllers optimized for human kinetics and ergonomics.

  • James Thornton
    James Thornton Member Posts: 20 Member

    Yes, the Audio Modeling products are amazing, but I still sometimes prefer some of the sample-based products. The Chris Hein Solo Strings come to mind.

  • nightjar
    nightjar Member Posts: 1,322 Guru

    Instead of the endless rehashing of old synths... modeling opens up the imagination to new things that could be...

  • Kubrak
    Kubrak Member Posts: 3,067 Expert

    One may use Reaktor to open up the imagination... And there are several physically modeled instruments in Reaktor.

    NI already uses physical modelling in Guitar Rig, and AI in a few of its models...

  • EasyTiger32
    EasyTiger32 Member Posts: 15 Member
    edited April 2023

    Modeling is great. I have almost all of the SWAM products, as well as modeled or hybrid libraries by Sample Modeling and IK Multimedia. After working with modeled libraries, I find sample libraries frustratingly rigid.

    But, while modeled libraries are the cat's meow in terms of flexibility, sampling captures a snapshot of the instrument in performance. When it comes to sound, nothing (yet) beats an actual sampled articulation. Straight Ahead Samples, for example, took sample libraries to another level with their Smart Delay feature, which assembles performances from a huge pool of samples. The result is unbeatable sound, but it comes at the cost of instruments that are arguably intolerably inflexible.
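
    For a concrete sense of what that kind of assembly involves, here is a tiny hypothetical sketch in Python. To be clear, this is NOT Straight Ahead's actual Smart Delay algorithm; it just illustrates the general idea of picking the recorded sample whose performance context best matches the incoming note (the pool entries and weights are made up):

        # Each recorded sample is tagged with the context it was played in;
        # we pick the closest match to the incoming note's context.
        pool = [
            {"pitch": 60, "prev_interval": 0,  "tempo": 90,  "file": "C4_iso.wav"},
            {"pitch": 60, "prev_interval": 2,  "tempo": 140, "file": "C4_run_up.wav"},
            {"pitch": 60, "prev_interval": -5, "tempo": 70,  "file": "C4_leap_down.wav"},
        ]

        def pick_sample(pitch, prev_interval, tempo):
            def score(s):
                # Weighted distance between the note's context and the sample's.
                return (abs(s["prev_interval"] - prev_interval) * 3
                        + abs(s["tempo"] - tempo) * 0.1)
            return min((s for s in pool if s["pitch"] == pitch), key=score)

        print(pick_sample(60, 2, 150)["file"])   # -> C4_run_up.wav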

    There are companies out there continually researching and developing modeling and other technologies. AcousticSamples' V-Horns are relatively new hybrid libraries. And just look at what Dreamtonics has done with vocal synthesis in the past two years. The industry will undoubtedly produce more of what you're looking for, but we're still in the early stages of development. The best is yet to come, and it won't be long.

  • nightjar
    nightjar Member Posts: 1,322 Guru
    edited April 2023

    Writing code optimized for realistic rendering of vibrating physical objects seems like a task extremely well-suited for machine learning.

    Once this gains just a bit more attention from virtual instrument developers, modeling could be the dominant design approach for instruments real and imagined.

    A trickle will quickly become a flood.
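
    As a toy illustration of why machine learning fits this task: the behavior of a simple vibrating system can be recovered from its output alone. A minimal Python sketch, using a damped oscillator and plain least squares rather than anything like a production neural model:

        import numpy as np

        sr, f0, decay = 44100, 440.0, 0.9998    # assumed sample rate, pitch, damping
        n = np.arange(sr)
        x = decay ** n * np.sin(2 * np.pi * f0 / sr * n)   # the "measured" vibration

        # A damped oscillator obeys x[n] = a*x[n-1] + b*x[n-2]; fit a, b from data.
        A = np.stack([x[1:-1], x[:-2]], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, x[2:], rcond=None)

        # Resynthesize from just two samples and the two learned coefficients.
        y = list(x[:2])
        for _ in range(sr - 2):
            y.append(a * y[-1] + b * y[-2])
        print(np.max(np.abs(np.array(y) - x)))   # tiny error: the dynamics were "learned"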

  • LostInFoundation
    LostInFoundation Member Posts: 4,488 Expert

    What I’m afraid of is that these guys don’t have the skills to code something like this, given the problems they have with “easier” programs like the installation center. Unless, as nightjar says, an AI does the job for them…

  • DunedinDragon
    DunedinDragon Member Posts: 973 Guru

    I'm a big fan of modeling, but in the right places, and for me, instruments are not the right place. Modeling, by definition, means creating a mathematical formula that simulates a real-world characteristic, and that's pretty hard to define for something like a trumpet or a string section with various articulations. It can be done, and it's better than synth-based approaches, but it doesn't compare with an instrument captured while being played by a seasoned professional in a near-perfect environment, because so much of the result depends on playing technique that varies from person to person. That's why one of the biggest concerns many people have is that models can strip away the emotion of the player; emotion is pretty hard to simulate mathematically.
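
    To make "a mathematical formula that simulates a real-world characteristic" concrete: the classic Karplus-Strong algorithm models a plucked string with nothing but a delay line and an averaging filter. A minimal Python sketch (the parameter values are illustrative, not from any shipping product):

        import numpy as np

        def pluck(freq_hz=220.0, dur_s=2.0, sr=44100, damping=0.996):
            n = int(sr / freq_hz)                 # delay-line length sets the pitch
            buf = np.random.uniform(-1, 1, n)     # the "pluck" is a burst of noise
            out = np.empty(int(sr * dur_s))
            for i in range(len(out)):
                out[i] = buf[i % n]
                # Averaging adjacent samples is a low-pass filter that mimics
                # the energy loss of a real vibrating string.
                buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
            return out

        samples = pluck()   # two seconds of a decaying, 220 Hz string-like tone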

    It's also true that modeling is a very different specialty from sampling: its practitioners aren't necessarily recording engineers, but highly capable developers who can code real-time signal-processing applications.

  • Kubrak
    Kubrak Member Posts: 3,067 Expert

    Pulled from quote: “What I’m afraid of is that these guys don’t have the skills to code something like this, given the problems they have with ‘easier’ programs like the installation center.”

    NI already uses machine learning in developing models for Guitar Rig...

  • nightjar
    nightjar Member Posts: 1,322 Guru

    There are two things going on that should be modeled separately.

    The first thing is the instrument itself: How a collection of physical masses would vibrate in a defined environment over a range of interactions.

    The second thing is the performance: How a person might choose to control all of these interactions.

    Native Instruments should think deeply about how this first thing could exist within an enhanced NKS. NI is positioned with NKS to set an "industry standard" for how this collection of physical masses is presented as parameters for interaction. This could be their biggest thing ever.

    Getting this first thing right would set the stage (pun intended) for a rich environment of intelligent Performance Libraries. This would be a huge new space for third parties to contribute to, leaping over MIDI 2.0 and creating something far more forward-thinking.
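
    A rough sketch of that instrument/performance separation in Python (all class and parameter names here are hypothetical, not NKS or any actual NI API):

        from dataclasses import dataclass

        @dataclass
        class BowedString:
            """The first thing: a physical description, with no musical intent."""
            length_m: float = 0.33
            tension_n: float = 60.0
            bow_pressure: float = 0.0    # continuous controls a performance can move
            bow_velocity: float = 0.0

            def render_sample(self) -> float:
                # Placeholder for the actual physics; returns a dummy value here.
                return self.bow_pressure * self.bow_velocity

        class Performance:
            """The second thing: decides how the controls evolve over time."""
            def play(self, inst: BowedString, gestures):
                out = []
                for pressure, velocity in gestures:   # e.g. from an AI or a controller
                    inst.bow_pressure, inst.bow_velocity = pressure, velocity
                    out.append(inst.render_sample())
                return out

        notes = Performance().play(BowedString(), [(0.5, 0.2), (0.8, 0.4)])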

  • DunedinDragon
    DunedinDragon Member Posts: 973 Guru

    I think the key to modeling supplanting samples won't necessarily be found in the quality of libraries from NI, but probably in libraries from deep-sampling companies such as 8DIO or Spitfire. If modeling could successfully reproduce those libraries, you might adopt it for the savings in storage over sampled libraries, and maybe even some savings in latency. But the hyper-realistic sounds achievable through deep sampling, such as the 8DIO Ostinato libraries, which are sensitive to the proximity of notes in fast or slow ostinato phrasing across multiple instruments, would need to be the target for modeling to be worth displacing sampled libraries, IMHO.

    The catch with modeling is that the more complex the modeling characteristics become, the less willing people are to climb the learning curve of setting everything up correctly to get the sound they're after. Maybe if, in the near future, companies can use some form of AI to emulate the expertise of the 10 to 15% of people who can use complex modelers to their fullest extent, you might have something.

  • nightjar
    nightjar Member Posts: 1,322 Guru

    Pulled from quote: "Maybe if, in the near future, companies can use some form of AI to emulate the expertise of the 10 to 15% of people who can use complex modelers to their fullest extent, you might have something."

    Exactly. Separating the generative AI for the Performance from that of the Instrument itself is a pathway to achieving this. Imagine performing a rough "sketch" of a part (humming, finger taps, and/or whatever skill you have on some purpose-built controller) and having an AI Performance Library flesh it out in several ways for you to choose between.

  • PulseCode
    PulseCode Member Posts: 132 Helper

    I see lots of emotion in a modeled sax here, and he's not even using a breath controller.

    Single "physical instrument" should be modeled, string section can be sampled on the other hand.

    https://youtu.be/Tw95tBk84K4

  • PulseCode
    PulseCode Member Posts: 132 Helper

    Perhaps, but I find the Audio Modeling violins I am playing with far more emotional and expressive than the sampled versions. Articulations can also be modeled, as actions performed on the instrument. Snapshots of recordings are great, but they are outdated technology, literally descended from tape-based strings, just in digital form. But yes, I am hopeful that one of my favorite companies, Native Instruments, will eventually get into this technology and make something more interesting than an FM/analog synthesizer model.

  • DunedinDragon
    DunedinDragon Member Posts: 973 Guru
    edited May 2023

    Nice sound, but it's hard to get a feel for whether there are realistic articulation capabilities, such as growls and growl vibrato, rips, grace notes, grace vibrato, trills, and forte-piano crescendos, in that particular piece. Not to mention the mouth elements like "ta" versus "da", all of which are generally captured individually in samples.

    I think one of the things that keeps me locked into sampling is that the technical proficiency and the musical proficiency don't come from the same place. The musician can provide depth in terms of the various ways the instrument is commonly used in practice, and the technologist can translate that into the samples needed to achieve those things. I think we'll eventually get there as we get more and more sophisticated in layering synths, but it may be a while before processing power catches up enough to avoid latency. When it does happen, I'll be more than happy to get back some of my disk storage space!

This discussion has been closed.