What's the difference between linear and cubic interpolation?

Michael O'Hagan
Michael O'Hagan Member Posts: 93 Helper

I understand linear interpolation: it's basically just crossfading between values over a set time. But what is cubic interpolation, and how does it differ?

I've never gotten a good explanation of cubic interpolation, so please, take me to school on this subject.

Answers

  • colB
    colB Member Posts: 762 Guru

    Cubic uses 4 points instead of two, so it can get closer to following (some?) curves.

    Here's a link that has a pretty good explanation. It's relatively terse and explains the math in a way you can follow to the extent you understand it, but even if you don't fully get that, you will come away with a better understanding.

    Cubic is not perfect for everything; sometimes you need higher order, and sometimes linear can be better. There is a pretty well-optimised cubic interpolation macro in some of the factory modules.
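
    To make the "4 points instead of two" idea concrete, here's a rough sketch in Python (my own illustration, not the factory macro) of 2-point linear interpolation next to a 4-point Catmull-Rom cubic:

    ```python
    # Sketch only: 2-point linear vs 4-point Catmull-Rom cubic interpolation.

    def linear_interp(y0, y1, t):
        """Crossfade between the two neighbouring samples (t in 0..1)."""
        return y0 + t * (y1 - y0)

    def catmull_rom(ym1, y0, y1, y2, t):
        """Interpolate between y0 and y1, also using one sample on either
        side, so the curve respects the local slope."""
        return y0 + 0.5 * t * (
            (y1 - ym1)
            + t * ((2.0 * ym1 - 5.0 * y0 + 4.0 * y1 - y2)
                   + t * (3.0 * (y0 - y1) + y2 - ym1)))

    # Reading a small table at fractional index 2.3:
    table = [0.0, 0.7, 1.0, 0.7, 0.0]
    i, t = 2, 0.3
    print(linear_interp(table[i], table[i + 1], t))
    print(catmull_rom(table[i - 1], table[i], table[i + 1], table[i + 2], t))
    ```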

  • Michael O'Hagan
    Michael O'Hagan Member Posts: 93 Helper

    Thank you, I'm going to have to take some time on this as I was never formally educated in college-level mathematics, so wish me luck 😀.

  • Studiowaves
    Studiowaves Member Posts: 451 Advisor
    edited March 2022

    I think the delay with interpolation does a pretty good job when varying the delay. Not sure what they did there, but it's pretty amazing how well it works. Have a look at their method of doing things in the delay module; I never understood how it works, but I do wish I was that smart. The developers on the Reaktor project must be top notch to do what they've done.

    I'm hoping they start threading out voices on different processor cores. They say everything needs to be done in one core, and I can see that. But I don't understand why they haven't made use of other cores running the same code for extra voices. Seems an easy task in comparison to their current skill levels. Heck, they could underclock 4 cores and use interpolation from each core's output. It would be like delaying each successive core by 0, 1, 2, 3 clocks. This would make it easy to grab the outputs from all 4 cores, much like interpolation. It seems a sampling rate of 12k would be the equivalent of 48k in the end. And all this talk of needing to stick with 1 core because Reaktor needs to be perfectly timed would still hold: it's like saying it's still done with 1 core, it's just that the missing data would be filled in, like linear interpolation, from the other cores. Seems a viable approach for making use of multi-core processors.

    I would imagine for Reaktor "effects" it would be easy to distribute an input signal clocked at 48k, with the first clock going to the first core, the second clock to the second core, and so on. By doing it like that and running each core at 12k, it seems possible the stream of outputs from the 4 cores could end up as a 48k stream of processed samples. Not positive on all effect types, but a simple delay would most likely work fine. Using Reaktor as a VSTi is a different story; for that it might be foolproof to alternately distribute voices evenly among cores and "sum" the outputs. It's no different than using 4 different computers with the same latency, but distributing the MIDI or OSC data to each core by evenly dividing the "voices". Not exactly interpolation in this case, but it still gives us ideas on interpolation in general.

  • colB
    colB Member Posts: 762 Guru

    I think the delay with interpolation does a pretty good job when varying the delay. Not sure what they did there, but it's pretty amazing how well it works.

    Afaict it's a cubic Catmull-Rom. I implemented Catmull-Rom as per the link I posted; then, looking at the NI 4-point, it seems to be an optimised version of exactly the same thing.
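
    For reference, a fractional delay-line read using that kind of 4-point interpolation might look roughly like this (a sketch of the general idea, not the actual NI implementation):

    ```python
    # Sketch: read a delay line at a fractional position using a 4-point
    # (Catmull-Rom style) interpolation. Not the NI module, just the idea.

    def delay_read(buf, write_pos, delay_samples):
        """Read 'delay_samples' (possibly fractional) behind write_pos."""
        n = len(buf)
        pos = (write_pos - delay_samples) % n
        i = int(pos)
        t = pos - i                     # fractional part between two samples
        ym1, y0, y1, y2 = (buf[(i - 1) % n], buf[i % n],
                           buf[(i + 1) % n], buf[(i + 2) % n])
        return y0 + 0.5 * t * (
            (y1 - ym1)
            + t * ((2.0 * ym1 - 5.0 * y0 + 4.0 * y1 - y2)
                   + t * (3.0 * (y0 - y1) + y2 - ym1)))
    ```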

  • Chet Singer
    Chet Singer Member Posts: 56 Advisor

    I agree the 4-point delay is quite good. I once ran a dozen of them in series, and even when the interpolation point was exactly between two samples and the frequency was 10 kHz, the response was almost flat.

  • Studiowaves
    Studiowaves Member Posts: 451 Advisor
    edited March 2022

    Ya, I was impressed; you can really tell the difference on the non-interpolated delay. These devs are sharp, possibly some of the finest in Germany. I'll bet most are musicians and really enjoy their jobs. How cool, gotta be a dream come true. I would imagine using the trend of a spectrum analysis and filling in the interpolation points to match the harmonic trends would make a really nice interpolation.

    BTW, I had forgotten about the import options of a simple read-only memory array; it appears they let you import all types of audio files: .wav, .aiff, and more. I was using it to import text files, and it appears to be limited to 20 bits of address space. It may actually expand if you import a large .wav file or something. Maybe there are text import limitations.

    Also, I'm trying to wrap up the FM project and would like to hear what it can do with a wind controller. It's pretty elaborate, with some extra features that respond to dynamics. It not only changes the timbre but varies the decay times as well. It can easily be modified to modulate the attack times or really any envelopes, but I geared it up as a sampler companion for pianos. Since your attempt to go the additive route was derailed by processing power, you might like it as an alternative. I was able to mimic the tonal characteristics of a clarinet, but without a wind controller it's pretty awful sounding. lol Talk later...

  • Studiowaves
    Studiowaves Member Posts: 451 Advisor

    You're a gifted individual, I hope you know that. My brain runs a hundred times slower, trust me. Come on, man, you made a convolution reverb; you're in a class of your own. I'm surprised you don't have a few doctorates under your belt. I should look at that. What does it do, measure the time and amplitude of the reflections of an impulse recording? Then you have delays following that match the time and amplitude of the impulses. Imagine each reflection has to be spectrum-analyzed as well, so each delay also needs to be filtered. I would imagine there is a trend of reflections that repeat themselves, and some of those good convolution programs use that trend to lengthen the delay times and prevent duplicate processing, minimizing CPU.

    Btw, I'm getting the hang of events and how they work. I don't think there is an event clock; I think a latch is the only thing that will terminate events, meaning the latch can be polled at some interval like the serial clock. I'm thinking using event inputs to other modules blocks the serial action and that event inputs are polled. I'm almost sure of it, because I tried to use event inputs and they are not as quick-acting as an audio input, leading me to believe they are polled with some event clock. I figured this out by using a comparator to monitor the output of an envelope generator. I used 60 dB down as the cutoff point and turned off the FM operator module, which has a lot going on. It works pretty well. Then I changed the audio input to event inputs, and the CPU went way up when playing a bunch of notes with a faster decay with the sustain pedal on. It is a test, and I don't think the event inputs are as fast as the audio rate. The dup filter can really come in handy for stopping events, but I'm thinking of using the router from the comparator to dump the events, which is something you would have done right off the bat with all the experience you have.

    The only reason I know what to do half the time is from my electronics background; otherwise I'd be totally lost in Core. What's funny here is that I made an analog version of the LR4 crossover when the Commodore 64 was the big thing. lol Ya, see if you can make that convolution use little CPU with some trend analysis; I can see that being a real treat for us guys. Like, really cool!

  • colB
    colB Member Posts: 762 Guru
    edited March 2022

    [off topic]

    All convolution does is apply an impulse response to a signal. An impulse response (IR) is a recording of an impulse being played in some environment, e.g. popping a balloon in a room, or tapping the bridge of a violin with a mallet.

    Basic convolution just takes each sample of the main signal, then for that sample it takes every sample of the IR, multiplies it by the signal sample, and adds them all up in a buffer. Basically, each signal sample becomes a copy of the IR scaled by that sample's amplitude... all of these copies are superimposed in the buffer.

    The result sounds like the signal played in the environment the IR was recorded in. It is very simple, and in that form trivial to implement; however, it is impractical due to CPU requirements - if the IR is 1 second long and you're running at a 44100 Hz sample rate, then you need 44100 multiply-adds plus 44100 buffer reads and writes for each audio tick.
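
    A literal sketch of that naive form (my own illustration, not the Reaktor build) looks like this:

    ```python
    # Naive (direct) convolution: every input sample stamps a scaled copy
    # of the IR into the output buffer.

    def convolve_direct(signal, ir):
        out = [0.0] * (len(signal) + len(ir) - 1)
        for n, x in enumerate(signal):      # for each signal sample...
            for k, h in enumerate(ir):      # ...superimpose a scaled IR copy
                out[n + k] += x * h
        return out

    # With a 1-second IR at 44100 Hz, the inner loop is 44100 multiply-adds
    # per input sample -- which is why the direct form is impractical live.
    ```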

    Fortunately, some folk much cleverer than me worked out how to use the FFT to implement convolution, making it much more efficient and practical for real-time processing. I just hacked together a basic Reaktor implementation of that.
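
    The one-shot (offline) version of that FFT trick is only a few lines; this sketch assumes numpy and is not how the Reaktor core build is structured - a real-time version would process the signal in blocks (overlap-add or overlap-save) - but the underlying identity is the same:

    ```python
    # Sketch (offline, using numpy): convolution via FFT -- multiplication
    # in the frequency domain equals convolution in time.
    import numpy as np

    def convolve_fft(signal, ir):
        n = len(signal) + len(ir) - 1       # length of the full convolution
        X = np.fft.rfft(signal, n)
        H = np.fft.rfft(ir, n)
        return np.fft.irfft(X * H, n)
    ```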

    Short IRs are particularly useful for physical modelling, e.g. cabinet models for amp sims, or modelling the body of stringed instruments to pair with bowed and plucked string simulations.
