Wavetable package
I think I finished my little work on wavetables, started here. Some days ago David uploaded an ILO version. I like it more, mainly because it uses one thing for what I'm using two, so the result is more uniform. I still uploaded mine because I also like it, mainly because I think it's neat. It ended up being much simpler and, like, more orthogonal than the Carbon 2 implementation made me expect. It makes it very easy to use several oscillators on the same tables and morph between tables, without needless duplication. Also, because I like pictures so much, here are some colorful spectra:
ILO is pretty much the right technique for this, but hey, my boys are pretty too (?)
I couldn't find better interpolators than a 4-point cubic Hermite and an 8-point Lagrange that wouldn't burn the computer, and not for lack of trying, but I think I'll go back to that. Something might be hidden somewhere 🧐
Comments

i don't know that i'd ever seen antialiasing before appear to make extrasonic partials comically pratfall off the edge of oblivion when viewed on an analyzer during a sweep. well, never mind, it's just the scope settings, well...
it is funny about ILO, every time i see it utilized, it's often missing parts that i considered might be most fundamental to its operation.. or the manner in which they are employed is totally transformed by having to accommodate a different process...
oh thank god this uses interpolation not ILO, i was starting to lose my temper with myself not being able to find it. the point about ILO still stands, but not today apparently. this is good.. for a moment i thought the antialiasing was coming from normal use of the interpolator. i was looking it over seeing nothing to steal, almost missed the 8-point Lagrange interpolator!!
did the ILO one get posted anywhere?
New version. I'm pretty happy with this one. Sorry that I keep disappearing, life is a mess 🙂
it is funny about ILO, every time i see it utilized, its often missing parts that i considered might be most fundamental to its operation.. or the manner in which they are employed are totally transformed by having to accommodate a different process
There are two complications with ILO. First, some functions have awful integrals, or are not analytically integrable at all. Second, if the function produces stretches of very close values, you can't use ILO there, because it gets too close to a division by zero (or right there). Both problems are worse for triangular ILO, so you don't see it used very much.
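For the analytically-integrable case, rectangular ILO is compact enough to sketch. Here's a minimal, hypothetical Python illustration (my own code, not the package's) using a tanh saturator, whose antiderivative log(cosh) has a closed form; the eps branch guards against the near-division-by-zero problem from stretches of very close values:

```python
import math

def ilo_tanh(x, x_prev, eps=1e-6):
    """First-order rectangular ILO for y = tanh(x).

    Output is the average of tanh over [x_prev, x], computed from the
    antiderivative G(x) = log(cosh(x)). When consecutive inputs are too
    close, the quotient degenerates, so we fall back to evaluating the
    shaper at the midpoint instead.
    """
    dx = x - x_prev
    if abs(dx) < eps:                      # stretch of very close values
        return math.tanh(0.5 * (x + x_prev))
    G = lambda v: math.log(math.cosh(v))   # antiderivative of tanh
    return (G(x) - G(x_prev)) / dx
```

For a shaper without a closed-form antiderivative, this scheme simply doesn't apply, which is the first complication above.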
oh thank god this uses interpolation not ILO
Now it uses both 🙂
did the ILO one get posted anywhere?
David (bolabo) uploaded it to the library some weeks ago. My new version comes in three orders; first order is the same as rectangular ILO, differing from David's in the handling of the limit case and the choice of interpolator.
I was trying to get my head around the ILO version, and I kept thinking that this is more simply understood as good old preintegrated wavetables. What's specific about ILO is bandlimiting a function of a function of time (or phase, which is circular time): a waveshaper on an arbitrary waveform. In this case we just have a waveform, which is a function of time/phase itself. We're reading a table, we want the value for the current time. Say we call our table f. Instead of interpolating f around the current position, we take the average of the values between the previous and current positions. We make a table F with the partial sums of f: the first sample, the sum of the first two, the sum of the first three, etc. To know the average of f between positions, we do: (F(now) - F(before)) / (now - before). This is the formula for rectangular ILO, only that "x" is just time instead of some unknown input. We still use interpolation, but now on F instead of f.
This is all good for downsampling (raising pitch), where the aim is to reduce aliasing. Below the natural frequency of the table, there's less than one sample between read positions, so taking an average doesn't make a lot of sense. For upsampling, the aim is to reduce imaging, and preintegration is not very good at that. At 0 Hz, now - before would equal zero, so we'd have a division by zero. This is the limit case of ILO, with the small "x" difference being a low instantaneous frequency. We don't have to set an arbitrary threshold: below the natural frequency of the table, we switch to plain interpolation.
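As a concrete sketch of the read described above (my own toy Python, with made-up names, and plain linear interpolation standing in for the real interpolators):

```python
def linear_interp(table, pos):
    """Linear interpolation with wraparound (pos assumed >= 0)."""
    n = len(table)
    i = int(pos)
    frac = pos - i
    return table[i % n] + frac * (table[(i + 1) % n] - table[i % n])

def preintegrate(f):
    """Partial sums: F[k] = f[0] + ... + f[k]. If f is centered
    (zero mean), F is periodic and can be read with wraparound."""
    F, acc = [], 0.0
    for v in f:
        acc += v
        F.append(acc)
    return F

def read_preintegrated(f, F, now, before):
    """Average of f between the previous and current read positions,
    i.e. (F(now) - F(before)) / (now - before). Below the table's
    natural frequency the positions are less than one sample apart,
    so we switch to plain interpolation of f instead."""
    step = now - before
    if abs(step) < 1.0:
        return linear_interp(f, now)
    return (linear_interp(F, now) - linear_interp(F, before)) / step
```

For a centered 4-sample saw f = [-1.5, -0.5, 0.5, 1.5], preintegrate(f) gives [-1.5, -2.0, -1.5, 0.0], and a read spanning two samples returns their average.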
Things get more interesting if we consider integrating more than once: taking the sum of the sum of f, then the difference of the difference of F, gives 2nd order preintegrated wavetables, which are more bandlimited. Differentiation amplifies interpolation error, so higher orders of integration need better interpolation. It also may lose some bits of precision at low frequencies, so 1st order works alright in 32 bit, 2nd order is mostly ok in 32 but it needs 64 for wide phase modulation, and 3rd order only works in 64.
Two more things I tried. Doing double integration for the general case of ILO (I tested a hyperbolic saturator) doesn't seem to work: it's better than nothing, but still worse than single integration (ILO rect). I suppose you can't just treat the arbitrary "x" as if it were an independent variable like time. I also couldn't get ILO tri to work with wavetables: there's a discontinuity at phase wraps that all the tweaks I tried failed to fix. In any case, I don't think it's worth it: ILO tri needs one more table read/interpolation and a rather complicated formula. It would be nice to see the relation between higher order integration for wavetables and ILO with higher order kernels; they don't look the same, but I've not done the math.
Some pictures of a 20-20k saw sweep. The rows are no antialiasing, 1st, 2nd and 3rd order. The columns are the sweep at 44.1k, the sweep at 176.4k, and the sweep at 44.1k phase modulated by a sine with ratio 1/16 and index 1/2.
Amazing! The 3rd order version is genius and sounds great.
Two more things I tried. Doing double integration for the general case of ILO (I tested a hyperbolic saturator) doesn't seem to work: it's better than nothing, but still worse than single integration (ILO rect). I suppose you can't just treat the arbitrary "x" as if it were an independent variable like time.
Yeah I couldn't get this to work either. I've found that combining some oversampling with ILO works well when applying shaping / phase distortion or 'sync' style effects to oscillators, plus regular ILO rolls off some of the top frequencies due to the averaging, and so oversampling handles this too.
this has activated at least one (1) neuron in my smooth brain, and that is a previously unexamined observation that ILO seems to share some similarity with the DPW process: the integral is done analytically (continuous time domain?) and then differentiated. benjamin poddig described it on the old forum like that, as basically stealing the antialiasing juice from the difference between the continuous integral and discrete-time differentiation. he also talked about preintegrating wavetables or even samples i think as well working with DPW. would it be a bridge too far to say that DPW (like a?) is a specific case of the more general ILO?
it's a pretty good result, about on par with the factory polyblep. and it seems this doesn't suffer the same problems with FM at higher orders as DPW does (pops and clicks)
also what is the difference between triangular and rectangular ILO? one more sample point?
Amazing! The 3rd order version is genius and sounds great.
Thanks! *hides head in the sand*
I've found that combining some oversampling with ILO works well when applying shaping / phase distortion or 'sync' style effects to oscillators, plus regular ILO rolls off some of the top frequencies due to the averaging, and so oversampling handles this too.
It's like, ILO does something oversampling just can't, which is cleaning all the foldovers up to infinity, but the reduction of the first foldover is quite mild, so the combination is not redundant.
ILO seems to share some similarity with the DPW process
I would say it's the same thing, with ILO being the general case. Vadim does mention it in the ILO paper. Thing is, the generalization to an arbitrary input makes the math quite a bit more complicated 🙂
what is the difference between triangular and rectangular ILO? one more sample point?
Rectangular ILO takes a plain average of the values from the previous to the current x, triangular (or linear) ILO takes a weighted average around the current x. Rectangular and triangular refer to the shape of the weighting function, aka lowpass kernel.
it seems this doesn't suffer the same problems with FM at higher orders as DPW does (pops and clicks)
I spent like 90% of the effort solving this 🙂 The main thing is that you can't extend differentiation to arbitrarily low frequencies, and phase/frequency modulation can make the instantaneous frequency reach or even cross zero very fast, so if you don't switch to plain interpolation below SR / table size you get a division by zero. There are a couple more problems though. Each differentiation has a delay of 1/2 sample, and at negative frequencies the delay goes in the opposite direction with respect to the table, so there's a 1-sample difference per order between positive and negative frequencies which you have to deal with. Any messing with the read position for the differentiated branch risks breaking the differentiators' states, so I fixed it by "bridging" the negative and positive sides with a 1/2/3-sample phase shift across the plain interpolation branch (that is, for low frequencies around 0 Hz):
Say you have a 1024-sample frame and order 3. There's a 3-sample phase difference between positive and negative frequencies, so I apply a linear phase shift to the non-differentiated branch, from 3 samples at -SR/1024 Hz to zero at +SR/1024 Hz.
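If I read this right, the offset on the non-differentiated branch could be computed roughly like this (a guess at the scheme, with hypothetical names; natural_freq would be SR / table size, i.e. SR/1024 here, and order the number of integrations):

```python
def bridge_offset(inst_freq, natural_freq, order):
    """Linear phase shift bridging negative and positive frequencies:
    `order` samples at -natural_freq, ramping down to 0 at +natural_freq.
    Outside that low-frequency window the offset is pinned."""
    if inst_freq >= natural_freq:
        return 0.0
    if inst_freq <= -natural_freq:
        return float(order)
    t = (natural_freq - inst_freq) / (2.0 * natural_freq)  # 0..1 ramp
    return order * t
```

So at 0 Hz, halfway through the window, a 3rd order oscillator would read 1.5 samples behind, splitting the difference between the two directions.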
The last thing concerns the part I marked in red. You can't use the same phase difference for all differentiators, like, for 3rd order with y = table value and x = phase/position:
Each phase difference needs to span the samples involved in the numerator. I haven't really proved this; I just spent a stupid amount of time trying to fix it in other ways, until I wrote it like this and it looked suspicious 🤔
trying to get my head around your ensemble but my ADHD is resisting with everything it's got (as well as your expert use of bundles and distribution busses haha)
It's like, ILO does something oversampling just can't, which is cleaning all the foldovers up to infinity, but the reduction of the first foldover is quite mild, so the combination is not redundant.
i'd guess bleps and their ilk don't have this property?
anyway, it's really impressive to see such a good result considering the wavetables aren't prefiltered or anything (right?)
it also seems there is no reason this also couldn't apply to samples too, correct?
one more observation is that, like DPW, there is some high frequency noise that other antialiasing methods don't seem to suffer from, but higher orders seem to help with it
i'd guess bleps and their ilk don't have this property?
They do. Like ILO, they bandlimit a continuous-time function before sampling it, they just focus specifically on discontinuities (jumps and corners). You can use them for functions whose discontinuities you know beforehand, and you'll have to keep track of their position. It's much more complicated to apply them to an arbitrary function, and pretty much impossible for a wavetable: how do you know what's a jump and what's not?
considering the wavetables aren't prefiltered or anything (right?)
They're not, and actually the tables I loaded are already aliased: they're naive, non-bandlimited triangle/saw/square. They only need to be centered, aka DC filtered, so the average of all values is subtracted from all values:
This is necessary to get a periodic integral: the integral of a noncentered signal grows indefinitely.
it also seems there is no reason this also couldn't apply to samples too, correct?
Well, a wavetable is a (usually short) sample in a (usually fast) loop, so yeah. I mean, it could be used generally as a form of bandlimited downsampling, for transposing a sample upwards.
like DPW, there is some high frequency noise that other antialiasing methods don't seem to suffer from, but that higher orders seem to help with that
I think you mean there's some aliasing 🙂 Like, there could be actual noise too: for example in the current implementation, if 3rd order is used in 32-bit, or 2nd order is used in 32-bit with wide phase modulation. You'd also get noise if the low frequency gap was unsolved or wrongly solved, or the differentiators' gains were applied all together after taking all differences, etc.; these were the things I had to solve. I think you shouldn't get noise now, but you still get some aliasing, which is seen in the pictures. You don't hear aliasing at high frequencies using mipmaps because the table has been transposed and (heavily) filtered before resampling. The practicality of preintegration is that you don't need multiple copies of the table: using a single copy for the whole range, you get quite good antialiasing. Now, you could actually do both things: have, say, two copies, one for the upper half of the range which has been heavily filtered, and preintegrate both. You could call that preintegrated mipmaps. I'd consider it a bit overkill 🙂
There's another advantage of using a single table. I've only been showing frequency responses; let's take a look at the response in time, for the same 20-20k saw sweep:
Above is per-octave mipmaps with 4-point interpolation, below is 3rd order preintegration with no mipmaps. That staircased level wouldn't play well with, say, compression or saturation. You could maybe turn the staircase into a monotonic polygonal by having two sets of overlapping mipmaps, sort of like overlapped FFTs. Doesn't seem worth the effort though.
yeah, i think it is aliasing, i guess what i meant more specifically was that the aliasing seems to have some audible amplitude modulation, especially at lower orders. this was something i noticed originally with Ben Poddig's oscillators. you can hear what i mean with an fft gate that passes lower amplitudes:
interestingly, this is not entirely dissimilar from what can be seen in analog oscillators. at least the ones that i've looked at display similar modulation effects towards the noise floor
This is necessary to get a periodic integral: the integral of a noncentered signal grows indefinitely.
i guess this means using it with samples might be kind of a ****** shoot if the samples contained even a small amount of DC, or might require leaky integration?
yeah, i think it is aliasing, i guess what i meant more specifically was that the aliasing seems to have some audible amplitude modulation, especially at lower orders. this was something i noticed originally with Ben Poddig's oscillators. you can hear what i mean with an fft gate that passes lower amplitudes
Most of it is still the actual saw, but the "noisy" part is indeed aliasing. You can see it here in a zoom around 16k, for order 1/2/3:
i guess this means using it with samples might be kind of a ***** shoot if the samples contained even a small amount of DC, or might require leaky integration?
Nope, the builder module takes care of it. The sum is run twice, the first time to get the average (= DC), then with the average subtracted:
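In code, that two-pass build might look like this sketch (my naming, not the actual builder module): the first pass measures the DC, the second sums the centered values, so the last partial sum lands exactly on zero and the integral is periodic:

```python
def build_integrated(f):
    """Two-pass build: measure DC, then integrate the centered table."""
    dc = sum(f) / len(f)       # first pass: the average is the DC offset
    F, acc = [], 0.0
    for v in f:                # second pass: partial sums of f - dc
        acc += v - dc
        F.append(acc)
    return F                   # F[-1] == 0.0 by construction
```

For example, build_integrated([1, 2, 3, 4, 5]) centers the ramp to [-2, -1, 0, 1, 2] and returns the partial sums [-2.0, -3.0, -3.0, -2.0, 0.0].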
Nicely done.
That's very beautiful, I hope it sounds as good as it looks. I think I'll make it my wallpaper for a while. Well done Rolando. Is there an ensemble I can use to play with?
yeah, i put it against the factory polyblep in a listening test with the FFT passing only the noise floor and there was no immediate difference between that and the 3rd order. i guess the difference i was hearing in the first order was just that the aliasing components were louder in comparison to the saw? it's strange how the first order subjectively gives me the impression of some kind of bad electronic monitor hum (when isolated) but i wasn't able to spot any difference in the spectrum when pulling it up to compare visually
i suppose that isn't too surprising actually, given the little jig that the aliasing part of the waveform does when separated, that it should have its own frequency stuff going on. doyeee. kinda like how if you have a periodic signal and filter out all harmonic content, leaving only the noise in between partials, the residual signal will still have a discernible imprint of the original period visible in the time domain
it's a good time for wavetables for reaktor users. Jan Brahler uploaded a microwave oscillator with a discontinuity sharpener, a whole different thing from what you've got going on but equally arresting
anyway, forgive my ignorance, but summing twice sounds like integrating and then integrating again, which sounds like the opposite of getting rid of DC. is the subtraction something that is done between summing steps?
wait a tick, i think i actually found it
this, right? ok, actually makes sense now. tell you one thing that is still grinding my gears: that three-sample phase shift across positive/negative frequencies. actually, looking at that to pin down what exactly i was confused about made it less confusing too... you are using separate table read/interpolate each for negative and positive frequencies?
i put it against the factory polyblep in a listening test with the FFT passing only the noise floor and there was no immediate difference between that and the 3rd order
3rd order and the factory polyblep are indeed quite similar.
The passband and first foldover are almost the same. The next foldovers are lower in the polyblep, but 3rd order has wider holes at multiples of 2pi, so it's cleaner below ~4k @ 44.1. The core polyblep uses a 7th order poly, quite a bit longer than the quadratic in the Välimäki paper. All these methods do essentially the same thing: they lowpass a continuous-time reconstruction of a signal. Integration is a very rough lowpass (a moving average), so it takes three passes to get something comparable with a 7th order bandlimited step. Generalizing such a lowpass to an arbitrary waveform takes much more complexity (and processing power).
given the little jig that the aliasing part of the waveform does when separated, that it should have its own frequency stuff going on. doyeee. kinda like how if you have a periodic signal and filter out all harmonic content leaving only the noise in between partials the residual signal will still have a discernible imprint of the original period visible in the time domain
That's because it's not noise, it's the rest of the harmonic spectrum successively folded at sr/2 and 0. It's inharmonic with respect to the signal when the fundamental is not a submultiple of sr, but it's still periodic.
summing twice sounds like integrating and then integrating again which sounds like the opposite of getting rid of DC. is the subtraction something that is done between summing steps? wait a tick, i think i actually found it
Yep, that's the place 🙂 Summation is discrete integration. DC is the constant offset of a signal, hence its average, sum / length. If you subtract the average of a signal from it, you're removing DC.
one thing that is still grinding my gears is that three sample phase shift across positive/negative frequencies. actually looking at that to pin down what exactly i was confused about made it less confusing too... you are using separate table read/interpolate each for negative and positive frequencies?
No, I'm not. I considered it, but it wouldn't make a... difference (cuack). This will be clearer with an example. Say you have a 5-sample saw:
1 2 3 4 5 🙂
The sum gives 15, and 15/5 = 3, so you subtract it from all values to remove DC:
-2 -1 0 1 2
Now you take the partial sums: 1st, 1st+2nd, etc:
-2 -3 -3 -2 0
Now you take differences: 1st - last, 2nd - 1st, etc:
-2 - 0, -3 - (-2), -3 - (-3), -2 - (-3), 0 - (-2) = -2 -1 0 1 2
You've integrated and differentiated, congratulations. Now suppose for some reason you need to read the sums backwards:
0 -2 -3 -3 -2
Taking differences:
0 - (-2), -2 - 0, -3 - (-2), -3 - (-3), -2 - (-3) = 2 -2 -1 0 1
See? You've shifted by 1 sample. If you had taken 3 sums, you'd have shifted by 3 samples. Ok so, no problem: if for some reason you need to read backwards, just start reading with a 1 sample offset.
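The example can be checked mechanically; this little snippet (mine, not from the ensemble) reproduces the forward and backward differences above:

```python
def diffs(F):
    """Wraparound first differences: 1st - last, 2nd - 1st, ..."""
    return [F[i] - F[i - 1] for i in range(len(F))]

F = [-2, -3, -3, -2, 0]    # partial sums of the centered saw -2 -1 0 1 2
assert diffs(F) == [-2, -1, 0, 1, 2]        # forward: the saw comes back
assert diffs(F[::-1]) == [2, -2, -1, 0, 1]  # backward: shifted by 1 sample
```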
🙂
It happens that in our case, the table is read backwards when the frequency is negative, and you go into negative frequency all the time when modulating phase. If you start reading with a 1-sample offset each time the frequency goes negative, you get a discontinuity at those points. I couldn't find a way to make that shift without breaking the differentiators' states (the memories of their previous sample), which causes clicks, and very big ones, because they happen around low frequencies, when the table is read slowly, and we take (table value diff) / (read position diff). Because at frequencies around 0 Hz I switch to plain interpolation, I can offset this read instead, which is stateless, so there's no risk of big clicks. Instead of just jumping at negative frequencies, which still causes a small discontinuity, I apply the offset progressively, from -sr/size to +sr/size. It's a hack but it works 🤷