R5 library ladder filter (look at this crazy thing)
as a hands-on learner in pursuit of figuring out how stuff works, one great method is to look at older stuff. looking at this older version of the core ladder filter is pleasantly baking my noodle in this regard
first thing that jumped out was it seems to be a 4 pole with 3x oversampling, not 3 pole with 4x as the macro info states. lol.
also the decision to oversample a saturating filter while only passing the unsaturated signal through to the subsequent stages seems a bit silly unless it was necessary for some reason
but the true head scratcher and thing i'm most interested to discuss is the method of oversampling itself. instead of a polyphase filter it elects for linear interpolation?
but the only thing i can point to that seems obviously wrong is the lack of any kind of decimation filter (not pictured: doesn't exist)
aside from all that though, it points to something about the method of oversampling filters employed today that i have always found confusing: the chaining of OBC elements between stages
i get that z-1 elements need to share state for oversampling to work properly, but how does that work when, say for four stages, there are four chained "write" elements all being clocked at sample rate. which "state" is being passed through the obc chain? do they fight over it? take turns??? hopefully someone knows, to me it simply makes no sense
Comments
-
Quote:"aside from all that though, it points to something about the method of oversampling filters employed today that i have always found confusing: the chaining of OBC elements between stages
i get that z-1 elements need to share state for oversampling to work properly, but how does that work when, say for four stages, there are four chained "write" elements all being clocked at sample rate. which "state" is being passed through the obc chain? do they fight over it? take turns??? hopefully someone knows, to me it simply makes no sense"
they don't fight, it's just an unrolled loop!
for 3x oversampling, for each sample rate tick, the process needs to be repeated three times.
The crucial part is the state memory of the process which is usually some OBC like a z^-1, or some delay line, because that's the part that all the unrolled iterations need to share.
As long as the order is explicit (connected in series), and the last iteration is fed back to the first via a z^-1 so that the next tick starts its unrolled loop where the last finished, everything is just fine. no battles.
…the order of processing of the filter is left to right, and the order of processing of the unrolled loop is top to bottom…
So the top row is iteration 1, it passes its output to the second row/unrolled iteration, applying res, and input2… all good.
I suppose the confusing part might be that the internal state of the individual poles is just connected vertically, so e.g. pole 2 of the top row connects its state directly to pole 2 of the second row. But if you think it through, it makes sense. That state value is only relevant to the second pole of the whole oversampled filter, so it can just pass down a column without breaking the logic of the process. It's passing it to the next 'iteration'. And for the unrolled part, it doesn't need to store it in OBC, it can just pass it on directly. It only needs to be stored after the last unrolled iteration so it can be preserved for the next audio cycle…
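Here's the same idea as a Python sketch (obviously not Reaktor code, just the structure): a 3x oversampled one-pole lowpass written as an unrolled loop. The `state` variable plays the role of the shared z^-1 / OBC memory; each unrolled iteration hands it directly to the next, and only the value left after the last iteration is stored for the next sample-rate tick.

```python
def one_pole(x, state, g):
    """One iteration of a one-pole lowpass: y[n] = y[n-1] + g*(x - y[n-1])."""
    y = state + g * (x - state)
    return y, y  # output, new state

def tick_3x(x, state, g):
    """One sample-rate tick = three unrolled iterations of the same process."""
    y, state = one_pole(x, state, g)   # iteration 1 (top row)
    y, state = one_pole(x, state, g)   # iteration 2: state passed on directly
    y, state = one_pole(x, state, g)   # iteration 3 (bottom row)
    return y, state                    # only now is state "written back" to the z^-1

def tick_loop(x, state, g, n=3):
    """Reference: the same thing as a plain loop running n times per tick."""
    y = state
    for _ in range(n):
        y, state = one_pole(x, state, g)
    return y, state

state_a = state_b = 0.0
for x in [1.0, 1.0, 0.5, 0.0]:
    ya, state_a = tick_3x(x, state_a, 0.2)
    yb, state_b = tick_loop(x, state_b, 0.2)
    assert ya == yb  # the unrolled and looped forms are the same computation
```

The unrolled form and the plain loop are identical computations, which is the whole point: no battles over the state, just an explicit processing order.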
-
suppose it's just down to read/write order? the write to the left of the chain gets read immediately, and when the read precedes the write in the chain it works like a z-1
i'm still a little hazy on how the state propagates through the structure. is it always passed between stages? is a given state that's written at a given stage ever read out again at that stage, or is it only passed down to the next stage? or put another way: in regular obc chaining, information propagates left to right, but when chained like this, information propagates vertically instead? is that correct?
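to check my own understanding, here's a tiny python sketch (hypothetical, not reaktor) of the read/write order thing: one shared memory cell processed in a fixed order each tick, where the only difference is whether the write or the read comes first.

```python
# Toy model of a single shared memory cell. If the write happens earlier in
# the processing order than the read, the read sees the *current* tick's
# value (direct pass-through). If the read comes first, it sees the
# *previous* tick's value, i.e. it behaves like a z^-1.

class Cell:
    def __init__(self):
        self.value = 0.0
    def write(self, x):
        self.value = x
    def read(self):
        return self.value

def tick_write_then_read(cell, x):
    cell.write(x)       # write is earlier in the processing order...
    return cell.read()  # ...so the read sees this tick's value

def tick_read_then_write(cell, x):
    y = cell.read()     # read is earlier, so it sees last tick's value
    cell.write(x)
    return y

a, b = Cell(), Cell()
xs = [1.0, 2.0, 3.0]
print([tick_write_then_read(a, x) for x in xs])  # [1.0, 2.0, 3.0]
print([tick_read_then_write(b, x) for x in xs])  # [0.0, 1.0, 2.0] -- one tick late
```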
well, i thought i understood it well enough to do it myself, and something here is terribly, terribly wrong. i've checked everything i could possibly think of and cross-referenced against other examples of oversampled filters, but not only is it not oversampling (checked with an aliasing test), it's not even a filter anymore !!
it blows up with any resonance over around .3…. the only thing i'm unsure of is whether the SR bundle thing is correct or not. other than that i have no idea what i've done so horribly wrong
maybe you'll have an idea?
-
for reference here is the qnado filter from the UL with 8x OS
and here's my attempt to get 8x. one thing i find strange: i was able to confirm the state was propagating correctly just like you said, for both my attempt and Qnado. as in, the sample written at the end of one stage was the same one being read out in the next stage
wonder if it's something to do with using the ZDF framework??
to save you the work of figuring it out: i had neglected to connect the output of the filter to the correct module, and forgot to add saturators to the feedback stages. i fixed both of those things, and it is still not even close to being a filter!!
i seriously don't understand what's happening here. in this new version i have a single stage isolated, and when plugged in by itself it's totally a filter. string eight of them together and they stop being a filter!
well, at least this one shouldn't be blowing up anyway
noticing even more things that are stupid, so it would probably be prudent to wait for the next update before troubleshooting
-
I would be trying a simpler 2x version first, get that working, then extend to 4x then up if really needed.
-
that's what i'd started with, and been using for a while
then decided to test that r5 ladder filter with an aliasing test, it did not pass
then decided to test my 2x filter (the one i've been using) also saw no reduction
so i scaled it up to 8x to be absolutely sure, and sure enough there was no aliasing reduction. anyway, i ended up getting it figured out. not sure quite how lol. one of the more notable problems was the OBC chains being in reverse order between the input and output lolol
reaktor wouldn't even try to compile the 8x version lol, had to scale it down to 4x but that ended up being plenty. noticed a distinct drop in aliasing, so i'm sure it's actually working properly
now i'm happily back to 2x. it was already getting OBX8 sounds i was extremely pleased with before oversampling. think you may actually be impressed with how it sounds. will maybe post a clip later
-
oh yeah, did you get a look at the "oversampling" in the r5 filter? can't imagine it counts as true oversampling but it also looks like it would probably do… something?
-
I'd like to know how oversampling is even possible. I always thought the sample rate needed to be four times higher, and everything was underclocked except the stuff that is four times overclocked. You guys keep mentioning iteration. Seems like it spits out clocks or events higher than the sample rate. But yeah, I'd like to learn how it's done.
-
ok… imagine cloning a structure of some audio process like a soft clipper 4 times
running the same calculation on the same input signal, so nothing happens (except wasting CPU cycles)
BUT, if you first split the signal with something called a polyphase filter, you get four separate versions of the input that are slightly different. you could think of it as generating the "in-between" samples you'd be getting with normal, classically overclocked oversampling like you were describing. so now when you feed this split signal into the four instances of the soft clipper, they all get a slightly different result
then these signals are combined with an inverse filterbank, basically resampling back to the base sample rate, thus giving the result
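to make the shape of the process concrete, here's a very crude python sketch. the "split" here is just linear interpolation between the previous and current sample, and the recombine is a plain average, so it's nowhere near a real polyphase filterbank, but the structure is the same: split into branches, run the same nonlinearity on each, combine back at base rate.

```python
import math

def soft_clip(x):
    # stand-in nonlinearity for the cloned soft clipper instances
    return math.tanh(x)

def process_4x(samples):
    """Crude sketch only: linear interpolation stands in for the polyphase
    split, and a plain average stands in for the inverse filterbank."""
    out = []
    prev = 0.0
    for x in samples:
        # four branches = four evenly spaced points between prev and x,
        # i.e. the "in-between" samples each parallel instance would see
        branches = [prev + (x - prev) * k / 4.0 for k in (1, 2, 3, 4)]
        clipped = [soft_clip(b) for b in branches]
        out.append(sum(clipped) / 4.0)  # recombine back at the base rate
        prev = x
    return out

print(process_4x([0.0, 4.0]))  # second value is well under tanh(4), since
                               # the clipped in-between points get averaged in
```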
-
So the result gets clocked out. I guess if the result is closer than what it was, then there's some kind of progress. So you're basically saying it's just a way to approximate the value as if it were truly oversampled.
-
it's literally the exact same thing as oversampling, only multiplexing the calculation across parallel instances instead of doing it in time
the only possible source of error, i guess, would be with the filters. i don't know that i really understand how those work. i guess they must be some kind of allpass? but oversampling an oscillator doesn't even need a filter: you can just add an offset to each successive phase wrapper to calculate the inter-sample values directly
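for the oscillator case, here's a little python sketch of what i mean: a naive saw with a phase accumulator where each sub-sample just gets a fraction of the phase increment added, so the inter-sample values are exact instead of interpolated (the plain-average "decimation" at the end is just a placeholder, not a real filter):

```python
def naive_saw(phase):
    # naive (aliasing) sawtooth from a 0..1 phase value
    return 2.0 * (phase % 1.0) - 1.0

def osc_oversampled(freq, sample_rate, n_samples, os=4):
    inc = freq / sample_rate          # phase increment per base-rate sample
    phase = 0.0
    out = []
    for _ in range(n_samples):
        # the os sub-samples for this tick, each offset by inc/os:
        # no input filter needed, the in-between phases are computed exactly
        subs = [naive_saw(phase + k * inc / os) for k in range(os)]
        out.append(sum(subs) / os)    # placeholder decimation: plain average
        phase = (phase + inc) % 1.0
    return out

print(osc_oversampled(1.0, 4.0, 4, os=2))  # [-0.875, -0.375, 0.125, 0.625]
```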
-
So, there is a way to truly oversample by storing a series of values in a parallel fashion. For instance, a value is clocked in, and after the next value is clocked, there is a gizmo that stores 3 values if it's a 2x oversample: the first clocked-in value in one storage cell, the second clocked-in value in another storage cell, and an intermediate (calculated average) in the 3rd storage cell. So now we have an interpolated value in the middle, which is an educated guess at a true sample taken if the sample rate were actually doubled. For 3x oversampling there will be two additional storage cells containing 2 interpolated values instead of 1, 4x oversampling will have 3 interpolated points stored in a parallel fashion, and so on.

Ok, this produces a linear ramp between samples. I suspect there's another gizmo that shapes these values based on the slew rate of the signal and fashions the interpolated points into more of a sinusoidal pattern instead of a linear distance between all interpolated points.

Ok, I can visualize this working, because the modules following this will deal with all of the interpolated points on the next clock. In a sense it's processing the interpolated points in a parallel fashion on each clock cycle, so in the end it gets compiled into a series of oversampled data. So the filters and whatnot using the interpolated data are actually processing 2, 3 or 4 parallel data streams depending on the oversample multiple used. I think I'm starting to get the idea. In the end, when the data gets clocked back out at the fixed sample rate, the values handed over to the serial clock can be slightly more accurate. It actually gets downsampled, but can end up with results like filters working at a multiplied clock frequency. I think that's about the extent of it. Well, we do know that filters are borderline decent if they don't exceed one quarter of the clock frequency.
So by oversampling, it equates to filtering at higher sample rates. That I can see, and if the interpolation scheme can come really close to a true higher sample rate and do a good job of simulating what input waveforms actually do, then it probably would work better.
-
yeah, you got the idea. i didn't realize that was actually how they worked until i looked just now; the input filters seem to be nothing more than three unit delays and interpolation for inter-sample values
the reconstruction filters are another matter entirely though, they're pretty steep FIR filters, essentially band-limiting all the ultrasonic content. (even though each parallel instance is still running at the base sample rate, so there's no reclocking needed back to the base sample rate; mixing the four signals gives the oversampled result instead.) it's neat that the reconstruction filter can "see" the full bandwidth of the signal and filter off only what is above nyquist, even though no signal is actually running above nyquist
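for the curious, a reconstruction filter along those lines could be sketched in python as a windowed-sinc FIR lowpass with its cutoff at the base-rate nyquist (the tap count and hann window here are totally made up, just for illustration):

```python
import math

def windowed_sinc_lowpass(num_taps, cutoff):
    """cutoff as a fraction of the (oversampled) rate, 0 < cutoff < 0.5.
    Returns linear-phase FIR taps normalized for unity DC gain."""
    mid = (num_taps - 1) / 2.0
    taps = []
    for n in range(num_taps):
        t = n - mid
        # ideal lowpass impulse response (sinc), handling the center tap
        h = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        # Hann window to tame the ripple from truncating the sinc
        w = 0.5 - 0.5 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(h * w)
    scale = sum(taps)
    return [t / scale for t in taps]

# for 4x oversampling, the base-rate nyquist sits at 0.5 / 4 = 0.125 of the
# oversampled rate; this filter removes the ultrasonic content above it
# before the signal is brought back down to the base rate
taps = windowed_sinc_lowpass(63, 0.125)
```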
here's a saturator i fixed up for maximum antialiasing, it uses ILO and oversampling
but you can dig around with the before and after filters, it's pretty easy to get a sense for what's going on
-
I have to admit it didn't seem possible at first, because within Reaktor nothing happens at the same time. At first it makes you think parallel processing. However, what I didn't realize is it can do many things in between sample clocks, and the result is presented at the next sample clock. When the CPU usage climbs you know it's doing a lot in between samples. Then there is a time delay between the input and output, and that makes it possible to interpolate between the successive captured samples after being read by Reaktor on the clock. So in essence, internal oversampling most definitely has an input-output latency buildup depending on the oversampling type: 2x, 3x, 4x etc. My initial guess is a 4x oversampling system has an inherent latency of 4 samples. I bet it does, but if not then I'll really be blown away. lol I'll check it out and see if I can understand it. This will be a good test for dementia. lol
-
Latency for oversampling depends on the interpolation and reconstruction filters.
You could do 8x oversampling, and have interpolation that only needs 2 samples of latency. IIRC, the higher the oversampling factor, the less steep the reconstruction needs to be too, so you probably don't need anything like 8 samples there either for 8x oversampling.
However, in my experience, the benefits diminish rapidly when you go above 4x…
For ILO stuff, the best compromise 'sweet spot' seems to be linear ILO with 2x oversampling, although 4x becomes a noticeable improvement when there are many stages… like in an amp sim type structure.
You probably only need to go to 8x or above if you are going full brute force with a completely basic aliasing algorithm, and the only anti aliasing is coming from the oversampling.
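As a back-of-envelope Python sketch (the tap counts here are made up, purely for illustration): a linear-phase FIR of N taps delays by (N-1)/2 samples at the rate it runs at, so dividing by the oversampling factor expresses that latency in base-rate samples.

```python
def fir_latency_base_samples(num_taps, os_factor):
    """Group delay of a linear-phase FIR running at the oversampled rate,
    expressed in base-rate samples."""
    return (num_taps - 1) / 2.0 / os_factor

os = 8
interp_taps, recon_taps = 17, 33   # hypothetical filter lengths
total = (fir_latency_base_samples(interp_taps, os)
         + fir_latency_base_samples(recon_taps, os))
print(total)  # 3.0 base-rate samples of latency for this 8x setup
```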
===================================================================================
In terms of 'parallel' processing, core processes with 'logical simultaneity'… read the manuals ;). So, apart from special cases, macros processing values with the same event source can be thought of as simultaneous.
In reality, Primary and core are just filling buffers with data that is then processed in chunks by the VSTi API in a DAW, or by the audio sub-systems of the OS (and receiving buffers of data in the case of audio input). The idea that primary is depth first one sample/event at a time, or that core is doing anything simultaneously, or serially one tick at a time is an illusion. It is a logical model, but it's not what is really happening.
Reaktor definitely doesn't get a nice queue of individual input samples one by one from the OS, process them individually and send them back out one by one to the OS. It's just easier for us to pretend that's what is happening.
-
Yeah, the OS does toss data around in packets. But Reaktor still clocks the data in at the sample rate and does all of its processing between samples. I think it starts out like that, and the interpolators start filling in the dots after the sample value is clocked. So if a 4x oversampling scheme is used there are three extra processes going on, one for each interpolated data value. So in effect there are four parallel processes going on instead of one. So I see your point about overall latency, as you can interpolate even 16 points before the next audio clock. Not sure how the reconstruction works, but Reaktor still only clocks out one value per sample clock. I think the ASIO buffer deals with the operating system, and the operating system sends data to the sound card. That must be what ASIO buffer overruns are all about.