NI Reaktor - Is there an abstract way to let voices interact?

DerMakrophag
DerMakrophag Member Posts: 6 Member
edited October 22 in Reaktor

Hi,

I would like to "abuse" the polyphony to easily set the number of certain repetitive elements in my building project, much like it is done for the classic vocoder that was part of the Reaktor 5 factory library. However, I would like to get some interaction between the voices.

For example, I want all even voices to receive the value of all odd voices, independent of the number of voices set.

As far as my knowledge goes, this is not possible, since the Voice From and Voice To modules are monophonic and only take a single integer input. The only way I can think of is to use the Iteration module and process this at audio rate. Would this work, or do you have another idea of solving this?
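
To make the intended routing concrete, here is an illustrative Python sketch (not Reaktor code; modeling the polyphonic signal as a plain list with one value per voice is purely my assumption for illustration):

```python
# Illustrative model: a polyphonic signal as a list, one value per voice.
# Voice numbers are 1-based in Reaktor, so list index 1 is voice 2, etc.
def route_odd_to_even(voices):
    """Each even-numbered voice receives the value of the odd voice before it."""
    out = list(voices)
    for i in range(1, len(voices), 2):  # indices 1, 3, ... are voices 2, 4, ...
        out[i] = voices[i - 1]          # even voice takes the preceding odd voice's value
    return out

print(route_odd_to_even([10, 0, 30, 0]))  # [10, 10, 30, 30]
```

This works for any voice count, which is exactly the property a single fixed-index Voice From/To connection lacks.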

Workarounds would of course be:

a) Building everything with the maximum allowed number of elements and then building a "selection routine" to reduce the number of active elements.

b) Building different variants of the ensemble and making the choice before using the effect.


Answers

  • ANDREW221231
    ANDREW221231 Member Posts: 349 Pro
    edited May 28

    audio/event tables save the day here

    they allow for arbitrary complexity in reading/writing/moving around voices.

    As far as my knowledge goes, this is not possible, since the Voice From
    and Voice To modules are monophonic and only take a single integer
    input. The only way I can think of is to use the Iteration module and
    process this at audio rate. Would this work, or do you have another idea of solving this?

    if you think about it, they allow reading and writing from any voice to any other voice, with the same functionality as the To/From Voice modules but none of the restrictions

  • ANDREW221231
    ANDREW221231 Member Posts: 349 Pro
    edited May 28

    i believe it would even be possible to use the event tables for audio, by reading/writing to them at audio rate

  • Studiowaves
    Studiowaves Member Posts: 640 Advisor

    If you find a way to tap into the voice assignment algorithm, let me know. I know about the To and From Voice modules but have no idea how the voices are organized. My guess is you have to send to a voice in order to receive from it and know which voice it is. But how do you know which voice you sent to in the first place? I used From Voice one time just so I could get the value from a single voice, but Paule showed me how to do that. I still don't really get how it's supposed to be used. Good luck

  • DerMakrophag
    DerMakrophag Member Posts: 6 Member
    edited June 6

    I guess this is possible if I think in a more iterative way. I need to check whether I can wrap my brain around this; it would go heavy on my building skills.

    Also, in tables I cannot set the number of rows and columns from the instrument panel, can I? It seems to be quite fixed unless changed in the settings?

    I am afraid that if you want to do real voice management as in "instrument voice", this is not possible. I also do not see a way to detect the currently active voice. Maybe you can somehow look at the gate info per voice? Not sure.

    However, since I am building an audio effect without MIDI, this is not necessary in my case. I built a raw structure with the Iteration module that does what I suggested above. It currently works like a voice shifter with a sum operator, i.e. the data from voice n gets added to voice n + 1. There is already a Primary module built for this, but it does not seem to work with event inputs.
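
    The "voice shifter with sum" behaviour can be sketched like this (an illustrative Python model, not the actual structure; the wrap-around at the last voice is an assumption, since what happens there is exactly what I still need to debug):

```python
# Sketch: each voice's value is added into the next voice.
# The wrap-around from the last voice back to voice 1 is an assumption.
def voice_shift_sum(voices):
    n = len(voices)
    out = list(voices)
    for i in range(n):
        out[(i + 1) % n] += voices[i]  # voice i feeds voice i + 1 (wrapping)
    return out

print(voice_shift_sum([1, 2, 3]))  # [4, 3, 5]
```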

    In my structure, there are some situations where it does not work; I need to debug it. For example, I have no idea what happens to the last voice. And I need to think about whether I scale this up or rather switch to tables. I would need several data matrices anyhow to be able to interact with the effect.

  • colB
    colB Member Posts: 988 Guru

    @DerMakrophag Maybe there is a better way to achieve your goal. It seems you like the idea of hacking the polyphony system, so instead of asking for help with your goal, you have asked for help with a hacky implementation that you like the idea of, which maybe isn't even viable without compromises and problems?

    If you are working with events not audio, then the cpu saving that hacking polyphony is often used for might not even be necessary…

    Maybe if you explained in more detail what you are actually trying to do, not how you are trying to do it, you might get more useful help?

  • DerMakrophag
    DerMakrophag Member Posts: 6 Member
    edited June 6

    I forgot to thank all of you for your input. Thanks!

    You are right, maybe I am chasing a solution that isn't one. I need some time to think about that table idea. As said earlier, I was inspired by the vocoder design.

    My idea is a living pattern generator: I want to have a matrix of interacting elements ("neurons") which resemble the behavior of neuronal cells. This network is fed with a clock to initiate activity. Activity then spreads through the network, and some additional features make this a living, self-triggering network. All or a certain subset of neurons are routed to the output, where the events can be used as MIDI notes.
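
    To pin down the idea, here is a hedged sketch of such a network as a ring of leaky integrate-and-fire cells (all names, constants, and the ring topology are illustrative assumptions, not the actual ensemble):

```python
# Illustrative "neuron" network: leaky cells in a ring, driven by a clock.
# LEAK, THRESHOLD and WEIGHT are arbitrary example constants.
LEAK, THRESHOLD, WEIGHT = 0.9, 1.0, 0.6

def step(state, clock_pulse):
    """Advance the network one clock tick; returns (new_state, fired_flags)."""
    n = len(state)
    fired = [s >= THRESHOLD for s in state]  # which cells fire this tick
    new = []
    for i in range(n):
        s = 0.0 if fired[i] else state[i] * LEAK  # reset after firing, else leak
        s += WEIGHT * fired[(i - 1) % n]          # spike input from the previous cell
        if i == 0:
            s += clock_pulse                      # the external clock feeds cell 0
        new.append(s)
    return new, fired
```

    Calling step repeatedly with a steady clock makes activity spread around the ring, which is the "living" behaviour described above; the per-cell fired flags would become the MIDI note events.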

    I have a single-cell macro which works fine. Having a handful of cells also works. Now I want to try scaling it up without having to copy and connect x macros. This is of course an experiment with no predictable outcome. Maybe a handful of elements is much more useful, music-wise.

    Yes, and the connections between the elements shall be configurable, and certain properties also need to differ between cells to make it interesting. At least a set of algorithms shall be selectable (like in FM synthesis). The simplest model is as above, a series connection ("voice shift").

    So putting aside whether I can set the size of the network on the panel or not:

    My "neuron" algorithm needs to be iterated to make it work with tables, right?

    Or would you rather copy and paste macros?

  • ANDREW221231
    ANDREW221231 Member Posts: 349 Pro
    edited June 9

    I guess this is possible if I think in a more iterative way. I need
    to check whether I can wrap my brain around this; it would go heavy on
    my building skills.

    nah mate, couldn't be easier! here's an example using an audio table, with an event table set up in draw mode to control a voice offset from -4 to 4
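
    for anyone reading along, the logic of that voice-offset trick can be sketched like so (python as pseudocode, with the table modeled as a plain list and wrap-around at the edges as an assumption):

```python
# Sketch: each voice writes to its own table cell, then reads back the
# cell at (voice + offset). Wrap-around at the edges is an assumption.
def read_with_offset(table, offset):
    n = len(table)
    return [table[(i + offset) % n] for i in range(n)]

print(read_with_offset([10, 20, 30, 40], 1))   # [20, 30, 40, 10]
print(read_with_offset([10, 20, 30, 40], -1))  # [40, 10, 20, 30]
```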

  • ANDREW221231
    ANDREW221231 Member Posts: 349 Pro
    edited June 9

    also there's another neat trick with tables: if you save a table in the function pane you can then copy and paste it to get multiple clients of the same table, allowing you to bypass voice restrictions between instruments. meaning you could write to a table polyphonically then access it monophonically elsewhere in another client, and vice versa

    Also, in tables I cannot set the number of
    rows and columns from the instruments panel, can I? It seems to be quite
    fixed, unless changed in the settings?

    yeah, this is not a modulatable parameter unfortunately, it can only be changed in settings

  • Studiowaves
    Studiowaves Member Posts: 640 Advisor

    It seems you are attempting artificial intelligence. Does this have any learning capabilities? I know little about neural networks, but aren't they supposed to somehow figure things out before modulating the sound, like a trumpet player does with his lips? Hope it turns out well.

  • ANDREW221231
    ANDREW221231 Member Posts: 349 Pro

    i was just thinking about a neural network in reaktor tonight. i think it would be actually pretty doable with the right combination of tables and iteration

    but one thing i can confidently state is you would never want to train a model in reaktor. you would certainly do it with some external program and import the results as tables

  • DerMakrophag
    DerMakrophag Member Posts: 6 Member

    Hi,

    No, as I said, the idea is not to train something, but to have an organic system which yields dynamic but not random results. In the end, simplified, it's just a bunch of envelopes and threshold-dependent triggers. Something like: put a stupid, straight clock in, get a living, evolving beat instantly.
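
    As a rough illustration of the "straight clock in, evolving beat out" idea (a Python sketch with made-up constants, not the actual structure):

```python
# Sketch: a leaky accumulator driven by a steady clock fires a trigger
# whenever it crosses a threshold. Subtracting the threshold instead of
# resetting keeps the phase drifting, so the trigger spacing evolves.
# All constants are made-up examples.
def clock_to_triggers(ticks, gain=0.37, decay=0.85, threshold=1.0):
    level, triggers = 0.0, []
    for t in range(ticks):
        level = level * decay + gain  # envelope: leak plus clock input
        if level >= threshold:        # threshold-dependent trigger
            triggers.append(t)
            level -= threshold
    return triggers

print(clock_to_triggers(16))  # [3, 6, 10, 13]: steady clock in, uneven triggers out
```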

    Currently my focus is on another project, but I want to keep you updated on this.

    Best,

  • DerMakrophag
    DerMakrophag Member Posts: 6 Member

    No, this is not what I want to do. I merely want to simulate real neuronal cells as in biology, and use this to achieve interesting patterns.

    See my post above.

  • ANDREW221231
    ANDREW221231 Member Posts: 349 Pro

    did you see the attachment i uploaded? it shows a way to arbitrarily route voices. though you could also do the same thing with the voice shifter module. really the only thing the voice shifter can't do that the tables can is reassign events across voices

  • DerMakrophag
    DerMakrophag Member Posts: 6 Member

    No, shame on me, I still have to check your example! I will definitely have a look. Sounds promising.

  • Studiowaves
    Studiowaves Member Posts: 640 Advisor

    I'd like to see a decent ai drum machine that could play along like we were having a jam session. That would be fun.

This discussion has been closed.