The Korg Logue User Oscillator Programming Thread
Old 14th April 2020
  #1
Lives for gear
 
Synthpark's Avatar
The Korg Logue User Oscillator Programming Thread

I read that some people might be interested in, or are already active in, programming their own user oscillators for the Prologue/Minilogue XD. If there is interest, this could be a common thread to share and exchange information, since Korg does not provide technical support and leaves some things rather unclear (or I overlooked the information).

In my case I installed MSYS2 on a Win10 machine together with the required tools, and was able to rebuild the Waves oscillator and the test oscillators.

To make the best of it, including oversampling, one has to be aware of all the hardware limitations.

There is no tool that tells you whether the code violates timing restrictions.
The voice chip also controls the analog part, so if the code is too slow it might also affect the behavior of the envelopes.

Not sure which processor is actually used for the voices. These guys

https://www.elektronauts.com/t/korg-...atrol/55632/49

claim that it is an ARM Cortex STM32F446xC/E:

https://www.mouser.com/datasheet/2/3...6mc-956216.pdf

If this is right, then according to the spec it has an FPU, and the following tips might be helpful.

https://community.arm.com/developer/...--m4-processor

Currently I am running into the following problem: the SDK says the code should be optimized for 64 frames (samples), but it could also support 32 or 16 if the user wants. There is no hint as to how big the data chunk they actually use is, or where this is defined.

For example: since you get only one pitch value per function call, you have to interpolate between the last value and the current one in order to realize a nice pitch sweep. That means a division for the linear approximation if you have to support arbitrary frame sizes. If you know the frame chunk is limited to the powers of two in question (64, 32, 16), it becomes a shift operation and takes much less time.
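To make the division-vs-shift point concrete, here is a minimal sketch (helper names are my own, not from the SDK): the per-sample pitch step for a linear ramp across one buffer.

```c
#include <stdint.h>

/* Per-sample pitch step for a linear sweep across one buffer.
   With an arbitrary frame count you need a real division; if the count
   is known to be a power of two (16/32/64), an arithmetic shift does it.
   Note: >> rounds toward -infinity for negative deltas while / rounds
   toward zero; for pitch smoothing that 1-LSB difference is harmless. */
static inline int32_t pitch_step_div(int32_t last, int32_t now, uint32_t frames) {
    return (now - last) / (int32_t)frames;   /* general case */
}

static inline int32_t pitch_step_shift(int32_t last, int32_t now, uint32_t log2_frames) {
    return (now - last) >> log2_frames;      /* frames = 16/32/64 -> shift by 4/5/6 */
}
```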

The next step for me is to understand how much of the provided DSP libraries is really required to make the best use of this specific CPU, or whether one can more freely write one's own code.

So as I said, if people are running into common problems, this could be a nice place to exchange. I haven't found any other thread on the net. Maybe I just missed it; if one exists, please point to it.
Old 14th April 2020
  #2
I would hazard a guess that the processing block length stays a power of 2. So a simple shift or multiplication by a reciprocal operation should suffice and no actual division is necessary.

The microcontroller in question is just that: a standard microcontroller. So you should be free to use any DSP code you come up with. It would be weird if you were forced to use Korg's DSP libraries; there should be no HW requirement to do so. The only thing that comes to mind is the data input and output to/from the oscillator algorithm. Those are likely very specific operations, but everything that happens between those two points should be irrelevant as long as you don't corrupt the output data. (If someone spots an error in my thinking, please feel free to correct me.)

I've designed a HW music-making machine which used a 100 MHz STM32F4. Developing the thing was mostly like working on an old-school computer: it's just a processor with low megahertz and you can get every single CPU cycle out of it for your own purposes. In Korg's case you're only allowed to use part of the CPU cycles, but it should still be the same thing. So all regular software development practices apply; just be sure to use only static memory areas, i.e. no dynamic memory allocations of any kind.
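The "static memory only" rule can be sketched like this (sizes and names are illustrative, not from the SDK): all state is a fixed-size global, so the footprint is known at link time and there is no heap.

```c
#include <stdint.h>

#define DELAY_LEN 1024u   /* power of two so the index wraps with a mask */

typedef struct {
    float    delay_line[DELAY_LEN];
    uint32_t write_idx;
} osc_state_t;

static osc_state_t s_state;   /* zero-initialized, lives in .bss, no malloc() */

static inline void state_push(float x) {
    s_state.delay_line[s_state.write_idx] = x;
    s_state.write_idx = (s_state.write_idx + 1u) & (DELAY_LEN - 1u);
}
```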

BTW, the STM32F4's FPU is fast. You definitely want to use it if floating-point math is easier than fixed-point math for the algorithms you want to develop.
Old 14th April 2020
  #3
This could come in handy:

Quote:
// Initialize the SysTick timer (core peripheral mapped at 0xE000E010)
volatile uint32_t* p_systick = (volatile uint32_t*) 0xE000E010; // SysTick base address
p_systick[1] = 0x00FFFFFF; // SYST_RVR: reloaded into the counter every time it hits zero
p_systick[2] = 0x00000000; // SYST_CVR: any write clears the current count to zero
p_systick[0] |= 1 << 0 |   // SYST_CSR: start the SysTick timer
                1 << 2;    // Clock source: processor clock (AHB)


// Calculate the number of cycles since the last reset of the counter.
// Compare this against the total number of CPU cycles available per frame.
const uint32_t cycles_used_so_far = 0x00FFFFFF - p_systick[2];
p_systick[2] = 0x00000000; // reset the counter (writes to SYST_CVR clear it to zero)
The above initializes the CPU cycle counter found inside every ARM Cortex-M4. It then calculates how many cycles have passed since the counter was last reset, and resets it again. This gives you cycle-accurate CPU usage measurements.
Old 14th April 2020
  #4
Lives for gear
 
SkyWriter's Avatar
Quote:
Originally Posted by Synthpark View Post
Currently I am running into the following problem: The SDK says the code should be optimized for 64 frames (samples), but it could also support 32 or 16 if the user wants to. There is no hint how big the data chunk actually is what they use and where this is defined.
One of the problems I'm having is finding SDK info other than reviewing code.

Looking at Gekart's commented wave.cpp (https://github.com/gekart/logue-sdk/...aves/waves.cpp)

This is the first time through this code so this might be wrong:
The number of 'frames' is passed to OSC_CYCLE for rendering. The y/y_e loop calculates one 'frame' per pass. A frame is a single sample, and all user processing is done at the sample level. Can't you calculate glide at the sample level (48 kHz) and derive time from that?

Is this wrong?

Update: OK, maybe I read the glide question wrong. Assuming 'nice pitch sweep' means portamento, @kraku is right: you just need the times of the last and current notes.
Old 14th April 2020
  #5
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by SkyWriter View Post
One of the problems I'm having is finding SDK info other than reviewing code.

Looking at Gekart's commented wave.cpp (https://github.com/gekart/logue-sdk/...aves/waves.cpp)

This is the first time through this code so this might be wrong:
The number of 'frames' is passed to OSC_CYCLE for rendering. The y/y_e loop calculates one 'frame' per pass. A frame is a single sample, and all user processing is done at the sample level. Can't you calculate glide at the sample level (48 kHz) and derive time from that?

Is this wrong?

Update: OK, maybe I read the glide question wrong. Assuming 'nice pitch sweep' means portamento, @kraku is right: you just need the times of the last and current notes.
Look, this is how I understand it and the problem I have:

There is a readme.md (install any md-reader for your browser! I use Firefox.)

Quote:
Core API

Here's an overview of the core API for custom oscillator/effects.
Oscillators (osc)

Your main implementation file should include userosc.h and implement the following functions.

void OSC_INIT(uint32_t platform, uint32_t api): Called on instantiation of the oscillator. Use this callback to perform any required initializations. See inc/userprg.h for possible values of platform and api.

void OSC_CYCLE(const user_osc_param_t * const params, int32_t *yn, const uint32_t frames): This is where the waveform should be computed. This function is called every time a buffer of frames samples is required (1 sample per frame). Samples should be written to the yn buffer. Output samples should be formatted in Q31 fixed point representation.

Note: Floating-point numbers can be converted to Q31 format using the f32_to_q31(f) macro defined in inc/utils/fixed_math.h. Also see inc/userosc.h for user_osc_param_t definition.

Note: Buffer lengths up to 64 frames should be supported. However you can perform multiple passes on smaller buffers if you prefer. (Optimize for powers of two: 16, 32, 64)

void OSC_NOTEON(const user_osc_param_t * const params): Called whenever a note on event occurs.

void OSC_NOTEOFF(const user_osc_param_t * const params): Called whenever a note off event occurs.

void OSC_PARAM(uint16_t index, uint16_t value): Called whenever the user changes a parameter value.
So this is the point, and it is really confusing. You have to write a function OSC_CYCLE, and this function is called by the system. So why would YOU decide on the size of the buffer? The buffer is given by the system: "frames" is a parameter handed to you, not chosen by you, unless there is some parameter to change somewhere else. And if that parameter is hidden, how could you optimize for the buffer length, as suggested in the API documentation?

Regarding the pitch thing: if 64 samples are requested, you have to use the old stored pitch value, treat the new pitch value as an end value, and interpolate between them. Otherwise you apparently run into the mistake the guy here made with pluck1. What he probably did was treat the pitch parameter as a constant.

https://www.youtube.com/watch?v=iK1c...ture=emb_title

He advertises pluck v2 as an improvement for pitching up and down smoothly, but that flaw was actually his own fault. Either way, you have to pay.

So where does pitch come from?

If you look at userosc.h:

Quote:
typedef struct user_osc_param {
    /** Value of LFO implicitely applied to shape parameter */
    int32_t shape_lfo;
    /** Current pitch. high byte: note number, low byte: fine (0-255) */
    uint16_t pitch;
    /** Current cutoff value (0x0000-0x1fff) */
    uint16_t cutoff;
    /** Current resonance value (0x0000-0x1fff) */
    uint16_t resonance;
    uint16_t reserved0[3];
} user_osc_param_t;
Your routine is called for a whole chunk, but the pitch is only a single number, not a recorded vector. So the right treatment is to store the old value from the previous call and use it for simple linear interpolation.
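A minimal sketch of that scheme (the struct and function here are simplified stand-ins, not the SDK's exact user_osc_param_t / OSC_CYCLE signatures):

```c
#include <stdint.h>

/* Simplified stand-in for the SDK param struct; only pitch is kept. */
typedef struct { uint16_t pitch; } osc_param_t;

static float s_last_w0;   /* remembered across calls, as described above */

/* Hypothetical stand-in for the SDK's osc_w0f_for_note(). */
static float fake_w0_for_pitch(uint16_t pitch) { return (float)pitch * 1e-6f; }

/* Fill w0_out with a per-sample phase increment that ramps linearly from
   the previous call's pitch to the current one across the buffer. */
void cycle_sketch(const osc_param_t *params, float *w0_out, uint32_t frames) {
    const float w0_new = fake_w0_for_pitch(params->pitch);
    const float step   = (w0_new - s_last_w0) / (float)frames; /* shift if pow-2 */
    float w0 = s_last_w0;
    for (uint32_t i = 0; i < frames; ++i) {
        w0 += step;
        w0_out[i] = w0;   /* last element lands on w0_new, up to rounding */
    }
    s_last_w0 = w0_new;   /* store for the next call */
}
```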

Last edited by Synthpark; 14th April 2020 at 04:41 PM..
Old 14th April 2020
  #6
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by kraku View Post
This could become handy:



The above initializes the CPU cycle counter which is found inside every ARM Cortex M4. Then it calculates how many cycles has passed since the counter was reset the last time and resets the counter again. This gives you cycle accurate CPU usage measurements.
That's cool, nice input! Unfortunately there is no debug interface or any other form of output when using the Prologue itself as the development platform, so the code has to be tried out on the synth directly. I don't have the dev board, which was limited and available only for a short period of time. The code can be tested in some other environment, though.
Old 14th April 2020
  #8
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by kraku View Post
BTW. STM32F4's FPU is fast. You definitely want to use it if floating point math is easier than fixedpoint math for the algorithms you want to develop.
What if you use the integer ALU without the FPU for some tasks? Isn't that even faster? There are many cases where uint32_t is quite sufficient, for example computing a triangle wave or some FIR filter. The phase resolution is also well covered with 32 bits. I am coming from the hardware side of things (FPGA).
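For illustration, an integer-only oscillator along those lines (a sketch; constants and names are my own): a uint32_t phase accumulator wraps for free at 2^32, and a triangle falls out of a compare and a couple of shifts.

```c
#include <stdint.h>

static uint32_t s_phase;   /* 32-bit phase accumulator, wraps naturally */

/* One sample of a triangle wave, integer math only.
   phase_inc = frequency / sample_rate * 2^32 (as a uint32_t).
   Output is unipolar 0..2^30-1; offset/scale to Q31 as needed. */
int32_t tri_next(uint32_t phase_inc) {
    s_phase += phase_inc;
    return (s_phase < 0x80000000u)
        ? (int32_t)(s_phase >> 1)                   /* rising half  */
        : (int32_t)((0xFFFFFFFFu - s_phase) >> 1);  /* falling half */
}
```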

In the link above they mention that the FPU registers have to be loaded first. That doesn't sound like a single-cycle operation.
Old 14th April 2020
  #9
Lives for gear
 
SkyWriter's Avatar
Quote:
Originally Posted by Synthpark View Post
Your routine is called for a whole chunk, but the pitch is only one number, not a recorded vector. So treating it right is to store the old value from the previous call and use it for simple linear interpolation.
I guess the problem I'm having is that I would not expect the pitch params for note or fine pitch to change between OSC_CYCLE calls unless OSC_NOTEOFF gets called again. Note is the note played, and fine pitch is for MPE. If NOTEOFF is called again, how do I relate the 'last note' stored in that instance of the code to the note in the current instance? It came from the same voice, but that voice may not have been the last note played, i.e. voice assignment.

A use case would help.
Old 14th April 2020
  #10
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by SkyWriter View Post
I guess the problem I'm having is that I would not expect the pitch params for note or fine pitch to change between OSC_CYCLE calls unless OSC_NOTEOFF gets called again. Note is the note played, and fine pitch is for MPE. If NOTEOFF is called again, how do I relate the 'last note' stored in that instance of the code to the note in the current instance? It came from the same voice, but that voice may not have been the last note played, i.e. voice assignment.

A use case would help.
When looking at the function signature of OSC_CYCLE, there is no pitch information provided other than the structure.

Quote:
typedef struct user_osc_param {
    /** Value of LFO implicitely applied to shape parameter */
    int32_t shape_lfo;
    /** Current pitch. high byte: note number, low byte: fine (0-255) */
    uint16_t pitch;
    /** Current cutoff value (0x0000-0x1fff) */
    uint16_t cutoff;
    /** Current resonance value (0x0000-0x1fff) */
    uint16_t resonance;
    uint16_t reserved0[3];
} user_osc_param_t;
But in general: the digital engine reacts to the pitch envelope and pitch bend, and the pitch parameter is the only one carrying the current time-variant pitch information. The hint about the note number and fine pitch is just a definition of the number range. In principle, the pitch is given as an exponent for the base 2^(1/12). There is a library function to derive the relative frequency for the sampling rate.

There is this sine test example. In the initialization part of the routine we have:

...
const float w0 = s_state.w0 = osc_w0f_for_note((params->pitch)>>8, params->pitch & 0xFF);
...

... and in the main loop:

...

phase += w0;
phase -= (uint32_t)phase;

A kind of dirty implementation: using the uint32_t cast to implement modulo 1.

In this example they really treat pitch as constant, but how does it sound?
For 64 samples, the update rate would be only 750 Hz.
Old 14th April 2020
  #11
Lives for gear
 
SkyWriter's Avatar
Quote:
Originally Posted by Synthpark View Post
When looking at the function signature of OSC_CYCLE, there is no pitch information provided other than the structure.



But in general: the digital engine reacts to the pitch envelope and pitch bend, and the pitch parameter is the only one carrying the current time-variant pitch information. The hint about the note number and fine pitch is just a definition of the number range. In principle, the pitch is given as an exponent for the base 2^(1/12). There is a library function to derive the relative frequency for the sampling rate.

There is this sine test example. In the initialization part of the routine we have:

...
const float w0 = s_state.w0 = osc_w0f_for_note((params->pitch)>>8, params->pitch & 0xFF);
...

... and in the main loop:

...

phase += w0;
phase -= (uint32_t)phase;

A kind of dirty implementation: using the uint32_t cast to implement modulo 1.

In this example they really treat pitch as constant, but how does it sound?
For 64 samples, the update rate would be only 750 Hz.
Ok, I get the pitch encoding now. 750 Hz would be 1.3 ms; that's plenty for smooth modulation, no?

TBH, I don't see the hardware EG & LFO handling for pitch modulation in any code. I could easily have missed it; I don't have a development bench built, I'm just perusing the code on GitHub. That points to out-of-band modulation. It seems simpler if the code can only affect its rendering of the base pitch. It's got to be sandboxed somehow.

Roll-logs PTSW does a portamento. Maybe if you posted on the reddit thread you could catch their attention.
Old 14th April 2020
  #12
Quote:
Originally Posted by Synthpark View Post
Thats cool, nice input! Unfortunetely there is no debug interface or any form of output when using the Prologue itself as the development platform. So the code has to be tried out on the synth directly. I don't have the dev board which was limited and provided only for a short period of time. Code can be tested in some other environment.
You could buy one of these and develop the DSP algorithms (and optimise/debug them) on this board:

https://eu.mouser.com/ProductDetail/...OanYt05Q%3D%3D


Quote:
Originally Posted by Synthpark View Post
What if you use the integer ALU without the FPU for some tasks. Isn't that even faster? There are many cases where uint32_t is pretty sufficient, for example to compute a triangle wave or some FIR filter etc. Also the resolution of the phase is well covered with 32 bit. I am coming from the hardware side of things (FPGA).

In the link above they mention that FPU registers have to loaded. Doesn't sound like a single-cycle operation.
I don't remember the actual CPU cycle counts/latencies, but IIRC, floating-point add/sub was almost the same speed as the integer one. Multiplication was definitely fast. Floating-point division was faster than integer division, if my memory serves me right. Regardless of the actual cycle counts, once you take into account the bit shifting required by fixed-point math, floating point makes a lot of sense on the Cortex M4.

If my memory serves me right, the SIMD instructions only supported integer math. Not sure though. Would need to check to be sure.

About loading the registers:
It probably refers to the RISC instruction set. Both integer and floating points registers need to be loaded separately with a value from memory, before you can do any maths with them. ARM Cortex M instruction sets don’t have any memory operations apart from simple load/store. I.e. you can’t add to a register directly from memory or vice versa.
Old 15th April 2020
  #13
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by kraku View Post
You could buy one of these and develop the DSP algorithms (and optimise/debug them) on this board:

https://eu.mouser.com/ProductDetail/...OanYt05Q%3D%3D




I don't remember the actual CPU cycle counts/latencies, but IIRC, floating-point add/sub was almost the same speed as the integer one. Multiplication was definitely fast. Floating-point division was faster than integer division, if my memory serves me right. Regardless of the actual cycle counts, once you take into account the bit shifting required by fixed-point math, floating point makes a lot of sense on the Cortex M4.

If my memory serves me right, the SIMD instructions only supported integer math. Not sure though. Would need to check to be sure.

About loading the registers:
It probably refers to the RISC instruction set. Both integer and floating points registers need to be loaded separately with a value from memory, before you can do any maths with them. ARM Cortex M instruction sets don’t have any memory operations apart from simple load/store. I.e. you can’t add to a register directly from memory or vice versa.
That board is damned cheap! But I'm not sure I can port the API onto it. For basic functionality I could test just the user functions.

I thought about testing the design in the MATLAB environment. It has an interface for C/C++ acceleration via "mex" functions.

Thanks for sharing your experience with the FPU.
Old 15th April 2020
  #14
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by SkyWriter View Post
Ok, I get it the pitch encoding now. 750hz would be 1.3ms that's plenty for smooth modulation, no?

TBH, I don't see the hardware EG & LFO handling for pitch modulation in any code - I could easily have missed it, I don't have a development bench built, just perusing the code on github. That points to out of band modulation. Seems simpler if the code can only effect it's rendering of the base pitch. It's got to be sandboxed somehow.

Roll-logs PTSW does a portamento. Maybe if you posted on the reddit thread you could catch their attention.
Actually, 750 Hz is way too slow for the LFO in fast mode (2 kHz?). The Prologue design is kind of entry-level processing.
Old 15th April 2020
  #15
Lives for gear
 
SkyWriter's Avatar
Quote:
Originally Posted by Synthpark View Post
Actually, 750 Hz is way too slow for the LFO in fast mode (2 kHz?). The Prologue design is kind of entry-level processing.
Why does the frame size matter for the LFO? The frame size is just a work unit. The internal LFO frequency is relative to the 48 kHz sample rate, no? The LFO is updated at that rate, not at 750 Hz.

Yes, it's a simple first-pass implementation crafted to keep the user code from interfering with product reliability, typical for this type of commercial product. You don't get to write an oscillator so much as render one into a buffer. The synth applies its own modulation out of band. It can do a lot as is, more than enough as a proof-of-concept success. I may never buy another digital-type synth again, unless it had the same feature.

I would love to see the oscillator size go up on next-gen offerings. Filter-bank constructs for physical modeling alone require a lot of memory, not to mention the whole sample-playback possibilities. More inputs, of course. And allow SHIFT+VALUE on the Prologue too.

Last edited by SkyWriter; 15th April 2020 at 02:42 PM..
Old 15th April 2020
  #16
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by SkyWriter View Post
Oh, I thought you meant something else.
If pitch is updated at 750 Hz and the LFO, which can control pitch, swings at a similar or higher rate, then the digital engine cannot process the data correctly. Not even with interpolation.
Old 15th April 2020
  #17
Lives for gear
 
SkyWriter's Avatar
Quote:
Originally Posted by Synthpark View Post
If pitch is updated at 750 Hz and the LFO, which can control pitch, swings at a similar or higher rate, then the digital engine cannot process the data correctly. Not even with interpolation.
See my explanation above. Does it make sense?
Old 15th April 2020
  #18
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by SkyWriter View Post
Why does the frame size matter for the LFO? The frame size is just a work unit. The internal LFO frequency is relative to the 48 kHz sample rate, no? The LFO is updated at that rate, not at 750 Hz.
The problem is not the LFO generation, but what the digital engine sees from this LFO.

The engine user function is called to calculate all samples for the next frame buffer, probably 64 samples.
This is apparently done to avoid the large overhead that would occur if the function were called for every single sample.

For every 64 samples, the engine receives the LFO state indirectly via the pitch value, IF(!) the LFO modulates the pitch; this routing can be selected by the user.

The pitch information is essentially downsampled by a factor of 64, dropping the information required to track fast realtime pitch changes: the LFO can swing as fast as 2 kHz, but the effective sample rate seen by the digital engine is reduced to 750 Hz. To account for this, the buffer size should be reduced, or there should be a pointer to a buffer holding the LFO values for the last 64 samples. In that case the digital engine would see a delayed version of the LFO, but that could be accepted.

Therefore: the LFO is updated by the LFO generator at 48 kHz, but the engine sees only a downsampled version, because it is not called for every sample but for a packet of 64 samples. Does that make sense to you? For such fast audio-rate modulation nothing is left of the original LFO signal as seen by the engine; it sees an aliased version.
Old 15th April 2020
  #19
Lives for gear
 
SkyWriter's Avatar
OK, now I see what you mean*. It sounds like the Korg ME doesn't have this problem though? The utility of a fixed high-frequency LFO is low for me; I have been ignoring that aspect. I would much rather have had the one-shot the XD has instead.

*- the inter-arrival time of the LFO state is too low because it only updates at 750 Hz, while the LFO can run faster than that. What I'm still having a hard time with is the LFO->SHAPE amount appearing as pitch info in params, which is defined as a constant. It's sloppy and confusing; I must be wrong.
Old 15th April 2020
  #20
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by SkyWriter View Post
It sounds like the Korg ME doesn't have this problem though?
What is the Korg ME? You mean the Multi-Engine, the VPM osc?
I don't know. That can be checked against the analog oscillator.

I just checked! The VPM engine does not have any of these problems. Now that is really confusing, isn't it?!

Korg definitely should provide more information.

Last edited by Synthpark; 15th April 2020 at 04:27 PM..
Old 15th April 2020
  #21
Lives for gear
 
SkyWriter's Avatar
Quote:
Originally Posted by Synthpark View Post
What is the Korg ME? You mean the Multi-Engine, the VPM osc?
I don't know. That can be checked against the analog oscillator.

I just checked! The VPM engine does not have any of these problems. Now that is really confusing, isn't it?!

Korg definitely should provide more information.
Yup!

I can't hear on the VPM oscillators the raggedy sounds I heard on MO2 fm, for instance (an FM user osc). Its resolution is poor in the upper frequencies, so it's hard to tune in a stable tone. You can't even find a suitable tone on MO2 fm; it's just bad noise.
Old 15th April 2020
  #22
Lives for gear
I'm going to give Windows one more chance. If I can't get to the point of building the demo Waves file and uploading it to my XD, I'm going to jump over to a MacBook environment.

It took a couple of days to get to the point where I could even issue the make command, and then I got some object 137 error. I think my paths are screwed up. I uninstalled MSYS2 and the logue-sdk folder with all its contents, to try starting from scratch again.

I might be a foolish newbie thinking I can get up and design my own oscillators and FX. I didn't even know what git was, lol.

If at first you don't succeed, fail, fail again.
Old 15th April 2020
  #23
Lives for gear
 
SkyWriter's Avatar
@psionic11, don't give up, man. The MacBook (Catalina) was literally readme-easy, and I'm not a SW coder geek. The fun part is figuring out an undocumented environment from a code repository and dragnetting social media posts :-)

Made good progress in one day: more-questions-than-answers good! :-)

I haven't coded anything yet; the laptop install is on top of the Prologue, every time I sit down, I just start playing! The only time I get anything done is when I leave the room with an iPad to browse*.

*- I had used github before, but never looked into the code repositories.
Old 16th April 2020
  #24
Lives for gear
Success! Got the environment set up, went to the Waves demo, and make'd it...
Waves.mnlgxdunit transferred over to the XD module and played just fine.

For any other amateur coders with a bare minimum of knowledge like me, here are the steps to set up a Windows development environment.

Download librarian.
- first things first, make sure your computer can talk to your logue.

Install MSYS2, a bash environment and package manager for Windows.
- install Msys2 on C: drive (msys2.org/#installation)
- open Msys2 (or Msys64) terminal
- install Make utility (type "pacman -S make")
- install Zip utility (type "pacman -S zip")
- note that the cd command needs a space (cd .. to go up directory)
- main pain is navigating from C: all the way to project directory\logue\tools
- advice from more experienced coders appreciated
- like should I have installed Msys in logue-sdk folder?

Git stuff!
- install git (git-scm.com/downloads)
- create github account; sign in
- go to Korg logue page (https://github.com/korginc/logue-sdk)
- upper right, click Fork to make your own "copy" of the code
- click green Clone/Download dropdown, copy URL (https://gith..../sdk.git)

Create a Project directory on your PC (I chose desktop\Music Tools)
- choose where you want, right click, New Folder, name it
- go to windows CMD prompt
- navigate to your project directory (for me, cd desktop\music tools)
- type "git clone https://url.git" (use git URL from above)
- type "git submodule update --init"

You now have the logue SDK. Notice the structure:
- Logue-SDK has Tools and Platform folders.
- Tools is where we'll install some more utilities.
- Platform contains the Prologue, Minilogue, and NuTekt folders.
- Inside each are folders for the user osc, and the FX. And the Demo\Waves.

Open Windows Explorer and go to your Project\logue-sdk\tools folder.
- We already installed the Make and Zip utilities.
- Open the gcc folder, double-click on the get_gcc_msys.sh.
- This gets the Arm Cortex stuff.

Ok, pretty much good to go at this point!
Let's build the demo file to see if everything is working as it should.
- open Msys64
- navigate all the way to logue-sdk\platform\minilogue-xd\demos\waves
- type "make"

Hopefully it automagically builds the Waves.mnlgxdunit file. You should now be able to drag and drop the file onto the librarian into an empty User Oscillators slot, then Send All User Osc/FX. Woot!

=======================================

This next part is optional; it installs a few handy commands for your CLI. That way you can check MIDI ports, and load or clear your user osc / FX directly, without using the librarian.

Logue-CLI
- open the tools\logue-cli folder, double-click on get_logue_cli_msys64.sh
- to use, from command prompt, navigate to tools\logue-cli folder
- open the one folder logue-cli-win64-blah blah.
- type logue-cli.exe probe -l
- checks available MIDI ports
- other commands are load, clear, and check (validates file integrity)

I cheated and copied the .exe over to the platform\minilogueXD folder, since that's from where you're going to be validating and loading your custom stuff.

Tomorrow: do Hammond Eggs Random LFO filter tutorial.
Old 16th April 2020
  #25
Lives for gear
 
SkyWriter's Avatar
Nice work psi! I need to start using the cli tool. Much faster I've heard.
Old 16th April 2020
  #26
Lives for gear
This 4-part video series was also very helpful, even for Windows users. macOS doesn't need MSYS2, make, or zip, since bash is already included.

Old 16th April 2020
  #27
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by SkyWriter View Post
Yup!

I can't hear on the VPM oscillators the raggedy sounds I heard on MO2 fm, for instance (an FM user osc). Its resolution is poor in the upper frequencies, so it's hard to tune in a stable tone. You can't even find a suitable tone on MO2 fm; it's just bad noise.
The buffer size for the latest API seems to be 16 samples, as checked by reverse engineering. At least that's better than 64 samples.
Old 16th April 2020
  #28
Lives for gear
 
SkyWriter's Avatar
Quote:
Originally Posted by Synthpark View Post
The buffer size for the latest API seems to be 16 samples, as checked by reverse engineering. At least that's better than 64 samples.
Very interesting! I wonder if it's different on other models, or over time.
Old 16th April 2020
  #29
Lives for gear
 
Synthpark's Avatar
Quote:
Originally Posted by SkyWriter View Post
Very interesting! I wonder if it's different on other models, or over time.
Checking that out was part of the reverse engineering. The answer: it's constant.

Take the sine test project and replace the input w0 with something like

float w0;
w0 = 0.001f * frame_size;

Now you hear a constant sine tone whose frequency is proportional to the buffer length, in multiples of 48 Hz (0.001 * 48000).

Since I get 768 Hz no matter how fast the LFO or envelope is, I conclude that the buffer size is always 16.

That was, so far, my first "contribution", haha.
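The probe reduces to simple arithmetic (a sketch of the reasoning above, assuming the 48 kHz sample rate; function names are my own):

```c
#include <stdint.h>

/* w0 = 0.001 * frame_size is a normalized frequency, so the audible tone
   is 0.001 * frame_size * 48000 Hz = frame_size * 48 Hz. Hearing 768 Hz
   therefore implies a 16-sample buffer. */
float probe_tone_hz(uint32_t frame_size) {
    return 0.001f * (float)frame_size * 48000.0f;
}

uint32_t frame_size_from_tone(float tone_hz) {
    return (uint32_t)(tone_hz / 48.0f + 0.5f);
}
```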
Old 17th April 2020
  #30
Lives for gear
 
SkyWriter's Avatar
Need a term for "user oscillator". It's feeling like this:
[Attached thumbnail: 947deaa8-5974-4eb4-8210-0a1e4034640e.jpeg]
Welcome to the Gearslutz Pro Audio Community!
