Live coding group TOPLAP celebrates days of live streaming, events

What began as a niche field populated mainly by code jockeys has grown into a worldwide movement of artists, many of them new to programming. One key group, TOPLAP, celebrates 15 years of operation with live streams and events.

Image at top – Olivia Jack’s Hydra in action, earlier this month at our MusicMakers Hacklab at CTM Festival. We’ll be talking to Olivia over the weekend about live coding visuals, and you can catch her in Berlin tonight – or online – see below.

Here’s the full announcement – eloquently worded enough that I’ll just copy it here – check this crazy schedule, which began yesterday:

Live coding is about making live music, visuals and other time-based arts by writing and manipulating code. Recently it’s been popularised as Algorave, but is a technique used in all kinds of genres and artforms.

The open worldwide live coding community, which goes by the name of TOPLAP (Temporary Organisation for the Promotion of Live Algorithm Programming), was formed 15 years ago (14th February, 2004) at an event called Changing Grammars in Hamburg.

Now this worldwide community is coming together to make a continuous 3.5 day live stream with over 168 half-hour performance slots.

Watch here:
toplap.org/wearefifteen/

Join the livestream chat here:
talk.lurk.org/channel/toplap15

There are over 168 performances from 14th-17th February, quite a few beamed from local celebratory events being organised around the place (Prague, London, NYC, Amsterdam, Madison, Bath, Argentina, Richmond, Hamilton, …), and others by individuals who’ll be live coding from their sofa.

Anyone going to stay up to watch the whole thing?

Here in Berlin tonight, there’s a live and in-person event featuring 𝕭𝖅𝕲𝕽𝕷, Calum Gunn, Olivia Jack with Alexandra Cardenas, Yaxu (who we hosted here last year), and Renick Bell:

KEYS: computer music ~ digital arts | Renick Bell • Yaxu & more [Facebook event]

Algorave and TOPLAP have made major efforts to be more gender balanced and inclusive and community driven – a topic deep enough that I’ll leave it for another time, as they’ve worked on some specific techniques to enable this. But it’s extraordinary what people are doing with code – and yes, if typing isn’t your favorite mode of control, some are also extending these tools to physical controllers and other live performance techniques. Live coding in one form or another has been around for decades, but now is possibly the best time yet for this scene. We’ll be watching – and streaming. Stay tuned.

Teenage Engineering OP-1 synth is back in stock, here to stay

It put the boutique Swedish maker on the music map, and helped usher in new interest in mobile devices and slick design. Now the OP-1 from Teenage Engineering is back in stock, and its makers say it’s here to stay.

That should be good news for OP-1 fans. Sure, the OP-Z has some fancy new features, but it loses the all-in-one functionality and inviting display of the OP-1. And Pocket Operators – both in their original mini-calculator form and now in a line of inexpensive modular kits – well, that’s for another audience. The OP-1, love it or hate it, is really unlike anything else out there. And someone must want it, because it’s still in demand nearly a decade after its first appearance.

Teenage Engineering shared today that they were resurrecting the OP-1 (under the headline “love never dies,” for Valentine’s Day). Here’s that announcement:

after being out of stock for more than a year with rumours of its demise, we are very happy to let you know that finally, the OP-1 is back and here to stay!

so what happened?

during our nine years of production, we have been very lucky in having a steady supply of the components needed for the OP-1. but last year we suddenly found ourselves without the amoled screen needed and nowhere to find new ones in the same high quality. but after a long time sourcing the perfect replacement, we have finally found it, and we will now be able to fulfil the demand that’s been growing for the past year.

Hmm, maybe the Teenagers want to start a side business reselling that display part? I’m interested.

Anyway, you can buy an OP-1 new now if you couldn’t find it on the used market – or watch for used prices to come down accordingly. Let’s celebrate with a little OP-1 reminiscence, as I know for some of you, Teenage Engineering’s other stuff just doesn’t compare.

Also – shoes!

TĀLĀ is right – Teenage Engineering OP-1 is a great desert island synth

Teenage Engineering: Opbox Sensors and Shoes, OP-1 Drums and MIDI Sync

Teenage Engineering’s OP-1 Instrument: Hands-on, Videos, Why it’s Different

Someday I hope Elijah Wood says nice things about me:

teenageengineering.com/products/op-1

Why is this Valentine’s song made by an AI app so awful?

Do you hate AI as a buzzword? Do you despise the millennial whoop? Do you cringe every time Valentine’s Day arrives? Well – get ready for all those things you hate in one place. But hang in there – there’s a moral to this story.

Now, really, the song is bad. Like laugh-out-loud bad. Here’s iOS app Amadeus Code “composing” a song for Valentine’s Day, which says love much in the way a half-melted milk chocolate heart does, but – well, I’ll let you listen, millennial pop cliches and all:

Fortunately this comes after yesterday’s quite stimulating ideas from a Google research team – proof that you might actually use machine learning for stuff you want, like improved groove quantization and rhythm humanization. In case you missed that:

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Now, as a trained composer / musicologist, I do find this sort of exercise fascinating. And on reflection, I think the failure of this app tells us a lot – not just about machines, but about humans. Here’s what I mean.

Amadeus Code is an interesting idea – a “songwriting assistant” powered by machine learning, delivered as an app. And it seems machine learning could generate, for example, smarter auto accompaniment tools or harmonizers. Traditionally, those technologies have been driven by rigid heuristics that sound “off” to our ears, because they aren’t able to adequately follow harmonic changes in the way a human would. Machine learning could – well, theoretically, with the right dataset and interpretation – make those tools work more effectively. (I won’t re-hash an explanation of neural network machine learning, since I got into that in yesterday’s article on Magenta Studio.)

amadeuscode.com/

You might well find some usefulness from Amadeus, too.

This particular example does not sound useful, though. It sounds soulless and horrible.

Okay, so what happened here? Music theory at least cheers me up even when Valentine’s Day brings me down. Here’s what the developers sent CDM in a pre-packaged press release:

We wanted to create a song with a specific singer in mind, and for this demo, it was Taylor Swift. With that in mind, here are the parameters we set in the app.

Bpm set to slow to create a pop ballad
To give the verses a rhythmic feel, the note length settings were set to “short” and also since her vocals have great presence below C, the note range was also set from low~mid range.
For the chorus, to give contrast to the rhythmic verses, the note lengths were set longer and a wider note range was set to give a dynamic range overall.

After re-generating a few ideas in the app, the midi file was exported and handed to an arranger who made the track.

Wait – Taylor Swift is there just how, you say?

Taylor’s vocal range is somewhere in the range of C#3-G5. The key of the song created with Amadeus Code was raised a half step in order to accommodate this range making the song F3-D5.

From the exported midi, 90% of the topline was used. The rest of the 10% was edited by the human arranger/producer: The bass and harmony files are 100% from the AC midi files.

Now, first – these results are really impressive. I don’t think traditional melodic models – theoretical and mathematical in nature – are capable of generating anything like this. They’ll tend to fit melodic material into a continuous line, and as a result will come out fairly featureless.

No, what’s compelling here is not so much that this sounds like Taylor Swift, or that it sounds like a computer, as it sounds like one of those awful commercial music beds trying to be a faux Taylor Swift song. It’s gotten some of the repetition, some of the basic syncopation, and oh yeah, that awful overused millennial whoop. It sounds like a parody, perhaps because partly it is – the machine learning has repeated the most recognizable cliches from these melodic materials, strung together, and then that was further selected / arranged by humans who did the same. (If the machines had been left alone without as much human intervention, I suspect the results wouldn’t be as good.)

In fact, it picks up Swift’s tics – some of the funny syncopations and repetitions – but without stringing them together, like watching someone do a bad impression. (That’s still impressive, though, as it does represent one element of learning – if a crude one.)

To understand why this matters, we’re going to have to listen to a real Taylor Swift song. Let’s take this one:

Okay, first, the fact that the real Taylor Swift song has words is not a trivial detail. Adding words means adding prosody – so elements like intonation, tone, stress, and rhythm. To the extent those elements have resurfaced as musical elements in the machine learning-generated example, they’ve done so in a way that no longer is attached to meaning.

No amount of analysis, machine or human, can be generative of lyrical prosody for the simple reason that analysis alone doesn’t give you intention and play. A lyricist will make decisions based on past experience and on the desired effect of the song, and because there’s no real right or wrong to how to do that, they can play around with our expectations.

Part of the reason we should stop using AI as a term is that artificial intelligence implies decision making, and these kinds of models can’t make decisions. (I did say “AI” again because it fits into the headline. Or, uh, oops, I did it again. AI lyricists can’t yet hammer “oops” as an interjection or learn the playful setting of that line – again, sorry.)

Now, you can hate the Taylor Swift song if you like. But it’s catchy not because of a predictable set of pop music rules so much as its unpredictability and irregularity – the very things machine learning models of melodic space are trying to remove in order to create smooth interpolations. In fact, most of the melody of “Blank Space” is a repeated tonic note over the chord progression. Repetition and rhythm are also combined into repeated motives – something else these simple melodic models can’t generate, by design. (Well, you’ll hear basic repetition, but making a relationship between repeated motives again will require a human.)

It may sound like I’m dismissing computer analysis. I’m actually saying something more (maybe) radical – I’m saying part of the mistake here is assuming an analytical model will work as a generative model. Not just a machine model – any model.

This mistake is familiar, because almost everyone who has ever studied music theory has made the same mistake. (Theory teachers then have to listen to the results, which are often about as much fun as these AI results.)

Music theory analysis can lead you to a deeper understanding of how music works, and how the mechanical elements of music interrelate. But it’s tough to turn an analytical model into a generative model, because the “generating” process involves decisions based on intention. If the machine learning models sometimes sound like a first-year graduate composition student, that may be because the student is steeped in the analysis but not in the experience of decision making. But that’s important. The machine learning model won’t get better, because while it can keep learning, it can’t really make decisions. It can’t learn from what it’s learned, as you can.

Yes, yes, app developers – I can hear you aren’t sold yet.

For a sense of why this can go deep, let’s turn back to this same Taylor Swift song. The band Imagine Dragons picked it up and did a cover, and, well, the chord progression will sound more familiar than before.

As it happens, in a different live take I heard the lead singer comment (unironically) that he really loves Swift’s melodic writing.

But, oh yeah, even though pop music recycles elements like chord progressions and even groove (there’s the analytic part), the results take on singular personalities (there’s the human-generative side).

“Stand by Me” dispenses with some of the tics of our current pop age – millennial whoops, I’m looking at you – and at least as well as you can with the English language, hits some emotional meaning of the words in the way they’re set musically. It’s not a mathematical average of a bunch of tunes, either. It’s a reference to a particular song that meant something to its composer and singer, Ben E. King.

This is his voice, not just the emergent results of a model. It’s a singer recalling a spiritual that hit him with those same three words, which sets a particular psalm from the Bible. So yes, drum machines have no soul – at least until we give them one.

“Sure,” you say, “but couldn’t the machine learning eventually learn how to set the words ‘stand by me’ to music”? No, it can’t – because there are too many possibilities for exactly the same words in the same range in the same meter. Think about it: how many ways can you say these three words?

“Stand by me.”

Where do you put the emphasis, the pitch? There’s prosody. What melody do you use? Keep in mind just how different Taylor Swift and Ben E. King were, even with the same harmonic structure. “Stand,” the word, is repeated as a suspension – a dissonant note – above the tonic.

And even those observations still lie in the realm of analysis. The texture of this coming out of someone’s vocal cords, the nuances to their performance – that never happens the same way twice.

Analyzing this will not tell you how to write a song like this. But it will throw light on each decision, make you hear it that much more deeply – which is why we teach analysis, and why we don’t worry that it will rob music of its magic. It means you’ll really listen to this song and what it’s saying, listen to how mournful that song is.

And that’s what a love song really is:

If the sky that we look upon
Should tumble and fall
Or the mountain should crumble to the sea
I won’t cry, I won’t cry
No, I won’t shed a tear
Just as long as you stand
Stand by me

Stand by me.

Now that’s a love song.

So happy Valentine’s Day. And if you’re alone, well – make some music. People singing about heartbreak and longing have gotten us this far – and it seems if a machine does join in, it’ll happen when the machine’s heart can break, too.

PS – let’s give credit to the songwriters, and a gentle reminder that we each have something to sing that only we can:
Singer Ben E. King, Best Known For ‘Stand By Me,’ Dies At 76 [NPR]

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Max, though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, even applying GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – multidimensional arrays of numbers that can be combined into, for example, artificial neural networks.

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

nsynthsuper.withgoogle.com/

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that loops over a sequence again and again. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly over a particular data set means it can predict sequences more and more effectively.

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.

One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations and the length in bars.
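To make the Temperature idea concrete, here’s a minimal sketch using the open source @magenta/music JavaScript library that Magenta Studio is built on. The checkpoint URL and the particular numbers are my own assumptions for illustration, not settings pulled from the Live devices:

// Sketch only: generate a few melody variations from a MusicVAE checkpoint.
// The checkpoint URL is an assumption; swap in any hosted MusicVAE melody model.
import * as mm from '@magenta/music';

const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_4bar_small_q2';

async function generateVariations(numVariations = 4, temperature = 1.1) {
  const vae = new mm.MusicVAE(CHECKPOINT);
  await vae.initialize();

  // temperature plays the same role as the Temperature control:
  // lower values stay close to the training data, higher values get stranger.
  const variations = await vae.sample(numVariations, temperature);

  // Interpolate works on the same model, morphing between two NoteSequences you already have:
  // const morphs = await vae.interpolate([clipA, clipB], 8);

  vae.dispose();
  return variations; // NoteSequences you could write out as MIDI clips
}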

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is already set up with expectations about what a drum kit is, and with melodies around a 12-tone equal tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them.

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

g.co/magenta
g.co/magenta/studio

Two twisted desktop grooveboxes: hapiNES L, Acid8 MKIII

Now the Nintendo NES inspires a new groovebox, with the desktop hapiNES. And not to be outdone, Twisted Electrons’ acid line is back with a MKIII model, too.

Twisted Electrons have been making acid- and chip music-flavored groovemakers of various sorts. That started with enclosed desktop boxes like the Acid8. But lately, we’d gotten some tiny models on exposed circuit boards, inspired by the Pocket Operator line from Teenage Engineering (and combining well with those Swedish devices, too).

Well, if you liked that Nintendo-flavored chip music sound but longed for a finished case and finger-friendly proper knobs and buttons, you’re in luck. The hapiNES L is here in preorder now, and shipping next month. It’s a groovebox with a 303-style sequencer and tons of parameter controls, but with a sound engine inspired by the RP2A07 chip.

“RP2A07” is not something that likely brings you back to your childhood (uh, unless you spent your childhood on a Famicom assembly line in Japan for some reason – very cool). Think back to the Nintendo Entertainment System and that unique, strident sound from the video games of the era – here with controls you can sequence and tweak rather than having to hard-code.

You get a huge range of features here:

Hardware MIDI input (sync, notes and parameter modulation)
Analog trigger sync in and out
USB-MIDI input (sync, notes and parameter modulation)
Dedicated VST/AU plugin for full DAW integration
4 tracks for real-time composing
Authentic triangle bass
2 squares with variable pulsewidth
59 synthesized preset drum sounds + 1 self-evolving drum sound
16 arpeggiator modes with variable speed
Vibrato with variable depth and speed
18 Buttons
32 LEDs
6 high quality potentiometers
16 pattern memory
3 levels of LED brightness (Beach, Studio, Club)
Live recording, key change and pattern chaining (up to 16 patterns/ 256 steps)
Pattern copy/pasting
Ratcheting (up to 4 hits per step)
Reset on any step (1-16 step patterns)

If you want to revisit the bare board version, here you go:

255EUR before VAT.

twisted-electrons.com/product/hapines-l/

Okay, so that’s all well and good. But if you want an original 8-bit synth, the Acid8 is still worth a look. It’s got plenty of sound features all its own, and the MKIII release loads in a ton of new digital goodies – very possibly enough to break the Nintendo spell and woo you away from the NES device.

In the MKIII, there’s a new digital filter, new real-time effects (transposition automation, filter wobble, stutter, vinyl spin-down, and more), and dual oscillators.

Dual oscillators alone are interesting, and the digital filter gives this some of the edge you presumably crave if drawn to this device.

And if you are upgrading from the baby uAcid8 board, you add hardware MIDI, analog sync in and out, and of course proper controls and a metal case.

Specs:

USB-MIDI input (sync, notes and parameter modulation)
Hardware MIDI input (sync, notes and parameter modulation)
Analog sync trigger input and output
Dedicated VST/AU plugin for full DAW integration
18 Buttons
32 LEDs
6 high quality potentiometers
Arp Fx with variable depth and decay time
Filter Wobble with variable speed and depth
Crush Fx with variable depth
Pattern Copy/Pasting
Variable VCA decay (note length)
Tap tempo, variable Swing
Patterns can reset at any step (1-16 step pattern lengths)
Variable pulse-width (for square waveforms)
12 sounds: Square, Saw and Triangle each in 4 flavors (Normal, Distorted, Fat/Detuned, Harmonized/Techno).
3 levels of LED brightness (Beach, Studio, Club)
Live recording, key change and pattern chaining

Again, we have just the video of the board, but it gives you the idea. Quite clever, really, putting out these devices first as the inexpensive bare boards and then offering the full desktop releases.

More; also shipping next month with preorders now:

twisted-electrons.com/product/acid8-mkiii/

Live compositions on oscilloscope: nuuun, ATOM TM

The Well-Tempered vector rescanner? A new audiovisual release finds poetry in vintage video synthesis and scan processors – and launches a new AV platform for ATOM TM.

nuuun, a collaboration between Atom™ (raster, formerly Raster-Noton) and Americans Jahnavi Stenflo and Nathan Jantz, have produced a “current suite.” These are all recorded live – sound and visuals alike – in Uwe Schmidt’s Chilean studio.

Minimalistic, exposed presentation of electronic elements is nothing new to the Raster crowd, who are known for bringing this raw aesthetic to their work. You could read that as part punk aesthetic, part fascination with visual imagery, rooted in the collective’s history in East Germany’s underground. But as these elements cycle back, now there’s a fresh interest in working with vectors as medium (see link below, in fact). As we move from novelty to more refined technique, more artists are finding ways of turning these technologies into instruments.

And it’s really the fact that these are instruments – a chamber trio, in title and construct – that’s essential to the work here. It’s not just about the impression of the tech, in other words, but the fact that working on technique brings the different media closer together. As nuuun describe the release:

Informed and inspired by Scan Processors of the early 1970’s such as the Rutt/Etra video synthesizer, “Current Suite No.1” uses the oscillographic medium as an opportunity to bring the observer closer to the signal. Through a technique known as “vector-rescanning”, one can program and produce complex encoded wave forms that can only be observed through and captured from analog vector displays. These signals modulate an electron-beam of a cathode-ray tube where the resulting phosphorescent traces reveal a world of hidden forms. Both the music and imagery in each of these videos were recorded as live compositions, as if they were intertwined two-way conversations between sound and visual form to produce a unique synesthetic experience.

Even with lots of prominent festivals, audiovisual work – and putting visuals on equal footing with music – still faces an uphill battle. Online music distribution isn’t really geared for AV work; it’s not even obvious how audiovisual work is meant to be uploaded and disseminated apart from channels like YouTube or Vimeo. So it’s also worth noting that Atom™ is promising that NN will be a platform for more audiovisual work. We’ll see what that brings.

Of course, NOTON and Carsten Nicolai (aka Alva Noto) already have a rich fine art / high-end media art career going, and the “raster-media” launched by Olaf Bender in 2017 describes itself as a “platform – a network covering the overlapping border areas of pop, art, and science.” We at least saw raster continue to present installations and other works, extending their footprint beyond just the usual routine of record releases.

There’s perhaps not a lot that can be done about the fleeting value of music in distribution, but then music has always been ephemeral. Let’s look at it this way – for those of us who see sound as interconnected with image and science, any conduit to that work is welcome. So watch this space.

For now, we’ve got this first release:

atom-tm.com/NN/1/Current-Suite-No-IVideo/

Previously:

Vectors are getting their own festival: lasers and oscilloscopes, go!

In Dreamy, Electrified Landscapes, Nalepa ‘Daytime’ Music Video Meets Rutt-Etra

NI now has killer, budget audio interfaces and compact keys

Questions like “I just need a simple audio interface,” and “I want a compact keyboard that doesn’t suck,” and “oh, yeah, wait, does this connect to my Eurorack?” – along with “did I mention I’ve got almost no money?” – just got some new answers.

Native Instruments launched the new audio interfaces and the latest addition to their keyboard line as part of some grand, abstract PR idea called “for the music in you,” and said a bunch of things about starting points and ecosystems.

To cut to the chase – these are inexpensive, very mobile devices with a ton of bundled software extras that make sense for anyone on a budget, beginner or otherwise. And whereas most inexpensive stuff looks really cheap, they look pretty nice. (That holds up in person – I got a hands-on in Berlin just before NAMM.)

KOMPLETE AUDIO 1, AUDIO 2

There are two audio interfaces – KOMPLETE AUDIO 1 and KOMPLETE AUDIO 2. These take one of the best features of NI’s past audio interfaces – they put a big volume knob right on top so you can quickly adjust your level, and they’ve got meters so you can see what that level is. But crucially, they promise better audio quality.

There are two models here, but let me break it down for you: you don’t want the AUDIO 1, you want the AUDIO 2. Why?

The AUDIO 1 was clearly made with the idea that singers just want one mic input (so there’s only a single XLR in), and for some reason also with RCA jacks on the back (because consumers, I suppose).

But if you spend just a little more on the AUDIO 2, you get a lot more usefulness.

First, two inputs – both XLR/jack combo, for mics and instruments, with mic preamps and phantom power so you can use any microphone. My guess is at some point everyone wants to record two inputs rather than one. (Think line inputs, stereo instruments, a mic and an instrument… you get the point.)

And you get jack outputs instead of RCA.

And while this won’t matter to everyone, the AUDIO 2 I’m told also has DC coupling, so you can use your computer and your Eurorack or other modular gear. That means you can pull off tricks like combining modular software and hardware, with tools like Ableton Live, Softube Modular, VCV Rack, Bitwig Studio, and oh yeah, Reaktor.

So, quietly, NI just created the most affordable way of connecting a computer and a modular.
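If you’re wondering what DC coupling actually buys you: control voltages are just very slow (or constant) signals, so a DC-coupled output can hold a steady pitch CV where an AC-coupled one can’t. Here’s a tiny sketch of the math; the 5 V full-scale figure is an assumption, since in practice the software driving a modular calibrates this per interface and output:

// Illustration only: a 1V/oct pitch CV is a constant signal, which is why the
// output needs to be DC-coupled. The full-scale voltage below is an assumption.
function midiNoteToCvSample(note: number, fullScaleVolts = 5, refNote = 60): number {
  const volts = (note - refNote) / 12;                       // 1 volt per octave, 0 V at middle C
  return Math.max(-1, Math.min(1, volts / fullScaleVolts));  // normalized sample value for the DAC
}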

If you are a beginner, you get a bunch of software to play around with. Ableton Live 10 Lite is actually a reasonable version of Live to try – only 8 tracks, but all of the core functionality of the software and many instruments and effects. There’s also MASCHINE Essentials, MONARK, REPLIKA, PHASIS, SOLID BUS COMP, and KOMPLETE START, which represents plenty of music making time.

The price is really the big point: US$109 / 99 EUR and $139 / 129 EUR. Coming in March.

www.native-instruments.com/en/products/komplete/audio-interfaces/komplete-audio-1-audio-2/

A micro keyboard

If you want some sort of mobile input, there are now some wild multi-touch expressive controllers out there, like ROLI’s Seaboard Block and the Sensel Morph.

But what if you don’t want some new-fangled touch insanity? What if you just want a piano keyboard?

And you want it to be inexpensive, and fit in a backpack so you can take it with you or fit it on cramped desks?

Good news: you’ve got loads of options.

Bad news: they’re all kind of horrible. They’re ugly, and they feel cheap. And they have extras you may not need (like drum pads, mapped to the same channel as the keyboard, raising the question of why you wouldn’t just play the keys).

So I welcome the introduction of Native Instruments’ KOMPLETE KONTROL M32. This is one that I figured I needed myself the moment I saw it. (Normally, my reaction on keyboard product launches is more on the lines of – “God, please don’t make me write about another generic keyboard controller.”)

The feel is solid – a bit like some of the mini-key keyboards from Roland/Edirol a few years back. They don’t have the travel of full-sized keys, allowing this low profile, but seemed reasonably velocity sensitive.

Plus there are transport buttons and encoders, and two very usable touch strips. In software like Ableton Live and Apple Logic, these map to the usual transport features, and the encoders are assignable. In Native Instruments’ software, of course, you get the usual deep integration with parameters, browsing, and production.

The M32 will be a particularly strong companion to Maschine on the go, finally with a small footprint – something simply not possible with a 4×4 pad layout, much as I love it.

Speaking of Maschine – this is the full Maschine software. There’s a smaller sound bank, but even that is still 1.5GB. So when they say “Maschine Essentials,” they’re practically giving Maschine away. The other extras I mentioned above are slick, too – Reaktor Prism alone you could lose weeks or months in. Monark is a gorgeous Minimoog emulation with realistic filters and some sound design twists not on the original.

And it’s just US$129 (119 EUR). So it looks like it should cost twice as much, but it’s actually cheaper than a lot of other options out there.

NI are trying to tell a lot of stories at once – something about Sounds.com, something about DJs, something about producers… and they’re following us all over social media and Google with constant ads.

But here’s the bottom line: this is the only compact keyboard at any price that feels good or looks good, it’s still only just over a hundred bucks, and the “beginners” bundle is likely to please advanced users for months.

Coming in March.

www.native-instruments.com/en/products/komplete/keyboards/komplete-kontrol-m32/

Roland just registered the 303, 808 designs as trademarks

Roland has quietly filed for trademark protection (Unionsmarkenanmeldung) in Germany for the designs of the TB-303 and TR-808.

The filings were uncovered by a poster on the sequencer.de forum. The discussion is in German:

Roland versucht aktuell sich die 808-Farben und das 303-Design als Marke schützen zu lassen [sequencer.de]

register.dpma.de/DPMAregister/marke/registerHABM?AKZ=018016159&CURSOR=34

register.dpma.de/DPMAregister/marke/registerHABM?AKZ=018016158&CURSOR=33

The “trademark” here is trade dress, the design of the actual appearance of the 303 and 808 – the signature layout of the keyboard and knobs of the 303, and the sequence of colored buttons on the 808. “Iconic” is a word that’s wildly overused, but here we can take it to be almost literally true: you can draw out these layouts and even a lot of lay people with a passing interest in electronic music will immediately recognize this bassline synth and drum machine.

Forum posters conclude that this is about Behringer, who announced last month at the NAMM show that they would ship their “RD-808” drum machine – matching the original TR-808 color scheme and button layout – in March. But the registration in Germany could be a sign Roland are generally planning to more aggressively protect their intellectual property, with respect to Behringer or others. And the RD-808 could, for instance, wind up being subject to litigation outside Germany – that is, anywhere the drum machine ships.

That said, Behringer, without fanfare, reversed the order of the colors on their RD-808 between a production prototype (orange / light orange / yellow / white, as on the original Roland) and what was shown at NAMM.

The one thing I can say for sure is – the artwork Roland filed from Japan is gorgeous. So, Roland, please don’t sue us for sharing. (And maybe consider it for some merch?)

No idea how long processing will take, or really how the law works; if I can find out, I’ll share. At least Germany should appreciate the aesthetics of combining gold, bright red, and black – check the flag.

Ableton Live 10.1: more sound shaping, work faster, free update

There’s something about point releases – not the ones with any radical changes, but just the ones that give you a bunch of little stuff you want. That’s Live 10.1; here’s a tour.

Live 10.1 was announced today, but I sat down with the team at Ableton last month and have been working with pre-release software to try some stuff out. Words like “workflow” are always a bit funny to me. We’re talking, of course, mostly music making. The deal with Live 10.1 is, it gives you some new toys on the sound side, and makes mangling sounds more fun on the arrangement side.

Oh, and VST3 plug-ins work now, too. (MOTU’s DP10 also has that in an upcoming build, among others, so look forward to the Spring of VST3 Support.)

Let’s look at those two groups.

Sound tools and toys

User wavetables. Wavetable just got more fun – you can drag and drop samples onto Wavetable’s oscillator now, via the new User bank. You can get some very edgy, glitchy results this way, or if you’re careful with sample selection and sound design, more organic sounds.

This looks compelling.

Here’s how it works: Live splits up your audio snippet into 1024 sample chunks. It then smooths out the results – fading the edges of each table to avoid zero-crossing clicks and pops, and normalizing and minimizing phase differences. You can also tick a box called “Raw” that just slices up the wavetable, for samples that are exactly 1024 samples or a regular periodic multiple of that.
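For the curious, here’s a rough sketch of that kind of processing (slice, fade the edges, normalize) against a mono Float32Array. The fade length and normalization details are my assumptions; Ableton hasn’t published the exact code:

const FRAME = 1024;

// Sketch only: split a sample into 1024-sample tables, fade each table's edges
// so it loops without clicks, and normalize each table to full scale.
function toWavetable(samples: Float32Array, fadeLen = 32): Float32Array[] {
  const tables: Float32Array[] = [];
  for (let start = 0; start + FRAME <= samples.length; start += FRAME) {
    const table = samples.slice(start, start + FRAME);

    // Fade the first and last few samples toward zero.
    for (let i = 0; i < fadeLen; i++) {
      const gain = i / fadeLen;
      table[i] *= gain;
      table[FRAME - 1 - i] *= gain;
    }

    // Normalize the table.
    let peak = 0;
    for (const s of table) peak = Math.max(peak, Math.abs(s));
    if (peak > 0) for (let i = 0; i < FRAME; i++) table[i] /= peak;

    tables.push(table);
  }
  return tables;
}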

Give me some time and we can whip up some examples of this, but basically you can glitch out, mangle sounds you’ve recorded, carefully construct sounds, or just grab ready-to-use wavetables from other sources.

But it is a whole lot of fun and it suggests Wavetable is an instrument that will grow over time.

Here’s that feature in action:

Delay. Simple Delay and Ping Pong Delay have merged into a single lifeform called … Delay. That finally updates an effect that hasn’t seen love since the last decade. (The original ones will still work for backwards project compatibility, though you won’t see them in a device list when you create a new project – don’t panic.)

At first glance, you might think that’s all that’s here, but in typical Ableton fashion, there are some major updates hidden behind those vanilla, minimalist controls. So now you have Repitch, Fade, and Jump modes. And there’s a Modulation section with rate, filter, and time controls (as found on Echo). Oh, and look at that little infinity sign next to the Feedback control.

Yeah, all of those things are actually huge from a sound design perspective. So since Echo has turned out to be a bit too much for some tasks, I expect we’ll be using Delay a lot. (It’s a bit like that moment when you figure out you really want Simpler and Drum Racks way more than you do Sampler.)

The old delays. Ah, memories…

And the new Delay. Look closely – there are some major new additions in there.

Channel EQ. This is a new EQ with visual feedback and filter curves that adapt across the frequency range – that is, “Low,” “Mid,” and “High” each adjust their curves as you change their controls. Since it has just three controls, that means Channel EQ sits somewhere between the dumbed down EQ Three and the complexity of EQ Eight. But it also means this could be useful as a live performance EQ when you don’t necessarily want a big DJ-style sweep / cut.

Here it is in action:

Arranging

The stuff above is fun, but you obviously don’t need it. Where Live 10.1 might help you actually finish music is in a slew of new arrangement features.

Live 10 felt like a work in progress as far as the Arrange view goes. I think it immediately made sense to some of us that Ableton were adjusting arrangement tools, and ironing out the difference between, say, moving chunks of audio around and editing automation (drawing all those lovely lines to fade things in and out, for instance).

But it felt like the story there wasn’t totally complete. In fact, the change may have been too subtle – different enough to disturb some existing users, but without a big enough payoff.

So here’s the payoff: Ableton have refined all those subtle Arrange tweaks with user feedback, and added some very cool shape drawing features that let you get creative in this view in a way that isn’t possible with other tools.

Fixing “$#(*& augh undo I didn’t want to do that!” Okay, this problem isn’t unique to Live. In every traditional DAW, your mouse cursor does conflicting things in a small amount of space. Maybe you’re trying to move a chunk of audio. Maybe you want to resize it. Maybe you want to fade in and out the edges of the clip. Maybe it’s not the clip you’re trying to edit, but the automation curves around it.

In studio terms, this sounds like one of the following:

[silent, happy clicking, music production getting … erm … produced]

OR ….
$#(*&*%#*% …. Noo errrrrrrrgggggg … GAACK! SDKJJufffff ahhh….

Live 10 added a toggle between automation editing and audio editing modes. For me, I was already doing less of the latter. But 10.1 is dramatically better, thanks to some nearly imperceptible adjustments to the way those clip handles work, because you can more quickly change modes, and because you can zoom more easily. (The zoom part may not immediately seem connected to this, but it’s actually the most important part – because navigating from your larger project length to the bit you’re actually trying to edit is usually where things break down.)

In technical terms, that means the following:

Quick zoom shortcuts. I’ll do a separate story on these, because they’re so vital, but you can now jump to the whole song, details, zoom various heights, and toggle between zoom states via keyboard shortcuts. There are even a couple of MIDI-mappable ones.

Clips in Arrangement have been adjusted. From the release notes: “The visualisation of Arrangement clips has been improved with adjusted clip borders and refinements to the way items are colored.” Honestly, you won’t notice, but ask the person next to you how much you’re grunting / swearing like someone is sticking something pointy into your ribs.

Pinch gestures! You can pinch-zoom the Arrangement and MIDI editor with Option or Alt keys – that works well on Apple trackpads and newer PC trackpads. And yeah, this means you don’t have to use Apple Logic Pro just to pinch zoom. Ahem.

The Clip Detail View is clearer, too, with a toggle between automation and modulation clearly visible, and color-coded modulation for everything.

The Arrangement Overview was also adjusted with better color coding and new resizing.

In addition, Ableton have worked a lot with how automation editing functions. New in 10.1:

Enter numerical values. Finally.

Free-hand curves more easily. With grid off, your free-hand, wonky mouse curves now get smoothed into something more logical and with fewer breakpoints – as if you can draw better with the mouse/trackpad than you actually can.

Simplify automation. There’s also a command that simplifies existing recorded automation. Again – finally.
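For a sense of what “simplify” can mean here, one common approach is Ramer-Douglas-Peucker-style breakpoint reduction: keep only the points that deviate meaningfully from a straight line between their neighbors. This is just an illustration of the general idea, not Ableton’s actual implementation:

// Sketch only: drop automation breakpoints that sit within `tolerance` of a
// straight line between the segment's endpoints (vertical-distance RDP).
type Point = { time: number; value: number };

function simplify(points: Point[], tolerance: number): Point[] {
  if (points.length < 3) return points;
  const first = points[0];
  const last = points[points.length - 1];
  const span = last.time - first.time || 1; // guard against zero-length spans

  // Find the point that deviates most from the straight line between the ends.
  let maxDist = 0;
  let index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const t = (points[i].time - first.time) / span;
    const onLine = first.value + t * (last.value - first.value);
    const dist = Math.abs(points[i].value - onLine);
    if (dist > maxDist) {
      maxDist = dist;
      index = i;
    }
  }

  // Keep that point and recurse on both halves, or collapse to the endpoints.
  if (maxDist > tolerance) {
    const left = simplify(points.slice(0, index + 1), tolerance);
    const right = simplify(points.slice(index), tolerance);
    return left.slice(0, -1).concat(right);
  }
  return [first, last];
}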

So that fixes a bunch of stuff, and while this is pretty close to what other DAWs do, I actually find Ableton’s implementation to be (at last) quicker and friendlier than most other DAWs. But Ableton kept going and added some more creative ideas.

Insert shapes. Now you have some predefined shapes that you can draw over automation lanes. It’s a bit like having an LFO / modulation, but you can work with it visually – so it’s nice for those who prefer that editing phase as a way to do their composition. Sadly, you can only access these via the mouse menu – I’d love some keyboard shortcuts, please – but it’s still reasonably quick to work with.

Modify curves. Hold down Option/Ctrl and you can change the shape of curves.

Stretch and skew. Reshape envelopes to stretch, skew, stretch time / ripple edit.

Insert Shapes promises loads of fun in the Arrangement – words that have never been uttered before.

Check out those curve drawing and skewing/scaling features in action:

Freeze/Export

You can freeze tracks with sidechains, instead of a stupid dialog box popping up to tell you you can’t, because it would break the space-time continuum or crash the warp core injectors or … no, there’s no earthly reason why you shouldn’t be able to freeze sidechains on a computer.

You can export return and master effects on the actual tracks. I know, I know. You really loved bouncing out stems from Ableton or getting stems to remix and having little bits of effects from all the tracks on separate stems that were just echoes, like some weird ghost of whatever it was you were trying to do. And I’m a lazy kid, who for some reason thinks that’s completely illogical since, again, this is a computer and all this material is digital. But yes, for people who are soft like me, this will be a welcome feature.

So there you have it. Plus you now get VST3s, which is great, because VST3 … is so much … actually, you know, even I don’t care all that much about that, so let’s just say now you don’t have to check if all your plug-ins will run or not.

Go get it

One final note – Max for Live. Live 10.0.6 synchronized with Max 8.0.2. See the release notes from Cycling ’74:

cycling74.com/forums/max-8-0-2-released

Live 10.1 is keeping pace, with the beta you download now including Max 8.0.3.

Ableton haven’t really “integrated” Max for Live; they’re still separate products. And so that means you probably don’t want perfect lockstep between Max and Live, because that could mean instability on the Live side. It’d be more accurate to say that what Ableton have done is to improve the relationship between Max and Live, so that you don’t have to wait as long for Max improvements to appear in Max for Live.

Live 10.1 is in beta now with a final release coming soon.

Ableton Live 10.1 release notes

And if you own a Live 10 license, you can join the beta group:

Beta signup

Live 10.1: User wavetables, new devices and workflow upgrades

Thanks to Ableton for those short videos. More on these features soon.

The synth modules of winter: your Eurorack radar

The waves of synth modules never stop coming, as obsessed engineers keep making them and sound tinkerers keep buying them. So let’s catch up with what’s out there, in the wake of the NAMM show in California late last month.

Most of these are from NAMM, but there are some other sightings recently, as well.

Make Noise’s new modulation monster. Make Noise have made a name for themselves with some real weirdness that then shaped a lot of the music scene. The Quad Peak Animation System is the latest from them – a wild modulation system that can make vocalization-like sounds, with fast-responding multiple resonant filter peaks across a stereo image. In other words, this thing can sing – in an odd way – in stereo.

The best part of the story behind this is that Tony Rolando of Make Noise partly got the idea while calibrating Moog Voyagers … and is now applying that to making something crazy and new.

makenoisemusic.com/modules/qpas

Now we have multiple videos of that:

Low-cost Buchla. There’s a phrase I’ve never typed before. The Buchla USA company themselves are working to bring Buchla to the masses, with the new low-cost Red Label line of modules. This is 100 series stuff, the historical modules that really launched the West Coast sound – mixer, quad gate, dual-channel oscillator, filters, reverb, and more. There’s even a case and – of course – a touch surface for input, because keyboards are the devil’s playground. Good people are involved – Dave Small (Catalyst Audio) and Todd Barton – so this is one to watch.

buchla.com/

A module that’s whatever you want it to be. Nozori is a Kickstarter-backed project to make multifunctional modules – buy a module once, then switch modes via software (and of course coordinated faceplates). People must like the idea, because it’s already well funded, and you still have a week left if you want in.

kck.st/2TPKDdT

Lightning in a bottle. Gamechanger have a wild technology that lets you “play a lightning bolt” – basically, incorporating Tesla Coils into their hardware. They’ve done that once with Plasma Pedal, which we hope to test soon. With Erica, they’ll stick this in a module – and let you use high-voltage discharges in a xenon-filled tube. That looks cool and should sound wild; you get distortion with CV control in this module, octave up/down tracking oscillators for still more harmonics, and even an assignable pre/post- EQ. 310EUR before VAT, coming late February.

Erica Synths does the Sample Drum. This one’s sure to be a big hit, I think – not only for people wanting a drum module, per se, but presumably anyone interested in sample manipulation. Sample Drum plays and (finally!) records, with manual and automatic sample slicing, and three assignable CV inputs per channel. There are even effects onboard … which actually makes me wonder why we can’t have something like this as a desktop unit, too. You can even embed cue points in WAV files. SD card storage. Looks terrific – 300EUR (not including VAT), coming late February.

One massive oscillator with zing, from Rossum. TRIDENT is a “multi-synchronic oscillator ensemble” – basically three oscillators in one, with loads of modulation and options for FM and phase and … uh, “zing.” Of course you could get a whole bunch of modules and do something similar, but the advantage here is a kind of integrated approach to making a lot of rich timbres – and while the sticker price here is US$599, that may well be less than wrangling a bunch of individual modules.

Actually, let’s let Dave himself talk about this:

www.rossum-electro.com/products/trident/

A module for drawing. LZX Industries’ Escher Sketch is a stylus pen controller with XY, pressure, and “directional velocity” (expression). LZX are thinking of this for video synthesis, though I’m sure it’ll get abused. US$499.

MIDI to CV, with autotuning and polyphony. Bastl Instruments’ 1983 4-channel MIDI to CV interface, complete with automatic tuning and other features, is one we’ve been following for a while. It’s now officially out as of 1 February.

Previously, including an explanation of why this is so cool:

Bastl do waveshaping, MIDI, and magically tune your modules

Don’t forget that Bastl also worked with Casper Electronics on Dark Matter, which I covered last month:

Bastl’s Dark Matter module unleashes the joys of feedback

Inexpensive Soundlazer modules. This LA company is actually known more for its directional speakers, but it looks like they’re getting into modules. Opening salvo: $99 bass drum, $69 VCA – evidence that it’s not just Behringer who may get into lower cost Eurorack. Check out their site for more.

Mix with vectors and quad. v3kt is really cool. Plug in joysticks, envelopes, LFOs, automatically calibrate them with push-button sampling, and then mix and connect all that CV to other stuff, with save states. Oh and you can use this as a quad panner, too. $199 now.

www.antimatteraudio.com/modules/v3kt

STG and Radiophonic 1 synthesizer. Radiophonic 1 is a terrific-sounding all-in-one, with a gorgeous oscillator at its core (also available separately). See Synthtopia’s video for explanation:

And Matt Chadra demonstrates how it sounds:

Slice and recombine waveforms in a module. Hey, you know how everyone keeps complaining there are no new ideas in synthesis? Well, Waverazor at least claims to be a new idea (with patent pending, too). Cut individual waveform cycles into slices, individually modify and modulate the slices, recombine. Okay – that sounds a lot like wavetable synthesis with a twist (albeit a compelling one), but we’ll bite. Or rather if you didn’t bite when this was a standalone plug-in, maybe you’ll like real knobs and a bunch of patch points:

mok.com/wr_dual.php

Control your modular with a ring. It’s funny how this idea never goes away. But here we are again – this time with crowd funding on IndieGogo, so maybe a larger group of people to actually use it. Wave is a ring you wear so you can make music by waving your hand around and … this time it plugs into a modular (the Wavefront module).

Watch this video and marvel at how you can do something you could do with an expression pedal or by using the same free hand to turn a knob, but, like, with a ring.

(Sorry, probably someone does want this, but… yes, it is truly a NAMM tradition to see someone trying it, again.)

Behringer are promising Roland System 100M modules. The German mass manufacturer was out ahead of the NAMM show with pre-production designs and prototypes based on Roland’s 100M series. Price is the lead here – US$49-99. Interestingly, what I didn’t see was people saying they’d opt for Behringer over other makers so much as that they might expand their system with these because of that low cost. Teenage Engineering also made a play for that “modular for the masses” mantle, though not in Eurorack.

Synthtopia did a good write-up of the prototype plans:
Behringer Plans 40 Eurorack Modules In The Next 2 Years, Priced at $49-99

Behringer did make this promise already back in April of last year – then, just in advance of the Superbooth show in Berlin – which I expect annoys other modular makers. But if you want Roland remakes right now, you can get them from – well, Roland, if at higher prices:

Roland’s new SYSTEM-500 modules, and why you might want them

Low cost, 2hp bells and grains and stuff.
www.youtube.com/watch?v=nfz17MzDmOM

pocket operator modular system. And yes, if we’re calling Behringer the IKEA of modular, that title might suit Teenage Engineering even better. TE have extended their pocket operator brand to a line of modular. It’s not Eurorack, but it is patchable and you can buy individual modules or a complete kit. I’m working on an in-depth interview with the teenagers, so stay tuned.

You actually do fold these things together – and prices run 399-549 EUR for a complete system.

teenageengineering.com/

That’s far from everything, but for me it’s the standouts. Any you’re excited about here – or anything I missed? Sound off in comments.
