
Prototypes are free, open-source plug-ins – use them for sound, or to learn Csound

Get an algorithmic bass drum generator, a lo-fi modulator, and a massive granular workstation, all for free – and that’s just the beginning.

Micah Frank is one of the most prolific sound designer-inventor-composer types around, via his Puremagnetik soundware label and personal projects. Lately, he’s been turning some of these larger, more experimental projects into free tools that you can both use in your own music – and learn from and expand.

Last summer, we saw an expansive, unparalleled granular tool take form as both album and free code:

But now, Micah has gone further – way further. The new series is a set of plug-ins called Prototypes. That granular instrument from last summer has grown into a full-fledged tool like no other, now available in plug-in form. There are newer tools in a slightly more pre-release state, true to the “prototype” name. But all are ready to use – and they offer a window into the power of Csound, the fully free and open-source, omni-platform sound toolkit descended from the very first digital audio tools ever created.

Available already:

Kickblast (an algorithmic bass drum generator)

Parallel (a lo-fi modulator)

And a much-developed (not so prototype-ish) plug-in version of his multitrack granular workstation, Grainstation C

Pre-built plug-ins for VST and Audio Unit are available for macOS and 64-bit Windows. Building for other platforms should be straightforward (I still need to verify that), or you can run the code in Csound directly. Find the builds in the Builds section of his GitHub:

github.com/micah-frank-studio/Prototypes/tree/master/Builds

It’s all open-source (GNU GPLv2 license), and while you can run it as a plug-in, the sound code is all in Csound. Full repository:

github.com/micah-frank-studio/Prototypes

Micah tells CDM he hopes that some of you will discover what Csound can do in your own work. “Csound is my favorite,” Micah says. Its “spectral, granular, convolution sound” is one of the best available, he raves. “I feel like it needs an awareness push, as the music-making community is much more ready to code than they were in the ’80s. And the learning curve from Max (or even a modular system) to Csound is not so bad.”

Noted.
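
If you want to taste that learning curve, here’s a minimal sketch – my own hello-world, not code from the Prototypes repo – that renders a two-second sine tone, assuming Csound 6 and the ctcsound Python bindings are installed:

```python
# A minimal Csound "hello world": one oscillator instrument plus a
# two-second score note, compiled and rendered via the ctcsound bindings.
import ctcsound

csd = """
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1

instr 1
  kenv linen 0.4, 0.05, p3, 0.2   ; simple attack/decay envelope
  asig oscili kenv, p4            ; sine oscillator, frequency from the score
  outs asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 2 220    ; instrument 1, start at 0, 2 seconds, 220 Hz
</CsScore>
</CsoundSynthesizer>
"""

cs = ctcsound.Csound()
cs.compileCsdText(csd)
cs.start()
cs.perform()   # blocks until the score finishes
cs.cleanup()
```

From there, the Csound source in the Prototypes repository is the real lesson material.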

Follow Micah on Instagram, so you get some pretty nature shots interspersed with your music nerd goodness. My kind of influencer.

www.instagram.com/micah.frank.studio/

from CDM Create Digital Music: cdm.link/2020/02/prototypes-free-plugins-csound/


Someone made a Pomodoro timer for Ableton Live, so you can stay productive, take breaks

Productivity engineering has come to music production. A popular method for timeboxing is now available as a free Live add-on.

Have you ever sighed in relief at having a big, uninterrupted span of time – only to wind up whiling it all away with procrastination? And then have you found yourself with a hard deadline – like an hour left in your music studio before your partner arrives to kick you out – and suddenly found yourself focused?

The basic principle here is that, paradoxically, even as we hate schedules and deadlines, constraints can help us focus. By constraining our time, or timeboxing, we can concentrate more easily on a particular task.

The Pomodoro Technique is this boiled down to a really simple cycle. It’s named for a kitchen timer – you know, the thing often called an egg timer because it’s shaped like an egg, but in this case apparently with a model shaped like a tomato. It’s the late-80s invention of Francesco Cirillo, who I understand even liked the ticking sound. I hate ticking – uh, especially while making music – but sometimes setting a timer can make it easier to tackle a task you’re putting off.

While invented in the late ’80s, the Pomodoro Technique has spread more widely in the productivity craze of the Internet age. Of course, there’s a Lifehacker guide to getting started. (It was even updated as recently as last summer.) And yes, Francesco is still around and will gladly take your money.

Now, it may seem a little strange to do this when you’re working on music, which most of us think of as a diversion. Isn’t music supposed to be endlessly fun and something we can concentrate on without any challenge? But whether it’s rote work, building a Max for Live patch, or carefully editing envelopes, anything that requires you to focus your brain benefits from breaks.

And that’s really what the Pomodoro Technique is about. It’s not actually the 25 minutes of focus that is the most important. It’s the break. (Perhaps part of why you’re so eager to procrastinate is a legitimate impulse by your brain that you’re overly and unnaturally focused on something.)

There’s plenty of science to back this up. Selecting just one useful overview:

Brief diversions vastly improve focus, researchers find [ScienceDaily summary; original paper in Cognition, 2011]

There are lots and lots of Pomodoro-themed timers out there – or you can use any timer (as on your phone, wristwatch, a physical egg timer, whatever). (The Pomodoro timers sometimes have special features dedicated to the technique, and at least pictures of tomatoes, which as a fan of the veget— erm, fruit – I enjoy.)
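
The cycle itself is trivial to script if a bare terminal countdown is all you need – here’s a minimal sketch in Python using the classic intervals (my own illustration, not related to the Max for Live device below):

```python
# A bare-bones Pomodoro cycle: four 25-minute focus blocks with short breaks,
# then one long break. Swap the terminal bell for whatever alert you prefer.
import time

WORK, SHORT_BREAK, LONG_BREAK = 25, 5, 15  # minutes (the classic recipe)

def block(minutes, label):
    print(f"{label}: {minutes} minutes, starting now")
    time.sleep(minutes * 60)
    print(f"{label} over\a")  # \a rings the terminal bell - no ticking required

for i in range(1, 5):
    block(WORK, f"Focus block {i}")
    block(SHORT_BREAK if i < 4 else LONG_BREAK, "Break")
```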

pATCHES, a site and Patreon subscription creating resources for producers, has an experimental Max for Live plug-in. Apart from letting you run the thing inside your session, it even stops your transport when you’re due for a break – if you find that useful.

patches.zone/max-for-live/pomodoro

I’m curious to hear if people find this useful. It is easy to forget that, as much as we mystify music process, what we’re really taking care of is our brain.

from CDM Create Digital Music: cdm.link/2020/02/pomodoro-timer-ableton-live/


In quarantined China, concerts are going online as a safe place to meet

Even in the capital Beijing, once-crowded streets are now empty, as the 2019-nCoV coronavirus outbreak forces people to stay at home. The solution for live musicians: turn to streaming.

Streaming was already a popular hangout for Chinese musicians and artists across the region, before the viral shutdown of public space. That already included experimental artists looking to reach one another in their niche. The difference is, now online interaction in China is essential because people are effectively all isolated at home.

I caught a small window into this via Edward Sanderson, based in Beijing, who has been sharing the streams of his friends. (For this I’m again indebted to C-drik and his Syrphe Facebook group on experimental music in Asia and Africa, which I wrote up recently.)

Edward writes, “As group events in China have been curtailed because of the coronavirus threat, the online space has become more important for meeting up.” (Many of these events are also shared via Facebook, even though that site is blocked by default in China; in experimental music circles, it seems VPNs are popular.)

So, for instance, via streaming, two experimental clarinetists can play together.

Zhu Wenbo played a concert from his home in Beijing:

In Dali, located in the southwestern Chinese province of Yunnan, clarinet player Ding Chenchen could join in a day later, as a duet:

You won’t see anything until a stream is active, but there’s a streaming space on Shanghai-based Bilibili, with a URL like this one:

space.bilibili.com/505035552/

That’s a Chinese-only service that now boasts tens of millions of users, largely focused around games, animation, and comics, but evidently branching out into clarinet noise music. Artist Zhao Cong had announced a stream for today. I couldn’t locate it in time for this post, but here are some of her gorgeous textural compositions on Bandcamp – engrossingly fuzzy, lo-fi looped constructions:

Plus as part of the “Practice” series, new live-streamed performances were just announced with music by Zhu Wenbo, Zhou Yi, and Li Song (Chinese-language link, but you can get QR codes for concerts coming up in the next week):

mp.weixin.qq.com/s/opP6L9YTtevuRtwvjchpTg?fbclid=IwAR3qgeJNGXj0DmT6YLCeZtxqxXcJw8wJUjR1Fxubvfwo9gWvDCUtryXGH9I

Instead of links, event promos heavily feature images, and even QR codes. The number below Bilibili represents a “space” on the streaming site; head there at the appointed time, and you get live-streamed music. So think more underground – less Facebook notifications from the Boiler Room page everybody and their dog subscribes to.

Just as China has led the way in expanding the uses of mobile chat, mobile-based streaming has taken off in the country even as the West embraces the tech in fits and starts. (I’d say the reason is, markets like the USA still split usage between desktop and mobile, and are dominated by Facebook and Google and their business models – including for how music fees are structured.)

Anyway, our Chinese readers know far more about all of this than I do (from streaming to the current state of the quarantine). So, since we do have a large readership that’s now stuck at home –

Open call to Chinese artists and other readers under quarantine! If you do have some ideas for streaming concerts, go for it! I’ll be happy to share that across the readership here. We can basically create, for now, not Boiler Room, but a sort of Coronavirus Room for bored and isolated quarantined musicians.

And to everyone dealing with life in the shadow of this virus, we wish you the best health. A big thanks to all the people working to contain its spread and doing research to help humans respond in ways that are well-informed and effective. I am not an immunologist and I don’t know that I would make a very good one, but what I imagine we can do as musicians is to help share accurate information across communities, bring people together, and to process emotions.

from CDM Create Digital Music: cdm.link/2020/02/coronavirus-online-music-streaming/


The Wall of Sound reimagines a sampler-sequencer for public space and use

An oversized, lo-fi electronic sound instrument from Warsaw’s panGenerator lets the public collaborate on sonic graffiti.

The Wall of Sound was commissioned from the group for Katowice Street Art 2019: Urban Sound, in the south of Poland. It’s a big web of hexagonal nodes, each with small controls and a description, so you can record a sound, then sequence its playback.

The components will be familiar to anyone working with DIY electronics – ATmega328 chips in the nodes, ATtiny chips for the links, and “some cheap sound recording / playback chips that are giving the whole thing a lo-fi vibe.” Actually, maybe the independence of all those nodes is the most interesting part – a uniquely lo-fi modular.

Curators: Piotr Ceglarek / Zuzanna Waltoś from Biuro Dźwięku Katowice. Photos by Maciej Jędrzejewski.

More:

The Wall of Sound

from CDM Create Digital Music: cdm.link/2020/02/sampler-sequencer-public-art/


The latest attempt to make digital music tangible: NFC-powered Muse Blocks

We are living in an immaterial world. Muse Blocks – tiles with embedded NFC chips – are one attempt to make music feel physical again, and now they’re teaming up with a popular electronic music label.

Berlin-based Senic, a hardware startup focused on smart home solutions, devised Muse Blocks as a tangible product. And they’ve recruited underground tech house label Katermukke – Dirty Doering’s label, which has its own grungy Berlin afterhours vibes, fitting for its home base at the Kater Blau nightclub.

Launch video (German with English subtitles):

Basically, you can think of these tiles as connected art objects. Tap them to your phone (provided you have an NFC-capable smartphone), and up pops a streamed album or playlist. You can program the tiles yourself, meaning that you can have a physical object to go with your mixes – so it’s the 21st-century streaming equivalent of a mixtape, in theory.

www.senic.com/en/museblocks#faq

The pricing mirrors what we used to pay for CDs – 15EUR is the “special introductory price.” If you want them to look smart in your living room, you can buy a set that includes a bar to mount to a wall, and 7 Muse Blocks to put up on it, for a 69EUR bundle price. That of course makes them expensive for the promo use case.

Since the music is streamed, these are purely decorative, but then I suppose we buy all sorts of objects that are indeed purely decorative. It changes the streaming experience, at least, in that the ephemeral experience of streamed music gets its own object permanence and spatial location. By default, there’s support for Spotify, Apple Music, SoundCloud, and Tidal, but they also suggest Netflix, YouTube, and more interesting stuff like Apple HomeKit and IFTTT.

Uh, so then you can do this. Yes, I see that she’s also tapping her phone almost from the first interaction. Shhh. The design objects still look very cool.

I don’t know if this solves any problems here, but it does at least reframe the ongoing lack of tangibility in streamed music. And so that was obviously the appeal to Katermukke.

Now, if you’re wondering if you could DIY something like this – like maybe you want to release your next streamed album or mix inside a furry toy rabbit or a potted cactus – you can, of course. There are kits available from Identiv, tons of NFC and RFID stuff from Adafruit, and more. The mind boggles, actually, given the amount of stuff in our world constantly transmitting data.
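
If you go the USB-reader route, here’s a hedged sketch of what writing a streaming link to a blank tag can look like, using the open-source nfcpy and ndeflib Python libraries – the URL and reader model are placeholders, and this is not Senic’s app or API:

```python
# Hypothetical DIY tag-writing sketch with nfcpy + ndeflib and a supported
# USB reader/writer (e.g. an ACR122U).
import ndef   # pip install ndeflib
import nfc    # pip install nfcpy

ALBUM_URL = "https://example.bandcamp.com/album/my-mix"  # placeholder link

def write_url(tag):
    # Replace the tag's NDEF payload with a single URI record pointing at the album.
    if tag.ndef and tag.ndef.is_writeable:
        tag.ndef.records = [ndef.UriRecord(ALBUM_URL)]
        print("Wrote:", tag.ndef.records)
    else:
        print("Tag is not NDEF-writeable")
    return True  # keep the connection until the tag is removed

with nfc.ContactlessFrontend("usb") as clf:
    print("Touch a tag to the reader...")
    clf.connect(rdwr={"on-connect": write_url})
```

Tap the finished tag with an NFC-capable phone and it should offer to open that link – a rough DIY cousin of the Muse Blocks experience, minus the artwork.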

Even on Senic’s devices, you can use a free app to write your own data. It’s certainly more fun, if a lot more expensive, than a cut up paper giveaway, so – yeah, you could absolutely use this for a Bandcamp code if you wanted.

Here’s an example of the write process:

The problem with all of this remains that there’s no actual music data on the object – without the streaming service behind it, it’s effectively, well, useless. I still wonder what delivery medium makes sense for digital downloads. Most easily bought USB keys and SD cards are pretty unattractive, and arguably they don’t offer anything a download link can’t. CDs are at this point about as dead a format as cassette tapes and vinyl, but lack the collectability of either of those.

And so… oh, actually, I have nothing to say beyond that. If I come up with a conclusion, maybe I can embed it on an NFC object, and then… uh, never mind.

Let me just go dig up what NFC powers my Huawei phone has. See you.

from CDM Create Digital Music: cdm.link/2020/02/nfc-muse-blocks-katermukke/


Welcome to Hell: the marvelous, mad musical instruments of Ewa Justka

Some people claim electronic music is the work of the devil. Inventor Ewa Justka creates things that could actually prove it.

Ewa is a Glasgow-based, Polish-born sound artist, musician, and inventor. She can bang her way through a raw techno set, blind you with flashing lights driven by homemade circuits, or open a gateway to evil realms in unbridled noise – all of this at festivals like CTM, Unsound, Insomnia, and Sonic Acts. But she also builds fantastic instruments of her own – and you can buy them for your own abuse, or if you’re lucky, catch her at a workshop and make one for yourself.

There’s the Ladder to Hell, a synthesizer. It started as a resonant ladder filter a la Moog, but devolved into something far more distorted and psychotic. There’s a WASP filter in there, too. There are SCREAM and DRIVE knobs that are … not tame. You can input CV to the Moog and Wasp filters – that’s resonance on the Wasp filter, for some real punishment.

It can self-oscillate, or even make some subtler distorted timbres, too.

It’s as effective a sound processor as it is a synth, thanks to an audio input. Check the manual and full specs.

Ladder to Hell at Etsy

Here are some samples of the instrument, which you can buy on Bandcamp, then play on your next dinner date.

ewajustka.bandcamp.com/album/ladder-to-hell-samples

Then there’s the WhOoPsYnTh, a combination sampler + delay + LFO with similarly masochistic sonic possibilities. It’s inspired by the Pete Edwards design for a similar architecture – and Pete, like Ewa, is also someone who builds creations, then takes them into ecstatic noisy performances.

The WhOoPsYnTh just goes all out with that idea, screaming in pain in a very Ewa Justka-ish sonic voice. But the beauty of it is, you can again use external CV – here for delay length. You can cut up sounds and stretch them with the delay. You can really warp audio inputs with this.

More documentation on how to play it soon.

WhOoPsYnTh @ Etsy

My favorite review: “…the Optodeafener is evil, dangerous, exciting, rhythmic and feral. Do not hesitate.”

You can find loads of stuff on the Optotronics site, shipped from Glasgow (she was formerly in London). All of this is painstakingly handmade by the artist, so you get something truly unique.

www.etsy.com/shop/Optotronics?ref=l2-about-shopname

These are elaborate, full instruments, but Ewa can also make dark magic with more economical sets of parts. Meet the VOICE_ODDER 2, a thing that takes inputs and makes them … odd. And makes your neighbors hate … you.

Using light-sensitive wave oscillators and a delay, it’s palm-sized mayhem.

Take a class with Ewa to turn this…
…into this.

You’ll be able to build one of these yourself at an event I’m co-hosting on February 22 in Kaliningrad, Russia, so if you’re nearby – say, Gdansk, or Lithuania, or Minsk, or somewhere like Moscow that has cheap flights – you should come learn these dark arts with us. Sign up for the Facebook event and we’ll tell you how to join the workshop and make one yourself:

Space.Zero Kaliningrad

Thanks to the British Embassy in Moscow and the British Council for supporting UK artist Ewa’s Kaliningrad debut, as part of the UK-Russia Year of Music.

More from Ewa – who was also a co-host of and participant in the CTM Festival MusicMakers Hacklab with me:

ewajustka.tumblr.com/

from CDM Create Digital Music: cdm.link/2020/02/ewa-justka-evil-instruments/


Watch Elsa Garmire’s pioneering laser show from 1972 – futuristic and expressive even now

Forget that cheesy Pink Floyd stuff from the planetarium. Scientist Elsa Garmire used her optics chops to make lasers into a real instrument – and her work holds up today.

Sound and light artist and researcher Derek Holzer spotted this one; don’t miss his vector work and other synesthetic studies. It’s not a new article, but this story from Sloan Science & Film, published by the Museum of the Moving Image in Queens, New York, is worth visiting now.

Death of the Red Planet, 1973.

What’s telling is, it took an optical scientist and physicist to push the medium aesthetically. So even though Dr. Garmire was at a center that brought together engineers and artists – the legendary Experiments in Art and Technology (E.A.T.) – it was really her deep knowledge of how the technology worked that drove her to make something aesthetic, even when others were not.

As she told Science & Film:

There was a standard way of putting X-Y mirrors on the laser and getting what us scientists call Lissajous figures, which are sort of ovals. You can get lots of ovals of different sizes, moving in different directions, and you can run them with music and get a kind of wild pattern that to me has no aesthetic value at all.
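
If you want to see the kind of figure she means, the standard X-Y setup is easy to sketch – two sine waves, one per axis, plotted against each other, with the frequency ratio and phase setting the shape. A minimal illustration in Python:

```python
# The X-Y mirror trick in miniature: two sine waves, one per axis,
# traced against each other to form a Lissajous figure.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 4000)
a, b = 3, 2              # frequency ratio of the two "mirror" signals
delta = np.pi / 4        # phase offset between them

x = np.sin(a * t + delta)
y = np.sin(b * t)

plt.plot(x, y, linewidth=0.8)
plt.axis("equal")
plt.title(f"Lissajous figure, {a}:{b}")
plt.show()
```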

That aesthetic virtuosity is just as impressive now – maybe more so, having seen what we’ve seen – as it must have been in the ’70s. Working with filmmakers Ivan Dryer and Dale Pelton, she had her exquisite light performances captured on film and presented in public, and as a result helped launch the whole laser show industry. Going back to this early work is like watching Clara Rockmore on a Theremin – a deep level of virtuosity in a new medium that has been tough to match since. (Just, in this case, Dr. Garmire was essentially Clara and Lev Termen all in one – player and engineer.)

Go to their story and scroll down to take in LASERIMAGE – and do read the whole article; it’s fascinating:

Creatures of Light: LASERIUM

Their gallery on science and cinema is worth a long, long look, especially for those of us who love that intersection – more like this, please:

Here is Dr. Garmire at a lecture at the museum last year:

And she brought lasers, too:

On May 31, 2019, the Museum of the Moving Image’s Science on Screen series (movingimage.us/scienceonscreen) presented six short films by experimental film and light show pioneers. The screening was followed by a live laser demonstration by physicist Elsa Garmire, and a discussion between Garmire, Joshua White, and AJ Epstein, moderated by Executive Editor and Associate Curator of Science and Film Sonia Epstein. More: movingimage.us/scienceonscreen

Let’s linkhole a little further, though, because the 1973 film she worked on, Death of the Red Planet, was also a major moment in immersive sound, featuring what composer Barry Schrader claims was the “first quadraphonic electronic music soundtrack composed for a motion picture.” (Given my forays into Soviet audiovisual experimentation, I’m not sure everyone is comparing notes between east and west on the “first” business, but it at least counts as pioneering, even if “first” is always a risky word to use. Ditto the “first” laser show referenced in the article above.)

That score was made on the Buchla 200 system, so have at this juicy link here:

barryschrader.com/death-of-the-red-planet

Best of all, there is a full scan of the write-up of this film from American Cinematographer at the time. Yeah, cinematographer this!

econtact.ca/11_4/pelton_red_planet.html

As artists like Robert Henke and emerging artists around the world rediscover lasers, it seems now is the perfect time to connect their modern computer-controlled experiments with the history of the field. Watch this space.

And I’ll be eagerly anticipating the upcoming documentary on the topic the Sloan folks promise in the article.

from CDM Create Digital Music: cdm.link/2020/02/elsa-garmire-pioneering-lasers-1972/


Hear Jan Wagner’s intimate piano electronics, before they enter a planetarium dome

Maybe now is a perfect time for a moment of calm contemplation – premiering Jan Wagner’s “Kapitel 36” on the eve of a new album and a spatial planetarium premiere.

Kapitel, out on March 20 on the Quiet Love Label, is “autobiographical” ambient music. These are spontaneous, personal sketches that began as piano improvisations, but have sometimes had those piano imprints removed – a kind of lost wax approach to composition, piano molds for electronic textures.

“Kapitel 36” is an especially poignant, reflective moment in that series. Listen:

Berghain would probably be the last thing you’d expect to associate with this sound, but this sense of space and exploration also comes from an artist who has frequently mixed albums for the well-respected Ostgut Ton label attached to that club. And maybe that’s an ideal Berlin connection – piano sentiment, engineering precision, and ambiguous spaces for personal reflection all come together here.

But we’ve had plenty of music in industrial nightclubs. Now, Jan is joining a new wave of artists realizing music for immersive contexts, with fully spatialized sound made for particular architectures. Jan was invited by Spatial Media Lab to collaborate – that’s a recently formed artist/tech collective founded by Andrew Rahman and Timo Bittner. With Jan’s music – and a full-sized acoustic grand piano hauled into the space – they’ll transform the environment of the Zeiss Grossplanetarium Berlin into a unique listening environment.

I got the chance to work with Spatial Media Lab on their first planetarium outing in November 2018. What makes their effort unique is that they’re working to de-mystify the delivery technology for spatializing sound, allowing artists to be more hands-on and collaborative. That frees artists to spend significant time finely tuning their musical material to the space and playing creatively, rather than just wrestling with tech or turning over control to engineers. (You can read up on the collaboration I joined in 2018, Contentious Constant II – and we’re overdue for a check-up here.)

Jan has shared some thoughts with CDM on how this process worked:

What was the process for you, reworking material for a spatial context?

It was a totally new approach for me. The difference between stereo and immersive sound is enormous. I had to rethink the whole album and detach the production from the well-known stereo panorama cage. It wasn’t that simple, because everything was [originally] made in stereo. From the synth to the DAW, it’s all made for a stereo environment. So we had to [mix] the signals into mono, which we later scaled up to ambisonic sound.

After exporting all of the tracks, we imported them into the DAW Reaper … [which is able to] handle up to 64 outputs of each track, needed to play all the signals into the dome. We used the IEM Plugin Suite to build our scene and then mixed the tracks from scratch. [Ed.: SML used this combination before, and it’s great to work with artistically. IEM is free and open-source and easy to manage, and Reaper, of course, has some superb multichannel support and is fast, efficient, free to try, and inexpensive to own.]
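
A quick aside on that 64-channel figure, since it can sound arbitrary: a full-sphere ambisonic mix of order N carries (N+1)² channels, so 7th-order material – the top of the IEM suite’s range – needs exactly 64 of them, which is also the per-track ceiling in Reaper, before everything gets decoded down to however many loudspeakers the dome actually has.

$$\text{channels} = (N+1)^2, \qquad (7+1)^2 = 64$$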

Once I realized how far I could go when it comes to the production and writing process, my head almost exploded. There is no longer a stereo cage. You basically can do whatever you want. The signals can start right at the top of your head and fall down to your knees, surrounding you! This changes the whole process of how you create music.

I know your musical process shifted for this record; can you describe what changed?

I started recording in the same way. The piano improvisation is still the root of it all, but it is no longer necessarily the main part of the production. I didn’t want to be constricted by the piano and often I just muted it after adding some synth layers. The piano is no longer the lead voice.

How did Tobias Preisig get involved in the project – and now on the same bill?

Last year I produced Tobias Preisig’s solo debut Diver. He wanted to concentrate on the essence of his music and dive deeper into his instrument and discover the real needs of his art. Tobias and I share the same approach to music, and while planning this event I wanted him to be part of it. His music is so immersive by default and it fits perfectly into the planetarium environment.

If you’re in Berlin, you can catch the “Spherea” program with both artists at the Zeiss-Grossplanetarium in Prenzlauer Berg.

Spherea, presented by Jan Wagner & Tobias Preisig

More on the Spatial Media Lab:

janwagner.bandcamp.com/

from CDM Create Digital Music: cdm.link/2020/02/jan-wagner-piano-planetarium/



AI upscaling makes this Lumiere Bros film look new – and you can use the same technique

A.I.! Good gawd y’all – what is it good for? Absolutely … upscaling, actually. Some of machine learning’s powers may prove to be simple but transformative.

And in fact, this “enhance” feature we always imagined from sci-fi becomes real. Just watch as a pioneering Lumiere Brothers film is transformed so it seems like something shot with money from the Polish government and screened at a big arty film festival – not something from 1896. It’s spooky.

It’s the work of Denis Shiryaev. (If you speak Russian, you can also follow his Telegram channel.) Here’s the original source, which isn’t necessarily even a perfect archive:

It’s easy to see the possibilities here – this is a dream both for archivists and people wanting to economically and creatively push the boundaries of high-framerate and slow-motion footage. What’s remarkable is that there’s a workflow here you might use on your own computer.

And while there are legitimate fears of AI in black boxes controlled by states and large corporations, in this case the results are either open-source or commercially available. There are two tools involved.

Enlarging the photos and video frames is the work of a commercial tool, Topaz’s Gigapixel AI, which promises 600% scaling “while preserving image quality.”

It’s US$99.99, which seems well worth it for the quality payoff. (More for commercial licenses; there’s also a free trial available.) Uniquely, that tool is also optimized for Intel Core processors with Iris Plus graphics, so you don’t need to fire up a dedicated NVIDIA GPU. They don’t say a lot about how it works, other than that it’s a deep learning neural network.

We can guess, though. The trick is that machine learning trains on existing high-resolution images so it can make mathematical predictions about lower-resolution ones. There’s been copious documentation of AI-powered upscaling, and why it works mathematically better than traditional interpolation algorithms. (This video is an example.) Many of those approaches used GANs (generative adversarial networks), though, and I think it’s a safe bet that Gigapixel is closer to this (as slightly implied by the language Gigapixel uses):

Deep learning based super resolution, without using a GAN [Towards data science]

More expert data scientists may be able to fill in details, but at least that article would get you started if you’re curious to roll your own custom solution. (Unless you’re handy with Intel optimization, it’s probably worth the hundred bucks – but for those of you who are advanced coders and data scientists, knock yourselves out.)
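
To make that training idea concrete, here’s a minimal sketch – my own illustration, not Gigapixel’s code – of how (low-res, high-res) training pairs get manufactured by deliberately degrading clean frames, so a network can learn to reverse the degradation:

```python
# Building (low-res, high-res) training pairs by block-averaging clean frames.
# A super-resolution network is then trained to map the left back to the right.
import numpy as np

def downscale(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Block-average an (H, W, C) float array by `factor` in each dimension."""
    h, w, c = frame.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

high = np.random.rand(256, 256, 3)   # stand-in for a clean archival frame
low = downscale(high)                # the degraded input the model sees
print(high.shape, "->", low.shape)   # (256, 256, 3) -> (64, 64, 3)
```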

The quality of motion may be just as important, and that side of this example is free. To increase the framerate, they employ a technique developed by an academic-private partnership (Google, the University of California, Merced, and Shanghai Jiao Tong University):

Depth-Aware Video Frame Interpolation

Short version – you combine some good old-fashioned optical flow prediction together with convolutional neural networks, and then use a depth map so that big objects moving through the frame don’t totally screw up the processing.

Result – freakin’ awesome slow mo go karts, that’s what! Go, math!

This also illustrates that automation isn’t necessarily the enemy. Remember watching huge lists of low-wage animators scroll past at the end of movies? That might well be something you want to automate (in-betweening) in favor of more-skilled design. Watch this:

A lot of the public misperception of AI is that it will make the animated movie, because technology is “always getting better” (which rather confuses Moore’s Law and the human brain – not related). It may be more accurate to say that these processes will excel at pushing the boundaries of some of our tech (like CCD sensors, which eventually run into the laws of physics). And they may well automate processes that were rote work to begin with, like in-betweening frames of animation, which is a tedious task that was already getting pushed to cheap labor markets.

I don’t want to wade into that, necessarily – animation isn’t my field, let alone labor practices. But suffice it to say, even a quick Google search will come up with stories like this article on Filipino animators and their low wages and poor conditions. Of course, the bad news is that just as those workers collectivize, AI could automate their jobs away entirely. But it might also mean a Filipino animation company could use this software to compete on a level playing field with the companies that once hired it – now doing the actual creative work.

Anyway, that’s only animation; you can’t outsource your crappy video and photos, so it’s a moot point there.

Another common misconception – perhaps one even shared by some sloppy programmers – is that processes improve the more computational resources you throw at them. That’s not necessarily the case – and objectively, not always the case. In any event, the fact that these techniques work now, and in ways that are pleasing to the eye, means you don’t have to mess with ill-informed hypothetical futures.

I spotted this on the VJ Union Facebook group, where Sean Caruso suggests this workflow: since you can only use Topaz on sequences of images, you can import those into After Effects and then use Twixtor Pro to double the framerate, too. Of course, coders and people handy with tools like ffmpeg won’t need the Adobe subscription. (ffmpeg not so familiar? There’s a CDM story for that, with a useful comment thread, too.)
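
For the non-Adobe route, here’s a hedged sketch of that round trip with ffmpeg driven from Python – the filenames and target framerate are placeholders, and step 2 stands in for whichever upscaler or frame interpolator you run over the exported frames:

```python
# Explode a clip into numbered PNGs for a frame-by-frame tool, then reassemble.
import os
import subprocess

SRC = "lumiere_1896.mp4"   # placeholder input file
OUT_FPS = "60"             # placeholder target framerate after interpolation

os.makedirs("frames", exist_ok=True)
os.makedirs("processed", exist_ok=True)

# 1. Video -> image sequence.
subprocess.run(["ffmpeg", "-i", SRC, "frames/%06d.png"], check=True)

# 2. Run your upscaler / interpolator over frames/, writing to processed/
#    with the same numbering. (Not shown - Gigapixel, DAIN, etc.)

# 3. Image sequence -> video.
subprocess.run([
    "ffmpeg", "-framerate", OUT_FPS, "-i", "processed/%06d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "restored.mp4",
], check=True)
```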

Having blabbered on like this, I’m sure someone can now say something more intelligent or something I’ve missed – which I would welcome, fire away!

Now if you’ll excuse me, I want to escape to that 1896 train platform again. Ahhhh…

from CDM Create Digital Music: cdm.link/2020/02/ai-upscaling-framerate/