By Joel Rich on YouTube.
The sounds around 2 minutes in do sound a bit like a cat
From Jeff Donovick via our Facebook page.
The photo features the cat Otava sitting on the command chair in front of a Roland A80 keyboard controller. The controller is connected to Native Instruments Reaktor with RAZOR and SKANNER synths.
For the Tiger Synth VSTi, some sound examples: single patches performed skillfully by Dimitri Schkoda, plus some shorter ones by HGF.
Only one patch is used in each example, showing that you can get a wide variety of sounds, from fairly straightforward to really complex. With the enhanced real-time control options, this is a real performance synth suiting many needs.
The video has a retro movie sort of feel, does it not?
Last week we lost Dennis M. Ritchie, whose work influenced much of what we do with computers today both as users and software developers.
From the New York Times obituary:
In the late 1960s and early ’70s, working at Bell Labs, Mr. Ritchie made a pair of lasting contributions to computer science. He was the principal designer of the C programming language and co-developer of the Unix operating system, working closely with Ken Thompson, his longtime Bell Labs collaborator…
It was only a week earlier that we were marking the passing of Steve Jobs and noting the contributions he made to Apple via NeXT. NeXT’s operating system, which became Apple’s Mac OSX, is a Unix system. Similarly, much of the heavy computer programming, from large-scale servers to iPhones, is done in C and its descendants C++ and Objective C.
“The tools that Dennis built — and their direct descendants — run pretty much everything today,” said Brian Kernighan, a computer scientist at Princeton University who worked with Mr. Ritchie at Bell Labs.
A great many of us who studied computer science and practiced computer programming have the classic text that Kernighan and Ritchie co-wrote, The C Programming Language, known affectionately and authoritatively as “K&R”.
C is at its heart a “systems programming language.” It’s a small language, structured in the imperative style of Algol and Pascal, but its individual functions and operations are close to the machine: simple bit-shift, arithmetic, and memory-location (pointer) operations. As such, it is very unforgiving compared to some of its predecessors, but it is efficient and simple and has enough expressive power to build operating systems like Unix, scientific-computing software, and the inner workings of most applications through its object-oriented successors, C++ and Objective C. Much of my software work has centered around these descendant languages, but when it comes to doing actual computation, it’s still C.
“C is not a big language — it’s clean, simple, elegant,” Mr. Kernighan said. “It lets you get close to the machine, without getting tied up in the machine.”
Higher-level languages, like the PHP used to build this site, are ultimately implemented as C and C++ programs. So both this website and the device you are using to read it are products of Dennis Ritchie’s work.
Like so many others, we are marking the passing of Steve Jobs. But one thing that is often overlooked in the many tributes is his leadership at NeXT.
I first came across NeXT in 1989, as I was starting to explore the world of computer music. It was an ideal machine for its time for music and media work. It had a powerful operating system, it had a programmable DSP, it had SoundKit and MusicKit libraries specifically designed for music programming. And the original NeXT Cube was quite a striking physical object.
I had even written a couple of letters and formal research proposals to NeXT and directly to Steve Jobs to support my incipient research work in high school. Of course, nothing came of it, but it was an interesting exercise in learning how to write a proposal. And my opportunities to try out the system (via various institutions) gave me a sense of what a more ideal computing environment could be like. As a music and computer-science student at Yale, I did in fact have the opportunity to do work on NeXT systems, but by that point the computing world, and the computer-music world, were moving on.
Apple acquired NeXT in late 1996. In the process, they reacquired Steve Jobs, whose return marked the Apple that we know today, and also the NeXTSTEP operating system, which lives on to this day as the foundation of Mac OSX. Every contemporary MacBook and MacPro is in many ways a late model NeXT computer. Indeed, it was only after the introduction of OSX that I purchased my first Mac (an iBook) in 2003 and gradually shifted into being one of those annoyingly obsessive Mac/iPhone/iPad users. All of my music and photography work is done on these devices, as are all posts to this blog and updates to our Twitter and Facebook streams. Even my day job is intimately connected to the technology from Apple. We associate these technologies and designs with Steve Jobs, but much of it can be traced back to what he and others pioneered at NeXT.
The theme of this week’s Photo Hunt is digital. Rather than simply use a digital photo – which could be any photo ever taken of Luna – I chose a couple of images that demonstrate the unique opportunities of the medium. A digital photo is really just a stream of numbers, not unlike digital audio, and can be processed in countless ways using digital signal processing or applying other mathematical functions.
For a piece I originally did in 2007, I took one of Luna’s adoption photos from Santa Cruz County Animal Services and applied an algorithm that overlaid these colored bands, as shown above. The color bands were generated using a set of hastily chosen trigonometric and hyperbolic functions applied to the timeline of the animation sequence. These photos are stills from the full animation.
I did these using image and video extensions to Open Sound World – one nice feature of that work was that I could use the same functions for both audio and video, and “see” what a particular audio-processing algorithm looked like when applied to an image. And I would probably use the Processing environment for future visual work, perhaps in conjunction with OSW.
Weekend Cat Blogging #309 and Carnival of the Cats are both being hosted by Billy SweetFeets this weekend. Perhaps Luna’s animation could be part of one of the dance videos they often feature.
And the Friday Ark is at the modulator.
A special note this week. Our friend Judi at Judi’s Mind over Matter (home of Jules and Vincent) has information on how to help animals affected by the storms and tornadoes in the southeast US. They live in Alabama, not far from the place that was hit hardest by the tornadoes. We’re glad they’re safe, and able to provide this information for those who would like to help.
Two Sundays ago, I attended a performance at Artist Television Access featuring electro-acoustic audio-visual improvisations with John Butcher, Bill Hsu and Gino Robair. Bill Hsu provided the visual elements of the performance using the visualization environment Processing. (I have been interested in Processing for a while, and used it in the abstract graphics in my video piece featuring Luna.) Gino Robair had an array of electronic devices, including a Blippo Box and an Alesis effects unit, and acoustic percussion for a variety of sounds. John Butcher provided the low-tech counterpoint on saxophone.
I arrived late to an already pitch-black room as the first piece was concluding. (I was late because I was looking for a parking spot, which in the Mission is usually an ordeal. I rarely drive there, but I had to on this night because of other obligations.) The next piece began in darkness, with small colored dots and a very sparse musical texture. The sound primarily consisted of electronic drones and long saxophone tones. As the dots began to expand, so did the music. It became more active and featured more percussive sounds from Robair. As the graphics grew more complex, with swells and streaks, the music veered from discrete sounds to outright skronking, with long runs of fast notes from both performers.
The next piece featured graphics that reminded me a bit of finite-element simulations, with large numbers of particles forming in and out of patterns. At first the particles seemed to form glyphs or characters of a written language, but then they dissolved into smoke. This was set against sparse music featuring bowed metal. (It was too dark to see, but I am pretty sure this was Gino Robair’s signature cracked cymbal.) The graphics shifted gradually over time, sometimes seeming more like water, sometimes more like sand. Towards the end, the music (both saxophone and percussion) moved towards rather piercing high tones.
After a brief intermission, the performance resumed with the now familiar sound of the Blippo Box. It is interesting how, despite its chaotic processes, this instrument has a very distinctive set of timbres and contours that are quickly recognizable. I did find out after the performance that the Blippo Box was being used in conjunction with an Alesis effects unit, which added more dimensions to the sound without changing its inherent character. Butcher attempted to match the sound on his saxophone, coming into unisons on the steady-state pitches, but then moving into chaotic runs of fast notes and growling timbres during the more turbulent output from the synthesizer. The graphics during this piece focused on two closed elements, one yellow and one purple. They were mostly round shapes that curved in on themselves, but they occasionally coalesced into representational objects, such as a complex cross shape with sub-bars on the end (a bit like an Eastern Orthodox crucifix), and vague outlines of human figures.
The next piece was a sharp contrast musically, with drum samples and live percussion set against percussive saxophone effects, such as key clicks and tonguing. The graphics featured a red star with a roiling plasma surface that expanded over time.
The graphics in the final piece connected most strongly with my own visual aesthetics. It featured patterns of vertical bars overlaid periodically with large dots. The patterns started out simple, focusing on just a few elements and colors, but got more complex and richly colored over time. The music set against these visuals again featured the Blippo Box and its constantly changing but distinctive sound palette. But rather than attempting to match it, Butcher’s saxophone provided a counterpoint. He wove together active lines and melodies that were on occasion distinctively jazz-like, moving back and forth between long runs and series of loud inharmonic tones.
On Tuesday, I went to the Center for New Music and Audio Technologies (CNMAT) in order to continue preparing for the Regent’s Lecture concert on March 4. I brought most of the setup with me, at least the electronic gear:
Several pieces are going to feature the iPad (yes, the old pre-March 2 version) running TouchOSC controlling Open Sound World on the Macbook. I worked on several new control configurations after trying out some of the sound elements I will be working with. Of course, I have the monome as well, mostly to control sample-looping sections of various pieces.
One of the main reasons for spending time on site is to work directly with the sound system, which features an 8-channel surround speaker configuration. Below are five of the eight speakers.
One of the new pieces is designed specifically for this space – and to also utilize a 12-channel dodecahedron speaker developed at CNMAT. I will also be adapting older pieces and performance elements for the space, including a multichannel version of Charmer:Firmament. In addition to the multichannel work, I made changes to the iPad control based on the experience from last Saturday’s performance at Rooz Cafe in Oakland. It is now far more expressive and closer to the original.
I also broke out the newly acquired Wicks Looper on the sound system. It sounded great!
The performance information (yet again) is below.
Friday, March 4, 8PM
Center For New Music and Audio Technologies (CNMAT)
1750 Arch St., Berkeley, CA
CNMAT and the UC Berkeley Regents’ Lecturer program present an evening of music by Amar Chaudhary.
The concert will feature a variety of new and existing pieces based on Amar’s deep experience and dual identity in technology and the arts. He draws upon sources as diverse as jazz standards, Indian music, film scores and his past research work, notably the Open Sound World environment for real-time music applications. The program includes performances with instruments on laptop, iPhone and iPad, acoustic grand piano, do-it-yourself analog electronics, and Indian and Chinese folk instruments. He will also premiere a new piece that utilizes CNMAT’s unique sound spatialization resources.
The concert will include a guest appearance by my friend and frequent collaborator Polly Moller. We will be doing a duo with Polly on flutes and myself on Smule Ocarina and other wind-inspired software instruments – I call it “Real Flutes Versus Fake Flutes.”
The Regents’ Lecturer series features several research and technical talks in addition to this concert. Visit http://www.cnmat.berkeley.edu for more information.
I have been busily preparing this weekend for the first of my UC Berkeley Regents’ Lecturer presentations:
Open Sound World (OSW) is a scalable, extensible programming environment that allows musicians, sound designers and researchers to process sound in response to expressive real-time control. This talk will provide an overview of OSW, past development and future directions, and then focus on the parallel processing architecture. Early in the development of OSW in late 1999 and early 2000, we made a conscious decision to support parallel processing as affordable multiprocessor systems were coming on the market. We implemented a simple scalable dynamic system in which workers take on tasks called “activation expressions” on a first-come, first-served basis, with facilities for ordering and prioritization to deal with real-time constraints and synchronicity of audio streams. In this presentation, we will review a simple musical example and demonstrate performance benefits and limitations of scaling to small multi-core systems. The talk will conclude with a discussion of how current research directions in parallel computing can be applied to this system to solve past challenges and scale to much larger systems.
You can find out more details, including location for those in the Bay Area who may be interested in attending, at the official announcement site.
With slides out of the way, I can now turn to the more fun part, the short demos. This gives me an opportunity to work with TouchOSC for the iPad as a method for controlling OSW patches. We will see how that turns out later.