"Human Supervision and Control in Engineering and Music"
Abstract

This is a "live" report on the composition of "Visional Legend" from the point of view of "Human Supervision and Control in Engineering and Music" (e.g., multimedia/interactive art, the interaction between breath and the music of the Sho, and human-media interaction with sensors, computer music and computer graphics). You can follow the "live" process of my composition with the full information and score of the work. Please enjoy!
1. The Motive

As a composer and a researcher of computer music, I have two different styles of composition: "needs"-oriented and "seeds"-oriented. "Visional Legend" is a special case in my output, because it was composed from both motivations.
"Seeds"-oriented composition

Through my research and development in computer music, I have found and developed many compositional concepts and ideas: randomness, chaos, neural networks, genetic algorithms, multi-agent multimedia, multi-layer algorithmic composition, etc. These became experimental "seeds" of ideas in my compositions. On the other hand, I (as an engineer) have produced many experimental sensors (testing new sensing devices, new noise-reduction techniques, new microprocessor systems for human performance, etc.). These were also "seeds" of my research, and were often used as instruments after their development.
"Needs"-oriented composition

As an ordinary composer, my composition begins with the study of a "triggering" theme (e.g., poetry, traditional or folk instruments, natural or strange sounds, special musical styles, etc.). On the other hand, I am sometimes asked to develop special sensors or instruments for musicians or artists. This case is also "needs"-oriented, but here I am not a composer but an engineer and a collaborator on the project; I cannot produce such "special instruments" as a composer.
The Motive of the composition of "Visional Legend"
Fig.2 : Tamami Tono plays the Sho
On the other hand, I had been inspired by, and had kept in mind for over 20 years, one poem (written by the Japanese poet Shimpei Kusano). As a composer, I have tried to set this poem in many styles of music, but all attempts failed because of the depth of its world. Finally I succeeded in composing it with a new method: computer music. Of course, I was deeply affected by the Sho sounds performed by Tamami Tono, and the total "Japanese" atmosphere is the main theme of this music.
2. Musical Elements

The whole sound of this work consists of Sho sounds performed by Tamami Tono and a reading of the poem by Junya Sasaki (baritone). There are two types of sources: a pre-composed CD (background part) and live signal processing of the Sho sound. The performance at this workshop is a partly revised version of the 1998 premiere, but the background CD part and the performer's score are exactly the same as in the premiere version.
Studio work for the CD part

First, we recorded Sho sounds at a studio of Keio University SFC. Of course, we used not only the "musical" (traditional) performing style but also tried many contemporary performance techniques: for example, staccato, accents, noise, distortion and singing. Traditional Sho performers refuse these "vulgar" manners because they might be excommunicated, but Tamami was willing to challenge the possibilities of Sho sound, so we found many new techniques and good sound materials.
The Sho sounds (single notes, chords, noises, etc.) were sampled, and the recorded sounds were divided into many parts and trimmed.

The baritone voice was recorded at a studio in Tokyo and sampled into an SGI Indy workstation, then edited with many of the same techniques as the Sho sounds. In composition I attach importance to the "singing/speaking voice" because I have composed over 100 choral works, but in this work I apply only simple effects to the reading voice. ["Ogress-II": featuring voice processing throughout]
System and Score

This work is not only live computer music but also multimedia art, so the system consists of (1) the background CD part, (2) live signal processing of the Sho sound via microphone, and (3) a live graphics part. The premiere version (1998) was constructed with 3 video players, 3 CCD cameras and an original MIDI video switcher which exchanged the live visual sources. This version (2001) adds another Macintosh computer which runs the "Image/ine" software and performs real-time image processing driven by MIDI information from the Max algorithm.
The live Sho performance part is fully improvised by Tamami Tono, so she uses the score only as a "cue-sign".
Live processing with Kyma

Fig.9 shows the Kyma patch for "Visional Legend" at one moment of the composition. The live Sho sound is sampled and signal-processed in real time in these modules. Many parameters of the signal processing are assigned to MIDI parameters and controlled in real time from Max. Because this work uses a pre-composed CD part, the live processing in Kyma is very simple and compact compared with my other works (which use live processing only).
Fig.10 shows one example of a signal-processing block which acts as a "real-time granular sampling" effect. In this patch there are 29 grains (smoothly enveloped, live-sampled sound elements), which are randomly re-generated with the [GrainDur], [GrainDurJitter], [Density] and [PanJitter] parameters via MIDI "control change" messages.
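The parameter mapping described above can be sketched in code. The following Python sketch is only an illustration, not the Kyma implementation: the value ranges and the exact way each MIDI control change (0-127) scales the grain duration, jitter and pan are assumptions.

```python
import random

def cc_to_range(cc_value, lo, hi):
    """Map a MIDI control-change value (0-127) linearly onto [lo, hi]."""
    return lo + (hi - lo) * cc_value / 127.0

def make_grains(n, grain_dur_cc, jitter_cc, pan_jitter_cc):
    """Generate n grains with randomized duration and pan (illustrative only)."""
    base_dur = cc_to_range(grain_dur_cc, 0.01, 0.5)   # seconds (assumed range)
    jitter   = cc_to_range(jitter_cc, 0.0, 0.5)       # +/- fraction of base_dur
    pan_jit  = cc_to_range(pan_jitter_cc, 0.0, 1.0)   # spread around center pan
    grains = []
    for _ in range(n):
        dur = base_dur * (1.0 + random.uniform(-jitter, jitter))
        pan = 0.5 + random.uniform(-pan_jit, pan_jit) / 2.0
        grains.append({"dur": dur, "pan": min(1.0, max(0.0, pan))})
    return grains

# 29 grains, as in the Kyma patch of Fig.10
grains = make_grains(29, grain_dur_cc=64, jitter_cc=32, pan_jitter_cc=100)
```

Each incoming control-change message would simply re-run this mapping, so the grain cloud changes continuously with the live MIDI control.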
3. Graphical Elements

I have created many works of multimedia art, but in almost all cases I have had collaborators for the graphical part, because I am a composer. The first version of "Visional Legend" (1998), however, was composed entirely by myself, including the graphical part (3 video images and a slide-show of computer graphics); it was a rare case. For the newest version of "Visional Legend" (2001), I selected two collaborators, Misaki Kato and Masumi Ohyama, to create an original graphical part for this work.
Visual sources

As a first step, I showed the background CD part and the poem to the collaborators, and we discussed the imagery and atmosphere. Then we went on a small trip to a temple and took photos and videos of Jizo statues and other subjects. The captured and processed images from this trip are used as the visual sources of the graphical part of this work.
Live video switching

Fig.12 shows the system block diagram of the first version of "Visional Legend" (1998). In this system I used an original "MIDI video switcher" for live control of pre-created graphic contents and live video images from 3 CCD cameras. This was the "analog" processing style of the graphical part.
Fig.12 : system of old "Visional Legend" (1998)
Image/ine, QT movies, Firewire

The newest version of "Visional Legend" (2001) has evolved to completely "digital" processing, not only in the musical part but also in the graphical part of the work. I chose a software package called "Image/ine" running on a Macintosh.
We also recorded QuickTime movies of Tamami Tono performing the Sho, and these movies are used as elements of the graphical part of the work. I am experimenting not only with QT movies but also with live CCD camera images via Firewire (IEEE 1394, iLink) in Image/ine, so there remains a small possibility of using live CCD input in the performance at the Kassel concert in 2001.
4. Interactive Elements

In my composition I prefer "live" computer music to the "fixed" (sequencer, playback-only) style. Thus there must be interfaces between the human performer and the live computer system, like the "instruments" of traditional music. The "interface" is very important in interactive art such as this music, and I have been researching, experimenting with and developing new interfaces using sensor technology and microelectronics.
Bi-directional breath sensing

Normally the Sho player must keep sitting calmly, so I could not use popular interfaces like foot pedals, foot volume controls, or optical beams sensing arm movements. On the other hand, the breath stream through each bamboo pipe is critical, and it is very difficult to detect the bi-directional pressure values for each pipe individually. The normal Sho has 17 bamboo pipes with reeds, but 2 of them are used only for decoration, not for sound generation. So Tamami and I replaced one of these pipes with a "sensing pipe" connected to a small air-pressure sensor module. This sensor detects the bi-directional air-pressure value of the "air room" at the bottom of the Sho. Fig.15 shows the pressure sensor and the Sho.
A/D, AKI-H8 and MIDI
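The heading above suggests the signal path: the pressure sensor's analog value passes through an A/D converter on the AKI-H8 microcontroller board and is sent out as MIDI. As a hedged illustration of the bi-directional conversion (the 10-bit resolution, center value and controller number below are all assumptions, not taken from the actual firmware), the mapping might look like this:

```python
def pressure_to_midi_cc(adc_value, center=512, channel=0, controller=2):
    """
    Convert an assumed 10-bit A/D reading (0-1023) of the bi-directional
    breath sensor into a 3-byte MIDI control-change message.
    'center' is the no-breath reading; values above it mean blowing,
    values below it mean drawing.  All constants here are illustrative.
    """
    # Scale the signed deviation (-512..+511) onto 0..127, with 64 = no breath.
    deviation = adc_value - center
    value = 64 + (deviation * 64) // 512
    value = max(0, min(127, value))          # clamp to the MIDI data range
    status = 0xB0 | (channel & 0x0F)         # control change on given channel
    return bytes([status, controller & 0x7F, value])

# No breath -> center value 64; full blow -> 127; full draw -> 0
msg = pressure_to_midi_cc(512)
```

A receiving Max patch would then see one continuous controller whose value rises above 64 for blowing and falls below 64 for drawing.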
5. Performance

The rehearsal is the most important process in live computer music, because many parameters can be changed on stage during rehearsal. "Fixed" media like CD, MD, DAT and DVD cannot be changed in rehearsal, apart from the balance settings of the PA. The sequencer style is also difficult to change as a whole. But it is easy for me to arrange, trim and change the algorithms and parameters of the music during rehearsal, so the music can change drastically in a short time.
Max control via MIDI

All live control in the performance is generated by Max, which receives the MIDI output of the Sho breath sensor. The breath-sensing information from the Sho is pattern-matched in real time and recognized as "performance information"; this triggers changes of sounds and graphics, and also provides continuous control of signal-processing parameters, color information in the visual effects, etc. Fig.23 - Fig.27 show the original Max patches for "Visional Legend". In the performance, I sit at the computer desk and control the main Max patch in real time as another performer.
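The pattern-matching step described above can be sketched as a simple state machine. This Python sketch is an assumption about how the Max patch might classify the breath stream; the threshold values and event names are illustrative, not taken from the actual patches.

```python
# Illustrative thresholds on the breath CC value (64 = no breath, assumed)
BLOW_THRESHOLD = 80   # above this, treat it as a strong blow
DRAW_THRESHOLD = 48   # below this, treat it as a strong draw

def classify_breath(cc_values):
    """Turn a stream of breath CC values into discrete performance events."""
    events = []
    state = "rest"
    for v in cc_values:
        if v > BLOW_THRESHOLD and state != "blow":
            state = "blow"
            events.append(("blow_start", v))
        elif v < DRAW_THRESHOLD and state != "draw":
            state = "draw"
            events.append(("draw_start", v))
        elif DRAW_THRESHOLD <= v <= BLOW_THRESHOLD and state != "rest":
            state = "rest"
            events.append(("rest", v))
    return events
```

Each discrete event could trigger a scene change in sound or graphics, while the raw CC value is passed on unchanged as a continuous signal-processing parameter.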
"Chance" and "Improvisation"

As you will see when you read and study the score of "Visional Legend", there are no "fixed" notes or chords in the Sho performer's part. The Sho part of this work may be played as full improvisation, so the Sho performer must "listen, feel, create" the musical images and play the Sho together with the background CD sound and the live-generated graphics. In this work I ask the performer for "human supervision and control in music", and I construct the environment for real-time composition and performance scientifically (as computer music).
In my algorithmic composition, I usually use many "random" objects in the Max patch which control the music as a whole. This does not mean that my music is random or statistical music; rather, my music stands upon a (traditionally) tonal atmosphere or a simple musical style/theory. I never use the output of a "random" object directly to generate musical parameters; I always add a "musical filtering" algorithm on top of the randomness, which might be called the "God of the musical world". Of course, this is just the same as in traditional composition. For example, the direct output of a "random" object or of sensor MIDI information is an integer (0-127), so it generates a 12-tone chromatic scale, or atonality, when used directly as a MIDI note-number parameter. A DTM (sequencer) composer sets each note on the score, so the scale or tonality is fixed in each scene of the music. But I use a "weighted table" algorithm everywhere in my composition. This table converts input integers into many kinds of musical parameters, and the probability or weight of the conversion can easily be changed in real time. So the scale or tonality is flexible and changeable by chance or by the performance (sensor input) at every moment of the music.
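The "weighted table" idea can be sketched as follows. This Python example is only an illustration of the concept: the pentatonic scale and the weight values are assumptions, and in the real Max patches the table contents would be changed live during the performance.

```python
import random

# A "weighted table": raw integers (0-127) are filtered so that only
# notes of the current scale appear, with adjustable probabilities.
SCALE = [0, 2, 4, 7, 9]            # pentatonic pitch classes (illustrative)
WEIGHTS = [4, 2, 3, 4, 1]          # relative probability of each degree

def musical_filter(raw, base_note=60):
    """Map a raw 0-127 value (random object or sensor) onto the scale."""
    rng = random.Random(raw)                   # deterministic per input value
    degree = rng.choices(SCALE, weights=WEIGHTS, k=1)[0]
    octave = raw // 32 - 2                     # spread over a few octaves
    return base_note + degree + 12 * octave

# A chromatic stream of raw values now yields only pentatonic notes;
# editing SCALE or WEIGHTS in real time re-tunes the whole texture.
notes = [musical_filter(v) for v in range(0, 128, 16)]
```

Changing the weights at a cue point shifts the harmonic color of the music without touching the random source itself, which is the point of the "musical filtering" described above.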
"Silence" as music

I use a special video projector for the performance of "Visional Legend". Normal video projectors cannot be shut down immediately, because the lamp must be cooled down by the fan to avoid heat damage. But in the final part of this work only the Sho sound remains in deep silence, so I chose a special video projector which can be shut down at any time. Thus you (and also Tamami and myself) can enjoy the perfect silence with the natural Sho sound (of course, the PA is shut down at this moment). This is a different concept from John Cage's, but I think that silence is rich music in Japanese traditional culture.