“scoreLight” is a prototype musical instrument capable of generating sound in real time from the lines of doodles, as well as from the contours of nearby three-dimensional objects (hands, a dancer’s silhouette, architectural details, etc.). There is neither camera nor projector: a laser spot explores the shape as a pick-up head would search for sound over the surface of a vinyl record, with the significant difference that the groove is generated by the contours of the drawing itself. Sound is produced and modulated according to the curvature of the lines being followed, their angle with respect to the vertical, and their color and contrast. Sound is also spatialized (see the quadraphonic setup below): panning is controlled by the relative position of the tracking spots, their speed and acceleration. “scoreLight” implements gesture, shape and color-to-sound artificial synesthesia; abrupt changes in the direction of the lines trigger discrete sounds (percussion, glitches), thus creating a rhythmic base (the length of a closed path determines the overall tempo).
Artist Statement
A previous installation, “Sticky Light”, questioned the role of light as a passive substance used for contemplating a painting. Illumination is not a passive ingredient of the observation process: the quality of the light, its relative position and angle fundamentally affect the nature of the perceived image. “Sticky Light” exaggerated this by making light a living element of the painting. “scoreLight” introduces another sensory modality: it not only transforms graphical patterns into temporal rhythms, but also makes audible more subtle elements such as the smoothness or roughness of the artist’s stroke, the texture of the line, etc.
This installation is an artistic approach to research on artificial sensory substitution and artificial synesthesia, very much along the lines of Golan Levin’s works in the field [3]. In particular, it can be seen as the reverse (in a procedural sense) of the interaction scheme of Golan Levin and Zach Lieberman’s “Messa di Voce” [2], in which the speed and direction of a curve continuously being drawn on a screen are controlled by the pitch and volume of the sound (usually voice) captured by a nearby microphone.
Finally, it is interesting to note that the purity of the laser light and the fluidity of the motion make for a unique interactive experience that cannot be reproduced by the classic camera/projector setup. It is also possible to project the light onto buildings tens or hundreds of meters away, and then “read aloud” the city landscape.
Technical Statement
The piece is based upon a 3D tracking technology I developed at the Ishikawa-Komuro laboratory in 2003, using a laser diode, a pair of steering mirrors, and a single non-imaging photodetector: the “smart laser scanner”. The hardware is unique: since there is neither camera nor projector (no pixelated sensors or light sources), tracking as well as motion can be extremely smooth and fluid. The light beam follows contours in the very same way a blind person uses a white cane to stick to a guidance route on the street. Details of this tracking technique can be found on the smart laser scanner project page. When using the system on a table, the laser power is less than half a milliwatt (half the power of a weak laser pointer) and poses no hazard. More powerful, multi-colored laser sources can be used to “augment” (visually and with sound) the facades of buildings tens of meters away.
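For readers curious about the mechanics, here is a minimal, self-contained C++ sketch of the contour-following principle under stated assumptions: the beam traces a small circular saccade, the single photodetector samples the reflected light along it, and the saccade center is nudged toward the intensity-weighted direction of the detections. The hardware hooks (steerMirrors, readPhotodetector) are hypothetical placeholders, here backed by a toy simulation of a dark line; the actual scanner’s control law and parameters differ.

```cpp
// Minimal sketch of the contour-following principle, under stated
// assumptions. steerMirrors() and readPhotodetector() are hypothetical
// placeholders for the real hardware interface; here they drive a toy
// simulation of a dark line segment drawn at y = 0.5, x in [0, 1].
// (The real detector senses a drop in backscattered light over dark ink;
// detection is idealized here as a binary value.)
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

static double beamX = 0.0, beamY = 0.0;  // last commanded beam position

void steerMirrors(double x, double y) { beamX = x; beamY = y; }

// Returns 1 when the beam falls on the simulated ink, 0 otherwise.
double readPhotodetector() {
    return (std::fabs(beamY - 0.5) < 0.02 &&
            beamX >= 0.0 && beamX <= 1.0) ? 1.0 : 0.0;
}

int main() {
    double cx = 0.30, cy = 0.45;  // saccade center, starting off the line
    const double r = 0.05;        // saccade radius (arbitrary units)
    const double gain = 0.5;      // pull strength toward detections
    const int samples = 32;       // detector samples per saccade

    for (int s = 0; s < 200; ++s) {           // one saccade per iteration
        double sumX = 0, sumY = 0, sumI = 0;
        for (int k = 0; k < samples; ++k) {
            double phi = 2.0 * kPi * k / samples;
            steerMirrors(cx + r * std::cos(phi), cy + r * std::sin(phi));
            double I = readPhotodetector();   // single non-imaging sensor
            sumX += I * std::cos(phi);        // intensity-weighted direction
            sumY += I * std::sin(phi);
            sumI += I;
        }
        if (sumI > 0) {                       // nudge center toward the ink
            cx += gain * r * (sumX / sumI);
            cy += gain * r * (sumY / sumI);
        }
        // The real system also advances along the contour; omitted here.
    }
    std::printf("saccade center settled at (%.3f, %.3f)\n", cx, cy);
}
```

Run as-is, the center drifts from (0.30, 0.45) onto the simulated line at y = 0.5, illustrating how the beam “sticks” to a contour without any imaging sensor.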
Installation setup
The setup can easily be configured for interaction on a horizontal surface, as was done for the “Sticky Light” project, or on a vertical one (a wall or a large whiteboard on which people can draw doodles or graffiti). As noted above, less than half a milliwatt suffices for table-top use, while more powerful, multi-colored laser sources can “augment” (visually and with sound) the facades of buildings tens of meters away. Sound is spatialized over a quadraphonic speaker setup: panning is controlled by the relative position of the tracking spots, their speed and acceleration.
Modes of interaction and sound generation
The preferred mode of operation is contour following. Each connected component of the image then functions as a sound sequencer. Sequences can be recorded and reused in the form of drawings (on stickers, for instance). I have tried several working modes:
- Pitch is determined by the inclination of the lines. This generates a melody whose tempo is determined by the length of the contour; rotating the drawing transposes the melody to a higher or lower pitch.
- Pitch is continuously modulated as a function of the curvature of the lines (frequency modulation). This mode of operation enables one to hear the “roughness” of the drawing (evoking the bouba/kiki effect [1]). Discrete, pre-recorded sounds (percussion, glitches, etc.) are triggered at specific places such as corners (points of extreme curvature). A sketch of both mappings follows this list.
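The following self-contained C++ sketch illustrates these two modes: inclination mapped to pitch, contour length to tempo, and corners to discrete triggers. Every numeric choice (the octave-wide pitch spread, the 0.5 s-per-unit-length tempo scale, the corner threshold) is an illustrative assumption, not the installation’s actual parameters.

```cpp
// Minimal sketch of the modes above: inclination -> pitch, contour
// length -> tempo, corners -> discrete triggers. All numeric choices
// are illustrative assumptions, not the installation's parameters.
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };
const double kPi = 3.14159265358979323846;

// Direction of segment a->b, in radians.
double heading(const Pt& a, const Pt& b) {
    return std::atan2(b.y - a.y, b.x - a.x);
}

// Wrap an angle difference into (-pi, pi].
double wrap(double t) {
    while (t > kPi) t -= 2.0 * kPi;
    while (t <= -kPi) t += 2.0 * kPi;
    return t;
}

int main() {
    // A closed doodle: a tilted quadrilateral with four corners.
    std::vector<Pt> c = {{0, 0}, {2, 0.3}, {2.2, 1.4}, {0.3, 1.2}};
    const size_t n = c.size();

    // Tempo: the length of the closed path sets the loop period.
    double length = 0;
    for (size_t i = 0; i < n; ++i)
        length += std::hypot(c[(i + 1) % n].x - c[i].x,
                             c[(i + 1) % n].y - c[i].y);
    std::printf("loop period: %.2f s\n", length * 0.5);

    const double kCorner = 0.8;  // rad; "abrupt direction change" heuristic
    for (size_t i = 0; i < n; ++i) {
        // Pitch from inclination, folded to [-pi/2, pi/2] so the travel
        // direction is irrelevant, then spread around middle C (MIDI 60).
        double incl = heading(c[i], c[(i + 1) % n]);
        if (incl > kPi / 2) incl -= kPi;
        if (incl < -kPi / 2) incl += kPi;
        int note = 60 + static_cast<int>(std::lround(incl / (kPi / 2) * 6));
        std::printf("segment %zu -> MIDI note %d\n", i, note);

        // Corner detection: a large turn triggers a pre-recorded sound.
        double turn = std::fabs(wrap(heading(c[i], c[(i + 1) % n]) -
                                     heading(c[(i + n - 1) % n], c[i])));
        if (turn > kCorner)
            std::printf("corner at vertex %zu -> percussive trigger\n", i);
    }
}
```

Rotating the whole doodle adds the same offset to every inclination, which is why rotation transposes the melody as described above.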
Other modes of operation include:
- Bouncing on lines, with or without gravity, using single or multiple spots. This can be used to create a rhythmic base (from a spot repeatedly bouncing between lines) or, instead, random notes (very much like in the “Hanenbow” mode of Toshio Iwai’s “Elektroplankton” [5]).
- Interaction between spots and intermodulation. The relative distance between the spots affects the sounds each produces (for instance, their frequencies can drift closer as the spots approach, so as to produce audible intermodulation; see the sketch below).
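To make the intermodulation idea concrete, here is a tiny numeric sketch; the linear convergence law and the base frequencies are assumptions, not the actual synthesis code.

```cpp
// Tiny numeric sketch of inter-spot modulation: as two tracked spots
// approach each other, their oscillator frequencies converge toward a
// common value, producing audible beating. The linear convergence law
// and the base frequencies are assumptions.
#include <cmath>
#include <cstdio>

int main() {
    const double fA = 220.0, fB = 330.0;  // each spot's own base frequency
    for (double dist = 1.0; dist >= 0.0; dist -= 0.25) {
        double mean = 0.5 * (fA + fB);
        // dist = 1: spots keep their own pitch; dist = 0: both meet at the mean.
        double f1 = mean + (fA - mean) * dist;
        double f2 = mean + (fB - mean) * dist;
        std::printf("dist %.2f: f1 = %.1f Hz, f2 = %.1f Hz, beating at %.1f Hz\n",
                    dist, f1, f2, std::fabs(f2 - f1));
    }
}
```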
Finally, a very different way of interactively generating sound is to manipulate the shape of the scanning saccade directly, instead of using it to follow a contour. This creates a very different experience: the laser shape looks organic, amoeba-like, crawling over the drawings or crying when compressed.
Although it is interesting to “hear” any kind of drawing, more control can be gained by “recording” interesting patterns and reusing them. Recording here is graphical: stickers, for instance, can be used as tracks or effects (very much like in the reacTable [10] interface). These can be arranged relative to each other, their relative distance mutually affecting their sounds as explained above (for instance, the “track” represented as a drawing on one sticker could be used to modulate the volume of the sound produced by another sticker); a toy sketch of this routing follows.
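As a toy illustration of this routing, the sketch below treats the loop read from one hypothetical sticker as a volume track applied to the melody read from another; the hard-coded arrays stand in for the output of the tracking and analysis described above.

```cpp
// Toy illustration of sticker-to-sticker routing: the loop read from one
// hypothetical sticker acts as a volume track applied to the melody read
// from another. In the real piece these arrays would be produced by the
// laser tracking and contour analysis described above.
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> melody = {440, 494, 523, 587};  // Hz, sticker A
    std::vector<double> volume = {1.0, 0.6, 0.9, 0.3};  // 0..1, sticker B

    // Both loops run at the tempo set by their contour lengths; here they
    // are simply stepped together.
    for (size_t i = 0; i < melody.size(); ++i)
        std::printf("step %zu: %.0f Hz at volume %.1f\n",
                    i, melody[i], volume[i % volume.size()]);
}
```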
It is still too early to decide whether this system can be used effectively as a musical instrument (is it expressive enough? can the right balance between control and randomness be found?). However, it is interesting to note that “scoreLight”, in its present form, already unveils an unexpected direction of (artistic?) research: the user does not really know whether he/she is painting or composing music. Indeed, the interrelation and real-time feedback between sound and visuals is so strong that one is tempted to coin a new term for the performance, since it is neither drawing nor playing (music), but both things at the same time… drawplaying?
Publications:
- A. Cassinelli, Y. Kuribara, A. Zerroug, D. Manabe and M. Ishikawa, “scoreLight: playing with a human-sized laser pickup”, International Conference on New Interfaces for Musical Expression (NIME 2010), 15-18 June 2010, Sydney, Australia, pp. 144-149 (2010).
- A. Cassinelli, D. Manabe, S. Perrin, A. Zerroug and M. Ishikawa, “scoreLight & scoreBots”, ACM CHI ’12 (Interactivity), May 5-10, 2012, Austin, Texas, USA.
- A. Cassinelli, Y. Kuribara, D. Manabe and M. Ishikawa, “scoreLight”, Digital Content Expo 2009 Symposium, 25 Oct. 2009, Miraikan, Museum of Emerging Science and Innovation, Tokyo.
- A. Cassinelli, Y. Kuribara, D. Manabe and M. Ishikawa, “scoreLight: a laser-based synesthetic experience”, additional documentation for SIGGRAPH ASIA 2009 (Art Gallery).
Exhibition history
- CONTEX2009 exhibition (invited), Miraikan, Museum of Emerging Science and Innovation, Tokyo (22-25 Oct. 2009)
- 13th Japan Media Arts Festival (Excellence Prize), The National Art Center, Tokyo, JAPAN (3-14 Feb. 2010)
- SIGGRAPH ASIA 2009 (juried), Art Gallery: Adaptation, Yokohama, Japan (17-19 Dec. 2009). Also in “Art Gallery & Emerging Technologies: DIGITAL EXPERIENCES”, p. 15.
- EXIT & VIA festivals, Créteil and Maubeuge, FRANCE (18-28 Mar. 2010)
- Fuji TV stage program ホルスの好奇心 (“Horus’s Curiosity”), aired 3.1.2010.
- Scopitone 2010, Nantes, FRANCE (15-19 Sep. 2010)
- Lille3000, “Dancing Machine”, Lille, FRANCE (2 Jul. - 31 Oct. 2010)
- Kyoto Media Art Festival, Kyoto, JAPAN (2-12 Sep. 2010)
- Okayama Media Art Festival, Okayama, JAPAN (30 Oct. - 7 Nov. 2010)
- Japanese television (Nihon Terebi, 「世界一受けたい授業」, “The Class You’d Most Want to Take in the World”), aired 18 Dec. 2010.
- SNUMOA, “Game+Interactive Media Art”, Museum of Art, Seoul National University (2 Dec. 2010 - 9 Jan. 2011)
- Japanese television, TV Asahi, “Sakicho” program (Dec. 2010).
- JST Symposium, in conjunction with IEEE VR 2011, Suntec Convention Center, SINGAPORE (20-27 Mar. 2011)
- Sonar Tokyo Festival (2-3 Apr. 2011)
- The Sense of Machines (one-year exhibit at Disseny Hub Barcelona, DHUB) (21 Jun. 2011 - 15 Jan. 2012)
- Dancing Machine / Monaco Dance Forum (10-17 Dec. 2011).
- Microwave Festival: “Alchemy: Drifting Lab” (6-21 Nov. 2011)
- ACM CHI 2012 Interactivity session (5-10 May 2012)
References:
[1] Köhler, W. (1929). Gestalt Psychology. New York: Liveright (the “bouba/kiki effect”).
[2] Golan Levin and Zach Lieberman, “Messa di Voce”, 2003.
[3] Levin, G. and Lieberman, Z., “Sounds from Shapes: Audiovisual Performance with Hand Silhouette Contours in The Manual Input Sessions”, Proceedings of NIME ’05, Vancouver, BC, Canada, May 26-28, 2005.
[4] Golan Levin’s online Bibliography of Synesthesia Research.
[5] Toshio Iwai, “Elektroplankton”.
[6] Philippe Chatelain and Alvaro Cassinelli, “Line Surface Noise”.
[7] Pablo Valbuena, augmented sculpture series.
[8] Nam June Paik, “Random Access Music”, 1963.
[9] vOICe vision technology, “Seeing with Sound - The vOICe”.
[10] Music Technology Group, Pompeu Fabra University, reacTable.
Credit details:
- Alvaro Cassinelli: concept and direction / hardware and software development (C++, early Max/MSP demo)
- Daito Manabe: sound design and programming (Max/MSP, SuperCollider).
Contributors:
- Alexis Zerroug: electronics and software development (microcontroller-based system)
- Kuribara Yusaku: latest software development (C++/C#)
- Luc Foubert: analog sound design on the microcontroller-based version.
Acknowledgments:
- Stephane Perrin: participated in early development of the smart laser scanner technology used for tracking.
- Technology developed at the Ishikawa-Oku Laboratory, directed by Professor Masatoshi Ishikawa.