SMC 2011

Summer School:

Embodied Sound and Music

Academic Program / Courses / Teachers / Application & Registration

The SMC2011 Summer School has finished successfully. All the materials (slides, code, pictures, teachers, students, ...) are available at the Summer School WordPress page.

The goal of the SMC Summer School is to give an opportunity to young researchers interested in the field to learn about some of the core interdisciplinary topics of SMC, and to share their own experiences with other young researchers. This year the focus is on the embodied links between sound, music, and movement. Lectures and hands-on projects will explore this theme from several viewpoints, including novel sound synthesis techniques, multimodal interaction, music cognition, movement analysis and characterization.

Academic Program

Time           | Sat, 2nd                                            | Sun, 3rd     | Mon, 4th     | Tue, 5th
9:00 - 10:45   |                                                     | Lectures     | Lectures     | Lectures
10:45 - 11:15  |                                                     | Coffee break | Coffee break | Coffee break
11:15 - 13:00  | Start 12:30 - Summer School & Students Presentation | Lectures     | Lectures     | Lectures
13:00 - 14:30  |                                                     | Lunch        | Lunch        | Lunch
14:30 - 16:15  | Presentation/Introduction Course 1                  | Projects     | Projects     | Projects
16:15 - 18:00  | Presentation/Introduction Course 2                  | Projects     | Projects     | Projects


Music and Movement / Interactive SMC: The Challenges of Continuous Interaction

Course 1 - Music and Movement
Leon van Noorden (University of Ghent, Belgium)

Main Lectures
  • Ch1: The Embodied Link between Music and Movement
    Recent developments in Systematic Musicology put more emphasis on the link between action and perception in the experience of music, in line with the general trend in the cognitive and neural sciences. The focus is on Embodied Music Cognition. The psychological and social aspects are also important.
  • Ch2: Characterisation of Movement and Music and their Relation
    Although we argued in the previous chapter that music and movement are intimately linked, we now take them apart to study the features of each and how the linkages between them can be described. The following topics will be introduced: resonant and smooth motion, basic gestures, synchronisation and entrainment, the use of space and time, and pulse and metrical structure.
  • Ch3: The Measurement and Registration of Music and Movement
    An overview is given of the various possibilities for measuring movement inside and outside the lab. Accelerometers, motion-capture (mocap) systems, video, and GPS will be discussed, along with the presentation of their output as raw data or avatar models.
  • Ch4: Mathematical Tools for Movement Analysis
    Reducing the gigabytes of data obtained with the technologies described in the previous chapter to understandable models requires good mathematical description and analysis tools. An important aspect is that movements to music are very often quasi-repetitive. Relevant recent methodologies such as the Periodicity Transform and Empirical Mode Decomposition will be introduced.
  • Ch5: Some State of the Art Music and Movement Studies
    Walking to music, dancing the samba, resonance and synchronisation in young children tapping to music, and the movements made by musicians are relevant examples of the research on Embodied Music Cognition carried out at IPEM.
  • Ch6: Applications
    Many people today carry small but sophisticated equipment in their pocket that can both play music and measure movement. This opens the door to many 'apps' that apply our knowledge of music and movement. We suggest a brainstorming session on innovative applications.
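As a small illustration of the kind of analysis Ch4 covers, the dominant period of a quasi-repetitive movement signal can be estimated with a plain autocorrelation. This is only a sketch, using autocorrelation as a simple stand-in for the Periodicity Transform; the sampling rate, signal shape, and function name are illustrative assumptions, not course material:

```python
import numpy as np

def estimate_period(signal, fs):
    """Estimate the dominant period (in seconds) of a quasi-periodic
    signal via autocorrelation, a simple stand-in for the Periodicity
    Transform mentioned in Ch4."""
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Skip the trivial peak at lag 0: search only after the
    # autocorrelation first dips below zero.
    first_neg = np.argmax(ac < 0)
    peak_lag = np.argmax(ac[first_neg:]) + first_neg
    return peak_lag / fs

# Synthetic accelerometer trace: stepping at 2 Hz plus sensor noise.
rng = np.random.default_rng(0)
fs = 100.0                                   # 100 Hz sampling rate
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(len(t))
period = estimate_period(acc, fs)            # close to 0.5 s
```

Real gait data is far messier than this synthetic trace, which is exactly why the lectures introduce more robust tools such as Empirical Mode Decomposition.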

Hands on
Denis Amelynck (University of Ghent, Belgium)
Frank Desmet (University of Ghent, Belgium)

In the two hands-on projects, participants will measure movements related to music listening or music making, analyse them with one of the mathematical models, and present a model rendition of the movement, thereby gaining practical experience with the contents of the more theoretical lectures. Students may propose experiments they are interested in, or choose one of the ideas we present.

Course 2 - Interactive SMC: The Challenges of Continuous Interaction
Federico Fontana (University of Udine, Italy)

Main Lectures
Digital sound synthesis relies on a number of techniques that have accumulated over the last decades. Together with methods for audio and musical analysis and classification, they form a corpus that answers almost every sound researcher's and music practitioner's question, unless tight interaction constraints come into play. Once no more than 15-20 ms are allowed to transform interactive inputs such as human control into accurate auditory feedback, much of that corpus melts away and leaves the stage to ad-hoc solutions, with hardware dependencies, application constraints, and various trade-offs that inevitably limit the quality and scalability of the result. Far from providing a systematic approach to SMC under conditions of continuous interaction, the lectures will survey the diverse questions that need to be answered during a sonic interaction design project, both inside and outside the music field:
  1. Instantaneous Sonic Feedback: interactive musical and everyday sounds;
  2. Critical Design and Technological Issues;
  3. Case Studies in Musical Sound Synthesis;
  4. Case Studies in Everyday Sound Synthesis.
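The 15-20 ms budget quoted above maps directly onto audio buffer sizes. A minimal sketch of the arithmetic (the function name and the chosen rates are illustrative, not part of the course):

```python
def max_buffer_frames(latency_ms, sample_rate):
    """Largest audio buffer, in frames, that fits a latency budget."""
    return int(latency_ms / 1000.0 * sample_rate)

# At 44.1 kHz, a 15 ms budget allows at most 661 frames per buffer.
# In practice a smaller power-of-two size such as 256 or 512 is chosen,
# since capture, processing, and playback each consume part of the budget.
frames = max_buffer_frames(15, 44100)
```

This is why tight interaction constraints are so restrictive: halving the buffer halves the time available to compute each block of audio, which quickly rules out expensive synthesis methods.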

Hands on
Maurizio Goina (Conservatorio Tartini Trieste, Italy)
Stefano Papetti (University of Verona, Italy)

  • Natural sonic walking (Maurizio Goina - Conservatorio Tartini Trieste, Italy)
    The purpose of this workshop is the realization of an interactive installation exploring the expressive potential of ecological sounds for gait sonification. The aim is to stimulate the user's sensitivity to ecological sounds through their embodiment and, reciprocally, to enhance the user's proprioception through continuous, interactive sonic feedback. The EGGS (Elementary Gestalts for Gesture Sonification) system will be used to sonify the walk by recognizing elementary kinematic units; it is implemented as a Max/MSP application and a Processing applet. The work will proceed in three phases: theoretical and conceptual work, sketching, and prototyping and realization. On the first day students will be introduced to the conceptual background of the EGGS system, go through a brainstorming session, and produce concepts and vocal sketches. On the second day they will focus on the sound design needed for the sonification of walking, and realize sketches simulating the installation by means of the EGGS system, graphics tablets, and accelerometers. On the third day they will work on a full-scale installation, applying and calibrating sensors to their legs, video-documenting the system at work, and discussing the results. An example of this type of installation is Sonic Walking.
  • Effects of sound and vibration in augmented walking tasks (Stefano Papetti - University of Verona, Italy)
    In everyday life, auditory and tactile cues coming from the feet provide significant information about the environment. Recent trends in HCI, the performing arts, and gaming consider the use of foot-based multimodal interfaces, with applications including augmented reality, rehabilitation, critical labour environments, navigation aids, and entertainment. The purpose of this workshop is to let students investigate and experience the importance of auditory and tactile feedback in walking and other foot-related tasks. One or more pairs of instrumented shoes will be made available (here's a previous prototype). The shoes are fitted with force sensors and audio-tactile exciters. In addition, we will make use of a sensing floor, based on contact microphones, which can track a person while walking. By interfacing this hardware with open-source sound synthesis software, students will be able to experience both auditory and underfoot tactile feedback simulating different grounds, from creaking floors to snow and mud. After exploring this setup, we will run a couple of pilot experiments to investigate the cross-modal effects of sound and vibration in walking scenarios.


Leon van Noorden (University of Ghent, Belgium)
Education: PhD: Temporal Coherence in the Perception of Tone Sequences, TU Eindhoven, 1975; Technical Physics, TU Eindhoven, 1963-1970
Current activities: Research on music and movement; synchronisation in children, auditory scene analysis, walking of patients with Parkinson's Disease, rhythm perception and production. Functions: Associate professor, Université Libre de Bruxelles, Unité de recherche en Neurosciences Cognitives, 2009-present, Visiting professor, IPEM, University Ghent, 2005-present; European Commission, 1989-2004; Dutch PTT, 1981-1989; Dutch Association for the Blind, 1977-1981; Post Doctoral Fellow, Bell Laboratories, Murray Hill, USA 1975-1976, ZWO 1970-1975.
Artistic: Construction of and performing with computer controlled musical instruments. Member of Maciunas Ensemble: 1972-present.

Denis Amelynck (University of Ghent, Belgium)
Denis Amelynck received a master's degree in engineering from the University of Ghent in 1985. He worked as a system and training engineer for several international companies such as Alcatel, Honeywell, and W.R. Grace. Machine learning is one of his principal interests. As a PhD researcher, he currently makes his expertise available to the Institute for Psychoacoustics and Electronic Music (IPEM) of the University of Ghent. His most recent research concerns Bayesian modelling of musical gestures, providing musicologists with new insights into the embodied music cognition paradigm.

Frank Desmet (University of Ghent, Belgium)
Frank Desmet (PhD researcher, IPEM, University of Ghent) is involved in the MEFEMCO project at IPEM on empirical methodologies for the analysis of listeners' and musicians' movements in response to music, using specific multivariate statistical paths. He applies a broad spectrum of traditional statistical techniques, such as time-series analysis, multivariate non-parametric analysis, multivariate analysis of variance (GLM), multiple regression analysis, principal components analysis, Procrustes analysis, and multidimensional scaling. Furthermore, advanced statistical methods such as dynamic time warping, chaos analysis, cladogram analysis, and entropy analysis are used to analyse complex data. Another task is to coach and train researchers at IPEM in experimental design and the setup of datasets, and to provide guidance in establishing valid statistical pathways for the validation of experimental data.

Federico Fontana (University of Udine, Italy)
Federico Fontana is an assistant professor at the Dipartimento di Matematica e Informatica of the University of Udine, where he teaches sound processing. In 2001 he was a visiting scholar at the Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology (now Aalto University), in Espoo, Finland. In 2003 he received his PhD in computer science from the Dipartimento di Informatica of the University of Verona, where he worked until 2009 and where he currently teaches non-visual interaction as a guest professor. He coordinates the EU FET-Open Project 222107 NIW - Natural Interactive Walking and the industrial project E-PHASE - Electronic Piano with Haptic And Spatial Enhancements. He has been guest editor of two special issues of international journals focusing on SMC, and was scientific program chair of HAID 2010 in Copenhagen. Between 2000 and 2003 he consulted for the R&D divisions of several companies and public institutions. His current interests are the design and interactive synthesis of sounds, the evaluation of non-visual interfaces, and real-time nonlinear signal processing.

Maurizio Goina (Conservatorio Tartini Trieste, Italy)
Maurizio Goina is a viola player and an audio-visual composer. He received a master's degree in Music and New Technologies from the Conservatory of Trieste, Italy. He plays viola in the orchestra of the Trieste opera theatre, and his audiovisual works have been performed at many festivals in Italy, Europe, and the Americas. Since 2008 he has been developing, together with Pietro Polotti, the EGGS system for gesture sonification. He is currently working as a researcher on a gesture sonification project at the School of Music and New Technologies of the Conservatory of Trieste.

Stefano Papetti (University of Verona, Italy)
Stefano Papetti received an MSc in Computer Engineering from the University of Padova and a PhD in Computer Science from the University of Verona, where he currently works as a research associate on the EU-funded project NIW. His research focuses on sound synthesis, physical models of acoustic phenomena, ecological sounds, interactive sonification, and sonic interaction design.

Application and Registration

Applications must include the following documents in PDF format:
  • Curriculum vitae (max. 1 page);
  • Proof of university enrolment;
  • Short description of the student research interest and motivation to participate (max. 2 pages).
The three PDF files must be combined into a single file. The application must be sent by e-mail to

Please check all the related important dates.
Selected students will be able to register through the registration page.

Organized by
SaMPL Conservatory

With the support of the Culture Programme of the European Union