Workshop "Human Supervision and Control in Engineering and Music"
Sound and Meaning in Auditory Data Display
Abstract

This paper focuses on the connection between human listening and the
interpretation of sound in auditory data display. An unanswered question is how
high-dimensional data could or should be rendered into sound.
This paper looks at the relation between sound and meaning in our real
world and transfers some findings onto the sonification domain. The result
is the technique of Model-Based Sonification, which allows the development
of sonifications that can easily be interpreted by the listener.
Auditory Perception and Environmental Listening

In evolution, the auditory senses developed because they provide us with
information about our environment.
The elementary and oldest function of listening is the detection of
events in our surroundings.
Sounds in our environment provide us with awareness and they are able to
draw our attention to potentially dangerous events (e.g. approaching
enemies). Besides this, sound allows us to extract a wealth of information
about a 'sounding' object: its size, material, surface, tension, and so on.
We are able to abstract from sounds to properties of the sounding object or
sound process because the connection between a sound and its object is fixed,
given by the laws of physics. Because this coupling mechanism (physical
laws) was constant over a large time scale, evolution was able to develop
'hard-wired' mechanisms that interpret such signals and pull out relevant
information without the need for conscious processing.
Sound emerges as a consequence of the excitation of physical objects;
objects and instruments in equilibrium are usually silent. Humans often excite
objects consciously and thus get sound as a feedback to their actions. So
they can relate the sound to their actions and learn about the world from
this interaction loop. When pressing a button, for example, we know from the
sound (besides the haptic feedback) whether our action succeeded. While these
observations seem self-evident, they are often ignored when considering
sonification and techniques to access data by acoustic
representations. Using this relation between sound and meaning, an
alternative to Parameter Mapping, the prevailing sonification technique, is
developed in the next section.
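Parameter Mapping, mentioned above as the prevailing technique, assigns each data dimension directly to an acoustic parameter such as pitch or loudness. A minimal sketch in Python may make this concrete; the particular mappings, frequency ranges, and envelope chosen here are illustrative assumptions, not prescribed by the technique:

```python
import numpy as np

def parameter_mapping_sonification(data, sr=44100, note_dur=0.2):
    """Render each data vector as one tone: dimension 0 controls pitch,
    dimension 1 controls loudness. All ranges and mappings here are
    illustrative choices for this sketch."""
    d = np.asarray(data, dtype=float)
    d = (d - d.min(axis=0)) / (np.ptp(d, axis=0) + 1e-12)  # normalize dims to [0, 1]
    t = np.linspace(0.0, note_dur, int(sr * note_dur), endpoint=False)
    tones = []
    for row in d:
        freq = 220.0 * 2.0 ** (2.0 * row[0])   # dim 0 -> pitch, 220-880 Hz
        amp = 0.2 + 0.8 * row[1]               # dim 1 -> loudness
        env = np.exp(-5.0 * t / note_dur)      # simple percussive decay
        tones.append(amp * env * np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)               # one tone per data point, in sequence

signal = parameter_mapping_sonification([[0.1, 3.0], [0.9, 1.0], [0.5, 2.0]])
```

Each data point becomes one audible event; the arbitrariness of the mapping choices is exactly the weakness the following sections address.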
Let us now focus on the relation between sound and meaning within
types of sound signals. In language, spoken words receive their meaning
within a cultural context and the association is learned by each child.
The relation between the sound of the word 'table' and the meaning of this
word must be learned and is essentially arbitrary. Obviously, humans also have
excellent capabilities in learning and accessing the meaning of learned
auditory patterns. While the information within environmental sounds is
analog, language emphasizes the communication of symbolic information or
abstract content. In sonification, verbal messages are suited to label
categorical data or provide symbolic labels within an analog auditory data
display.
Musical information lies in between these two sound types. The excitation of
sounding objects by human supervisors leads to sound that gives
information about both the instrument and the performer. Harmonic relations find
an analog in physical laws (Fourier decomposition of periodic signals)
while melodic and rhythmic structures are related to prosodic patterns in
language and narrative.
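The Fourier decomposition referred to above can be illustrated numerically: the spectrum of a periodic signal concentrates at integer multiples of its fundamental, which is the physical basis of harmonic relations. A small sketch (the fundamental frequency and the 1/k partial amplitudes are arbitrary illustrative choices):

```python
import numpy as np

sr = 8000                                   # samples per second
t = np.arange(sr) / sr                      # exactly one second of signal
f0 = 100.0                                  # fundamental frequency (illustrative)

# A periodic tone built from five partials with 1/k amplitudes.
signal = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))

# Over a one-second window, FFT bin index equals frequency in Hz, so the
# five strongest bins land exactly on the harmonics of f0.
spectrum = np.abs(np.fft.rfft(signal))
harmonics = sorted(int(b) for b in np.argsort(spectrum)[-5:])
print(harmonics)                            # -> [100, 200, 300, 400, 500]
```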
Data Sonification

High-dimensional data is given by vectors of numbers. The question is how
such data can be rendered into sound.
Model-Based Sonification

Model-Based Sonification addresses some of the problems mentioned above by
defining a virtual, sound-capable object whose properties are parameterized by
the data. The main advantage of this model-based approach is that sound and its
meaning with respect to the data are connected in the same way as in the
real world. Thus, intuitive metaphors can be applied to interact with the
model. Think of a model where the local tension of a membrane surface is
parameterized by data. It can be struck, plucked, rubbed, etc. to make it
produce sound. Information about the data is given in the control loop
between human excitation and system reaction. As a model definition is not
bound to specific data, humans get the chance to become familiar with the
sound space of a model. Furthermore, the model can be applied to arbitrary
data. If dynamical laws are applied which resemble the physical laws that
govern sounding objects, the sonifications are likely to be sounds within
the sound space we are familiar with from our listening experience.
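The membrane idea above can be caricatured in a few lines of Python: a virtual sounding object whose resonant modes are set up by the data, and which is silent until excited. This is a hedged sketch, not the paper's actual model; the vector-norm-to-frequency mapping and the fixed damping are assumptions made here for illustration:

```python
import numpy as np

def strike(data, sr=44100, dur=1.0):
    """Excite a virtual sounding object configured by the data: each data
    point contributes one decaying resonant mode. The norm -> frequency
    mapping and the fixed damping are assumptions of this sketch."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    sound = np.zeros_like(t)
    for x in np.asarray(data, dtype=float):
        freq = 200.0 + 50.0 * np.linalg.norm(x)   # data determine the mode frequency
        sound += np.exp(-4.0 * t) * np.sin(2 * np.pi * freq * t)
    return sound / max(len(data), 1)              # keep the amplitude bounded

# The same interaction ('striking') applied to different data sets yields
# different sounds, so the timbre carries information about the data while
# the excitation gesture stays the same.
sound_a = strike([[0.0, 1.0], [3.0, 4.0]])
sound_b = strike([[10.0, 0.0]])
```

The point of the design is that the user learns one interaction and one sound space, and only the data behind the object changes.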
Sonification models emphasize acoustic signals as feedback to human
actions. This offers new perspectives for human supervision of complex data
and control of data manipulations. The investigation of sonification
models and their utility for exploration of high-dimensional data is the
topic of current ongoing research.