Holger Zschenderlein – The Passages of Time(s) within Time Based Multi-Modal Audio-Visual Experimental Installation Practice
Keywords: Time Based Multi-Modal Audio-Visual Experimental Installation Practice, Time Orientation Metaphor, Memory, Perception of Embodied and Disembodied Time, Composite Sensory Mapping, Construction of Meaning(s), Interrelatedness of Seeing and Hearing, Embodied Knowledge, Hidden Complexities, Modes of Representation.
Abstract: The paper aims to investigate key questions and aspects centred on the perception of time-based events and the construction of meaning, which underpin the interplay and interrelatedness of seeing and hearing within time-based multi-modal audio-visual experimental installation practice.
The paper will investigate the importance of the perception of embodied and disembodied time(s) and the seeming existence of multiple temporalities (Memory as Virtual Coexistence; Deleuze, 1988), which appear to be key to constructing meaning within multi-sensory, time-based audio-visual events. Further, the interrelatedness of time and space seems able to create for the observer a kind of ‘time orientation metaphor’ (Lakoff and Johnson, 1999), particularly within evocative multi-modal installation practice that incorporates physical (time-based) objects and time-based audio-visual media simultaneously.
The paper will also refer to a series of recent public experimental time-based audio-visual installation works by the author (for example, Ice – Traffic, Royal Festival Hall, London).
Biography
Holger Zschenderlein is Principal Lecturer and Subject Leader in Digital Music and Sound Arts, Faculty of Arts, University of Brighton. His key research interest centres on cross-sensory perception and cognition, that is, how we construct meaning within audio-visual narrative structures. As an artist he has developed collaborative practices and strategies spanning moving image, still image, audio-visual and multi-modal interactive installation and performance, contributing to award-winning music and audio-visual productions.
Since 2006 he has also been lead investigator of the ongoing trans-disciplinary arts and science research project The Breathing City, which combines collaborative research from urban (boundary-layer) meteorology, sound art and multimedia, design, material practice, audio-visual representation of data, and experimental representation models. The project aims to deepen understanding between disciplines through explorative dialogue and experimental practice, developing evocative and thought-provoking communication models that make environmental issues accessible to a wider audience.

Cindy Keefer – “Ornaments are Music” – Oskar Fischinger’s experiments with ornament and synthetic sounds (Keynote)

Keefer will discuss Fischinger’s ornament sound experiments of the 1930s and his synthetic soundtracks of the 1950s. Examples of his graphic materials, photographs, animation paintings, audio and film will be shown from the archives of the Center for Visual Music, plus clips by later visual music filmmakers engaged in direct/synthetic sound experimentation.

Biography

Keefer is an archivist and curator and the director of the Center for Visual Music in Los Angeles. She has taught, curated, lectured and published on Visual Music internationally. Keefer has preserved dozens of historical visual music films by Fischinger, Belson, the Whitneys, Dockum, Bute and others; currently she is co-curating an upcoming traveling Fischinger museum exhibition. She also produces DVD compilations, consults on media use in museum exhibitions, and is presently completing a book on Belson’s Vortex Concerts.

http://www.centerforvisualmusic.org/

Jaroslaw Kapuscinski – Contrapuntal Dimensions of Intermedia

Music defines counterpoint as a relationship between two or more parts that are independent in contour and rhythm but harmonically interdependent. Three kinds of contrapuntal motion (oblique, parallel/similar, and contrary) result in polyphony, which is characterized by moments of agreement (consonance) and disagreement (dissonance). Using these definitions as a premise, the paper attempts to define counterpoint in the intermedia domain. What are the parameters of sound and image that can be engaged in contrapuntal relationships? What are the relevant audiovisual analogues for the directional and hierarchical motion found within scales and tonality? Can there be an intermedia leading tone? What is the role of synchresis? These and other questions will be discussed using examples from the intermedia performance works Juicy and Oli’s Dream.
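As a toy illustration of the kinds of relationship the paper interrogates, the sketch below classifies step-by-step "motion" between an audio parameter and a visual parameter into the three contrapuntal categories named above. The parameter streams and the function are hypothetical examples, not drawn from Juicy or Oli’s Dream.

```python
# A minimal, illustrative sketch: classifying steps of audiovisual "motion"
# between two sampled parameter streams, e.g. melodic pitch and the brightness
# of an on-screen shape. The streams and values are hypothetical.

def motion_type(prev_a, curr_a, prev_b, curr_b):
    """Return the contrapuntal motion type for one step of parameters a and b."""
    da = curr_a - prev_a
    db = curr_b - prev_b
    if da == 0 and db == 0:
        return "static"                 # neither part moves
    if da == 0 or db == 0:
        return "oblique"                # one part moves, the other holds
    if (da > 0) == (db > 0):
        return "parallel/similar"       # both parts move in the same direction
    return "contrary"                   # the parts move in opposite directions

pitch      = [60, 62, 62, 64, 62]       # MIDI-style pitch contour
brightness = [0.2, 0.4, 0.5, 0.3, 0.5]  # normalised brightness of a visual object

for i in range(1, len(pitch)):
    print(motion_type(pitch[i - 1], pitch[i], brightness[i - 1], brightness[i]))
# -> parallel/similar, oblique, contrary, contrary
```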

Biography

Jaroslaw Kapuscinski is an intermedia composer and pianist whose work has been presented at New York’s MoMA; ZKM in Karlsruhe; the Museum of Modern Art, Palais de Tokyo, and Centre Pompidou in Paris; and Reina Sofia Museum in Madrid, among others. He has received numerous awards, including at the UNESCO Film sur l’Art festival in Paris, VideoArt Festival Locarno, and the Festival of New Cinema and New Media in Montréal. He was first trained as a classical pianist and composer at the Chopin Academy of Music in Warsaw and expanded into multimedia during a residency at the Banff Centre for the Arts in Canada (1988) and through doctoral studies at the University of California, San Diego (1992-1997). Kapuscinski is actively involved in intermedia education. Currently, he is an assistant professor of composition and director of the Intermedia Performance Lab at Stanford University.

http://www.jaroslawkapuscinski.com/

Jim Dickinson – Infinite Process from Finite Form

Much of Paul Klee’s experimental oeuvre is concerned with his ideal of creating dynamic movement in the fixed medium of painting. From 1921 to 1931 Klee taught at the Bauhaus and was exposed to the pioneering techniques of abstract filmmaking, but unlike some of his contemporaries, such as Hans Richter, Klee resisted the draw of this new technology and chose to limit his medium to the spatial form of painting.

Klee turned to his beloved music to find structural solutions to the temporal problems he would encounter, and these experiments led to his ‘Polyphonic Painting’ style, of which he declared:

Polyphonic painting is superior to music in that, here, the time element becomes a spatial element. The notion of simultaneity stands out even more richly (Diaries, 1081)

This paper seeks to explore the paradoxical link between finite form and infinite process in Klee’s work, and to highlight its wider implications for interdisciplinary composition today.

It will reference specific musical ‘readings’ of Klee paintings from my own practice, alongside examples taken from the musical archive at the Paul Klee Centre in Bern, Switzerland.

Biography

I am in the fourth year of an interdisciplinary PhD at Hull University under the supervision of Dr Rob Mackay. It concerns the impact of Paul Klee’s art on the theories and practice of specific composers. I am working closely with the Paul Klee Centre in Bern, from where I have recently returned.

I have taught musical composition and technology at the Brighton Institute of Modern Music and the Grimsby Institute of Further and Higher Education, and I will be starting a new post as Senior Lecturer in Commercial Music at Bath Spa University in September 2011.

Research activity includes interdisciplinary performances with visual artists, most recently at the Triton Gallery, Sledmere House.

My music industry experience includes 11 hit singles and 4 hit albums, including a UK number 1 album, two major record deals (Polydor and V2) and two major publishing deals (Polygram and Sony). Performance experience includes shows at the Royal Albert Hall, the Glastonbury main stage and Wembley Stadium, and worldwide touring with Bon Jovi, Aerosmith, Van Halen, ZZ Top and Bryan Adams. Soundtrack work includes TV and Sony PlayStation games.

Margaret Schedel / Kevin Yager – Hearing Nano-Structures

Much sonification relies on pitch and rhythm; we have sonified x-ray scattering data with timbre. We will show images from Brookhaven National Lab’s X-9 beamline and play various methods of sonification. In an x-ray scattering experiment, a sample of interest is positioned in a collimated x-ray beam. A two-dimensional detector is then used to record the pattern of x-rays scattered and diffracted from the sample. The detector images are essentially a Fourier transform of the sample’s density and thus its structure. These data are typically visualized as false-color images, which provide an immediate but rudimentary view of the intensity of scattered radiation as a function of angle. Sonification of the original data enables researchers to discern patterns that would otherwise be ignored, such as very diffuse or low-frequency data that is hard to see in the usual 3D colormap, low-intensity peaks buried in background, the existence or persistence of higher-order peaks, asymmetry or peak shift of scattering rings, or the exact character of diffuse small-angle scattering. Thus far, most sonification has been created from data that already exists in the time domain, e.g. tidal patterns. In order to sonify static data we need to “play the image” by scanning it over time. Some methods we explore include interpreting the q-values (angle on the detector image) as frequency channels, scanning time through angles in the scattering image, or scanning time through the q-values. Finally, we hope to show how timbral sonification can yield more useful interpretations than simpler methods.
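One of the scanning strategies named above can be sketched in a few lines. The following is a minimal illustration, not the authors' actual pipeline: a synthetic random array stands in for detector data, and the q-to-frequency mapping is an assumption for demonstration.

```python
# A minimal sketch of "playing the image": each column of a detector image is
# treated as one frequency channel, and time scans down the rows, so pixel
# intensity drives the channel amplitudes. The random image and the
# q-to-frequency mapping are stand-in assumptions.
import numpy as np

sr = 44100                               # audio sample rate
image = np.random.rand(128, 64)          # stand-in for a (rows x q-values) scattering image
n_rows, n_q = image.shape
freqs = np.linspace(200.0, 4000.0, n_q)  # assumed mapping: q index -> frequency in Hz
step = int(0.05 * sr)                    # 50 ms of audio per image row

audio = np.zeros(n_rows * step)
t = np.arange(step) / sr
for r in range(n_rows):
    frame = sum(image[r, k] * np.sin(2 * np.pi * freqs[k] * t) for k in range(n_q))
    audio[r * step:(r + 1) * step] = frame / n_q   # crude per-row normalisation

# 'audio' can now be written out, e.g. with soundfile.write("scan.wav", audio, sr)
```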

Biographies

Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She sits on the boards of 60×60 Dance, the BEAM Foundation, the EMF Institute, the ICMA, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music and her article on generative multimedia was recently published in Contemporary Music Review. She has worked for Cycling’74 and Making Things and her work has been supported by the Presser Foundation, Centro Mexicano para la Música y las Artes Sonoras, and Meet the Composer. In 2009 she won the first Ruth Anderson Prize for her interactive installation Twenty Love Songs and a Song of Despair. As an Assistant Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology. In 2010 she co-chaired the International Computer Music Conference, and in 2011 she co-chaired the Electro-Acoustic Music Studies Network Conference.

Kevin Yager is a materials scientist working in the Center for Functional Nanomaterials at Brookhaven National Laboratory. He obtained his Ph.D. in chemistry from McGill University in 2006, then worked at the National Institute of Standards and Technology on neutron scattering. His current work at BNL focuses on studying self-assembly of nanostructures: how materials can be designed to spontaneously form desired structures. He manages an x-ray scattering instrument at Brookhaven’s National Synchrotron Light Source. Current research interests include studies of nanostructured materials for applications ranging from high-density data storage to flexible solar cells.

Lewis Sykes – The Augmented Tonoscope

The Augmented Tonoscope is an artistic study into the aesthetics of sound and vibration through its analogue in visual form – the modal wave patterns of Cymatics. Key to the research is the design, fabrication and crafting of a sonically and visually responsive hybrid analogue/digital instrument that will produce dynamic Visual Music based on the physics, effects and manifestations of sound and vibration. I then intend to play, record and interact with it to produce a series of artistic works for live performance, screening and installation. This paper describes the first stage of the study – a series of creative investigations into analogue tonoscope design and fabrication of a digital tone generator integrating light and camera control – driven by an artistic experimental method devised specifically for the project. It concludes with future directions for the research: Can the inherent geometries within sound provide a meaningful basis for Visual Music? Will augmenting these physical effects with virtual simulations realise a real-time correlation between the visual and the musical?
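As a pointer to the "inherent geometries within sound" the abstract asks about, the sketch below computes an idealised Chladni-style modal pattern for a square plate using the standard standing-wave superposition. It is a textbook idealisation, with arbitrary mode numbers and threshold, not a model of the Augmented Tonoscope itself.

```python
# An idealised sketch of the kind of modal pattern a tonoscope makes visible:
# approximate Chladni figures for a square plate from the standing-wave
# superposition cos(n*pi*x)cos(m*pi*y) - cos(m*pi*x)cos(n*pi*y).
import numpy as np

n, m = 3, 5                          # mode numbers; higher pairs give finer patterns
x = np.linspace(0.0, 1.0, 200)
X, Y = np.meshgrid(x, x)
Z = (np.cos(n * np.pi * X) * np.cos(m * np.pi * Y)
     - np.cos(m * np.pi * X) * np.cos(n * np.pi * Y))

# Sand collects where the plate barely moves, i.e. near the nodal lines Z == 0.
pattern = np.abs(Z) < 0.05
print(f"mode ({n},{m}): {pattern.mean():.1%} of the plate lies near a nodal line")
```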

Journal: www.augmentedtonoscope.net

Sketchbook: tumblr.augmentedtonoscope.net

Biography

Lewis Sykes is a musician, interaction designer, digital media curator and producer and qualified Youth & Community Worker specialising in the Arts.

A veteran bass player of the underground dub-dance scene of the 90s he performed and recorded with acts such as Emperor Sly, Original Hi-Fi and Radical Dance Faction, was a partner in the underground dance label Zip Dog Records and more recently was musician and performer with the progressive AV collective, The Sancho Plan. Lewis is currently one-half of Monomatic – a collaboration, experimental playground and halfway house alongside the work of Nick Rothwell – exploring sound and interaction through physical works that investigate rich and sonorous musical traditions.

Lewis is also Director of Cybersonica – an annual celebration of music, sound art and technology (now in its ninth year) – and from 2002 to 2007 was Coordinator of the independent digital arts agency Cybersalon – founding Artists-in-Residence at the Science Museum’s Dana Centre.

Lewis is in the second year of a practice-as-research PhD at MIRIAD, Manchester Metropolitan University, exploring the aesthetics of sound and vibration.

Monomatic: www.monomatic.net

Cybersonica: www.cybersonica.org

Mick Grierson – a discussion of the exhibition of Daphne Oram’s Oramics Machine at London’s Science Museum, through the lens of research around audiovisual perception (Keynote)

In July 2011, the Science Museum placed on display the Oramics Machine, a unique relic of the pioneering years in the 1960s when, arguably, electronic music was being invented. This is the first fruit of a partnership between the Science Museum and Goldsmiths, University of London: a joint research, conservation and display project to make publicly accessible the work of its inventor, Daphne Oram. Her pioneering approach to creative engineering, electronic music and composition made her contribution to the electronic arts distinctive, and amongst today’s experimental sound and electronic arts practitioners and scholars she is gaining greater recognition as a pioneer of electronic music and sound synthesis. What is more, for many, the similarity of the graphical interface of the Oramics Machine programmer to contemporary computer music software such as Cubase is particularly striking, and as such the machine can be thought of as an early example of graphical human-computer interaction design.

Biography

Dr Mick Grierson is a researcher and practitioner exploring areas of audiovisual composition, interaction, perception and cognition. He is an AHRC knowledge transfer fellow developing algorithms for brain-computer interfaces and interactive audiovisual technologies. He is also Director of Creative Computing at Goldsmiths, University of London. In 2006 he instituted the Daphne Oram Collection and Archive at Goldsmiths, and he has officially been Director of the collection since 2008.

Emily Robertson – Rising Up and Fading Out: The Short Reception History of Animated Sound

Despite the unusual capabilities of sound synthesized on an optical recording system (“animated sound”), and the high hopes of artists and engineers, the technique’s popularity in mainstream filmmaking did not last long. This paper outlines the history of animated sound’s reception by the general public and by filmmakers from the 1930s onwards, examining the trends, successes, and failures along the way. Despite early fanfare, most efforts to develop sound synthesis techniques using optical recording systems were limited to short experimental abstract art films, with the exception of a few soundtracks for commercial films in Russia, England, and Canada. While animated sound was discussed in periodicals ranging from Popular Science to art journals, people were ultimately captivated more by the production techniques than by the sounds themselves. The public was distracted by new technology within 20 years of the first discovery of animated sound, and the art form faded along with the technological medium.

Biography

Emily Robertson is currently a doctoral student at the Sonic Arts Research Centre at Queen’s University in Belfast, conducting research in connections between graphic notation and the production of animated (graphic) sound in early film. She received a Master of Arts degree from the University of Maryland in Musicology for a thesis charting the history of animated sound techniques as they developed internationally, and was designated an Enosinian Scholar at the George Washington University for work in archival primary source material for music research.

Bret Battey – Inquiries Towards Fluid Audiovisual Counterpoint

This paper will provide an initial exploration of the idea of a ‘fluid counterpoint’ of complex sound and image gestalts. The author will propose some fundamental questions and definitions in the domain, and relate these to both existing work and potential directions.

Biography

Bret Battey (b. 1967) creates electronic, acoustic, and multimedia concert works and installations. He has been a Fulbright Fellow to India and a MacDowell Colony Fellow, and he has received recognitions and prizes from Austria’s Prix Ars Electronica, France’s Bourges Concours International de Musique Electroacoustique, Spain’s Punto y Raya Festival, Abstracta Cinema of Rome, and Amsterdam Film eXperience for his sound and image compositions. He studied composition and electronic music at Oberlin Conservatory and the University of Washington, and is a Senior Lecturer with the Music, Technology, and Innovation Research Centre at De Montfort University, Leicester, UK.

http://BatHatMedia.com

Andrey Smirnov – Synthesizing Sound by Means of Light. Sound Machines of the Russian Revolutionary Utopia of the 1920s (Keynote)

The mid-1930s in Russia was a critical point of clash between two cultures: the totalitarian, highly centralized Anti-Utopia of the 1930s-50s and the censored, largely forgotten and underestimated anarchistic artistic and scientific Utopia of the 1910s-20s. Two main intentions intersected in the development of new sound machines in the 1920s: to play and compose with any sounds at will, and to synthesize speech and singing. Quite often both were combined in a single device or method. Because of a lack of information and knowledge, some proposals were nothing more than repetitions of old and well-known ideas, and in most cases we have no information about any prototypes being built. Nevertheless, many inventors patented new sound machines, and some devices based on electro-mechanical, electro-optical and new electronic technologies were ahead of their time by decades. The newly invented sound-on-film technology made it possible to access sound as a trace, in a form that could be studied and manipulated, opening new prospects for synthesizing sound by purely graphical methods and erasing the boundaries between the audible and visible worlds.
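The core idea of that final sentence, reading a drawn trace back as sound, can be illustrated with a small sketch. The synthetic "variable-area" track, its dimensions, and the drawn curve below are assumptions for demonstration only, not a reconstruction of any historical device.

```python
# A small sketch of the sound-on-film idea: a drawn curve is rendered as a
# variable-area optical track, then "played back" by reading the light passing
# each column as one audio sample. Track size and test signal are illustrative.
import numpy as np

sr = 22050
t = np.arange(sr) / sr                            # one second of "film"
drawn_wave = 0.5 * np.sin(2 * np.pi * 220 * t)    # the curve an artist might draw

height = 64                                       # vertical resolution of the track
track = np.zeros((height, len(t)), dtype=np.uint8)
for col, v in enumerate(drawn_wave):
    w = int((v + 0.5) * height)                   # map [-0.5, 0.5] -> [0, height]
    track[:w, col] = 1                            # transparent (light-passing) area

# Playback: total transmitted light per column, recentred, recovers the signal.
audio = track.sum(axis=0) / height - 0.5
print(np.allclose(audio, drawn_wave, atol=1.0 / height))   # True, within drawing resolution
```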

Biography

Andrey Smirnov is an interdisciplinary artist, independent curator, composer, author, educator, researcher and developer of interactive computer music techniques. He is a founding director and senior lecturer of the Theremin Center at the Moscow State Conservatory, and a lecturer at the Rodchenko School of Photography and Multimedia, where he teaches courses on the basics of electroacoustic music, recent computer music technologies and physical computing. Since 1976 he has been conducting research and development in electronic music techniques, gestural interfaces, and hardware and software sensor technology. He has conducted numerous workshops and master classes, curated a number of exhibitions in Russia, Europe and the U.S., and attended various festivals and conferences. He maintains unique archives on the history of music technology and sound art in early 20th-century Russia, as well as original historical electronic musical instruments, combining deep research into the history of music technology with extensive experience of computer-assisted composition and interactive performance.

http://asmir.info/

Andrew Hill – Investigating interpretation of electroacoustic audio-visual music

This paper introduces my research project, ‘Investigating audience reception of electroacoustic (E/A) audio-visual music’, which combines the results of an empirical audience research study with theoretical materials from phenomenology and neuroscience in order to build up a picture of how audiences interpret works, and considers how such information can be used to assist audience access and appreciation.

Biography

Andrew Hill (b. 1986) is a composer of electroacoustic audio-visual music living in Leicester. He composes works and is fascinated by the way that humans understand and appreciate audio-visual music.

Nick Cope – Contextualising Electroacoustic Movies

Having collaborated with the composer Professor Tim Howle for a number of years on a series of video and electroacoustic music works, I have begun to explore the critical and theoretical contexts in which the work falls and which inform the practice, the work and the collaboration itself. This paper aims to give an overview of these contexts and of how one visual music practice has looked at them.

I am currently undertaking a PhD by Practice, which aims to contextualise an almost 30-year moving image practice; this paper draws on the findings of that work, situating my current practice in the context of earlier work and establishing some of the wider academic and historical contexts. It is also hoped that by opening up these observations of my own work, discussion and exchange with other practitioners about their own contexts can take place through Seeing Sound.

In our own experience, Tim Howle and I have found our collaboration taking each of us into areas previously seen as separate disciplines and domains. Similarly, as each of us has sought to address the work in critical and academic contexts, different languages, subject areas and approaches have presented themselves, illuminating our reflections on our practice and opening up the fields within which visual music practices operate. I in no way see my own endeavours in this pursuit as exhaustive, and I welcome all further input to the positioning and addressing of visual music practice that exposure here at Seeing Sound may bring.

Biography

Nick Cope is Senior Lecturer and Programme Leader for Video and New Media Production at the University of Sunderland. He graduated in 1986 from Sheffield Hallam University in Fine Art (Communication Arts) and worked freelance in film and video production, with a particular emphasis on music and moving image work, collaborating with Cabaret Voltaire, the Butthole Surfers and Electribe 101, amongst others. Nick completed an MA in Media whilst working for Southampton Institute, where he taught 16mm film and video production, before taking up a lecturing post at the University of Hull in 2001, setting up modules, facilities and undergraduate programmes in Digital Arts and Design for Digital Media. More recent work has included projection work for public arts projects and installation collaborations, and his film and video work has been screened throughout the US, the EU and China. Nick is on the editorial board of the ScreenWorks DVD project for publishing practice-based research in association with the Journal of Media Practice.

An ongoing archive of work is online at www.vimeo.com/nickcope and http://www.youtube.com/user/digitaldrift

Nick Collins – Cross-modal mappings and audiovisual feedback

This presentation will survey a variety of audiovisual research, with a critical ear and eye on the efficacy of the mappings involved. Translation of visually meaningful structure to the audio domain, and vice versa, can be highly problematic. Even after acknowledging the personal nature of synaesthetic experience, effective structure-preserving homomorphisms can be difficult to establish, except perhaps in the limiting case of certain immediate 1-1 temporal correlations.
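To make that "limiting case" concrete, here is a toy sketch, hypothetical and not one of the projects listed below: a single shared event list drives both an audio click track and a visual flash sequence, so the temporal structure survives the mapping trivially. Richer mappings, such as image-to-oscillator banks or feature-driven convolution, rarely offer such an obvious homomorphism.

```python
# A toy illustration of a 1-1 temporal correlation: one event list renders to
# both audio (clicks) and video (flashes), so ordering and (quantised) timing
# are preserved across the two media. All values here are made up.
import numpy as np

sr, fps, dur = 44100, 25, 2.0
onsets = [0.25, 0.8, 1.1, 1.6]             # shared event times in seconds

audio = np.zeros(int(sr * dur))
for t in onsets:                            # audio rendering: a click per event
    audio[int(t * sr)] = 1.0

frames = np.zeros(int(fps * dur))
for t in onsets:                            # visual rendering: a flash per event
    frames[int(t * fps)] = 1.0

print(np.nonzero(audio)[0] / sr)            # ~[0.25 0.8  1.1  1.6]
print(np.nonzero(frames)[0] / fps)          # ~[0.24 0.8  1.08 1.6]
```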

Projects presented will include:

* PhotoNoise, an iPad app mapping from touches on any image in the user’s PhotoLibrary to a custom set of non-linear oscillators

* Experiments in live audiovisual feedback loops by the duo klipp av.

* An experimental crossmodal application which explores iterative audiovisual feature-driven convolution based on live audio and camera input.

* Audiovisual concatenative synthesis, cueing selection of audiovisual material from a driver sequence. Movie cut-up demos will be shown which raise issues of respecting scene boundaries and congruence of meaning.

By critically assessing these projects, it is hoped that the talk will promote some discussion of audiovisual work in a wider context of multimedia theory, audiovisual object cognition and new technologically acute routes for feature analysis and crossmodal mapping.

Biography

Nick Collins is a composer, performer and researcher in the field of computer music. He lectures at the University of Sussex, running the music informatics degree programmes and research group. Research interests include machine listening, interactive and generative music, and audiovisual performance. He co-edited the Cambridge Companion to Electronic Music (Cambridge University Press 2007) and The SuperCollider Book (MIT Press, 2011) and wrote the Introduction to Computer Music (Wiley 2009). iPhone apps include RISCy, TOPLAPapp, Concat, BBCut and PhotoNoise for iPad. Sometimes, he writes in the third person about himself, but is trying to give it up. Further details, including publications, music, code and more, are available from http://www.cogs.susx.ac.uk/users/nc81/index.html

Louise Harris – causality and equilibrium in abstract audiovisual composition

Levels and methods of experimentation with computer-based abstract audiovisual art expand daily, yet the theoretical framework within which to discuss that art continues to be under-developed. This discussion will focus primarily on my own practice as an audiovisual composer, considering in particular the nature of cause and effect in an abstract, temporal art form and my own attempts to find and exhibit cohesion and equilibrium between the audio and video components of my work. Ideas will be discussed within the framework of phenomenological considerations of the nature of perception and the writings of audiovisual pioneers such as John Whitney and Ernest Edmonds, in combination with personal aesthetic considerations in the production of audiovisual composition. The problems and opportunities presented by a lack of theoretical framework within which to discuss abstract audiovisual art will also be considered.

Biography

Louise is an electroacoustic and audiovisual composer who joined Kingston University in September 2010, having completed a PhD in composition at Sheffield University. Louise specialises in the creation of audiovisual relationships utilising electroacoustic music and computer-generated visual environments. She previously studied composition at York with Nicola LeFanu and before that at Oxford with Robert Saxton. Louise’s audiovisual work has been performed and exhibited nationally and internationally, including at the University of New York State, Albany (2008), Soundings Festival Edinburgh (2008), Sound and Music Expo Leeds (2009), Sound Junction Sheffield (2007-2010), the BBC Big Screen, AV Festival (2010) and ICMC 2011.