SEAMUS 2020 Digital Conference
Mark Engebretson :: Luminous
Daniel Swilley :: Portal Calibration
John Thompson :: Into the rarefied air
Lyn Goeringer :: Waterside
Per Bloland (I-Jen Fang, percussion) :: Shadows of the Electric Moon
Joshua Harris :: Wintry Mix / Winter Remix, III :: Harris Remix
Adam Vidiksis :: Hyperdyne
J.T. Rinker :: (after)Life Pulse for L.A.
Caleb Westby (Rachel Wolz, saxophone) :: [Pop Music]
John Ritz :: Chance Designs n.3
Alex Christie :: System Blocks Signal Blocks System
Jason Bolte :: Arid Flow
Seth Shafer :: Polytera II
Panayiotis Kokoras :: West Pole
Eli Fieldsteel :: Depth of Field
Christopher Poovey :: Hypoxia
Owen Hopper :: Strike Palm To Ask Buddha
Eren Gumrukcuoglu :: Lattice Scattering
Julie Herndon :: A Long Postlude
Rodney DuPlessis :: ASCAP / SEAMUS Finalist: Dimensionless
David Durant :: The Crystalline, Radiant Sky
Elizabeth Hoffman :: clouds pattern
Anne Neikirk :: locoMotives - adjudicated into SEAMUS 2015, programmed at the 2020 hosts' discretion
Olga Oseth :: USCGC Healy (WAGB-2)
Sever Tipei :: Ghioc
Pinda Ho :: of Constructed Chaos
Monte Taylor :: Sigil II: Amistad
Leah Reid :: Reverie
Kristopher Bendrick :: Semi-Human/Semi-Sentient
Heather Stebbins :: things that follow
Andrew Walters :: Volts and Kettles
Adam Lenz :: A Collapsing Field
Kathryn Koopman :: Titanium Quartz
Omar Fraire :: Winning Quotes
Heather Mease :: GET WET
Kittie Cooper :: Earth Mother
Alison Ma :: Engulf
Christopher Burns :: Interferometry
J. Andrew Smith :: In the Midst of Night
Joo Won Park :: Func Step Mode
Mei-ling Lee :: Giant Dipper
Charles Nichols :: Meadows of Dan
John Gibson :: Almost an Island
Elainie Lillios :: Immeasurable Distance
Juan Carlos Vasquez :: A Landscape of Events
Greg Dixon :: Mirror Lake II
Brian Riordan :: Book Burner
Scott Barton and Nate Tucker :: Human-Robot Improvisation
Adam Mirza :: White
Federico Bonacossa :: De Profundis
Mengzhumei Yang :: Tan Qing Shui He - For Kyma and Nintendo Wii Remote
Yanqi Chen :: Memory in Glass Marble
Tao Li :: Wailing Ghosts
Nishan Jiang :: Cool, Warm Up, High Heat, and Boiling
Robert McClure :: struggling in excess
Jeffrey Stolet :: I'mPossible
Annea Lockwood, SEAMUS Award Recipient
David Nguyen :: ASCAP / SEAMUS Finalist : Weight Stranding
Robert Esler :: ILOVEYOU Stuxnet
Eli Stine :: Vestigial Wings
Fang Wan :: Overlapping Strings
Kelley Sheehan :: ASCAP / SEAMUS Finalist: Talk Circus
Ryan Maguire and Paige Naylor :: pnqbud
Jason Fick :: junktures
Jiayue Cecilia Wu :: For Tashi
Zachary Boyt :: A Need To Be Free
Ryne Siesky :: ...grind ...
John Carter Rice :: Ink Spots
Elliott Lupp :: 2nd Prize 2019 ASCAP / SEAMUS Commission : Erase-Repeat
Ryan Ingebritsen :: Reparameterization 1
Ralph Lewis :: Can't Take You Anywhere
Byungjin Kim :: Uncertainty
Aurie Hsu and Steven Kemper :: Why Should Our Bodies End At The Skin?
Jacob Walls :: Frayed Tethers
Christopher Douthitt :: Voices and Apparitions
Kyle Vanderburg :: The Earth Shall Soon Dissolve Like Snow
Ben Robertson :: Rainshadow
Kristian Twombly :: Interplait
Iddo Aharony :: falling out of time
Michael Smith :: Discords
Andrew McManus :: Impulse Response
Sean Hallowell :: 14242552
Jason Mitchell :: Bow Shock
Nicola Giannini :: Eyes Draw Circles of Light
Brett Masteller Warren :: BowMu STUCK MoBue
Kyong Mee Choi :: Vanished
Israel Neuman :: 50 Milliseconds
Matthew Wiggins :: Failing Structures
Nathaniel Haering :: Medical Text p. 57
Courtney Brown :: Machine Tango
Chi Wang :: Qin
Taylor Brook :: Erotisme Sacre
Judith Shatin :: Storm - adjudicated into SEAMUS 2018, programmed at the hosts' discretion
Hubert Howe :: Inharmonic Fantasy No. 10
Doug Geers :: Teach Sum, Cheat Sum
Nicolas Chuaqui :: ASCAP / SEAMUS Finalist: Memories
Kyle Grimm :: REDLINE
Zhixin Xu :: La nuit bleue
Christopher Biggs :: Montress
Rachel Gibson :: Allen Strange Award Winner: Skyscapes // The Night Shines for You
This paper will explore concepts around compositional control that arise from computer-generated composition and computer improvisation. Drawing from an analysis of my electroacoustic composition, "Virtutes Occultae," I will examine the implications of computer improvisation for the role of the composer, how value is attributed to experimental art, and the broader relationship to data and automation in society at large.
Composing music using computer improvisation necessarily refocuses the compositional process from micro-decisions to macro-decisions. While total control of a musical composition is utopian, handing over musical decisions to a computer improvisor represents a departure from traditional compositional methods and necessitates an investigation into the meaning of this departure. In creating the software to generate music for "Virtutes Occultae," I was confronted with decisions regarding the degree of control or chaos I would infuse into the improvising algorithm. The amount of randomization and the weighted probabilities integrated into the software set the levels of unpredictability; the unpredictability of the computer improvisation became artistically stimulating, even leading me to imitate the computer improvisor in more traditionally through-composed sections. This porous relationship between the computer improvisor and the composer suggests a form of compositional duet.
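The balance between control and chaos described above can be sketched as a weighted random choice whose distribution is blended toward uniform randomness. This is only an illustration of the idea; the pitch set, weights, and `chaos` parameter below are hypothetical, not drawn from "Virtutes Occultae."

```python
import random

def improvise(pitches, weights, chaos, n):
    """Pick n pitches. chaos in [0, 1] interpolates between the
    composer-specified weights (0) and a uniform distribution (1)."""
    uniform = [1 / len(pitches)] * len(pitches)
    total = sum(weights)
    normed = [w / total for w in weights]
    # Blend the composer's distribution with pure randomness
    blended = [(1 - chaos) * w + chaos * u for w, u in zip(normed, uniform)]
    return random.choices(pitches, weights=blended, k=n)

# A mostly-controlled phrase: weights dominate, with a little chaos
line = improvise([60, 62, 65, 67, 70], [5, 1, 3, 1, 2], chaos=0.3, n=8)
```

Raising `chaos` toward 1 flattens the distribution, making the algorithm's output progressively less predictable.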
Recent commercial ventures (AIVA, Jukedeck, Melodrive, etc.) boast algorithms that generate commercial jingles and soundtrack music automatically. These programs generate music based upon a selection of high-level parameters on the part of the user: choose a mood and a style to create an original piece of music with the click of a button. The AIVA engine promotes a particularly uncanny function where one may select an existing piece of music, say a piano piece by Chopin, and move a slider between “similar” and “vaguely similar” to create a derivative work. What does this method of creating music mean for how we value music? While this software creates music for commercial purposes, I have employed similar techniques in non-commercial art in "Virtutes Occultae" and other works. Unpacking the ramifications of what computer-generated music means for the role of an artist and their relation to their art is a complex and multifarious subject that must be considered.
All composers who wish to incorporate non-determinate aspects into their music do so by designing interfaces for the source of their indeterminacy to act through. This common activity of interface design unifies artists as divergent as George Lewis, John Luther Adams, Pauline Oliveros, Shelly Knotts, and Mozart, and gives us a single lens through which we can compare their works. With such an analysis, we can examine the extent to which a composer maintains agency over their work, their success in divesting agency to their specified recipient, and the overall effectiveness of the relationship. These interfaces are generally targeted for use by one of three groups: passive agents, trained active agents, and untrained active agents. For untrained active agents, the composer is faced with a great challenge: how to identify and capitalize on training that the anticipated participant has had in another domain and translate that into a musical one. When this is done successfully, composers enable non-musicians to engage in real and meaningful musical conversations, dramatically accelerating the years of musical training often required for such a dialogue.
This research is an inquiry into how industrial noise levels in urban environments affect the psychoacoustic perception and personal preference of a given soundscape. A thorough definition of the term soundscape ecology is provided (especially within the context of an urban environment), followed by a presentation of research findings that indicate the significance of the soundscape for acoustical perception in urban settings. Through this extensive inquiry into research pertaining to ecology, anthropology, and acoustics, it can be discerned that the expected sonic behavior of the discrete objects which compose an urban environment, the spatialization of these objects in relation to one another, and a previously acquired level of environmental sound experience are some of the most significant factors in determining an individual's overall subjective acoustical comfort. However, given the idiosyncratic nature of such subjective test results, scientifically specifying the exact natural or industrial sonic qualities that affect individual acoustical comfort has proven to be a challenging endeavor. While it is possible to draw concrete conclusions about the overall soundscape preferred by the majority of the global city-dwelling population, the discrete factors that lead to these preferences are wide-ranging and highly dependent on metrics that are difficult to quantify in context. Furthermore, the presented research will indicate that the way in which we perceive soundscapes parallels the discrete, subjective evaluations individuals make while listening to musical mixes.
Recent developments at the University of Cincinnati College-Conservatory of Music Center for Computer Music include explorations of creating virtual reality musical works using the Unity 3D game engine with RTcmix audio, a course collaboration between the composition department and the School of Architecture and Interior Design, internet performance and updates to internet performance software, music based on plant data and brain waves, and many performances of new student and faculty works, some by the Cincinnati Composers Laptop Orchestra Project.
Mara Helmuth, Shawn Milloway, Zhixin Xu, Yunze Mu, Owen Hopper, Jacob Duber
Across the twentieth century, Western art music has become emancipated from any notion of strict adherence to a prevailing style or narrative governing either its poietic conditions or esthesic access. However, the plurality of musics may be identified as contemporary music’s overarching non-style. Taking pluralism as the base condition for any compositional activity, this paper asserts that the composer’s ability to know the reality (or ‘real’ effect) of the music she writes becomes increasingly limited the more differentiated musical practices become, even when considered from within a sub-discipline such as electroacoustic music. Today, the electroacoustic composer must compose ‘as if’ she knew the effects of the choices she makes, while also accepting that such choice necessarily extends beyond the acoustic into the increasingly divergent arena of software, hardware, and technologically mediated performance. Such ‘epistemological limitation’ appears intractable, and is shown to be independently identifiable as the core problematic within three separate but overlapping discourses: 1) the socio-economic forces of music consumerism, 2) music semiotics, and 3) the reflexivity of embodied compositional action. The epistemological limitation governing electroacoustic music composition is therefore argued to be structural, and to reflect post-Kantian philosophical prerogatives concerning the finitude of experience. Finitude is asserted to be the direct consequence of reflexivity, of being (either physically and/or discursively) in the place one seeks to know. Quentin Meillassoux’s term correlationism is used to describe the various discourses that maintain finitude as a fundamental limitation on knowledge; thinking and being cannot be addressed independently, for each is only ever correlated with the other as a consequence of reflexivity.
Max Neuhaus’ Times Square is ultimately considered as a speculative example of an electroacoustic intervention wherein neither compositional intent nor listener experience remain subject to epistemological limitation as correlationist interdiction, but instead inverts epistemological limitation as ontological possibility.
Jason Palamara and W. Scott Deal
A Digital Avatar for Interactive Human-Machine Performance (pdf abstract)
We present a machine learning (ML) digital model of a composer-performer and a corresponding system that listens to live audio and plays along, using the digital model as a “choice engine” to drive its performance. This system represents the first fruits of a long-term project aimed at developing easy-to-use AI software for musical scenarios. The current system reacts to live audio input, playing along dynamically and appropriately with the audio it is hearing, thus providing a stable user experience. This system differentiates between ambient noise and a listened-for signal, having been trained to listen for a given timbre. It then uses a logarithmic scale to first define these incoming amplitude levels as MIDI velocities and subsequently defines them using standard musical terms like piano and forte. These definitions may change dynamically throughout a given session, so if the system suddenly receives a higher audio level than previously encountered, the definitions adapt accordingly. The avatar player system can be loaded with the ML model of a given player and, using the incoming audio as a modifying, limiting, or triggering force, can respond to live input and make context-appropriate musical choices. The system is also programmed with various performing behaviors, which utilize the ML model in various ways, such as favoring repetition, favoring novelty, or chord making. These behaviors are similarly modeled on living performers using an algorithmic AI approach. The current system can be used as standalone software or as a Max-for-Live device in Ableton Live.
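The adaptive amplitude-to-velocity mapping described above might look something like the following sketch. This is an illustration of the idea, not the authors' code; the decibel floor, class name, and dynamic boundaries are all assumptions.

```python
import math

class AdaptiveVelocityMapper:
    """Map incoming amplitude to MIDI velocity on a logarithmic
    (decibel-like) scale, with a running maximum so the mapping
    adapts when louder input than previously encountered arrives."""

    def __init__(self, floor_db=-60.0):
        self.floor_db = floor_db
        self.max_db = -20.0  # initial guess; updated as audio arrives

    def to_velocity(self, amplitude):
        db = 20 * math.log10(max(amplitude, 1e-6))
        self.max_db = max(self.max_db, db)       # redefine "forte" upward
        span = self.max_db - self.floor_db
        norm = min(max((db - self.floor_db) / span, 0.0), 1.0)
        return round(1 + norm * 126)             # MIDI velocity 1-127

    def to_dynamic(self, velocity):
        """Translate a velocity into a standard musical term."""
        names = ["pp", "p", "mp", "mf", "f", "ff"]
        return names[min(velocity * len(names) // 128, len(names) - 1)]
```

Because `max_db` ratchets upward, a sound previously classified as forte is relabeled mezzo-forte once the system hears something louder, mirroring the dynamic redefinition the abstract describes.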
This lecture-presentation introduces SCAMP, a computer-assisted composition framework in Python designed to bridge the gap between the continuous timing of synthesis-based frameworks and the discrete timing of notation-based frameworks. SCAMP allows the composer to quickly audition and iterate over musical ideas based on the sonic result, and then flexibly quantize and export the music in western notation. SCAMP provides varied and highly extensible utilities for playback, features easy playback and notation of microtonality and glissandi, has a flexible clock system capable of coordinating multiple streams of music following separate tempo curves, and can export notation in the form of either MusicXML or Lilypond (via the abjad library). The goal of the framework is to address pervasive technical challenges while imposing as little as possible on the aesthetic choices of the user. For this reason, care has been taken to separate key elements of SCAMP’s functionality into self-contained subpackages. This design, along with SCAMP’s many output channels, allows a user to pick and choose the functionality they need and to abandon the framework when it no longer serves the aims of a given composition.
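The core tension SCAMP addresses, turning continuous timing into discrete notation, can be illustrated with a toy quantizer. This sketch is not SCAMP's internal algorithm; the candidate subdivisions and onset values are illustrative.

```python
from fractions import Fraction

def quantize_onsets(onsets, divisors=(1, 2, 3, 4, 6, 8)):
    """Snap continuous onset times (in beats) to the best-fitting
    subdivision of the beat, the kind of discretization a
    notation-based framework requires."""
    quantized = []
    for t in onsets:
        best = min(
            (Fraction(round(t * d), d) for d in divisors),
            key=lambda q: abs(float(q) - t),
        )
        quantized.append(best)
    return quantized

# Continuous, synthesis-style onsets snapped to notatable positions
print(quantize_onsets([0.0, 0.26, 0.34, 0.52]))
# → [Fraction(0, 1), Fraction(1, 4), Fraction(1, 3), Fraction(1, 2)]
```

A framework like SCAMP applies far more sophisticated quantization than this, but the sketch shows why exact rational positions (here via `Fraction`) are needed before music can be exported as MusicXML or Lilypond.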
Multidisciplinary composition has benefited from technological tools that became available after the digital revolution, resulting in musical works that are "not for the ears alone", in which a link with contemporary realities is achieved by the use of theatrical elements and digital media. This talk will offer an analysis and typification of novel approaches within this practice, including examples from the presenter's own creative work, with an emphasis on the use of borrowed material as a means for establishing referentiality and elaborating semantic narratives.
3D video and 3D audio diffusion are each becoming increasingly common; it is, however, rare, if not unheard of, to find the union of these two mediums explored. The purpose of this paper is to propose that they form an obvious holistic compositional medium, one at once ripe with creative opportunity and fraught with extremely interesting challenges in the areas of aesthetics and technical implementation. Though squarely situated on the bleeding edge of phenomenological research and creative practice, this novel medium is currently within reach. One methodological pipeline is briefly delineated that employs holography, holophony, and supercomputing toward the creation of visual music compositions for head-mounted displays, high-density loudspeaker arrays, and other rare but emerging venues. Several challenges will be broached and resolved in an effort to inspire further investigation and creativity in this rich, unexplored territory.
Deep Mapping is a technique that consists of, first, identifying salient features of a composition and, second, making sure that those features are available for multimedia representation in a way that can be efficiently and intuitively rendered. Deep Mapping allows composers to store and render musical data into visuals by “catching” the data at its source, at a compositional stage. The advantages of this approach are: accuracy and discreteness in the representation of musical features; computational efficiency; and, more abstractly, the stimulation of a practice of audiovisual composition that encourages composers to envision their multimedia output from the early stages of their work.
As a multimedia programming practice still in its infancy, Deep Mapping can develop in several different directions. In its current implementation, it focuses on harnessing the mapping of rhythmic processes, in particular through Non-linear Sequencers. Non-linear Sequencing is a waveshaping technique applied to a sequencer ramp; it allows for both rhythmic flexibility and sample-accurate playback.
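A minimal sketch of the idea, not the author's implementation: waveshape a linear sequencer ramp and fire a trigger at the exact sample where the shaped ramp crosses a step boundary. The step count, ramp length, and shaping function below are hypothetical.

```python
def nonlinear_trigger_samples(num_steps, length, shape=lambda x: x ** 2):
    """Return the sample indices at which a waveshaped sequencer ramp
    enters a new step, giving sample-accurate, non-linear rhythm."""
    triggers = []
    prev_step = -1
    for n in range(length):
        ramp = n / length                 # linear sequencer ramp in [0, 1)
        shaped = shape(ramp)              # waveshaping bends the timing
        step = int(shaped * num_steps)
        if step != prev_step:
            triggers.append(n)            # exact sample of the new step
            prev_step = step
    return triggers

# With shape = x**2 the sequence accelerates: steps start far apart,
# then bunch together toward the end of the ramp.
print(nonlinear_trigger_samples(4, 1000))
# → [0, 500, 708, 867]
```

Swapping in a different shaping function (square root, sine segment, etc.) reshapes the rhythm while every trigger remains pinned to an exact sample, which is the sample-accuracy the abstract highlights.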
Ivica Ico Bukvic and L2Ork
Raspberry Pi Orchestra Community Engagement Workshop
Raspberry Pi Orchestra is an ensemble designed to offer a malleable and affordable platform for the exploration of both musical instrument building and musicianship. Workshop participants will learn about: 1) using the Raspberry Pi to design new hyperinstruments in community-centric Maker-like and professional scenarios (including hardware setup, input and sensing, instrument design, and networking and coordination methodologies); 2) affordable and efficient ways of deploying a Raspberry Pi mobile setup; 3) the Raspberry Pi's use in an ensemble setting; and 4) strategies for promoting repertoire distribution and sharing.
Daniel Fishkin and Theodore Teichman
Solar Sounders Workshop
In this workshop, we will build musical circuits with circuit boards, wires, solar panels, and wood. A Solar Sounder is a synthesizer played by sunlight. It works the opposite of just about every other circuit you might encounter: a Solar Sounder has no batteries and no knobs; instead, its speaker output is governed by sunlight, powering an internal synthesizer designed to work with an ever-fluctuating power source. These instruments live outdoors, and their sound changes as the shadows of passing daylight creep along their solar panels.
Students will gather in the Wilson Hall Maker Studio to assemble circuits and tune the instruments by selecting capacitors and resistors to affect the melodic content. Students will be provided materials including a custom designed circuit board for the workshop. This circuit forms the beating heart of a solar sounder, and many options are provided, just as there are many instruments in the orchestra.
The workshop culminates in the formation of an ad-hoc solar band drawn from the group. A solar sounder alone does not sound particularly interesting. But these instruments come to life in a group, twittering merrily! So, as you can see, this electronic practice is not merely community-oriented, it is community-dependent: the solar band assembles from each person's instrument and becomes a unit knitted together just for SEAMUS. After each instrument is completed, we stage a guerrilla performance in the sunlight on UVA's campus.
Home: Using light and lamps as musical instruments (video)
We interact with the sound by interacting with light, weaving together a sonic stream made up of our individual voices and memories. We listen, watch, and sense the radiant aural space around the performers. The combination of sounds and warm, intimate lights surround the performance space, embodying both the aural and visual worlds. Lamps come from homes of the performers, possibly storing memories of the performers spending time at their past and current homes. Home connects the performers and the audience members through a shared sensory experience.
Performers use custom-designed interactive systems made by the composer. The systems' light sensors detect differences in the luminance of the lamps, which the performers create by dimming and brightening their lamps. These luminance changes are converted into data that controls the amplitude level of each performer's sound. All the performers contribute their own sounds, recording and processing them during the workshop preceding the performance. Each unique sound acts as an element in a compositional idea similar to additive synthesis, here formed with human performers.
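The luminance-to-amplitude conversion might be sketched as follows. This is a hypothetical illustration, not the composer's system; the sensor range, smoothing coefficient, and function name are assumptions.

```python
def luminance_to_gain(readings, dark=50, bright=900):
    """Smooth raw light-sensor readings and map them linearly to an
    amplitude gain in [0, 1]: a dim lamp fades the performer's sound,
    a bright lamp brings it forward."""
    gains = []
    avg = readings[0]
    for r in readings:
        avg = 0.9 * avg + 0.1 * r  # one-pole smoothing against flicker
        g = (avg - dark) / (bright - dark)
        gains.append(min(max(g, 0.0), 1.0))
    return gains

# A lamp turned from fully dim to fully bright ramps the gain up smoothly
envelope = luminance_to_gain([50, 300, 600, 900, 900])
```

The smoothing step matters in practice: without it, sensor noise and lamp flicker would produce audible amplitude jitter rather than the gradual swells the piece relies on.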
The performance of Home at the McGuffey Art Center as a Community-Engaged Performance and Workshop at SEAMUS 2020 will be a special iteration of the composition. This collaboration with the local community in Charlottesville will introduce young people to the joy of music-making and performing with unconventional instruments. The young performers, who have been seeing the world from unique angles through their life experiences, will bring their sensibilities and memories to the performance. Their personal stories need to be shared with us, the people who have much more support from the social systems created by certain groups of people. In Home, their voices will be heard in the forms of sound and light.
Toshihisa Tsuruoka, Oliver Hickman, and Leo Chang
Ear Talk: A Sound Adventure (pdf)
Ear Talk is a workshop where children are invited to record interesting sounds from their environment with the goal of co-creating a found sound composition. We implement the system we developed for an experimental performance, which allows people in remote locations to share, shape, and form music. In this workshop, we will encourage children to be curious about the everyday sounds around them, sending them on an adventure to hunt for peculiar sounds and to discover ways in which ordinary sounds can become extraordinary. After collecting interesting sounds, our system enables children to “see” their sounds visually and interact with them as a group, exploring ways in which their sounds could form a cohesive piece of music.
The idea for this workshop stems from an experimental online performance where we focused our attention on environmental sounds and shared them online, composing together from remote locations. Although the desire to share photos and videos on social media is an impulse that is commonplace among contemporary netizens, we asked ourselves: if we limited the shared content to sound, could we direct this impulse toward the sonic features of our lives?
The Max/MSP-hosted program organizes the collected sounds in a visually stimulating “score” based on each sound’s audio features, inviting children to interact with what they see and hear. In the course of this discussion, children will come up with creative ways to organize their sounds based on timbre, dynamics, and texture.
Kevin Zhang and Aurora Lagattuta
Eco-Visceral Sound Walk
This ongoing installation of self-guided sound walks previews a few excerpts from "A Place With...", an upcoming recording project collaboration between composer Kevin Zhang and choreographer Aurora Lagattuta. The broader project consists of a series of eight eco-visceral awareness (EVA) self-guided sound and movement prompts. The eight walks intersect internal and external spaces, questioning the assumption that our environment and communities are somehow separate from our moving selves. SEAMUS attendees are welcome to participate in the self-guided walk during the conference on their own schedule by using their phones to scan QR codes located at the designated sites.
Michael Boyd :: Confessional
Ivica Ico Bukvic :: Forgetfulness
Brian Ellis :: Asking for it
Marc Evanstein :: Frozen Spring
Daniel Fishkin :: Solar Sounders
Scott Miller :: This Strange Fine-Tuning of our Universe II
Kory Reeder :: For Halsey
Michael Rhoades :: Antithesis - A VR Theater Experience
Joel Rust :: CITIZEN
Kristina Warren :: half hoping for failure
The conference hosts are pleased to present a selection of fixed electro-acoustic and multimedia works from our CIME / ICEM (International Confederation of Electroacoustic Music) colleagues, and three compositions by pioneering composer and educator Ruth Anderson, who passed away in November. Over the course of the conference, these works will be sounded in rotation in the Rotunda’s Dome Room, a space open to the public as well as to SEAMUS attendees. Works by Ruth Anderson were selected by the conference hosts in consultation with Annea Lockwood, Anderson’s longtime partner and spouse. These works and more are included on Here (2020) — a limited edition 12” by Arc Light Editions.
For more on Ruth Anderson's life and work please refer to the biography section.
For more on CIME / ICEM composers, please visit https://www.cime-icem.net/general-assembly-krakow-2019/.
Elsa Justel :: Cercles et surfaces (2013)
Jon Christopher Nelson :: When Left To His Own Devices (2018)
Manuel Rocha Iturbide :: Trama de Tramas (2018)
Anton Stuk :: The last and the Greatest Work of Oboe (2019)
Miguel Azguime :: SheBeingBrand (2010-2014)
Louise Rossiter :: Homo Machina (2018)
Arsalan Abedian :: Cstück Nr.2 (2015)
Elżbieta Sikora :: Aquamarina (1998)
Catalina Leonor Peralta Caceres :: . . . Per DUO BASSO II, quasi recitativo (1998/2019)
Philippos Theocharidis :: Baza (Debris) (2016)
Nikolai Popov :: Edit(a)Fill (2015)