19.11.22 – 04.02.23

(Extended until 11.02.23) 

Peter Nelson

Roberto Alonso Trillo

Marek Poliks

 

Exhibition

This exhibition explores how machine learning tools are being integrated into artistic practice. The works on show are the result of a two-year exploration of how machine learning can be used to synthesise music and 3D objects. In the time elapsed since the commencement of this project, online visual culture has reacted to and absorbed a host of new techniques, from image recognition and style transfer to natural language synthesis and, more recently, the text-to-image synthesis pipelines offered by tools such as Midjourney and DALL-E. Beneath these rapidly evolving creative toolkits lies a common computational approach: a dataset, a neural network, and a newly synthesised output based on the features the network has learned from the original dataset.

As the utility of these tools and the quality of their results improve, they have spawned a range of cultural debates: who 'owns' the collective cultural databases on which these systems are trained, and who therefore owns the works that these systems generate? Is there a tipping point at which the balance between human creative input and automated machine output shifts so far that we no longer consider the human to be the author of the work? In her overview of modern visual communication, Johanna Drucker notes that representational strategies evolve historically with changes in technological production, from the relationship between 16th-century developments in optics and Renaissance painting to that between mechanised assembly lines and the industrial geometric abstractions of modernist artists such as Paul Klee and Wassily Kandinsky. Considered within this broader trajectory, what we are witnessing is human creativity once again adapting to a paradigm shift, namely that of automation and artificial intelligence.

It would be difficult to produce a definitive exhibition of how machine learning is changing the creative process, simply because these techniques are being integrated so quickly and across so many applications and tools. Instead, this exhibition presents a bespoke exploration of three techniques: synthesising three-dimensional shapes, synthesising music, and synthesising human motion. We present artworks, sound installations, and musical performances made with these tools, alongside educational panels explaining the machine learning approaches behind them. We hope that this exhibition makes a modest contribution to the rapidly evolving conversation around machine learning, artificial intelligence, and creativity.

 

Peter Nelson and Roberto Alonso Trillo, 2022.
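For visitors curious about the common computational pattern described in the statement above, the sketch below shows it in miniature using PyTorch: a small neural network learns the features of a dataset, and a new output is then synthesised from that learned representation. The data, model, and training loop here are illustrative placeholders, not the pipelines behind the exhibited works.

```python
# A minimal sketch of the dataset -> neural network -> synthesis pattern.
# Everything here is a stand-in for illustration only.
import torch
import torch.nn as nn

dataset = torch.randn(256, 64)  # stand-in for a corpus of feature vectors

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(64, 8), nn.Tanh())
        self.decode = nn.Sequential(nn.Linear(8, 64))

    def forward(self, x):
        return self.decode(self.encode(x))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Learn the features of the dataset by reconstructing it.
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(dataset), dataset)
    loss.backward()
    optimizer.step()

# Synthesise a new output by decoding a random point in the learned space.
with torch.no_grad():
    new_output = model.decode(torch.randn(1, 8))
```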

Performance Series

19:00 – 19:40: Performance
19:45 – 20:45: Panel discussion [live streamed through Zoom]

The session will open with a performance involving the open-source data interface Archon, featured as one of the sound installations at Osage’s Machine Visions exhibition, and artists Karen Yu (http://www.karenyu.net/), Angus Lee (https://anguslee-music.space/), Roberto Alonso (www.robertoalonsotrillo.com), and Marek Poliks (https://www.marekpoliks.com/). Archon provides an interactive layer to Demiurge, a deep-learning audio synthesis engine developed by Marek Poliks and Roberto Alonso over the past few years.
Archon was developed with the belief that the future of music will replace (and is replacing) instruments and instrumentality with algorithmic verticalities capable of deploying and recombining the literal historical entirety of recorded audio according to affect-based macro-categories. Making music will need to change from a real-time experience to one of prompt-based audio generation, results management, and readymade conformity-driven refinement operations. In response, Archon facilitates a dynamic, behaviorally adaptive, interpretive relationship between a musician and a data entity – reinjecting intimacy, proximity, and instrumentality into the higher-level data management activities that will constitute the future of music-making.
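As a rough illustration of the kind of interactive layer described above (and only an illustration: this is not Archon's actual code), the sketch below shows one simple way a performer's live input could steer retrieval from a large audio corpus: the input is reduced to a feature vector, and the nearest-matching fragment in the corpus is returned. The corpus and feature extraction are assumed placeholders.

```python
# Hypothetical sketch: live performer features steer corpus retrieval.
import numpy as np

rng = np.random.default_rng(0)
corpus_features = rng.random((10_000, 12))  # stand-in: one row per audio fragment

def respond(live_features: np.ndarray) -> int:
    """Return the index of the corpus fragment nearest the live input."""
    distances = np.linalg.norm(corpus_features - live_features, axis=1)
    return int(np.argmin(distances))

# e.g. features extracted from the performer's last phrase
fragment_id = respond(rng.random(12))
```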
The performance will serve as a starting point for engaging the panel members in a wider consideration of the evolution and current state of the relationship between AI and the arts in Australia and Southeast Asia.

This panel discussion is part of the Engaging Influencers initiative, curated by the Australia Council for the Arts and supported by the National Foundation for Australia-China Relations.

Performance #4 is a special addition to our program, in which artist Peter Nelson, musician Roberto Alonso, and dancer Sudhee Liao will share a number of art and technology performances they have been developing over the past two years, mashed up into a special 30-minute composition.


A performance in two parts, the Debris project is a series of commissions of new music for fixed electronic media that exclusively explores, through the application of various sound transformation techniques (DSP, fluid corpus manipulation/granular synthesis, physical modelling synthesis, etc.), the materials found in Demiurge's Debris database. The project has been shaped by an interdisciplinary committee: Marek Poliks (Berklee), Roberto Alonso (HKBU), Pablo Coello (Vertixe), Ramón Souto (Vertixe), Ángel Faraldo (Phonos), and Jaime Reis (DME).
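As a minimal example of one transformation technique named above, the sketch below implements a bare-bones granular synthesis pass in Python: short windowed grains are cut from a source signal and scattered into a new output buffer. The source signal and parameters are illustrative assumptions; this is not code from the Debris commissions.

```python
# Minimal granular synthesis sketch: cut, window, and scatter grains.
import numpy as np

sr = 44_100
source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in source sound

grain_len = 2048
window = np.hanning(grain_len)
output = np.zeros(2 * sr)
rng = np.random.default_rng(1)

for _ in range(400):
    src_pos = rng.integers(0, len(source) - grain_len)
    dst_pos = rng.integers(0, len(output) - grain_len)
    output[dst_pos:dst_pos + grain_len] += window * source[src_pos:src_pos + grain_len]

output /= np.max(np.abs(output))  # normalise to avoid clipping
```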

Mon-Sat 10:30am – 6:00pm 
Closed on Sundays and public holidays 

Venue: Osage Gallery, 4/F, 20 Hing Yip Street, Kwun Tong
