Thursday, October 25, 2018

Duchamps of the 21st century

Obvious Art
Edmond de Belamy, from La Famille de Belamy
2018

Three French students tweaked a GAN (generative adversarial network) algorithm derived from open-source code to produce an array of images, inkjet-printed them to canvas, and auctioned one off for $432,500 to an anonymous buyer. I mean, talk about turning water to wine. Jesus. The story behind this piece is all over the place.
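For reference, the tug-of-war any GAN plays out is usually written as the minimax objective from Goodfellow et al. (2014), a generator G trying to fool a discriminator D; if I remember right, a version of this formula is literally the "signature" in the lower corner of the print:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

G keeps generating portraits, D keeps judging them against the training set, and the canvas is just a sample from G once D can no longer reliably tell the difference.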

***

Monday, October 01, 2018

Travel writing with A.I.

Ross Goodwin
1 the Road
2017

“It was nine seventeen in the morning, and the house was heavy…”

Ross Goodwin’s robot

Four sensors packed into a Caddy on a road trip from NYC to NOLA, feeding signals to his neural network trained on a corpus of existing literature, of which I'm pretty sure at least one title is On The Road. I love how this recipe prints its output in real time on a spool printer, very nice touch (rough sketch of the loop at the end of this post).

“The time was one minute past midnight. But he was the only one who had to sit on his way back. The time was one minute after midnight and the wind was still standing on the counter and the little patch of straw was still still and the street was open.”

Ross Goodwin’s robot
1 the Road
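Here's the rough shape of the loop as I picture it: sensors seed a language model, the model rambles, the printer spools. Everything below is a placeholder sketch in Python, not Goodwin's actual code; the helper names and the fake sensor readings are mine.

import random
import time

def read_sensors():
    """Pretend to sample the camera, GPS, microphone, and clock."""
    return {
        "clock": time.strftime("%H:%M"),
        "gps": (40.71 + random.random(), -74.00 - random.random()),
        "camera": "a gas station by the highway",  # imagine an image captioner here
        "mic": random.random(),                    # ambient loudness, say
    }

def seed_from(readings):
    """Fold the readings into a prompt for the language model."""
    return "The time was {}. Outside, {}.".format(readings["clock"], readings["camera"])

def generate(seed):
    """Placeholder for a character-level model trained on a literature corpus."""
    return seed + " ..."  # a real model would keep writing from the seed

def print_to_spool(text):
    """Placeholder for the receipt printer; stdout stands in for paper."""
    print(text, flush=True)

while True:
    passage = generate(seed_from(read_sensors()))
    print_to_spool(passage)
    time.sleep(30)  # let the road scroll by before the next passage

Swap the placeholders for a real image captioner, a GPS reader, and a trained language model and you're most of the way to a very chatty dashboard.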

Tuesday, September 04, 2018

Human-A.I. Collaborative Fashion

or, “How to design a post-apocalyptic jumper”

These MIT students put together an after-school project called How to Generate (Almost) Anything; all of their projects are worth checking out, btw.

For this one, they configured a GAN trained on vintage sewing patterns and came up with some fun designs, including the post-apocalyptic jumper.

[Images: vintage sewing pattern scans, training on the dataset, and the results]
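The recipe itself is pretty standard GAN fare. Here's a minimal sketch of what "a GAN trained on vintage sewing patterns" could look like in PyTorch, assuming a folder of scanned pattern images resized to 64x64; this is a generic DCGAN-style setup, not the students' actual code.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Scanned patterns in ./patterns/all/*.jpg (ImageFolder expects one subfolder per class).
tfm = transforms.Compose([
    transforms.Resize(64),
    transforms.CenterCrop(64),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
data = DataLoader(datasets.ImageFolder("patterns", transform=tfm),
                  batch_size=64, shuffle=True)

z_dim = 100
G = nn.Sequential(  # noise -> 64x64 RGB "pattern"
    nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
)
D = nn.Sequential(  # image -> real/fake logit
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 4, 1, 0), nn.Flatten(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

for epoch in range(25):
    for real, _ in data:
        b = real.size(0)
        fake = G(torch.randn(b, z_dim, 1, 1))

        # Discriminator: push real patterns toward 1, generated ones toward 0.
        d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to make D score the fakes as real.
        g_loss = bce(D(fake), torch.ones(b, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Once it trains (or collapses in an interesting way), you sample G on fresh noise and go looking for jumpers in the output.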

Tuesday, June 06, 2017

Who’s the Muse?

Mario Klingemann and Albert Barqué-Duran
My Artificial Muse
2017

Klingemann fed a stick figure modeled on Ophelia to his neural network, and artist Albert Barqué-Duran then painted the generated composition as a fresco.

From my perspective, the interesting part of these kinds of works is the constant role-switching and the question of who is pulling whose strings. The artist designs the experiment, then curates the image based on whatever message they're trying to communicate. Up to that point, the machine might be considered the muse and the artist the one in control. But when the time comes to execute the artifact, it raises the question of who's the artist and who's the muse. Who's in control, and does it matter?

By the way, the end result was supercool!

***

My Artificial Muse – THE AFTERMOVIE (Sónar+D 2017)

How can contemporary research, technology and art help us to see the classical artistic heritage with new eyes? "Muses" were the inspirational goddesses of literature, science, and the arts in Greek mythology. Can a computationally-generated Muse be as inspiring as a human-like one? By destroying the classic concept of a Muse, are we creating something more powerful?

"My Artificial Muse" is a performance, which was premièred at Sónar+D (Barcelona) and now on a World Tour, exploring how Artificial Intelligence can collaborate with humans in the creative and artistic processes. It is a disruptive project at the interface of art, science and technology. The human artist Albert Barqué-Duran performs a live-painting show using oil paintings, reproducing an artwork completely designed by an artificial neural network conceived by Mario Klingemann. Also, the artificial intelligent machine performs a mapping visual show on how it generates new paintings and showcases the computational creativity processes behind it.

A generative soundtrack, produced by Marc Marzenit, is live-ensembled through a series of embodied sensors that follow the movements of the artist during the performance. This music set aims to immerse the audience in the development of the narrative. Each performance is unique. A new artificial muse computationally-created. A new classical muse live-painted. A new music set live-ensembled.

—Albert Barqué-Duran