Tuesday, July 21, 2020

Moon disaster

Scientific American teams up with M.I.T.-led project “In Event of Moon Disaster” for the short film “To Make a Deepfake”

Apparently, there was a backup speech prepared for Nixon in case the Apollo 11 mission failed. A team from M.I.T. used available deep-learning technology to synthesize an entirely fake news narrative around this until-now-unused speech.

In Event of Moon Disaster
from Suzanne Day (MIT)

The fake film is both fascinating and horrifying. On the positive side, it reminded me of the magic of discovering what Photoshop* could do with an image 25 years ago. On the other hand, it's easy to imagine many nefarious ways to weaponize deepfake videos. One comment that left a lasting impression on me came from Boston University law professor Danielle Citron, on the stance one takes toward deepfakes: how do we deal with them? Do we ban them? She illustrates with an analogy: a kitchen knife can be used to carve a chicken at home in the kitchen…or to stab someone. Yikes!

I really wish this team had taken the logical next step and pointed people toward becoming more media savvy, for lack of a better term. Though all communication demands a degree of critical thinking, video and film are arguably the most effective media for conveying a message. Maybe because they hit two of our senses, sight and sound, or possibly due to the passive manner in which we consume them; they're the laziest media to consume. Sure, there's a policy angle too, but corporations and governments have thus far been inconsistent (or negligent) in addressing the matter. In many cases, they are the actual perpetrators. I'd put more stock in empowering people with the tools to understand what's going on in front of their eyeballs.

To bring it back to a happy place, the project was exhibited in a few cities, including Amsterdam. The press kit features an image of a re-created late-1960s living room which is dyn-o-mite!

Art installation in Amsterdam


*Photoshop 2.5, no layers, yowza!

Thursday, October 25, 2018

Duchamps of the 21st century

Obvious Art
Edmond de Belamy, from La Famille de Belamy

Three French students tweaked an open-source GAN (generative adversarial network) to produce an array of images, then inkjet-printed them to canvas and auctioned one off for $432,500 to an anonymous buyer. I mean, talk about turning water into wine, Jesus. The story behind this piece is all over the place.


Monday, October 01, 2018

Travel writing with A.I.

Ross Goodwin
1 the Road

“It was nine seventeen in the morning, and the house was heavy…”

Ross Goodwin’s robot

Four sensors packed into a Caddy on a road trip from NYC to NOLA, sending signals to his neural network trained on a corpus of existing literature, of which I'm pretty sure at least one book is On The Road. I love how this recipe outputs in real time on a spool printer; a very nice touch.

“The time was one minute past midnight. But he was the only one who had to sit on his way back. The time was one minute after midnight and the wind was still standing on the counter and the little patch of straw was still still and the street was open.”

Ross Goodwin’s robot
1 the Road

Tuesday, September 04, 2018

Human-A.I. Collaborated Fashion

or, “How to design a post-apocalyptic jumper”

These MIT students put together an after-school project called How to Generate (Almost) Anything—worth checking out all their projects, btw. 

For this one, they configured a GAN trained on vintage sewing patterns and came up with some fun designs, including the post-apocalyptic jumper.
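I haven't seen the students' actual code, but the basic recipe they describe (a generator and a discriminator trained against each other on a pile of images) can be sketched in miniature. Here's a hypothetical toy version in plain NumPy, using 1-D numbers standing in for sewing-pattern images; every network shape, learning rate, and step count below is my own illustration, not the project's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": real samples drawn from N(4, 1.25),
# standing in for real sewing-pattern images.
def real_batch(n):
    return rng.normal(4.0, 1.25, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: linear map from noise z to a fake sample.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression (real vs. fake).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

lr, n = 0.05, 64
for step in range(2000):
    # --- Discriminator step: push D(real) -> 1, D(fake) -> 0 ---
    z = rng.normal(size=(n, 1))
    fake = z @ g_w + g_b
    real = real_batch(n)
    p_real = sigmoid(real @ d_w + d_b)
    p_fake = sigmoid(fake @ d_w + d_b)
    # Gradients of binary cross-entropy w.r.t. d_w, d_b
    grad_w = (real.T @ (p_real - 1) + fake.T @ p_fake) / n
    grad_b = np.mean(p_real - 1) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # --- Generator step: push D(fake) -> 1 ---
    z = rng.normal(size=(n, 1))
    fake = z @ g_w + g_b
    p_fake = sigmoid(fake @ d_w + d_b)
    # Gradient of the non-saturating loss -log D(fake) w.r.t. fake
    dfake = (p_fake - 1) * d_w.T
    g_w -= lr * (z.T @ dfake) / n
    g_b -= lr * np.mean(dfake, axis=0)

# Compare the real data's mean with the generator's output mean.
print(np.mean(real_batch(1000)), np.mean(rng.normal(size=(1000, 1)) @ g_w + g_b))
```

The same adversarial loop, with convolutional networks in place of these one-parameter maps, is what turns a folder of scanned sewing patterns into fresh (almost) plausible designs.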

Vintage sewing pattern images
The training dataset

Tuesday, June 06, 2017

Who’s the Muse?

Mario Klingemann and Albert Barqué-Duran
My Artificial Muse

Klingemann used a stick figure modeled on Ophelia; then artist Albert Barqué-Duran painted the composite into a fresco.

From my perspective, the interesting part of these kinds of works is the constant role-switching and the question of who is pulling whose strings. The artist designs the experiment, then curates the image based on whatever message they're trying to communicate. Up to this point, the machine might be considered the muse and the artist is in control. However, when the time comes to execute the artifact, it really raises the question of who's the artist and who's the muse. Who's in control, and does it matter?

By the way, the end result was supercool!


My Artificial Muse – THE AFTERMOVIE (Sónar+D 2017)

How can contemporary research, technology and art help us to see the classical artistic heritage with new eyes? "Muses" were the inspirational goddesses of literature, science, and the arts in Greek mythology. Can a computationally-generated Muse be as inspiring as a human-like one? By destroying the classic concept of a Muse, are we creating something more powerful?

“My Artificial Muse” is a performance, premiered at Sónar+D (Barcelona) and now on a World Tour, exploring how Artificial Intelligence can collaborate with humans in the creative and artistic processes. It is a disruptive project at the interface of art, science and technology. The human artist Albert Barqué-Duran performs a live-painting show using oil paintings, reproducing an artwork completely designed by an artificial neural network conceived by Mario Klingemann. The artificially intelligent machine, in turn, performs a mapping visual show on how it generates new paintings and showcases the computational creativity processes behind it.

A generative soundtrack, produced by Marc Marzenit, is live-ensembled through a series of embodied sensors that follow the movements of the artist during the performance. This music set aims to immerse the audience in the development of the narrative. Each performance is unique: a new artificial muse computationally created, a new classical muse live-painted, a new music set live-ensembled.

—Albert Barqué-Duran

Monday, December 12, 2016

Empathy for robots

Sun Yuan and Peng Yu
Can’t Help Myself

Sun Yuan and Peng Yu

What is it about watching robots do their work that’s so soothing and disturbing at the same time? Even though this machine doesn’t look like a humanoid, I see the pointless work it’s doing and can identify with it. There’s a shared experience through action and behavior.

I also get the impression this robot is embarrassed or ashamed. Maybe it spilled or killed something and it’s desperately trying to clean up the mess or evidence. There really is a sweet feeling in the futility of this robot trying to clean up the mess.

I was mesmerized when I first saw this clip—would love to see it close up. Brilliant piece of work.


Sun Yuan and Peng Yu: Can’t Help Myself


In this work commissioned for the Guggenheim Museum, Sun Yuan & Peng Yu employ an industrial robot, visual-recognition sensors, and software systems to examine our increasingly automated global reality, one in which territories are controlled mechanically and the relationship between people and machines is rapidly changing. Placed behind clear acrylic walls, their robot has one specific duty: to contain a viscous, deep-red liquid within a predetermined area. When the sensors detect that the fluid has strayed too far, the arm frenetically shovels it back into place, leaving smudges on the ground and splashes on the surrounding walls.

The idea to use a robot came from the artists’ initial wish to test what could possibly replace an artist’s will in making a work, and how they could do so with a machine. They modified a robotic arm, one often seen on production lines such as those in car manufacturing, by installing a custom-designed shovel to its front. Collaborating with two robotics engineers, Sun Yuan & Peng Yu designed a series of thirty-two movements for the machine to perform. Their names for these movements, such as “scratch an itch,” “bow and shake,” and “ass shake,” reflect the artists’ intention to animate a machine.

Observed from the cage-like acrylic partitions that isolate it in the gallery space, the machine seems to acquire consciousness and metamorphose into a life-form that has been captured and confined in the space. At the same time, for viewers the potentially eerie satisfaction of watching the robot’s continuous action elicits a sense of voyeurism and excitement, as opposed to thrills or suspense. In this case, who is more vulnerable: the human who built the machine or the machine who is controlled by a human? Sun Yuan & Peng Yu are known for using dark humor to address contentious topics, and the robot’s endless, repetitive dance presents an absurd, Sisyphean view of contemporary issues surrounding migration and sovereignty.

However, the bloodstain-like marks that accumulate around it evoke the violence that results from surveilling and guarding border zones. Such visceral associations call attention to the consequences of authoritarianism guided by certain political agendas that seek to draw more borders between places and cultures, and to the increasing use of technology to monitor our environment.

—Xiaoyu Weng

-- ·-· -··· ·-· ·· ·- -· -- --- ·-· ·-· ·· ···
