Sneak Peek: ‘What the Robot Saw’ is Live!

OK, here’s what I’ve been working on. It’s net art! (Not exactly like my old 90’s net art, but…)
And I’m happy to say that the amazing Curt Miller will be once again working with me to do some sound work on the project.

Is it really, really done? No.** Is it live already? Yes. So why not have a peek? I imagine I’ll be awkwardly promoting it for real soon enough:

####################################################

What the Robot Saw

Welcome to Robot TV. ‘What the Robot Saw’ (v 0.1 alpha) is a perpetual, robot-generated livestream film, curated and edited algorithmically from among the least-viewed and least-subscribed YouTube videos uploaded over the past few hours. A Robot/AI filmmaker makes its way through the world of online video, focusing its attention on people who don’t usually get attention.

If the stream isn’t live, you can find recent archives here.

An invisible audience of software robots continually analyzes content on the Internet. Videos by non-“YouTube stars” that algorithms don’t promote to the “recommended” sidebar or award “verified” status may be seen by few or no human viewers. For these videos, robots may be the primary audience. In ‘What the Robot Saw,’ the Robot is both AI voyeur and film director, depicting and magnifying the online personas of the subjects of a never-ending film.

Using computer vision, neural networks, and other robotic ways of seeing and understanding, the Robot continually selects, edits, and arranges recently uploaded public YouTube clips from among those with low subscriber and view counts, focusing on personal videos. A loose, stream-of-consciousness structure emerges as the Robot organizes the clips into a drift through neural-network-determined groupings. As the Robot generates the film, it streams it live back to YouTube for public viewing. The film navigates a slice of social media that’s overlooked by the usual search and recommendation algorithms, and is thus largely visible only to robotic-algorithmic voyeurs.
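
For the technically curious: the project’s actual selection code isn’t published in this post, but as a rough sketch of the general idea, here’s one way you might pull recently uploaded, rarely viewed videos using the YouTube Data API v3. The API key, view-count threshold, and time window below are illustrative placeholders, not the Robot’s real criteria.

```python
# Sketch: find recently uploaded YouTube videos with very low view counts.
# Assumes the YouTube Data API v3 via google-api-python-client; the key,
# threshold, and time window are placeholders, not the project's settings.
from datetime import datetime, timedelta, timezone

from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"   # placeholder
MAX_VIEWS = 5              # "least viewed" cutoff (assumption)
HOURS_BACK = 4             # look at uploads from the past few hours

youtube = build("youtube", "v3", developerKey=API_KEY)

published_after = (
    datetime.now(timezone.utc) - timedelta(hours=HOURS_BACK)
).strftime("%Y-%m-%dT%H:%M:%SZ")

# 1. Search for recent uploads, newest first.
search = youtube.search().list(
    part="id",
    type="video",
    order="date",
    publishedAfter=published_after,
    maxResults=50,
).execute()
video_ids = [item["id"]["videoId"] for item in search.get("items", [])]

# 2. Fetch per-video statistics and keep only the barely viewed ones.
stats = youtube.videos().list(
    part="snippet,statistics",
    id=",".join(video_ids),
).execute()

for v in stats.get("items", []):
    views = int(v["statistics"].get("viewCount", 0))
    if views <= MAX_VIEWS:
        print(f"{views:>3} views  https://www.youtube.com/watch?v={v['id']}  "
              f"{v['snippet']['title']}")
```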

As time zones around the globe sleep and wake, ‘What the Robot Saw’ follows the circadian rhythms of the world’s uploads. So tune in now and then. Robots never sleep*.

* For now, the live stream runs eighteen hours a day; there are one-hour “intermissions” every four hours (and as needed for maintenance). Archives of some recent streams are available on the Videos page or on the YouTube Channel.

‘What the Robot Saw’ is a non-commercial project.
** This is version 0.1-alpha, an initial implementation. There’s still work to be done on structure, timing, sound, and AI. Versions focused on different content will also likely be spawned in the future.

Although the YouTube live stream is central to the project, the technical limitations of live streaming mean the image and sound quality are not ideal and may vary with network conditions. A high-quality stream can be generated locally for art installations and screenings.
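
(In case you’re curious about the plumbing: the project’s actual streaming setup isn’t described here, but sending a locally generated feed to YouTube Live generally means pushing it to YouTube’s RTMP ingest. A minimal sketch using ffmpeg from Python follows; the stream key, input source, and encoder settings are placeholders.)

```python
# Sketch: push a locally generated video file or feed to YouTube Live over RTMP.
# Generic approach using ffmpeg; the stream key, input, and encoding settings
# are placeholders, not the project's actual configuration.
import subprocess

STREAM_KEY = "YOUR_STREAM_KEY"   # from YouTube Studio's "Go live" page (placeholder)
INPUT = "robot_output.mp4"       # could also be a pipe or virtual device with the live render

cmd = [
    "ffmpeg",
    "-re", "-i", INPUT,                        # read the input at its native frame rate
    "-c:v", "libx264", "-preset", "veryfast",  # H.264 video, fast enough for live encoding
    "-b:v", "4500k", "-maxrate", "4500k", "-bufsize", "9000k",
    "-g", "60",                                # regular keyframes, as live ingest expects
    "-c:a", "aac", "-b:a", "128k", "-ar", "44100",
    "-f", "flv",
    f"rtmp://a.rtmp.youtube.com/live2/{STREAM_KEY}",
]
subprocess.run(cmd, check=True)
```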

More videos, links, “how does it work,” etc. are available at:
what-the-robot-saw.com

New project almost here!

I’ve been coding away this summer, trying to get my new social media video algorithmic curation project ready. Lots of fun with everything from neural net and computer vision-based video classification to algorithmically based sound and picture editing. It’ll be a live stream, and it will also be available for installations. It’s almost ready for beta, so stay tuned!
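
(For anyone wondering what “computer vision-based video classification” can look like in practice, here’s a minimal, generic sketch: sample a frame from a clip and run it through a pretrained image classifier. It assumes OpenCV and torchvision, and it illustrates the general technique only, not this project’s pipeline.)

```python
# Sketch: classify a sampled video frame with a pretrained ImageNet model.
# Generic illustration only (OpenCV + torchvision); not this project's code.
import cv2
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

cap = cv2.VideoCapture("some_clip.mp4")   # placeholder filename
ok, frame = cap.read()                    # grab the first frame
cap.release()
if ok:
    # OpenCV gives BGR; convert to RGB before preprocessing.
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(3)
    for p, idx in zip(top.values, top.indices):
        print(f"{categories[int(idx)]}: {float(p):.2f}")
```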

Sneak Peek: DeepReals

I’ve had a couple of requests to see one of my works-in-progress — my first foray into working with generative machine learning. So here it is, an alternative to DeepFakes: DeepReals.




The First Three Minutes
Every frame from the first three minutes of Christine Blasey Ford’s and Brett Kavanaugh’s testimony before the Senate Judiciary Committee, September 2018.

I’m also still working on the time-based online project I mentioned in my previous post. Since it runs in real time, that one’s a bit more involved! Hopefully I’ll have it in beta over the summer, when I have some bigger chunks of time to work on tricky things!

Project and teaching updates (finally!)

My bad, I’ve been overly sporadic about my sporadic updates again! Been busy, but here’s a brief one:
Still working on the new online/installation project. Buzzwordy keywords: Real-time video, social media, algorithmic subjectivity, computer vision. Hopefully I’ll have the equipment soon to get a beta version online, but I’m mainly focusing on teaching until early June.

Teaching! This quarter I’m teaching a new “special topics” seminar course in computer vision / machine learning / algorithmic bias practice and critical contemporary issues. I’m also teaching one of our sections of “ICAM Senior Projects,” where I get to mentor some of our fabulous ICAM senior undergrads in their computing-in-the-arts graduation projects.

And of course, the Mary Hallock Greenewalt Visibility Project continues!

Quick bits — upcoming stuff

A quick update on what I’ve been up to during sabbatical:

Working on a new durational, real-time algorithmic, live-streamed movie / installation project. Almost ready to debut; stay tuned!

Also, happy to report that “June 8th, 2018” (take 2), a real-time improvised short film produced during a studio rehearsal of a PIGS performance, has been selected for the SIGGRAPH Digital Arts Community’s online exhibition, “Urgency of Reality in a Hyper-Connected Age.” More details on that as I have ’em.

I’ve also been doing some research and development on more computer vision and machine learning things. Some of it goes into the upcoming durational movie, and some of it doesn’t. Stay tuned!

New work in progress!

Sabbatical update: I’m making new work! Focusing on online and installation work once again; a few different projects:

Algorithmic. Video. Still Image. Computer vision. Border region. Global. Social media. Speculative futures. And presents.

Despite the string of buzzwords, those really are some of the topics I’m working on. Some of the work follows on from the ideas I started dealing with in “Utopian Algorithm #1,” and some of it is quite a bit different.

I’ll be posting more as I go along, but if you’d like to know more, give me a ping!

“Googling Californias” is up on the site (five years late)

Back in 2013, Rick Silva invited me to make a project for the “W-E-S-T-E-R-N D-I-G-I-T-A-L” pavilion he was curating at The Wrong Biennale. “W-E-S-T-E-R-N D-I-G-I-T-A-L” being a pavilion featuring the work of west coast artists, I started thinking about what “west coast” means — and what “California” means. I decided to do something on the theme of Californias. I sent Rick the video and HTML links for the show at the time and did a news post about it here. But apparently I neglected to make “Googling Californias” a proper page on my site, which caused it to essentially drop off the face of the Internet after “The Wrong” ended. I just unearthed it again tonight. Thanks to Rick for inviting me to make something “wrong” on purpose!

So here it is, with its five years belated webpage: “Googling Californias (Half Truths for People on the Go.)” Video Loop, 2013. (Original, theoretically better quality web video here.)

Late at night, thoughts wander — and we find ourselves Googling Californias. Seems fitting — and all wrong: Google is itself the image of early 21st-century California technology, commerce, and power. It lives in California — and it doesn’t. The images Google offers up form a muddled patchwork of stereotypes and half-truths; but the awkward thing about half-truths is that they are half true. Like stereotypes of California itself: as awkwardly accurate as they are grotesque distortions. Sometimes you don’t find the California you were searching for. The system failed you — or you failed the system. Or maybe you weren’t looking for the right Californias. And as you travel between Californias, you remember, there’s yet another California beyond the borders of California. It doesn’t stop here. And it does.

New PIGS film: “June 8th – take 2”

[For those coming here anew: Here’s what PIGS is, and an intro to the AlgoCurator who selected the clips for “June 8th.” ]

I’d already posted the full-length studio rehearsal / improvised audiovisual animation, “June 8th, 2018,” that Curt Miller and I recorded last month. I recently came across the second run-through from that day: same set of clips, but we did two improvisations. I rather like how this one flows as a film (slower paced, and you hear and see more of the people), so I’ve posted it too:

June 8th: Take 2 (full, uncut real-time animation) from Amy Alexander on Vimeo.

Also on YouTube… (be sure to set your YouTube quality setting to 1080p60fps). Definitely better experienced with a good monitor and speakers than on a laptop (or *gasp*, a phone!)

New video: “Inside PIGS.”

I’ve put together a new video discussing the Percussive Image Gestural System. Mostly I’m discussing and demonstrating the software from a real-time experimental animation / visual music perspective: I talk about the main ways the PIGS system implements its approach to “structured improvisation.”

Inside PIGS: Amy Alexander discusses the Percussive Image Gestural System from Amy Alexander on Vimeo.

For a deeper dive into the historical/critical aspects of PIGS and collaborative audiovisual improvisation, check out the interview Curt Miller and I did a few weeks ago, “On PIGS.”

New PIGS text (interview) and videos!

Happy summer! I’ve got lots of new PIGS (Percussive Image Gestural System) stuff posted!

Videos (all with Curt Miller):
The first three are the first videos I’ve been able to make with a satisfactory capture setup. (The frame rate isn’t quite up to snuff on the first one, so it looks jerkier than in “real life,” but the “June 8th, 2018” videos are full 60fps.)
“Utopian Algorithm #1” — PIGS (Percussive Image Gestural System) Studio Rehearsal/Demo, June 2018
“June 8th, 2018 – excerpt” — PIGS Real-time animation excerpt
“PIGS film! – June 8th, 2018 – excerpt”: the full, uncut real-time animated studio performance, a recording turned “PIGS Film.”
Documentation video of PIGS performance at ICLI (International Conference on Live Interfaces), Porto.

Writing:
Finally, we’ve done some substantial writing about PIGS, the colliding histories behind it, our responses to working in mixed-modal (audio/visual) improvisation, and our responses to improvising with algorithmically curated, near-real-time videos by real people on YouTube. Hope you’ll have a read!
“On PIGS”: chapter-length interview with audiovisual developers/performers Amy Alexander and Curt Miller.

Abstract: Amy Alexander and Curt Miller discuss mixed-modal improvisation with their custom integrated software systems, PIGS (Alexander, visuals) and The Farm (Miller, sound). In this free-flowing discussion, Alexander and Miller discuss historical visual, music, and programming practices, including abstract animation, graphic scores, and object-oriented programming. They discuss how these trajectories feed into the development of PIGS, a system designed to facilitate improvisation by using drums and visual controllers to perform structured visuals. The artists then discuss the specificities of mixed-modal collaborative improvisation, including the impact of representational content (algorithmically curated YouTube videos) on their responses as improvisers. They review responses to PIGS performances to date and discuss future plans for new PIGS performance contexts. They conclude with a discussion of PIGS as audiovisual performance research and propose some ideas for the future role of frameless visuals in music ensemble performance.

Looking forward to doing some new PIGS performance and installation work with the AlgoCurator in the coming months.

Meanwhile, you can find the whole slew of past and present PIGS info at the usual place:
http://amy-alexander.com/pigs