Excited to be speaking at the Algorithmic Art Assembly in San Francisco in March! Organized by the amazing Thorsten Sideboard and hosted at Gray Area.
The Robot and I are excited that What the Robot Saw has been included in the “Learning Machines” exhibition at ElectroMuseum in Moscow.
Also, The Robot has had a minor cinematic software upgrade to version 0.2. Still early childhood for The Robot, though. Look for more upgrades to come as the Robot continues its education.
OK, here’s what I’ve been working on. It’s net art! (Not exactly like my old ’90s net art, but…)
And I’m happy to say that the amazing Curt Miller will once again be working with me to do some sound work on the project.
Is it really, really done? No.** Is it live already? Yes. So why not have a peek? I imagine I’ll be awkwardly promoting it for real soon enough:
Welcome to Robot TV. ‘What the Robot Saw’ (v 0.1 alpha) is a perpetual, robot-generated livestream film, curated and edited algorithmically from among the least viewed and subscribed YouTube videos uploaded over the past few hours. A Robot/AI filmmaker makes its way through the world of online video, focusing its attention on people who don’t usually get attention.
If the stream isn’t live, you can find recent archives here.
An invisible audience of software robots continually analyzes content on the Internet. Videos by non-“YouTube stars” that algorithms don’t promote to the “recommended” sidebar or award “verified” status may be seen by few or no human viewers. For these videos, robots may be the primary audience. In ‘What the Robot Saw,’ the Robot is both AI voyeur and film director, depicting and magnifying the online personas of the subjects of a never-ending film.
Using computer vision, neural networks, and other robotic ways of seeing and understanding, the Robot continually selects, edits, and arranges recently uploaded public YouTube clips from among those with low subscriber and view counts, focusing on personal videos. A loose, stream-of-consciousness structure emerges as the Robot organizes the clips into a drift through neural network-determined groupings. As it generates the film, the Robot streams it live back to YouTube for public viewing. The film navigates a slice of social media that’s overlooked by the usual search and recommendation algorithms, and thus largely visible only to robotic-algorithmic voyeurs.
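For the curious: the curation step described above, keeping only low-view uploads and then wandering through them by similarity, can be loosely imagined as a filter followed by a greedy nearest-neighbor walk through an embedding space. Here is a toy sketch in Python. It is purely illustrative, not the Robot’s actual code: the `curate` function, the `max_views` threshold, and the precomputed `embedding` vectors are all my assumptions.

```python
import math

def euclidean(a, b):
    """Distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def curate(videos, max_views=50):
    """Toy sketch of algorithmic curation.

    videos: list of dicts with 'id', 'views', and 'embedding' keys,
    where 'embedding' stands in for a neural-network feature vector.
    Returns video ids ordered as a "drift": each clip is followed by
    the most similar clip still remaining in the pool.
    """
    # Keep only the overlooked videos: those with very few views.
    pool = [v for v in videos if v["views"] <= max_views]
    if not pool:
        return []
    # Start from the least-viewed clip, then repeatedly jump to the
    # nearest unvisited neighbor in embedding space.
    current = min(pool, key=lambda v: v["views"])
    order = [current]
    remaining = [v for v in pool if v is not current]
    while remaining:
        nxt = min(remaining,
                  key=lambda v: euclidean(v["embedding"], current["embedding"]))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return [v["id"] for v in order]

# Example: the 9000-view clip is filtered out; the rest are ordered
# so similar clips sit next to each other.
clips = [
    {"id": "a", "views": 3, "embedding": [0.0, 0.0]},
    {"id": "b", "views": 10, "embedding": [5.0, 5.0]},
    {"id": "c", "views": 7, "embedding": [0.1, 0.1]},
    {"id": "d", "views": 9000, "embedding": [1.0, 1.0]},
]
print(curate(clips))  # ['a', 'c', 'b']
```

The real system presumably works over live YouTube data and far richer features; this just shows the shape of the idea: filter for the unseen, then sequence by resemblance rather than by chronology or popularity.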
As time zones around the globe sleep and wake, ‘What the Robot Saw’ follows the circadian rhythms of the world’s uploads. So tune in now and then. Robots never sleep*.
* For now, the live stream runs eighteen hours a day; there are one-hour “intermissions” every four hours (and as needed for maintenance). Archives of some recent streams are available on the Videos page or on the YouTube Channel.
‘What the Robot Saw’ is a non-commercial project.
** This is version 0.1-alpha, an initial implementation. There’s work still to be done in terms of structure, timing, sound, and AI. Versions focused on different content will also likely be spawned in the future.
Although the YouTube live stream is central to the project, the technical limitations of live streaming mean the image and sound quality are not ideal and may vary with network conditions. A high quality stream can be generated locally for art installations and screenings.
More videos, links, “how does it work,” etc. available at:
I’ve been coding away this summer, trying to get my new algorithmic social media video curation project ready. Lots of fun with everything from neural net- and computer vision-based video classification to algorithmic sound and picture editing. It’ll be a live stream, and also available for installations. It’s almost ready for beta, so stay tuned!
I’ve had a couple of requests to see one of my works-in-progress — my first foray into working with generative machine learning. So here it is, an alternative to DeepFakes: DeepReals.
Also still working on the time-based online project I mentioned in my previous post. Since it runs in real time, that one’s a bit more involved! Hopefully will have that one in beta over the summer when I have some bigger chunks of time to work on tricky things!
My bad, I’ve been overly sporadic about my sporadic updates again! Been busy, but here’s a brief one:
Still working on new online/installation project. Buzzwordy keywords: Real-time video, social media, algorithmic subjectivity, computer vision. Hopefully will have the equipment soon to get a beta version online, but mainly focusing on teaching til early June.
Teaching! This quarter I’m teaching a new “special topics” seminar course in computer vision/machine learning/algorithmic bias practice and critical contemporary issues. Also teaching one of our sections of “ICAM Senior Projects,” where I get to mentor some of our fabulous ICAM senior undergrads in their computing-in-the-arts graduation projects.
And of course, the Mary Hallock Greenewalt Visibility Project continues!
A quick update on what I’ve been up to during sabbatical:
Working on a new durational, real-time algorithmic, live streamed movie / installation project. Almost ready to debut; stay tuned!
Also, happy to report that “June 8th, 2018” (take 2), a real-time improvised short film produced within a studio rehearsal of a PIGS performance, has been selected as part of the SIGGRAPH Digital Arts Community’s online exhibition, “Urgency of Reality in a Hyper-Connected Age.” More details on that as I have ’em.
Also been doing some research and development on more computer vision- and machine learning-related things. Some of it goes into the upcoming durational movie, and some of it doesn’t. Stay tuned!
Sabbatical update: I’m making new work! Focusing on online and installation work once again; a few different projects:
Algorithmic. Video. Still Image. Computer vision. Border region. Global. Social media. Speculative futures. And presents.
Despite the string of buzzwords, those are really some of the topics I’m working on. Some of it follows on from the ideas I started dealing with in “Utopian Algorithm #1,” and others are quite a bit different.
I’ll be posting more as I go along, but if you’d like to know more, give me a ping!
Back in 2013, Rick Silva invited me to make a project for the “W-E-S-T-E-R-N D-I-G-I-T-A-L” pavilion he was curating at The Wrong Biennale. “W-E-S-T-E-R-N D-I-G-I-T-A-L” being a pavilion featuring the work of west coast artists, I started thinking about what “west coast” means — and what “California” means. I decided to do something on the theme of Californias. I sent Rick the video and HTML links for the show at the time and did a news post about it here. But apparently I neglected to make “Googling Californias” a proper page on my site, which caused it to essentially drop off the face of the Internet after “The Wrong” ended. I just unearthed it again tonight. Thanks to Rick for inviting me to make something “wrong” on purpose!
Late at night, thoughts wander — and we find ourselves Googling Californias. Seems fitting — and all wrong: Google is itself the image of early 21st-century California technology, commerce, and power. It lives in California — and it doesn’t. The images Google offers up form a muddled patchwork of stereotypes and half-truths; but the awkward thing about half-truths is that they are half true. Like stereotypes of California itself: as awkwardly accurate as they are grotesque distortions. Sometimes you don’t find the California you were searching for. The system failed you — or you failed the system. Or maybe you weren’t looking for the right Californias. And as you travel between Californias, you remember, there’s yet another California beyond the borders of California. It doesn’t stop here. And it does.
[For those coming here anew: Here’s what PIGS is, and an intro to the AlgoCurator who selected the clips for “June 8th.” ]
I’d already posted the full-length studio rehearsal / improvised audiovisual animation, “June 8th, 2018,” that Curt Miller and I recorded last month. I came across the second run-through from that day — same set of clips, but we did two improvisations. I rather like how this one flows as a film — slower paced, and you hear and see more of the people — so I’ve posted it too:
Also on YouTube … (be sure to set your YouTube quality setting to 1080p60fps). Definitely better experienced with a good monitor and speakers than on a laptop (or *gasp* — a phone!)