What the Robot Saw

what-the-robot-saw.com

Welcome to the contrarian algorithm. As algorithms silently curate the social media world, an alternate reality also exists, starring the humans who win at losing the social media game — a secret cinematic world seen only by robots. Does the Internet suck? Or do just the parts we get to see suck?

‘What the Robot Saw’ is a live, continuously generated robot film, curated, analyzed, and edited using computer vision, neural networks, and contrarian search algorithms. It’s a Sunday drive through the awkward intersections of performance, surveillance, voyeurism, and robots — in the age of the talking head. The Robot catches glimpses of some of the least viewed recent videos on YouTube, featuring first-person narratives by some of the people commercial ranking algorithms conceal.

The Robot continuously makes its way through the world of low-engagement online video, carefully organizing and describing the people and scenes it features in its documentary. The film is generated and titled algorithmically from among the least viewed and least subscribed YouTube videos uploaded in recent hours and days. “What the Robot Saw” documents the complicated relationship between the world’s surveillant and curatorial AI robots and the humans who are both their subjects and their stars — people they superficially observe and analyze but cannot comprehend.
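For the techy side, here’s a minimal sketch of how this kind of selection pass could work, using the YouTube Data API v3 via google-api-python-client. The API key placeholder, 24-hour window, and view-count cutoff are illustrative assumptions, not the project’s actual configuration:

```python
# Hypothetical sketch: find recently uploaded, rarely viewed YouTube videos.
# Assumes the YouTube Data API v3 via google-api-python-client; the key,
# time window, and cutoff are illustrative, not the project's real config.
import datetime
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"   # hypothetical placeholder
MAX_VIEWS = 10             # assumed "least viewed" cutoff

youtube = build("youtube", "v3", developerKey=API_KEY)

# Look back over the last 24 hours of uploads (RFC 3339 timestamp).
published_after = (
    datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=24)
).isoformat()

search = youtube.search().list(
    part="id",
    type="video",
    order="date",
    publishedAfter=published_after,
    maxResults=50,
).execute()
video_ids = [item["id"]["videoId"] for item in search["items"]]

# Fetch per-video statistics and keep only the near-unseen ones.
stats = youtube.videos().list(
    part="statistics,snippet", id=",".join(video_ids)
).execute()
low_view = [
    v for v in stats["items"]
    if int(v["statistics"].get("viewCount", "0")) <= MAX_VIEWS
]
# A similar pass over channels().list(part="statistics") could apply
# the "least subscribed" criterion via subscriberCount.
```

Note that search.list has no ascending view-count sort (order="viewCount" is descending only), so a pipeline like this has to fetch recent uploads first and filter for low counts afterward.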

Much more info (on both the conceptual and techy sides) at what-the-robot-saw.com.

Please turn on the sound. ‘What the Robot Saw’ is intended for fullscreen viewing if you can.
If the stream isn’t live, or if you want to scrub through the live or recent videos, view them on Twitch.
Thrice-daily intermissions last four minutes; the stream resumes on the hour.

An invisible audience of software robots continually analyzes and curates content on the Internet. Videos by non-“YouTube stars” that algorithms don’t promote to the top of the search rankings or the “recommended” sidebar may be seen by few or no human viewers. For these videos, robots may be the primary audience. In ‘What the Robot Saw,’ the Robot is an AI voyeur turned director: classifying and magnifying the online personas of the subjects of a never-ending film.

Using computer vision, neural networks, and other robotic ways of seeing, hearing, and understanding, the Robot continually selects, edits, and identifies recently uploaded public YouTube clips from among those with low subscriber and view counts, focusing on personal videos. A loose, stream-of-consciousness narrative develops as the Robot drifts through neural network-determined groupings. As the Robot scans and magnifies the clips, it generates the film in a style fitting its own obsessions, inserting titles for sections and “interviewees.”
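One way such groupings could be formed (a sketch under assumptions; the statement above doesn’t specify the actual pipeline) is to embed a representative frame from each clip with a pretrained CNN and cluster the embeddings. The model choice, frame sampling, and cluster count below are all illustrative:

```python
# Hypothetical sketch: group clips by visual similarity using a pretrained
# CNN as a feature extractor plus k-means. Model, frame sampling, and
# cluster count are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans

# ResNet-18 with its classification head removed -> 512-d embeddings.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed_frame(path: str) -> torch.Tensor:
    """Embed one representative frame extracted from a clip."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

frame_paths = ["clip0.jpg", "clip1.jpg", "clip2.jpg"]  # placeholder frames
features = torch.stack([embed_frame(p) for p in frame_paths]).numpy()

# Each cluster becomes a candidate "section" of the film.
labels = KMeans(n_clusters=2, n_init="auto").fit_predict(features)
```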

Robot meets Resting Bitch Face and Other Adventures

As it makes its way through the film, the Robot adds lower-third supers: periodic section titles, derived from its image recognition-based groupings and interpreted through the Robot’s vaguely poetic perspective; and frequent identifiers, for the many human interviewees in its documentary. The identifiers — talking head-style descriptions like “Confused-Looking Female, age 22-34” — are generated using Amazon Rekognition, a popular commercial face detection/recognition service. Rekognition’s feature set offers a glimpse into how computer vision robots, marketers, and others choose to categorize humans. While attempting to adhere to Rekognition’s guidelines, which differentiate a person’s appearance from their actual internal emotional state, the Robot titles each person as it analyzes/perceives them — as marketers and others using similar software do. When you’re a computer vision robot, appearance is everything. Pixels don’t have internal states.
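As a concrete sketch, here is how an identifier in that format could be derived from Rekognition’s face attributes via boto3. The label wording mirrors the example above; taking the single highest-confidence emotion is an assumption:

```python
# Hypothetical sketch: derive a lower-third identifier from Amazon
# Rekognition face attributes via boto3. The label wording mirrors the
# example above; taking the top-confidence emotion is an assumption.
import boto3

rekognition = boto3.client("rekognition")

def identify(image_bytes: bytes) -> str:
    """Build a label like 'Confused-Looking Female, age 22-34'."""
    resp = rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # request age range, gender, emotions, etc.
    )
    if not resp["FaceDetails"]:
        return "No Face Detected"
    face = resp["FaceDetails"][0]

    # Rekognition predicts apparent expression, not internal state,
    # hence "-Looking" rather than a claim about how the person feels.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    mood = top_emotion["Type"].capitalize()   # e.g. "Confused"
    gender = face["Gender"]["Value"]          # e.g. "Female"
    age = face["AgeRange"]                    # e.g. {"Low": 22, "High": 34}

    return f"{mood}-Looking {gender}, age {age['Low']}-{age['High']}"
```

The “-Looking” suffix reflects Rekognition’s own caveat that its emotion predictions describe apparent expression, not what a person actually feels.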

Dated archives are generated for each daypart’s livestream, offering an ongoing record of the videos few humans get to see, as robots might, and sometimes do, see them.

* The live stream runs throughout the day; there are “intermissions” every four hours (and as needed for maintenance). Archives of recent streams are available on the Videos page or on the YouTube Channel.

Artist, Director, Lead Developer, AI Wrangler, Coffee Girl: Amy Alexander (contact)

Additional software development and sound design contributions: Curt Miller