What the Robot that Saw Doesn’t See — aka Coronavirus meets Moderation Algorithms

What the Robot Saw has recently moved to Twitch. (Though you can always find it at the WTRS homepage, regardless of where the stream is.) Despite its gaming-centered focus, Twitch seems to be working out well: The compression algorithms are presumably designed for computer-animated games — so, minus a few nitpicks, the image quality for WTRS’s graphics is much better than it was on YouTube. It’s easy for the Robot to start and stop streams throughout the day. And best of all — so far their “community guidelines” algorithms aren’t knocking the Robot offline at every turn. (For the record, the Robot is an upstanding member of its community — just look at all the recycling it does… )

To be fair, I understand YouTube’s predicament in trying to train AIs to filter out troll videos: I know it’s hard because I’ve been training the Robot to do that kind of thing for over a year. In the Robot’s case, it curates a small number of videos to begin with, and if it overaggressively culls content, that content isn’t knocked offline entirely. In YouTube’s case, the stakes are higher and the potential impact far greater. ‘What the Robot Saw’ is a silly robot film, but it’s about the serious concern of making videos (and by extension, their humans) invisible if they don’t fit the mass popularity algorithm. Usually that happens by burying them in recommendations and search results. Now the “misfits” are being removed entirely. The Robot will be OK. But what — and who — else is not getting seen that should be? YouTube has become a global meta-medium that influences how we perceive the culture of the moment. And its search algorithms, then, are a meta-meta-medium. So — this matters.

YouTube has said they’re trying to keep videos off their site that spread disinformation about Coronavirus. And stopping disinformation is a good thing. But they’ve got a dilemma, because at the intersection of monetization, popularity, and fear lies disinformation. AIs are notoriously imperfect moderators, so a lot of disinfo still gets out, while some amount of legitimate content gets censored. Both sides of this are worth keeping an eye on. Which will be hard to do, since not even Robots can see what isn’t there.

For now, here’s what the Robot does see. (BTW, I’ve finally had a chance to do some timing work on the animation, along with other cleanup. So, here we go:)