The Necessary Imperfections of AI Content Moderation
With the ocean of content on social media, we need AI to identify and remove inappropriate material; humans simply can't keep up. But AI doesn't assess content the way we do. It isn't a deliberative body akin to the Supreme Court. Yet because we think of content moderation as a reflection of human evaluation, we make unreasonable demands of social media companies and call for regulations that won't protect anyone. Reframing what AI content moderation is, and has to be, my guest argues, leads us to make more reasonable and more effective demands of both social media companies and government.

Listen to the full episode:

Previous: The Secret Life of Data

Next: AI Armageddon is Unlikely