Holding AI Responsible for What It Says

 

Air Canada blamed its LLM chatbot for giving false information about the airline's bereavement fare policy. It lost the lawsuit, because of course it's not the chatbot's fault. But what would it take to hold chatbots responsible for what they say? That's the topic of discussion with my guest, philosopher Emma Borg.

 

Listen to the full episode:

Previous: The AI Mirror

Next: Deepfakes and 2024 Election