Season 2 | Episode 25
New episodes are released every Thursday.
A podcast that goes deep on tech, ethics, and society.
Recent Episodes
-
Ep. 025
From the best of season 1. Part 2 of my conversation with Alex.
There’s good reason to think AI doesn’t understand anything: it’s just moving words around according to mathematical rules, predicting the words that come next. But in this episode, philosopher Alex Grzankowski argues that while AI may not understand what it’s saying, it does understand language. We do a deep dive into the nature of human and AI understanding, ending with strategies for how AI researchers could pursue AI that genuinely understands the world.
-
Ep. 024
From the best of season 1. Part 1 of my conversation with Alex Grzankowski.
It looks like ChatGPT understands what you’re asking. It looks like ChatGPT understands what it’s saying in reply.
But that’s not the case.
Alex and I discuss what understanding is, for both people and machines, and what it would take for a machine to understand what it’s saying.
-
Ep. 023
One person driving one car creates a negligible amount of pollution. The problem arises when we have lots of people driving cars. Might this kind of issue arise with AI use as well? What if everyone uses the same hiring or lending or diagnostic algorithm? My guest, Kathleen Creel, argues that this is bad for society and bad for the companies using these algorithms. The solution, in broad strokes, is to introduce randomness into the AI system. But is this a good idea? If so, do we need regulation to pull it off? This and more on today’s episode.
-
Ep. 022
With so many laws and so much case law, it’s virtually impossible for the layperson to know what’s legal and illegal. But what if AI can synthesize all that information and deliver clear legal guidance to the average person? Is such a thing possible? Is it desirable?
-
Ep. 021
Greg Epstein, author of the new book “Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation,” discusses, well, what do you think? It’s right there in the title. Go have a listen.
-
Ep. 020
From the best of season 1: You might think it's outrageous that companies collect data about you and use it in various ways to drive profits. The business model of the "attention" economy is often objected to on just these grounds.
On the other hand, does it really matter if data about you is collected and no person ever looks at that data? Is that really an invasion of your privacy?
Carissa Véliz and I discuss all this and more. I push the skeptical line, trying on the position that it doesn’t really matter all that much. Carissa has powerful arguments against me.
This conversation goes way deeper than the ‘privacy good / data collection bad’ statements we see all the time. I hope you enjoy!
-
Ep. 019
We use the wrong metaphor for thinking about AI, Shannon Vallor argues, and bad thinking leads to bad results. We need to stop thinking of AI as an agent or as having a mind, and stop thinking of the human mind/brain as a kind of software/hardware configuration. All of this is misguided. Instead, we should think of AI as a mirror, reflecting our image in ways that are sometimes helpful and sometimes distorted. Shifting to this new metaphor, she says, will lead us to better AI, both technically and ethically.
-
Ep. 018
Air Canada blamed its LLM chatbot for giving a customer false information about its bereavement fare policy. The airline lost the lawsuit, because of course it’s not the chatbot’s fault. But what would it take to hold chatbots responsible for what they say? That’s the topic of discussion with my guest, philosopher Emma Borg.
-
Ep. 017
California just enacted a law aimed at drastically decreasing deepfakes on social media. The worry, of course, is that they are already being used to unjustifiably sway voters. In this episode, one of the best from Season 1, I talk to Dean Jackson and Jon Bateman, experts on the role of deepfakes in disinformation campaigns. The bottom line? Deepfakes aren’t great, but they’re not half the problem.
-
Ep. 016
What does it look like to integrate ethics into the teams that are building AI? How can we make ethics a practice and not a compliance checklist? In today’s episode I talk with Marc Steen, author of the book “Ethics for People Who Work in Tech,” who answers these questions and more.
-
Ep. 015
Doesn’t the title say it all? This is for anyone who wants the very basics on what AI is, why it’s not intelligent, and why it doesn’t pose an existential threat to humanity. If you don’t know anything at all about AI and/or the nature of the mind/intelligence, don’t worry: we’re starting on the ground floor.
-
Ep. 014
Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital land: the algorithms influence how we relate to each other online in ways we don’t understand. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is, and how we can encourage or require greater autonomy, is our topic of discussion today.
-
Ep. 013
From the best of season 1:
I talk a lot about bias, black boxes, and privacy, but perhaps my focus is too narrow. In this conversation, Aimee van Wynsberghe and I discuss what she calls “sustainable AI.” We focus on the environmental impacts of AI, the ethical significance of those impacts, and who bears the social costs while others reap AI’s benefits.
-
Ep. 012
Is our collective approach to ensuring AI doesn’t go off the rails fundamentally misguided? Is it too narrow to get the job done? My guest, John Basl, argues exactly that. We need to broaden our perspective, he says, and prioritize what he calls an “AI ethics ecosystem.” It’s a big lift, but without it we face an even bigger problem.
-
Ep. 011
Many AI researchers think we should make AI capable of ethical inquiry. We can’t teach it all the ethical rules; that’s impossible. Instead, we should teach it to reason ethically, just as we teach children. But my guest thinks this strategy rests on a number of controversial assumptions, including about how ethics works and about what actually is right and wrong.
-
Ep. 010
It’s common to hear we need new regulations to avoid the risks of AI (bias, privacy violations, manipulation, etc.). But my guest, Dean Ball, thinks this claim is too hastily made. In fact, he argues, we don’t need a new regulatory regime tailored to AI. If he’s right, then in a way that’s good news, since regulations are so notoriously difficult to push through. But he emphasizes we still need a robust governance response to the risks at hand. What are those responses? Have a listen and find out!
-
Ep. 009
Everyone knows biased or discriminatory AI is bad and that we need to get rid of it, right? Well, not so fast.
I’m bringing back one of the best episodes from Season 1. I talk to David Danks, a professor of data science and philosophy at UCSD. He and his research team argue that we need to reconceive our approach to biased AI. In some cases, David thinks, bias can even be beneficial. Good policy, both corporate and regulatory, needs to take this into account.
-
Ep. 008
Data about us is collected, aggregated, and shared in more ways than we can count. In some cases, this leads to great benefits; in others, a great deal of harm. But at the end of the day, the truth is that it’s all out of control. No individual, private company, or government has a grip on what gets collected, what gets done with it, or what the societal impacts are. In this episode I talk to Aram Sinnreich and Jesse Gilbert about their new book, “The Secret Life of Data,” in which they explain the complexity and how we should begin to take back control.
-
Ep. 007
With the ocean of social media content, we need AI to identify and remove inappropriate material; humans just can’t keep up. But AI doesn’t assess content the way we do. It’s not a deliberative body akin to the Supreme Court. Yet because we think of content moderation as a reflection of human evaluation, we make unreasonable demands of social media companies and ask for regulations that won’t protect anyone. When we reframe what AI content moderation is and has to be, my guest argues, we can make more reasonable and more effective demands of both social media companies and government.
-
Ep. 006
AI + nuclear capabilities sounds like a recipe for disaster. Some people think it could cause mass extinction. While it’s easy to let our imaginations run wild, it’s far more useful to understand how the military actually incorporates AI into its weapons and operations. Heather gives us precisely those insights, and with them the opportunity to think clearly about the threat.
-
Ep. 005
AI holds a lot of promise for making faster, more accurate diagnoses of our ailments. But if these systems are too influential, might they undermine our doctors’ ability to understand the rationale for a diagnosis? And could they undermine the aspect of the doctor-patient relationship that is crucial for maintaining patient autonomy?
-
Ep. 004
Privacy is important. But I think we mostly misconceive the nature of privacy and data privacy. I argue that we should rethink data privacy so that we can both better protect people and enable legitimately desirable innovation.
-
Ep. 003
Technologists are racing to create AGI, artificial general intelligence. They also say we must align AGI’s moral values with our own. But Professors Ariela Tubert and Justin Tiehen argue that’s impossible: once you create an AGI, you also give it the intellectual capacity needed for freedom, including the freedom to reject the values you gave it.
-
Ep. 002
Of course decreasing racial disparities in healthcare is ethically imperative. But does it sometimes require too great a moral sacrifice? If it costs more lives than a non-equitable distribution of healthcare resources would, should we really do it? Professors Guha Krishnamurthi and Eric Vogelstein argue that equity is not always a moral trump card.
-
Ep. 001
Could online sexual assault be as morally bad as in-person sexual assault? Honestly, that initially struck me as a bit crazy. But Professor John Danaher makes some very compelling arguments.
Subscribe to my newsletter
Meet the Host
Follow along with our host, Reid Blackman, author of “Ethical Machines” (Harvard Business Review Press), creator and host of the podcast “Ethical Machines,” and Founder and CEO of Virtue, a digital ethical risk consultancy. He is also an advisor to the Canadian government on its federal AI regulations, was a founding member of EY’s AI Advisory Board, and was a Senior Advisor to the Deloitte AI Institute.