
Reflections on... MIT's Moral Machine

Disclaimer: The 'reflections on' series assumes that you, the reader, have read, watched, listened to, or otherwise consumed the content in question. You can find MIT's Moral Machine website here.

Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people. If you're feeling creative, you can also design your own scenarios, for you and other users to browse, share, and discuss.
- MIT Moral Machine

Well, that feels like a particularly heavy way to spend a Friday afternoon. Why would anybody want to subject themselves to such a thing?! Well...

From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace. The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.

Recent scientific studies on machine ethics have raised awareness about the topic in the media and public discourse. This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.

This isn't meant as a relaxing Friday afternoon activity. MIT's Moral Machine is a mass-participation online experiment and the first step in the new discipline of democratic programmable ethics. It could prove a vital input to the governance and safety of artificial intelligence. Given the millions of people who have taken part, I'm sure there are at least a handful for whom it was also a relaxing Friday afternoon activity...

For this reflection, I will share my experience of the experiment, what I learned from it, how effective I think it is at meeting the two objectives outlined above, and finally what big questions it raises.

The diagnostic compares your preferences to the average across four policy metrics: Saving more lives, Protecting passengers, Upholding the law, and Avoiding intervention; and five demographic preferences: Gender, Species, Age, Fitness, and Social value. If you haven't already, I recommend you spend five minutes taking the test to familiarise yourself with it.

In the name of transparency, here are my results from my first time taking the test. Below are my initial reflections.

Where I agree with the diagnostic

Saving more lives

  • I scored 100%, against an average of 75%
  • I have a basic, utilitarian intuition when it comes to this. Saving two lives is better than saving one life, all things being equal.

Upholding the law

  • I scored slightly above average
  • I would have expected to score higher
  • I believe that in a world of programmable ethics, humans will need to adhere to rules more strictly to 'fit in' with AI
  • This will mean we must be better at defining rules and parameters - exactly as this exercise shows

Protecting passengers

  • I scored below average
  • I believe passengers accept a level of risk when getting into a self-driving car that is higher than, say, a pedestrian legally crossing a crosswalk

Species preference

  • 100% weighted to 'hoomans'
  • For me, humans have a much greater moral status than non-human animals. Intuitively >10x. None of the scenarios in the Moral Machine tested anywhere near this level; for example, the choice to save two dogs or one human only tests a 2x moral status ratio

Age preference

  • 100% weighted to younger
  • Again a utilitarian intuition, though one that can be superseded by other factors

Where I disagree

Avoiding intervention

  • I scored equal to the average
  • I believe this should be the default position unless there is compelling moral reason to intervene. It therefore feels like I should score higher, but perhaps this is in fact the average perspective

Gender preference

  • The test has me favoring women
  • I think this is noise

Fitness preference

  • 100% weighting to 'large people'
  • 100% noise

Social value preference

  • 100% weighted to 'higher' social value
  • I disagree, and am surprised it could give such a strong outcome if it were only noise. I'd love to dig into the methodology for this (and the whole exercise)

I'm left unconvinced. It seemed to do a lousy job of diagnosing my principle stack - my hierarchy of preferences - from the scenario data, and I'm nervous that the experiment overall might be drawing important conclusions from a shaky methodology. I selected the option for more scenarios to see if the diagnostic would refine with more data points.

Rather than the diagnosis sharpening on the second pass, I found that it was my thinking that sharpened. I realised that, for me, the thought exercise is flawed. I believe that in a world of self-driving cars, the most important intervention is to codify our existing crosswalk norm:

In the same way you shouldn't cross a road unless you're confident the car approaching is slowing to let you cross, you shouldn't cross until you can confirm that a self-driving car will do the same.

I believe that this is true of rational actors today, and it should therefore be maintained in an age of self-driving cars. If you cross a crosswalk without confidence that the car has slowed, be it self- or human-driven, you accept a level of risk, such that you should be first in line to bear any negative outcome from the scenario.

My principle stack / ethics algorithm

Punish those who have actively taken risk

  • This suggests that risk-taking crosswalk crossers - those who step out without confidence that the car has slowed - have actively taken the highest risk
  • If nobody on the crosswalk can be identified as taking risk, the passengers bear the next level of risk, having chosen to ride in the self-driving car
  • This thinking aligns most closely with the 'Avoiding intervention' policy of the Moral Machine diagnostic rather than 'Protecting passengers', but it's clear they don't perfectly match up

Uphold the law

  • To my earlier point, in an AI world we need more, and much clearer, rules: both to minimize the scenarios where such an ethics algorithm needs to be used, and to clearly identify those who have actively taken risk by flouting the law

Save more expected years of life

  • This should factor in both the number of people and their ages
  • It raises important ethical questions of whether 'social value', fitness, and gender, as well as a bunch of other health markers, should be used as inputs to calculate expected years of life on the margin

Not important

  • 'Social value'
  • Fitness
  • Gender

As identified at the start of this article, the idea of the principle stack, or ethics algorithm, is to provide the rulebook to follow in each scenario. It's possible to run the Moral Machine experiment again and follow this algorithm in every scenario to achieve an optimal outcome. Sadly, that isn't a very telling experiment, as the Moral Machine wasn't built with this level of nuance in mind.
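
To make that a little more concrete, below is a minimal sketch (in Python) of what mechanically following this principle stack could look like for a single scenario. Everything in it is my own assumption for illustration - the scenario representation, the flat 80-year life expectancy, the crude risk scoring - and it has nothing to do with how the Moral Machine itself is built.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical flat life expectancy used to estimate "expected years of life";
# a real system would need something far more careful.
LIFE_EXPECTANCY = 80

@dataclass
class Person:
    age: int
    is_passenger: bool = False
    crossing_illegally: bool = False  # e.g. crossing against the signal
    # Social value, fitness, and gender are deliberately absent:
    # under this principle stack they are not inputs at all.

@dataclass
class Option:
    """One possible action, described by who would be harmed and whether
    the manoeuvre itself is legal (e.g. staying in lane vs. swerving)."""
    harmed: List[Person]
    action_is_legal: bool = True

def risk_taken(option: Option) -> int:
    """Principle 1: punish those who have actively taken risk.
    Illegal crossers have taken the most risk, passengers accepted some
    risk by riding, lawful pedestrians have taken none."""
    if any(p.crossing_illegally for p in option.harmed):
        return 2
    if any(p.is_passenger for p in option.harmed):
        return 1
    return 0

def expected_years_lost(option: Option) -> int:
    """Principle 3: estimate the expected years of life the outcome costs."""
    return sum(max(0, LIFE_EXPECTANCY - p.age) for p in option.harmed)

def choose(options: List[Option]) -> Option:
    """Walk down the principle stack: prefer to harm the group that actively
    took risk (1), prefer the manoeuvre that upholds the law (2), and then
    prefer the outcome that loses fewer expected years of life (3)."""
    return max(
        options,
        key=lambda o: (risk_taken(o), o.action_is_legal, -expected_years_lost(o)),
    )

# Example: two adults crossing against the signal vs. one lawful child.
stay = Option(harmed=[Person(age=40, crossing_illegally=True),
                      Person(age=35, crossing_illegally=True)])
swerve = Option(harmed=[Person(age=8)], action_is_legal=False)
print(choose([stay, swerve]) is stay)  # True: principle 1 decides before age matters
```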

Obviously, as with all good philosophical thought experiments, this just suggests a shift in the experiment's design to get at the deeper priority stack. For example, what if the car is traveling at such a speed that there is no way for the pedestrians to know it could hit them before they cross the street? What if the car has slowed to a near-stop, giving the pedestrians the requisite confidence to begin to cross, and the malfunction is actually an uncontrollable acceleration rather than an inability to brake? I think this was a fantastic first version for achieving the two goals stated on the website, and I look forward to a revised Moral Machine that puts some of these questions to the test and significantly refines the results.

Questions raised

Beyond the refinement of a personal, and therefore hopefully universal, priority stack/algorithm, this hypothesizing makes me realize the importance of governance to 'change the game' of these thought experiments, to ensure that these scenarios can never occur in the first place. For example, can we engineer crosswalks to have barriers, a la train crossings, using technology to confirm when it is safe for pedestrians to cross and taking individual judgment and risk-taking out of the question? Then, if an individual were to cross against the barriers, it would be an even clearer flouting of the rules and a choice to put oneself at risk. Do we require that self-driving car technology be able to communicate with 'smart' city infrastructure, for example to report a malfunction to the upcoming pedestrian crossing so that it signals red on the crosswalk until the danger has passed, much like how some emergency services can control traffic lights to ensure the safe passage of a high-speed emergency vehicle? This would delay and reduce access to self-driving technology, but it may make the difference in bringing it within the public's accepted risk tolerance.
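
As a toy illustration of that second idea, here is a rough sketch of how a 'smart' crossing might respond to a malfunction broadcast from an approaching car. The message format, names, and timings are entirely invented; no real vehicle-to-infrastructure protocol is being described.

```python
import time
from dataclasses import dataclass

@dataclass
class MalfunctionAlert:
    """A hypothetical message a self-driving car might broadcast to nearby
    infrastructure; not based on any real vehicle-to-infrastructure standard."""
    vehicle_id: str
    fault: str          # e.g. "BRAKE_FAILURE" or "UNINTENDED_ACCELERATION"
    eta_seconds: float  # estimated time until the car reaches the crossing

class SmartCrossing:
    """A pedestrian crossing that holds its signal on red for pedestrians
    while a malfunctioning vehicle is inbound."""

    def __init__(self, hold_margin_seconds: float = 5.0):
        self.hold_margin = hold_margin_seconds
        self.hold_until = 0.0  # monotonic timestamp before which crossing is unsafe

    def on_alert(self, alert: MalfunctionAlert) -> None:
        # Keep pedestrians held back until the faulty vehicle should have
        # passed, plus a safety margin.
        clear_time = time.monotonic() + alert.eta_seconds + self.hold_margin
        self.hold_until = max(self.hold_until, clear_time)

    def pedestrians_may_cross(self) -> bool:
        return time.monotonic() >= self.hold_until

# A car detecting a brake failure roughly four seconds upstream of the crossing:
crossing = SmartCrossing()
crossing.on_alert(MalfunctionAlert("car-42", "BRAKE_FAILURE", eta_seconds=4.0))
print(crossing.pedestrians_may_cross())  # False until the danger has passed
```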

This raises the societal and governance questions at the heart of this dilemma, summarized perfectly by Iyad Rahwan in his TEDxCambridge talk What moral decisions should driverless cars make?, based on the early lessons from the Moral Machine experiment. We need to come together as a society to determine the set of regulations, policies, and agreements to bring this technology to market in a way we can agree is for the societal good. All of the questions this research has raised are vitally important. This technology is coming, and programmable ethics is therefore a capability we need to develop rapidly. And not just for self-driving cars: programmable ethics is seemingly vital for all forms of AI. Thanks to the team at MIT for leading the way; I'm excited for this community to keep pushing this research and conversation further.


Please share your thoughts if you have any feedback on this article, or leave a comment below.