
What Are We Solving For?

“Happiness then, is found to be something perfect and self sufficient, being the end to which our actions are directed.”
Aristotle, The Nicomachean Ethics


'What are we solving for?' is math-speak for 'What is our objective?' What goal are we seeking as humankind? It's a profound question - so profound, in fact, that you may believe it shouldn't be reduced to math-speak. We have seen, however, that the nature of Artificial Intelligence is making questions like these - phrased in the language of mathematics and logic - ever more relevant.

AI needs an objective function - a specification of which states of the world are more desirable, so that it has something to optimise towards. While we don't need to program AI with a full set of human values, it would certainly help if we knew what we as a species are aiming for. If we don't know what we are solving for as humanity, how can we expect to train an AI that acts in our best interests?
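To make that concrete, here is a minimal sketch in Python of what an objective function is: a function that scores states of the world so that more desirable states receive higher scores. The attributes and weights are entirely hypothetical placeholders, not a proposal for what the real objective should contain.

```python
# A minimal, purely illustrative "objective function": it maps a state of
# the world to a single score, so that states we prefer score higher.
# The attributes and weights below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class WorldState:
    expected_healthy_life_years: float  # average per person
    average_fulfilment: float           # 0.0 to 1.0, however we choose to measure it
    survival_probability: float         # chance humanity endures this century

def objective(state: WorldState) -> float:
    """Score a world state; higher is 'better' under these (debatable) weights."""
    return (
        0.4 * state.expected_healthy_life_years / 120.0
        + 0.4 * state.average_fulfilment
        + 0.2 * state.survival_probability
    )

# An optimiser (or an AI) pointed at this function will push the world towards
# whatever the formula happens to reward - which is why choosing what goes
# into it matters so much.
print(objective(WorldState(80.0, 0.7, 0.95)))
```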

This is such a huge question I will make no attempt to provide answers here. Instead I hope to structure the problem and inform our research agendas by setting out the areas where we need to develop our thinking. I'll propose that beyond survival and health, our objective should be to maximise human fulfilment, and that we ought to further research the drivers of fulfilment, and indeed how to measure it. I'll expand this to clarify that the distribution of fulfilment across humanity is important, and that we should also consider the moral status of other agents in our objective function, namely future people, non-human animals, digital people, and artificial intelligence itself.

Survival

It seems a fair assumption that we want humanity to endure. If there is no human race...well, there's nobody left to ask what our objective should be, for a start. We are programmed by evolution to want to procreate. The Voluntary Human Extinction Movement - the group that argues "that the best thing for Earth's biosphere is for humans to voluntarily cease reproducing" - is niche enough that it can be discounted. The relevant question then becomes: how do we make sure that humanity survives for as long as possible?

Existential risk

The term existential risk was coined by Oxford philosopher Nick Bostrom, and we can understand it as the chance of an existential catastrophe, "an event which causes the end of existence of our descendants". It has come to prominence over the last fifteen years: there is a Future of Humanity Institute at Oxford University and a Centre for the Study of Existential Risk at Cambridge University, as well as a number of private think tanks such as the Future of Life Institute. Many in this field believe that even if the probability of such an existential catastrophe is minimal, the inordinate loss of life and future potential it would bring about means it can be worthwhile to invest significant resources to reduce that probability by even a fraction.

Health

The total number of human lives seems too crude a measure of what we are solving for. Would we really choose an infinite future where all humans are bed-bound for their entire lives due to chronic sickness, over a future where humanity goes extinct after 10 billion years but everyone is healthy and has a life expectancy of 120? That hopefully intuitive hypothetical shows how important a factor health should be in our quantification. There are parallels with how the healthcare industry moved to measures of Healthy Life Years (HLYs) to assess the benefits of an intervention, rather than simply life expectancy.
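As a toy illustration of the idea (not the official HLY methodology, and with invented health weights), a healthy-life measure discounts years lived by the quality of health in which they are lived, rather than simply counting them:

```python
# Toy illustration of a Healthy-Life-Years-style calculation: years lived are
# weighted by a health-quality factor between 0 and 1 rather than simply
# counted. All numbers here are invented for the example.

def healthy_life_years(years_in_state: dict[str, float],
                       health_weight: dict[str, float]) -> float:
    """Sum years lived in each health state, weighted by that state's quality."""
    return sum(years * health_weight[state] for state, years in years_in_state.items())

# A life of 120 years spent entirely bed-bound scores worse than 80 years in good health:
bed_bound = healthy_life_years({"bed-bound": 120}, {"bed-bound": 0.3})
healthy = healthy_life_years({"good health": 80}, {"good health": 1.0})
print(bed_bound, healthy)  # 36.0 vs 80.0
```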

Fulfilment

Would you rather choose a future where everybody is expected to have 120 years of healthy life yet lives under the constant misery of a totalitarian regime, or one with 80 years of healthy life where everyone is free to live happy, fulfilled lives? This hypothetical shows the importance of quality of life, or fulfilment, in our objective calculation. Fulfilment has been our catch-all term for the concept of living happy, flourishing lives, and while it is difficult to define, Aristotle believed it was the only thing that is an end in itself - everything else we pursue for its sake.

It is also very difficult to measure. We know from our ethical toolkit that quantifying fulfilment is the realm of utilitarian ethics, and that most AI practitioners operate within this framework, due to it being "by far the most compatible with decision-theoretic analysis". We also know that this branch of moral philosophy has a lot of drawbacks, meaning we might be venturing into difficult waters if we hand the future of humanity over to a group with a limited grasp of pragmatic ethics.

This was the finding of a 2017 paper titled Why Teaching Ethics to AI Practitioners Is Important. The authors recommend "that AI practitioners should also be familiar with the other two major schools of ethical theory, deontology and virtue ethics. These two approaches are far less compatible with decision-theoretic analysis and other familiar analytic strategies, which can make them challenging to understand and to apply. We argue that it is worthwhile, even essential, for AI practitioners to confront this challenge, and apply these theories in order to achieve the clearest possible understanding of a given situation, and of their own reasoning and decision-making in response to it". This rationale applies not only to the decisions AI practitioners make themselves, but also to the decisions they will code into the AI they develop.

It's also very difficult to know what drives fulfilment. This is the focus of positive psychology and, in part, the purpose of our ideal future vision: to understand the drivers of fulfilled lives so we can ensure they are present in that future. We may not be very good at creating fulfilment, or at measuring it when we do obtain it, but we can at least say that maximising fulfilment is the main component of our objective as humankind.

For whom (and for what)?

So far we have assumed that we are only acting on behalf of humanity - that we are the only ones who deserve survival, health and fulfilment. Here I confirm that we are talking about all of humanity, and introduce other categories of possibly sentient beings that we may also need to factor into our calculation.

Humanity

First we must acknowledge that our desire for survival and fulfilment extends to all of humanity. One of the limitations of traditional utilitarian thinking is that it doesn't consider the distribution of fulfilment. We must. Our values of 'fairness' and 'justice' commit us to wanting all of humankind to flourish, with no ranking of who deserves it more. It should be evident that any contrarian view is on a slippery slope to eugenics and supremacist logic. If anything, thinking over the timeline of the entire future of humanity - beyond even the human-level AI technology frontier of our future vision - gives more credence to our value of equality.
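One way to see the difference, using hypothetical fulfilment scores: a plain utilitarian sum is indifferent between an equal and a highly unequal distribution with the same total, whereas a distribution-sensitive aggregation - for example a prioritarian one that gives extra weight to the worst-off - is not.

```python
import math

# Hypothetical fulfilment scores (0 to 1) for two five-person worlds
# with the same total fulfilment but very different distributions.
equal_world = [0.6, 0.6, 0.6, 0.6, 0.6]
unequal_world = [1.0, 1.0, 0.9, 0.05, 0.05]

def total_utilitarian(scores):
    """Plain sum: blind to how fulfilment is distributed."""
    return sum(scores)

def prioritarian(scores):
    """Concave (square-root) weighting: gains to the worst-off count for more."""
    return sum(math.sqrt(s) for s in scores)

print(total_utilitarian(equal_world), total_utilitarian(unequal_world))  # 3.0 vs 3.0
print(prioritarian(equal_world), prioritarian(unequal_world))            # ~3.87 vs ~3.40
```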

Future people

Does the fulfilment of future people matter? I certainly care that my currently-hypothetical grandkids live fulfilled lives, but how much do I care about my hypothetical great-great-great-grandkids? How much do I care about your great-great-great-grandkids? Our desire for survival implies we care about their existence, but how much do we value their fulfilment relative to our own?

This may seem purely hypothetical, but there are real tradeoffs to be made based on this belief. For example, should we use our scarce resources to save lives today, or invest them in preventing the next pandemic, which might not hit for many more generations? The 'Longtermism' movement, de facto led by Oxford philosophy professor Will MacAskill and brought to the fore in his recent book What We Owe the Future, argues that we should value future people just as much as present people. Here's the simple thought experiment he introduces:

"So imagine that you're walking on a hiking trail that's not very often used. And at one point, you open a glass bottle and drop it. It smashes on the trail, and you're not sure whether you are going to tidy up after yourself or not. Not many people use the trail, so you decide it's not worth the effort. You decide to leave the broken glass there and you start to walk on. But then you suddenly get a Skype call. You pick it up, and it turns out to be from someone 100 years in the future. And they're saying that they walked on this very path, accidentally put their hand on the glass you dropped, and cut themselves. And now they’re politely asking you if you can tidy up after yourself so that they won't have a bloody hand.
Now, you might be taken aback in various ways by getting this video call, but I think one thing that you wouldn't think is: “Oh, I'm not going to tidy up the glass. I don't care about this person on the phone. They're a future person who’s alive 100 years from now. Their interests are of no concern to me.” That's not what we would naturally think at all. If you've harmed someone, it really doesn't matter when it occurs."

If you accept this premise, it leads to some interesting and difficult questions. Is there a moral imperative to reproduce? If you believe the happiness of future people counts equally with our own, then the most ethical thing you can do might be to reproduce as much as possible. Is there a moral imperative to populate the universe? We will eventually run out of space and/or resources on Earth, which places a natural cap on the population. Researchers have estimated how many potential people will never be born if we remain limited to Earth - a loss that Longtermist thinking would see as a moral atrocity.

This field of thinking is called Population Ethics. I have thought about these questions a lot and still do not have a solid perspective. I will deep-dive into this topic in the future, likely with a Reflection On... Reasons and Persons, the seminal text of this field of ethics, and with interviews with experts who hold opposing perspectives.

Non-human animals

How much do you love your dog? Or cat/rat/hamster/snake/(insert pet here)? Any pet owner values the health and happiness of an animal - unlikely to the same extent as a human life, though I bet we can all picture a pet owner we know whose actions would challenge that assumption, but certainly not at zero. How far does this extend to all animal life?

In attempting to quantify what matters to us, we must also quantify the health and fulfilment of animals. It comes back to tradeoffs: how much animal suffering is acceptable for the happiness it might provide humans? This might seem abstract, but the Moral Machine experiment presents scenarios where a self-driving car must decide whether to hit humans or animals. For example, should a self-driving car swerve and kill five dogs in order to save one elderly human? Fifty dogs? Approximately 1% of the global population is vegan, which suggests we place a low value on animal welfare, but this is increasing, with 2019 being declared the Year of the Vegan. In fact, the field is popularising the term 'non-human animals' in part to highlight that we are in the same category of life as animals, in the hope of fostering more compassion.

The real task here is developing a deeper understanding of sentience. We want to be able to quantify the extent to which an animal can experience fulfilment and suffering. If a cow can experience 20% of the fulfilment and suffering of a human, a tortoise 2%, and a mealworm 0.002%, then we are on our way to weighing up the moral status of all living species (yes, there is a case that this extends to plant life too). The Moral Weight Project from Rethink Priorities is a fountain of knowledge on this topic, and I'll also be exploring it in more detail in the future, given its importance to envisioning an ideal future world.
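To make the arithmetic concrete, here is a sketch of a weighted welfare calculation using the illustrative percentages above. These are placeholder figures echoing the hypothetical in the text, not established estimates from the Moral Weight Project.

```python
# Illustrative only: the moral weights below echo the hypothetical figures in
# the text (cow 20%, tortoise 2%, mealworm 0.002% of a human) and are not
# established estimates.

moral_weight = {
    "human": 1.0,
    "cow": 0.20,
    "tortoise": 0.02,
    "mealworm": 0.00002,
}

def weighted_welfare(populations: dict[str, int],
                     welfare_per_individual: dict[str, float]) -> float:
    """Total welfare across species, scaled by each species' moral weight."""
    return sum(
        count * welfare_per_individual[species] * moral_weight[species]
        for species, count in populations.items()
    )

# E.g. how much do one billion cows count for alongside eight billion humans?
print(weighted_welfare({"human": 8_000_000_000, "cow": 1_000_000_000},
                       {"human": 0.7, "cow": 0.5}))
```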

Digital People

What if we develop the capability to upload copies of ourselves to a digital world? Imagine scientists can replicate the human brain digitally such that a copy of you has exactly the same sentience and sense of experience in its digital world as the 'real' you does in this one. This may seem like fantasy but - hold onto your hats - what is there to say that we aren't all 'digital people' living in some form of digital world already? Our experience of the world comes entirely through our senses and brain activity, which are ultimately physical processes, and because they are physical it is theoretically possible to replicate them digitally. I think this is a particularly difficult hypothetical to wrap your head around - if you want to dive down the rabbit hole I recommend the Most Important Century series of posts (you can search for 'digital people' if you want to skip to the relevant part, but I highly recommend reading the whole thing).

This obviously poses important questions for our objective. Do we value them equally with 'real' human life? The only basis I can imagine for deeming 'real' humans superior is the concept of the 'soul' - some non-physical element that makes 'real' humans special. Will we eventually prove that there is no such thing? (Author's note - I love this work in large part because huge philosophical and spiritual questions like 'what is the soul?' have direct implications for how we design our future.) Similar to reproduction, will there be a moral imperative to create as many digital people as we possibly can? This has similar implications to the reproduction case, namely that it necessitates expansion into the universe to maximise the resources available for creating digital people. And what makes a valid digital person? Imagine we can create a digital person with 80% of the sentience and experience of a human - do we value them at 80%, 100%, or 0%? This is strikingly similar to the non-human animals case, where a cow could be determined to have 20% of the sentience of a human, for example. It also leads us to consider what would be on the path to digital people...

Artificial Intelligence

What if the digital person is artificially created and not a replica of a 'real' person? If that distinction is deemed not to matter, then we need to refocus this discussion on 'digital intelligence' rather than 'digital people'. We might develop a 'digital intelligence' more akin to the level of a cow than of a person, making the animal case a better analogy. From here we must begin to explore the moral status of artificial intelligence as we create it, and a field of AI ethics defined not by what is ethical in AI development because of its impact on humans, but by how the practices of AI development impact the AI being developed - itself seen as having moral status.

Acceptance of this will necessitate a code of ethics for digital intelligences. If we develop digital people and decide they have moral status equivalent to 'real' humans, then it is fair to assume that our existing ethical rules and norms will apply to them too. 'Digital people' will have human rights. It would be deemed a crime against humanity - or at least an equivalent wrong - to create a world of digital people and subject them to starvation, potentially as a social experiment. Much has been written about the potential of AI for the social sciences, through the ability to run large-scale social experiments with independent digital actors. Not all of these would constitute crimes, but this development of AI ethics could place significant limits on the value of AI capability. It could be a way to slow AI capability development...

We also then have to consider the possibility that we develop an intelligence greater than humanity's. If it proves it has 200% of a 'real' human's capacity for sentience, suffering, and fulfilment...what then? Do we accept our place as subservient and understand that it is the morally just thing for us to sacrifice our happiness for the happiness of this superior intelligence? Do we deem it acceptable that humans may endure suffering if it leads to enough happiness for the AI to produce a net positive? If you intuitively disagree, then you likely need to examine your moral justification for eating factory-farmed animals, if you have one. In a more extreme scenario, do you believe it would be morally justified for the AI to sacrifice humans if that produces a net positive of happiness? If you intuitively disagree with this question...well, it might be time to go vegan.

We should aim beyond survival and health, to an objective of maximising equally-distributed human fulfilment. We need to further research the moral status of future people, non-human animals, digital people, and artificial intelligence itself. Once we have a better idea of what we want, we might be better placed to guide the AI that we develop. All we can say for sure is that we need an interdisciplinary approach to know what we are truly solving for.


Please share your thoughts if you have any feedback on this article, or leave a comment below.