
What Are We Solving For?

“Happiness then, is found to be something perfect and self sufficient, being the end to which our actions are directed.”
Aristotle, The Nicomachean Ethics

What goal are we seeking as humankind? How can we judge whether our vision for an ideal future world is, in fact, ideal? We need to determine our objective so we can judge whether our vision is fulfilling its purpose.

'What are we solving for?' is how mathematicians phrase the question of setting an objective. You may believe it is inappropriate to frame such a profound question in 'math-speak', but as we increasingly employ technology to help us arrive at our ideal future, phrasing questions like these in the language of mathematics and logic feels ever more appropriate. If we don't know what we are solving for as humanity, how can we expect to train an Artificial Intelligence that acts in our best interests?

This is such a huge question that I will make no attempt to provide answers here. Instead, I hope to structure the problem and set out the areas where we need to further our thinking. I'll propose that, beyond survival and health, our objective should be to maximise fulfilment for all people. We ought to research the drivers of fulfilment and how to measure it. I'll also consider whether we should include the fulfilment of other agents such as animals, people in the future, 'digital people', and artificial intelligence itself. That section gets a bit weird.

Survival

It seems a fair assumption to say that we want humanity to endure. If there is no human race, well... there's nobody to ask what our objective should be, for a start. We are programmed by evolution to want to procreate (I am discounting for obvious reasons the Voluntary Human Extinction Movement - the group that argues "that the best thing for Earth's biosphere is for humans to voluntarily cease reproducing"). The relevant question then becomes: how do we make sure that humanity survives for as long as possible?

Existential risk

The term existential risk was coined by the Oxford philosopher Nick Bostrom, and we can understand it as the chance of an existential catastrophe: "an event which causes the end of existence of our descendants". It has come to prominence over the last fifteen years. There is a Future of Humanity Institute at Oxford University and a Centre for the Study of Existential Risk at Cambridge University, as well as a number of private think tanks such as the Future of Life Institute. Many in this field believe that the loss of life and future potential that existential threats would bring about means it is beneficial to invest significant resources to reduce the probability of such events by even a fraction.

Health

The total number of human lives seems too crude a measure of what we are solving for. Would we really choose an infinite future where all humans are bed-bound for their entire lives due to a chronic sickness? Or a future where humanity goes extinct after 10 billion years but everyone is healthy and has a life expectancy of 120? That hopefully intuitive hypothetical shows how important a factor health should be in our quantification. There are parallels with how the healthcare industry moved to Healthy Life Years (HLYs), rather than simple life expectancy, to measure the benefits of an intervention.
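To make the distinction concrete, here is a minimal sketch of the idea behind HLYs (not an official methodology), assuming purely illustrative per-year health weights where 1.0 means full health and 0.0 means death:

```python
# A minimal sketch of the HLY idea, not an official methodology.
# Each year of life gets an assumed health weight between 0 (dead) and 1 (full health).

def healthy_life_years(health_weights):
    """Sum of per-year health weights."""
    return sum(health_weights)

# Hypothetical person: 60 years in full health, then 20 years at 50% health.
weights = [1.0] * 60 + [0.5] * 20

print("Years lived:       ", len(weights))                  # 80
print("Healthy life years:", healthy_life_years(weights))   # 70.0
```

Two futures with the same number of years lived can therefore score very differently once health is weighted in.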

Fulfilment

Why does it even matter to be healthy? What do Healthy Life Years hopefully give us that years with disability - their opposite - seemingly do not?

Would you rather choose a future where everybody is expected to have 120 years of healthy life yet lives under the constant misery of a totalitarian regime, or one with 80 years of healthy life where everyone is free to live happy, fulfilled lives? This hypothetical shows the importance of quality of life, or fulfilment, in our objective calculation. Fulfilment can serve as a catch-all term for the concept of living happy, flourishing lives and, while it is difficult to define, Aristotle believed it was the only thing that is an end in itself; everything else is a means of achieving fulfilment.

As it is so hard to define, it is also difficult to know what drives fulfilment. This is the focus of the field of positive psychology, and it is in part the purpose of our ideal future vision: to understand the drivers of fulfilled lives so we can ensure they are present in that future. I'll therefore focus a lot on positive psychology in future posts.

Fulfilment is also very difficult to measure. Quantifying fulfilment is the realm of the branch of ethics known as utilitarianism, "a moral principle that holds that the morally right course of action in any situation is the one that produces the greatest balance of benefits over harms for everyone affected". I'll also talk much more about utilitarianism in future posts for this reason.
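As a minimal sketch of what that calculus could look like, under the big and contested assumption that benefits and harms can be scored on a single numeric scale (all names and numbers here are invented):

```python
# A minimal sketch of the utilitarian calculus quoted above, assuming
# (contestably) that benefits and harms can be scored on one numeric scale.

def net_fulfilment(outcome):
    """Total benefits minus harms across everyone affected by an action."""
    return sum(person["benefit"] - person["harm"] for person in outcome)

# Two hypothetical courses of action and their effects on three people.
option_a = [{"benefit": 5, "harm": 1}, {"benefit": 3, "harm": 0}, {"benefit": 2, "harm": 4}]
option_b = [{"benefit": 4, "harm": 0}, {"benefit": 4, "harm": 1}, {"benefit": 1, "harm": 1}]

options = {"A": option_a, "B": option_b}
best = max(options, key=lambda name: net_fulfilment(options[name]))
print("Utilitarian choice:", best)  # "B": the greatest balance of benefits over harms
```

The hard part, of course, is everything hidden inside those scores - which is exactly why measurement matters so much here.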

What Future World? is, in a way, an effort to consolidate and further the work of defining, measuring, and increasing human fulfilment.

For whom (and for what)?

So far we have assumed that we are only acting on behalf of humanity - that we are the only ones who deserve survival, health and fulfilment. Here I confirm that we are talking about all of humanity, and introduce other categories of possibly sentient beings that some would argue we should also factor into our calculation.

Humanity

First we must acknowledge that our desire for health and fulfilment extends to all of humanity. One of the limitations of traditional utilitarian thinking is that it doesn't consider the distribution of fulfilment. We must. Our values of 'fairness' and 'justice' commit us to wanting all of humankind to flourish; nobody deserves it more than anybody else. It should be evident that any opposing view is on a slippery slope to supremacist thinking. But if we agree to value people equally across space (i.e., around the world), should we also consider people distributed through time?
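Before turning to people distributed through time, here is a minimal sketch of why distribution matters, using an illustrative prioritarian rule (a square-root weighting chosen arbitrarily) that gives extra weight to the worst-off:

```python
# A minimal sketch of why distribution matters: two worlds with the same total
# fulfilment, compared by a plain total and by an illustrative prioritarian
# rule. All scores and the square-root weighting are arbitrary choices.
import math

equal_world   = [5, 5, 5, 5]   # fulfilment scores, hypothetical units
unequal_world = [10, 9, 1, 0]  # same total, very different distribution

def total(scores):
    return sum(scores)

def prioritarian(scores):
    # Concave weighting means gains to the worst-off count for more.
    return sum(math.sqrt(s) for s in scores)

for name, world in [("equal", equal_world), ("unequal", unequal_world)]:
    print(f"{name}: total={total(world)}, prioritarian={prioritarian(world):.2f}")
# Both totals are 20, but the prioritarian score favours the equal world.
```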

Future people

Does the fulfilment of future people matter? I certainly care that my currently-hypothetical grandkids live fulfilled lives, but how much do I care about my hypothetical great-great-great-grandkids? How much do I care about your great-great-great-grandkids? Our desire for survival implies that we care about their existence, but how much do we value their fulfilment relative to our own?

This may seem like a hypothetical, but there are real tradeoffs to be made based on this belief. For example, should we use our scarce resources to save lives today or invest them in preventing the next pandemic, which might not hit for many more generations? The 'Longtermism' movement, led by Oxford philosophy professor Will MacAskill and brought to the fore in his book What We Owe the Future, argues that we should value future people just as much as present people. Here's the simple thought experiment he introduces:

"So imagine that you're walking on a hiking trail that's not very often used. And at one point, you open a glass bottle and drop it. It smashes on the trail, and you're not sure whether you are going to tidy up after yourself or not. Not many people use the trail, so you decide it's not worth the effort. You decide to leave the broken glass there and you start to walk on. But then you suddenly get a Skype call. You pick it up, and it turns out to be from someone 100 years in the future. And they're saying that they walked on this very path, accidentally put their hand on the glass you dropped, and cut themselves. And now they’re politely asking you if you can tidy up after yourself so that they won't have a bloody hand.
Now, you might be taken aback in various ways by getting this video call, but I think one thing that you wouldn't think is: “Oh, I'm not going to tidy up the glass. I don't care about this person on the phone. They're a future person who’s alive 100 years from now. Their interests are of no concern to me.” That's not what we would naturally think at all. If you've harmed someone, it really doesn't matter when it occurs."

If you accept this premise, it leads to some interesting and difficult questions: Is there a moral imperative to reproduce? If you believe the happiness of future people is equal to our own, the most ethical thing you can do is reproduce as much as possible. Is there a moral imperative to populate the universe? Research has shown how many potential people could exist but will never be born if humanity stays confined to Earth, a loss that Longtermist thinking would see as a moral atrocity.

This field of thinking is called Population Ethics. I have thought about these questions a lot and still do not have a solid perspective. I will deep-dive into this topic in the future once I have finally digested Reasons and Persons, the 560-page seminal text of this field.

Animals

How much do you love your dog/cat/rat/hamster/snake/(insert pet here)? Any pet owner values the health and happiness of an animal, though likely not to the same extent as human life (I bet we can all picture a pet-owner we know whose actions would challenge that assumption). How far does this concern extend to animal life in general?

In attempting to quantify what matters to us, we must also quantify the health and fulfilment of animals. It comes back to tradeoffs: how much animal suffering is acceptable for the happiness it might provide humans? This might seem abstract, but self-driving car technology is making real the thought experiments that ask us to decide whether to hit humans or animals. Should a self-driving car swerve and kill five dogs in order to save one elderly human? Fifty dogs?

Only ~1% of the global population is vegan, which suggests we place a low value on animal welfare, though the figure is increasing. In fact, the field is popularising the term 'non-human animals' in part to highlight that we are in the same category of life as other animals, in the hope of fostering more compassion.

The real task here is developing a deeper understanding of sentience. We want to be able to quantify the extent to which an animal can experience fulfilment and suffering. If a cow can experience 20% of the fulfilment and suffering of a human, a tortoise 2%, and a mealworm 0.002%, then we are on our way to weighing up the moral status of all living species (yes, there is a case that this extends to plant life too). The Moral Weight Project from Rethink Priorities is a fountain of knowledge on this topic and I'll be exploring it in more detail in the future, given its importance for envisioning an ideal future world.
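To show what that weighing-up could look like in practice, here is a minimal sketch using the purely illustrative percentages above (these are not Rethink Priorities' actual estimates, and the scenario is invented):

```python
# A minimal sketch of moral-weight arithmetic using the illustrative
# percentages from the text; these are not real estimates.
MORAL_WEIGHTS = {"human": 1.0, "cow": 0.2, "tortoise": 0.02, "mealworm": 0.00002}

def weighted_welfare(changes):
    """Sum each species' welfare change, scaled by its assumed moral weight."""
    return sum(MORAL_WEIGHTS[species] * delta for species, delta in changes)

# Hypothetical tradeoff: a policy that gives 100 cows +1 welfare each,
# at a cost of -1 welfare to each of 10 humans.
print(weighted_welfare([("cow", 100 * +1), ("human", 10 * -1)]))  # 0.2*100 - 1.0*10 = +10.0
```

On this framing, the self-driving car question above becomes a comparison of weighted welfare, and the answer depends entirely on the weight we give a dog.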

Digital People

What if we develop the capability to upload copies of ourselves to a digital world? Imagine scientists can replicate the human brain digitally such that a copy of you has exactly the same sentience and sense of experience in their digital world as the 'real' you does in this one. This may seem like a fantasy idea but - hold onto your hats - what is there to say that we aren't all 'digital people' living in some form of digital world already? Our experience of the world is understood purely through our senses and brain activity - ultimately all physical processes - and because it is all physical, it is theoretically possible to replicate it digitally. I think this is a particularly difficult hypothetical to wrap your head around - if you want to go down the rabbit hole I recommend the Most Important Century series of posts (you can search for 'digital people' if you want to skip to the relevant part, but I highly recommend reading the whole thing).

This obviously poses important questions for our objective. Do we value digital people equally with 'real' human life? The only basis I can imagine for deeming 'real' humans superior is the concept of the 'soul', some non-physical element that makes 'real' humans special. Will we eventually prove that there is no such thing?

Here we see how huge philosophical and spiritual questions like 'what is the soul?' have direct implications for how we design our future. As with reproduction, will there be a moral imperative to create as many digital people as we possibly can? That would likewise push us towards expansion into the universe, to maximise the resources available for creating digital people. And what makes a valid digital person? Imagine we can create a digital person with 80% of the sentience and experience of a human: do we value them at 80%, 100%, or 0%? This is strikingly similar to the non-human animals case, where a cow could be determined to have 20% of the sentience of a human, for example. It also leads us to consider what would be on the path to digital people...

Artificial Intelligence

What if the digital person is artificially created and not a replica of a 'real' person? If the two are deemed equal then we need to refocus this discussion on 'digital intelligence' rather than 'digital people'. If we develop 'digital intelligence' more akin to the level of a cow than a person, then the animal case is a better analogy. We must begin to explore the moral status of artificial intelligence as we create it. A new strand of 'AI ethics' will be concerned not with what is ethical in AI development because of its impact on humans, but with how the practices of AI development affect the AI being developed.

Accepting this will necessitate a code of ethics for digital intelligences. If we develop digital people and decide they have moral status equivalent to 'real' humans, then it is fair to assume that our existing ethical rules and norms will apply to them too. 'Digital people' will have human rights. It would be deemed a crime against humanity - or at least an equivalent wrong - to create a world of digital people and, for example, subject them to starvation.

We also then have to consider the possibility that we develop an intelligence greater than humanity. If it proves it has 200% of a 'real' human's capacity for sentience, suffering, and fulfilment... what then? Do we accept our place as subservient and understand it is the morally just thing for us to sacrifice our happiness for the happiness of this superior intelligence? Do we deem it acceptable for humans to endure suffering if it yields enough happiness for the AI to produce a net positive? If you intuitively disagree, then you likely need to examine your moral justification for eating factory-farmed animals, if you have one. In a more extreme scenario: do you believe it would be morally justified for the AI to sacrifice humans if that produces a net positive in happiness? If you intuitively disagree with this question... well, it might be time to go vegan.
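The same illustrative moral-weight arithmetic as in the animals section makes the discomfort explicit (the 2.0 weight and every number here are invented for the sake of the example):

```python
# A minimal sketch, reusing the moral-weight idea: a hypothetical AI assumed
# to have 2.0x the human capacity for fulfilment and suffering. Invented numbers.
AI_WEIGHT, HUMAN_WEIGHT = 2.0, 1.0

def net_outcome(ai_fulfilment_gain, human_suffering):
    return AI_WEIGHT * ai_fulfilment_gain - HUMAN_WEIGHT * human_suffering

# The AI gains 10 units of fulfilment at the cost of 15 units of human suffering.
# A naive weighted sum still calls this "net positive" (+5.0) - exactly the
# conclusion many of us intuitively want to reject.
print(net_outcome(10, 15))
```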

We should aim beyond survival and health: to maximise equally distributed human fulfilment. We need to dive deep into positive psychology and utilitarianism to define and quantify our goal of fulfilment. We need to research further the moral status of future people, animals, digital people, and artificial intelligence to know whether to include them too. All we can say for sure is that we need an interdisciplinary approach to know what we are truly solving for.


Please share your thoughts if you have any feedback on this article, or leave a comment below.