WFW?: Opportunity and Theory of Impact

[Disclaimer - 31/10/22. While my vision for WFW? includes disciplines beyond AI Governance, there is such overlap in objectives that here I ground my focus on the AI Governance community. I have spent three months near full-time researching the topic and meeting leading thinkers in the space, after five years as a thought leader focused more narrowly on the impact of AI on the Future of Work. I am far from having consumed all the material I would like to inform my perspective, but I have reached sufficient conviction - that the problem is large and that my personal fit for it is very high - to go live with the project, knowing it will iterate and evolve as I, and the community, develop our thinking on AI Governance.]
“The visions we offer our children shape the future. It matters what those visions are. Often they become self-fulfilling prophecies. Dreams are maps.” - Carl Sagan
Summary
The 'AI Governance community' is too small and homogenous relative to the importance of the problem, by many orders of magnitude. Growing and diversifying the community will increase our chances of tackling the AI Governance problem by increasing population preparedness, raising the political clout of the problem, and attracting more talent and resourcing to the field. I propose a new approach to achieve this, centered on a positive orientation and message across all forms and media. What Future World? is a blog/media community that will grow the AI Governance field by developing a detailed vision of an ideal future world and the strategy for achieving it, inspiring and motivating a generation to work on the most important challenges. If you believe in this mission of growing the community, please email me to discuss opportunities to get involved.
The need to grow
Size and urgency of the problem
The AI Governance community is currently too small and too homogenous to be likely to tackle the crucially important problem of AI Governance - defined by the Centre for the Governance of AI as "the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of Advanced AI". To be clear, this means avoiding existential threat or a dystopian future (often termed negative value lock-in) through governance of the development and deployment of Advanced AI. Experts believe there is a significant chance of existential threat: for example, the Samotsvety forecasting group predicts a 25% chance of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe. In What We Owe The Future, Will MacAskill outlines the compelling argument behind the risk of value lock-in with Advanced AI development.
Size of the community
The existing community is insufficient, and the pipeline for growing it is not strong, so we must act now. Currently, the community is full of brilliant minds - maybe the most brilliant that will ever be focused on the problem - but tiny in relation to the size and urgency of the problem. I'm taking it as given that we ought to grow the community; if you believe there is no relationship between the number of minds focused on a problem and the likelihood of mission success, please let me know.
Diversity of the community
The community also lacks diversity. In time I will conduct a survey to quantify this claim, but I'm confident I have provisionally validated it through my many interviews and by scanning the 'team' pages of the small number of AI Governance organizations. For those skeptical of the value of diversity as a driver for achieving our mission, this article does a good job of summarizing the body of academic research showing that:
- Companies that prioritized innovation saw greater financial gains when women were part of the top leadership ranks.
- For innovation-focused banks, increases in racial diversity were clearly related to enhanced financial performance.
- Academic papers written by ethnically diverse groups receive more citations and have higher impact factors than papers written by people from the same ethnic group.
- Stronger papers were associated with a greater number of author addresses; geographical diversity, along with a larger number of references, reflects greater intellectual diversity.
- Students who were trained to negotiate diversity from the beginning showed much more sophisticated moral reasoning by the time they graduated.
Whether you perceive the work of this community as more similar to that of an academic institution or a corporate organization, the case for greater diversity is clear. In this article, I use 'growth' and related terms to mean growth in both the size and the diversity of the community.
Lack of public awareness
Multiple surveys have found a huge gap between the general public's perception of AI and the realities of its development. Zhang & Dafoe found in a 2019 survey that "the American public – like the public elsewhere – lack awareness of AI or machine learning" and conclude that "the gap between experts and the public’s assessment suggests that this is a fruitful area for efforts to educate the public". Not only is the AI Governance community small, but the lack of awareness in the general population will also slow growth in both general understanding and active community participation, compelling us to act now.
Benefits of growth
Growing the community will increase our chances of tackling the AI Governance problem along multiple vectors:
More understanding of the issues throughout the population
The general public's ignorance of both the nature of AI and its potential to cause societal failure is a problem in its own right. People need an awareness of how AI will impact their lives, and as AI capability development continues to accelerate, we risk leaving the vast majority of the population behind. In scenarios requiring mass mobilization - for example, an urgent need to secure public buy-in for a governmental governance proposal - this lack of awareness will undermine the strategy and increase the likelihood of societal failure from AI. We only have to look as far as pandemic relief to know this to be true. I don't believe we can rule out the possibility that successful AI Governance will require significant governmental involvement - and an aware, engaged population makes the introduction and passing of legislation and regulation smoother.
More minds focused on the problem
Ultimately, the most pressing issues in AI Governance are deep technical and strategic questions that require focused research, and we need answers to them. In raising the profile of the problem and broadening the community as a whole, we will attract more of the best minds to dedicate a significant amount of their energy and careers to these vital questions. As shown in the research above, greater diversity in the group charged with tackling the problems will lead to greater diversity in both the questions asked and the solutions proposed - increasing the chances of finding the right answer to the most critical questions. A larger community will also generate more ideas, even among those who aren't willing or able to pursue them. Imagine a mechanism for anybody to submit questions or solutions: the vast majority will be duplicative, redundant, or wrong, but if n becomes large enough, it is plausible, even likely, that a paradigm-shifting idea could surface from the community writ large.
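As a minimal back-of-envelope sketch of this 'large n' intuition (all numbers purely illustrative, and assuming submissions are independent, which real communities are not), the chance that at least one of n submissions contains a breakthrough is 1 - (1 - p)^n:

```python
# Back-of-envelope sketch of the 'large n' intuition. All numbers are
# purely illustrative, and submissions are assumed to be independent.

def p_at_least_one_breakthrough(n: int, p: float) -> float:
    """P(at least one success in n independent trials) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# Suppose each submission has a one-in-100,000 chance of being paradigm-shifting:
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9,}: {p_at_least_one_breakthrough(n, 1e-5):.1%}")
# n =       100: 0.1%
# n =    10,000: 9.5%
# n = 1,000,000: 100.0%
```

Duplication and correlation between contributors will blunt this effect in practice, but the qualitative point stands: at small n the chance of a breakthrough surfacing is negligible; at large n it becomes plausible.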
More investment and resourcing for the space
Tackling the AI Governance problem will take significant resourcing. The greater the awareness of and participation in these problems, the greater the base of support for governments to direct public funds to the field, and the more wealthy individuals and foundations will be attracted to fund its continued development. We will see the early seeding by a number of organizations grow into a scaled ecosystem of resourcing.
Mitigating existing AI Governance community risks
I believe that a number of risks to the AI Governance community will grow if it does not make an effort to build broad public awareness and support. I was inspired by this article on the ways in which EA could fail, as I believe a number of the risks outlined are true of the AI Governance community, a much smaller group than EA yet largely overlapping. Most obviously, building a larger, more diverse community will mitigate the very real reputational risks stemming from an elitist reputation - the notion of a group of white billionaires deciding the fate of humanity. If general awareness builds around the importance of AI Governance for the future of society without the diversity of the existing community and its financiers significantly improving, I predict a significant backlash that will impede progress of the work. This has major implications for talent too. Without inclusivity, there is a risk of internal disenchantment as people don't feel represented or included by the existing community. Talent is arguably the most important input for the success of AI Governance; the existing community has proven its capability to find and attract top talent, and WFW? will be a talent engine for the next tier.
Similarly, my vision for the future of WFW? will address the risk of inadequate infrastructure as the community grows. I hope the platform itself will provide the organizational infrastructure to pull the entire AI Governance community together to work effectively and efficiently in the same direction, avoiding the duplication of work that so often creeps in when moving from start-up to scale-up. I have a view on how to avoid the risk of excessive deference as the community grows: it will be built into the platform infrastructure and reinforced by a community norm of engaging with original thought, rather than relying on stated values alone. The diversity of the community will mitigate the risk of it becoming an echo chamber with insufficient intellectual diversity, though this also requires more thoughtful design of platforms and norms than I believe currently exists in online forums. Finally, the platform's core commitment to feedback will mitigate the risk of poor community feedback loops and of leadership becoming out of touch.
It’s important to acknowledge that the WFW? approach will bring with it new risks. Those risks, and the strategies to mitigate them, are discussed below.
How to grow
Invoke hope, not fear
My main hypothesis is that we can inspire people to join the AI Governance community with a positive orientation and message. The AI Governance field was born with the goal of avoiding societal failure from existential threat or negative value lock-in, and this is still the most valuable goal. Given the current state of society, I hypothesize that we have a lot further to travel in a negative direction than in a positive one. That goal was enough to motivate the brilliant thinkers who founded the AI Governance community, but fear alone will not mobilize a mass audience: more people than ever cite 'concerns about the future' as a reason for not having children, while new research demonstrates the power of positive-future envisioning.
The Smithsonian Institution and the Institute for the Future have released a new analysis based on the museum's "FUTURES" exhibit, which suggests that if people can better envision a detailed possible future, they're capable of taking actions to make it a reality. "We know that most young people feel anxious about their future and the fate of humanity," says Jane McGonigal, IFTF's director of Urgent Optimists. "We also know that politics, arts, science, and the public sector have all failed to provide believable images of positive futures." This aligns with Rutger Bregman's claim that progressive thinkers have lost their ability to engage and uplift. As he says in Utopia for Realists, "[their] biggest problem isn’t that they’re wrong. Their biggest problem is that they are dull. Dull as a doorknob. They’ve got no story to tell, nor even any language to convey it in." Rather than say 'this is a terrifying problem that could kill us all, we must work to fix it', we instead need to say 'we can decide to use this technology for positive ends - what do we want to do with it?'. The "FUTURES" research shows how futures prompts and storytelling can help people imagine the future more clearly and feel more ready to take action.
Introducing: What Future World?
What Future World? will grow the AI Governance field by developing a detailed vision of an ideal future world and the strategy for achieving it, inspiring and motivating a generation to work on the most important challenges. At first it will be a blog-with-a-purpose. I'll post twice a week to further our knowledge towards these aims and raise the profile of the space. In time, I'll enable community posting to accelerate our efforts towards this mission. For more on the form of WFW? and my vision for its evolution, see my separate post on What is 'What Future World?'?.
We'll draw on philosophy (primarily moral philosophy and what it means to live well) and fiction (primarily science and speculative fiction) to imagine desirable destinations, knowing they are not the end point but rather the best heading our current knowledge can set. As we refine and align on these utopian destinations we correct our course, and as technological developments continue to accelerate on this journey, we gain confidence that we will narrow in on a thriving society.
Of course, this metaphor falls flat without the ability to steer our ship. What good is it to know the heading we want to travel on if we are currently heading away from it and possess little to no means of turning the ship around? That's why this community will also be pragmatic. It will draw inspiration and energy from the desired destination to develop a strategy for how we will get there. Here the disciplines of AI Governance, political economy, and change management take precedence. This is not an academic exercise. Without real change, this utopian dreaming could even do more harm than good - seen as academic and pointless, or ivory-towered and exclusive.
The new approach
Reaching a mass audience
Raising the profile of AI Governance will require a new approach. To reach a mass audience we must leverage as many points of contact as possible. We must continue bringing AI Governance research into the mainstream, as shown by NYT best sellers Superintelligence, Rise of the Robots, and Life 3.0; it is to the first two that I attribute my own dedication to the field, having read them in 2018. We must leverage the full range of popular media to meet people where they are. Leading examples here are the Kurzgesagt video The Last Human – A Glimpse Into The Far Future, with 7.4m views on a channel with 20m subscribers; Tim Urban's two-part piece The AI Revolution: The Road to Superintelligence (Part I, Part II) on the Wait But Why blog, which receives over 1.5 million unique monthly visitors; and Karoly Zsolnai-Feher's YouTube channel 'Two Minute Papers', with 1.34m subscribers. As these pieces demonstrate so well, we must translate the often complex problems in AI Governance into a form that people can understand. For example, Louis Rosenberg's article in Big Think, Mind of its own: Will “general AI” be like an alien invasion?, uses an effective alien-invasion analogy to make the problem of AI Governance more real to the general public. Finally, we must unleash the power of fiction, particularly science and speculative fiction, to introduce such metaphors and analogies in a popular and accessible container. Klara and the Sun by Kazuo Ishiguro and Children of Time by Adrian Tchaikovsky both bring extremely thought-provoking questions of AI Governance to a mainstream audience.
Inclusivity
In all the media we leverage, we must use an inclusive tone and language to empower anyone to positively contribute to this cause in some way. In particular, we must appreciate that people hold a broad range of philosophical beliefs relevant to the issues we will discuss. Differences - typically in moral philosophy - can cause disagreement about the right approach, yet they often go unexamined in typical debate. We need to foster idea-sharing in which making one's underlying beliefs explicit is the norm. For example, many will disagree on the moral status of Advanced AI, which has significant implications for the 'ideal' future world we are working towards. While there is a role within this community to educate everyone on moral philosophy, all perspectives that are open to debate should be accepted and accommodated in favor of the goal of expanding the diversity and size of the community. This approach certainly brings risks and inefficiencies - which I discuss below - but I believe the expected benefits significantly outweigh the expected costs.
Starting a (not so) Long Reflection
Those familiar with Toby Ord's The Precipice can understand this somewhat as a manifestation of the Long Reflection - "A sustained period in which people could collectively decide upon goals and hopes for the future, ideally representing the most fair available compromise between different perspectives," as Holden Karnofsky describes it. I say 'somewhat' because I agree with the definition as a set of activities but not necessarily as an historical epoch: I don't currently believe the Long Reflection can realistically be the "sustained period" of "perhaps...a million years" that might ideally be given to the exercise. However, it struck me when reading the book that, when interpreted solely as the activity of reflection, there is nothing holding us back from getting started on the work today. It need not be a future exercise. In fact, it is imperative that we begin today, given how much work there is to do and the risk that such a stable period proves impossible.
Risk mitigation
Awareness accelerating AI Capability development
Awareness building in Artificial Intelligence is a risky activity, but I believe the thoughtful approach I will take with WFW? will ensure the expected benefits far outweigh the expected risks and costs. There is an argument that bad actors, currently unaware of the potential power of AI, could be made aware through this project and devote significant resources to AI development. While I will certainly mention the unparalleled power that could accrue to whoever first develops Advanced AI - always to caution against this path and espouse the need for AI Governance - I will not focus on the technical side of AI capability or safety development. It is a fundamental principle of WFW? that we do not want to accelerate AI capability development.
My hypotheses here are that (1) there is already sufficient awareness of the power returns of developing AI that the WFW? project won't have any marginal impact in that vein, and (2) writing about AI Governance in a thoughtful, positive way will not inform or further motivate bad actors already aware of those power returns. These are sadly very difficult hypotheses to validate, and I therefore appreciate that some may disagree - please engage if this is you. If there is sufficient tension in the community, I believe that would justify a greater research effort. In the meantime, take solace in the fact that it will take a very long time for WFW? to meaningfully shift general public awareness of AI.
Dilution of focus and norms
Opening the community to greater participation will bring with it a risk of diluting the quality of discussion and collaboration, making it more difficult to prioritize efforts and work together efficiently. It will require greater organization to ensure the ideas with the greatest potential positive impact surface to the top and are easily discoverable. This is arguably the most important role for WFW? in the community as it scales, and one I am fully prepared to step up to.
Public malaise
There is a chance that broad-reach AI Governance content could put the population into a state of malaise rather than activism. If the dominant reaction to a piece is a belief that this is another insurmountable problem that we have no chance of overcoming, it could cause the majority of the readership to shut down and close themselves off from future discussion. We need people energized, and adding another urgent, potentially society-ruining, and seemingly insurmountable problem to the national conversation could do more harm than good if framed in such a way. It is therefore a vital principle of WFW? that content strikes a positive tone. To be sure, many topics will necessitate covering the importance and urgency of the issues at hand, and may even require describing the severity of getting these things wrong, but I will ensure that such articles also provide reasons for hope and direct actions people can take to improve our odds of success and feel empowered.
There is a related risk of public malaise if we build public awareness of the problems with no evidence of action. Saying a lot without proof of change or implementation could even do more harm than good - and give the community a reputation as academic and ultimately pointless. Hence the focus not just on our desired future vision but also the strategy for achieving it and the research agenda to fill the gaps in necessary knowledge. I will put more thought into how we will highlight the successes, actions, and contributions of the community toward achieving our vision.
Wasted resources
In my opinion, the biggest variable impacting the importance of AI Governance as a field is the urgency with which we need answers to these questions, which itself is driven by the technical capability development of AI. I believe there is currently a 50% chance of AGI development by 2040. If there were a robust and universally agreed-upon technological finding that pushed expected AGI timelines back significantly, then I would update the priority I give to AI Governance work. I haven't mathematically modeled exactly how much the expected AGI development timeline would need to lengthen for me to update, and my belief that AI Governance work will significantly improve society in the interim would push that threshold higher still. It is safe to say, however, that there is a value at which I would update, and given the technological uncertainties of AI development, I can't rule out this possibility.
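Purely to illustrate the kind of threshold calculation at stake - I have not built such a model, and every number and functional form below is a placeholder assumption - a crude sketch might weigh the probability of AGI arriving within a planning horizon against the value governance work delivers in the interim:

```python
# Crude illustration of the update threshold described above - NOT a worked
# model. Every number and functional form here is a placeholder assumption.

def governance_priority(p_agi_by_horizon: float, interim_value: float) -> float:
    """Toy score: probability-weighted value of being governance-ready when
    AGI arrives, plus the value governance work delivers in the meantime."""
    value_if_agi = 1.0  # normalize the value of preparedness to 1
    return p_agi_by_horizon * value_if_agi + interim_value

# My current ~50%-by-2040 belief, versus a hypothetical delayed timeline:
print(governance_priority(p_agi_by_horizon=0.5, interim_value=0.2))  # 0.7
print(governance_priority(p_agi_by_horizon=0.1, interim_value=0.2))  # 0.3
```

The interim term is what keeps the score from collapsing as timelines lengthen, which is why my threshold for deprioritizing the work sits higher than the AGI probability alone would suggest.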
There is a niche belief that AI Governance research is completely pointless: that the transformation at the point of Advanced AI development will be so stark it will render all attempts at AI Governance redundant, meaning technical AI Safety work is the only valuable endeavor. I'll elaborate on this in a future post, but I believe that perspective to be not only incorrect but logically incoherent. It implies that we shouldn't 'do' any AI Governance, but doing nothing is a decision in itself. A laissez-faire government policy is exactly that - a policy - even if no act or legislation is codified. It is nonsensical to say AI Governance doesn't make sense; rather, you would say we should not intervene and let it play out. To that argument, I say I'd be much more comfortable doing the research and proving that non-intervention is the right strategy.
Counterproductive memes
There is a small risk that, by creating metaphors and analogies to help the general public understand the scale and urgency of the problem, the general consciousness and/or conversation could latch on to a counterproductive "meme" about AI risk. It's impossible to hypothesize what this might be - as we could then simply avoid using such a story - but the theory is that the public might latch onto something and use it to trivialize the problem or otherwise justify inaction. If this were to grow into a movement of its own, it would make it significantly harder to push for positive actions requiring large-scale buy-in, particularly in a potential crunch time.
I believe the risk here is low, and the sensible approach is to share multiple stories in an iterative manner, not pushing any one analogy unless and until it is clear that there is minimal misinterpretation. The flip side of this risk is the opportunity that a particular story or analogy resonates strongly with the public consciousness - meaning it will prove important, and difficult, to weigh the opportunity against the risk if such a meme emerges.
To action
We've seen that the 'AI Governance community' needs to grow and diversify to have the greatest chance of tackling the AI Governance problem. Raising the profile of AI Governance will require a new approach - one with a positive orientation and message across all forms and media. What Future World? will grow the AI Governance field by developing a detailed vision of an ideal future world and the strategy for achieving it, inspiring and motivating a generation to work on the most important challenges. To learn more, read the other introductory posts: What is 'What Future World?'? and WFW?: Principles. If you believe in this mission of growing the community, please email me to discuss opportunities to get involved.
Please share your thoughts if you have any feedback on our theory of impact.