‘US vs China’ vs me
More lazy, dangerous rhetoric on the state and nature of AI competition
I had some great feedback on my piece Careless talk on US-China AI competition?, which generated a bit of discussion (and perhaps a little controversy).
Ironically, for a piece on speaking clearly and with nuance, I failed to explicitly point out crucial facts: the actual accounts behind one of the examples of language misuse I cited (these were obvious in my mind while writing, but some readers appeared confused in ways which make sense if they didn't know them). I criticised
China has made several efforts to preserve their chip access, including smuggling, buying chips that are just under the legal limit of performance, and investing in their domestic chip industry.1
but didn't explicitly point out that, beyond being an oversimplification, there just isn't a ready way to map this onto reality, which is that
the smuggling in question was done by... smugglers
the buying of chips was done by multiple China-based entities
the (implicit but unmentioned) selling (and importantly, provisioning/enabling) of chips was done by NVIDIA, a US-based company (and perhaps others)
the investing was done by the CCP
I had a great response from CAIS in particular. The original author agreed this was ambiguous and unfortunate, and they've updated the text in question substantively. They also responded
More generally, we try to avoid zero-sum competitive mindsets on AI development. They can encourage racing towards more powerful AI systems, justify cutting corners on safety, and hinder efforts for international cooperation on AI governance. It’s important to discuss national AI policies which are often explicitly motivated by goals of competition without legitimizing or justifying zero-sum competitive mindsets which can undermine efforts to cooperate. While we will comment on how the US and China are competing in AI, we avoid recommending "race with China."
This was really welcome and I hope other readers took on board the lesson here.
A few other readers pushed back a little. Stephen Clare expressed general agreement and offered a rearticulation of the problem I'm pointing to, while also criticising my relegation of governments to 'not currently meaningful players in AI development and deployment' as too strong. Quite right: I meant that governments have (to date) been entirely passengers regarding the direction and nature of advanced AI development, but it is true that they have begun to get involved in coarse economy-level lever-pulls like investing in and regulating hardware.
I went on a minor rant in the comments:
Do people actually think that Google+OpenAI+Anthropic (for sake of argument) are the US? Do they think the US government/military can/will appropriate those staff/artefacts/resources at some point? Are they referring to integration of contemporary ML/DS into the economy? The military? Or impacts on other indicators2? What do people mean by "China" here: CCP, Alibaba, Tencent, ...? If people mean these things, they should say those things, or otherwise say what they do mean. Otherwise I think people motte-and-bailey themselves (and others) into some really strange understandings.
Amazingly, one reader admitted that,
Yes.
In the end, all the answers to your questions are yes.
and made some further assertions about the inevitability of international conflict. We had a minor back-and-forth, but this was pretty remarkable to me, and I think there was some talking past each other. Thank you for sharing honestly.
Sadly, Scott Alexander, an author I hugely admire, has evidently not read my admonishment to CAIS, as his latest letter is full of thoughtless remarks about China and the US/West. Scott, you should know better. Words have power. That said, I think it is a good and useful post in many ways, in particular laying out a partial taxonomy of differing pause proposals and gesturing at their grounding and assumptions. He writes,
The biggest disadvantage of pausing for a long time is that it gives bad actors (eg China) a chance to catch up.
There are literal misanthropic 'effective accelerationists' in San Francisco, some of whose stated purpose is to train/develop AI which can surpass and replace humanity. There's Facebook/Meta, whose leaders and executives have been publicly pooh-poohing discussion of AI-related risks as pseudoscience for years, and whose actual motto is 'move fast and break things'. There's OpenAI, which with great trumpeting announced its 'Superalignment' strategy without apparently pausing to think, 'But what if we can't align AGI in 5 years?'. We don't need to invoke the bogeyman 'China' to make this sort of point. Note also that the CCP (along with the EU and UK governments) has so far been more active in AI restraint and regulation than, say, the US government, or orgs like Facebook/Meta.
Suppose the West is right on the verge of creating dangerous AI, and China is two years away. It seems like the right length of pause is 1.9999 years, so that we get the benefit of maximum extra alignment research and social prep time, but the West still beats China.
Now, this was in the context of paraphrases of others' positions on a pause in AI development, so it's at least slightly mention-flavoured (as opposed to use). But as far as I can tell, the precise framing here has been introduced in Scott's retelling.
Regardless of the origin of this formulation, this is bonkers in at least two ways. First, who is 'the West' and who is 'China'? This hypothetical frames us as hivemind creatures in a two-player strategy game with a single lever. Reality is a lot more porous than that, in ways which matter (strategically and in terms of outcomes). I shouldn't have to point this out, so this is a little bewildering to read. Let me reiterate: governments are not currently pursuing advanced AI development, only companies are. The companies are somewhat international, mainly headquartered in the US and UK but also to some extent in China and the EU, and governments have thus far been unwitting passengers with respect to the outcomes. Of course, these things can change.
Second, actually think about the hypothetical where 'we'3 are 'on the verge of creating dangerous AI'. For a sufficiently strong sense of 'dangerous', the only winning option for humanity is to take the steps we can to prevent, or at least delay, that thing coming into being. This includes advocacy, diplomacy, 'aggressive diplomacy', and so on. I put forward that the right length of pause is then 'at least as long as it takes to make the thing not dangerous'. You don't win by capturing the dubious accolade of nominally belonging to the bloc which directly destroys everything! To be clear, I think Scott and I agree that 'dangerous AI' here is shorthand for 'AI that could defeat/destroy/disempower all humans in something comparable to an extinction event'. We already have weak AI which is dangerous to lesser degrees. Of course, if 'dangerous' is more qualified, then we can talk about the tradeoffs of risking destroying everything vs 'us' winning a supposed race with 'them'.
I'm increasingly running with the hypothesis that many anglophones are mind-killed on the inevitability of contemporary great-power conflict in a way which I think wasn't the case even, say, five years ago. Maybe this is how thinking people felt in the run-up to WWI; I don't know.
I wonder if a crux here is some kind of general factor of trustingness toward companies vs toward governments - I think extremising this factor would change the way I talk and think about such matters. I notice that a lot of American libertarians seem to have a warm glow around 'company/enterprise' that they don't have around 'government/regulation'.
[ In my post about this I outline some other possible cruxes and I'd love to hear takes on these ]
Separately, I've been getting increasingly close to the frontier of AI research and AI safety research, and the challenge of ensuring these systems are safe remains very daunting. I think some policy/people-minded discussions are missing this rather crucial observation. If you expect it to be easy to control AGI (and expect others to expect that), I can better see why people would frame things around power struggles and racing. For this reason, I consider it worth repeating: we don't know how to ensure these systems will be safe, and there are some good reasons to expect that they won't be by default.
I repeat that Scott’s post as a whole is doing a service and I'm excited to see more contributions to the conversation around pause and differential development and so on.
Relatedly, I had a great conversation at lunch yesterday with Will MacAskill, who’s currently working on questions of coordination around development of advanced AI. Very excited to read more when that comes out!
Center for AI Safety, AI Safety Newsletter #19, 2023-08-15
What indicators? Education, unemployment, privacy, health, productivity, democracy, inequality, ...?
Who, me? You? No! Some development team at DeepMind or OpenAI, presumably, or one of the current small gaggle of other contenders, or a yet-to-be-founded lab.


