
Three visions of BigAI#

I wrote most of this article quite a long time ago, and I did not publish it because I had no time to dig into the details of everything that is forbidden in Europe. Indeed, I did not expect to find such an enormous regulation. So, for the record, I will publish the parts about the US and China now, and postpone the part about Europe for later. Here it goes.

It is quite funny: the US and China published their AI strategies almost at the same time.

The US Strategy: Be the winner#

AmericasAIActionPlan.png

Document available here.

Quote from Trump: "As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance."

As always, this reads like a threat to the rest of the world. The invocation of US national security is interesting because it means that, as with BigData, BigAI has a strategic military dimension we have never discussed on this blog. And strangely, few people are talking about it, even in the US.

To be sure, suppose that, instead of coordinating thousands of drones to draw a Chinese dragon in 3D in the sky, those drones are managed by an AI to kill every human fighter on a battlefield: we have a problem. Especially if the drones are completely autonomous and driven by an onboard AI, leaving no communications to jam.

This also suggests that military contracts, as for the Internet and BigData before, will soon feed the US AI industry, enabling it to weaponize the technology.

On top of that, the plan is quite structured, with many interesting objectives, and a full commentary would take pages.

UAAIActionPlanTOC.png

Instead, we will use the grid of the US AI Action Plan to assess the Chinese and European propositions (see below).

In a certain way, the US has a clear, well-structured position and a clear objective: to have AI help America become "Great Again".

As you can see, I added a row for alignment with the United Nations 2030 plan. The US plan does not mention the United Nations at all.

The Chinese standpoint: Cooperation#

ChineseAIActionPlan.png

The document can be found here.

We will assess the dimensions of the Chinese plan on the canvas proposed by the US plan, plus some dimensions of our own.

Generally speaking, the position is less detailed (the document is also shorter), and some of the ticks in the table are our own interpretation. Many topics also appear under several dimensions of the Chinese plan, and the plan is quite aligned with the United Nations 2030 plan. Its main position is "cooperation".

The European standpoint#

The particularity of Europe is an enormous 2024 text of 144 pages regulating AI, the so-called "AI Act". The document can be found here.

This text would deserve a complete analysis, but unfortunately I have no time now to detail my comments on it.

The introduction of the text shows the schizophrenic position of the EU on AI:

The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union,

  • in accordance with Union values,
  • to promote the uptake of human centric and trustworthy artificial intelligence (AI)
  • while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’),
  • including democracy,
  • the rule of law
  • and environmental protection,
  • to protect against the harmful effects of AI systems in the Union,
  • and to support innovation.

We could laugh about it if it were not so sad. In other words, it is possible to innovate only after having complied with plenty of regulations and ambiguous requirements (there is no "trustworthy AI" so far). We will come back to the European notion of "systemic risk" for AI.

A few weeks ago, an AI "action plan" was published, taking compliance with the AI Act as a prerequisite.

AIEuropeanActionPlan.png

The text can be found here and here.

The AI Continent Action Plan was published some months before the US and Chinese plans.

The comparison table#

Here is the comparison table without Europe.

| Dimension | US | China |
| --- | --- | --- |
| Regulation | "Remove" | "Friendly" |
| Values | US values | - |
| Speech | Free | - |
| Open-source | Yes | Yes |
| Open-weight | Yes | Yes |
| Empower people and society | Yes | Yes |
| Empower workers | Yes | Yes |
| Support AI-enabled Industry | Yes | Yes |
| Invest in AI-enabled science | Yes | Yes |
| Support Scientific datasets | Yes | Yes |
| Advance the science of AI | Yes (US) | Yes (Global) |
| Increase AI safety | Yes | Yes |
| AI ecosystem | Limited | Global |
| Use AI in Government | Yes | Yes, leader |
| Use AI in DoD | Yes | Yes |
| Protect AI innovations | Yes | Yes |
| Combat deep fakes | Yes | Yes |
| Build infrastructure | Complete plan | Yes |
| Environment constraints | "Reduce" | "Sustainability" |
| Export AI to Allies and Partners | Yes | Cooperation |
| International cooperation (*) | No | Yes |
| Standards and norms | No | Consensus |
| Counter Chinese Influence | Yes | NA |
| Strengthen Export control | Yes | - |
| Evaluate national security risks | Yes | Yes |
| Invest in bio-security | Yes | - |
| UN 2030 plan alignment (*) | No | Yes |
| Safe, equitable, inclusive (*) | No | Yes |

(*) Not in the US plan

To be developed further.

Annex: An "Au secours !" from the European industry#

European industry wrote to the European Commission asking it to postpone the application of the European "AI Act". You can find the letter here: Stop the Clock.

stoptheclock

Please broadcast this, or we will, once again, be willing victims.

(July 29-31 2025)

