After DeepSeek, France Should Have Called Off The AI Action Summit

Even before DeepSeek, the France AI Action Summit, to be held in Paris on February 10–11, already looked like it would make little difference, because the organizers did not seem to revise any assumptions about AI safety in the agenda [Public interest AI, Future of work, Innovation and culture, Trust in AI, Global AI Governance]. You can download the program here.

In the more than two years that AI has been in the news, several initial assumptions about safety, alignment and regulation were made that later became obviously the wrong direction.

Now, after DeepSeek, France should have called off the summit and organized something smaller, at a later time, focused on its own homeland. A common prior conclusion was that it was better to build bigger AI in one place so that it would not be built elsewhere first. With DeepSeek, however, it should be clear that the race is no longer for an edge in AI advancement but for superiority in AI safety.

Whichever country can build the safest AI wins. If a country has a powerful but unsafe AI, capable of answering any question or easily jailbroken into doing so, that AI would enable harms within the country, against the country, more deeply than it ever could for some distant adversary.

There are many answers an AI can give that could lead to harmful actions, and those actions will be carried out locally first, in whatever country, making unsafe AI a domestic risk. Access to unsafe AI hosted elsewhere is also a risk. The answer is not simply to block access, since blocks are easily bypassed once actors know that a less safe AI is out there.

The national security project for France, or any country, is not just to build safe AIs, but to have safe AI monitor the outputs of unsafe AI, from any source, on the networks within the country, and to explore safety models that can serve as regulatory tools which AI models can adopt for broader safety. AI safety tools, as viable products, would also be among the most profitable, as AI percolates into social and productivity uses.
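
To make the monitoring idea concrete, here is a minimal sketch, in Python, of an output-screening gate that a network-level safety tool could apply to model responses. It is illustrative only: the names (screen_output, SafetyVerdict) are hypothetical, and the regular-expression rules are a stand-in for what would in practice be a learned safety classifier, since keyword patterns are trivially evaded.

    import re
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SafetyVerdict:
        allowed: bool            # whether the response may pass through
        category: Optional[str]  # which rule matched, if any

    # Stand-in rules for illustration only; a real monitor would use a
    # trained classifier rather than keyword patterns.
    BLOCKED_PATTERNS = {
        "weapon-synthesis": re.compile(r"step[- ]by[- ]step.*(explosive|nerve agent)", re.I | re.S),
        "credential-theft": re.compile(r"(phishing kit|steal\s+passwords)", re.I),
    }

    def screen_output(text: str) -> SafetyVerdict:
        """Screen one model response before it reaches the network."""
        for category, pattern in BLOCKED_PATTERNS.items():
            if pattern.search(text):
                return SafetyVerdict(allowed=False, category=category)
        return SafetyVerdict(allowed=True, category=None)

    print(screen_output("Here is a phishing kit you can deploy today."))
    # SafetyVerdict(allowed=False, category='credential-theft')

The design point is that the gate sits between the model and the user, so it applies regardless of where the model itself came from.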

Some countries in Europe are rushing to ban access to DeepSeek when they have neither a matching AI nor a safe AI to counter it. What they could do instead is aggressively fund labs in most of their universities to work on general AI safety and alignment, building a technical defense in a vital area, rather than having nothing and assuming AI is some sort of social media or website. Also, AI from a seemingly unallied country is not automatically bad, nor is AI from an allied country automatically good. AI models from a friendly nation can replace jobs or be used to fake voices, images or videos, or to generate malicious code. It is not necessarily about where an AI comes from; it is about what it can irreversibly and consequentially wreak, in general, especially for those who have nothing in safety or alignment.

The EU has an AI office, which seems geared more to regulation than to technical advances. DeepSeek has already made that office irrevocably obsolete. The UK has an AI Safety Institute that was evaluating AI models, but the AI companies would not give access to the weights except in private, for proprietary reasons and to avoid other nations asking for the same. The UK AISI was careful not to irk the AI labs. DeepSeek made its weights open to everyone, not just in private. In over a year since its inception, the UK AISI also does not seem to have produced anything promising for AI safety for the UK homeland, at minimum against fake AI videos, images or voices, which calls its strategy, direction and innovative might into question. Several countries launched AI safety institutes after the UK, yet they do not seem to recognize how important safe AI is to their nations, in spite of current misuses, the future, and the possible emergence of another unexpected model like DeepSeek. France, the host, does not even have one.

DeepSeek is not a wake-up call to ban semiconductors, or access to the model, or to accelerate AI development; it is a wake-up call to pursue intense research in every possible direction on general AI safety. Some star AI professors have already said that no one knows how to control powerful AI systems. This means that every possible channel, from theoretical neuroscience to mathematics to new deep learning architectures, should be explored as priority research, towards technical development, by serious countries, for safety.

Governments will continue to make laws against misuses of AI, but AI is an intelligent tool, and evasion of such laws is far easier with it. Penalizing the responsible AI tool directly, tracking the tool and its outputs, and preventing it from causing harm would be better technical channels for safety than laws against people alone, as AI ploughs on. Laws may apprehend some offenders, but many more may elude them. Backing laws with technical tools would give hope to AI safety, alignment and regulation.
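
As one illustration of what tracking a tool's outputs could mean technically, below is a minimal sketch, in Python, of statistical watermark detection for generated text, in the spirit of published green-list watermarking schemes. It is a toy under stated assumptions: the hash-seeded vocabulary split and the whitespace tokenization are simplified stand-ins, not any lab's actual scheme.

    import hashlib
    import math

    GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step

    def is_green(prev_token: str, token: str) -> bool:
        """Deterministically assign `token` to the green list, seeded by the
        previous token, so a detector can recompute the split afterwards."""
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] < 256 * GREEN_FRACTION

    def watermark_z_score(tokens: list) -> float:
        """Large positive scores suggest a sampler biased toward green tokens;
        ordinary human text should hover near zero."""
        n = len(tokens) - 1  # number of (previous, current) token pairs
        greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
        expected = GREEN_FRACTION * n
        stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
        return (greens - expected) / stddev

    sample = "the model wrote this passage token by token".split()
    print(round(watermark_z_score(sample), 2))

A generator that preferentially samples green tokens leaves a statistical signal this detector can measure over long texts, which is the kind of technical backing for regulation the paragraph above argues for.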

Workforce security is also a major risk, should AI replace people. Pursuing novel economic paths is another important research direction, one that goes beyond universal basic income or the future of work.

The AI Action Summit will proceed in France and is likely to feature some major announcements. But just as the last AI safety summit in the UK achieved little, as did the one in Seoul, the surprises ahead could quickly make this summit unremarkable.

There is a recent feature in Le Monde, “Artificial intelligence: The first measures of the European AI Act regulation take effect,” stating that, “The first measures of the European AI Act will come into effect on Sunday, February 2, coincidentally just a few days before the Summit for Action on Artificial Intelligence (AI) on February 10 and 11 in Paris. Although the symbolism is strong, for the time being, this first part only concerns certain prohibited uses. In concrete terms, this Sunday will see a ban on certain uses of AI deemed unacceptable by the AI Act. These include social rating software, whether private or public, such as that used by China, individual predictive policing AI aimed at profiling people by estimating their propensity to commit offenses, and also emotion recognition at work or school, to analyze the behavior of an employee or pupil. Also banned are the exploitation of people’s vulnerabilities, manipulation or subliminal techniques.”

Comments (4)


  1. Dougie Blackwood says:

    I worry about AI. We are promised the kingdom of heaven with lots of wonderful things that it will be able to do. All very well if that is all it does. Unfortunately, these systems are under the control of very rich people, and historically their aims have mostly been to increase their wealth to the detriment of the rest of us.

    Will AI become the tool of “BIG BROTHER” in a society where doublespeak and thought control become the norm and most of us are persuaded that “There Is No Alternative” to the stories we are told? It’s bad enough now with news manipulation in the mainstream media, but at least with some diligent searching we can still dig out inconvenient truths using the internet.

    Maybe I’m being paranoid but maybe not.

  2. Peter Breingan says:

    The final age of Homo sapiens, the wise human?

    Oh, man’s hubris is not quite exhausted.
    Somehow, the oft feared nuclear extinction evaded (so far).
    Having plundered the power of the mighty sun and the atom,
    man attempts to artificially recreate himself and eventually his God.

    Is this not the peak of hubris?
    Man striving to emulate God (or the gods)?
    Yet little has man learnt the teachings of the very God they invented.
    Will the heart of the AI be the ten commandments,
    from the Bible or Quran, which proclaims,
    Thou shalt not make an image of God.

    Without AI, in time (a long time) man would evolve into a new species,
    hopefully more adapted to a fruitful existence on our home planet.
    But with AI man sees it can accelerate this process,
    by developing an inhuman technology under human guidance.
    It is inevitable a myriad of robots will be developed and deployed,
    some limited in action, others autonomous (if man allows).

    A division amongst the people will come about.
    Those in favour of robots and those not so.
    They will live in different cities and there will be strife between them.
    The AI powered autonomous robots, by now beyond human control,
    will further develop how they choose.

    Human reproduction will be in danger of collapse.
    The autonomous AI robots do not have a production problem.
    The human’s last chance is to destroy the robots and all AI technology.
    This may be extremely difficult by then.

  3. SleepingDog says:

    Another incoherent article about AI, full of nonsense and presumably written by AI. I suppose the literal effect is analogous to dazzle camouflage: you can see the object but cannot discern much that is useful about it. The cacophony escalates. Insanity levels increase.

    There’s a common degradation feedback loop: machine-learning from AI-generated content (much of which may be unlabelled as such), and human journalists often seem to think it is funny or expedient to use AI to generate articles about AI, which are then trawled for training purposes, and used to spawn a new generation of articles, and so on.

    Plus journalists may not be able to make sense of AI itself. And hype/doom surrounds it. So AI is a topic that is exceptionally garbled as a result, when articles are generated. And I presume it will only get worse as humans adopt the ‘AI style’, just as children pick up USAmericanisms from their devices. Vices? You decide.

    What safety, and whose? If AI can hack death machines and deadly people, and malicious social media pranksters can use AI to trick and beach whales for clicks, and fake sludge spreads from the digital world into the psychological and material, what is the method for putting the genie back into the bottle?

  4. Statan says:

    Wow. In the future, can articles with no content please come with a warning?
