The February 10-11 AI Action Summit in Paris will address, among other things, how to equitably distribute AI’s benefits globally, concerns about control of AI by a few dominant players, the role of open source, and building responsible and trustworthy AI.
Attendees at the summit, which will gather nearly 100 nations and be co-hosted by the French and Indian governments, will include Chinese Vice Premier Zhang Guoqing and U.S. Vice-President J.D. Vance.
The Summit hopes to move the conversation on from fear about the harm the technology might do, the focus of the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in November 2023, and instead emphasize how general-purpose AI has immense potential for education, medical applications, research advances in fields such as chemistry, biology, or physics, and generally increased prosperity.
But AI’s dark side will nonetheless be top of mind.
Governments, leading AI companies, civil society groups and experts gathered for the AI Action Summit will be presented with the International AI Safety Report 2025, spearheaded by Turing Award winner Yoshua Bengio and compiled with the help of expert representatives nominated by 30 countries, the OECD, the EU, and the UN, as well as several other world-leading experts. The goal of the report is to provide a shared scientific, evidence-based foundation for discussions about the risks.
The document, which was commissioned after the 2023 global AI Safety Summit and became publicly available last week, covers numerous threats ranging from already established harms such as bias, scams, extortion, psychological manipulation, generation of non-consensual intimate imagery and child sexual abuse material, deepfakes and targeted sabotage of individuals and organizations, to future threats such as large-scale labor market impacts, AI-enabled biological attacks, and society losing control over Artificial General Intelligence (AGI).
Managing this toxic brew is complicated by conflicting approaches to risk management. While France, on February 3, announced the creation of the equivalent of an AI Safety Institute (INESIA), one of U.S. President Donald J. Trump’s first acts in office was to rescind an Executive Order issued by the previous administration on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
The danger is that even if Europe and the rest of the world opt for leveraging AI in a “responsible and sustainable manner,” the U.S. government and U.S. companies will treat a recent breakthrough by China’s DeepSeek, whose AI reasoning models appear to be on par with those of U.S. companies OpenAI and Anthropic at significantly lower cost, as a threat that needs to be countered in a no-holds-barred race.
As model capabilities continue to advance amid mounting global instability, fueling the race between the U.S. and China but also opening new opportunities for startups around the world working on alternative open source models, the need for international collaboration on a series of other risks, including the global AI R&D and compute divide, market concentration, environmental risks, privacy, copyright violations, and safeguarding intellectual property (IP), has never been higher.
Global AI R&D And Compute Divide
General-purpose AI R&D is currently concentrated in a few Western countries and China, and this ‘AI divide’ has the potential to increase much of the world’s dependence on this small set of countries, says the International AI Safety Report 2025. Some experts also expect it to contribute to global inequality. Among other things it stems from differing levels of access to the very expensive compute needed to develop general-purpose AI: most low- and middle-income countries have significantly less access to compute than high-income countries.
Compute is, in fact, slated to become “the foundation of next-generation economic growth and influence, shaping economic developments, as well as the future of sovereign power and international influence,” according to a recent Tony Blair Institute for Global Change (TBI) report. It notes that compute infrastructure, and the difference in its availability from country to country, risks becoming the basis of a new digital divide.
Last year TBI published a report that measured countries’ capabilities across the entire compute ecosystem, including talent, energy, governance initiatives, available training and technical partnerships, to help governments understand their compute capacity. The 2024 report, released November 18, underscores how the gap between nations has widened.
The U.S. has some well-publicized major advantages: U.S. tech companies dominate large language models globally and U.S. AI chipmaker Nvidia has become the leading beneficiary, with its Graphics Processing Units (GPUs) widely used by tech giants like Microsoft, Alphabet, Amazon, Meta, and OpenAI for AI applications. In 2024 Nvidia’s market cap reached $3.314 trillion, making it the world’s second most valuable company.
The latest TBI report found that in addition to these advantages the U.S. built more data-center capacity in 2023 than the rest of the world combined, excluding China.
Market Concentration
“We are living in a world where the value of AI is very concentrated in the hands of a few players that have the data, the infrastructure, the energy and the talent,” Clara Chappaz, France’s Minister Delegate for Artificial Intelligence and Digital Technology, cautioned during a panel in Davos moderated by The Innovator’s Editor-in-Chief.
The real power struggle isn’t geopolitical – it’s architectural, argues Sangeet Paul Choudary, a best-selling business book author and advisor to Fortune 500 companies, in the latest edition of his newsletter. It’s about who gets to write the standards and control the infrastructure everyone else builds on, he says.
He calls it sandwich economics: one ‘player’ specifies an economic framework and imposes it on the rest of the industry, or in this case, the rest of the global economy. “Over the past couple of decades, the most significant innovation by Big Tech hasn’t merely been the creation of the world’s leading search engine, social network, e-commerce platform, or cloud computing service,” he says. “It is the ‘sandwich’ they create and impose on entire industries, forcing these industries to re-organize within it. And, with that, they sandwich the rest of the economy in between, squeezing power and profits from them.”
“The ones in the lead will rewrite the rules of global competition and collaboration,” Choudary says in his post. “The others will be left to adopt AI on someone else’s terms, inheriting its priorities and policies along the way.”
In an interview with The Innovator AI scale-up iGenius founder and CEO Uljan Sharka, an AI Action Summit attendee, says entrepreneurs and governments need to prevent that from happening. “AI is the most powerful technology ever built,” Sharka says. “If we centralize it, we risk subjecting ourselves to modern dictatorship. The few tech companies that own it will become the government of the future. They will be the creators and everyone else a user, or a slave. I am motivated by building tech that enables digital equality, allowing everyone to compete, with the goal of building things beyond our imagination.”
Europe is pinning its hopes on European entrepreneurs like Italy-based iGenius that are working on open source and trustworthy AI.
iGenius.ai, which is valued at more than $1 billion, develops open source large language models that respect strict data security rules to ensure that corporate clients can safeguard their intellectual property. The company’s AI platform is used by some of the world’s largest organizations including financial services company Allianz, electric utility company Enel, Intesa Sanpaolo, one of the top banking groups in Europe, global shipbuilding group Fincantieri and several governments, including Italy’s.
“We are attracting customers in the U.S.,” says Sharka. “We tell them if you want the best models go to Silicon Valley. If you want good performance and large language models you can trust, you can talk to us.” (See The Innovator’s Startup Of The Week story about iGenius.)
“Europe is well-poised to win a GenAI-led digital Renaissance by offering an alternative to American and Chinese models that embeds European values and builds trust,” says Sharka. “It is a different way of building tech, and it is an advantage.”
Like Sharka, Arthur Mensch, Mistral AI’s co-founder, is highlighting the need for a European alternative to Chinese and American offerings, telling Reuters that his goal is to make AI “more open and more accessible to everyone.”
Mistral AI this week rolled out its open source Le Chat assistant on the app store, claiming that it is powered by the world’s fastest inference engines, responding with up to 1,000 words per second. It also announced a strategic partnership with French transnational company Veolia aimed at transforming the management and monitoring of industrial sites for water management, waste recycling and local energy production. “This partnership marks a major step forward in industrial management,” Veolia said in a press release. “Thanks to the integration of Mistral’s LLM with Veolia’s data and knowledge base, it will now be possible to have a conversation with the plant, a world first.” By integrating the power of generative artificial intelligence, Veolia and Mistral AI are enabling employees and stakeholders to co-pilot water, waste and energy plants through interactive discussions. The companies said this represents a further step towards the realization of Industry 5.0 and the emergence of augmented employees, where technology directly supports human expertise.
Mistral AI also announced this week that it is expanding its partnership with Stellantis, the world’s fourth-largest carmaker; that it has formed a partnership with the French jobs agency; and that it has more deals in the pipeline with other European authorities.
French government official Philippe Huberdeau, Secretary General of Scale-Up Europe, sees recent market developments as reason for optimism. “There is clearly a lot of space between Stargate [a U.S. high-profile artificial intelligence infrastructure project, backed by OpenAI, SoftBank and Oracle, which aims to spend $500 billion on new compute infrastructure] and DeepSeek for a third European way to leverage this industrial revolution in a responsible and sustainable manner,” he said in a LinkedIn post in the run-up to the summit.
Environmental Risks
Growing compute use in general-purpose AI development and deployment has rapidly increased the amounts of energy, water, and raw materials consumed in building and operating the necessary compute infrastructure, posing environmental risks, says the International AI Safety Report 2025. Indeed, a recent research paper from a Capgemini R&D team highlights that large generative AI models consume 4,600 times more energy than traditional models, with AI-related electricity usage potentially increasing 24.4 times in the most extreme scenario by 2030. Mitigating this environmental impact in the coming years will require a coordinated effort from all stakeholders across the AI value chain. In the run-up to the AI Action Summit the AI and Society Institute, the École Normale Supérieure (ENS-PSL) and the ENS Foundation, with the support of Capgemini, launched an Observatory dedicated to analyzing and mitigating the environmental impacts of AI at all stages of its lifecycle (training, adjustment, inference and end-of-life). The new Observatory aims to establish a solid, shared methodology to encourage sustainable AI usage.
Privacy Risks
The International AI Safety Report 2025 lists violation of privacy as one of the general-purpose AI systemic risks. For example, sensitive information that was in the training data can leak unintentionally when a user interacts with the system. In addition, when users share sensitive information with the system, this information can also leak. But general-purpose AI can also facilitate deliberate violations of privacy, for example if malicious actors use AI to infer sensitive information about specific individuals from large amounts of data.
Copyright Infringements and IP Risks
Copyright infringements also pose systemic risks, according to the International AI Safety Report 2025. General-purpose AI both learns from and creates works of creative expression, challenging traditional systems of data consent, compensation, and control and threatening the livelihoods of content creators ranging from journalists to authors, artists and musicians. Data collection and content generation can implicate a variety of data rights laws, which vary across jurisdictions. Given the legal uncertainty around data collection practices, AI companies are sharing less information about the data they use, complicating efforts to resolve the way copyright will be handled going forward.
There are additionally significant intellectual property risks for large corporations. Approximately 80% of the most valuable enterprise data, including personal information, financial transactions, trade secrets, and intellectual property, cannot be exported to and used to fine-tune centralized AI models and/or open models with a limited license, explains iGenius’ Sharka. This is especially true of generative AI, which merges data and intellectual property irreversibly. For example, financial institutions sharing sensitive data with centralized LLMs can lead to potential data breaches or misuse, which can expose proprietary trading strategies, signal market intentions, and enable market manipulation, he says.
Choosing Between OpenAI And AI That Is Open
Will companies like OpenAI control the future or can an AI that is open enable trustworthy, innovative, and equitable outcomes? That is a question that the participants in the AI Action Summit will ultimately have to address.
“Embracing openness in AI is non-negotiable if we are to build trust and safety; it fosters transparency, accountability, and inclusive collaboration,” said a statement issued Feb. 4 by Mozilla. The statement followed a meeting in Paris in the lead-up to the AI Action Summit, organized by Mozilla, Foundation Abeona, École Normale Supérieure (ENS) and the Columbia Institute of Global Politics, which brought together a diverse group of AI experts, academics, civil society, regulators and business leaders to discuss openness, a topic it says is increasingly central to the future of AI.
“Openness must extend beyond software to broader access to the full AI stack, including data and infrastructure, with a governance that safeguards public interest and prevents monopolization,” says the Mozilla statement. “If AI is to advance competition, innovation, language, research, culture and creativity for the global majority of people, then an evidence-based approach to the benefits of openness, particularly when it comes to proven economic benefits, is essential for driving this agenda forward.”
Mozilla said the group that met in Paris would push the following recommendations for policymakers at the AI Action Summit:

Diversify AI Development: Policymakers should seek to diversify the AI ecosystem, ensuring that it is not dominated by a few large corporations, in order to foster more equitable access to AI technologies and reduce monopolistic control. This should be approached holistically, looking at everything from procurement to compute strategies.
Support Infrastructure and Data Accessibility: There is an urgent need to invest in AI infrastructure, including access to data and compute power, in a way that does not exacerbate existing inequalities. Policymakers should prioritize distribution of resources to ensure that smaller actors, especially those outside major tech hubs, are not locked out of AI development.
Understand Openness As Central To Public Interest AI: One of the official tracks of the Paris AI Action Summit is Public Interest AI. Increasingly, openness should be deployed as a main route to AI that truly serves the public interest.
Make Openness An Explicit EU Policy Goal: As one of the jurisdictions furthest along in developing AI regulatory frameworks, the EU will continue to be a testbed for many of the big questions in AI policy. The EU should adopt an explicit focus on promoting openness in AI as a policy goal.
The Choices Ahead
The future of general-purpose AI is uncertain, with a wide range of trajectories appearing possible even in the near term, including both very positive and very negative outcomes, says the International AI Safety Report 2025. But nothing about the future of general-purpose AI is inevitable. “How general-purpose AI gets developed and by whom, which problems it gets designed to solve, whether societies will be able to reap general-purpose AI’s full economic potential, who benefits from it, the types of risks we expose ourselves to, and how much we invest into research to manage risks – these and many other questions depend on the choices that societies and governments make today and in the future to shape the development of general-purpose AI,” says the report. “AI does not happen to us: choices made by people determine its future.”
Visit: https://innovatorawards.org/