Sunday, June 22, 2025

The Next Industrial Revolution

AI is helping automate factories and connect the $25 trillion global product economy.

The Industrial Internet of Things will connect an estimated 50 billion assets such as machines, turbines, vehicles and rolling stock from the transportation, energy, healthcare, automotive, manufacturing, mining, oil and water industries. It is part of the next industrial revolution, in which the Internet of Things (IoT), combined with artificial intelligence, will disrupt end-to-end value chains.

Corporations are scrambling to prepare for this restructuring as the scope of change will compel many manufacturers to adopt new plant designs, reshape their manufacturing footprints and devise new supply-chain models. The new approach to production will use machines linked through the Internet to assemble parts and adapt to new processes with minimal guidance from human operators. Siemens’ Electronic Works in Amberg, Germany, is an example of what intelligent manufacturing will look like. There, employees manage and control the production of programmable logic controllers through a virtual factory that replicates the factory floor.



Via bar codes, products communicate with the machines that make them, and the machines communicate amongst themselves to replenish parts and identify problems. Nearly 75% of the production process is fully automated, and 99.99988% of the logic controllers are defect-free, notes a June McKinsey report.

The report predicts that going forward, AI-driven insights will help manufacturers across different industries shorten development cycles, improve engineering efficiency, prevent faults, increase safety by automating risky activities, reduce inventory costs with better supply-and-demand planning, and increase revenue with better sales-lead identification and price optimization. The application of AI is also expected to allow end-to-end real-time visibility into the supply chain. “Machine learning will help huge operations get a better understanding of how external upstream and downstream parties interact with their own facilities and functions,” says Nader Mikhail, CEO of Elementum, an American startup that uses natural-language processing and machine learning to extract and categorize information from more than 10 million supply-chain signals every day. The intersection with AI enables supply-chain participants to be notified immediately of relevant opportunities or potential disruptions.

“The way we handle our operations will look completely different,” he says. “We will no longer be making decisions based on blind guesswork. By having a better, broader, and more accurate understanding of how and when things are happening, AI-enabled systems can give users the best options, instead of inundating them with irrelevant and non-targeted data.”
A Single Global Supply Chain

That is not all. Currently, companies operate as if they own their own supply chains. “We believe this is a fallacy and that there is only one global supply chain, and this is inherent in the way we are building our software,” says Mikhail. Elementum is developing a product that aims to map every supplier, warehouse, factory, and port in the world to provide a full picture of the $25 trillion global product economy. AI is a layer that sits on top of this view of the global supply chain to make sense of the signals coming in, giving users targeted insights.

A well-functioning supply chain is the backbone of virtually every industry, and accurate projections for just the right amount of inventory are critical to achieving a competitive advantage, notes the McKinsey report. Factors such as product introductions, distribution network expansion, weather forecasts, extreme seasonality, and changes in customer perception or media coverage can severely affect the performance of the supply chain, says the report. What’s more, traditional systems for forecasting and replenishment can’t take advantage of the amount of data associated with IoT devices and the sheer number of influencing factors. So supply-chain leaders are starting to realize the advantages of applying AI to increase forecasting accuracy, the report says.

Using AI to predict demand is also expected to allow businesses to optimize their sourcing more broadly, including fully automating purchases and order processing. The report cites the example of the German online retailer Otto, which uses an AI application that is 90% accurate in forecasting what the company will sell over the next 30 days. The system has proven so reliable that Otto now builds inventory in anticipation of the orders AI has forecast, enabling the retailer to rush deliveries to customers and minimize product returns. Otto is confident enough in the technology to let it order 200,000 items a month from vendors with no human intervention.

AI is not just helpful for forecasting demand for current products. It can replace humans through automation and enable people and robots to work safely side-by-side in factories. For example, in a warehouse of the British online supermarket Ocado, robots steer product-filled bins over a maze of conveyor belts and deliver them to human packers just in time to fill shopping bags. Other robots bring the bags to delivery vans, whose drivers are guided to customers’ homes by an AI application that picks the best route based on traffic conditions and weather.

Advances in computer vision are behind many developments in these collaborative and context-aware robots, according to the McKinsey report.
Enhanced vision is enabled by more powerful computers, new algorithmic models, and large training data sets. Within the field of computer vision, object recognition and semantic segmentation — the ability to categorize a particular object type, such as distinguishing a tool from a component — have recently advanced significantly in their performance. These changes allow robots to behave appropriately for the context in which they operate. Context-aware robots recognize the materials and objects they interact with and are capable of safely interacting with the real world and with humans.

New AI-enhanced, camera-equipped logistics robots can be trained to recognize empty shelf space. Deep learning can also be used to correctly identify an object and its position, enabling robots to handle objects without requiring things to be in fixed, predefined positions. AI-enhanced logistics robots are also able to integrate disturbances into their movement routines via an unsupervised learning engine for dynamics.

Despite such notable progress and the promising potential of AI’s application to factories and the supply chain, industry players have not yet fully embraced the interconnectivity of machines and sensors and the use of data and analytics. Concerns include the potential for the Industrial Internet of Things to act as a conduit for cyber-security attacks.

Visit: https://innovatorawards.org/

Friday, June 20, 2025

How A New Entity Aims To Help Europe Gain AI Sovereignty

Aleia, a new umbrella organization that will federate the best European AI startups into a single centralized unit and present their offerings as a one-stop-shop for corporate clients, launched on February 3 with financial support from an arm of the French government and several angel investors. The aim is to ensure Europe’s technology sovereignty, increase its competitiveness, and make it easier for large corporates to move from proof-of-concept trials to scaling artificial intelligence across their organizations.


The launch event at Aleia’s headquarters in Paris’ 8th arrondissement included Renaud Vedel, the head of France’s national AI strategy, two former members of the EU’s High Level Expert Group on Artificial Intelligence, the CEOs of several startups and the head of France’s new Cyber Campus, a cybersecurity initiative backed by French President Emmanuel Macron, which will open its doors this month. The event was moderated by Jennifer L. Schenker, The Innovator’s editor-in-chief.

The €8 million in financing will help Aleia, which is now officially part of France’s national AI strategy, accelerate its development ahead of its commercial launch in June of this year.

The French government’s backing of Aleia is part of a trend. At a time when there is growing mistrust in the handling of data by U.S. and Chinese actors, there is a growing desire in Europe to reduce dependence on foreign technology. The lack of European Cloud services has received much attention, and the European Commission’s February 2020 proposals on data and AI stressed the importance of industrial data as a resource that, if leveraged, could give Europe an edge. The notion of sovereignty also intersects with long-standing European concerns about privacy and personal data. Some believe there is an opportunity for Europe to differentiate itself with a brand of AI that is more human-centric, transparent, and trustworthy, incorporating European values, such as data privacy, that could serve as an alternative to American and Chinese offerings.

If Europe plays its cards right and develops and diffuses AI, it could add some $2.7 trillion, or 19%, to its economic output by 2030, according to a report by the World Economic Forum and McKinsey. At stake is not just the competitiveness of nations but the viability of Europe’s largest companies.

Today the uptake of AI by both business and government in Europe is still limited. Many attempts never get beyond proof-of-concept trials. Aleia aims to change that by ensuring that governments and companies have the right data and the right tools to fully leverage the power of AI, says Antoine Couret, Aleia’s CEO. Its ambition is to offer a platform for development that unifies, secures, and industrializes the whole data and AI production chain, from ideation to scaling across the enterprise. “Think of it as a fast track to AI, both in terms of business impact and tech simplicity,” he says.

Europe does not have its own version of Amazon Web Services, Google or Alibaba. What it does have is best-of-breed startups in different areas of AI, says AI expert Francoise Soulié, a speaker at the launch event. Europe’s AI companies are still relatively small, and their offers are niche, so they struggle to get contracts with big companies or public administrations, she says. By grouping them together on one platform, it will be easier for them to compete against offers from foreign tech giants, says Soulié, who is a scientific advisor to France AI, a former member of the European Commission’s High Level AI Experts group, and co-chair of the innovation and commercialization working group of the Global Partnership on AI, a global multi-stakeholder initiative that aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities.

Aleia’s ecosystem of startups includes QWAM Control Intelligence, which specializes in analytics and exploitation of textual data and documents with semantics, AI, and Big Data natural language processing; computer vision expert XXII; Cliris, which uses Machine Learning and Big Data to analyze crowds; Linkurious, which analyzes social graphs; Le Voice Lab, which specializes in analyzing voice; Adobis Group, a specialist in data virtualization; and Cosmian, which secures Cloud-native apps with advanced cryptography. All of the initial startups are French, but Aleia plans to add startups from other European countries.

Startups’ offerings will be integrated into the Aleia marketplace, which is powered by France’s Dawex Data Exchange Platform, a white-label SaaS platform to distribute, source, commercialize and/or orchestrate data ecosystems.

QWAM CEO Christian Langevin, a speaker at the launch event, explained how the capacity to process text data at scale is particularly important to the public sector. He said it is difficult for small tech companies to scale due to the risk-averse attitudes of public-sector purchasing departments, and that this was a driving factor in QWAM’s decision to join the Aleia ecosystem.

Data sharing is another main objective for Aleia. The design of Aleia’s platform will give companies and governments the opportunity to share and exchange data in a controlled manner with the goal of assembling enough data to train algorithms, says Couret.

“The adoption of AI is constrained for many companies, and most notably for the smallest, by a lack of data, the basic material of digital transformation,” he says. “Only by pooling the data of private and/or public organizations can we attain the critical amount needed to accelerate the training of algorithms.”

In addition, “data used by AI is now multi-faceted, with text, images, videos, speech or graph data, so that a one-stop shop is needed where one can come and find all the tools to analyze the data and rapidly develop and deploy his or her application,” Soulié says.

The take-up of AI in the public sector for applications like health and the willingness to share public data with private entities is constrained by privacy concerns, says Vedel. This may change later this year after Europe passes the EU Data Governance Act, which is intended to foster the availability of data by increasing trust in data intermediaries and strengthening data sharing across the EU and between sectors. The law will introduce new data governance, in line with EU rules on personal data protection, consumer protection and competition law, as part of the European Strategy for Data.

“We need companies to be confident about exchanging data and not to be afraid of privacy regulations,” says Couret. “We need to rethink regulation based on a new balance between privacy and efficiency.”

There is another serious issue that needs to be resolved if Europe is going to become an AI leader. Soulié points out that every time a large French or European company awards a contract to an American Cloud provider, European data is being used to improve American AI tools. Most European data is currently being processed by American tech companies. European AI tools will never be as good as, or surpass, those of the U.S. if they don’t train on massive amounts of data, she says.

The global AI race is largely seen as being between the U.S. and China. Europe is a distant third. If European governments want the region to catch up, large companies need to work more closely with European startups, says Laurent Lafaye, Co-CEO of Dawex, a speaker at the Feb. 3 launch event. He suggested, during a round table, that the French government, which holds stakes in some of France’s biggest companies, should exert pressure on those companies to allocate a portion of their existing R&D budget to working with local AI startups.

Aleia, which currently has around 30 employees and expects to triple that number by the end of this year, is actively courting some of France’s largest companies as potential clients.

The pitch is that all data will be hosted on European soil, either with a provider like France’s OVH or within a corporate’s own network, and all processing of data and algorithms will be done using either open source or European software tools. That means it will not be subject to the CLOUD Act, which allows federal law enforcement to compel U.S.-based technology companies via warrant or subpoena to provide requested data stored on servers, regardless of whether the data are stored in the U.S. or on foreign soil. The offer will include four building blocks: AI Lake, for importing and transforming data; AI Factory, to simplify and accelerate the construction of data sets, algorithms, and AI tools; AI Run, for deploying, integrating and supervising artificial intelligence applications; and AI Admin, for managing the governance of data, APIs and the infrastructure in a sovereign environment.

Big Ambitions

Aleia is well plugged into the French and European AI ecosystems. It is a member of Hub France AI, which brings together the French AI ecosystem. (Couret is president of both Aleia and Hub France AI.) It is also a member of the European Alliance on Applied AI and of Gaia-X, a pan-European project that aims to be “a decentralized, secure, transparent digital ecosystem for the European data economy, allowing digital services and data to be shared by any public or private institution without sacrificing data protection and privacy.”

The Cyber Campus will also be part of the Aleia ecosystem. Cybersecurity and AI are intertwined because Europe’s sovereign AI offerings must be secure, says Yann Bonnet, managing director of The Cyber Campus and a former member of the EU’s High Level Expert Group on Artificial Intelligence. Over a hundred entities, including large corporates, public agencies, research organizations and startups, are already involved in the campus. Most participants are French, but the ambition is to connect to other cybersecurity hubs across Europe, Bonnet says. Part of The Cyber Campus’ mission is promoting the sharing of data to reinforce the capacity of public and private entities to handle cyber risks. Among other things, it aims to create a cyber commons studio to share resources and develop sovereign solutions to respond to threats.

Deja Vu

The French state’s support of sovereign tech projects is not new. For example, as part of Project Andromède, announced in 2011 as a government push for French-controlled cloud computing, the French state financed two cloud companies: Cloudwatt, which was formed by Orange and Thales, and Numergy, created by SFR and Bull, in the hope that they could serve as national alternatives to Amazon, Microsoft and Google. The state divested its stakes in 2016. Numergy was absorbed into telecommunications company SFR and Cloudwatt shut down in 2020.

The French government is also one of the investors in Qwant, a French search engine that safeguards data by keeping it on European soil and respecting user privacy. The company, which presents itself as an alternative to Google, is facing some financial difficulties. According to a story in Politico in June of last year Qwant requested an €8 million loan from Huawei, raising concerns that the Chinese telecom equipment vendor, which has been accused by the U.S. government of espionage, could potentially gain visibility or influence on Qwant’s strategy.

Meanwhile, GAIA-X, which was formed at the behest of the German and French governments, has found itself surrounded by controversy. French cloud provider Scaleway announced in November that it would back out of the project and not renew its membership, citing foreign tech company influence as one of the reasons for its departure.

A Different Trajectory

Aleia is on a different trajectory, says Couret. It is a fully private company, with a business and tech focus, and most importantly, is led by an ecosystem of entrepreneurs. He says he is confident that Aleia can make a strong contribution to developing sovereign AI in France and in Europe by promoting data sharing and harnessing the power of Europe’s best AI startups and its ecosystem. “Based on the wealth of deep tech companies in Europe, Europe deserves to have its own AI offer and it must be created at a European level,” he says.

Visit: https://innovatorawards.org/

Thursday, June 19, 2025

The Tech Sector Nearly Destroyed Media. Can It Save It?


After hobbling the news business by grabbing most of the advertising revenue and normalizing the giving away of content for free, Big Tech is using original articles created by journalists at surviving outlets to train its AI models, without giving credit to their work or providing any kind of compensation.

Big Tech companies are, in fact, hoovering up the content not only of newspapers and magazines but artists, authors and musicians, ballooning their own valuations while threatening the livelihoods of content creators. Copyright lawsuits filed against GenAI companies abound, alleging that the way they operate amounts to theft.

Bill Gross, one of Silicon Valley’s most prolific entrepreneurs, believes there is a better way than lawsuits to combat the problem: using tech of his own invention.

Generative AI cannot thrive on a foundation of stolen or uncredited content—it’s neither sustainable nor just, says Gross, CEO of ProRata.ai, a new company that uses tech to enable generative artificial intelligence (GenAI) platforms to attribute and compensate content owners.

Among other things, Gross, who has created more than 150 companies with more than 50 IPOs and acquisitions over the last 30 years, is widely credited with inventing “pay-per-click,” a novel way for search engines to make money on advertising, when he was running a company he founded in 1998 called GoTo.com. Instead of paying for page-views—an old-media model—advertisers pay only when people click on their ads.

Google paid GoTo.com to license its tech, and pay-per-click went on to create a multi-billion-dollar advertising business. Then came the latest disruption: Generative AI models like ChatGPT that respond to questions with knowledge gained from crawling content without credit or compensation, essentially giving the builders of large language models a free ride on the massive investment made by the small number of surviving media outlets that have built successful business models from online journalism.

At the same time, OpenAI and other AI tech firms — which use a wide variety of online texts, from newspaper articles to poems to screenplays and books, to train chatbots — are attracting billions of dollars in venture capital.

It doesn’t need to be a zero-sum game, says Gross. YouTube, which started out by using other people’s content, saw its business thrive when it started sharing revenue 50/50 with creators, and music streaming service Spotify has paid out billions of dollars to artists, so “it is completely possible to pay creators and make a viable business,” says Gross, who spoke about ProRata in January at the DLD technology conference in Munich and at an Axios side event at the World Economic Forum’s annual meeting in Davos.

“Why should Generative AI be an exception?” he asked during an interview with The Innovator.

Whereas Spotify’s payouts are based on the number of streams, with Generative AI the challenge was to figure out the proportionate contribution to an answer. Gross invented, and has patented, technology that can reverse-engineer where an answer came from and what percentage comes from a particular source, so that owners can be paid for the use of their material on a per-use basis. ProRata pledges to share half the revenue from subscriptions and advertising with its licensing partners, help them track how their content is being used by AIs, and aggressively drive traffic to their websites.

When a user poses a query ProRata’s algorithm compiles an answer from the best information available. At the top of the page there is an attribution bar which specifies where the answer came from. It might say, for example, 30% of this answer came from The Atlantic, 50% from Fortune and 20% from The Guardian. The publications are immediately compensated according to their contribution to the answer and a side panel displays the original articles and enables users to click on the original source to learn more.
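
The arithmetic behind that attribution bar is straightforward to sketch. The snippet below is a hypothetical illustration, not ProRata's actual code: it splits a per-answer revenue pool across sources in proportion to their stated contribution shares, using the percentages from the example above and an invented revenue figure.

# Hypothetical sketch of attribution-based payouts; not ProRata's implementation.

def allocate_revenue(attribution: dict[str, float], revenue_per_answer: float) -> dict[str, float]:
    """Split the revenue attributable to one answer across sources,
    proportionally to each source's contribution share."""
    total = sum(attribution.values())
    if total <= 0:
        return {source: 0.0 for source in attribution}
    # Normalize so payouts always sum to the pool, even if raw shares don't sum to 1.
    return {source: revenue_per_answer * share / total
            for source, share in attribution.items()}

# Shares from the example in the article; the revenue figure is invented.
attribution_bar = {"The Atlantic": 0.30, "Fortune": 0.50, "The Guardian": 0.20}
payouts = allocate_revenue(attribution_bar, revenue_per_answer=0.01)  # $0.01 pool per answer
for source, amount in payouts.items():
    print(f"{source}: ${amount:.4f}")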

Think of it as “attribution-as-a-service,” says Gross. “Just as Nielsen measures how TV shows are watched to determine what advertisers should pay, we are moderating the output of the queries to determine how much GenAI providers should pay content providers.”

For starters Gross is launching a GenAI search engine called Gist.ai that only consults the archives of participating publishers. Some 400 publishers have signed on so far, including The Atlantic, Time Magazine, Fortune, The Guardian and Skynews, contributing some 50 million documents. So have book authors such as Adam Grant and Walter Isaacson, as well as Universal Music, since the same technology can be used to attribute credit for images, music and movies.

Expect other types of content providers to follow. In his presentation at DLD Gross demonstrated how his technology could determine that an image of a masked superhero provided by Meta was generated using 90.3% of material from Marvel Comic images and 6.2% from DC Comics.

Gross plans to charge $20 a month for individuals to use the Pro version of Gist.ai, the same rate charged by ChatGPT. The difference, says Gross, is that Gist.ai will only use trusted sources of information and has the buy-in of the content owners, who are ethically compensated on a per-use basis.

“This empowers the long tail,” says Gross. “You don’t have to be a big brand” to take advantage of the service, he says. A growing number of professional journalists are trying to monetize their content, but many have struggled to make a living using sites such as Substack, which, like Prorata, bills itself as a new economic engine for content providers.

Once more publishers join ProRata it will put pressure on GenAI platforms to share revenue with content providers, says Gross. He hopes to eventually get Microsoft, Amazon and maybe even Google to license its technology. “We want to convince the industry that if you want to crawl people’s content you should share,” he says.

Gross says he was shocked to see statistics from the tech company Cloudflare showing that 10 years ago Google crawled two pages for every visitor it sent to a website, but today it crawls six pages for every visitor it sends, making it three times harder for content providers to monetize. OpenAI crawls 250 pages for every visitor it sends, and Anthropic crawls 250,000 for every one visitor. Why so few visits? “They obscure where the content is from so there is almost no reason to go to a site,” says Gross. “I want us to get to a fairer value exchange.”
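
Those figures can be read as a simple crawl-to-referral ratio: pages crawled per visitor sent back to the publisher. The quick sketch below only restates the numbers quoted above to show how the economics have shifted for content providers.

# Back-of-the-envelope comparison of the crawl-to-referral ratios quoted above.
ratios = {
    "Google (10 years ago)": 2,      # pages crawled per visitor referred
    "Google (today)": 6,
    "OpenAI": 250,
    "Anthropic": 250_000,
}

baseline = ratios["Google (10 years ago)"]
for crawler, ratio in ratios.items():
    print(f"{crawler}: {ratio:,} pages crawled per visitor referred, "
          f"{ratio / baseline:.0f}x the old baseline")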



He hopes a combination of things will help convince GenAI platforms to compensate content providers. If guilt and lawsuits don’t work, then, over time, as more and more publishers block their content from being crawled by GenAI platforms, the quality of the chatbots’ answers will deteriorate and people will vote with their feet, choosing instead to get answers from the content of trusted publishers, Gross says.

Visit: https://innovatorawards.org/

Why Companies Are Embracing Open Source AI


A survey of more than 700 technology leaders and senior developers across 41 countries found that business leaders are embracing open source AI tools as essential components of their technology stacks. Overall, more than three-quarters of respondents—76%—expect their organizations to increase use of open source AI technologies over the next several years according to the survey, which was conducted by McKinsey, the Mozilla Foundation, and the Patrick J. McGovern Foundation.

The uptake of open source AI by business can be explained by looking at technological and geopolitical trends.

Open source AI innovations are having an impact on two key AI technology developments: privacy-centric Edge applications powered by small language models (SLMs) and the emergence of reasoning models with higher inference-time compute, according to the survey report.

Embracing open source is also increasingly part of the political zeitgeist as governments seek alternatives to U.S. and Chinese closed models. During the global AI Action Summit in Paris in February, political leaders expressed concern about the concentration of power in the hands of a few AI companies. Some 58 of the countries attending the summit – which together represent one half of the global population – signed a statement committing to promoting AI accessibility to reduce digital divides; ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy; and avoiding market concentration.

“The future of AI belongs to ecosystems, not empires,” says Vilas S. Dhar, president of the Patrick J. McGovern Foundation, a contributor to the survey report and a participant at the AI Action Summit.

In a May 1 interview with The Innovator Dhar argued that open source enables open innovation. “By democratizing access to innovation ecosystems, open source puts the tools of creation into everyone’s hands, allowing regionally appropriate AI models to develop,” he says. He points to an initiative by Chile’s Ministry of Science, Technology, Knowledge and Innovation and the National AI Center to launch Latam GPT, a large language model designed to understand and represent the history and culture of Latin America. The official launch is scheduled for June, with capabilities comparable to OpenAI’s ChatGPT 3.5.

Latam GPT was developed with the support of experts, institutions, and research centers across Mexico, Argentina, Colombia, Ecuador, the United States, Spain, Peru, and Uruguay. The project’s vision is to democratize access to advanced AI technologies, ensuring every country in the region can develop and implement AI systems in their governments and industries.

“This takes the idea of open source and uses it to build a product that will drive an entire ecosystem,” says Dhar. “Open source can unlock thousands of new approaches by breaking the link between dependence on costly foundation models and innovation in the fast, affordable applications that are built on top of them.”

Sovereignty through collaboration is an idea that is taking hold in Europe as well. “In a world where exponential technologies have shifted the concentration of power and value capture to a handful of non-European companies, mostly through proprietary and lock-in techniques, openness is the only radical and non-conflicting public policy that can reverse the trend immediately,” Yann Lechelle, CEO of Probabl, a spin-off of French research center Inria that has been financing a global open source data science library called scikit-learn, a tool widely used for performing complex AI and machine learning tasks, said in a recent interview with The Innovator. “For the European Commission, stimulating, supporting and adopting more open science, open data, open source and open weights, open standards and open hardware may be the strongest path to transforming the economic landscape. It’s a weapon that can be used for a massive leveling of the playing field.”

Big Tech is, in fact, playing a big role in open source as well. The most common open source AI tools used by enterprises, as of January 2025, are those developed by large technology players, such as Meta with its Llama family and Google with its Gemma family, according to the survey report.

“I see this as Big Tech recognizing the growing momentum around open source as a path to build more inclusive and participatory ecosystems,” says Dhar. “It also reflects a realization that market dominance will not be about holding on to the AI model but about sustaining and supporting developer communities that build on top,” he says. “More innovation is better for everyone.”

The Advantages of Open Source

Hyperscalers such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure are releasing industry-specific, cost-efficient SLMs tailored for specialized tasks and distilled into domain-specific tools to power applications for sectors such as manufacturing and finance. But open source developers are also playing an important role in creating these SLMs, enabling the distillation of general-purpose LLMs into smaller models that can match or even exceed the performance of larger ones, according to the survey report.
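
For readers unfamiliar with the term, distillation generally means training a small "student" model to imitate the output distribution of a larger "teacher" while still learning from labeled data. The PyTorch snippet below is a generic sketch of that standard loss, not the specific pipelines the report describes; the models and data it would be applied to are assumed to exist elsewhere.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Standard knowledge-distillation objective: a weighted mix of
    (a) KL divergence between softened teacher and student distributions and
    (b) the usual cross-entropy on hard labels."""
    # Soften both distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2, as in the classic formulation.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce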

Small models enable Edge applications and on-device intelligence for organizations that prioritize latency and/or privacy. Some examples of small-model hubs that distribute open source (and other) models cited in the survey report include the Qualcomm AI Hub, which addresses the needs of Edge AI product OEMs, and Ollama, which offers a framework and tools to deploy open models to the PCs of individual advanced users. The expectation is that hubs like these will add trusted third-party evaluation certification tools, enhancing customer trust and confidence.

The second key trend is the emergence of reasoning models, which employ higher compute during inference time (rather than during pretraining) to excel at specific tasks, says the survey report. While the initial wave of reasoning models was proprietary (such as OpenAI’s o1 reasoning model), open source alternatives—including China’s DeepSeek-R1 and a similarly capable model from Alibaba—have quickly followed. Other players are building on and adapting these. The survey report mentions how Perplexity has modified a version of DeepSeek-R1 to provide more unbiased and accurate information and how Hugging Face, using its Smolagents library, has created an alternative Deep Research model, challenging offerings from OpenAI and Google DeepMind.
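
A simple way to picture "higher inference-time compute" is sampling several independent answers and keeping the one they agree on, trading extra computation at query time for accuracy. The sketch below illustrates that generic idea only; it is not how any particular reasoning model works, and generate_answer is a hypothetical stand-in for a call to whatever model is being served.

from collections import Counter
from typing import Callable

def answer_with_more_inference_compute(prompt: str,
                                       generate_answer: Callable[[str], str],
                                       num_samples: int = 8) -> str:
    """Trade extra inference-time compute for accuracy by sampling several
    independent answers and returning the most common one (majority vote)."""
    samples = [generate_answer(prompt) for _ in range(num_samples)]
    most_common_answer, _count = Counter(samples).most_common(1)[0]
    return most_common_answer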

Other open technologies are emerging to help builders optimize and enhance their model-training pipelines and processes. DeepSeek, for example, has continued to offer open source repositories, including parallelism and integration capabilities, for its reasoning models, says the survey report.

While the capabilities of open source models once lagged behind proprietary ones, base models have improved significantly, says the survey report. And while enterprises may face challenges in tailoring some of the components of reasoning models and the time to value is often longer, “the bottom line is that open source offerings now allow model service providers to bring together a full stack of technologies that deliver an effective developer experience, enable modularity, and capture the advantages of community-based development,” says the survey report. The report argues that open source provides organizations greater flexibility and choice to deploy AI either on the Edge or in the Cloud, depending on their privacy, latency, and performance needs. And it says open source’s operating model and architectural flexibility “can help build more resilient AI systems.”

Navigating The Risks

Amid the benefits and value of open source AI, there are risks, primarily related to security, that could affect their adoption, says the survey report. The most relevant AI risks cited include cybersecurity (62% of respondents), regulatory compliance (54%), and intellectual property (50%).

The survey report recommends four ways businesses can control the risks when implementing an AI model-based system, whether open source or proprietary:

Guardrails: The establishment of robust guardrails—such as automated content filtering, input/output validation, and human oversight—can help ensure responsible use and secure outputs.
Third-party evaluations: Conduct regular assessments with standardized benchmarks that allow for certification. During such benchmarking, private evaluations assure that test data sets are kept private from the model.
Documentation and monitoring: Operationally, a software bill of materials can help track version discrepancies and vulnerabilities by maintaining detailed inventories of open source components. Quantitative risk assessments can assess the severity of vulnerabilities in open source systems.
Cybersecurity practices: To secure data privacy and system integrity, running models in trusted execution environments may help ensure sensitive data remains encrypted during processing. Incorporating differential privacy and federated learning techniques during training can prevent models from memorizing confidential information. Strong access controls within model repositories, network segmentation between training and inference servers, continuous monitoring of security incidents, and cryptographic hash verification to confirm that models are from trusted repositories can help address both content safety and cybersecurity challenges in production AI environments, the survey report says.
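
As a concrete illustration of that last point, checking a downloaded model artifact against a checksum published by a trusted repository takes only a few lines. The sketch below uses Python's standard library; the file path and expected digest are placeholders rather than values from any real repository.

import hashlib
from pathlib import Path

def verify_model_checksum(model_path: Path, expected_sha256: str) -> bool:
    """Return True if the model file's SHA-256 digest matches the checksum
    published by the trusted repository; read in chunks to handle large files."""
    digest = hashlib.sha256()
    with model_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()

# Placeholder values for illustration only.
if not verify_model_checksum(Path("models/open-model.safetensors"), "0" * 64):
    raise RuntimeError("Model checksum mismatch: do not load this artifact.")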

A Foundation For A More Innovative Future



The survey found that many companies are opting for hybrid open source and proprietary systems. Still, the momentum behind open source AI is undeniable, says Mozilla President Mark Surman, a contributor to the survey report. “In just the past year, we’ve seen countless examples proving that community-driven innovation can not only compete with but even outperform proprietary models,” he said in a statement in the report. “The next big bet is building open tools and a stack that make AI truly accessible—like an AI Lego box that anyone can use. If we get this right, open source AI won’t just be an alternative to closed systems. It will be the foundation for a more competitive, creative, and innovative future.”

Wednesday, June 11, 2025

Ant Group Recognized for Innovation Excellence As a Top 100 Global Innovator 2024



HANGZHOU, China--(BUSINESS WIRE)--For the third consecutive year, Ant Group, a global digital technology provider, has been recognized as a Top 100 Global Innovator™ 2024 by Clarivate™, a global leader in connecting people and organizations to intelligence they can trust to transform their world. This recognition reaffirms Ant Group's ongoing commitment to innovation excellence in technologies such as artificial intelligence (AI) and blockchain.

Gordon Samson, President, Intellectual Property, Clarivate, said, “We congratulate Ant Group for being named as a Top 100 Global Innovator again. To feature as a Top 100 Global Innovator is no mean feat as maintaining an edge in the innovation ecosystem is harder than ever. Organizations must balance experimentation and risk with discipline and reward. We measure and rank innovative performance in a dynamic and thorough way, using live thresholds of differentiation. At Clarivate, we think forward by analyzing the quality of ideas, their potency and their impact to identify the world’s top innovators, and this year we reveal the ranking of these innovators for the first time.”

As of the end of 2023, Ant Group had filed 32,459 patent applications globally, with 22,102 patents granted. The top three categories of these patent applications are security technology, blockchain and AI.

"Our recognition by Clarivate as one of the Top 100 Global Innovators is a testament to our innovation capabilities," said Shen PAN, Director of Patents at Ant Group. "We are committed to leveraging technology to build trust and accelerate digital transformation across industries. Our journey is driven by our advancements in key technologies such as blockchain, privacy computing, security technology, Internet of Things (IoT) databases, and most notably, AI.”

In the field of AI, Ant Group has been continuously leveraging its technology to enhance the user experience across its product offerings for years. As of the end of 2023, Ant Group had filed over 3,000 AI-related patents. During Alipay’s 2024 Chinese New Year campaign, AI features in the app attracted 600 million interactions.

To facilitate technological advancement across industries, Ant Group has been making its innovations publicly accessible to developers in the open-source community. For example, Ant Group’s AI infrastructure team open-sourced ATorch, an extension library for PyTorch that can improve GPU utilization rates up to 60% in large-scale pre-training of Large Language Models (LLMs). Meanwhile, the company’s open-sourced Lookahead achieves lossless generation accuracy for LLMs while boosting their inference speeds by 2 to 6 times.

By the end of 2023, the number of open-source repositories from Ant Group on platforms such as GitHub had exceeded 1,900. Additionally, according to the 2023 Blue Paper on the Development of Open Source in China published by China OSS Promotion Union (COPU), Ant Group is recognized as one of the top three organizations in the country in terms of open-source contributions and influence.

Leveraging its innovations in technologies such as blockchain, privacy computing, security technology, IoT, and databases, Ant Group provides technology products and services to support the digital transformation and collaboration of global enterprise customers across a variety of industries. These industries include banking, telecommunication, real estate, medicine and energy. The company has garnered recognition from various organizations for its excellence in delivering innovative products and services to customers. For example, AntChain was recognized by Forbes on the Blockchain 50 list for five consecutive years (from 2019 to 2023). In September 2023, ZOLOZ was named as a Representative Vendor for the second consecutive time in the latest Gartner Market Guide for Identity Verification.

Methodology

The Top 100 Global Innovators uses a complete comparative analysis of global invention data to assess the strength of every patented idea, using measures tied directly to their innovative power.

To move from the individual idea strength to identify the organizations that create them more consistently and frequently, Clarivate sets two threshold criteria that potential candidates must meet and then adds a measure of their patented innovation output over the past five years.

For full information on the methodology used to identify the 2024 list, see here.

About Clarivate

Clarivate™ is a leading global provider of transformative intelligence. We offer enriched data, insights & analytics, workflow solutions and expert services in the areas of Academia & Government, Intellectual Property and Life Sciences & Healthcare. For more information, please visit www.clarivate.com

About Ant Group

Ant Group traces its roots back to Alipay, which was established in 2004 to create trust between online sellers and buyers. Over the years, Ant Group has grown to become one of the world's leading open Internet platforms.

Through technological innovation, Ant Group supports its partners in providing inclusive, convenient digital life and digital financial services to consumers and SMEs. In addition, it has been introducing new technologies and products to support the digital transformation of industries and facilitate industrial collaboration. Working together with global partners, the company enables merchants and consumers to make and receive payments and remit around the world.

Visit: https://innovatorawards.org/

'Startup in Shanghai' competition invites international innovators



The 2025 edition of "Startup in Shanghai" International Innovation and Entrepreneurship Competition is now open for global applications, welcoming outstanding projects from innovation-driven teams and enterprises worldwide, the Shanghai Municipal Science and Technology Commission said recently.

This year's application period officially opened in late May and will run through the end of July. International applicants should register through the WeStart TOP100 website.

Applicants must have a core team of at least three members and possess original technologies with clear commercial potential. Projects should fall within key sectors, such as next-generation information technology, biomedicine, high-end equipment manufacturing, new energy, new materials, environmental resources, and new energy vehicles, according to the commission.

The competition will include preliminary, semifinal, and final rounds, with participants pitching their ideas to panels of industry experts and investors. In addition to first, second, and third prizes, a special grand prize will be awarded to top performers. Winners will receive funding, incubation support, investment matchmaking, and opportunities to engage in high-profile innovation events in China.

Upon registering a company in Shanghai, award recipients may access exclusive services offered by designated partner banks. Eligible participants may also be recommended for exposure and engagement opportunities at high-profile activities, including the Pujiang Innovation Forum, Shanghai Science and Technology Festival, China International Import Expo, and China International Industry Fair, according to the commission.

Since its inception in 2012, the competition has attracted over 70,000 startup participants worldwide, offering a premier platform for showcasing cutting-edge technologies and entrepreneurial talent.

Visit: https://innovatorawards.org/