Thursday, July 31, 2025

AI Is Changing the Way Companies Interact With Their Customers

 

KLM Royal Dutch Airlines is winning acclaim for a service that uses artificial intelligence not only to send customers booking confirmations, check-in notifications, boarding passes and flight status updates but also to answer their questions.
The service uses a chatbot, a conversational computer program that customers can interact with via a messaging interface like SMS, Facebook Messenger, Apple iMessage, Slack, Kik, Telegram and WeChat. The result: huge time savings for customer service agents, who can instead focus on customers with more pressing or complicated needs. Beyond the airline industry, chatbots are being used in fields such as retail, banking, insurance and health, signaling a sea change in the way customers converse with companies or engage in commerce.

“AI is making the user interface both simple and smart — and setting a high bar for how future interactions will work,” says a recent report from Accenture. “It will act as the face of a company’s digital brand, be a key differentiator — and become a core competency demanding C-level investment and strategy.” Within five years, more than half of customers will select a company’s service based on its AI, the report says.
In the same way that a customer service representative can please or anger a customer, an AI system will represent a company’s brand and leave a lasting impression. In the United States alone, businesses lose an estimated $1.6 trillion annually due to poor customer service, according to the Accenture report. In addition, it says, 68% of consumers say they will not go back to a brand once they have switched.

A Better Brand Experience

But get the customer experience right, and there’s a much larger opportunity, the report notes. Instead of interacting with one person at a time like a human representative does, a bot can interact with a virtually unlimited number of people at once — based on the skills built for it — and maintain a powerful, consistent brand experience in every interaction. KLM’s bot uses deep-learning algorithms that enable it to ingest vast volumes of historical customer service data, which is then integrated into the customer service workflow. When a new message comes in via a digital channel such as email, chat, social media or text, the bot takes three actions: it predicts and auto-fills metadata related to the incoming message; it proposes the best response and shows it to the contact center agent for approval or personalization before it is sent to the customer; and it automatically answers questions for which its confidence exceeds a certain threshold.
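
The triage logic at the heart of such a bot can be illustrated with a minimal sketch. The code below is purely illustrative and is not KLM’s implementation; the model interface, threshold value and field names are assumptions.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off; real systems tune this per intent


@dataclass
class Suggestion:
    reply: str          # candidate response text
    confidence: float   # model's confidence that the reply resolves the message


def handle_message(message: str, model) -> dict:
    """Triage an incoming customer message.

    `model` stands for any object exposing a hypothetical
    `extract_metadata(text) -> dict` and `suggest(text) -> Suggestion`
    interface; it is a placeholder, not a real API.
    """
    metadata = model.extract_metadata(message)   # 1. predict and auto-fill metadata
    suggestion = model.suggest(message)          # 2. propose the best response

    if suggestion.confidence >= CONFIDENCE_THRESHOLD:
        # 3. answer automatically when confidence is high enough
        return {"action": "auto_reply", "reply": suggestion.reply, "metadata": metadata}

    # Otherwise route the draft to a human agent for approval or personalization
    return {"action": "agent_review", "draft": suggestion.reply, "metadata": metadata}
```
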
Instead of using a website or mobile app, KLM customers can get things done simply by chatting with a bot and asking questions naturally via text, such as “Can I bring my puppy on the flight?” This helps KLM handle a huge volume of messages. The airline says it has more than 22 million social-media followers, who mention it on various platforms more than 100,000 times a week. KLM’s team of 235 social media service agents engages in 15,000 conversations a week across all its social platforms, offering 24/7 service in 10 languages.

Other companies are also increasingly using bots to help with customer service. For example, Burberry’s Facebook Messenger bot, launched during London Fashion Week last year, shares new collections and doubles as a live customer service portal. The fast-food chains Burger King and Wingstop use bots to show users nearby locations and menu choices, take and confirm orders, estimate when a customer’s order will be ready and enable payments. Credit Agricole’s health insurance offering is represented by a bot named Marc, while Allianz has introduced a chatbot called Alli that offers 24/7 assistance answering questions about a wide range of insurance products.

Visit: innovatorawards.org

Wednesday, July 30, 2025

How AI Is Impacting HR And The Future Of Work

 

If developed and deployed correctly, AI‑based tools have the potential to boost employee productivity, save HR departments time and money, and improve fairness and diversity outcomes. Used wrongly, AI-based HR tools can reinforce historical biases and expose the companies that use them to reputational damage and legal issues. Read our story to learn how the World Economic Forum and several startups are helping corporates find the right approach.

Visit: https://innovatorawards.org/

Tuesday, July 29, 2025

Biology: The Next Target For AI Large Language Models

AI large language models that can mimic human prose are generating lots of excitement, but the most significant long-term opportunity may be unlocking the language of life.

That’s the premise behind Bioptimus, a newly launched French company that aims to build the first universal AI large language model (LLM) in biology.

While a few startups are leveraging LLMs for specific areas of biology, such as the design of novel protein therapeutics, Bioptimus believes it is the first to try to build a model trained on the data necessary to understand multiple biological processes and how they connect with each other. The aim is to address many different scales of biology, including organs, tissues, cells, molecules, and atoms, to gain a holistic view of how the human body functions and advance the treatment of disease.

Even simple biological systems are made up of a huge number of components that interact with one another in complicated ways that are not yet understood. The hope is that amassing huge datasets and applying AI will allow researchers to gain fuller, more accurate pictures of complex biological systems.

Improvements in AI over the past two years are a step-change in the field, Professor Jean-Philippe Vert, PhD, co-founder and CEO of Bioptimus, chief research and development officer at Owkin and a former Research Lead at Google Brain, said in an interview with The Innovator. “We have reached a point where we have the recipes and the know-how to train AI systems,” he says. “Just as large language models are being trained using text written in human language, we can show it data written in other languages – like the language of nature – and capture the laws of biology.”

One of the key forces behind the dramatic recent progress in artificial intelligence is so-called “scaling laws”: the fact that radical improvements in performance result from continued increases in LLM parameter count, training data and compute.
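A commonly cited empirical form of such scaling laws, shown here purely to illustrate the idea (it is the Chinchilla-style parameterization from the AI literature, not a formula published by Bioptimus), expresses model loss as a power law in parameter count and training data:

```latex
% Illustrative (Chinchilla-style) form of an LLM scaling law -- not a Bioptimus formula
% L: pre-training loss, N: number of parameters, D: number of training tokens
% E: irreducible loss; A, B, \alpha, \beta: empirically fitted constants
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```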

In the short term, applying LLMs to patient data could help accelerate the development of new drugs and precision medicine, says Vert. Longer term, it has the potential to help create digital twins of individuals to capture and monitor the state of the body, making disease prevention easier, he says.

To achieve those goals Bioptimus is assembling a team of scientists that includes Google DeepMind alumni as well as scientists from Owkin, a French unicorn and a member of the World Economic Forum’s Global Innovators Community.

DeepMind, which is owned by Google parent company Alphabet, in late 2020 used an AI system called AlphaFold (which does not use large language models) to produce a solution to protein folding that humans could not solve. Because a protein’s shape is closely linked with its function, knowing a protein’s structure unlocks a greater understanding of what it does and how it works, helping accelerate scientific research and discovery globally. Within 12 months AlphaFold was accessed by more than half a million researchers and used to accelerate progress on important real-world problems ranging from plastic pollution to antibiotic resistance.

Owkin, for its part, has been using early versions of AI to identify new treatments, de-risk and accelerate clinical trials and develop AI diagnostics. It founded MOSAIC, the world’s largest multi-omics atlas for cancer research. Multi-omics is a new approach that combines the data sets of different modalities: in this case bulk, single-cell and spatial transcriptome, genome, pathology slide, and clinical data.

Alumni from both companies who are now working for Bioptimus are optimistic that they can make further – potentially radical – improvements to healthcare by applying LLMs to biology.

Accelerating Precision Medicine

One of the most highly anticipated use cases of AI is in precision medicine, where treatments are tailored to each individual patient’s biology. However, the volume of data analysis required to develop highly targeted treatments is enormous.

With AI-enabled screening, biochemists can shorten the search for disease drivers and potential drug candidates, simplifying the drug discovery program and advancing the development of targeted therapies.

Not only does AI make precision medicine more accessible, it also highlights some of the shortcomings of current treatment approaches for complex diseases. For example, cancers can differ from patient to patient, both genetically and symptomatically, yet different patients often receive the same treatment. While critical strides have been made in the development of more personalized treatments, the introduction of AI tools is expected to fast-track advances.

Outside of the lab, data from real patients in hospitals is being leveraged to inform treatment approaches. Owkin is addressing privacy concerns by using federated machine learning, a decentralized training model that prevents patient data from ever leaving a hospital’s server.
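Federated learning keeps raw patient records on site and shares only model updates with a central coordinator. The sketch below shows the general idea, federated averaging, in deliberately simplified form; it is an illustration under stated assumptions, not Owkin’s actual system, and the hospital datasets are placeholders.

```python
import numpy as np


def local_update(global_weights: np.ndarray, local_data, lr: float = 0.01) -> np.ndarray:
    """Train on one hospital's data without that data ever leaving the site.

    `local_data` is a placeholder for an on-premise iterable of (features, label)
    pairs; only the updated weights, never the records, are sent back."""
    weights = global_weights.copy()
    for features, label in local_data:
        prediction = features @ weights
        gradient = (prediction - label) * features   # gradient of squared error
        weights -= lr * gradient
    return weights


def federated_round(global_weights: np.ndarray, hospitals) -> np.ndarray:
    """One round of federated averaging across participating hospitals."""
    updates = [local_update(global_weights, data) for data in hospitals]
    return np.mean(updates, axis=0)   # only aggregated weights are centralized
```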

Successful implementation of federated learning – along with the utilization of LLMs and combining data sets – could hold significant potential for enabling precision medicine at scale, helping match the right treatment to the right patient at the right time.

Bioptimus said it expects to benefit from Owkin’s data generation capabilities and federated access to multimodal patient data sourced from academic hospitals worldwide. Fueled by scalable computing resources from Amazon Web Services that will power the LLMs and an abundance of data from all scales and modalities, the newly launched French company believes it will have the power to create computational representations that mark a strong improvement over models trained solely on public datasets and a single data modality.

Bioptimus’ approach has an “unprecedented potential to personalize medicine, capturing the uniqueness of each individual while harnessing the collective knowledge,” Edward Kliphuis, a partner at venture firm Sofinnova Partners, which led a $35 million seed round of funding into Bioptimus in February, said in a statement. Other investors in Bioptimus include Bpifrance Large Venture, Frst, Cathay Innovation, Headline, Hummingbird, NJF Capital, Owkin and Top Harvest Capital as well as well-known French tech entrepreneur Xavier Niel.

Seeking Game-Changing Impact

Bioptimus expects to release its first LLMs later this year, says Vert, who will not have an operational role at Bioptimus and will keep his role at Owkin. Instead of creating one big model for everything at the beginning, Bioptimus will start with different modalities and connect them, he says. Its target customers will be pharmaceutical and biotech companies. It will use a software-as-a-service licensing model. Owkin will be among the first customers, as Bioptimus’ foundational models are expected to provide better diagnostics.

At the same time, Owkin is expected to open doors for Bioptimus to access biomedical data from hospitals and collaborate on the analysis of spatially resolved transcriptomics data from the MOSAIC atlas. Biochemists are excited about applying LLMs not just to analyze images of cancerous tumors but also to understand which proteins and genes are expressed, furthering understanding of how the body’s immune system works, says Vert. “It’s a unique opportunity to describe tumors in a micro-environment.”

What will success look like for Bioptimus? In the short to medium term, Vert says “if we can get companies and institutes to take advantage of our LLMs and improve diagnostics of cancer by 2% to 5% I would consider that to be success.” Longer term, he says, as artificial intelligence and large language models unlock biology’s secrets, the impact on overall human health and wellness could be game-changing.

Monday, July 28, 2025

An Israeli Lab Tests A Unique Way Of Collaborating and Scaling AI

A new lab formed by four pharmaceutical companies, a German biomedical research institute, a venture capital firm, and Amazon Web Services is creating a radically different model for drug discovery that could become a blueprint for how companies collaborate, scale AI and work with startups in future.

AION Labs, which was announced October 13, is a first-of-its-kind innovation lab spearheading the adoption of AI technologies and computational science to solve therapeutic challenges. The lab, which is based in Rehovot, Israel, was created with the support of the Israeli government and will combine the pharmaceutical expertise of AstraZeneca, Merck, Pfizer, and Teva Pharmaceutical, with the R&D engine of Germany’s BioMed X, the technology knowhow of AWS and the investment experience of the Israel Biotech Fund.

The launch of this consortium follows the winning of a government tender from the Israel Innovation Authority. The government has identified life sciences as a vital area for growth potential and investment.

Instead of following the standard industry practice of waiting for a startup to prove a concept, then partnering for late-stage development and distribution, the owners of AION Labs will crowdsource the brightest scientists and technologists globally. The recruited multidisciplinary teams will create startups from scratch. The startups will focus on applying artificial intelligence and computational biology to specific challenges in drug discovery and development defined by industry.

The hand-picked startup founders will receive four years of runway funding and get access to both a wet lab, where biomedical research will be performed, and a computational lab environment focused on the development of new algorithms and computational methods, with the aim of accelerating the discovery and development of potential new therapies. Both will be under the guidance of top senior researchers and experts from AION Labs’ partners.

AION Lab’s shareholders can all submit challenges for the startups to work on. The investment committee votes on which ones will be tackled. The first challenge – which was submitted by all four pharma companies – is to develop a platform that can harness the power of AI to automatically predict the best antibody drugs, circumventing years of research and billions of dollars in investment.

Therapeutic antibodies are well established life-saving drugs. Discovery of existing therapeutic antibodies relies on immunization or in-vitro selection from large, pre-defined libraries with limited sequence space coverage. Selecting a drug candidate from billions of potential antibody sequences can take years, is expensive and, in many cases, fails to identify functional antibodies. Recent advances in protein structure prediction, AI algorithms, and increased availability of experimentally determined antigen-antibody structures present a unique opportunity for AI-driven antibody discovery.

AION Labs is inviting computational biologists and biomedical scientists at academic and industry research labs worldwide to propose the development of a next-generation general computational platform for the design of high-affinity and bio-physically well-behaved antibody binders directed towards epitopes of choice, starting from an antigen structure or antigen sequence as an input. The AION Labs pharma partners involved in this project will provide data for model training and their expertise in setting specifications and evaluating the outcome.

“It is a great opportunity for biomedical scientists from around the globe to learn how to become successful biotech entrepreneurs and create companies,” says Christian Tidona, Founder and Managing Director of the BioMed X Institute.

AION Lab’s owners can invest in its startups, but they will not be given any special rights to the technology. The intellectual property will be wholly owned by the startups.

“The main difference with other models is they are all based on existing startups and existing licensing agreements. In this case we are choosing the most brilliant scientists and are seeding something that does not exist,” says Tidona.

A Groundbreaking Arrangement

The set-up is groundbreaking in multiple ways. It is the first time that an incubator has specifically been formed around AI for drug discovery and development.

Instead of the typical “area of mutual interest” agreement, the four pharmaceutical companies have created a company together, an unprecedented move. AION Lab’s equity partners all had to “put skin in the game” by taking equity in the lab and making financial, time and mentoring commitments.

BioMed X, which has developed an “R&D engine” approach to connecting academia to business, will apply its process to search for tech entrepreneurs and scientists globally and vet them.

AWS is not just contributing cloud expertise. It provides input on what challenges are tackled by the new startups. AION Labs sought a partner that knows the specifics of biological data, with expertise in computing and storage. AWS works with all big pharmaceutical companies, as well as the entire life sciences sector, including startups and research. “They have real insights into the industry globally,” notes AION Labs CEO Mati Gill.

The Israel Biotech Fund will ensure that the companies created are viable businesses with staying power, helping the Israeli government fix a hole in its market. While Israel has a thriving tech sector it has – until now – not succeeded in building a life sciences ecosystem.

Trust Building

“To make this work in this format took a year and a half of trust building and hundreds of hours of discussion,” says Gill, a former senior executive at Teva Pharmaceutical. It was all done over video, due to COVID-19. Not being able to meet in person helped speed up the process, since syncing schedules for international travel was not necessary, he says. “The first time I met some of the partners in person was after we submitted the tender bid.”

Now that the initial groundwork is out of the way, Jim Weatherall, AstraZeneca’s U.K.-based Vice President of Data Science & Artificial Intelligence, in Research and Development and the company’s board representative at AION Labs, says he is eager for the start of in-person collaboration.

AION Labs will open in January, in time for the first portfolio company’s launch.

“I am looking forward to the closeness of interaction, at a real entity, with very, very smart people working there, to the brainstorming, bootcamps and different events that will be held there and all of the real hands-on exercises at the wet lab on site,” says Weatherall.

Forming a company was a way to ensure commitment from all of the companies involved, he says. “Taking an equity stake in the lab makes it a real entity, a meaningful business partnership that is stronger than an affiliation.”

Creating startups from scratch and mentoring them during the evolution of their businesses is also a unique opportunity for pharma companies, says Weatherall. “A lot of startups are doing great work but once they get past a certain stage it is so difficult to get them aligned to big companies and they miss out because it is too late for them to change course.” To avoid that, AION Labs will bake in what big pharma needs from the start.

An R&D Engine

That is where BioMed X comes in. An independent biomedical research institute based on the campus of Heidelberg University, BioMed X has a strong track record of seeding biomedical innovation at the interface between academic research and the pharmaceutical industry. Its innovation model, based on global crowdsourcing and local incubation of the brightest research talents and ideas, will serve as the R&D engine to propel AION Lab’s venture creation model.

BioMed X publishes specific challenges from industry on its crowdsourcing platform and typically gets anywhere from 100 to 300 responses from up to 80 countries. Fifteen are selected and the scientists are invited to a five-day boot camp at the Heidelberg center. The applicants are divided into five competing groups. The groups pitch proposals on the last day to senior management of a pharmaceutical company. They are vetted not just on their ideas and the commercial viability of their projects but also on an individual basis to see how “coachable” they are, says Tidona. The winning applicants are offered help with everything from visas to housing and schooling for their children. At the end of five years the temporary research groups are disbanded and BioMed X helps them find new jobs.

The model is being tweaked for AION Labs so that it can be applied to startups rather than research groups. “We are starting with a big challenge from pharma, crowdsourcing the brightest talent, move them locally, and aim to turn them into successful entrepreneurs,” he says.

“Israel is a hungry place, and it has the right ingredients in the tech sector and know-how in creating startups,” says Tidona. “But everyone has understood that in the biotech field in order to have something that can be competitive globally there needs to be critical mass. That is why we have decided to recruit and physically move top young scientists to Israel.”

Building A Life Sciences Ecosystem in Israel

Israel has one global pharmaceutical company – Teva – but very few scaled-up life science startups. Though it regularly trains brilliant scientists in its universities, the country has suffered from a brain drain in the sector. The goal is to do something like what Israel has done with the car industry. Although it has no car manufacturers of its own, it has created a thriving ecosystem of startups that serve the auto industry, attracting big automakers to set up innovation centers in the country. For this reason, AION Labs does not want to replicate the usual way of creating a startup: build it and then sell after three years, with a goal of making as much money as possible, says Gill. “We want to build companies that will grow over time, become mid-sized and reverse the brain drain.” While the search for scientists needed to populate AION Lab startups will be global, there will be a special emphasis on targeting communities of Israelis living abroad, he says.

“This is an example of a government doing something right,” says Gill. “It identified areas that interest industry and served as a catalyst for this type of group to come together. This is a long-term commitment. It is Israel saying that as a country that this is a national priority.”

AstraZeneca’s Weatherall says the government’s initiative was part of the draw for his company. “A major driver was the pro-active stance taken by the Israeli government,” he says. The Israel Innovation Authority contributed the lion’s share of the funding and “has helped set an important precedent in terms of priorities and stimulating innovation in this area and helping get targets aligned.”

A Blueprint For Future Collaborations



While it is early days, AION Lab’s unique model could serve as a blueprint for future collaborations between companies and across industries, says Weatherall. “The fact that companies that normally compete can come together in a pre-competitive space is a model I would like to see more of in future,” he says. “Each individual company clearly needs to have their IP and commercial business propositions but at the same time to be able to routinely carve out areas that could be solved in common. I think there is something there, a future world where there is much more cooperation within and across sectors, where appropriate.”

Visit: https://innovatorawards.org/

Sunday, July 27, 2025

Open Science In The Age Of AI

Getting their research peer reviewed and published in a respected scientific journal has long been the gold standard for scientists. But too often, this process fails to truly advance science because no one else can access or interrogate the underlying data and build on it, argues neuroscientist Sean Hill.

“We need to use this data to build the next iteration of science,” he says. “The fact that we can’t is a major barrier. Every year, billions of dollars in research value are lost because scientific data is difficult to find, access, and reuse. In fact, most scientific data disappears after only a few years. That drives me crazy. We have to change this and find a scalable way to solve the problem.”

To address this, Hill co-founded Senscience, the AI-driven initiative behind Frontiers’ new FAIR² Data Management service, launched on March 3. Its mission is to make research data AI-ready, aligned with Responsible AI principles, and structured for deep scientific reuse. The goal is to enable open-source science and ultimately help the pharmaceutical and other industries find and leverage AI-ready research data, he says.

The profound impact of AI on the pursuit of science is already evident. The 2024 Nobel Prizes in both physics and chemistry were awarded to pioneers of AI-driven research, underscoring how AI is no longer a peripheral tool, but a central engine of discovery, notes a recent report by the Tony Blair Institute for Global Change. The report notes that advanced AI models are driving groundbreaking discoveries that push the boundaries of scientific knowledge in specific domains. From AlphaFold’s revolutionary breakthrough in protein-structure prediction to materials discovery, toxicity prediction in drug discovery and predictive modelling in climate science, these domain-specific innovations are redefining what is possible and accelerating the pace at which society’s most pressing challenges can be addressed.

AI is essential for addressing critical challenges across health, the climate and security, but we can only leverage these breakthroughs if the data is AI-ready, says Hill.

Researchers waste valuable time cleaning data instead of making discoveries and rarely receive credit for the data they generate, says Hill. Meanwhile, funders are increasingly demanding that researchers publish their data, but they lack the tools to comply. Without scalable solutions, vast pools of knowledge remain locked away, stalling scientific progress.

Increasing The Quality Of Science

Senscience is an AI venture of Switzerland-based scientific publisher Frontiers, which was founded by neuroscientists Henry and Kamila Markram with the stated goal of accelerating collaboration and increasing the quality of science across all academia through open science.

Henry Markram, a leading figure in brain simulation, is the founder of the Blue Brain Project, which created detailed digital replicas of the brain, and founder of the Human Brain Project, a major EU initiative to advance understanding of the human brain. With over 450 publications and approximately 55,000 citations, his work significantly influences the fields of brain architecture and how the brain learns. Markram also established the Open Brain Institute, a not-for-profit foundation, to democratize access to brain simulation through virtual laboratories, making the tools and data available to allow researchers worldwide to simulate the brain. He is now focused on developing artificial general intelligence in inait, a company he formed to focus on teaching digital brains to acquire skills, continuing his mission to unlock the full potential of the brain.

Hill worked on the Blue Brain Project with Henry Markram. During the project they constantly struggled with how to organize petabytes of neuroscience data. “The challenge was how to organize diverse scientific data so we could receive it and combine it and use it for machine learning pipelines to build a brain,” says Hill, a serial entrepreneur. “It’s not like sharing text or numbers,” he says. “Scientific data is far more complicated because all the details matter.” For example, during the Blue Brain Project different groups of scientists would trace neurons using different methods. To image the brain one group would inject a dye and trace the neurons manually and another group would reconstruct neurons using totally different techniques. “It was the same type of neuron, the same species, and the same binary format and yet if you assume they are the same you would fail to build an accurate model of a neuron,” says Hill. “It is the subtleties that really matter to ensure valid scientific insights.”

After nearly a decade of trying different database solutions and failing, Hill led the development of a platform called Blue Brain Nexus, a flexible knowledge graph data structure that could handle distributed data and capture all the details of each piece of scientific data. At the time knowledge graphs were not widely used. Today they are often used to integrate heterogeneous data and knowledge (i.e. data models such as ontologies and schemas) coming from different sources and often in different formats (structured and unstructured). The latest iteration of Blue Brain Nexus now forms the backbone of Senscience.

FAIR² Data Sharing

For years the FAIR principles (Findable, Accessible, Interoperable, Reusable) have provided a foundation for research data sharing. However, as machine learning and AI become increasingly important tools in scientific research, data must be structured for both humans and machines.

Senscience says FAIR² Data Management goes beyond the FAIR principles by providing an AI-powered solution that transforms research data into a structured, machine-actionable resource, ensuring data is richly documented and linked to provenance, methodology, and a detailed data dictionary, creating a context-rich representation of each dataset. It leverages an AI data steward to automate data organization, improve usability, and assist with governance.

Senscience’s open specification, FAIR² (see fair2.ai), is compatible with MLCommons Croissant, a high-level format for machine learning datasets that combines metadata, resource file descriptions, data structure, and default ML semantics into a single file. It also integrates with TensorFlow, JAX, and PyTorch, enabling AI-driven analysis and easy sharing on Kaggle and Hugging Face, amplifying its impact across disciplines, says Hill.
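To make “machine-actionable” concrete, a Croissant-described dataset can, in principle, be loaded in a few lines of Python. The sketch below assumes the open-source mlcroissant reference library; the dataset URL and record-set name are hypothetical placeholders, not Senscience or Frontiers artifacts.

```python
# pip install mlcroissant  (reference implementation of the Croissant format)
import mlcroissant as mlc

# Hypothetical URL of a Croissant (JSON-LD) metadata file describing a dataset
CROISSANT_URL = "https://example.org/datasets/marine-biodiversity/croissant.json"

dataset = mlc.Dataset(jsonld=CROISSANT_URL)

# Iterate over one of the record sets declared in the metadata;
# "observations" is a placeholder name used for illustration only.
for i, record in enumerate(dataset.records(record_set="observations")):
    print(record)       # each record is a dict keyed by the declared fields
    if i >= 4:
        break
```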

Researchers benefit from an AI-assisted workflow that streamlines data preparation and sharing, turning their datasets into a FAIR² Data Package, an interactive exploration portal, and a peer-reviewed FAIR² data article in a Frontiers journal—increasing visibility, recognition, and citations.

“If someone has already finished a scientific study, we can make the data available in usable form by dragging and dropping their manuscripts, spreadsheets, and additional files into our platform,” says Hill. “Our AI data steward cleans the data, visualizes it in a data portal and generates a data article draft that they edit and approve before it is submitted to Frontiers.”

This gives researchers not only the classical value of a peer reviewed publication but a way to create an interactive data portal, allowing others to interact with that data and use the AI chat to ask questions of the data, says Hill. In addition, an AI-generated podcast, and integration with Python and Jupyter Notebooks, allow researchers to interact with and analyze their data in completely new ways. “This creates huge opportunities for collaboration and scientific advancement,” he says.

The first peer-reviewed FAIR² Data Article and FAIR² Data Portal, published March 3, showcase what Senscience offers. Led by Dr. Ángel Borja of AZTI Foundation (Spain), this dataset—spanning nearly three decades of marine biodiversity monitoring in the Basque Country, managed by the Basque Water Agency (URA)—has been curated using FAIR², transforming long-term environmental data in Spanish into an AI-ready English language resource.

“AI-assisted curation is a game changer,” Borja said in a statement. “AI-assisted metadata creation makes ocean sustainability research more accessible, providing scientists, managers, and decision-makers with faster, more accurate insights.”

For now, Senscience operates as an AI venture within Frontiers. But the aim is to become an independent company. “Science thrives when data is open and accessible,” says Hill. “We want FAIR² to be the standard across publishers, researchers, and industries—not confined to a single organization.”

“Other sectors are also eager for a solution like this,” he says. “For example, pharma companies want to be able to organize protocols and the data produced by those protocols so they can find, assess and use data in a meaningful way.”

During the current pilot period Senscience is waiving fees. Eventually it plans to charge for its services.



“Everyone sees the value of making research data AI-accessible and computable,” says Hill. “The ability to structure and reuse data at scale will transform scientific discovery.”

Friday, July 25, 2025

AI-Decision Making: State Of Play And What’s Next

AI alone was not up to the job, so Finland’s largest airline instead implemented a hybrid system that uses AI to make predictions about air traffic and allows humans-in-the-loop to make better decisions, explains Tero Ojanpera, CEO of Silo.ai, a Finnish AI lab that specializes in bringing cutting-edge AI talent to corporations around the world.

Getting the Finnair project to that point was not a question of plug and play. It required a complex multi-step modeling process to help the organization become more AI literate.

Finnair’s experience neatly illustrates the current state of play. AI is not fully ready to make the kinds of decisions corporates expect it to make, and even if it were, corporate teams and networks are not fully ready to implement AI and reap its full benefits.

The state of AI-decision making was the focus of an October 13 roundtable discussion moderated by The Innovator in partnership with DataSeries, a global network of data leaders led by venture capital firm OpenOcean. The discussion centered on what is holding back business from using AI, how corporates should approach AI projects in order to better leverage the technology and the methods being tested to improve AI’s decision-making powers. Some of these methods – such as the merger of rules-based and machine learning (ML) techniques, knowledge graphs and multi-modal neural sequencing – promise not just to help automate existing functions but to aid companies to strengthen and even re-imagine their businesses.

Stumbling Blocks

There are many variables at play for a successful AI product to go from infancy to launch.

“Corporates need to make sure their whole infrastructure is ready before trying to build something more intelligent on top,” says roundtable participant Ekaterina Almasque, a general partner at OpenOcean. “Unfortunately, in many enterprises there is still the question of how to deal with the data.”

She cited the example of the automotive industry, which is searching for new sources of revenue, such as using AI to leverage data collected from connected cars. The automakers don’t even have data centers that can collect and process the data in a way that it can be used, she noted, creating an opportunity for startups to help them develop ways to close that gap.

Other corporates have been collecting lots of data for years. But when they start on AI projects they sometimes find that a few columns of crucial information are missing. “It doesn’t matter how big the data is or how long it has been collected, it is not necessarily the perfect data,” says roundtable participant Reza Khorshidi, one of the founders of, and currently a research leader at, the Deep Medicine Program at the University of Oxford’s Martin School, and also the Chief Scientist for global insurance company AIG.

Corporates don’t only need to ensure that they have the right data – and enough of it – regardless of whether it comes from different parts of the organization or from a variety of outside sources. The data also needs to be structured properly, which is no easy feat.

“If there is one thing that we could do to save hundreds of billions of dollars every year it is to start with standardization of data schema,” says roundtable participant Vishal Chatrath, CEO of Secondmind, a U.K. startup that helps corporates identify and address how to build a decision-making framework that combines the best of AI with human domain knowledge.

A lack of data compatibility makes it impossible for businesses to fully leverage AI insights and for supply chains to operate efficiently. Chatrath used the example of an online shop in the UK that sells branded sport t-shirts. There is no way for the shop to alert Nike or Adidas that the green t-shirt is out of stock and proactively order green dye, buttons and thread, because there is no universally recognized way to call a green t-shirt a green t-shirt. This type of problem was avoided when the mobile Internet was created because a lot of effort was put into standardization, resulting in the Global System for Mobile Communications (GSM). “We need the equivalent of a GSM for AI,” says Chatrath. “Someone has to take this bull by the horns and say ‘dammit you have to standardize data schema’.”
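Chatrath’s point can be made concrete with a toy example. The sketch below shows what a minimal shared product schema might look like if retailer and supplier exchanged the same structure; the field names and codes are invented for illustration and do not reference any real standard.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class ProductRecord:
    """A toy shared schema: if retailer and supplier both emit this structure,
    a stock-out can be matched to the exact upstream materials automatically."""
    gtin: str            # global trade item number identifying the exact SKU
    category: str        # e.g. "apparel.t_shirt" in a shared category vocabulary
    color_code: str      # a shared color vocabulary rather than free text
    size: str
    stock_level: int


record = ProductRecord(
    gtin="05012345678900",             # illustrative identifier
    category="apparel.t_shirt",
    color_code="PANTONE 2258 C",       # "green" means the same thing to every party
    size="M",
    stock_level=0,                     # zero stock can trigger a reorder upstream
)

print(json.dumps(asdict(record), indent=2))  # the payload both sides would exchange
```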

To get the best out of AI, corporations should count on spending about 80% of their time putting into place the digital foundations, says roundtable participant Simon Greenman, co-founder and a partner at Best Practice AI, a U.K.-based management consultancy that specializes in helping companies create competitive advantage with AI. “AI is actually the easy bit,” says Greenman. “The hard bit is making sure the organization really has the platforms and the technology in place to be able to do AI. These things are slowing down the adoption curve.”

Clearly the best way to create systems that are more intelligent is to find the right data or “generate the data you need in a synthetic way to build your models and train them,” says roundtable participant Jose Luiz Florez, an AI expert and founder of a number of startups, including Dive.ai. “But if your models are not good enough then you need to put humans-in-the-loop.”

Getting Ready To Launch

That’s exactly what Finnair ended up doing. The expectation was that AI could actually decide what actions to take if congestion was increasing but AI is currently not able to handle such multi-dimensional optimization problems, says roundtable participant Ojanpera, who previously worked as Nokia’s Chief Technology Officer. Silo, his current company, helped Finnair develop a model to more accurately predict 36 hours in advance how many planes would be delayed based on various factors, such as bad weather. That was just a start because it is only one piece in a complicated problem, he says. When companies deploy such solutions they need to factor in a way to ensure that the human decision makers understand how the model works so that they actually believe what the model says. Then, and only then, can they start to look at the next problem: the possibility of automating some of the decisions that follow when AI starts to understand the context and the situation better. “It is important to break down the problem into pieces and start by selecting the one that will produce the best output in the short term,” Ojanpera says. “That’s how organizations become more AI literate. They start to understand what AI can and can’t do, and I think that’s a good starting point.”

In addition to ensuring the necessary data is ready and the technological underpinnings are up to the job, corporates need to invest in AI talent, says AIG’s Khorshidi. “Don’t think by some magic trick that your company is going to go from pre-AI to AI-first without it,” he says. Once the team is assembled, a system needs to be put in place to properly test the AI, and some sort of domain expertise is needed. There are a number of options for adding this expertise, including relying on employees’ insights.

“Usually we talk about data in the traditional sense but it is important to remember that many industries and businesses have been run by experts,” says Best Practice AI’s Greenman. “These human experts are a form of data – they have cognitive data that no company has managed to capture yet with tools. So, if companies are starting to collect more data they definitely should start looking more at their employees and use that cognitive data to help machine learning models to get better and better over time.”

Keeping Humans-In-The-Loop

A number of the roundtable participants argue that – for the time being – hybrid systems are the best if not the only real option to obtain better AI decision-making. “There is a need for humans in-the-loop and I don’t see that going away anytime soon,” says roundtable participant Chatrath, Secondmind’s CEO.

The Cambridge-based startup has developed what it calls the Secondmind Decision Engine, a machine learning-powered software-as-a-service platform designed to aid decision-making across industries, including, it says, “those in which visibility is low, data is sparse, and uncertainty is high.”

Currently in limited release, Secondmind’s Decision Engine is already being used by Kuehne+Nagel, a sea logistics provider that coordinates the movement of nearly 13,000 shipping containers per day, and Brambles, an Australian company that specializes in the pooling of unit-load equipment, pallets, crates and containers. The companies are using the startup’s technology to make demand forecasting, planning and asset allocation decisions within their global supply chain operations.

Secondmind says its technology can offer – on average – a 35% improvement in efficiency by using a combination of Gaussian Process-based probabilistic modelling and decision-making machine learning libraries. The technology suite is adept at quantifying uncertainty, identifying operational trade-offs and explaining outcomes using sparse and low volume data, capabilities that meet business decision-making demands where other machine learning techniques like Deep Learning struggle, says Chatrath. Still, he says, humans with industry knowledge are crucial to success.
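Gaussian-process models are attractive for exactly the reason Chatrath describes: even with sparse data they return both a prediction and an explicit uncertainty estimate. The snippet below is a generic scikit-learn illustration on synthetic demand data, not Secondmind’s proprietary engine.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic weekly demand observations (sparse, noisy data)
weeks = np.arange(0, 20, 2).reshape(-1, 1)
demand = 100 + 10 * np.sin(weeks.ravel() / 3.0) + np.random.normal(0, 2, weeks.shape[0])

# RBF kernel captures smooth trends; WhiteKernel models observation noise
gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(), normalize_y=True)
gp.fit(weeks, demand)

# Predict future demand with an explicit uncertainty band
future = np.arange(20, 26).reshape(-1, 1)
mean, std = gp.predict(future, return_std=True)
for week, m, s in zip(future.ravel(), mean, std):
    print(f"week {week}: expected demand {m:.1f} ± {1.96 * s:.1f}")
```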

That’s a viewpoint shared by roundtable participant Jinsook Han, Accenture’s Managing Director and Global Lead of Growth and Strategy, Applied Intelligence. “Having humans in the loop is extremely important regardless of whether you are thinking about AI as a means of augmenting, accelerating or assisting, and I would add a fourth ‘a’ for avoid [looking at the technology as a means of reducing risk],” says Han. When chatbots were first being introduced clients came to Accenture and asked whether they could get rid of all 1,000 employees in their call centers, she says. “We told them that is not the way you want to go. Let’s focus on what is important for the customer experience. What if the call center rep had enough information from the AI when they picked up the phone to be able to resolve the problem on the spot and give the customer a better experience? There are times when AI can handle an issue and other times when it is preferable to have a human-in-the-loop. My mantra is that we should let humans do what humans do best and let machines do what machines do best. Clients are beginning to understand that this is a journey.”

New Approaches

While corporates try to get their own houses in order, data scientists are working on a number of ways to improve AI decision-making. Rules-based approaches, which use structured data, have been used to make intelligent business decisions since the 1980s. When you are dealing with multi-modal, messy data and more complex problems it is generally agreed that machine learning is the better approach. But it is far from perfect. If new regulations come into play or the rules of the past no longer apply, ML, which has been trained on historical data, has no clue what to do – or even that the context has changed. “There is a lot of work being done now on how to correct these problems without losing the value derived from ML,” says Harley Davis, head of IBM France’s R&D Lab.

Merging Rules-Based Systems And Machine Learning

One way to try and solve for this is to merge rules-based systems with ML. IBM launched a new product in 2020 called Automation Decision Services that does just that. Combining the two approaches leads to better decision making, says Davis.

IBM’s Automation Decision Services product and its predecessor, IBM Operational Decision Manager, are now being employed in a variety of sectors, including financial services and aviation. For example, all of PayPal’s transactions now use a combination of ML fraud detection and explicit business rules to identify some very specific security concerns, says Davis. Mastercard is doing something similar, combining ML analytics with business rules developed with 800 member banks to detect fraud. In the U.S., Fannie Mae and Freddie Mac (federally backed home mortgage companies created by the U.S. Congress) are using IBM’s technology, which layers ML on top of a symbolic representation of rules, to process over two-thirds of US mortgage applications and make better decisions, he says. And airlines, such as Delta and United, are using it to figure out the best way to create upgrade offers. They are also using logic-based programming known as mathematical optimization to deal with complex logistical issues such as rescheduling. This approach – which has been around for decades in operations research – is now also being combined with AI-based predictions in an IBM product called IBM Decision Optimization, says Davis, though he concedes that there is still often a need for humans-in-the-loop.
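The pattern Davis describes, a learned risk score with explicit, human-readable rules layered on top, can be sketched in a few lines. This is a generic illustration of the approach, not IBM’s Automation Decision Services; the thresholds and rules are invented.

```python
def ml_fraud_score(transaction: dict) -> float:
    """Stand-in for a trained model; returns a probability-like risk score.
    In a real system this would be a call to a deployed ML model."""
    score = 0.0
    if transaction["amount"] > 5_000:
        score += 0.4
    if transaction["country"] != transaction["card_home_country"]:
        score += 0.3
    return min(score, 1.0)


def decide(transaction: dict) -> str:
    """Explicit business rules layered on top of the ML score."""
    score = ml_fraud_score(transaction)

    # Rule 1: hard block, regardless of score (placeholder restricted country code)
    if transaction["country"] in {"XX"}:
        return "BLOCK: restricted destination"
    # Rule 2: very small payments are approved to keep friction low
    if transaction["amount"] < 10:
        return "APPROVE: low-value exemption"
    # Rule 3: defer to the model elsewhere, with a review band in the middle
    if score >= 0.7:
        return "BLOCK: high fraud risk"
    if score >= 0.4:
        return "REVIEW: route to human analyst"
    return "APPROVE"


print(decide({"amount": 6_200, "country": "BR", "card_home_country": "FR"}))
```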

Knowledge Graphs

Knowledge graphs provide another way to start emulating the implicit functions of the human mind and combine it with the computing power of machines to represent meaning by putting data into context, similar to the way humans connect pieces of information to reach a conclusion. They are being used in Alexa and Siri voice assistant devices and in Google searches and they are starting to be applied in different industries, such as pharmaceuticals, chemicals R&D and oil and gas, using an approach IBM calls cognitive discovery.
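A knowledge graph stores facts as subject-predicate-object triples that can then be queried and traversed. The example below uses the open-source rdflib library with an invented namespace and toy facts; it illustrates the structure, not IBM’s cognitive discovery pipeline.

```python
# pip install rdflib
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/chem/")   # hypothetical namespace for illustration
g = Graph()

# Facts that, in a real pipeline, would be extracted from papers, patents or lab reports
g.add((EX.PolymerA, EX.hasIngredient, EX.MonomerX))
g.add((EX.PolymerA, EX.processedBy, EX.Extrusion))
g.add((EX.PolymerA, EX.hasProperty, Literal("high tensile strength")))
g.add((EX.MonomerX, EX.describedIn, EX.Patent_12345))

# Query: which properties are linked to materials that contain MonomerX?
query = """
SELECT ?material ?property WHERE {
    ?material <http://example.org/chem/hasIngredient> <http://example.org/chem/MonomerX> .
    ?material <http://example.org/chem/hasProperty> ?property .
}
"""
for material, prop in g.query(query):
    print(material, "->", prop)
```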

IBM Research says it has developed a scalable pipeline of technologies that can be leveraged to extract information from highly unstructured sources such as documents or scanned images. This is typically the form in which companies have stored their knowledge and experience. For example, when scientists publish their papers and patents, the content can be hard to process digitally. Cognitive discovery aims to automate the extraction of knowledge from these ‘dormant’ sources, combine it with other structured and semi-structured information and make it readily available via a user-friendly interface on top of a knowledge graph. This rich body of knowledge preserves corporate wisdom and experience and makes it readily available for research and development in specific industries, says Stefan Mueck, IBM Germany’s Chief Technology Officer responsible for digital transformation.

This approach constitutes a new, accelerated and better way of doing R&D, he says. By leveraging the data pipeline for both internal and external sources (e.g. patents), researchers can start building a hypothesis based on a much bigger and broader input than any human could possibly read or hold in mind. To help researchers even further, AI can be used to infer knowledge from what is already in the graph.

IBM cites the following example, based on real-world experience with a number of chemicals companies as well as in its own material science research: Say a chemicals company wants to create a new and innovative material or substance with certain properties. Given all the internal knowledge plus all the external information that is free or licensed for use, the machine would extract what it finds about ingredients, formulations and processing as well as the related properties of the end product. IBM Research has developed deep learning models that support the researcher with predictions: given a formulation and a process, what would be the properties of the new substance? Or, given the properties, what would be the formulation? Similarly, if it is about base chemical reactions there are models to predict synthesis or retro-synthesis. IBM has made the latter open source, helping companies build extended capabilities on top of it. The decision intelligence can be driven one step further by integrating predictions with lab automation, which is one of the latest innovations. This cognitive discovery capability has also been made available to the public to accelerate research into treatments and vaccines for Covid-19.

Other industries are also using knowledge graphs to improve AI decision-making. The use of IBM’s cognitive discovery is under development with “an innovative player in the oil & gas sector,” says Mueck. It will be revealed at an industry conference called EAGE Digital in November, he says.

IBM believes knowledge graphs can offer business another big advantage. In some – but not all – cases, adding knowledge graphs to a combination of ML and rules-based systems can help companies explain why an AI made a particular decision, helping resolve serious social, legal and ethical concerns. “Machine learning represents a big black box. We don’t know why it gets the results it does which introduces multiple social, ethical and legal problems,” says Davis. “By looking at the knowledge graph and the rules you can give an explanation for a decision like why a loan was turned down: it was because your credit history was bad and your revenue was insufficient and so forth,” he says.

That said, there are still a number of tough problems to solve before more automated decision-making systems can be more widely and safely used by business, says Davis. He cites the well-known case of Amazon having to redesign an AI system it developed for recruiting purposes after finding that it was biased against women candidates. Amazon removed names and gender from the applications, but other indicators such as hobbies or schools still led the AI to give better ratings to male candidates, because previous human decisions favored males and those biases were correlated with other data in the resumes. There are tools that can find those biases, such as IBM OpenScale, but you have to know what to look for and run analysis on the ML training data, and it is not an easy problem to solve.

Decision Intelligence

New types of approaches – including the social sciences – may need to be introduced into ML models, says Davis. That’s where decision intelligence comes in. The term – which made it into Gartner’s 2020 hype cycle – refers to an emerging engineering discipline that augments data science with theory from social science, decision theory and managerial science to try and provide a framework for best practices in organizational decision-making and process for applying machine learning at scale. Gartner has developed a Decision Intelligence Model to help business executives identify and accommodate uncertainty factors and evaluate the contributing decision-modeling techniques.

Transformer-based Sequence Models

While all of these approaches may help businesses move closer to AI-led decision-making, roundtable participant Khorshidi believes Transformer-based neural sequence models, which have shown tremendous advances in natural language processing, have the best opportunity for success. If these models can be tweaked to accommodate the multimodal nature of data, “it will have the ability to go beyond language, beyond health, beyond finance and pretty much cover every real-world data generating process,” he says, helping to transform business as we know it.

Khorshidi and his team at Oxford’s Deep Medicine Program have had success testing the application of Transformer-based models to sequence the multi-modal biomedical data found in electronic health records.

Electronic health records are sequences of mixed-type data such as diagnoses, medications, measurements, interventions and more that happen in irregular intervals and are routinely collected by health systems.
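A minimal sketch of the idea: each event in a patient record is embedded from its code (diagnosis, medication, measurement) plus a representation of when it happened, and the resulting sequence is fed to a Transformer encoder. This is a generic PyTorch illustration of the approach, not the Oxford team’s published model; vocabulary sizes and dimensions are arbitrary.

```python
import torch
import torch.nn as nn


class EHRTransformer(nn.Module):
    """Encode a sequence of mixed-type clinical events occurring at irregular times."""

    def __init__(self, n_codes: int = 10_000, d_model: int = 128, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.code_embed = nn.Embedding(n_codes, d_model)  # diagnoses, drugs, labs share one vocabulary
        self.time_proj = nn.Linear(1, d_model)            # continuous "days since first visit"
        encoder_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, n_layers)
        self.head = nn.Linear(d_model, 1)                 # e.g. risk of a future outcome

    def forward(self, codes: torch.Tensor, times: torch.Tensor) -> torch.Tensor:
        # codes: (batch, seq_len) integer event codes; times: (batch, seq_len) days as floats
        x = self.code_embed(codes) + self.time_proj(times.unsqueeze(-1))
        x = self.encoder(x)
        return torch.sigmoid(self.head(x[:, -1]))         # predict from the latest event's state


# Toy usage: 2 patients, 5 events each
model = EHRTransformer()
codes = torch.randint(0, 10_000, (2, 5))
times = torch.tensor([[0., 12., 40., 41., 300.], [0., 2., 2., 90., 95.]])
print(model(codes, times).shape)   # torch.Size([2, 1])
```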

If Khorshidi and his team’s initial positive results, which were published in Nature magazine last April, can be replicated at scale and across a range of data scenarios “this could mean a breakthrough in medicine and be the difference between pre-AI/AI-inside medicine and AI-first medicine,” he says. The breakthrough is tied to the ability to build complete electronic health records that include what health systems have been routinely collecting, as well as social, economic, environmental and lifestyle data that have been shown to be important (and predictive) for health outcomes. A sequence model’s ability to learn the key patterns and relationships/dependencies underlying such complex sequences will enable health systems’ ability to anticipate things before they happen, and intervene when needed. “This can enable better, cheaper, faster processes, and ultimately pave the way towards redesigning and re-imagining the system,” Khorshidi says. The same approach could also be applied to other sectors, such as finance and retail, he says.

“Dealing with sequential data that is mixed-type multimodal and happens in irregular intervals gives machines the ability to deal with any sort of data,” says Khorshidi. “In the real world it could be a customer’s data on Amazon, it could be a patient’s data in NHS or it could be a company’s data for asset management.”

“If not just medicine but other industry sectors want to move to a world of high-dimensional data in which the data is inputted in a messy way, there are not many solutions out there,” says Khorshidi. “You need to settle on some sort of feature engineering or settle for models that can deal with data as they arrive sequentially. And that’s why I think transformer-based architectures have got higher odds of success.”

Regardless of which new method businesses use to improve AI decision making they should change the way they think about AI, he says.

“The true north for AI is transformation opportunities and reimagination opportunities,” Khorshidi noted at the end of the roundtable. “Automation is the lowest hanging fruit. We should use AI to reimagine the power of the existing base of employees and strengthen businesses by doing things differently.”



Thursday, July 10, 2025

Rebooting Copyright For The Age Of AI

 

“Is This What We Want?” is the name of a silent album released by UK musicians, including Annie Lennox and Kate Bush, to protest UK government plans to allow AI companies to use copyright-protected work without permission. It is just one of the many ways that well-known figures from the publishing, music, film, TV, design and performing arts sectors are displaying their displeasure over proposed changes to the country’s copyright law.

Data is the lifeblood of artificial intelligence, and large language models – such as ChatGPT – are trained on vast amounts of publicly available data. They are reeling in content on the Internet produced by musicians, journalists, artists and authors, ballooning their own valuations while threatening the livelihoods of content creators.

Copyright lawsuits filed against GenAI companies abound, alleging that the way they operate amounts to theft. Examples include the New York Times v OpenAI lawsuit in the U.S. and the Getty Images v Stability AI case in the UK. The allegations in AI and copyright cases generally split into two parts: first, that the outputs of AI models constitute an illegal copy; second, that using copyrighted works in training data for AI (inputs) is a breach of the rights holder’s copyright.

Requiring developers to license all the material they use to train models would be very difficult due to the distributed nature of data and ownership, argues the Tony Blair Institute for Global Change (TBI). To provide legal clarity and accelerate AI development, some countries have already taken a lenient view on use of publicly available data for AI training, so if other countries take a restrictive stance, it will drive development elsewhere. And even if a workable solution for payment were found it would likely stifle competition since only large, well-funded AI companies could afford to pay.

“As technologies and societies evolve so must regulations,” Jakob Mökander, TBI’s Director of Science & Technology Policy said in an interview with The Innovator. “We need to find an approach that makes sense in the digital age. It is a question that all governments will face.”

On April 2 TBI published a report on rebooting copyright that says the current situation is unsustainable. It argues that the status quo harms all stakeholders, including creators, who are not properly remunerated for their labor; rights holders, who struggle to exercise control over how their works are used; AI developers, who face hurdles when it comes to training AI models; and society at large, which risks missing out on the benefits of AI diffusion and adoption. “Bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation and economic growth,” says the report.

The TBI report supports the position favored by the UK government: a text and data mining (TDM) exception for AI model training with the possibility for creators and rights holders to opt out. This would make it legal to train AI models on publicly available data for all purposes, while giving rights holders more control over how they communicate their preferences with respect to AI training, argues the report.

The report notes that it is important to separate the debates around AI outputs and AI training. AI outputs should not be allowed to reproduce original works without proper license and remuneration, says the report, but prohibiting AI models from training on publicly available data would be misguided and impractical. “The free flow of information has been a key principle of the open Web since its inception,” says the report. “To argue that commercial AI models cannot learn from open content on the Web would be close to arguing that knowledge workers cannot profit from insights they get when reading the same content.”

There are better ways of supporting the creative industries, says Mökander. “There needs to be increased funding for creators in the digital age. We want to have flourishing industries, but copyright law may not be the best way to do that,” he says.

The report suggests some alternative approaches to helping the creative industries, including the creation in the UK of a Centre for AI and Creative Industries, which would serve three functions: bringing together experts and representatives; acting as an engine to create new technologies and infrastructures to support growth in machine learning in the UK creative industries; and providing much-needed training and expertise across academia and industry. If more funding for the arts is needed, and governments need to raise it, one option to consider is taxing data connections on fixed lines and mobile devices, adding pennies per month to the Internet Service Provider (ISP) bills of households and businesses that benefit from using AI tools.

“Rather than fighting to uphold 20th-century regulations, rights holders and policymakers should focus on building a future where creativity is valued and respected alongside AI innovation,” says the report: “Copyright law provides insufficient clarity for creators, rights holders, developers and consumer groups, impeding innovation while failing to address creator concerns about consent and compensation. The question is not whether generative AI will transform creative industries (it already is) but how to make this transition equitable and beneficial for all stakeholders.”

The Trouble With Copyright

The truth is that no one is happy with the status quo, says Mökander.

Just ask American author Cory Doctorow. “For 40 years, the scope and duration of copyright have monotonically increased, the evidentiary burden for copyright claims has declined, and the statutory damages for copyright infringement have expanded,” he wrote in an online article. “Publishing and other creative industries generate more money than ever – and yet, despite all this copyright and all the money that sloshes around as a result of it, the share of the income from creative work that goes to creators has only declined. The decline continues. There is no bottom in sight.”

Doctorow uses the following analogy to drive home his point: “If the bullies at the school gate steal your kid’s lunch money every day, it doesn’t matter how much lunch money you give your kid, he’s not gonna get lunch.  But how much lunch money you give your kid does matter – to the bullies. (Creators) are the hungry schoolkids. The cartels that control access to our audiences are the bullies. The lunch money is copyright.”

Strengthening copyright law would do little to benefit creators and requiring developers to license the materials needed to train AI would  threaten the development of more innovative and inclusive AI models, as well as important uses of AI as a tool for expression and scientific research, argues the Electronic Frontier Foundation (EFF), which has published a series of articles on problems with copyright in the age of AI.

Requiring researchers to license fair uses of AI training data could make socially valuable research based on machine learning and even text and data mining  prohibitively complicated and expensive, if not impossible, argues the EFF. It notes that researchers have relied on fair use to conduct TDM research for a decade, leading to important advancements in science and other fields.

For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry, says the EFF. To develop a foundation model that can be used to build generative AI systems like ChatGPT and Stable Diffusion, developers need to train the model on billions or even trillions of works, often copied from the open Internet without permission from copyright holders. There’s no feasible way to identify all of the rights holders—let alone execute deals with each of them. Even if these deals were possible, licensing that much content at the prices developers are currently paying would be prohibitively expensive for most would-be competitors.

As the U.S. Federal Trade Commission recently explained, if a handful of companies control AI training data, “they may be able to leverage their control to dampen or distort competition in generative AI markets” and “wield outsized influence over a significant swath of economic activity.”

The Way Forward

The UK’s proposal for a TDM exception with opt-out – which essentially allows the scraping of publicly available information – would bring UK regulation broadly in line with the European Union’s.

But other jurisdictions, such as Singapore and Japan, have more liberal copyright laws pertaining to AI training, and China is speeding ahead. The current administration has indicated that the U.S. will not pursue strict AI regulations, but there is ongoing litigation in the U.S. around AI training. What constitutes fair use of copyrighted materials in the U.S. will be decided on a case-by-case basis.

The legal landscape surrounding IP data scraping is not only complex but also rapidly evolving, says a February OECD report on data scraping. What’s more, different actors in the data scraping ecosystem raise different types of legal issues. Some also use data scraping to support research and other endeavors, suggesting the need for policy tools tailored to different use cases, says the OECD report. The data scraping ecosystem encompasses research institutions and academia, AI data aggregators, and technology companies and platform operators. Research institutions and academia frequently employ data scraping to gather data for academic and scientific purposes. AI data aggregators make scraped data available to third parties, often without clear licensing terms or clear disclosure of data provenance, raising IP and other legal concerns. Technology companies and platform operators are both sources of scraped data and regular data scrapers themselves.

The OECD is promoting a global data scraping code of conduct, standard contract terms, standard technical tools and initiatives for building awareness that would chart a responsible path for data scraping in an internationally coordinated manner. “This would be particularly effective if it is developed with input from a broad and diverse set of stakeholders, including rights holders, researchers, AI developers, civil society, and policymakers,” says the OECD report.

TBI’s Mökander says he would welcome globally recognized codes of conduct. “AI training data are only useful if there are clear international standards,” he says. “If not, we risk a race to the bottom, pushing AI development to other jurisdictions with more lenient regulations. In fact, harmonized international standards should be top priority for policymakers seeking to build a flourishing ecosystem for the arts and AI.”

Tech tools will help ensure compliance, he says. For example, AI company Spawning has developed a Do Not Train registry that allows artists to tag works around the Internet as copies of their originals. Developers can then use “data-diligence” software from Spawning to check whether URLs have been opted out. Another tool cited in the TBI report is ProRata.ai, a new company that uses tech to enable generative artificial intelligence (GenAI) platforms to attribute and compensate content owners.

ProRata CEO Bill Gross has invented and patented technology that can reverse-engineer where an answer came from and what percentage comes from a particular source, so that owners can be paid for the use of their material on a per-use basis. ProRata pledges to share half the revenue from subscriptions and advertising with its licensing partners, help them track how their content is being used by AIs, and aggressively drive traffic to their websites.

When a user poses a query, ProRata’s algorithm compiles an answer from the best information available. At the top of the page there is an attribution bar which specifies where the answer came from. It might say, for example, that 30% of the answer came from The Atlantic, 50% from Fortune and 20% from The Guardian. The publications are immediately compensated according to their contribution to the answer, and a side panel displays the original articles and enables users to click through to the original source to learn more. Think of it as “attribution-as-a-service,” Gross said in an interview earlier this year with The Innovator. “Just as Nielsen measures how TV shows are watched to determine what advertisers should pay, we are moderating the output of the queries to determine how much GenAI providers should pay content providers.”
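As a concrete illustration of the arithmetic behind such an attribution bar, the short sketch below splits a revenue pool across sources in proportion to their attribution weights. The payout_per_source function is hypothetical; only the 30/50/20 example and the pledge to share half of revenue come from the text.

```python
# Illustrative arithmetic only: splitting a revenue pool across sources
# in proportion to the attribution percentages described above.
def payout_per_source(revenue: float, share_with_partners: float,
                      attribution: dict[str, float]) -> dict[str, float]:
    """revenue: total subscription/ad revenue tied to a set of answers.
    share_with_partners: fraction passed to licensing partners (the article
    cites a pledge of half). attribution: source -> weight, normalised here
    in case the raw weights do not sum exactly to 1."""
    total = sum(attribution.values())
    pool = revenue * share_with_partners
    return {src: round(pool * w / total, 2) for src, w in attribution.items()}

# The worked example from the text: 30% The Atlantic, 50% Fortune, 20% The Guardian.
print(payout_per_source(100.0, 0.5,
                        {"The Atlantic": 0.3, "Fortune": 0.5, "The Guardian": 0.2}))
# {'The Atlantic': 15.0, 'Fortune': 25.0, 'The Guardian': 10.0}
```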

In the future it should be technically simple to build AI agents that can track creators’ portfolios, maintain registries and initiate robot-to-robot interactions with other websites, asking them to remove content, says the TBI report. These agents are expected to simplify content attribution for AI companies, enabling them to effectively track online content origins and eliminating plausible deniability for developers who claim ignorance about opted-out work appearing in their systems.
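To illustrate what honouring an opt-out registry could look like inside a training-data pipeline, here is a hypothetical sketch. The registry format and the load_opt_out_registry and filter_training_urls helpers are assumptions for illustration only; they do not describe Spawning’s actual tools, the TBI proposal or any existing standard.

```python
# Hypothetical sketch: filtering candidate training URLs against an
# opt-out registry. Registry format and helper names are illustrative
# assumptions, not a real API or standard.
from urllib.parse import urlparse

def load_opt_out_registry(entries: list[str]) -> set[str]:
    """Normalise registry entries to bare domains (one opt-out per domain
    in this toy model; real schemes could be per-work or per-URL)."""
    return {urlparse(e).netloc or e for e in entries}

def filter_training_urls(candidates: list[str], registry: set[str]) -> list[str]:
    """Keep only URLs whose domain has not opted out of AI training."""
    return [u for u in candidates if urlparse(u).netloc not in registry]

registry = load_opt_out_registry([
    "https://example-artist.com",
    "opted-out-label.example",
])
candidates = [
    "https://example-artist.com/portfolio/piece-1",
    "https://open-archive.example/public-domain/text.txt",
]
print(filter_training_urls(candidates, registry))
# ['https://open-archive.example/public-domain/text.txt']
```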

If the right policies are put in place the AI revolution “can be the standout engine for artistic and cultural renewal of our era,” says the TBI report. It could also help countries like the UK lead in the AI sector.

Time To Act

But time is not on anyone’s side, says the TBI report.  Large, effective foundation models already exist and are publicly accessible. They will continue to grow in capability and will be used by an increasing number of people. They will be developed around the world, in jurisdictions with very relaxed copyright laws, and used as tools to the extent that they will inevitably make some jobs redundant. At the same time, countries with restrictive laws will push developers to move to countries with less stringent measures, says the TBI report. “The longer governments take to tackle the issue of AI and copyright, the more they will inhibit innovation and entrench large AI developers in the global competition for AI leadership.”

Visit: https://innovatorawards.org/

Monday, July 7, 2025

Ten hard-won lessons from a decade of mobility innovation



After a decade of supporting mobility innovation, the Advanced Propulsion Centre UK (APC) and Zenzic have worked with hundreds of start-ups and SMEs, helping them take their innovation from concept to commercialisation. We’ve seen the breakthroughs and the breakdowns – the big deals and the cautionary tales. Here, Joshua Denne, Head of Product at APC, and Mark Cracknell, Programme Director for Zenzic, reflect on the top ten lessons that separate the companies that accelerate from those that stall.

The mobility sector is brutal. We are heavy on hardware, and unlike software, you can’t pivot on a whim. The product lifecycle is longer, the capital burn is higher, the asset requirement is intense, and proving technical viability is only half the battle. Commercial traction is what defines winners.

We launched the next evolution of our mobility start-up accelerator ‘Mobilise’ in January 2025. These ten hard-won lessons from the past decade have shaped how we hope to support the next generation of mobility start-ups.

1. The right support at the right stage

The UK’s mobility ecosystem is rich in its diversity: we have global corporate OEMs and suppliers, established supply businesses, small-and-steady SMEs, high-ambition scaleups, and bleeding-edge start-ups. Taking a single approach to supporting each of these segments does not deliver the best outcomes. A start-up taking its core technology to Minimum Viable Product (MVP) needs fundamentally different help than one scaling manufacturing capacity. A global OEM does not need the same intervention as an established UK-based supplier. Our focus in this article is on start-ups and the programmes we have developed to accelerate them.

For start-ups at the seed stage, funding for technology validation is immediately critical and an entrepreneur’s first priority. However, due to long product introduction periods, the high cost of development and the challenging commercial environment, commercialisation expertise, IP strategy and investment readiness also need to be early priorities – support with identifying the target market and help preparing for relevant early-adopter market requirements are absolutely critical.

Mismatched support can leave a lot of value on the table. We’ve seen companies attempt to deliver large application readiness programmes before they’ve validated their technology, only to burn through cash with limited traction. Likewise, we have seen many start-ups waste time and resources shooting for an unrealistic market segment. The right support at the right stage accelerates, but the wrong support at the wrong time can be a distraction.

Mobilise is a structured early-stage accelerator programme that supports ambitious start-ups, university spinouts, or pivoting SMEs that are developing innovative mobility-related early-stage, zero-emission or Connected and Automated Mobility (CAM) technologies, products, services, or solutions to accelerate the transition to a safer, smarter, more sustainable future.

2. Early adopters beat ‘build it and they will come’

Proof of traction trumps proof of concept. It’s easy to fall into the trap of thinking that superior tech will automatically attract buyers. It won’t. Companies that spend years perfecting their product, without bringing early adopters on board, often fail. Likewise, focusing on large multinational customers as innovators or early adopters is a fatal error. We can think of just a handful of companies that have converted commercial deals with global multinationals as their first or early adopters.

The winners engage potential customers early. They focus on initial customer segments that can allow market entry at pace, ideally at a premium, even if the total market size is smaller. In an ideal world, they go beyond letters of intent (LOIs) to secure paid pilots and joint development agreements before scaling. These commitments provide validation and create customer pull, making the eventual commercial launch far less risky.

We’ve seen companies with inferior technology win market share because they had early adopter buy-in. Meanwhile, technically superior start-ups struggle because they waited for the ‘perfect product.’

3. Redefining MVP in hardware: Segment, model, product

The classic concept of a Minimum Viable Product (MVP) doesn’t always translate perfectly to hardware or deep tech. You can’t really ship a half-baked prototype in a highly regulated market. Instead, we think in terms of:

  • Minimum Viable Segment: Proving the value in a specific niche (e.g., low-volume EVs before targeting mainstream OEMs), which is big enough to make sense for initial product development, and small enough and innovative enough to be your first customer.
  • Minimum Viable Business Model: Demonstrating through a focused go-to-market strategy, developing the minimum viable asset set to service your identified first customer segment.
  • Minimum Viable Product: A product with just enough functionality (including meeting regulatory requirements) to gain traction with this first segment.

For mobility start-ups, an MVP ≠ prototype. It’s about proving an initial business model, of which your product is just one part.
