Wednesday, September 24, 2025

How AI Is Helping Industry With Complex Decisions

 

BP, like other oil companies, was plagued by extreme sanding, which gums up the works during deep-water drilling; about one-third of its wells are affected. It took one scientist three decades to figure out how to predict which wells might be susceptible, but he and his team could visit only 15 a year, and BP has over 13,000. Beyond Limits, which leverages 45 technologies from Caltech’s Jet Propulsion Lab used in the U.S. space program and converts them into advanced AI, found an efficient way for BP to locate reservoirs that are less prone to extreme sanding, allowing greater precision in drilling and providing the company with more data about how much oil is flowing into its wells. The solution helped BP, which decided to become a major investor in the startup, produce thousands more barrels of oil per day. Now Beyond Limits is applying its hybrid form of AI, which is designed to apply human-like reasoning to solve complex problems, to other industries, including financial services and healthcare. Earlier this month it teamed with medical experts to create a dynamic forecasting model to help in the fight against COVID-19.

Sunday, September 21, 2025

The Case For Appointing A Chief AI Ethics Officer


The number of companies with a designated head of AI position has almost tripled globally in the past five years, according to social network LinkedIn. The White House, meanwhile, announced that U.S. federal agencies are required to designate chief AI officers “to ensure accountability, leadership, and oversight” of the technology.

While that may sound encouraging, organizations are still putting more emphasis on improving workforce efficiency, identifying new revenue streams, and mitigating cybersecurity risks than on ensuring AI is being used responsibly.

Indeed, a poll taken as part of The Artificial Intelligence Index Report 2024, published April 15 by the Institute for Human-Centered AI at Stanford University in California, is a case in point. The report cites a Responsible AI (RAI) survey of 1,000 companies conducted to gain an understanding of RAI activities across 19 industries and 22 countries. A significant number of respondents admitted that they had implemented AI with only some, or even no, guardrails in place.

It is a big mistake to let RAI become an afterthought or a press-release talking point. Many companies that surged ahead without thinking about RAI are now finding themselves in a costly rewind process to meet regulatory requirements – a cautionary tale for all.

Against this backdrop I spoke to Steve Mills, Chief AI Ethics Officer and Managing Director & Partner at Boston Consulting Group, as part of a series of conversations I am having with individuals who are leading the way in helping organizations derive benefits from AI while also ensuring responsible design, development and use of AI.

Since BCG helps corporates with responsible implementation of AI, it had to ensure that its own house was in order. So the consulting group created the position of Chief AI Ethics Officer and designed what Steve describes as “a comprehensive Responsible AI program that brought together organizational functions while establishing new governance, processes, and tools” – all of which had to be created in house, a large and complex task.

Steve and I both agree that now, more than ever, responsible AI is a C-Suite task. It requires a senior executive with the appropriate stature, focus, and resourcing to advise the leadership team, engage with the external AI ecosystem, and effect meaningful change in how we build and deploy AI products.

The role of Chief AI Ethics Officer demands a unique blend of technical expertise, product development experience, and policy and regulatory understanding. “Although my title may still be a bit uncommon today, I believe we will see it become a de-facto standard very quickly given the importance of AI and generative AI (GenAI),” says Steve.

He says – and I agree – that if implementation of AI/GenAI is not done responsibly it can be value-destroying rather than value-accretive for companies.

The risk is not just creating one bad customer experience, it can be much more far-reaching. Failures of AI systems can grab headlines and the attention of regulators. They can rapidly destroy brand value and customer trust as well as carry costly financial and regulatory impact.

Irresponsible use of AI does not only harm companies; lapses can cause real harm to individuals. Consider, for example, a chatbot providing guidance on HR policies. An erroneous response on medical leave policy could create financial and emotional harm for an employee. It is the responsibility of any company building and deploying AI to ensure it does not create emotional, financial, psychological, physical or any other harm to individuals or society. “Certainly, there are risks to the company that need to be managed, but corporate responsibility goes far beyond that,” says Steve. Companies would do well to remember that regulators now have real financial penalties available to punish such behavior and protect the individuals harmed.

There are other compelling reasons for building AI in a responsible manner. Companies with mature RAI programs report higher customer retention and brand trust, stronger recruiting and retention, and faster innovation. In addition, many RAI best practices lead to products that better meet user needs, meaning companies with mature RAI programs report more value from their AI investments. “RAI is about both minimizing the downside risk but also maximizing the upside potential of AI,” says Steve.

The pressure to rapidly commercialize AI and GenAI is intense and can dominate strategic discussions.

Steve and I are both big proponents of the transformative power of AI and recognize its strategic importance to businesses. But the bottom line is that companies cannot scale AI/GenAI without developing a robust Responsible AI program to mitigate risks and capture value. They cannot stop at talking points. They need to back up those conversations with action. They need to invest the necessary resources to create a comprehensive RAI program, including integrating RAI into their risk management frameworks, implementing RAI-by-design, and upskilling employees to create a culture of RAI.

“There are both direct and indirect benefits of RAI, all of which generate significant value for businesses,” says Steve. He points to BCG research with MIT which shows that companies that have implemented RAI report fewer system lapses, lower severity in those lapses and, interestingly, higher value from their AI investments.

All companies must adopt RAI, and they need to do it now. “I worry that companies feel like it’s too late, that they’ve implemented a ton of AI,” says Steve. “They need to focus on implementing RAI no matter what stage they are in because it’s critical that they build AI consistent with their values. Responsible AI is table stakes for any business that wants to realize the value of AI/GenAI.”

Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. In February she won The Time100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur and vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles, is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for UNESCO International Research Centre on AI, ADI and AI4All. She sits on the Board of EarthSpecies and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI.

Visit: https://innovatorawards.org/

Friday, September 19, 2025

How AI Is Mining and Transforming An Old Economy Business






Corporates around the world have spent billions implementing cutting-edge AI but, according to studies, only a small percentage are reaping returns. An August report from MIT, for example, found that only about 5% of GenAI pilot programs achieve rapid revenue acceleration. The vast majority deliver little to no measurable impact on P&L.

Dorfner, a mid-sized company that owns one of the most significant kaolin and silica sand deposits in Germany and traditionally leverages the mined minerals to sell functional fillers for paints, composite materials and construction materials, is proving to be an unlikely leader. Its journey offers valuable insights into how the technology can transform traditional industries by improving efficiency, making them more sustainable and reshaping future business models.

Thanks to AI, Dorfner has morphed from a company offering a sole product and raw material to one that additionally offers services that help its clients solve supply chain issues, cut costs and become greener. It is also functionalizing new categories of products, offering its tech tools and know-how to startups and helping Germany’s old economy companies use AI in new ways.

“Our use of AI not only optimizes internal operations, it redefines what is possible in resource usage, sustainability and business growth,” says Acting CEO Mirko Mondan, who is scheduled to participate in a fireside chat with The Innovator’s Editor-in-Chief at the DLD Future Hub: Impact AI conference in Munich on September 11.

The use of its AI platform has not only enabled Dorfner to expand into entirely new markets, it has reduced the need to excavate virgin raw materials by 60%, making the company itself more sustainable, says Mondan, a serial entrepreneur who was recruited by the family as acting CEO with a mission to strengthen the core while finding new sources of business.

Mondan and his team have done that and more. Dorfner says it has measured significant improvements when comparing its traditional work to its AI-supported approach, reporting:

-Time savings: Formulation time reduced by 66%

-Cost efficiency: Laboratory material usage cut by 69%

-Quality assurance: Consistent product quality reliably achieved

-Sustainability gains: Transport distances reduced by 43%

-Environmental impact: Waste generation lowered by 69%

-Growth: Sales have increased by 30% over the last two years, the customer base has expanded beyond Europe, and new revenue streams are being developed at a time when others in the industry are seeing revenues decline.

-Profitability: A 40% increase over the same two-year period.

To ensure that Dorfner Group’s AI solution results in sustainable change and lasting impact it has aligned the initiative with the company’s long-term strategic plan, Dorfner 2035, which places AI at the heart of its transformation from a mineral processing company to what Mondan calls a “solution-oriented technology leader.”

“We are no longer just a mining company, we are becoming a technology provider of new offerings and sustainable solutions that drive progress across industries,” he says.

Confronting Critical Challenges

When Mondan joined Dorfner in 2020 the company faced two critical challenges: the need to extend the economic life and value of finite mineral resources and a growing demand for faster, more sustainable and higher-performing product formulations, particularly in industries like construction, chemicals and paints and coatings.

“We knew we could not rely on traditional methods alone to ensure long-term relevance and resilience,” says Mondan. “Our transformation journey began with that realization and the conviction that AI, when used the right way, could help us reinvent not just what we make, but how we think, work and grow.”

The first item on Mondan’s agenda was to forge an innovation strategy. The strategy he settled on is based on two pillars: sustaining innovations at the core and introducing radical innovations outside of current markets. (See The Innovator’s 2022 story about the start of Dorfner’s transformation journey.)

The true turning point came in November 2021 during an internal innovation workshop. In the face of increasing pressure from the COVID-19 crisis, sustainability demands and supply chain disruptions, it became clear that AI would be essential to maintaining future competitiveness, says Mondan. But the company needed help to develop its strategy and implement the technology.

Getting Employees Onboard

During a ride in a dump truck at Dorfner’s mining facility, one of Mondan’s friends, who runs a family-owned business, shared his experience working with UnternehmerTUM, the Technical University of Munich’s Center for Innovation and Business Creation. UnternehmerTUM’s Business Creators program ticked all the right boxes, says Mondan, as it is specifically geared to helping SMEs widen their traditional business models and use open innovation to think outside the box and move from the why to the how.

Dorfner did not just sign up a few key executives for UnternehmerTUM’s course. It chose people from across its business that Mondan thought could become good change agents and then instructed each one of them to go out and tell five people in the company what they learned along the journey. Master craftsmen and laboratory technicians were given the incentive and the opportunity to acquire new skills and broaden their horizons through targeted training, evolving into data analysts and playing a key role in the digital transformation, says Mondan. “During our AI journey we internally shared wins and visible benefits, such as reducing repetitive tasks and improving decision-making speed. As employees experienced these improvements firsthand, confidence in the technology grew.” This approach not only helped overcome resistance but also fostered a stronger innovation culture, he says. What’s more, “this cultural shift, supported by measurable outcomes like a 30% sales increase and higher customer satisfaction, gives us confidence in the long-term resilience of our AI transformation,” says Mondan.

“Thanks to a team effort we are succeeding in our goal of transforming the company and the mindsets and skillsets of the people so that we will have a base 50 years from now for the generations to come,” he says.

Building New Business Models and Process Innovations

To meet the planned growth path of the company, new sources of value had to be developed to extend the existing business and/or tap totally new sources of revenues. “We were thinking too narrowly,” says Mondan. “We needed to unlearn things and open our minds.” The company’s definition of innovation was rebuilt. Rather than just focusing on new products the company started building entirely new business models or process innovations.

While Dorfner was engaged in this process its big customers started approaching the company because hundreds of the raw chemicals needed to make their products were not available due to supply chain issues. Customers wanted to know if Dorfner could create alternative formulations.

It was an “aha” moment for Dorfner. Like many traditional companies, much of its industry knowledge was stored either in the heads of long-term employees or in spreadsheets. During the workshop the idea surfaced that its data about chemical formulations could be gathered and organized in a database, with AI applied to that data to help with reformulations that would serve as alternatives or be more sustainable.

It took about six months for the company to get the right data sets in place.

“When our data was in Excel spreadsheets no one was using it,” he says. “Now we make use of our treasure. We took data from the last 25 years and made it digital. Over the next few years 30% of our people will retire so this is a way of ensuring we don’t lose the solid base from our past.”

The company thought it would have to build its own AI software platform, but in 2022 it found a Silicon Valley company that had already built an AI platform for the materials and chemicals industry.

In March 2023 Dorfner introduced the AI solution to the public and began engaging customers directly. “The initial feedback was overwhelmingly positive,” says Mondan, “validating not only the technology but also the strategic direction we have taken. It confirmed we were on the right path, not just technically but commercially.”

When a request for a filler formulation is received, Dorfner uses AI to run a simulation. “By bringing it into a platform and applying AI we can now offer a new formulation service to all our clients around the globe,” says Mondan. If a client uses a small fraction of a material in its formulations and that material is no longer available, Dorfner can run a simulation, come up with the five best hits, test them in its lab and propose a solution within a matter of days, he says. “We are experts in functional fillers. AI helps speed up our R&D and promises to give us category leadership.”

The new strategy is also expected to allow Dorfner and its clients’ products to become greener. “We are currently shipping materials all over the globe,” says Mondan. “My dream is to tell customers that we can sell them a new formula for the functional filler field and point out the three local materials they can use. This way we stay in the game by offering the best formula, with the best quality, at the lowest cost, with the lowest environmental footprint.”

Painting: A Different Future

Dorfner, which uses the minerals it mines to produce a critical component in paints, is starting by offering its clients AI-based recipes for functional fillers in paints that are cheaper and more sustainable.

Unlike conventional tools that generate only basic color recipes, Dorfner’s AI predicts the complete set of final coating properties, with an average accuracy rate of 90%, says Mondan. It evaluates over ten key physicochemical parameters, using a networked optimization model that reflects real-world application demands.

“Our system can simulate over 100,000 formulations in just three hours, automatically identifying the top 500 candidates,” he says. These are presented through an intuitive visual interface and then narrowed down to the two or three most promising options using targeted filters. This allows experts to make high-impact decisions faster and with greater confidence. Waiting times for specific standards have been reduced from 30 days to just three hours, dramatically accelerating the development process.
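The funnel Mondan describes – score a huge candidate pool with a property-prediction model, keep the top slice, then apply targeted filters – can be sketched in a few lines. This is an illustrative toy, not Dorfner’s proprietary system: the stand-in scoring function, the cost and CO2 thresholds, and all the numbers are invented for the example.

```python
import random

def predict_properties(formulation):
    """Stand-in for a trained property-prediction model (the real
    system is said to predict over ten physicochemical parameters)."""
    rng = random.Random(formulation["id"])  # deterministic per candidate
    return {
        "score": rng.random(),          # overall predicted quality
        "cost": rng.uniform(1.0, 5.0),  # EUR/kg, invented
        "co2": rng.uniform(0.1, 2.0),   # kg CO2e per kg, invented
    }

# 1. Simulate a large candidate pool (the article cites over 100,000).
candidates = [{"id": i} for i in range(100_000)]
scored = [(predict_properties(c), c) for c in candidates]

# 2. Keep the top 500 candidates by predicted quality.
top500 = sorted(scored, key=lambda pair: pair[0]["score"], reverse=True)[:500]

# 3. Apply targeted filters (thresholds are hypothetical) and keep
#    the two or three most promising options for lab testing.
shortlist = [c for props, c in top500
             if props["cost"] < 2.0 and props["co2"] < 0.5][:3]

print(f"{len(candidates):,} simulated -> {len(top500)} ranked -> {len(shortlist)} for the lab")
```

In the real system the stand-in `predict_properties` would be a model trained on decades of formulation data, and the filters would encode customer requirements such as locally available raw materials; only the shortlist ever reaches physical testing.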

“The effect on our operations has been game-changing,” says Mondan. “Our formulation process is now faster, smarter and significantly more efficient. Lab work has been streamlined. What used to require 70% of an employee’s time now takes far less, with more focus shifting to strategic development and innovation. Only the most relevant formulations reach physical testing, leading to faster time to market and resource savings.

“By reducing the carbon footprint of our products, encouraging the use of regional raw materials, and promoting the responsible use of finite resources it enables smarter, greener product development,” he says.

Dorfner is sharing its expertise and technology with German startup MissPompadour to help it formulate its own decorative paint for do-it-yourself projects. Dorfner’s AI-driven formulations bring down the cost of decorative paint, make it more sustainable and help the startup develop trendy new colors in record time, “helping them to outpace their competitors by being on the edge of what people want now,” says Mondan.

“Our industry – paint – is super old and super slow,” says MissPompadour Co-founder Erik Reintjes, the startup’s chief marketing officer. It typically takes three to five years to develop a new paint and six months to test it. Thanks to Dorfner’s AI platform MissPompadour was able to produce its entire product line in three months. Since it owns its recipes, it can adapt them to the requirements of other countries and produce them locally in record time, he says. “Germany’s biggest paint companies – billion-euro companies – are not this fast.”

Expanding Into New Business Lines

In addition to helping MissPompadour develop new paints, and optimizing the price, performance and sustainability of its own existing formulas, Dorfner’s AI platform “gives the company the competence to use data to derive new use cases,” says Tim Lüken, who left his position as a managing partner at UnternehmerTUM’s Business Creators program to work full-time for Dorfner – a testament, he says, to his belief in the company’s potential.

The use of AI-driven formulation technology and the exploration of new raw materials such as calcinates, meta-kaolins, secondary raw materials, recycled materials and waste streams has helped the company significantly reduce its environmental footprint while growing the business, says Lüken. One of the most significant outcomes has been the radical reduction in the need to excavate virgin raw materials compared to previous years.

The use of novel raw materials is also opening new opportunities. “By sharing our solution across industries and business units, such as sanitary and kitchen sinks, we are scaling the impact of this innovation,” he says.

Dorfner is additionally using AI to offer services farther afield from its traditional business. It is, for example, developing a mobile phone app that analyzes the level of particulate matter released by wood fireplace flames, along with an additive to reduce it. Through its new venture builder arm Sio2 Ventures, Dorfner is supplying BMW and German SMEs with a solution that uses AI-powered visual image recognition to screen workers and give the company feedback on ergonomics to reduce workplace injuries.



And Dorfner doesn’t plan to stop there. “Our AI journey is far from over,” he says. “We are actively exploring new opportunities, tackling emerging challenges and harnessing the momentum of rapid technological advances. Our next frontier is designing smarter, more adaptive decision environments – intelligent systems that not only guide better decisions but also improve innovation and unlock the full potential of AI-driven operations.”


Tuesday, September 16, 2025

How Deepfakes Affect You, Your Business And Society

A fake expletive-laden video of U.S. President Joe Biden’s July 21 announcement of his decision to leave the race began circulating on X almost immediately after the news broke.

PBS News, whose logo was featured in the video, issued a statement describing the video as a “deepfake,” adding, “PBS News did not authorize the use of this video, and we do not condone altering news video or audio in any way that could mislead the audience.”

With election season underway and artificial intelligence evolving rapidly, image and voice manipulation are becoming an issue of great concern. A recent report from Moody’s warns that generative AI and deepfakes could sway voters, impact the outcome of elections, and ultimately influence policy making, which would undermine the credibility of U.S. institutions.

From manipulating elections to sowing confusion in Ukraine and the Israel-Hamas conflict, deepfakes are making a meaningful impact on society, notes deepfake and synthetic media expert Henry Ajder. Henry identified the challenge of deepfakes and synthetic media over six years ago and was the first to start mapping their use, well before the explosion of interest in generative AI. In our wide-ranging conversation on deepfakes he noted that there has been an uptick in the use of deepfakes and the volume of synthetic media being created, and that it is becoming more and more realistic.

Compounding these challenges, unreliable deepfake detection tools are producing false positive and false negative results, sowing doubt about authentic images and giving false confidence in AI-generated ones.
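A simple base-rate calculation shows why imperfect detectors sow doubt rather than settle it. The error rates and prevalence below are invented for illustration – they are not measurements of any real detection tool:

```python
# Hypothetical detector: 95% of fakes are flagged (true positive rate)
# and 5% of authentic images are wrongly flagged (false positive rate).
tpr, fpr = 0.95, 0.05

# Suppose only 1% of the images being checked are actually fakes.
prevalence = 0.01

# Bayes' rule: probability that a flagged image really is a fake.
p_flag = tpr * prevalence + fpr * (1 - prevalence)
p_fake_given_flag = (tpr * prevalence) / p_flag

print(f"P(fake | flagged) = {p_fake_given_flag:.1%}")  # prints roughly 16.1%
```

Even with a seemingly strong detector, when genuine fakes are rare most “fake” flags land on authentic images – which is exactly how detection tools end up casting doubt on real footage.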

As realism improves and flaws are reduced, Henry says he believes these challenges will only become more acute, particularly in rapidly evolving and chaotic environments such as war zones and elections.

What’s more, it’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s Generative AI tools. A 2023 report by Home Security Heroes (a company that reviews identity-theft protection services) found that it took just one clear image of a face and less than 25 minutes to create a 60-second deepfake pornographic video—for free.

The world took notice of this new reality last January when graphic deepfake images of Taylor Swift circulated on social media platforms, with one image receiving 47 million views before it was removed. But people far from the public spotlight have been victimized in the same manner for years. Henry’s 2020 investigation into a Telegram bot that enabled anyone to easily synthetically strip women of their clothes found that it led to a massive increase in the volume of victims and shifted the profile away from the celebrities who were previously targeted towards private individuals.

According to the 2023 report, 99% of victims are women or girls. Women in general are not well served by Generative AI, a topic I will return to in future columns, but deepfakes have been a disaster for them. We should brace ourselves for more targeting of female politicians, and of women in general, through the use of deepfakes. “The use of women’s faces and bodies in non-consensual deepfake pornography is still arguably the biggest harm in terms of victim numbers,” says Henry. Millions of ordinary women and girls are affected, which can lead to deep trauma and, in some cases, suicide. The deepfake epidemic shows no sign of stopping. The Online Harms Act in the UK is one of the few attempts to stem the tide.

In an interview about deepfake porn with The Institute of Electrical and Electronics Engineers (IEEE) Nadia Lee, CEO of the Australia-based startup That’sMyFace explained how her company is first offering visual-recognition tools to corporate clients who want to be sure their logos, uniforms, or products aren’t appearing in pornography (think, for example, of airline flight attendants). But her long-term goal is to create a tool that any woman can use to scan the entire Internet for deepfake images or videos bearing her own face.

“Our generation is facing its own Oppenheimer moment,” she told IEEE. “We built this thing” – that is, Generative AI – “and we could go this way or that way with it.”

To Lee’s point, every technology can be used for good or for evil, but I would argue that in the case of deepfakes there is not a strong positive use case. The technology can be used for face swapping or other kinds of synthetic media in entertainment, in memes and satire. In my view, this small and niche use of deepfakes for good does not outweigh the clear damage being done to individuals, society and business by this technology.

The term deepfake emerged organically on the platform Reddit in late 2017 and was used at the time exclusively to refer to an open-source piece of software that was being used to swap female celebrities’ faces into pornographic footage. This proves my point that this technology’s use never had a driver which was good for humanity.

As time has gone on, the term deepfake has expanded to include different kinds of AI-generated synthetic media, from voice audio and music to images and various forms of video manipulation. For example, vishing – voice phishing, in which someone’s voice is cloned to impersonate them on a call – is an increasing problem for individuals and businesses alike. The cloned voice is used to extract money or confidential material from the victim.

Another example is thieves using real-time face-swapping tools on video calls to mask their identity, apply for jobs and then disappear once they’ve got the sign-on bonus.

There are also cases of people using full AI-generated avatars in video calls, such as a reported case in Hong Kong in which an entire video call was allegedly populated by AI avatars, leading to the only human in the room parting with $25 million.

In another attempt to access confidential information, thieves attempted to deepfake the CEO of WPP, one of the world’s largest advertising agencies.

Some attempts to curtail deepfakes have been made in the U.S., EU, China, UK and Australia, but in many countries even children are not protected against bullying through classmates’ use of deepfakes, often through doctored pornographic images. Unfortunately, legislation needs to be global, or the perpetrators simply hide in countries without enforcement.

Regulators also have a role to play here, as many of them have existing powers which can control deepfake abuses. Julie Inman Grant, for example, is the eSafety Commissioner in Australia. She has used her powers successfully against deepfakes and cyberbullying.

The need for effective regulation is urgent. Several studies show that distinguishing between real and synthetic media is effectively a coin toss for humans. Many deepfakes are achieving parity, or close to it, with authentic voice audio, images and music. The models being used to create deepfakes and synthetic media have also become much more efficient, both in terms of data and compute requirements. A good example is Microsoft’s VALL-E 2, which claims it can generate a highly realistic clone of an individual’s voice from just three seconds of audio, notes Henry.

The emergence of smaller fine-tuned models such as Stable Diffusion, a deep learning text-to-image model based on diffusion techniques that can be run on devices as modest as a laptop, is further driving the democratization of the tools for creating AI-generated synthetic media.

These tools remove the need for expertise to operate them, putting them into the hands of everyone from school bullies to extremists on social media and fraudsters attacking businesses.

The important point for everyone to take away from this column is that no one is safe, and the technology to identify deepfakes is less available than the technology to make them. Robust training of employees is needed, but even then the technology is so good it is hard to blame someone for not recognizing a deepfake. Henry and I will discuss disinformation, the other principal malicious use of deepfakes, in my next column.

Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. This is the fifth of a planned series of exclusive columns that she is writing for The Innovator.


Sunday, September 14, 2025

Making A Difference, And Money, With Responsible AI



With the EU AI Act coming into force and measures taken by the Biden Administration in the U.S. to ensure AI is used responsibly, the conversation is again turning to how regulation stifles innovation. But does it?

I am privileged to sit on the Advisory Board of Philippines-based ADI, the AI and data arm of the Aboitiz Group, a conglomerate that operates in six major industries, including power, banking and financial services, food, infrastructure, and data science and AI. I joined for two reasons: because David Harddon, the CEO, did pioneering regulatory work on AI when he was Chief Data Officer at the Monetary Authority of Singapore, and because he came to me saying that he wanted to create AI use cases only on Responsible AI foundations.

We both believe that seeing Responsible AI as a cost of doing business is the wrong way to view it. With trust falling in AI products, and in the companies creating and using them, we feel that building products on Responsible AI foundations will add to the bottom line and to success with customers. ADI’s approach can serve as guidance for regulators who want to harmonize the developmental and regulatory aspects of data and AI. It is also an example of how applied Responsible AI (RAI) makes business sense.

I recently caught up with Harddon and asked him why he decided that ADI will take a Responsible AI approach to all of its work.

Q: Why do you believe it is better to proactively embrace Responsible AI instead of waiting for regulators?

DRH: My time at the Monetary Authority of Singapore cemented my view that “waiting for the regulator” is a form of social moral hazard as well as a commercial risk. In fact, at ADI we have been able to demonstrate that embedded RAI can facilitate the sustainable operationalization of AI while boosting commercial returns. Our approach ensures that RAI is not a separate component but is embedded into the core of our business strategy and decision-making process. We therefore decided to preemptively mitigate risks, as well as ‘do the right thing’, by baking the guidelines of RAI, to the best of our ability and knowledge, into the core of our work. To give a perhaps silly analogy: there is no law that requires you to look left and right when crossing the road, but doing so is not only common sense, it sustains your ability to keep crossing the road successfully.

Q: Why is it important to you to have an external advisory board?

DRH: To quote Albert Einstein, “The more I learn, the more I realize how much I don’t know.” I truly believe in the value of surrounding oneself with those who know more, and in the value of external counsel and perspectives. To quote ancient texts, “The way of a fool is right in his own eyes, but a wise man listens to counsel.” This is particularly true for AI start-ups like ADI, where we are continuously navigating uncharted waters while pursuing ambitious goals to drive impact.

Q: Why do you believe using Responsible AI will help companies make money?

DRH: One of the realizations I had during my time as a regulator is that industry largely views governance, compliance, regulation, and the like as a business cost. The same is likely true of Responsible AI: adhering to governance and regulatory requirements is seen as a cost to the business. I personally disagree with this view and advocate that, while counter-intuitive, these functions need to have a business-development mindset. The role of compliance isn’t to appease the regulator; it is to drive the business forward in a manner that adheres to the rules of the land. I believe that RAI results in ‘making money’ rather than ‘costing money’. In a recent HBR article that I co-authored, we wanted to test whether including so-called discriminatory attributes like gender resulted in more negative discrimination in lending. We were able to show empirically that explicitly including this information not only reduced potential gender-based discrimination but concurrently increased overall revenue. RAI here used additional knowledge that was not previously available to make the lending business both more equitable and more profitable.

Q: You found that your loan officers and AI come to better decisions together, rather than separately. Does this tell us anything others could learn about Responsible AI and augmentation of jobs?

DRH: This case demonstrates the power of incorporating additional information and knowledge into business operations, further evidencing my staunch belief in AI as Augmented Intelligence and in its net positive impact on jobs. At ADI, with our partners, it works both ways: AI enhances human capabilities and humans strengthen AI models. This approach is applicable in all of the verticals we are focusing on – financial services, power, and smart cities. It makes sense: how can we be worse off by knowing more and seeking to be better?

Q: What are some of the other ways that RAI can increase inclusion?

DRH: I am on the Advisory Board of Connected Women, an organization with the objective of helping Filipino women find meaningful online careers, including jobs related to AI. Before I joined, ADI had already partnered with Connected Women to bring better economic opportunities to the graduates of its Elevate AIDA (Artificial Intelligence and Data Annotation) program. ADI engages these graduates in data cleansing and data annotation, among other things. This not only provides a platform for these women to actively contribute to the digital economy and readies them for the future of work; it also enhances our AI models.

Q: It’s important for any company to return value to its owners, investors, or shareholders. Do you think that’s possible when building on Responsible AI foundations?

DRH: Without a shadow of a doubt, yes. The goal isn’t to implement RAI for the sake of implementing RAI. ADI is leveraging AI in pursuit of business impact, quantifiably measured through revenue, operational efficiency, risk management, and sustainability.

In summary, I am hopeful that ADI represents the future of the responsible design, development, and use of AI. Some companies have adopted extensive voluntary responsible AI practices internally, but more often than not the push has been to innovate without guardrails. Those companies are now having to face up to the problems of moving fast and breaking things. On one of my panels in Davos this year, the CEO of Accenture North America spoke about the large number of AI deployments the firm is now helping to unwind because Responsible AI guidelines were lacking in the initial implementations. Responsible AI can offer organizations a substantial upside. Taking a different route can prove costly in more ways than one.

Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. In February she won The Time100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur and vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles, is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for UNESCO International Research Centre on AI, ADI and AI4All. She sits on the Board of EarthSpecies and regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI. This is the second of a planned series of exclusive columns that she is writing for The Innovator.

Visit: https://innovatorawards.org/