Thursday, January 30, 2025

State Of Compute: The New Power Paradox

With good reason: in a recent podcast OpenAI CEO Sam Altman called compute “the currency of the future” and said he believes “it’ll be maybe the most precious commodity in the world.”

Countries that can “manufacture intelligence” at scale will be at the forefront of solving key challenges, from the green transition to digital biology, says a recently released report by the Tony Blair Institute for Global Change (TBI). It argues that compute is not just a source of scientific and economic progress but the new benchmark of global power, economically and geopolitically.

“Just as governments needed to enable infrastructure such as roads, railways and telecommunication networks for business to thrive, investing in shared and public compute has become equally important,” Jakob Mökander, TBI’s Director of Science & Technology Policy, said in an interview with The Innovator.

Compute is, in fact, slated to become “the foundation of next-generation economic growth and influence, shaping economic developments, as well as the future of sovereign power and international influence,” says the TBI report. It notes that compute infrastructure – and the difference in its availability from country to country – risks becoming the basis of a new digital divide.

Last year TBI published a report that measured countries’ capabilities across the entire compute ecosystem, including talent, energy, governance initiatives, available training and technical partnerships, to help governments understand their compute capacity. The 2024 report, released November 18, underscores how the gap between nations has widened in the last 12 months.

The U.S. has some well-publicized advantages: U.S. tech companies dominate large language models globally, and U.S. AI chipmaker Nvidia has become the leading beneficiary, with its Graphics Processing Units (GPUs) widely used by tech giants like Microsoft, Alphabet, Amazon, Meta, and OpenAI for AI applications. In 2024 Nvidia’s market cap reached $3.314 trillion, making it the world’s second most valuable company.

The latest TBI report found that, in addition to these advantages, the U.S. built more data-center capacity in 2023 than the rest of the world combined, excluding China. TBI’s data-center investment indicator shows that the U.S. has announced data centers totaling 5,796.2 megawatts (MW) to be built over the next three years – a 164.4% rise from 2023 that shows no sign of slowing.

What’s more, large U.S. tech companies, including Alphabet, Amazon, Meta, Microsoft, xAI and Oracle, are driving competition for the latest chips, such as Nvidia’s Blackwell GPU, launched in March 2024. The new architecture is designed to offer four times faster training and 30 times faster inference.

As today’s largest AI models can be four times more computationally intensive than in 2023, these chips – due to start shipping at the end of 2024 – will revolutionize the next generation of compute resources, says the report.

The Race For Compute Supremacy

With $210 billion having been spent globally to upgrade and maintain the existing server base in the past year, a new compute security dilemma is emerging as the two largest state players, the U.S. and China, seek to check each other’s power with export controls, investments and industrial espionage, says the TBI 2024 report. With China’s new “HPC iron curtain” and its refusal to declare new supercomputers and other investments to global indices such as the TOP500 supercomputer list, the balance of compute power will be increasingly difficult to verify, the report says. It forecasts that this strategy “will lead to increasing uncertainty in the world order and intensify the race for compute supremacy.”

Where Does This Leave The Rest Of The World?

As the commoditization of compute resources intersects with the clash of great powers, questions of digital sovereignty and digital colonialism are being raised, notes the report. “Caught between allegiances to infrastructure providers on the one hand and access to critical components on the other, emerging digital economies find themselves in strategic dilemmas as they trade off between a series of sub-optimal binary choices,” the report says. Countries may find themselves in a position of having to align with one of the supercomputer superpowers for assured compute access, increasing tech bipolarity in an era of geopolitical upheaval, says the report. Alternatively, countries seeking to accelerate their own progress may be able to take advantage of the rivalry and develop alternative ecosystems to meet the needs of non-aligned countries, creating a new multipolarity in the world order. Leaders wishing to avoid either dynamic may seek to develop sovereign capabilities, but risk investing in expensive infrastructure that still lags the global leaders. Or they may find that key levers of their ecosystem, in particular talent, have left to work in more developed ecosystems.

To avoid a new type of digital divide and a fractured computing landscape, the TBI report recommends three possible courses of action for governments: public compute, public-private partnerships and international development for access to compute.

Public Compute

One good option for countries to build strategic autonomy while leveraging compute for the public benefit is to invest in “public compute”, i.e. state-built components of the compute stack, including data centers and cloud infrastructure and supercomputers, says the report. This compute can then be provided to critical sectors and be used as a lever to encourage responsible use of AI. As others have argued, compute should be provided on the basis that companies meet certain safety requirements, or that end outputs can help to contribute towards the digital commons, the report says.

Public compute requires investing in the key enablers of a model compute ecosystem. Having an abundance of energy and a mature data-center ecosystem has made it easier for countries like Malaysia to build their own public-compute capacity, the report says. Places with abundant energy resources and stable business environments, such as Dubai, are focusing on attracting and hosting large-scale data centers.

Alternatively, nations with advanced manufacturing skills and startups might invest in developing domestic chip production, as seen in China’s efforts to counter U.S. restrictions. Countries with more limited capital, resources and skills but a strong research ecosystem can pursue a strategy of AI model development. For example, state-of-the-art models can be fine-tuned to better align with national language, values and interests, and models can be built with specialized sector-specific objectives in mind. The Netherlands has begun building GPT-NL with this purpose in mind.

“By strategically leveraging compute for public benefit,” says the report, “nations can not only address immediate sectoral needs but also reinforce both their long-term technological sovereignty and their competitive position in the global digital landscape.”

Public-Private Partnerships

Since TBI’s 2023 report, new models of mobilizing private capital for the development of AI infrastructure have emerged. These provide opportunities to unlock multi-trillion-dollar long-term investment and the potential for technological advancement on a global scale.

For example, Aramco, the Saudi Arabian government-majority-controlled petroleum and natural gas company, has partnered with Groq, an American AI company that builds an AI accelerator – an application-specific integrated circuit (ASIC) it calls the Language Processing Unit (LPU) – and related hardware to speed up the inference performance of AI workloads. Together they aim to create the world’s largest AI inference center. The center, strategically located in Saudi Arabia, aims not only to support the Saudi government’s Vision 2030 drive for digital transformation and economic diversification but also has the potential to open access to cutting-edge AI capabilities to nations that lack the resources to build and maintain their own AI platforms. By creating a global hub for AI infrastructure, Saudi Arabia hopes to reinforce its leadership in the global digital economy and provide a blueprint for how public and private capital can come together to build the advanced AI infrastructure needed to drive the next wave of technological innovation, economic growth and inclusivity on the global stage.

Regional Compute Sharing

Nations with limited energy and domestic compute infrastructure could also seek to build and participate in regional compute models.

Estonia, despite being one of the world’s most digitally advanced nations, has limited compute resources. By leveraging the European High Performance Computing Joint Undertaking (EuroHPC JU) – and, in particular, access to its fastest supercomputer, LUMI – it overcomes potential issues in accessing supercomputing resources and AI servers via the Cloud, notes the report.

Similarly, the ASEAN-HPC taskforce is providing opportunities across the region so that ASEAN countries can access shared high-performance computing (HPC) resources such as Japan’s Fugaku supercomputer.

“A regional approach provides a larger market for compute providers to sell into, so may provide a stronger rationale for investment,” says the report. Transnational investment will help to reduce the public-spending requirements on any one individual nation.

The key for countries without strong political integration will be to find practical and regulatory mechanisms to secure corridors so that shared compute access can be realized, says the report.

International Development for Access to Compute

If countries lack the right political environment to build shared compute, an alternative could involve looking to the international development landscape for support, says the report. For example, the U.S. Department of State, in collaboration with companies such as OpenAI and Nvidia, has launched the Partnership for Global Inclusivity on AI. The partnership has allowed the Department of State and Nvidia to offer compute credits to emerging-market economies. The Gates Foundation AI Grand Challenges has adopted a similar approach, focusing on health care, and the Gates Foundation has also started curating a “Gavi for compute” to redistribute compute resources in Africa.

Other multi-stakeholder agencies have begun their quest to close the international compute divide. The United Nations, for example, recently recommended the creation of an AI development fund.

“Many of these projects are in their infancy, so there are unique opportunities for countries to shape this agenda by understanding their own national demand and engaging with these new initiatives,” the report says.

Turning Challenges Into Opportunities

The opportunities for countries to boost their compute position don’t stop there.

Countries with a significant number of servers and data centers are gaining an advantage by beginning to replace general-purpose servers with smaller numbers of specialized AI servers. Switzerland, the Netherlands and Ireland are leading the way, says the 2024 report, and TBI projects that this trend will continue.

Small countries that commit to building out infrastructure are attracting investment. In this year’s TBI report, Israel has risen from 38th to 24th on its Cloud-service availability indicator, as it expanded from a single cloud or colocation data center in 2022 to five additional data centers – an expansion of 20,000% (from 4,000 to 804,000 square feet). Furthermore, Israel has shown the largest percentage growth in subsea cables and Internet-exchange points, adding a further eight.

This rapid growth has been driven in part by Israel’s Project Nimbus, first conceived in 2019 to form the basis of the country’s public Cloud infrastructure and to facilitate the move of a significant share of government services to the Cloud, according to the report. With AWS and Google having won the tender to implement this $1.2 billion project, the subsequent investments play a key part in Israel’s rise. These include investments to expand the electricity grid, including building a new substation to support the supercomputer and additional data centers, as well as a commitment to invest in smart grids and renewable energy to power this growth. The accompanying investment in regional connectivity, such as the 254-km subsea cable between the Mediterranean and the Red Sea, is also creating new opportunities for Israel’s compute ecosystem, notes the report.

Energy infrastructure is also a key factor. Companies building the largest data centers will prioritize putting them in locations that can build out energy infrastructure fastest, says the report. If Europe can translate its greening economy into one that can rapidly scale infrastructure, it will strengthen its competitive advantage. Investment in renewables has been a priority across Europe to secure domestic energy supply and reduce heavy reliance on Russian energy. It was the only region to have demonstrated significant growth in the availability of clean energy over the past year, according to the report.

Physical geography is becoming a key factor in growth for other reasons. For instance, accelerated private sector investment in Finland’s compute partly rests on the country’s cooler climate, making it cheaper and more energy efficient to run data centers. Likewise, Malaysia has taken advantage of its geographical proximity to Singapore to supply the country – which has placed a three-year moratorium on data center building – with compute at scale.

Africa is making strides in building its compute power but still has a way to go. TBI projects that Côte d’Ivoire will have the highest server growth rate of the 67 countries studied over the next five years, at 84.3%. Ethiopia, Rwanda and Kenya have seen the biggest year-on-year increases in software engineers across TBI’s data, at 40%, 39% and 29% respectively. And Rwanda, Nigeria, Kenya and Ghana have shown high growth in both the size of their developer communities and their activity, while the region more broadly has the highest year-on-year growth in GitHub developers, according to TBI.

That said, the continent is being held back by a variety of issues, including electricity blackouts and the fact that many African countries heavily restrict data portability across borders. In addition, despite higher levels of STEM graduates, sub-Saharan Africa is not developing strong human-capital pipelines into compute roles, says the report. Despite a new wave of investment, the region has a similar number of servers to Spain; per capita, the UK has 20 times as many, notes the report.

It says two changes are required for Africa to further shore up its position in compute: an increase in edge computing (where data processing takes place on a device or local server, rather than in a data center) to help leapfrog infrastructure limitations, and more opportunities for the region’s skilled workforce within local compute ecosystems. “Without these changes, the local talent pool is likely to seek opportunities elsewhere – and this could stymie the growth of the region’s access to compute,” the report says.

No Silver Bullet

The fact that U.S. companies dominate large language models (LLMs), AI chips and data-center capacity, and the increased tensions between the U.S. and China, do not necessarily have to lead to a new digital divide and a fractured compute environment, says TBI’s Mökander. To solve the world’s biggest problems, from climate to healthcare – and unlock economic growth – countries need not only to invest in compute capacity but also to implement technical, procedural and cultural controls to manage tensions related to system interoperability, data sharing and cyber resilience. “Not all compute requires the same level of localization and security,” he says. “Countries need a plurality of approaches and to segment use cases and data in order for the world to build a resilient and flourishing compute ecosystem.”

Visit: https://innovatorawards.org/

Wednesday, January 29, 2025

Technology’s Role In Fighting Climate Change


As global leaders gathered in Dubai this week for COP28, the United Nations’ climate conference, a global hackathon was exploring how quantum computing can contribute to solving some of the world’s toughest challenges, including sustainability.

The winners of the hackathon, which was organized by French quantum company PASQAL and attracted more than 800 candidates with 75 proposals from 25+ countries, demonstrated how the cutting-edge technology can, among other things, be used for renewable energy forecasting and optimizing the layout of windfarms.
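Windfarm layout is a natural candidate for quantum optimization because it can be cast as a quadratic unconstrained binary optimization (QUBO) problem, the formulation targeted by annealers and neutral-atom machines such as PASQAL’s. The sketch below is purely illustrative and classical – a brute-force QUBO solve over hypothetical candidate turbine sites, with invented yield and wake-penalty numbers, not the hackathon winners’ actual method. Each selected site earns its energy yield; pairs of sites closer than a minimum spacing incur an interference penalty:

```python
from itertools import combinations, product

def windfarm_qubo(yields, penalty, coords, min_dist):
    """Build a QUBO: reward each selected site's yield, penalize pairs of
    sites closer than min_dist (a crude wake-interference model)."""
    Q = {(i, i): -y for i, y in enumerate(yields)}  # negative: QUBO is minimized
    for i, j in combinations(range(len(yields)), 2):
        (x1, y1), (x2, y2) = coords[i], coords[j]
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 < min_dist ** 2:
            Q[(i, j)] = penalty  # sites too close: wake-interference cost
    return Q

def brute_force_solve(Q, n):
    """Enumerate all 2^n site selections and return the lowest-energy one."""
    best_bits, best = None, float("inf")
    for bits in product((0, 1), repeat=n):
        energy = sum(v * bits[i] * bits[j] for (i, j), v in Q.items())
        if energy < best:
            best_bits, best = bits, energy
    return best_bits, best
```

On real problem sizes the 2^n enumeration is replaced by annealing or quantum hardware; the QUBO construction itself stays the same.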

The application of new technologies to abate climate change is needed more than ever. A new report released this week by the World Economic Forum, The Net-Zero Industry Tracker 2023, published in collaboration with Accenture, takes stock of progress towards net-zero emissions for eight industries – steel, cement, aluminum, ammonia (excluding other chemicals), oil and gas, aviation, shipping and trucking – which depend on fossil fuels for 90% of their energy demand and pose some of the most technologically complex and capital-intensive decarbonization challenges.

These emission-intensive sectors, which account for more than 40% of global greenhouse gas emissions, are not aligned with the trajectory to reach net zero by 2050, says the report. Over the past three years, absolute emissions have grown on average by 8% due to increased activity and demand, and all sectors in scope depend on fossil fuels, most with over 90% reliance.

Transitioning these industries to a net-zero future will require a collective investment of approximately $13.5 trillion, prioritizing the electrification of low- to medium-temperature industrial processes, says the report. That amount is needed to scale up the essential technologies and sustainable infrastructure, but investments aren’t enough, says the report. They must be complemented by policies and incentives that can help the industries make the switch while ensuring access to the affordable and reliable resources that are critical for economic growth.


Tuesday, January 28, 2025

Putting AI Into Production


Like many large corporates, Bosch, a German multinational engineering and technology company, is interested in putting AI into production to make its factories more productive and competitive. Although the machines and production lines in its plants are already well-equipped, integrating new sensors and deploying AI technology could further enhance efficiency. While the company has an open innovation department and regularly works with startups and scale-ups, its Maklar, Hungary, plant had little experience in working with young tech companies.

Through the DeepTech Alliance, a private non-profit association of leading European entrepreneurship hubs that specializes in connecting deep tech companies with large corporates, the innovation manager of Bosch’s plant in Maklar connected with IPercept, a Swedish startup that serves as a kind of fitness tracker for industrial machines, leveraging AI to track mechanical movements to do root cause analysis and predictive maintenance.

Bosch tested the technology on a welding machine used in the automotive steering systems it produces. The objective was not just to tackle unplanned downtime but to preempt it, leading to reduced costs, improved product quality and elevated customer satisfaction. The pilot was a success, and the Maklar factory is now looking at how to apply the technology to other production lines, says Gabriel Gudra, the factory’s innovation manager. He and IPercept CEO Karoly Szipka appeared on stage together to talk about their collaboration at an October 9-10 DeepTech Alliance advanced manufacturing meeting in Munich, which brought corporates together with startups targeting the manufacturing sector.
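The article doesn’t disclose IPercept’s algorithms, but the core pattern it describes – watching a machine’s sensor signals and flagging departures from normal behavior before they become downtime – can be sketched with a simple rolling-baseline detector. The toy below is an assumption-laden stand-in (invented window and threshold values, a plain z-score test), not IPercept’s method:

```python
from collections import deque
from math import sqrt

class AnomalyDetector:
    """Flag sensor readings that deviate sharply from a rolling baseline.

    A toy stand-in for condition monitoring: keep a window of recent
    values and z-score each new reading against the window's statistics.
    """

    def __init__(self, window=50, threshold=4.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, reading):
        w = self.window
        is_anomaly = False
        if len(w) == w.maxlen:            # baseline established
            mean = sum(w) / len(w)
            std = sqrt(sum((x - mean) ** 2 for x in w) / len(w)) or 1e-9
            is_anomaly = abs(reading - mean) / std > self.threshold
        if not is_anomaly:                # keep fault readings out of the baseline
            w.append(reading)
        return is_anomaly
```

Fed a steady vibration signal, the detector stays quiet; a sudden spike far outside the rolling statistics is flagged, which is the moment a maintenance ticket would be raised in a real deployment.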

Bosch’s story is just one example of how manufacturers are starting to unlock AI’s potential. The same week the DeepTech Alliance was hosting the advanced manufacturing event in Munich, the World Economic Forum announced the latest additions to its Lighthouse network, a community of 172 industry leaders pioneering the use of cutting-edge technologies in manufacturing. Nineteen manufacturing sites received designations as Fourth Industrial Revolution Lighthouses for achieving step-change impact in performance through technology-enabled transformation. Three others were designated “Sustainability Lighthouses” for their use of advanced technologies to reduce their environmental impact.

The Forum’s latest cohort gained an average 50% boost in labor productivity, attributed to various digital solutions such as interactive training programs, smart devices and wearables, and automated systems that combine robotics, AI and machine vision. Process modelling and root-cause analytics unlocked efficiency gains across Lighthouses’ end-to-end supply chains, on average reducing energy consumption by 22%, inventory by 27%, and scrap or waste by 55%, according to the Forum.

“There are not many examples of successful applications of AI in industry,” says Federico Torti, the Forum’s Initiatives Lead, Advanced Manufacturing and Value Chains. “The lighthouses are demonstrating what can be achieved and serve as an inspiration for other companies.”

What sets the lighthouse companies apart is that they have already made critical investments in their tech stacks, and they design AI use cases for scale, creating an easily replicable package across their production networks, says Torti. They start by fundamentally redesigning processes, reducing variability and waste, and they organize technology deployment around the user, focusing on people’s skills and on how users interact with the technology.

The lighthouse companies, which hail from 10 different countries, are also open to learning from their peers and across sectors, he says. “They are really pushing the boundaries of how they can get value from cutting-edge technologies like AI,” says Torti. “It is not about piloting AI, it is about creating the right ecosystem that will allow their company to transform.”

In general, these companies are taking a long-term-oriented approach, have a vision and are investing in “the required foundational aspects – such as clean, reliable data platforms, a good governance structure and training their workforces with a people-oriented approach – rather than trying to quickly adopt the shiny aspects of AI,” he says.

Global pharmaceutical company AstraZeneca’s Södertälje plant in Sweden – one of the Lighthouse factories in the Forum’s new cohort – is a case in point. The plant has implemented 50+ advanced technology solutions and a significant number incorporate AI or GenAI, Jim Fox, VP Sweden Operations, AstraZeneca, said in written responses to questions from The Innovator.

In drug development, AI predictive modeling is optimizing the physical and chemical properties of Active Pharmaceutical Ingredients (APIs) and predicting the performance of formulated products during manufacturing. What’s more, GenAI, machine learning and large language models (LLMs) are significantly reducing development lead times and the use of API in experiments, says Fox. AI-powered process digital twins are optimizing conditions for yield and productivity in the manufacturing process, reducing the use of raw materials and minimizing tech-transfer requirements. AI-powered tools are also aiding AstraZeneca in achieving its net-zero carbon-footprint goal by pinpointing its environmental emission “hotspots” and the carbon footprint of its products across the entire supply chain.

An important factor in its success to date was preparing “findable, accessible, interoperable, reusable and clean data” to power its AI algorithms and ensuring it is managed with good governance and data standards, says Fox.

Upskilling 3,000 employees to deal with VR/AR, AI computer vision, IoT, sensors, integration of cobots, drones and digital twins was also key. AstraZeneca has a strategic workforce plan that includes both outsourcing/offshoring AI services and building critical internal capabilities to cover the range of its needs, says Fox.

Specific tailored training for employees at the Swedish site was introduced through an extensive online training platform. “We also launched a Digital Academy together with a digital innovation zone for experimentation to sustain our digital capability uplift,” says Fox. In addition, the plant has launched a comprehensive and accredited self-service online program for basic to advanced AI and GenAI training.

“The experience at the Södertälje plant has provided valuable insights into scaling AI and upskilling employees effectively,” he says. “If we had not done [the upskilling] we would not have seen the results that have led to a 56% increase in production and a 67% reduction in development lead times for launching new products.”

The Human Factor

The human factor must be considered in any successful adoption of new technologies, says Antti Rantanen, who runs the industry 4.0 and Industrial AI practice of the Nordics division of EFESO, an international consulting group specialized in helping manufacturers to use technology to advance operations strategy and performance improvement.

He cites a brewery’s manufacturing plant in northern Europe as an example. Most of the employees have been there for 30 to 35 years. “We were walking around on a factory tour when a bottle machine got stuck and the 60-year-old operator knew exactly where to kick it to get it started again,” says Rantanen, a speaker at the DeepTech Alliance Munich event. An estimated 30% of the workforce in Nordic factories will retire in the next 10 years, he says, and that knowledge will disappear. Top-down technology solutions imposed by management are not the right fix.

“Factory operators hate it when headquarters comes and visits,” he says. “They have their own way of doing things.” Take the case of one packaging company Rantanen visited. The company’s leadership installed new SAP software. When leadership visited, shop-floor operators would open laptops to something that looked like the German tech company’s product and repeatedly push their shift buttons to make it look like they were using it. As soon as management left, they would go back to business as usual. “They explained to us that using a complicated IT system does not help them do what they need to do and if they adopted the technology it would ruin their P&L,” says Rantanen, who has 25 years of experience in digital transformation. “This is the situation in most manufacturing plants.”

When headquarters imposes AI solutions or software “they rarely ask the machine operators ‘what do you see as the inefficiencies and how will this connect with the overall value chain?’,” says Rantanen.

EFESO helps companies take a different approach. “We walk through a production plant before we start any process to understand the culture of that plant,” he says. “We usually meet with the CEO, plant manager, safety and maintenance – all the way down to the machine operators – to understand how people work together. We earn the trust of the machine operators, then, we go at this from an operational excellence perspective, find the inefficiencies and build the AI models with partners like SILO to be scaled out.”

Success depends not only on input from the people using the technology, who have deep knowledge of operations; cultural differences must also be considered, he says.

The French manufacturing culture is different from the German or the Swedish, says Rantanen. “Global companies think that they can roll out the same systems and same architectures in the same way, but if you look at a plant in, say, Finland versus one in Brazil, the plants will be completely different in the way they work, their openness to change, in whether they adhere to processes or not,” says Rantanen. “Those things need to be taken into account, not just AI or robotics.” These issues will be ongoing even after AI is implemented and factory jobs change, he says. “AI will be essential for manufacturing companies to stay competitive globally,” says Rantanen. “But it will never work if they don’t take the human elements into account.”

Scaling Up

With all the hype around machine learning and AI, “we get the impression that it is all changing very quickly,” says Thomas Klem Andersen, Executive Director of the DeepTech Alliance. “But a lot of manufacturers are still not there,” he says. “A lot more has to be done, not just for the bigger industrial companies but also the smaller ones, to help connect them to the right deep tech startups, identify use cases and implement them.”


Sunday, January 26, 2025

The Case For Designing A Resilience-By-Design Cybersecurity Strategy


While much attention is being given to AI and quantum computing, there are an estimated 200 critical and emerging technologies shaping today’s technological landscape, each with its own unique cybersecurity implications.

“There is a Pandora’s box full of new technologies coming to market,” warns Dr. Hoda Al Khzaimi, director and founder of the Center for Emerging Technology Accelerated Research (EMARATSEC), and associate vice provost for research translation and entrepreneurship at New York University Abu Dhabi (NYUAD), United Arab Emirates. She is a co-author of a recent World Economic Forum report on Navigating Cyber Resilience in the Age of Emerging Technologies.

Indeed, the rapid growth in investments in emerging technologies – from approximately $4 billion in 2018 to more than $3.2 trillion today – demonstrates a significant surge in global interest and development, underscoring the need for a broad, inclusive approach to technology assessment and strategy development, says the report.

In the face of this complex and evolving threat landscape, a traditional mindset of “security by design”, which focuses on embedding security features into new technologies from the outset, is no longer sufficient, says Al Khzaimi, who is also co-chair of the Forum’s Global Future Council for Cybersecurity and director emeritus of NYUAD’s Centre for Cybersecurity. Instead, there is a pressing need to adopt a “resilience by design” approach, which ensures that systems can withstand and recover from the inevitable attacks that will occur as these technologies proliferate, she says. This approach involves embedding resilience principles into every stage of technology development and deployment. In practical terms, it means enabling continuous monitoring, developing rapid-response capabilities and cultivating the ability to learn from incidents to strengthen defenses over time, so companies can recover quickly with minimal impact.

Distinguishing Between Critical And Emerging Technologies

The first step in developing a resilience-by-design strategy is distinguishing between critical technologies, which have already achieved a certain level of maturity, and emerging technologies, says Al Khzaimi.

Critical technologies, such as smart and new material science, semiconductors and new means of energy generation, are already foundational and essential to national security and economic competitiveness, demanding immediate and sustained investments to protect them from cyber threats. In contrast, emerging technologies, such as AI, quantum computing and synthetic biology, are still at the developmental stage but have the potential to become critical as their applications expand and their strategic importance becomes more apparent. The two are not mutually exclusive. Some technologies fall into both categories. “This fluidity necessitates a flexible approach to cybersecurity that can adapt to both current and future risks, ensuring preparedness for a range of possible scenarios,” says the report.

Anticipating Worst Case Scenarios

While emerging technologies hold great promise for innovation and advancement across sectors, it is essential to consider the potential security challenges they might pose, says the report.

For example, biotechnology advances such as DNA data storage technologies raise questions about long-term data security and potential biological data breaches. Synthetic biology could potentially be used to create designer pathogens or manipulate existing organisms in unforeseen ways. If risks are not hedged within a certain framework of ethical and responsible development, the convergence of AI and biotechnology raises concerns about the potential for creating self-evolving biological systems.

Neuromorphic computing, an approach to computer engineering that designs hardware and software systems to mimic the structure and function of the human brain, may pose other risks. It can improve efficiency and allow machines to perform more complex tasks, but brain-like computing architectures could also prove vulnerable to new types of attack that exploit their learning capabilities, says the report.

Advanced 3D displays such as holograms could also be used for sophisticated phishing or social engineering attacks. Securing the data used to generate holograms is crucial in preventing unauthorized replication as there is potential for the creation of false environments that could manipulate decision-making in critical situations, the report says.

Creating Collective Cyber Resilience

Given the sheer volume of new threats, how can companies and countries cope?

The Forum report contains three case studies. One illustrates how the French multinational Schneider Electric is using generative AI (GenAI) to generate programmable logic controller (PLC) code within industrial control systems. This application of AI can help enhance operational efficiency and strengthen cybersecurity by automating code generation and improving code quality.
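
The report does not describe Schneider Electric’s implementation in detail, but the pattern it points to can be illustrated: a model drafts PLC code and an automated gate reviews the draft before anything reaches a controller. The sketch below is hypothetical; the deny-list patterns, the required element and the function names are invented for illustration and are not Schneider Electric’s API.

```python
import re

# Illustrative GenAI-assisted PLC workflow: a model drafts IEC 61131-3
# Structured Text, and a validation gate checks the draft before it is
# deployed. Patterns and names here are assumptions, not a real product.

BANNED = [r"\bJMP\b", r"hard-?coded\s+password"]  # illustrative deny-list
REQUIRED = [r"\bEND_PROGRAM\b"]                    # minimal completeness check

def validate_generated_st(code: str) -> list[str]:
    """Return a list of findings; an empty list means the draft passes the gate."""
    findings = []
    for pat in BANNED:
        if re.search(pat, code, re.IGNORECASE):
            findings.append(f"banned construct: {pat}")
    for pat in REQUIRED:
        if not re.search(pat, code):
            findings.append(f"missing required element: {pat}")
    return findings

draft = "PROGRAM PumpControl\n  IF Level > 80 THEN Pump := FALSE; END_IF;\nEND_PROGRAM"
print(validate_generated_st(draft))  # []
```

The point of the gate is that automated generation and automated review travel together: quality and security checks run on every draft, not only on code a human happens to inspect.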

Another case study talks about how Singapore is working with multiple stakeholders on a critical information infrastructure (CII) supply chain program. “This program is a living blueprint that evolves to tackle changing risks and outlines guidelines to support stakeholders in risk management and cyber contracts,” says the report. “It prioritizes international cooperation to support cyber-risk management in supply chains with international and regional partners, working towards harmonizing cybersecurity standards across jurisdictions.”

The third case study highlights how the United Arab Emirates (UAE) is using emerging technologies at the national level to drive both technological innovation and cyber resilience. UAE government bodies are developing technologies such as AI, blockchain, quantum computing, 5G, IoT, digital assets, connected vehicles and smart cities with the goal of transforming sectors across the UAE, establishing it as a leader in technological innovation. The UAE is, for example, planning to transition all government transactions to blockchain by 2025 and has created the first official government body dedicated to the regulation of virtual assets. The Dubai Roads and Transport Authority’s work on autonomous vehicles and the Dubai Electricity and Water Authority’s AI-powered operations are examples of the integration of AI into critical infrastructure while “considering safety, efficiency and decision-making,” says the report.

Relevant UAE government agencies are teaming with private-sector companies, research institutions and international partners in an integrated way to progress innovation while building in cybersecurity and anticipating future issues.

“They have an open assessment platform for all the new technologies that is being co-developed with different members of the private sector,” says Al Khzaimi. “What they are doing is creating collective resilience, by involving all of the stakeholders and not just the regulator and the government.”

The report acknowledges that this model is not applicable to all countries but endorses the principle that all nations must aim to de-risk the potential threats of emerging technologies. “Understanding what types of bodies and what types of public–private collaboration lead to the most productive outcomes will ultimately serve more than just a single nation,” says the report.

Putting Resilience-By-Design In Practice

In addition to promoting cross-sector collaboration to build comprehensive cyber resilience, the Forum report contains a list of practical suggestions for countries that want to put resilience-by-design into practice. They include:
Focus dedicated research on fields such as quantum computing, blockchain, IoT and biotechnology to develop new technologies designed with inherent capabilities to detect, respond to and recover from cyberthreats.
Strategically integrate emerging technologies into critical infrastructure sectors such as energy, healthcare, finance and transportation. Technologies such as quantum-resistant cryptography, IoT-enabled predictive maintenance and blockchain-based security protocols can enhance the resilience of these sectors.
Develop data-driven frameworks for technology and cyber governance with clear metrics for evaluating technology readiness, impact assessments and risk management.
Create training programs focused on emerging technology security, such as quantum computing, IoT and biotechnology to ensure workers have the right skills.
Implement ethical guidelines for emerging technologies.
Adopt novel solutions tailored to local needs rather than existing technologies to reduce dependency on external technologies and promote local innovation ecosystems.
Establish continuous monitoring and incident response planning in cybersecurity practices.
Build trust in emerging technologies by communicating openly about cybersecurity measures, risks and responses to incidents.

“Emerging technologies require a multifaceted approach that integrates security, resilience, sustainability and quantifiable risk measurements into all aspects of technology development and deployment,” says the report. “By adopting these practical recommendations, leaders can enhance cyber resilience, promote responsible innovation and build a secure digital future.”

Visit: https://innovatorawards.org/

Wednesday, January 22, 2025

New Report Weighs The Benefits And Risks Of AI Agents


The first phase of AI was predictive, the second was generative. Now the third wave is here: autonomous AI agents that can not only recommend actions but can reason and tackle multi-faceted projects without requiring human oversight at every step.

By 2027, half of companies that use GenAI will have launched AI agents, according to Deloitte.

“This is a trend,” Cathy Li, the World Economic Forum’s Head, AI, Data and Metaverse and Deputy Head of Center for Fourth Industrial Revolution (C4IR), said in an interview with The Innovator. “It is already here so we need to make sure we think about the ramifications.”

To that end, the Forum and Capgemini published a new white paper on December 16, Navigating the AI Frontier: A Primer on the Evolution and Impact of AI Agents.

AI agents’ ability to manage complex tasks with minimal human intervention offers the promise of significantly increased efficiency and productivity, says the white paper. Additionally, AI agents could play a crucial role in addressing skills shortfalls in various industries, filling the gaps in areas where human expertise is lacking or in high demand. As the technology progresses, AI agents are expected to be able to tackle open-ended, real-world challenges such as helping in scientific discovery, improving the efficiency of complex systems like supply chains or electrical grids, managing rare, non-routine processes that are too infrequent to justify traditional automation, or enabling physical robots that can manipulate objects and navigate physical environments.

But AI agents also pose certain risks. Technical risks include errors and malfunctions and security issues including the potential for automating cyberattacks. The autonomous nature of AI agents raises ethical questions about decision-making and accountability and there are socioeconomic risks around potential job displacement and over-reliance and disempowerment.

The white paper urges corporates to take measures to mitigate these risks. It recommends:
Establishing clear ethical guidelines that prioritize human rights, privacy and accountability, to ensure that AI agents make decisions aligned with human and societal values.
Prioritizing data governance and cybersecurity before deploying AI agents.
Implementing public education and awareness strategies to mitigate the risks of over-reliance and disempowerment in social interactions with AI agents.
Improving the transparency of agents and implementing “human-in-the-loop” oversight, enabling agents to work autonomously while human experts review decisions after they’ve been made.
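
The oversight pattern in the last recommendation can be sketched in a few lines: the agent acts autonomously, but every decision lands in a review queue where a human inspects it after the fact and flags anything that needs correction. The queue, the decisions and the reviewer callback below are all invented for illustration; the white paper does not prescribe an implementation.

```python
from dataclasses import dataclass, field

# Hedged sketch of human-in-the-loop oversight: autonomous decisions are
# recorded as they are made, then reviewed by a human afterwards. Flagged
# decisions can be escalated for correction or rollback.

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    flagged: list = field(default_factory=list)

    def submit(self, decision: str) -> None:
        """Called by the agent each time it acts; the action is not blocked."""
        self.pending.append(decision)

    def review(self, verdict_fn) -> None:
        """A human reviewer (modeled as a callback) inspects each decision."""
        for decision in self.pending:
            if not verdict_fn(decision):
                self.flagged.append(decision)  # escalate for correction
        self.pending.clear()

queue = ReviewQueue()
queue.submit("approve refund of $40")
queue.submit("approve refund of $9000")
queue.review(lambda d: "9000" not in d)  # reviewer flags the outsized refund
print(queue.flagged)  # ['approve refund of $9000']
```

The design choice worth noting is that review happens after execution, which preserves the agent’s autonomy but makes accountability depend on how quickly flagged decisions can be reversed.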

The Forum’s AI Governance Alliance, an initiative that unites industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems, has been working on the topic of AI agents for the last 18 months, says Li, and AI agents will be on the agenda at the Forum’s annual meeting in Davos January 20-25.

“There is more in-depth work to be done,” she says. “There are issues that need to be tackled because AI agents could exploit loopholes or act in unintended ways socioeconomically. There is also concern about job displacement. If deployed in the right way it’s not about replacing humans, it’s more about augmenting what we do, but at the same time each company and organization needs to deploy AI responsibly and keep the potential impact on the workforce in mind.”

Tuesday, January 21, 2025
“By investing and experimenting in quantum technologies now, HSBC is not just preparing for the future; we’re shaping it,” Colin Bell, CEO of HSBC Bank and HSBC Europe, said in a statement.

HSBC is one of a number of banks helping pioneer tech solutions that combine AI and quantum, or AQ for short. Shoring up cybersecurity is a key reason, but it is not the only one. Combining the two technologies can improve business outcomes across a broad spectrum of industries, from financial services and healthcare to aerospace and manufacturing, say industry experts.

Using AQ now can also be a hedge against the day that fault-tolerant quantum computers become a reality. Since quantum computing is a step-change technology with substantial barriers to adoption, industry pundits say early movers will seize a large share of the total value, while those who have not prepared may not be able to catch up and could see their businesses wiped out overnight.

That prospect is so worrying that a new acronym cropped up during the World Economic Forum’s annual meeting in Davos in January, says Clement Jeanjean, a Paris-based senior director at SandboxAQ, a company spun out from Google parent Alphabet that delivers AQ solutions that run on today’s classical computing platforms. “Instead of FOMO [Fear of Missing Out] people were talking about FOBO: Fear of Becoming Obsolete,” he says. “There is a race to make sure that historical incumbents in industry verticals are not put out of business overnight.”

There was also concern in Davos about not just companies but countries falling dramatically behind. “The growing global quantum divide between countries with established quantum technology programs and those without will lead to significant imbalances in core areas such as healthcare, finance, manufacturing and more,” according to the Quantum Economy Blueprint report published by the World Economic Forum on January 16, during the annual meeting.

Current worldwide public sector investments in quantum computing exceed $40 billion, says the report. But only 24 out of the 193 member states of the United Nations have some form of national initiative or strategy to support quantum technology development.

“We do not want to risk a quantum divide and hope the blueprint will enable discussions for different regions to participate and benefit from the quantum economy,” says Arunima Sarkar, the Forum’s Lead, Quantum Technologies. “We look forward to piloting the blueprint, working with policymakers in developing regional and national roadmaps for leveraging the potential of this technology and preparing for the transition to the post-quantum era.”

The Forum takes a much wider view of quantum technologies, including not just computing but also sensing and communications, says Sarkar. “Some of these technologies are already being deployed while for others we see rapid advancements,” she says. “Eventually these technologies are expected to permeate every sector of society, creating what we call the Quantum Economy.”