Tuesday, June 25, 2024

Can Europe Compete On AI? For Some The Answer Is Probabl

On May 27, French President Emmanuel Macron and German Chancellor Olaf Scholz published an op-ed in the Financial Times focused on a plan to make Europe a world-class industrial and technological leader, while simultaneously making the EU the first climate-neutral continent.

“Together, we will advocate to strengthen the EU’s sovereignty and reduce our critical dependencies…,” wrote the French and German leaders. “With an ambitious industrial policy, we can enable the development and rollout of key technologies of tomorrow, such as AI, quantum technologies, space, 5G/6G, biotechnologies, Net Zero technologies, mobility and chemicals. We call for strengthening the EU’s technological capabilities by promoting cutting-edge research and innovation and necessary infrastructures, including those regarding artificial intelligence and health.”

The op-ed followed a May 21 meeting of France’s top AI talent convened by Macron, which focused in part on using open source to reinforce Europe’s technological autonomy, with participants arguing that an open approach is the best way to make available the building blocks needed to create sovereign AI for Europe.

France’s national 2030 program, which treats AI as both an innovation accelerator and a differentiator, has set aside a budget of €32 million to develop and maintain open source tools. Part of that budget is going to a startup called Probabl, a spin-off of the French research center Inria, which has been financing scikit-learn, a global open source data science library widely used for complex AI and machine learning tasks.

The reinforcement of the scikit-learn open source library is “particularly targeted” for support, says a recently released French government press kit about France’s 2030 program and its AI strategy.

Only two other libraries of that scale exist in the world: TensorFlow, created by Google, and PyTorch, created by Meta, which powers, among other things, OpenAI’s ChatGPT and Tesla’s Autopilot. While it does not cover the same machine learning techniques, the open source scikit-learn library is ahead of both in popularity and usage.

According to independent measurement by Pypistats.org, the open source scikit-learn library, backed by Inria, supported by a global community of contributors and now overseen by Probabl, has been downloaded 1.5 billion times, averaging 65 million downloads per month (22% from the U.S., 25% from China and 3% from France). It generates more dependencies than PyTorch and TensorFlow combined. Dependencies are the projects and packages that depend on scikit-learn, i.e. where scikit-learn is a core component that helps build additional value. Because it is popular middleware, it is foundational to nearly a million projects and 15,000 “packages” (i.e. projects that are more structured and significant).

“We need to take advantage of open source to minimize dependencies on monolithic, proprietary and captive technologies,” says serial entrepreneur and Probabl CEO Yann Lechelle. “The U.S. enjoys near supremacy when it comes to chips, cloud and software. This isn’t great for Europe or any other nation for that matter. If you don’t control your technological infrastructure, you have no sovereignty.”

Scikit-learn, Probabl’s capstone software, is a Python library widely used by machine learning teams working on tabular and quantitative data. The library addresses a wide range of problems, notably classification, regression, clustering, and dimensionality reduction. Scikit-learn supports diverse algorithms, ranging from traditional statistical models to neural networks. The number one use case is health, i.e. accelerating the discovery of medicines and identifying patterns and symptoms, says Lechelle. Other use cases include fraud detection for the financial services industry, logistics optimization and forecasting. “Scikit-learn is ideal for everything that looks like a spreadsheet,” says Lechelle. “Whether there is a stream of data or patterns of behavior or logs, this is the best tool because it can pinpoint and create probabilistic and predictive models out of all of those things.”
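
Since scikit-learn is itself a Python library, a few lines are enough to illustrate the kind of tabular, “spreadsheet-like” workflow Lechelle describes. The minimal sketch below uses only scikit-learn’s public API and one of its bundled toy datasets (an illustrative stand-in, not one of the real-world use cases mentioned above) to train and score a simple classifier:

```python
# Minimal scikit-learn sketch: classification on tabular data.
# The bundled breast-cancer dataset is used purely as an illustrative stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Tabular data: rows of numeric measurements, one label per row.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a classical (non-deep-learning) model and evaluate it on held-out rows.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The same pattern, swapping in a regressor, clustering estimator or dimensionality-reduction transformer, covers the other problem families the library targets.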

The company, which was launched on February 1 by a team of 14, including 10 people from Inria and Lechelle, the former CEO of cloud hosting company Scaleway and founder of five other startups, is intent on keeping the software library open source. It has published a manifesto that says:
Individuals, academics and researchers, engineers and data scientists, small companies and large enterprises alike, as well as nation states should be on a level playing field and have unbridled access to such resources and technologies.
Users should choose how they are deployed, on-premises or in a cloud-agnostic way.
Users should not be locked-in with any provider.
The open-source approach is a way to maximize adoption, trust and eco-systemic value creation.
Choosing open-source strengthens data and operational sovereignty at all levels.

The company has inscribed its open source mission into its bylaws, a rare occurrence and a strong signal to all stakeholders, providing much needed alignment over the long term, says Lechelle.

“What is interesting about our approach is that you can do anything you want with the library,” he says. “We have a very permissive license.”

While Scikit-learn was largely developed by researchers and research engineers working at Inria, it has received financial support in the form of donations from BNP Paribas Cardif, Chanel, AXA and the startups Hugging Face and Dataiku.

Now that the library is being overseen by Probabl, a for-profit company, the hope is that corporate buyers and the government will pay for the commercial managed software-as-a-service (SaaS) model that is being built on top of scikit-learn. Probabl’s commercial activities will include training, support, certification, hosting and providing managed services and professional services for both government and big corporate clients, says Lechelle.

It is currently in the process of raising money from private investors to augment the funding from the French government.

Probabl is one of a growing number of private French companies targeting AI. A company called H, founded by former Google DeepMind scientists, which claims it has developed a more powerful and efficient way to build foundation models, has just raised a $220 million seed financing round that includes European, U.S. and Asian investors as well as the French government. France now has more than 600 AI startups and 76 of them are focused on GenAI; 50% are profitable or expect to be in the next three years, according to the French government. Few are open source.

What Does Openness Mean In The Age Of AI?

During the earlier days of the Internet, open source – technologies whose source code users can download and use for free – played a core role in promoting innovation and safety. Open source technology provided a core set of building blocks that software developers have used to do everything from creating art to designing vaccines to developing apps used by people all over the world; it is estimated that open source software is worth over $8 trillion, says a report entitled The Columbia Convening on Openness and AI, compiled by Mozilla and the Institute of Global Politics.

Today, open source approaches for artificial intelligence — and especially for foundation models — offer the promise of similar benefits to society, says the report. However, defining and empowering “open source” for foundation models has proven tricky, given its significant differences from traditional software development.

Indeed, critics say OpenAI’s name is a misnomer because its products are closed source and proprietary. Meanwhile, Mistral AI, a French startup created with the promise of building a European champion that could compete against U.S. giants like ChatGPT creator OpenAI with an open-source product reflecting the Continent’s push for more transparency in the sector, has disappointed some supporters. Mistral AI is partnering with Microsoft (the tech giant took a stake in Mistral) to distribute its new large language model (LLM), Mistral Large, which is not open source. Unlike Mistral’s first open-source model releases, Mistral Large is a ready-made API that can be used for a fee and gives no access to the code.

The Columbia Convening on Openness and AI report seeks to define openness in the age of AI, a move welcomed by Lechelle.

“A nuanced framework will help structure the conversation and avoid ‘open source washing’,” the Probabl CEO wrote on LinkedIn. “Open source should normally be a strict definition, i.e. every item should be provided so that, from the source (code and data), it is possible to recompile/rebuild the output. For better or worse, the term ‘open sourcing’ has become a common verb, signifying ‘putting something out there in the open to let people play with.’ With deep learning and the encoding of massive knowledge bases, some LLMs have been released as ‘open source’ without paying much attention to the semantics of the term… Let’s avoid concentration of knowledge by just a handful of companies.”

Probabl’s mission “is to build, sustain and maintain open source libraries for data scientists, but we are also a for-profit company that is building a product around it with added value for data scientists,” says Lechelle. “We will offer paid and non-paid access. Our approach is reversible, so if at some point a data scientist has more time than money, they can revert to the free version.”

In both cases, what Probabl offers its clients is sovereignty, says Lechelle. “Our tagline is: Own your Data Science.”

Lechelle says he is out to prove that an open source approach to AI can be profitable. Probabl’s inspiration is Red Hat, the American open source software company that did just that and was sold to IBM in 2019 for $34 billion.

“We are a hybrid play, one where economic interest binds with public interest, yielding both financial and societal dividends,” says Lechelle. “We want to show that another way is possible to distribute wealth and impact.”

If it succeeds, Probabl’s library could end up helping Europe achieve its goal of technology sovereignty and pave the way for an alternative to the offerings of U.S. and Chinese AI giants.

Visit: https://innovatorawards.org/

Thursday, June 13, 2024

24 New Technology Trends in 2024: Exploring the Future



Technology today is evolving at a rapid pace, and the rate of change itself is accelerating. It is not only technology trends and emerging technologies that are evolving; much more has changed, and IT professionals are realizing that their role will not stay the same in the contactless world of tomorrow. An IT professional in 2024 will constantly be learning, unlearning, and relearning (out of necessity, if not desire).

What does this mean for you in the context of the highest paying jobs in India? It means staying current with emerging technologies and the latest technology trends. And it means keeping your eyes on the future to know which skills you will need to secure a safe job tomorrow, and even learning how to get there. Here are the top 24 emerging technology trends you should watch for, and try your hand at, in 2024, possibly securing one of the highest paying tech jobs that these new technology trends will create. Let's start the list of new tech trends with the talk of the town, gen-AI!
1. AI-Generated Content

Artificial intelligence can generate high-quality, creative content, including text, images, videos, and music. This technology uses models like GPT (Generative Pre-trained Transformer) and DALL-E to understand and produce content that resonates with human preferences. The applications are vast, ranging from generating articles, creating educational materials, and developing marketing campaigns to composing music and producing realistic visuals. This speeds up content creation, reduces costs, and democratizes access to creative tools, enabling small businesses and individuals to create content at scale.
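
As a hedged illustration of how accessible this has become (not a recommendation of any particular product), an openly available text-generation model can be driven from a few lines of Python with the Hugging Face transformers library; the model name "gpt2" below is just a small, freely downloadable stand-in:

```python
# Sketch of AI-generated text with an open model via Hugging Face transformers.
# Assumes `pip install transformers torch`; "gpt2" is an illustrative small model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Three ideas for a small-business marketing campaign:"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```
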
2. Quantum Computing

Quantum computers leverage the properties of quantum mechanics to process information exponentially faster than classical computers for specific tasks. This year, we're seeing quantum computing being applied in areas such as cryptography, where it could potentially crack codes currently considered secure, and in drug discovery, where it speeds up the process by accurately simulating molecular structures. The technology is still nascent but poised to revolutionize industries by solving complex problems that are intractable for traditional computers.
3. 5G Expansion

The fifth generation of mobile networks, 5G, promises significantly faster data download and upload speeds, wider coverage, and more stable connections. The expansion of 5G is facilitating transformative technologies like IoT, augmented reality, and autonomous vehicles by providing the high-speed, low-latency connections they require. This technology is crucial for enabling real-time communications and processing large amounts of data with minimal delay, thereby supporting a new wave of technological innovation.
4. Virtual Reality (VR) 2.0

Enhanced VR technologies are offering more immersive and realistic experiences. With improvements in display resolutions, motion tracking, and interactive elements, VR is becoming increasingly prevalent in gaming, training, and therapeutic contexts. New VR systems are also becoming more user-friendly, with lighter headsets and longer battery life, which could lead to broader consumer adoption and integration into daily life.
5. Augmented Reality (AR) in Retail

AR technology is transforming the retail industry by allowing consumers to visualize products in a real-world context through their devices. This trend is evident in applications that let users try on clothes virtually or see how furniture would look in their homes before purchasing. These interactive experiences enhance customer satisfaction, increase sales, and reduce return rates.
6. Internet of Things (IoT) in Smart Cities

IoT technology in smart cities involves the integration of various sensors and devices that collect data to manage assets, resources, and services efficiently. This includes monitoring traffic and public transport to reduce congestion, using smart grids to optimize energy use, and implementing connected systems for public safety and emergency services. As cities continue to grow, IoT helps manage complexities and improve the living conditions of residents.
7. Biotechnology in Agriculture

Advances in biotechnology are revolutionizing agriculture by enabling the development of crops with enhanced traits, such as increased resistance to pests and diseases, better nutritional profiles, and higher yields. Techniques like CRISPR gene editing are used to create crops that can withstand environmental stresses such as drought and salinity, which is crucial in adapting to climate change and securing food supply.
8. Autonomous Vehicles

Autonomous vehicles use AI, sensors, and machine learning to navigate and operate without human intervention. While fully autonomous cars are still under development, there's significant progress in integrating levels of autonomy into public transportation and freight logistics, which could reduce accidents, improve traffic management, and decrease emissions.
9. Blockchain Beyond Crypto

Initially developed for Bitcoin, blockchain technology is finding new applications beyond cryptocurrency. Industries are adopting blockchain for its ability to provide transparency, enhance security, and reduce fraud. Uses include tracking the provenance of goods in supply chains, providing tamper-proof voting systems, and managing secure medical records.
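
The mechanism behind that tamper-resistance is simple enough to sketch: each block stores a cryptographic hash of the previous block, so altering any past record invalidates every block that follows. The toy Python example below shows only this core idea; real systems add consensus, digital signatures and networking on top:

```python
# Toy hash-chain illustrating why a blockchain is tamper-evident.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    # Every block must reference the hash of the block before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
add_block(chain, "shipment 42 left the factory")
add_block(chain, "shipment 42 arrived at the warehouse")
print(is_valid(chain))            # True
chain[0]["data"] = "tampered"     # altering history...
print(is_valid(chain))            # ...breaks the chain: False
```
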
10. Edge Computing

Edge computing involves processing data near the source of data generation rather than relying on a central data center. This is particularly important for applications requiring real-time processing and decision-making without the latency that cloud computing can entail. Applications include autonomous vehicles, industrial IoT, and local data processing in remote locations.
11. Personalized Medicine

Personalized medicine tailors medical treatment to individual characteristics of each patient. This approach uses genetic, environmental, and lifestyle factors to diagnose and treat diseases precisely. Advances in genomics and biotechnology have enabled doctors to select treatments that maximize effectiveness and minimize side effects. Personalized medicine is particularly transformative in oncology, where specific therapies can target genetic mutations in cancer cells, leading to better patient outcomes.
12. Neuromorphic Computing

Neuromorphic computing involves designing computer chips that mimic the human brain's neural structures and processing methods. These chips process information in ways that are fundamentally different from traditional computers, leading to more efficient handling of tasks like pattern recognition and sensory data processing. This technology can deliver substantial improvements in energy efficiency and computational power, particularly in applications requiring real-time learning and adaptation.
13. Green Energy Technologies

Innovations in green energy technologies focus on enhancing the efficiency and reducing the costs of renewable energy sources such as solar, wind, and bioenergy. Advances include new photovoltaic cell designs, wind turbines operating at lower wind speeds, and biofuels from non-food biomass. These technologies are crucial for reducing the global carbon footprint and achieving sustainability goals.
14. Wearable Health Monitors

Advanced wearable devices now continuously monitor various health metrics like heart rate, blood pressure, and even blood sugar levels. These devices connect to smartphones and use AI to analyze data, providing users with insights into their health and early warnings about potential health issues. This trend is driving a shift towards preventive healthcare and personalized health insights.
15. Extended Reality (XR) for Training

Extended reality (XR) encompasses virtual reality (VR), augmented reality (AR), and mixed reality (MR), providing immersive training experiences. Industries like healthcare, aviation, and manufacturing use XR for risk-free, hands-on training simulations replicating real-life scenarios. This technology improves learning outcomes, enhances engagement, and reduces training costs.
16. Voice-Activated Technology

Voice-activated technology has become more sophisticated, with devices now able to understand and process natural human speech more accurately. This technology is widely used in smart speakers, home automation, and customer service bots. It enhances accessibility, convenience, and interaction with technology through hands-free commands and is increasingly integrated into vehicles and public spaces.
17. Space Tourism

Commercial space travel is making significant strides with companies like SpaceX and Blue Origin. These developments aim to make space travel accessible for more than just astronauts. Current offerings range from short suborbital flights providing a few minutes of weightlessness to plans for orbital flights. Space tourism opens new avenues for adventure and pushes the envelope in aerospace technology and research.
18. Synthetic Media

Synthetic media refers to content that is entirely generated by AI, including deepfakes, virtual influencers, and automated video content. This technology raises critical ethical questions while offering extensive possibilities for entertainment, education, and media production. It allows for the creation of content that is increasingly indistinguishable from content produced by humans.
19. Advanced Robotics

Robotics technology has evolved to create machines that can perform complex tasks autonomously or with minimal human oversight. These robots are employed in various sectors: in manufacturing, where they perform precision tasks; in healthcare, as surgical assistants; and in homes, as personal aids. Advances in AI and machine learning are making robots even more capable and adaptable.
20. AI in Cybersecurity

AI is critical in enhancing cybersecurity by automating complex processes for detecting and responding to threats. AI systems can analyze vast amounts of data for abnormal patterns, predict potential threats, and implement real-time defenses. This trend is crucial in addressing the increasing sophistication and frequency of cyber attacks.
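
As a toy illustration of the pattern-analysis idea (not any specific security product), an unsupervised anomaly detector can flag records that look unlike the bulk of the traffic; the sketch below uses scikit-learn's IsolationForest on synthetic data standing in for real traffic or log features:

```python
# Hedged sketch of AI-assisted anomaly detection on synthetic "traffic" features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))   # typical records
spikes = rng.normal(loc=6.0, scale=1.0, size=(10, 4))     # unusual records
X = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)   # +1 = looks normal, -1 = flagged as anomalous
print("flagged records:", int((labels == -1).sum()))
```
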
21. Digital Twins

Digital twins are virtual replicas of physical devices or systems used for simulation, monitoring, and maintenance. They are extensively used in manufacturing, automotive, and urban planning to optimize operations and predict potential issues. Digital twins enable companies to test changes and their impacts in a virtual space, reducing the cost and time of real-world testing.
22. Sustainable Tech

This trend focuses on developing technology in an environmentally and socially responsible manner. It includes innovations in the lifecycle management of tech products, from design to disposal. The aim is to reduce electronic waste, improve energy efficiency, and use environmentally friendly materials.
23. Telemedicine

Telemedicine allows patients to consult with doctors via digital platforms, reducing the need for physical visits. It became vital for providing continued medical care during situations like the COVID-19 pandemic. Telemedicine is expanding to include more services and is becoming a regular mode of healthcare delivery.
24. Nanotechnology

Nanotechnology involves manipulating matter at the atomic and molecular levels, enhancing or creating materials and devices with novel properties. Applications are vast, including more effective drug delivery systems, enhanced materials for better product performance, and innovations in electronics like smaller, more powerful chips.
Top 24 Jobs Trending in 2024
AI Specialist: Designing, programming, and training artificial intelligence systems.
Quantum Computing Engineer: Developing quantum algorithms and working on quantum hardware.
Data Privacy Officer: Ensuring companies adhere to privacy laws and best practices.
5G Network Engineer: Installing, maintaining, and optimizing 5G networks.
Virtual Reality Developer: Creating immersive VR content and applications for various industries.
Augmented Reality Designer: Designing AR experiences for retail, training, and entertainment.
IoT Solutions Architect: Designing and implementing comprehensive IoT systems for smart cities and homes.
Genomics Biologist: Conducting research and development in genetics to create personalized medicine solutions.
Autonomous Vehicle Engineer: Developing software and systems for self-driving cars.
Blockchain Developer: Building decentralized applications and systems using blockchain technology.
Edge Computing Technician: Managing IT solutions at the network's edge, close to data sources.
Personalized Healthcare Consultant: Offering health advice based on personal genetic information.
Neuromorphic Hardware Engineer: Designing chips that mimic the human brain's neural structure.
Renewable Energy Technician: Specializing in installing and maintaining solar panels, wind turbines, and other renewable energy sources.
Wearable Technology Designer: Creating devices that monitor health and provide real-time feedback.
XR Trainer: Developing and facilitating training programs using extended reality technologies.
Voice Interaction Designer: Crafting user interfaces and experiences for voice-activated systems.
Commercial Space Pilot: Piloting vehicles for space tourism and transport missions.
Synthetic Media Producer: Producing AI-generated content for media and entertainment.
Advanced Robotics Engineer: Designing robots for manufacturing, healthcare, and personal assistance.
Cybersecurity Analyst: Protecting organizations from cyber threats and managing risk.
Digital Twin Engineer: Creating and managing virtual replicas of physical systems.
Sustainable Technology Specialist: Developing eco-friendly technologies and practices within tech industries.
Telehealth Technician: Supporting the technology that enables remote health services.
One Solution to Succeed in 2024

Although technologies are emerging and evolving all around us, these 24 technology trends offer promising career potential now and for the foreseeable future. Most of these trending technologies are welcoming skilled professionals, meaning the time is right to choose one, get trained, and get on board at an early stage, positioning yourself for success now and in the future.

Saturday, June 8, 2024

2024 Research and Innovation Awards highlights excellence across Queen Mary University of London



"The Awards demonstrate the incredible breadth of Queen Mary's research excellence and the many profound ways our innovation is changing our world for the better”, said Professor Andrew Livingston, Vice-Principal (Research and Innovation) at Queen Mary University of London. “What sets Queen Mary research apart is our commitment to our communities and our partners, and it's been such a pleasure to celebrate those values in action."

The award categories and winners were:

Impact: Culture, Civic, Community and Policy

Winner: Indigenous Exchange and Climate Action, People’s Palace for their cultural exchange programme between indigenous and non-indigenous artists in the Brazilian Amazon, foregrounding indigenous experiences of climate.
Highly commended: N2O: Know the Risks, a student-led nitrous oxide public health initiative, and Teaching London Computing, which has transformed computer science teaching in schools over 20 years.

Vice Principal’s Award for Research Excellence
Winner: Professor Lars Chittka for his work on social insects, including bees, and for his commitment to communicating his findings on how we understand sentience, personhood and ecological citizenship.
Highly commended: Dr Caroline Roney for her work on digital twins and virtual representations of real-life human organs, and Professor Rachael Mulheron for her research into Damages-Based Agreements reforms (governing no-win, no-fee legal agreements).

Interdisciplinary Team
Winner: Early Career Researchers Alexander Stoffel and Ida Roland Birkvad (LSE), for their work on Trans Theory in International Relations, a new field fusing gender theory and international politics.
Highly commended: The Queen Mary+Emulate Organs-on-Chips Centre, which allows researchers to model organs of their own design for use in experiments; and also PETs4SMEs, a multidisciplinary team which has created a Privacy Starter Pack for small and medium sized enterprises.

Early Career Researcher
Winner: Dr Yuanwei Liu, both for his outstanding achievements and for his strongly inclusive approach. Dr Liu’s research focuses on simultaneous transmission and reflection surfaces, and he is the Principal Investigator of the STAR laboratory, which he founded and which is at the forefront of a global RDI agenda.
Highly commended: Dr Colm Murphy for his writing work and strong record of policy and political engagement, and Dr Layli Uddin for her pioneering public histories and community empowerment work in Bangladesh and Tower Hamlets.

Research Support
Winner: Coleen Colechin, Senior Operations Manager (Pre-Award) at the Queen Mary-Barts NHS Trust Joint Research Management Office, for her extensive efforts to mentor staff and her work developing the Research Operations environment.
Highly commended: the Research Support Team in the Faculty of Humanities and Social Sciences for their ambitious, targeted and collegiate approach to research support; and Petra Ungerer, a Technical Facilities Manager in the School of Biological and Behavioural Sciences, for her work establishing a technician career development plan.

Research supervision
Winner: Professor Kimberly Hutchings for her work successfully supervising PhD students through to completion and her commitment to nurturing and developing future research leaders.
Highly commended: Dr Mathieu Barthet for his commitment to engaging industry in PhD training and successfully mentoring his students in applied research, and Dr Nicholas Tsitsianis, for creating a collegiate approach to the supervisory relationship.

Impact: Enterprise and Commercial Innovation
Winner: Dragonfly AI, a successful predictive visual analytics platform that grew out of the late Professor Peter McOwan and Dr Hamit Soyel’s ground-breaking research in 2012 and whose clients include GSK, Mitsubishi and Jaguar Land Rover.
Highly commended: The Queen Mary Audio Engineering Research Team, originators of LandR, Nemisindo and many others, for their entrepreneurial research culture, and Accent Bias Britain, a commercial HR consultancy set up by Professor Devyani Sharma and Visiting Professor Erez Levon.

Technician
Winner: Martin Dodel, Research Technician in the Barts Cancer Institute, whose outstanding contributions include research publications, a patent application, driving the establishment and benchmarking of the TREX method for assessing RNA-protein interactions, and creating a supportive research culture.
Highly commended: Geography Technical Team for research that underpinned a BBC Panorama on the impact of coastal landfill, and work on an undergraduate prize in Geography; and Sherman Lo, Research Software Engineer in Central ITS for software improvement work.

VP’s Hon Award for Lifetime Contribution

Winner: Dr Helen Jenner, who has served as the Chair of Queen Mary’s Ethics of Research Committee since 2017 and retires this year. She chaired a crucial element of our research governance process, one which oversees the wellbeing of researchers and their collaborators and reinforces our commitment to academic rigour and excellence.

Wednesday, June 5, 2024

Developers Spending More Time Firefighting Issues Than Delivering Innovation

Developers Call for Full-Stack Observability as Pressure Mounts to Accelerate Release Velocity and Deliver Seamless and Secure Digital Experiences



News Summary:

Developers warn that the current pace of innovation is not sustainable unless organizations equip IT teams with the tools they need.


Absence of the right tools to understand root cause of application performance issues and resolve them quickly results in developers spending hours in war room meetings and debugging applications, instead of creating code and building new applications.


Developers point to full-stack observability as an essential tool to free them up from reactive firefighting and focus on accelerated innovation.

SAN JOSE, Calif., May 7, 2024 — Cisco today unveiled findings from a survey that details how software developers are spending more than 57% of their time being dragged into ‘war rooms’ to solve application performance issues, rather than investing their time developing new, cutting-edge software applications as part of their organization’s innovation strategy.

Software developers play a critical role in building, launching and maintaining the applications and digital services that are essential to the way modern organizations operate today, and the pressure on them has never been higher. Globally, 85% of those surveyed report encountering increased pressure to accelerate release velocity, while 77% point to mounting pressure to deliver seamless and secure digital experiences.

But while developers are expected to deliver new tools and functionality at ever faster speeds, they also find themselves on the receiving end of endless demands to help Site Reliability Engineers (SREs) and IT operations teams manage the ongoing availability and performance of applications. The result is teams of developers spending hours in war room meetings and debugging applications instead of creating code and building new applications.


Lack of Critical Insight into Application Performance

Developers report that the issue is down to their organizations not having the right tools and visibility required to understand the root cause of application issues. They believe this stems from IT departments lacking a full and unified view into applications and the supporting IT stack. Developers are acutely concerned about the potential consequences this could have, with three quarters (75%) of those surveyed fearing that the lack of visibility and insight into IT performance is increasing the chances of their organization suffering downtime and disruption to business-critical applications.

The situation is significantly affecting morale amongst developers, with 82% admitting that they feel frustrated and demotivated, and 54% increasingly inclined to leave their current job. These findings should ring alarm bells for organizations who are now dependent on developers to create the compelling, intuitive digital experiences that customers and users expect. With demand for developer skills at an all-time high and a finite pool of talent, businesses cannot afford an exodus of talent simply because their IT teams don't have the tools they need to do their jobs.

“While most IT departments have deployed a multitude of monitoring tools across different domains, they simply fall short when it comes to today’s complex and dynamic IT environments, leaving technologists unable to generate a full and unified view into their applications and the supporting IT stack,” said Shannon McFarland, Vice President, Cisco DevNet. “When things go wrong, it’s incredibly difficult to quickly identify where the root cause lies, often resulting in panic war room situations and developers having to spend hours trying to help their colleagues in IT operations identify the quickest path to remediation.”

The Potential for Full-Stack Observability

Encouragingly, developers are acutely aware that there are solutions available to address these concerns, and as many as 91% feel that they should be playing a bigger role in shaping and deciding on the solutions needed within their organization. Above all else, developers point to full-stack observability as being a potential game changer, providing SREs and IT operations teams with unified visibility into applications and supporting infrastructure, across both cloud-native and on premises environments.

While developers themselves may not be the primary users of full-stack observability solutions – focusing instead on their specific areas of domain expertise – 78% believe that implementing full-stack observability within their organization would be beneficial. Developers recognize the benefits of having unified visibility across the IT estate and acknowledge that full-stack observability would make it much easier and quicker for operations teams to identify issues, understand root causes, and carry out necessary remediation. In turn, this would result in fewer technologists from multiple domain teams being required to attend war room sessions, and free up that talent – including developers – to focus on their day jobs.

76% of developers went so far as to state that it’s becoming impossible for them to do their job because SREs and IT operations teams don’t have the insights they need to effectively manage IT performance. This explains why 94% point to full-stack observability as the single thing that would most help them to escape war rooms and focus on innovation.

The Role of AI

Alongside full-stack observability, many developers (39%) also feel that their organization (and they themselves) would benefit from deploying AI to automate application issue detection and resolution. Rather than relying on manual processes, AI can enable IT teams to cut through overwhelming volumes of application data to identify the most serious issues and apply fixes in real-time.

In addition, developers are ready to embrace new ways of working within the IT department to drive greater efficiency and productivity, and a more streamlined approach to managing application performance. The majority (57%) believe that there needs to be greater ongoing collaboration between developers and IT teams. This is already being seen in shift left testing and widespread adoption of DevOps and DevSecOps methodologies, so that application availability, performance and security considerations are embedded into the development lifecycle from the outset.

“At a time when developer talent is in such high demand, organizations must do everything they can to empower their teams with the tools they need to be able to perform to their full potential and maximize impact,” added McFarland. “Full-stack observability has become mission-critical – without it, IT teams simply cannot deliver the levels of digital experience that consumers now demand.”

Research Methodology

Cisco conducted research amongst 500 global software developers split across the U.S. (200), UK (100), Australia (30), and the rest of the world (170 - including Germany, France, Italy, Spain, Scandinavia, Japan, Singapore, India). The research was conducted by Insight Avenue in March and April 2024.

About Cisco 

Cisco (NASDAQ: CSCO) is the worldwide technology leader that securely connects everything to make anything possible. Our purpose is to power an inclusive future for all by helping our customers reimagine their applications, power hybrid work, secure their enterprise, transform their infrastructure, and meet their sustainability goals. Discover more on The Newsroom and follow us on X at @Cisco.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company.