The role of standards and stewardship in the co-evolution of AI and humans

Authors: Sarah Dods, Mike Erskine

At a glance

Artificial intelligence (AI) has permeated our daily reality, reshaping sectors such as transportation, manufacturing, healthcare and finance. AI systems and platforms are developing at a rapid pace that is unlikely to slow anytime soon. This incredible pace of progress brings threats and vulnerabilities, calling for responsible development, standards and stewardship. This article discusses how organisations can reshape what they do, and how they do it, by integrating AI in an ethical and responsible way.

Looking at generative AI today

Generative AI is changing the way we approach creativity and problem-solving, opening up many possibilities. As rapid improvements and new applications come to life at speed, there is potential for unintended consequences: issues around ethics, privacy, bias and trust can emerge. How can companies confidently integrate AI systems and platforms into their organisations while effectively addressing these myriad risks? It has become apparent that interacting with ever more sophisticated machines will require different thinking, and approaches that combine elements of people management, control systems management and human factors.

New technology requires standardisation to scale

As an evolving chapter in society and technology, AI can draw on lessons from history, whilst also requiring substantial new thinking. Standards have become part of most global industries: sets of established references prepared by specialists, for specialists, that underpin credible, societally accepted technology. Across the technological development of the rail, aviation, automotive and container shipping industries, standards helped address rising societal safety expectations, the product uniformity needed to scale manufacturing, and growing environmental demands.

When a potential step-change technology becomes a commercial reality, there are two key aspects of societal acceptance that need to be addressed. The first is societal risk, with its legal, regulatory and ethical considerations. Around the globe, governments are putting together regulations for AI, as they should for any new technology. Even before the arrival of ChatGPT, the EU had proposed a Regulatory Framework for AI, the National Institute of Standards and Technology (NIST) in the USA had developed the AI Risk Management Framework, and the USA is now considering an AI Bill of Rights. Australia had also already rolled out an AI Ethics Framework. AI is a global technology, with much of the work done in cloud computing services that can be located or accessed anywhere on the planet.

The second key societal factor is standardisation, which enables scalability, embeds safety, and guides organisations into new territory. Standards provide foundational vocabulary and document leading-practice thinking from pioneering specialists, especially where new areas of the technology are being explored and existing models disrupted. Historically, the automobile industry reached global scale and commoditisation through standards covering safety, manufacture and the supply chain. In the same way, adopting standards for AI management could greatly assist societal acceptance and accelerate our ability to harness AI's global potential across ethical, safety, technical, economic and societal dimensions.


AI standards creation is underway

While regulation is the province of governments, the creation of harmonised global and national standards for AI governance and best practice is led by independent, international standards organisations. Global standardisation will influence how organisations leverage AI and mitigate the new risks and inequalities that could come into play.

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have come together in a joint effort to create international standards for AI. Since 2018, they have been working through the diverse aspects of managing AI. A useful way to frame the scope is to consider everything we would specify and expect of people doing a task, and then ask what those expectations look like when the task is done with AI; that sets the stage for all the areas the AI standards need to encompass. Around 400 experts from many different backgrounds, across 38 countries and 12 working groups, have been developing these standards, each group focused on a different aspect of AI. There are now 20 published standards, with another 35 under development. Some of the key standards are listed below:

  • Governance: ISO/IEC 38507:2022 provides organisations with guidance on the use of AI and its governance implications
  • Foundation: ISO/IEC 22989:2022 establishes terminology for AI and describes AI-related concepts
  • Risk management: ISO/IEC 23894:2023 assists organisations in managing AI-related risks
  • Management system: ISO/IEC 42001:2023 specifies requirements for an AI management system, guiding organisations on how to manage their AI systems.

Gearing up for what's next

All sectors and companies will benefit from standardising the management of AI. It will create a common framework, essential to achieving the level of accountability and compliance expected and demanded by customers, societies and governments. The new standards will enhance safety, diminish risk, instil trust and credibility, and improve interoperability and scalability.

Organisations must invest in upskilling professionals in AI to stay agile and respond to the possibilities and risks AI presents to their industry. Clear use cases need to be articulated and actively referenced to ensure AI addresses actual organisational needs and aligns with strategic goals. Upskilling professionals in AI standards also provides guidance on reducing the risk of wasted investment, sunk costs and reputational damage from solutions that do not scale, cannot be managed effectively, are unsupportable, or have unintended consequences.

At GHD, we see the future as both a challenge and an opportunity. Alongside other organisations and the Australian government, GHD has been providing expert input to AI standards for the last four years. We're only at the beginning of the significant shift that AI is triggering, and we're already helping our future-focused clients to engage with AI. We're excited to engage in conversations, collaborate to ideate and bring new frontiers to life, and proud to be contributing to the vital work of standards development that will help enable AI-powered initiatives around the planet.

Authors