
By Kate Carruthers & Kobi Leins

What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

 – Joseph Weizenbaum, creator of the first chatbot, writing in 1976

What is AI?

‘Artificial Intelligence’ is a term that has been around since 1956. Definitions of AI abound – from the International Organization for Standardization (ISO) to the OECD – but our personal favourite is this one from 2004 from the Australian Administrative Review Council, which refers to ‘expert systems’: ‘computing systems that, when provided with basic information and a general set of rules for reasoning and drawing conclusions, can mimic the thought processes of a human expert.’

What is AI made of?

AI comprises a series of component parts, both software and hardware. AI may include data-based or model-based algorithms; data, both structured and unstructured; machine learning, both supervised and unsupervised; and the sensors that provide input and the actuators that effect output. Each of these component parts provides opportunities – and may pose risks. The devil is in the detail. Typically, when people speak about AI, they are talking about a specific AI-related technology or technique. The primary types of AI included under the term at present are:

  • Artificial Intelligence: an umbrella term for the overall field of computer science that seeks to create intelligent machines that can replicate or exceed human intelligence.
  • Machine Learning: a subset of AI that enables machines to learn from existing data and improve on it to make decisions or predictions.
  • Deep Learning: a machine learning technique in which layers of neural networks are used to process data and make decisions.
  • Generative AI: AI that creates new written, visual, video, and audio content from prompts and existing data.
  • Agentic AI: AI systems that are capable of autonomous action and decision-making. These systems, often referred to as AI agents, can pursue goals independently, without direct human intervention.

What is different about AI?

A couple of things. First, AI is built largely on historical data, which it uses to project into the future at a speed and scale that may have unintended or harmful consequences.

Perhaps more importantly, data and AI embed values. Making sure that the tools you build and use align with your vision, strategy and values from the outset is key. You might not need the expensive tool. Lower-cost, lower-risk opportunities are often the best way to build capability and understanding.

What do I need to do to ensure that my Board is managing AI risk adequately?

Although there is a lot of hype around AI management and governance, a large part of this work is done if there is a solid basis of good governance already, including practices such as information technology governance and data governance. Good business practices, including risk appetites (ideally specifically for AI), risk frameworks, procurement practices, privacy, legal, accessibility, whistleblower protection, and more – if you have these in place, you are already well-placed to govern and manage AI.

Areas where you might need to uplift include:

  1. Uplift in Board capability in asking the right questions about AI.
  2. A specific AI risk appetite.
  3. An AI policy (consultation is queen).
  4. Adapting KPIs to reflect your AI stance (carrot).
  5. Linking your AI policy directly to the Code of Conduct (stick).

Start thinking about the Three P’s: Policies, Processes and People

Policies

Policies are a great place to start to bring your business along to understanding what AI can and cannot do. Alone, they do very little. Policies need to be linked to other policies (such as the code of conduct, pay incentives, etc.) but also to processes.

Processes

One of the biggest questions is ‘what is AI and how do I review it?’. Referring to our definitions above, our recommendation is to have a wide funnel. Robodebt was an Excel spreadsheet – any ‘expert system’ that affects something else or helps to make a decision may have real-world (and legal) ramifications, so review widely. It will become clear what is higher risk as you go along. Ensure that you have clear procurement pathways that include subject matter experts who can ask the right questions. Products are said to include AI until they work – the ‘AI’ is often a sales pitch, and what is sold as AI may not even be AI. Ensure robust, documented due diligence of vendors. You may need extra expertise or training to do this properly.

People

By far the most significant piece of AI management and governance is the people. Protections (and a safe culture) for those who call out risks are among your biggest guardrails, and given the technical nature of the tools, those at lower levels often have a much better idea of how the tools actually work. Think of the Volkswagen emissions case, or the Boeing scandal: the lesson is to have people on your Board who understand the technology and its benefits and risks.

Conclusion

Artificial Intelligence (AI) is here, or at the very least, it is on its way. Some surveys suggest that between 42 and 65 per cent of workers across organisations globally are using generative AI. While these figures may be exaggerated, what is clear is that companies will continue to explore the possibilities of AI. Current estimates suggest that in Australia only 10 per cent of corporate leaders (a mix of executives and board members) have an AI strategy, while 13 per cent of company directors have a set of AI or data ethics principles. Less than half of corporate leaders who are using AI said their organisation was undertaking a risk assessment of their AI use. As AI adoption increases among businesses, Boards must be prepared.