What NEDs need to know about AI and cyber security

Photo by geralt

By Kate Carruthers & Kobi Leins

Note: this is a republication of an old piece that Kobi and I wrote, as our old website has now gone away.

AI and Cybersecurity

Cybersecurity has been on our radar for some time as an important factor that company directors need to take into account. The cyber threat landscape has now been complicated by the widespread adoption of artificial intelligence (AI), as threat actors have also adopted this new technology to power their campaigns.

What is AI?

‘Artificial Intelligence’ is a term that has been around since 1956. Definitions of AI abound – from the International Organization for Standardization (ISO) to the OECD – but our personal favourite is this one from 2004 from the Australian Administrative Review Council, which refers to ‘expert systems’: ‘computing systems that, when provided with basic information and a general set of rules for reasoning and drawing conclusions, can mimic the thought processes of a human expert.’

What is Cyber and Information Security?

Cybersecurity and information security are two pillars of data protection. Let us first define the terms cybersecurity and information security:

  • Cybersecurity is the protection of data and information assets against external threats and threat actors.
  • Information security is the maintenance of the confidentiality, integrity and availability of an organisation’s information assets.

What is different about AI?

A couple of things. Firstly, AI is built mostly on historical data, used to project into the future at a speed and scale that may have unintended or harmful consequences.

Perhaps more importantly, data and AI embed values. Making sure that the tools you build and use align with your vision, strategy and values from the outset is key. You might not need the expensive tool. Lower-cost, lower-risk opportunities are often the best way to build capability and understanding.

What do I need to do to ensure that my Board is managing AI cyber risk adequately?

Although there is a lot of hype around AI management and governance, a large part of this work is done if there is a solid basis of good governance already, including practices such as information technology governance and data governance. Good business practices, including risk appetites (ideally specifically for AI), risk frameworks, procurement practices, privacy, legal, accessibility, whistleblower protection, and more – if you have these in place, you are already well-placed to govern and manage AI.

Areas where you might need to consider an uplift include:

  1. Uplift in Board capability in asking the right questions about AI.
  2. A specific AI risk appetite.
  3. An AI policy (consultation is queen).
  4. Adapting KPIs to reflect AI stance (carrot).
  5. Linking AI policy to Code of Conduct directly (stick).

Start thinking about the Three P’s: Policies, Processes and People

Policies

Policies are a great place to start to bring your business along to understanding what AI can and cannot do. Alone, they do very little. Policies need to be linked to other policies (such as the code of conduct, pay incentives, etc.) but also to processes.

Processes

One of the biggest questions is ‘what is AI and how do I review it?’. Referring to our definitions above, our recommendation is to have a wide funnel. Robodebt was an Excel spreadsheet – any ‘expert system’ that affects something else or helps to make a decision may have real-world (and legal) ramifications, so review widely. It will become clear what is higher risk as you go along. Ensure that you have clear pathways for procurement that include subject matter experts who can ask the right questions. Products are said to include AI until they don’t work – ‘AI’ is often a sales pitch, and what is sold as AI may not even be AI. Ensure robust, documented due diligence of vendors. You may need extra expertise or training to enable this properly.

People

By far the most significant piece of AI management and governance is the people. Having protections (and a safe culture) for those who call out risks is one of your biggest guardrails, and given the technical nature of the tools, those at lower levels often have a much better idea of how the tools actually work. Think of the Volkswagen case, or the Boeing scandal: the main lesson of each is to have people on your Board who understand the technology and its benefits and risks.

Conclusion

Artificial Intelligence (AI) is here, or at the very least, it is on its way. Some surveys suggest that between 42 and 65 per cent of workers across organisations globally are using generative AI. While these figures may be exaggerated, what is clear is that companies will continue to explore the possibilities of AI. Current estimates suggest that in Australia only 10 per cent of corporate leaders (a mix of executives and board members) have an AI strategy, while 13 per cent of company directors have a set of AI or data ethics principles. Less than half of corporate leaders who are using AI said their organisation was undertaking a risk assessment of their AI use. As AI adoption increases among businesses, Boards must be prepared.