The Australian Institute of Company Directors (AICD) has partnered with the Human Technology Institute (HTI) at the University of Technology Sydney to produce a new suite of resources to help directors and boards navigate the era of Artificial Intelligence (AI).
As more organisations adopt AI technologies and policymakers increasingly focus on regulating AI risks, it is rapidly becoming imperative that directors and boards understand the governance requirements of ethical and informed AI use.
AI has the potential to offer significant productivity and economic gains. But alongside the benefits lie potential risks from AI system misuse and failures.
Research suggests boards face multiple challenges, including how to implement effective oversight systems and gain clearer lines of sight into AI use across their value chains.
These new resources are designed to better position directors to take advantage of the benefits of AI for their organisations while avoiding the risk of serious harm, including customer impacts, commercial losses, reputational damage and regulatory breaches.
“We must assist directors and boards to rise to the governance challenges that this new era of AI represents, while ensuring their organisations can tap into these emerging and transformational technologies,” said AICD Managing Director and CEO Mark Rigotti.
“As stewards of organisational strategy and risk management, directors should seek to seize the opportunities and mitigate the risks of AI. This requires a robust governance framework that can adapt to the unique characteristics of AI systems.”
HTI Co-director Professor Nicholas Davis highlighted that AI is becoming essential to Australian organisations.
“Yet investment in AI systems has not been matched by investment in AI governance. Directors should be engaging with management to understand where AI is being used in their organisation and how the risks are being managed.”
The Australian Government has committed to introducing a range of measures to support the uptake of safe and responsible AI, including the possibility of mandatory guidelines and regulation of AI deployed in high-risk settings, as well as strengthening existing laws to address AI harms.
In Australia, as worldwide, the key challenge will be to walk the policy tightrope: regulating high-risk AI uses to avoid the most significant harms without stifling innovation.
The suite comprises three publications. A Director’s Introduction to AI lays the foundations for understanding AI concepts.
A Director’s Guide to AI Governance provides practical guidance for boards already using or planning to deploy AI within their organisations.
An additional SME and NFP governance checklist recognises the significance of small and medium-sized enterprises and not-for-profits to the Australian economy and the specific needs of these organisations.
By applying the ‘eight elements of safe and responsible AI governance’, these resources guide organisations to deploy AI systems safely and responsibly while strengthening their strategic and competitive advantage.