I have spent the better part of three years watching the AI conversation unfold. My journey has been one of deliberate scepticism, and I think that has made me a better adviser.
Here is an honest account of how my thinking has evolved, and why I have chosen to formalise my expertise with the IAPP's Certified AI Governance Professional (AIGP) credential.
When ChatGPT launched in late 2022 and took hold in 2023, I did what any curious governance practitioner would do. I played with it.
My honest assessment at the time: an impressive grammar checker on steroids. The interface was compelling. The underlying capability was real. But the gap between what it could do and what organisations actually needed from a governance perspective was vast. The hype was real. The practical application was not.
By 2024, 'generative AI' had become unavoidable. Every conference, every board paper, every vendor conversation featured it prominently.
I watched carefully. The IT industry has a long history of promising transformation and delivering complexity. This felt familiar. Useful in places. Not the paradigm shift the breathless commentary suggested.
But I could see the hype cycle ramping up in organisations: 'AI everywhere for everything,' just like so many technologies before it. And that is where my governance instincts sharpened. Because even if I was sceptical about the technology's near-term impact, the board's role had emerged as genuinely critical.
Getting ethical policy and guardrails in place for a highly disruptive technology, and laying out a clear strategic direction for management to operate within, was a governance problem long before it was a technology problem. Boards that waited for the technology to stabilise before engaging were already behind.
Early 2025 changed my thinking on the technology itself.
Agentic AI arrived in a meaningful way. These are systems that do not just respond to prompts. They act. They reason across complex, multi-step tasks. They can be given objectives and pursue them autonomously. This felt categorically different from a chat interface.
More importantly, it was something businesses could actually exploit. Not replacing call-centre staff with a chatbot and a cleanup crew, but genuinely automating knowledge work, decision support, and operational processes in ways that created real value.
The governance implications sharpened considerably.
Late 2025 was when I stopped observing and started doing.
Working with a colleague, I explored Gas Town orchestration: Steve Yegge's framing for the emerging practice of coordinating multiple AI agents, each with defined roles and bounded authority, working together toward complex outcomes. Think less 'chatbot' and more 'autonomous team.' The concept reframes what AI deployment actually means for organisations, and the governance questions it raises are substantive.
I used these tools in my own businesses. Not as experiments, but as operational tools. And the governance questions that surfaced when AI agents operate with real autonomy became the most interesting work I had engaged with in years.
By 2026, I was actively advising clients on AI governance. The demand was real and growing.
I watched my colleagues and peers complete the AICD AI Fluency for Directors Sprint, developed with the University of Sydney and Deloitte. It is a well-constructed course, and exactly right for directors who need to build confident boardroom literacy on AI. I have recommended it to several clients.
But my situation called for something different.
As a governance adviser, I need to design and assess AI governance frameworks, not just participate in conversations about them. I needed a practitioner-level credential that would signal genuine technical and regulatory depth to clients and boards.
I also wanted the training content itself to catch up with where the technology had moved. The AIGP Body of Knowledge was updated in early 2026 to include agentic AI architectures and current regulatory coverage across the EU AI Act, NIST AI Risk Management Framework, and ISO 42001. The timing aligned.
So I am sitting the IAPP's Certified AI Governance Professional (AIGP) exam.
If you are a director or executive working out where to invest in AI capability, the most useful question is not 'what course should I do?' It is 'what role do I need to play, and what does that role actually require?'
For most directors, AI fluency and confident boardroom contribution are the goal. The AICD Sprint delivers that well.
For governance professionals, risk advisers, and those designing or assessing AI frameworks, the bar is higher. The credential needs to match the work.
I am happy to talk through either path.
For the post that started this journey, read: Gas Town and Governance: When 30 AI Agents Write Your Code
I use AI. Deliberately, and without apology.
Every insight, argument, and position in this post is mine: drawn from 20+ years of governance practice, client work, and hard-won experience in boardrooms and organisations across Australia. The ideas, the judgements, the professional reputation behind them: that is entirely human.
What AI contributes is craft: language, structure, fact-checking, and the kind of editorial discipline that turns a practitioner's thinking into something worth reading. I bring the substance. AI helps me express it clearly.
This is a partnership, not a shortcut. I would not publish anything I could not defend in a boardroom without notes.
I believe transparency about AI use is itself a governance practice. So I will always tell you.