A CEO I was speaking with recently had just taken over a not-for-profit. Her Board asked her to validate the organisation's cyber position with an independent penetration test.
Reasonable enough on the surface. The organisation runs a website with member logins, member activity data, and a donation page processing payments through Stripe and PayPal. A breach could compromise credentials, expose personal information, or disrupt the payment channel. For a mission-driven organisation, the reputational consequences alone could be severe.
But when I asked the CEO what the pen test would tell the Board that they could act on, she paused. Nobody had mapped which systems supported the organisation's critical operations, or what a successful attack would actually disrupt in terms of mission delivery. The Board's request was driven by a reasonable instinct that cyber risk needed attention, not by any structured understanding of where exposure sat.
The pen test might well surface a gaping hole they need to know about. That is not the issue. The issue is: on what basis does the Board make that spending decision? Nobody can connect the pen test to a specific organisational objective, weigh it against competing priorities, and make a justifiable call.
That is not a cyber problem. It is a governance problem.
A penetration test is a validation control. What is it validating? The security posture of systems that support the organisation's operations. Which systems matter most? The ones that underpin the most critical business processes. Which processes are most critical? The ones that deliver the organisation's strategic objectives.
For this NFP, reputation is genuinely one of those objectives. It underpins funder confidence, member trust, and community standing. A cyber incident that damages reputation is a legitimate strategic risk. But "reputation" alone is not specific enough to prioritise a control. Which systems, if compromised, would cause the damage? What would the downstream impact be on funding, operations, and the people the organisation serves? Answerable questions, but nobody had done the work to answer them.
So the CEO was left trying to balance spend on cyber against every other pull on a limited budget: programme delivery, fundraising, staffing uncertainties. Those competing priorities could each be connected to an organisational objective. The pen test could not. The IT provider faced the same problem in reverse: they could see infrastructure gaps, but could not translate them into language the Board would act on. The link between "this system is vulnerable" and "this is what happens to our mission if it fails" had never been established.
This is objective-centric risk management in practice. A risk only makes sense in the context of the objective it threatens. Without that context, every security control competes on fear. And fear always loses to revenue.
This story reveals a structural problem beyond cyber. Three disciplines examine the same underlying dependencies, yet none of them are connected to each other.
The strategy view asks: what are our objectives, what processes deliver them, and what information and technology do those processes depend on? The focus is consistency of delivery and measuring progress.
The resilience view asks: which of those processes and dependencies are time-critical, and what happens when key systems or people are unavailable? The focus is recovery.
The cyber view asks: where are our vulnerabilities, what data needs protecting, and what controls should we deploy? The focus is preventing and detecting attacks.
Each produces useful outputs: process maps, business impact analyses, vulnerability assessments. But they are typically done by different people, at different times, using different tools, with no connective tissue between them. A Director receives a BIA from one consultant, a risk assessment from another, and a penetration test report from a third, and none tell a coherent story about what the organisation depends on to deliver its mission.
Since 2017 I have been authoring and delivering cyber governance programmes for the Australian Institute of Company Directors (AICD). A consistent theme is the importance of understanding your "crown jewels," the critical data and systems your organisation depends on. That advice is sound. What I have learned from working with hundreds of organisations and thousands of participants is that most struggle to action it, because they have not done the foundational work that sits above the cyber question.
Kaplan and Norton's Balanced Scorecard identifies three components of an organisation's Learning and Growth perspective: Human Capital, Organisation Capital, and Information Capital. Boards invest in people, track finances obsessively, and talk about culture and alignment. Information Capital, the information systems, knowledge assets, and technology infrastructure required to execute strategy, gets delegated to IT.
The challenge for every Director reading this: can your Board articulate not what technology you have, but what information you need to deliver your strategy?
That question is the top of the chain. Answer it, and you can scope a meaningful BIA. Complete the BIA, and you know which dependencies are time-critical. From there, you can target security controls at the things that actually matter to the organisation's mission.
Without that chain, you are protecting everything equally, which in practice means protecting nothing adequately.
Three questions for the next Board or committee meeting:
First, can your Executive trace a line from strategic objectives through to the information and technology assets that support them? Boards are drowning in cyber data: threat reports, trend analyses, industry benchmarks. The ACSC tells us cybercrime reports rose 14% in 2025 and the average cost to an SME was $56,600.¹ That is useful context, but it does not help a Board make a decision. Compare it with: "One average cyber incident would wipe out our FY25 surplus. We have three key funders, one is already under review. A public breach could cost us that relationship, forcing a 30% reduction in FTE and a corresponding cut to what we can deliver." The second statement connects risk to the organisation's reality. Without the connective tissue from objective to dependency to control, Boards are left making investment decisions based on generic industry data rather than their own exposure.
Second, do you know which dependencies are time-critical, and what happens to service delivery when they fail? Do you know whether any carry regulatory reporting obligations under the Privacy Act, the Notifiable Data Breaches scheme, or sector-specific requirements? If not, you cannot validate whether controls are proportionate to the risk, or whether a breach triggers obligations you have not prepared for.
Third, for the CIO, the CISO, or the outsourced IT provider: help the Board and Executive see how security controls enable the outcomes they care about. Stop leading with vulnerabilities. Start with the objective, show the dependency chain, then present the control as assurance that the chain holds. When security investment is framed as protecting the organisation's ability to deliver on its strategic commitments, it stops competing with programme delivery for attention. The Board sets the direction, the Executive implements through people, systems, and processes that deliver outcomes, and the Board validates that the controls assuring those outcomes are targeted at the right dependencies.
A penetration test might be exactly what that not-for-profit needs. But until someone maps the chain from mission to information to system, neither the CEO nor the Board can know that with any confidence. The crown jewels are only as valuable as your understanding of the crown they serve.
¹ Australian Signals Directorate, Annual Cyber Threat Report 2024–2025.
This article reflects the author's own analysis, experience, and professional judgement. AI tools were used during drafting to assist with structure, editing, and refinement. The ideas and positions expressed are entirely the author's own.
For more on how Wilk Advisory uses AI, see our AI use statement.