Technology now sits inside the operating core of almost every enterprise. It affects how a company serves customers, communicates, makes decisions, moves money, delivers products, complies with regulations, and recovers from disruption. In many organizations, if critical technology fails, operations stop. The event is no longer contained within IT. It quickly becomes an operational, reputational, regulatory, and financial risk.
This is fast becoming an important governance gap.
Boards know technology is important. They know cyber matters. They know AI is changing the landscape. They know modernization has become harder to postpone. Yet many still approach these issues as separate technical topics, or as periodic reporting items, instead of governing them as a connected category of enterprise risk.
Cyber, AI, modernization, and third-party exposure are not isolated issues. They’re interdependent conditions that shape how resilient the enterprise actually is.
A cyber event will test recovery capability. An AI deployment will expose process weaknesses and data quality problems. A legacy environment will increase both security and continuity risk. A vendor change can alter data exposure, regulatory posture, customer experience, or all three at once.
This underpins the concept of Fiduciary Resilience: the board’s ability to anticipate, absorb, and respond to technology-driven risk so that governance remains effective.
The Fiduciary Resilience Model is my board-level approach to governing technology risk as enterprise risk, with the same discipline boards apply to financial risk. It gives directors a practical way to focus oversight on what can disrupt the business, what must be governed, and what must be true so the organization can keep operating when technology fails.
The structure is straightforward: Visibility, Governance, and Readiness.
Those three pillars help boards stay at the right altitude while still asking the right questions.

Visibility
Boards don’t need more dashboards. They need evidence-based visibility into where technology can disrupt the enterprise.
This sounds obvious, but in practice, many boards still receive summaries of controls, incident counts, heat maps, and program updates that do not answer the question they actually need answered: which failures would most disrupt the business?
This question changes the conversation.
It moves the board away from generic reporting and toward business consequences. It helps identify which systems, dependencies, processes, and external relationships truly matter. It also forces management to express technology exposure in business and operational terms, which is where oversight becomes meaningful.
Visibility starts with critical technology dependencies.
- Which systems are essential to customer service, transactions, communications, compliance, or core operations?
- How are those systems risk-ranked?
- How current is that ranking?
- What dependencies sit behind them, including data, infrastructure, third parties, and manual workarounds that may be less resilient than people assume?
Boards also need visibility into cyber exposure in business terms. I’m less interested in the volume of attempted attacks than in understanding which business capabilities are most exposed and what interruption would look like if those capabilities failed.
The same is true for AI.
The first board question is often framed too broadly. It’s not simply “Are we using AI?” The more useful question is “Where is AI embedded, and what data does it touch?”
In many organizations, AI is entering through vendor platforms long before the board sees a formal AI strategy. This matters because it can change data exposure, control requirements, customer outcomes, and regulatory implications without much visibility.
Third-party exposure belongs in this same visibility discussion. Many organizations increasingly depend on a relatively small set of software, infrastructure, and service providers. Vendor concentration, control changes, embedded AI features, and service disruptions can create enterprise impact quickly. A vendor review at onboarding isn’t enough if the vendor changes the product, the model, or the terms of use six months later.
Legacy fragility belongs here too. I’ve seen environments that look modern from the outside while older infrastructure underneath creates serious exposure. This kind of hidden weakness is exactly what boards should want surfaced. Modernization conversations often focus on cost and efficiency. They should also assess fragility. If the underlying environment is difficult to secure, difficult to support, and difficult to recover, the board should understand that as an enterprise risk issue.
Visibility is where the board begins to separate reassuring reporting from decision-useful evidence.

Governance
Once the board understands where the exposure sits, the next question is governance.
Technology risk requires more than interest and good intentions. It requires ownership, structure, cadence, and escalation that match the pace of change.
This is where many boards still have work to do.
To be clear, the board does not need to choose technology platforms or manage implementation plans, nor should it. That’s management’s job. The board’s responsibility is to ensure there is clear ownership in management, a defensible oversight structure, and a cadence of review that reflects the reality of current technology risk.
Questions to ask include:
- Who owns technology risk in management?
- Where is it governed at the committee level?
- What triggers escalation to the full board?
- Does the cadence of review reflect annual planning cycles, or the speed at which cyber threats, vendor changes, and AI adoption are actually evolving?
These are governance questions, not technical questions.
In many companies, AI risk belongs under enterprise risk oversight, unless AI is, itself, the company’s core product. Cyber often sits here as well, or in a dedicated technology or risk structure depending on the company’s technology reliance profile. The point is less about finding one universal committee model and more about making the ownership and escalation model explicit.
Boards should also be realistic about their own composition. The model that works for financial oversight applies to technology as well: all directors are expected to be financially fluent, while at least one qualified financial expert goes deeper and translates complexity. Directors do not need deep technical expertise across the board. They do need sufficient fluency and curiosity to ask thoughtful questions, understand the answers, and distinguish clarity from generality, supported by directors with deeper expertise who can interpret and contextualize technical risk. As technology becomes more central to operations, this becomes a board effectiveness issue.
Good governance in this area is disciplined without becoming intrusive. Directors should be probing, not performative. They should challenge assumptions, test the logic, and ask for evidence. They should also support management by helping clarify priorities, risk appetite, and investment decisions. Technically knowledgeable directors, in particular, should translate complexity and help the board distinguish signal from noise without overstepping into management. This matters especially with modernization, where the journey is often long, sequential, and difficult.
A board can add real value here by pressing on a few basic points.
- Why are we modernizing?
- What business value are we protecting or creating?
- What risks are we reducing over time?
- What evidence will show that the investment is improving resilience, not just refreshing technology?
These are board-level questions. They help keep the oversight focused on consequences.

Readiness
The third pillar is readiness.
Readiness is the proof point. It’s where oversight either becomes operational or remains theoretical.
For cyber, the governing question has changed. It’s no longer enough to ask whether the company is secure. Boards should ask how quickly the organization can detect an event, how quickly it can contain it, and how quickly it can recover critical operations. Every organization is a target. Prevention still matters. It’s just no longer sufficient as the primary frame.
Let me repeat this: every organization is a target.
This is why I see cyber as a resilience issue as much as a security issue. A serious cyber event will test leadership, operations, communications, customer trust, and continuity all at once. Boards should expect evidence that critical recovery plans exist, that they’re tested, and that testing changes something. A tabletop exercise that produces no operational improvement is a weak signal of readiness.
Culture matters here as well. If an organization lacks risk discipline, weak behavior at the edges can bypass strong controls at the center. Training, accountability, and escalation are part of readiness because resilience depends on how people act under pressure, not only on what tools are installed.
For AI, readiness begins with knowing where it’s embedded, what data it touches, where that data can flow once touched, and what controls govern its use. I’m wary when organizations pursue AI quickly on top of weak processes, poor documentation, or messy data. Automating disorder usually scales the problem. It doesn’t solve it.
This is why I believe operational cleanup often must come first.
If a process is inconsistent, dependent on workarounds, or supported by poor inputs, applying AI to it can create more output with less reliability. Boards should want to know whether management has identified the exposure, clarified accountability, and built quality control into the process. Human oversight remains critical, especially where outputs impact customers, compliance, financial decisions, or enterprise reporting.
For modernization, readiness means treating this work as structural risk reduction as well as business enablement.
A sound modernization plan should reduce fragility, improve recoverability, and strengthen the company’s ability to adapt. It should also support competitive positioning. Faster product delivery, stronger customer responsiveness, and better operating resilience are legitimate board-level outcomes.
For third parties, readiness means more than onboarding diligence. It includes contingency planning, visibility into concentration risk, awareness of vendor changes, and a practical view of what happens if a provider fails or materially alters the service. This is particularly important when vendor AI changes how data is used or how outputs are generated.
Readiness is ultimately about evidence.
- Evidence that the enterprise can detect.
- Evidence that it can respond.
- Evidence that it can recover.
- Evidence that it is reducing the structural conditions that make disruption more likely.

What boards should expect to see
A board that takes technology risk seriously should expect a small set of clear proofs.
It should see:
- A current view of critical dependencies and where failure would most affect operations.
- Recovery testing on important systems and clarity on what changed as a result.
- How AI use is being identified, governed, and bounded.
- Modernization decisions tied to resilience and business impact, not presented as a generic infrastructure refresh.
- Third-party risk treated as a dynamic exposure, especially where vendor changes affect data, controls, or customer outcomes.
Most of all, the board should expect candor.
If management is more focused on producing polished dashboards than surfacing what genuinely creates concern, oversight quality suffers.

The board’s role
I don’t believe boards should govern technology by becoming operators, but I do believe they should govern technology risk with fiduciary seriousness.
This means understanding what could disrupt the enterprise. It means defining who owns the risk, where it’s governed, and what triggers escalation. It means asking for evidence of readiness, not reassurance by presentation.
Technology now sits too close to enterprise continuity, customer trust, and long-term value to be treated as a side topic.
Boards already know how to oversee consequential risk. The work now is to apply that same discipline here, with the right lens and the right questions.
The question is no longer whether technology risk belongs in enterprise oversight.
The question is whether the board has a disciplined way to govern it before an inevitable disruption forces the issue.

