Shaping how decentralized universities adopt AI responsibly.

Many universities are still working through a core question: how do you support AI adoption across units where central mandates have limits and local teams make their own decisions? My role is not to chase tools, but to shape how AI is thoughtfully introduced in environments with real constraints — privacy, security, equity, and long-term sustainability. I've been building governance structures, community models, and adoption frameworks to answer that question in practice at Yale, grounded in what my doctoral research is revealing about how institutions actually make these decisions.

See My Research →

Research, governance, community, and feedback as a continuous loop.

Each element informs the others. The research sharpens the governance model. The community tests it in practice. The feedback restarts the cycle.

01

Research (DBA)

Executive DBA research on AI governance in decentralized universities provides the theoretical grounding and informs how the institutional strategy is structured.

02

Strategy & Governance

A structured support and governance model aligns the research with university policy and security considerations, producing a clear AI support roadmap that accounts for decentralized decision-making.

03

Community (CoP)

A 100+ member AI Community of Practice creates space where learning happens safely and gradually. Workshops and Lunch & Learns lower barriers to entry while respecting that readiness varies widely across roles and units. Along the way, the community generates adoption data and surfaces the questions governance hasn't answered yet.

04

Validation & Feedback

Community feedback surfaces new research questions, restarting the loop. 92% of participants reported increased confidence in using AI tools after community engagement.

How I think about AI in higher ed.

Adoption requires trust, not mandates

Faculty and staff adopt AI when they have space to ask honest questions, share early experiments, and understand how it fits — or doesn't fit — their work. Community-based models build that trust in ways top-down rollouts cannot.

Governance has to match the institution's structure

Decentralized universities need governance frameworks built for their reality. Centralized approaches don't reach where decisions actually get made, and they don't account for variation in local risk tolerance.

Equity has to be built in, not added later

Adoption strategies that don't account for access, representation, and impact on underserved populations will replicate existing gaps at scale. This is a governance question as much as a values question.

Practitioners are the bridge

The gap between AI strategy and institutional practice is bridged by people who understand both. That's the role I occupy at Yale, and it's what makes the governance work credible rather than theoretical.

Restraint is a leadership choice

Determining when innovation serves the institution — and when it doesn't — is as important as building adoption capacity. The goal is not novelty, but usable, ethical systems that earn trust and can endure beyond early adoption.

The CIO role is changing

Senior IT leaders in higher education will need fluency in AI ethics, governance, and community leadership alongside technical knowledge. The institutions that recognize this early will be better positioned.

Working on something similar at your institution?

I'm always open to connecting with IT leaders, researchers, and practitioners working on AI adoption, governance, and support models in higher ed.

Get In Touch