Doctoral Research
Practitioner. Scholar. Both at once.
As an Executive DBA candidate at Fairfield University (expected 2028), I'm researching AI decision-making in decentralized universities, an area where the gap between institutional policy and actual behavior is wide and the existing research is thin. The questions I'm studying are ones I'm working through in practice every day at Yale, which is what makes the practitioner-scholar model more than a label here.
The Research Question
How do professionals in decentralized universities decide whether to experiment with AI tools in their work, and how do risk, authority, and institutional context shape those decisions?
What Gap Does This Fill?
Most research on AI adoption focuses on centralized organizations or K-12 settings. Decentralized universities, where faculty, departments, and units make significant decisions independently, present a fundamentally different adoption environment. Central policy doesn't automatically translate into local behavior. Individual judgment, local risk tolerance, and ambiguity around institutional authority all shape what actually gets adopted. This research addresses that gap directly.
Why Decentralized Universities?
In a decentralized institution, there is rarely a single source of authority over technology decisions. A faculty member's willingness to experiment with an AI tool is shaped by their sense of what's permitted, what's safe, and what their colleagues are doing. These are organizational and social dynamics, not just technical ones. Understanding them requires a setting where that complexity is visible, and higher education provides exactly that.
Why Practitioner Insight Matters Here
This isn't a research problem I'm observing from the outside. I've spent 13 years navigating these dynamics at Yale, building AI workshops, shaping governance frameworks, and watching how risk perception and institutional culture shape what people actually do with AI tools. That experience makes my observations sharper and my frameworks more grounded in what real adoption looks like, day to day, in a complex institution.
How Research and Practice Reinforce Each Other
My leadership work at Yale provides the context for my research questions, and the research findings sharpen how I build adoption models and governance structures. The AI support and governance model I co-lead at Yale is informed by what the research is revealing about how professionals perceive AI risk and make decisions under ambiguity. That loop, from practice to theory and back, is what the practitioner-scholar model is built on.
The Connection
Research, leadership, and strategy as a loop.
What I study at Fairfield, I test at Yale. What I build at Yale, I examine through research. Leadership work produces better research questions. Research produces more grounded strategy. That cycle is intentional, and it's what keeps both the scholarship and the practice honest.
Publications
Textbook reviewer, three editions.
Reviewer for three editions of a widely adopted university MIS textbook by Earl McKinney Jr. and David M. Kroenke, helping shape course-aligned content used in classrooms across the country.
Collaborate
Interested in this research?
I'm open to connecting with researchers, practitioners, and institutions working on AI adoption, governance, and decision-making in higher education. If you're exploring similar questions, I'd like to hear what you're finding.
Let's Connect