Psychology · AI Governance · Research
Psychology. Governance. Research.
I'm Khushi — a Psychology student at the University of Waterloo working at the intersection of research, responsible AI, and the human side of complex systems. Currently an AI Risk Governance Intern at Rogers.
AI Governance
& Risk
I make AI governance less abstract — turning frameworks like NIST AI RMF, ISO 42001, and the EU AI Act into structured documentation, incident frameworks, and process guidelines real teams can use.
Research
& Analysis
Across multiple psychology labs, I've developed the habit of turning messy information into something clear, organized, and actually useful — whether that's data, literature, or a dashboard.
Human-Centered
Systems
I care about who gets left behind. My work asks not just "does this function?" but "who does this help, and what would make this easier to trust?"
Most problems aren't technical — they're about translation. Between policy and practice. Between data and decision. Between systems and the people inside them.
My approach
to messy
problems.
I'm usually the person organizing documents, clarifying expectations, tracking patterns, and translating complexity into something other people can actually use. Quietly, carefully, and with a lot of notes.
Before anything else — what do I actually have? What's the real scope? I'd rather spend time understanding the problem than solving the wrong one.
Some problems need diverse input. Others need one person to just go deep. I've learned to tell the difference.
I create frameworks before I create outputs. Documentation, templates, synthesis — the scaffolding that makes complex work navigable.
A solution that can't be explained clearly hasn't been fully solved. I iterate until the clarity matches the quality of the work.
the following is a dramatisation.
caffeine and chaos were involved.
Designed an AI Incident Management Framework from scratch — defining incident categories, severity levels, escalation workflows, and ownership structures aligned with NIST AI RMF, ISO 42001, and the EU AI Act. Developed structured product requirements for an internal AI governance knowledge bot, including transparency standards, source citation logic, and risk-based guidance design. Conducted gap analyses between internal AI policy and global frameworks, identifying alignment opportunities across Rogers' generative AI strategy.
Researched AI vendor solutions and proposed enhancements for a healthcare staffing platform. Authored the Workforce Analysis section and contributed to AI-driven feature proposals including error detection and salary benchmarking.
Contributing to literature reviews, data collection, coding, databases, lab discussions, and written synthesis across two psychology labs. Careful attention to detail and consistency drive both projects.
Evaluating manuscript submissions for APA alignment, ethics, and methodological clarity. Representing undergraduate voices nationally and promoting engagement across student communities.
Co-designed mental health awareness initiatives, created outreach materials, and supported help-seeking through tabling, residence pop-ups, and workshops tailored to diverse student groups.
Global Retail Sales & Profit Dashboard
A dataset-agnostic BI platform with a built-in Model Risk Governance layer — designed around OSFI E-23 (2027) and Canada's AIA. I designed the governance architecture: PII detection, proxy bias flagging, data integrity scoring, AIA Impact Level estimation, and a compliance report with Model Risk Officer sign-off. Built using AI-assisted Python/Streamlit development.
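To make the governance layer concrete, here's a minimal sketch of two of its checks — column-name-based PII flagging and AIA Impact Level bucketing. The patterns and quartile thresholds are illustrative assumptions, not the platform's actual rules or the official AIA scoring:

```python
import re

# Hypothetical column-name patterns for flagging likely PII; a real
# governance layer would pair this with content-level checks.
PII_PATTERNS = [r"name", r"email", r"phone", r"ssn|sin", r"address", r"dob|birth"]

def flag_pii_columns(columns):
    """Return the subset of column names that look like PII."""
    return [c for c in columns
            if any(re.search(p, c.lower()) for p in PII_PATTERNS)]

def estimate_aia_impact_level(raw_score, max_score=100):
    """Map a 0..max_score risk score onto AIA Impact Levels 1-4.
    Quartile cutoffs here are placeholders, not the official methodology."""
    pct = raw_score / max_score
    if pct < 0.25:
        return 1
    if pct < 0.50:
        return 2
    if pct < 0.75:
        return 3
    return 4

print(flag_pii_columns(["order_id", "customer_email", "region"]))
print(estimate_aia_impact_level(62))
```

In the real tool these checks feed the compliance report that the Model Risk Officer signs off on.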
AI Governance Audit Framework
A 30-question governance audit across 6 domains — including a dedicated OSFI E-23 domain. Estimates Canada AIA Impact Level (1–4), generates a 16-KPI governance blueprint across pre/post/continuous phases, and maps every finding to its regulatory clause. I designed the full regulatory architecture; the governance knowledge driving it came from my Rogers internship and independent research. Built using AI-assisted development.
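The scoring idea behind the audit can be sketched in a few lines: each question belongs to a domain, and per-domain pass rates roll up into the blueprint. The domain names and responses below are placeholders, not the framework's actual taxonomy:

```python
from collections import defaultdict

# Illustrative responses: (domain, passed) per audit question.
responses = [
    ("Accountability", True), ("Accountability", False),
    ("Data Governance", True), ("Data Governance", True),
    ("OSFI E-23", False), ("OSFI E-23", True),
]

def domain_scores(responses):
    """Percentage of questions passed in each audit domain."""
    totals, passed = defaultdict(int), defaultdict(int)
    for domain, ok in responses:
        totals[domain] += 1
        passed[domain] += ok
    return {d: round(100 * passed[d] / totals[d]) for d in totals}

print(domain_scores(responses))  # -> {'Accountability': 50, 'Data Governance': 100, 'OSFI E-23': 50}
```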
ESG Corporate Risk Scorecard
A sector-adjusted scoring tool evaluating organizations across 20 ESG criteria using SASB materiality weights. Financial Services prioritizes Governance; Energy prioritizes Environmental. Includes TCFD alignment, OSFI B-15 climate risk context, and sector-specific material risk cards. Produces a radar chart, domain breakdown, and downloadable report.
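A minimal sketch of the sector-adjusted weighting, assuming illustrative SASB-style pillar weights (the real tool's weights come from SASB materiality maps, and these numbers are made up for the example):

```python
# Hypothetical materiality weights per sector; E, S, G sum to 1.0.
SECTOR_WEIGHTS = {
    "Financial Services": {"E": 0.2, "S": 0.3, "G": 0.5},  # Governance-heavy
    "Energy":             {"E": 0.5, "S": 0.3, "G": 0.2},  # Environment-heavy
}

def esg_score(pillar_scores, sector):
    """Weighted 0-100 ESG score for a sector, given 0-100 pillar scores."""
    w = SECTOR_WEIGHTS[sector]
    return round(sum(pillar_scores[p] * w[p] for p in ("E", "S", "G")), 1)

scores = {"E": 70, "S": 80, "G": 60}
print(esg_score(scores, "Financial Services"))  # 70*0.2 + 80*0.3 + 60*0.5 = 68.0
print(esg_score(scores, "Energy"))              # 70*0.5 + 80*0.3 + 60*0.2 = 71.0
```

The same pillar scores land differently by sector — which is exactly the point of materiality weighting.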
Currently
Exploring
Unfinished work is still real work. These are the ideas I'm actively thinking through.
AI Incident Management Frameworks
Exploring how organizations can structure AI incident response — what gets logged, who owns it, and how governance intersects with technical failure modes.
AI in Healthcare Platforms
Continued thinking on where AI meaningfully improves complex workforce platforms versus where it introduces new risks — drawing on research from a UWaterloo consulting program.
Unity Companion AI Prototype
A third-person gameplay prototype built in Unity. Core systems are working — player movement, camera, enemy AI, and a wolf companion that follows, detects, and attacks enemies. Currently at the combat and game loop stage, building incrementally toward a clean playable demo.
Why this
combination
matters.
Most people who work in AI governance come from law or engineering. I come from psychology — the science of how people actually think, decide, and behave. That's not a gap. That's the point.
- Psychology + AI Governance: I understand both the technical frameworks (NIST, ISO 42001, EU AI Act) and the human factors those frameworks exist to protect.
- Research rigor with practical output: I've worked in academic labs and corporate governance — I can read a methods section and write a governance template.
- I ask the uncomfortable questions: Not just "does this work?" but "who does this work for?" and "what happens when it doesn't?"
- Translation as a skill: I move between technical complexity and human legibility — a skill that matters in any risk or governance role.
- AI Risk Management
- NIST AI RMF
- ISO 42001
- EU AI Act
- Responsible AI Auditing
- Incident Frameworks
- Policy Documentation
- Literature Reviews
- Statistical Analysis
- Research Design
- Data Interpretation
- Academic Writing
- Synthesis & Reporting
- Python · Pandas (analysis & prototyping)
- Streamlit · Plotly
- RStudio · SPSS
- Microsoft 365 / SharePoint
- Zotero · LaTeX
- Figma · Miro
- Stakeholder Communication
- Project Coordination
- Student Support & Advising
- Mental Health Literacy
- Equity & Inclusion Work
- Peer Review
Let's
work
together.
I'm especially interested in research, ethical AI, governance, and risk assurance roles. If any of that overlaps with what you're building — I'd genuinely love to hear from you.