Music Technology · Human-AI Interaction · Multimodal AI
I am a Postdoctoral Scholar at the Northwestern University Interactive Audio Lab. My research rethinks interfaces in the age of AI to support inclusion, creativity, and expressivity.
Research
Primary Research
Developing machine learning models that recognize and anticipate performer gestures in real time, enabling fluid, low-latency control of generative AI music systems in live performance contexts. This work rethinks the performer–system relationship from reactive to anticipatory.
Multi-stage co-design research to make the EarSketch learning platform more accessible for Blind and Visually Impaired (BVI) learners.
Development of machine learning-controlled prosthetic limbs for expressive musical performance.
Research and development of an experimental conversational agent, a Co-Creative AI (CAI), to support users of the EarSketch learning platform in writing code and music.
Research and development of the AI Holodeck, an AI-supported 3D scene-generation system.
Teaching & Collaboration
My goal in teaching and research mentorship is to equip students, through a constructionist, project-based approach, with skills they can apply in industry and interdisciplinary research. Courses and workshops I have taught or assisted include:
I have worked within and across large interdisciplinary research groups, including:
Education & Positions
Postdoctoral Scholar
Northwestern University, Department of Computer Science
Supervisor: Prof. Bryan Pardo
PhD, Music Technology, Minor in Human-AI Interaction
Georgia Institute of Technology
Dissertation: "Human-AI Partnerships in Gesture-Controlled Interactive Music Systems" — Advisor: Prof. Jason Freeman
MS, Music Technology
Georgia Institute of Technology
Advisor: Prof. Gil Weinberg
BS, Music Engineering & Technology, Minor in Computer Engineering
University of Miami