Gestural Recognition & Human-AI Interaction
To explore how human musicians perceive the AI components of systems they use to perform, I developed multiple AI-based Interactive Music Systems as part of my dissertation titled "Human-AI Partnerships in Gesture-Controlled Interactive Music Systems."
Publications
- Smith, Jason, and Freeman, Jason (2023). “Effects of Visual Explanation on Perceived Creative Autonomy in an AI-Based Generative Music System.” In IUI '23 Companion: Companion Proceedings of the 28th International Conference on Intelligent User Interfaces.
- Smith, Jason, and Freeman, Jason (2022). “Human-AI Partnerships in Generative Music.” In Proceedings of the 21st International Conference on New Interfaces for Musical Expression.
- Smith, Jason, and Freeman, Jason (2021). “Effects of Deep Neural Networks on the Perceived Creative Autonomy of a Generative Musical System.” In Proceedings of the 17th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.
EarSketch
EarSketch is an online learning environment for coding and music education. As a Graduate Research Assistant at Georgia Tech and a Postdoctoral Scholar at Northwestern University, I contributed to two of its research projects:
EarSketch CAI
Georgia Tech and the University of Florida collaborated on the research and development of an experimental conversational agent, a Co-Creative AI (CAI), that supports EarSketch users in writing code and music.
Publications
- Rahimi, S., Smith, J. B., Truesdell, E. J. K., Vinay, A., Boyer, K. E., Magerko, B., Freeman, J., & McKlin, T. (2023). “Validity and Fairness of an Automated Assessment of Creativity in Computational Music Remixing.” In the Workshop on Automated Assessment and Guidance of Project Work at the 24th International Conference on Artificial Intelligence in Education.
- Smith, J. B., Vinay, A., & Freeman, J. (2023). “The Impact of Salient Musical Features in a Hybrid Recommendation System for a Sound Library.” In the 3rd Workshop on Intelligent Music Interfaces for Listening and Creation (MILC) as part of the 28th International Conference on Intelligent User Interfaces.
- Truesdell, E. J. K., et al. (2021). “Supporting Computational Music Remixing with a Co-Creative Learning Companion.” In Proceedings of the 2021 International Conference on Computational Creativity.
- Smith, J., Truesdell, E., Freeman, J., Magerko, B., Boyer, K. E., & McKlin, T. (2020). “Modeling Music and Code Knowledge to Support a Co-Creative AI Agent for Education.” In Proceedings of the 21st International Society for Music Information Retrieval Conference.
- Smith, J., Jacob, M., Freeman, J., Magerko, B., & McKlin, T. (2019). “Combining Collaborative and Content Filtering in a Recommendation System for a Web-based DAW.” In Proceedings of the 5th International Web Audio Conference.
- Smith, J., Weeks, D., Jacob, M., Freeman, J., & Magerko, B. (2019). “Towards a Hybrid Recommendation System for a Sound Library.” In the 1st Workshop on Intelligent Music Interfaces for Listening and Creation (MILC) as part of the 24th International Conference on Intelligent User Interfaces.
Accessible EarSketch
Georgia Tech, Northwestern University, and the University of North Texas have collaborated on multi-stage research to make EarSketch more accessible for Blind and Visually Impaired (BVI) learners.
Publications
- Ding, S., Smith, J. B., Garrett, S., & Magerko, B. (2024). “Redesigning EarSketch for Inclusive CS Education: A Participatory Design Approach.” In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference.
- Garrett, S., Smith, J. B., Blue, A., Ondin, Z., Rempel, J., Mumma, K., Freeman, J., & Magerko, B. (2024). “Improving the Accessibility of the EarSketch Web-Based Audio Application for Blind and Visually Impaired Learners.” In Proceedings of the International Web Audio Conference.