Publications

2024

D’Mello, S. K., Biddy, Q., Breideband, T., Bush, J., Chang, M., Cortez, A., Flanigan, J., Foltz, P. W., Gorman, J. C., Hirshfield, L. M., Ko, M., Krishnaswamy, N., Lieber, R., Martin, J. H., Palmer, M., Penuel, W. R., Philip, T., Puntambekar, S., Pustejovsky, J., Reitman, J. G., Sumner, T., Tissenbaum, M., Walker, M., and Whitehill, J. (2024). From learning optimization to learner flourishing: Reimagining AI in Education at the Institute for Student-AI Teaming (iSAT). In AI Magazine. Wiley. [link]

Nath, A., Manafi, S., Chelle, A., and Krishnaswamy, N. (2024). Okay, Let’s Do This! Modeling Event Coreference with Generated Rationales and Knowledge Distillation. In North American Chapter of the Association for Computational Linguistics (NAACL). ACL. [pdf]

Oved, I., Krishnaswamy, N., Pustejovsky, J., and Hartshorne, J. K. (2024). Computational Thought Experiments for a More Rigorous Philosophy and Science of the Mind. In Annual Meeting of the Cognitive Science Society (CogSci). Cognitive Science Society.

Nath, A., Jamil, H., Ahmed, S. R., Baker, G., Ghosh, R., Martin, J. H., Blanchard, N., and Krishnaswamy, N. (2024). Multimodal Cross-Document Event Coreference Resolution Using Linear Semantic Transfer and Mixed-Modality Ensembles. In Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING). ACL. [pdf]

Khebour, I., Lai, K., Bradford, M., Zhu, Y., Brutti, R., Tam, C., Tu, J., Ibarra, B., Blanchard, N., Krishnaswamy, N., and Pustejovsky, J. (2024). Common Ground Tracking in Multimodal Dialogue. In Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING). ACL. [pdf]

Manafi, S. and Krishnaswamy, N. (2024). Cross-Lingual Transfer Robustness to Lower-Resource Languages on Adversarial Datasets. In Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING). ACL. [pdf]

Venkatesha, V., Nath, A., Khebour, I., Chelle, A., Bradford, M., Tu, J., Pustejovsky, J., Blanchard, N., and Krishnaswamy, N. (2024). Propositional Extraction from Natural Speech in Small Group Collaborative Tasks. In International Conference on Educational Data Mining (EDM). International Educational Data Mining Society.

VanderHoeven, H., Bradford, M., Jung, C., Khebour, I., Lai, K., Pustejovsky, J., Krishnaswamy, N., and Blanchard, N. (2024). Multimodal Design for Interactive Collaborative Problem Solving Support. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

Zhu, Y., VanderHoeven, H., Lai, K., Bradford, M., Tam, C., Khebour, I., Brutti, R., Krishnaswamy, N., and Pustejovsky, J. (2024). Modeling Theory of Mind in Multimodal HCI. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

VanderHoeven, H., Blanchard, N., and Krishnaswamy, N. (2024). Point Target Detection for Multimodal Communication. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

Seefried, E., Bradford, M., Aich, S., Siebert, C., Krishnaswamy, N., and Blanchard, N. (2024). Learning Foreign Language Vocabulary Through Task-Based Virtual Reality Immersion. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

Ghaffari, S. and Krishnaswamy, N. (2024). Exploring Failure Cases in Multimodal Reasoning About Physical Dynamics. In AAAI Spring Symposium: Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge (MAKE). AAAI. [pdf]

Mannan, S., Vimal, V. P., DiZio, P., and Krishnaswamy, N. (2024). Embodying Human-Like Modes of Balance Control Through Human-in-the-Loop Dyadic Learning. In AAAI Spring Symposium: Symposium on Human-Like Learning (HLL). AAAI. [pdf] [poster]

Khebour, I., Brutti, R., Dey, I., Sikes, K., Lai, K., Bradford, M., Cates, B., Hansen, P., Jung, C., Wisniewski, B., Terpstra, C., Hirshfield, L. M., Puntambekar, S., Blanchard, N., Pustejovsky, J., and Krishnaswamy, N. (2024). When Text and Speech Are Not Enough: A Multimodal Dataset of Collaboration in a Situated Task. Journal of Open Humanities Data. Ubiquity Press. [link]

Li, T., Jing, M., Makhani, Z., Oved, I., Krishnaswamy, N., Pustejovsky, J., and Hartshorne, J. K. (2024). Modeling the development of intuitive mechanics. Poster presented at the Annual Meeting of the Cognitive Science Society (CogSci). Cognitive Science Society.

2023

Henlein, A., Gopinath, A., Krishnaswamy, N., Mehler, A., and Pustejovsky, J. (2023). Grounding Human-Object Interaction to Affordance Behavior in Multimodal Datasets. In Frontiers in Artificial Intelligence: Section Language and Computation. Frontiers Media. [link]

Oved, I., Krishnaswamy, N., Pustejovsky, J., and Hartshorne, J. (2023). Neither neural networks nor the language-of-thought alone make a complete game (In response to: The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences). In Behavioral and Brain Sciences. Cambridge University Press. [link]

Nath, A., Mannan, S., and Krishnaswamy, N. (2023). AxomiyaBERTa: A Phonologically-aware Transformer Model for Assamese. In Findings of the Association for Computational Linguistics: ACL 2023 (Findings of ACL). ACL. [pdf] [poster]

Ahmed, S. R., Nath, A., Martin, J. H., and Krishnaswamy, N. (2023). 2*n is better than n^2: Decomposing Event Coreference Resolution into Two Tractable Problems. In Findings of the Association for Computational Linguistics: ACL 2023 (Findings of ACL). ACL. [pdf]

Bradford, M., Khebour, I., Blanchard, N., and Krishnaswamy, N. (2023). Automatic Detection of Collaborative States in Small Groups Using Multimodal Features. In International Conference on Artificial Intelligence in Education (AIEd). International AIEd Society. [pdf]

VanderHoeven, H., Blanchard, N., and Krishnaswamy, N. (2023). Robust Motion Recognition using Gesture Phase Annotation. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

Kandoi, C., Jung, C., Mannan, S., VanderHoeven, H., Meisman, Q., Krishnaswamy, N., and Blanchard, N. (2023). Intentional Microgesture Recognition for Extended Human-Computer Interaction. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

Ghaffari, S. and Krishnaswamy, N. (2023). Grounding and Distinguishing Conceptual Vocabulary Through Similarity Learning in Embodied Simulations. In International Conference on Computational Semantics (IWCS). ACL. [pdf] [slides]

Nirenburg, S., Krishnaswamy, N., and McShane, M. (2023). Hybrid Machine Learning/Knowledge Base Systems Learning through Natural Language Dialog with Deep Learning Models. In AAAI Spring Symposium: Challenges Requiring the Combination of Machine Learning and Knowledge Engineering (MAKE). AAAI. [pdf]

Ahmed, S. R., Nath, A., Regan, M., Pollins, A., Krishnaswamy, N., and Martin, J. H. (2023). How Good is the Model in Model-in-the-loop Event Coreference Resolution? In Linguistic Annotation Workshop (LAW). ACL. [pdf]

Lee, K., Krishnaswamy, N., and Pustejovsky, J. (2023). An Abstract Specification of VoxML as an Annotation Language. In International Workshop on Semantic Annotation (ISA). ACL. [pdf]

Terpstra, C., Khebour, I., Bradford, M., Wisniewski, B., Krishnaswamy, N., and Blanchard, N. (2023). How Good is Automatic Segmentation as a Multimodal Discourse Annotation Aid? In International Workshop on Semantic Annotation (ISA). ACL. [pdf] [slides]

Alalyani, N. and Krishnaswamy, N. (2023). A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents. In Workshop on Generation and Evaluation of Non-Verbal Behaviour for Embodied Agents (GENEA). ACM. [pdf]

DiZio, P., Krishnaswamy, N., Mannan, S., and Hansen, P. (2023). Manual balancing of a visual inverted pendulum by quantized versus proportional joystick commands. In Neuroscience. Society for Neuroscience.

Krishnaswamy, N., Oved, I., Hartshorne, J., and Pustejovsky, J. (2023). Meaning to Mean: A Precondition for Sentience and Understanding in Large Language Models. In The Science of Consciousness (TSC). Center for Consciousness Studies.

Oved, I., Montemayor, C., Krishnaswamy, N., Pustejovsky, J., and Hartshorne, J. (2023). The View from Outside the Matrix: Doing Philosophy of Mind and Cognitive Science with Virtual Worlds. In The Science of Consciousness (TSC). Center for Consciousness Studies.

Weatherley, J., Dickler, R., Foltz, P. W., Srinivas, A., Pugh, S., Krishnaswamy, N., Whitehill, J., Bodzianowski, M., Perkoff, M., Southwell, R., Bush, J., Chang, M., Hirshfield, L. M., Showers, D., Ganesh, A., Li, Z., Danilyuk, E., He, X., Khebour, I., Dey, I., and D’Mello, S. K. (2023). The iSAT Collaboration Analytics Pipeline. In International Learning Analytics and Knowledge Conference (LAK). Society for Learning Analytics Research.

Dey, I., Puntambekar, S., Li, R., Gengler, D., Dickler, R., Hirshfield, L. M., Clevenger, C., Rose, S., Bradford, M., and Krishnaswamy, N. (2023). The NICE framework: Analyzing Students’ Nonverbal Interactions During Collaborative Learning. In Interactive Workshop: Collaboration Analytics. Society for Learning Analytics Research.

2022

Krishnaswamy, N. and Pustejovsky, J. (2022). Affordance Embeddings for Situated Language Understanding. In Frontiers in Artificial Intelligence: Section Language and Computation. Frontiers Media. [link]

Nath, A., Mahdipour Saravani, S., Khebour, I., Mannan, S., Li, Z., and Krishnaswamy, N. (2022). A Generalized Method for Automated Multilingual Loanword Detection. In International Conference on Computational Linguistics (COLING). ACL. [pdf] [poster]

Mannan, S. and Krishnaswamy, N. (2022). Where am I and where should I go? Grounding positional and directional labels in a disoriented human balancing task. In Conference on (Dis)embodiment. ACL. [pdf] [slides]

Krishnaswamy, N., Pickard, W., Cates, B., Blanchard, N., and Pustejovsky, J. (2022). The VoxWorld Platform for Multimodal Embodied Agents. In Language Resources and Evaluation Conference (LREC). ACL. [pdf] [poster] [video]

Ghaffari, S. and Krishnaswamy, N. (2022). Detecting and Accommodating Novel Types and Concepts in an Embodied Simulation Environment. In Annual Conference on Advances in Cognitive Systems (ACS). Cognitive Systems Foundation. [pdf] [slides]

Bradford, M., Hansen, P., Beveridge, R., Krishnaswamy, N., and Blanchard, N. (2022). A deep dive into microphones for recording collaborative group work. In International Conference on Educational Data Mining (EDM). International Educational Data Mining Society. [pdf]

Pustejovsky, J. and Krishnaswamy, N. (2022). Multimodal Semantics for Affordances and Actions. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

Dickler, R., Foltz, P. W., Krishnaswamy, N., Whitehill, J., Weatherly, J., Bodzianowski, M., Perkoff, M., Southwell, R., Pugh, S., Bush, J., Chang, M., Hirshfield, L. M., Showers, D., Ganesh, A., Li, Z., Danilyuk, E., He, X., Khebour, I., Dey, I., Puntambekar, S., and D’Mello, S. K. (2022). iSAT speech-based AI display for small group collaboration in classrooms. In Interactive event at International Conference on Artificial Intelligence in Education (AIEd). International AIEd Society.

Nath, A., Ghosh, R., and Krishnaswamy, N. (2022). Phonetic, Semantic, and Articulatory Features in Assamese-Bengali Cognate Detection. In Workshop on NLP for Similar Languages, Varieties, and Dialects (VarDial). ACL. [pdf] [slides]

Tomar, A. and Krishnaswamy, N. (2022). Exploring Correspondences Between Gibsonian and Telic Affordances for Object Grasping. In Workshop on Annotation, Recognition and Evaluation of Actions (AREA). ACL. [pdf] [slides]

Bradford, M., Hansen, P., Lai, K., Brutti, R., Dickler, R., Hirshfield, L. M., Pustejovsky, J., Blanchard, N., and Krishnaswamy, N. (2022). Challenges and Opportunities in Annotating a Multimodal Collaborative Problem Solving Task. In Workshop on Interdisciplinary Approaches to Getting AI Experts and Education Stakeholders Talking (Bridging AIEd). International AIEd Society. [pdf] [slides]

Castillon, I., Venkatesha, V., VanderHoeven, H., Bradford, M., Krishnaswamy, N., and Blanchard, N. (2022). Multimodal Features for Group Dynamic-Aware Agents. In Workshop on Interdisciplinary Approaches to Getting AI Experts and Education Stakeholders Talking (Bridging AIEd). International AIEd Society. [pdf] [slides]

Krishnaswamy, N. and Ghaffari, S. (2022). Exploiting Embodied Simulation to Detect Novel Object Classes Through Interaction. Poster presented at the Annual Meeting of the Cognitive Science Society (CogSci). Cognitive Science Society. [pdf] [poster]

2021

Pustejovsky, J. and Krishnaswamy, N. (2021). Embodied Human Computer Interaction. In KI - Künstliche Intelligenz: Special Issue on NLP and Semantics. Springer. [link]

Pustejovsky, J. and Krishnaswamy, N. (2021). Situated Meaning in Multimodal Dialogue: Human-Robot and Human-Computer Interactions. In Traitement Automatique des Langues: Special Issue on Dialog and Dialog Systems. Association pour le Traitement Automatique des Langues (ATALA). [pdf]

Pustejovsky, J. and Krishnaswamy, N. (2021). The Role of Embodiment and Simulation in Evaluating HCI: Theory and Framework. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

Krishnaswamy, N. and Pustejovsky, J. (2021). The Role of Embodiment and Simulation in Evaluating HCI: Experiments and Evaluation. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf]

Krishnaswamy, N. and Alalyani, N. (2021). Embodied Multimodal Agents to Bridge the Understanding Gap. In Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+NLP). ACL. [pdf] [poster]

2020

Krishnaswamy, N. and Pustejovsky, J. (2020). Neurosymbolic AI for Situated Language Understanding. In Annual Conference on Advances in Cognitive Systems (ACS). Cognitive Systems Foundation. [pdf] [slides] [video]

Krishnaswamy, N. and Pustejovsky, J. (2020). A Formal Analysis of Multimodal Referring Expressions Under Common Ground. In International Conference on Language Resources and Evaluation (LREC). ACL. [pdf]

Krishnaswamy, N., Narayana, P., Bangar, R., Rim, K., Patil, D., McNeely-White, D. G., Ruiz, J., Draper, B., Beveridge, R., and Pustejovsky, J. (2020). Diana’s World: A Situated Multimodal Interactive Agent. In AAAI Conference on Artificial Intelligence (AAAI): Demos Program. AAAI. [pdf] [poster]

Krishnaswamy, N., Beveridge, R., Pustejovsky, J., Patil, D., McNeely-White, D. G., Wang, H., and Ortega, F. R. (2020). Situational Awareness in Human Computer Interaction: Diana’s World. In International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments (ICAT-EGVE): Demos. ACM/Eurographics. [pdf] [video]

Pustejovsky, J. and Krishnaswamy, N. (2020). Embodied Human-Computer Interactions through Situated Grounding. In International Conference on Intelligent Virtual Agents (IVA). ACM. [pdf] [slides]

Hutchens, M., Krishnaswamy, N., Cochran, B., and Pustejovsky, J. (2020). Jarvis: A Multimodal Visualization Tool for Bioinformatic Data. In International Conference on Human-Computer Interaction (HCII). Springer. [pdf] [slides]

Krajovic, K., Krishnaswamy, N., Dimick, N. J., Salas, R. P., and Pustejovsky, J. (2020). Situated Multimodal Control of a Mobile Robot: Navigation through a Virtual Environment. In Special Session on Situated Dialogue with Virtual Agents and Robots (RoboDIAL): Late-Breaking Papers. Non-archival. [pdf] [video]

Pustejovsky, J., Krishnaswamy, N., Beveridge, R., Ortega, F. R., Patil, D., Wang, H., and McNeely-White, D. G. (2020). Interpreting and Generating Gestures with Embodied Human-Computer Interactions. In Workshop on Generation and Evaluation of Non-Verbal Behaviour for Embodied Agents (GENEA). ACM. [pdf]

2019

Krishnaswamy, N. and Pustejovsky, J. (2019). Generating a Novel Dataset of Multimodal Referring Expressions. In International Workshop on Computational Semantics (IWCS). ACL. [pdf] [poster] [supplementary panel]

Krishnaswamy, N., Friedman, S., and Pustejovsky, J. (2019). Combining Deep Learning and Qualitative Spatial Reasoning to Learn Complex Structures from Sparse Examples with Noise. In AAAI Conference on Artificial Intelligence (AAAI). AAAI. [pdf] [poster] [slide deck]

Krishnaswamy, N. and Pustejovsky, J. (2019). Situated Grounding Facilitates Multimodal Concept Learning for AI. In Visually Grounded Interaction and Language Workshop (ViGIL). Neural Information Processing Systems Foundation. [pdf] [poster]

Krishnaswamy, N. and Pustejovsky, J. (2019). Multimodal Continuation-style Architectures for Human-Robot Interaction. In Workshop on Cognitive Vision: Integrated Vision and AI for Embodied Perception and Interaction. Cognitive Systems Foundation. [pdf] [slide deck]

Pustejovsky, J. and Krishnaswamy, N. (2019). Situational Grounding within Multimodal Simulations. In AAAI Workshop on Games and Simulations in AI (GameSim). AAAI. [pdf] [poster]

McNeely-White, D., Ortega, F., Beveridge, R., Draper, B., Bangar, R., Patil, D., Pustejovsky, J., Krishnaswamy, N., Rim, K., Ruiz, J., and Wang, I. (2019). User-Aware Shared Perception for Embodied Agents. In International Conference on Humanized Computing and Communication (HCC). IEEE. [pdf]

2018

Krishnaswamy, N. and Pustejovsky, J. (2018). Deictic Adaptation in a Virtual Environment. In Spatial Cognition XI: International Conference on Spatial Cognition. Springer. [pdf] [slide deck]

Krishnaswamy, N. and Pustejovsky, J. (2018). An Evaluation Framework for Multimodal Interaction. In International Conference on Language Resources and Evaluation (LREC). ACL. [pdf] [poster]

Krishnaswamy, N., Do, T., and Pustejovsky, J. (2018). Learning Actions from Events Using Agent Motions. In Workshop on Annotation, Recognition and Evaluation of Actions (AREA). ACL. [pdf] [poster]

Pustejovsky, J. and Krishnaswamy, N. (2018). The Role of Event Simulation in Spatial Cognition. In Workshop on Models and Representations in Spatial Cognition (MRSC). Springer. [pdf]

Pustejovsky, J. and Krishnaswamy, N. (2018). Every Object Tells a Story. In Workshop on Events and Stories in the News (EventStory). ACL. [pdf]

Narayana, P., Krishnaswamy, N., Wang, I., Bangar, R., Patil, D., Mulay, G., Rim, K., Beveridge, R., Ruiz, J., Pustejovsky, J., and Draper, B. (2018). Cooperating with Avatars Through Gesture, Language and Action. In Intelligent Systems Conference (IntelliSys). IEEE. [pdf]

Do, T., Krishnaswamy, N., Rim, K., and Pustejovsky, J. (2018). Multimodal Interactive Learning of Primitive Actions. In AAAI Fall Symposium: Artificial Intelligence for Human-Robot Interaction. AAAI. [pdf]

Do, T., Krishnaswamy, N., and Pustejovsky, J. (2018). Teaching Virtual Agents to Perform Complex Spatial-Temporal Activities. In AAAI Spring Symposium: Integrating Representation, Reasoning, Learning, and Execution for Goal Directed Autonomy. AAAI. [pdf]

2017

Krishnaswamy, N. (2017). Monte-Carlo Simulation Generation Through Operationalization of Spatial Primitives. Doctoral dissertation, Brandeis University. ProQuest. [pdf]

Krishnaswamy, N., Narayana, P., Wang, I., Rim, K., Bangar, R., Patil, D., Mulay, G., Ruiz, J., Beveridge, R., Draper, B., and Pustejovsky, J. (2017). Communicating and Acting: Understanding Gesture in Simulation Semantics. In International Workshop on Computational Semantics (IWCS). ACL. [pdf]

Krishnaswamy, N. and Pustejovsky, J. (2017). Do You See What I See? Effects of POV on Spatial Relation Specifications. In International Workshop on Qualitative Reasoning (QR). AAAI/International Joint Conferences on Artificial Intelligence. [pdf] [slide deck]

Pustejovsky, J., Krishnaswamy, N., Draper, B., Narayana, P., and Bangar, R. (2017). Creating Common Ground Through Multimodal Simulations. In Workshop on Foundations of Situated and Multimodal Communication (FSMC). ACL. [pdf]

Pustejovsky, J., Krishnaswamy, N., and Do, T. (2017). Object Embodiment in a Multimodal Simulation. In AAAI Spring Symposium: Interactive Multisensory Object Perception for Embodied Agents. AAAI. [pdf] [poster]

2016

Krishnaswamy, N. and Pustejovsky, J. (2016). Multimodal Semantic Simulations of Linguistically Underspecified Motion Events. In Spatial Cognition X: International Conference on Spatial Cognition. Springer. [pdf] [slide deck]

Krishnaswamy, N. and Pustejovsky, J. (2016). VoxSim: A Visual Platform for Modeling Motion Language. In International Conference on Computational Linguistics (COLING): Technical Papers. ACL. [pdf]

Pustejovsky, J., Krishnaswamy, N., Do, T., and Kehat, G. (2016). The Development of Multimodal Lexical Resources. In Workshop on Grammar and the Lexicon (GramLex). ACL. [pdf]

Pustejovsky, J. and Krishnaswamy, N. (2016). Visualizing Events: Simulating Meaning in Language. In Annual Meeting of the Cognitive Science Society (CogSci). Cognitive Science Society. [pdf]

Pustejovsky, J. and Krishnaswamy, N. (2016). VoxML: A Visualization Modeling Language. In International Conference on Language Resources and Evaluation (LREC). ACL. [pdf]

Do, T., Krishnaswamy, N., and Pustejovsky, J. (2016). ECAT: Event Capture Annotation Tool. In International Workshop on Semantic Annotation (ISA). ACL. [pdf]

< 2016

Pustejovsky, J. and Krishnaswamy, N. (2014). Generating Simulations of Motion Events from Verbal Descriptions. In Lexical and Computational Semantics (*SEM). ACL. [pdf]

Krishnaswamy, N. (2013). The Features of Spatial Aspect: Examining the Inherent Semantics of Space in English Verbs. Master’s thesis, Brandeis University. ProQuest. [pdf]

Krishnaswamy, N. (2009). Comparison of Efficiency in Pathfinding Algorithms in Game Development. Senior Honors thesis, DePaul University (published as Technical Report). [pdf]