I am a postdoctoral researcher in the Department of Computer Science at Princeton University, advised by Tom Griffiths. Starting in January 2024, I will be an Assistant Professor in the Department of Linguistics at Yale University. Before coming to Princeton, I received a Ph.D. in Cognitive Science at Johns Hopkins University, co-advised by Tal Linzen and Paul Smolensky, and before that I received a B.A. in Linguistics at Yale University, advised by Robert Frank.
I study computational linguistics using techniques from cognitive science, machine learning, and natural language processing. My research focuses on how to achieve robust generalization in models of language: this remains one of the main areas where current AI systems fall short, and it is one of the most impressive components of language processing in humans. In particular, I study which inductive biases and which representations of structure enable robust generalization, since these are two of the major factors that determine how learners generalize to novel types of input.
Conversation topics: Do you have a meeting scheduled with me but don’t know what to talk about? See this page for some topics that are often on my mind.
- R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN. arXiv preprint arXiv:2111.09509. [pdf]
- R. Thomas McCoy, Jennifer Culbertson, Paul Smolensky, and Géraldine Legendre. Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar. Proceedings of the 43rd Annual Conference of the Cognitive Science Society. [pdf]
- R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, and Tal Linzen. Universal linguistic inductive biases via meta-learning. Proceedings of the 42nd Annual Conference of the Cognitive Science Society. [pdf] [demo]
- R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. ACL 2019. [pdf]
- R. Thomas McCoy, Robert Frank, and Tal Linzen. Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. TACL. [pdf] [website]
- R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and Paul Smolensky. RNNs implicitly implement Tensor Product Representations. ICLR 2019. [pdf] [demo]