I am a Ph.D. student in the Department of Cognitive Science at Johns Hopkins University, co-advised by Tal Linzen and Paul Smolensky. Before coming to JHU, I received a B.A. in Linguistics at Yale University, advised by Robert Frank. Last summer, I did a research internship at Microsoft supervised by Asli Celikyilmaz.
I study computational linguistics using techniques from cognitive science, machine learning, and natural language processing. My research focuses on achieving robust generalization in models of language: generalization remains one of the main areas where current AI systems fall short, and it is one of the most impressive aspects of human language processing. In particular, I study which inductive biases and which representations of structure enable robust generalization, since these are two of the major factors that determine how learners generalize to novel types of input.
- R. Thomas McCoy, Jennifer Culbertson, Paul Smolensky, and Géraldine Legendre. Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar. Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021). [pdf]
- R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, and Tal Linzen. Universal linguistic inductive biases via meta-learning. Proceedings of the 42nd Annual Conference of the Cognitive Science Society (CogSci 2020). [pdf] [demo]
- R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. ACL 2019. [pdf]
- R. Thomas McCoy, Robert Frank, and Tal Linzen. Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. TACL 2020. [pdf] [website]
- R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and Paul Smolensky. RNNs implicitly implement Tensor Product Representations. ICLR 2019. [pdf] [demo]