This page is only updated sporadically. For a fully up-to-date list, see my Google Scholar page.

2023

Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, and R. Thomas McCoy. Bayes in the Age of Intelligent Machines. arXiv preprint arXiv:2311.10206. [pdf]

R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, and Thomas L. Griffiths. Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve. arXiv preprint arXiv:2309.13638. [pdf]

R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN. Transactions of the Association for Computational Linguistics. [pdf]

R. Thomas McCoy and Thomas L. Griffiths. Modeling rapid language learning by distilling Bayesian priors into artificial neural networks. arXiv preprint arXiv:2305.14701. [pdf]

Aditya Yedetore, Tal Linzen, Robert Frank, and R. Thomas McCoy. How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. [pdf]

Thomas L. Griffiths, Sreejan Kumar, and R. Thomas McCoy. On the hazards of relating representations and inductive biases. Behavioral and Brain Sciences (response to "The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences"). [BBS link]

2022

R. Thomas McCoy. Implicit compositional structure in the vector representations of artificial neural networks. Johns Hopkins University PhD dissertation. [pdf]

Paul Smolensky, R. Thomas McCoy, Roland Fernandez, Matthew Goldrick, and Jianfeng Gao. Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems. AI Magazine. [AI Magazine link]

Paul Smolensky, R. Thomas McCoy, Roland Fernandez, Matthew Goldrick, and Jianfeng Gao. Neurocompositional computing in human and machine intelligence: A tutorial. Microsoft technical report. [pdf]

2021

Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, R. Thomas McCoy, Yichen Jiang, Coleman Haley, Roland Fernandez, Hamid Palangi, Jianfeng Gao, and Paul Smolensky. Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages. In Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages.

R. Thomas McCoy, Jennifer Culbertson, Paul Smolensky, and GĂ©raldine Legendre. Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar. In Proceedings of the 43rd Annual Conference of the Cognitive Science Society. [pdf]

2020

Michael Lepori and R. Thomas McCoy. Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis. In Proceedings of the 28th International Conference on Computational Linguistics (COLING). [pdf]

R. Thomas McCoy, Junghyun Min, and Tal Linzen. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. [pdf]

Paul Soulos, R. Thomas McCoy, Tal Linzen, and Paul Smolensky. Discovering the compositional structure of vector representations with Role Learning Networks. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. [pdf]

R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, and Tal Linzen. Universal linguistic inductive biases via meta-learning. In Proceedings of the 42nd Annual Conference of the Cognitive Science Society. [pdf] [demo]

Michael Lepori, Tal Linzen, and R. Thomas McCoy. Representations of syntax [MASK] useful: Effects of constituency and dependency structure in recursive LSTMs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [pdf]

Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. Syntactic data augmentation increases robustness to inference heuristics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [pdf]

R. Thomas McCoy, Robert Frank, and Tal Linzen. Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. Transactions of the Association for Computational Linguistics. [arXiv] [website]

2019

R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. [ACL anthology] [pdf]

Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, and Berlin Chen. Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. [ACL anthology] [pdf]

R. Thomas McCoy. Touch down in Pittsburghese. In Yale Working Papers in Grammatical Diversity. [pdf]

Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. Probing What Different NLP Tasks Teach Machines about Function Word Comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019). [ACL anthology] [pdf]

R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and Paul Smolensky. RNNs implicitly implement tensor-product representations. In International Conference on Learning Representations 2019. [OpenReview] [pdf] [demo] [poster]

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations 2019. [OpenReview] [pdf]

R. Thomas McCoy and Tal Linzen. Non-entailed subsequences as a challenge for natural language inference. In Proceedings of the Society for Computation in Linguistics (SCiL) 2019. [arXiv] [pdf]

2018

R. Thomas McCoy, Robert Frank, and Tal Linzen. Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society. [arXiv] [pdf]

Patrick Littell, R. Thomas McCoy, Na-Rae Han, Shruti Rijhwani, Zaid Sheikh, David Mortensen, Teruko Mitamura, and Lori Levin. Parser combinators for Tigrinya and Oromo morphology. In Language Resources and Evaluation Conference (LREC) 2018. [ACL anthology] [pdf]

R. Thomas McCoy and Robert Frank. Phonologically Informed Edit Distance Algorithms for Word Alignment with Low-Resource Languages. In Proceedings of the Society for Computation in Linguistics (SCiL) 2018, pages 102-112. [ACL anthology] [pdf]

2017

Jungo Kasai, Robert Frank, R. Thomas McCoy, Owen Rambow, and Alexis Nasr. TAG Parsing with Neural Networks and Vector Representations of Supertags. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1712-1722. [ACL anthology] [pdf]

Dan Friedman*, Jungo Kasai*, R. Thomas McCoy*, Robert Frank, Forrest Davis, and Owen Rambow. Linguistically Rich Vector Representations of Supertags for TAG Parsing. In Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms, pages 122-131. [ACL anthology] [pdf] *Equal contribution.

R. Thomas McCoy. English comparatives as degree-phrase relative clauses. In Proceedings of the Linguistic Society of America 2, 26:1-7. [link]