Do you have a meeting scheduled with me but don’t know what to talk about? Here are some topics that I’m always interested in discussing:

  • What can linguistics and natural language processing contribute to each other?
  • How can we use neural networks as cognitive or psycholinguistic models?
  • How impressed should we be by current AI? What are AI’s most important achievements and most serious shortcomings?
  • What factors explain the impressive performance of large language models (LLMs)?
  • Which aspects of language are innate?
  • How far can AI advance based on scale alone (larger models, more data, longer training, etc.)?
  • When improving models’ performance, to what extent should we focus on their training data vs. their architecture?
  • Does it matter if models get the right answers for the wrong reasons?
  • What role should language typology play in theories of language learning?
  • What types of insights can we gain from the paradigm of artificial language learning?
  • Is out-of-distribution generalization a meaningful concept? What about compositionality?
  • How should we measure progress in AI/NLP?
  • In language acquisition, how impoverished is the stimulus?
  • To what extent should AI models be human-like (including making the same mistakes that humans do)?
  • How can we tell what inductive biases a learner has (whether a human or a machine)?
  • Why do we need a field of cognitive science (instead of just individual disciplines of psychology, neuroscience, linguistics, etc.)?
  • What is the role of linguistics within cognitive science?
  • In computational cognitive science, what are the relative advantages/disadvantages of Bayesian models and neural network models?
  • What role is there (if any) for gradience or probabilities in linguistic theory?
  • How do (artificial or biological) neural networks represent language, given that they seem qualitatively different from the symbolic structures traditionally used to analyze language?
  • Why do neural networks outperform symbolic models in natural language processing?
  • How much does language learning depend on real-world grounding, as opposed to linguistic form alone?

I’m also always happy to discuss any of my papers or to chat about life in academia: work/life balance, choosing research topics, teaching, mentoring, outreach, etc.