Clara Na

Hello! I am a third-year PhD student at Carnegie Mellon University’s Language Technologies Institute. I am fortunate to be advised by Emma Strubell and supported by an NSF Graduate Research Fellowship. Recently, I spent a wonderful summer in Seattle as an intern on the AllenNLP team at AI2, working with Jesse Dodge and Pradeep Dasigi.

Before coming to CMU, I earned a BA in Computer Science and Mathematics at the University of Virginia. I began my research journey at UVA looking for “subtractive” design in patents with Katelyn Stenger and Leidy Klotz. My NLP origin story involves my half-baked bilingualism, a data science internship at the Washington Post, and some generous mentorship from Yangfeng Ji.

I study efficient methods and efficiency evaluation in NLP/ML. I am (very) broadly interested in language, information, impacts and applications of language technologies, and the communities of people who build and use them.


Misc: I was born and raised in northern Virginia (NoVA). My middle name is 선우 (Seon-Woo) – I am a second generation Korean American. I have a younger brother who also went to CMU. In my spare time, I like playing piano (especially with other people), running, climbing, and reading.


news

Oct 2023 Three papers accepted to EMNLP!
Aug 2023 We won a Best Paper award at the LTI Student Research Symposium!
May 2023 Interning at AI2!
Oct 2022 Our paper, Train Flat, Then Compress, was accepted to EMNLP Findings ’22!
Aug 2022 Received the Best Novel Work award at the LTI Student Research Symposium :)

selected publications

  1. EMNLP Findings
    Energy and Carbon Considerations of Fine-Tuning BERT
    In Findings of the Association for Computational Linguistics: EMNLP 2023
  2. EMNLP Main
    To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing
    In The 2023 Conference on Empirical Methods in Natural Language Processing
  3. EMNLP Main
    The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment
    In The 2023 Conference on Empirical Methods in Natural Language Processing
  4. EMNLP Findings
    Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models
    In Findings of the Association for Computational Linguistics: EMNLP 2022