About me

Hi, welcome to my website! I am Qingyan Guo (郭清妍), a second-year master's student majoring in Artificial Intelligence at Tsinghua University, Shenzhen International Graduate School, supervised by Prof. Yujiu Yang. My current research interests include large language models and prompt learning.

Prior to that, I did my undergrad at Tianjin University, where I double-majored in Computer Science and English Literature.

Experiences

Education

  • Summer Intern, École Polytechnique Fédérale de Lausanne (EPFL), Jun. 2024 - Present
  • Master's student, Artificial Intelligence, Tsinghua University, Sept. 2022 - Jul. 2025 (Expected)
  • Bachelor of Engineering, Computer Science, Tianjin University, Sept. 2018 - Jul. 2022
  • Bachelor of Arts, English Literature (double major), Tianjin University, Sept. 2020 - Jul. 2022

Internships

Summer Intern, Brbic Lab, EPFL, Lausanne, Switzerland
Jun. 2024 - Present
Advisors: Maria Brbic, Shawn Fan
Interests: Multi-modal single-cell foundation model
Research Intern, Machine Learning Group, Microsoft Research Asia, Beijing, China
Jan. 2023 - Present
Advisors: Rui Wang, Xu Tan
Interests: Prompt Learning, Machine Translation
MLE Intern, General Dialogue Group, Baidu Inc., Beijing, China
Mar. 2022 - Dec. 2022
Advisor: Zeyang Lei
Interests: Dialogue Generation, Persona for Users
MLE Intern, Marketplace Technology, Didi Inc., Beijing, China
Nov. 2021 - Mar. 2022
Interests: Causal Inference, Time Series Forecasting

Competitions

(Nov. 2022) 1st Place (1/54) in the NLP task of the NeurIPS 2022 IGLU Challenge

Publications

  • [ACL 2024 Findings] Mitigating Reversal Curse in Large Language Models via Semantic-aware Permutation Training [paper] [code]
    Qingyan Guo*, Rui Wang, Junliang Guo, Xu Tan, Jiang Bian, Yujiu Yang
    This work studies the reversal curse, which exists in many decoder-only large language models and poses a challenge to the advancement of artificial general intelligence (AGI), as it suggests a gap in the models' ability to comprehend and apply bidirectional reasoning. To address this issue, we propose Semantic-aware Permutation Training (SPT), which segments the training sentences into semantic units (i.e., entities or phrases) with an assistant language model and permutes these units before feeding them into the model. SPT effectively mitigates the reversal curse of LLMs.
  • [ICLR 2024] Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers [paper] [code]
    Qingyan Guo*, Rui Wang*, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, Yujiu Yang
    This work proposes EvoPrompt, which leverages the strong natural language generation capabilities of LLMs as well as the efficient optimization performance of evolutionary algorithms. Without access to any internal parameters or gradients, EvoPrompt significantly outperforms human-engineered prompts and existing methods for automatic prompt generation.
  • [ICASSP 2023] Hint-enhanced In-Context Learning wakes Large Language Models up for knowledge-intensive tasks [paper]
    Yifan Wang, Qingyan Guo, Xinzhe Ni, Chufan Shi, Lemao Liu, Haiyun Jiang, Yujiu Yang
    This work proposes a new paradigm called Hint-enhanced In-Context Learning (HICL), which leverages LLMs' reasoning ability to extract query-related knowledge from demonstrations and thereby better elicit the ICL ability of LLMs.
  • [ACL 2023 Main] MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction [paper] [code]
    Zhibin Gou*, Qingyan Guo*, Yujiu Yang
    This work proposes MvP, a simple unified generative framework for structure prediction to leverage the intuition of human-like problem-solving processes from different views. MvP achieves state-of-the-art performance on 10 datasets across 4 ABSA tasks.

Honors

  • Outstanding Graduate, 2022
  • Outstanding Graduate Thesis, 2022
  • National Scholarship, 2020
  • Tianjin Municipal Government Scholarship, 2021
  • Outstanding Youth, 2021

Personal Interests

I love music 🎵 (R&B, pop), singing 🎤 (pop music), and coffee ☕️, and I spend most of my spare time on them! Feel free to contact me if you share these interests!

To be continued...