Hi, I am Jian. I am currently a PhD candidate at Monash University, under the supervision of Prof. Aldeida Aleti, Prof. Chunyang Chen, and Prof. Hongyu Zhang.

Previously, I was a PhD candidate (research assistant) at the University of Zurich, supervised by Prof. Harald C. Gall. Before that, I obtained my master’s degree in machine learning at KTH Royal Institute of Technology, supervised by Prof. Martin Monperrus, and completed my bachelor’s degree in computer science (elite class) at Shandong University, supervised by Prof. Jun Ma.

My research interests lie at the intersection of software engineering and machine learning, with a focus on adapting the idea of program repair to language models, namely LM Repair. For any form of academic communication, feel free to contact me via email.

🔥 News

💻 Featured Work


Semantic-based Optimization for Repairing LLMs: Case Study on Code Generation
Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang

STAR is a novel semantic-based optimization approach for LM repair that efficiently locates and patches buggy neurons using statistical insights and analytical formulas. It outperforms prior methods in effectiveness and efficiency while minimizing side effects.


A Semantic-based Layer Freezing Approach to Efficient Fine-Tuning of LMs
Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang

Our semantic-based layer freezing approach improves the efficiency of language model fine-tuning by determining where to fine-tune. Guided by a detailed semantic analysis of the model’s inference process, it outperforms existing methods.


Vocabulary-Defined Semantics: Latent Space Clustering for In-Context Learning
Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang

We propose “vocabulary-defined semantics” to reformulate in-context learning as a clustering problem that aligns the semantic properties of language models with downstream data. Our approach outperforms state-of-the-art methods in effectiveness, efficiency, and robustness.


Neuron Patching: Semantic-based Neuron-level LM Repair for Code Generation
Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang

MINT is an efficient and reliable technique for repairing large language models in software engineering. It resolves model failures by patching merely one or two neurons, outperforming state-of-the-art methods on coding tasks.


Towards Top-Down Automated Development in Limited Scopes: A Neuro-Symbolic Framework from Expressibles to Executables
Jian Gu, Harald C. Gall

Deep code generation integrates neural models into software engineering to generate code, but it requires enhancements for project-level tasks. We suggest a taxonomy of code data and introduce a semantic pyramid framework to improve the software development process.

📝 Publications

Software Engineering for Deep Learning (SE4AI)

Deep Learning for Software Engineering (AI4SE)

“Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they’ll be doing so on digital timescales.” – Nick Bostrom