Zhenyu Hou
I am Zhenyu Hou, a PhD student in the Department of Computer Science and Technology at Tsinghua University. I am a member of KEG and am advised by Prof. Yuxiao Dong and Prof. Jie Tang.
I work at the intersection of research and engineering on language models, agents, and reinforcement learning. My recent work centers on post-training for reasoning and alignment, and on the GLM-4.5 and GLM-5 model series.
Current Focus
- Post-training methods that improve reasoning quality and reliability in real-world settings.
- Agentic capability design and evaluation for foundation models.
- Alignment and safety tuning with practical deployment constraints.
Selected Publications
- GLM-5: From Vibe Coding to Agentic Engineering (arXiv 2026, Tech Leads) [arXiv]
- GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models (arXiv 2025) [arXiv]
- Does RLHF Scale? Exploring the Impacts From Data, Model, and Method (ICLR 2025) [arXiv]
- T1: Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling (ICML 2025) [arXiv]
- ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback (arXiv 2024) [arXiv]
- GraphMAE: Self-Supervised Masked Graph Autoencoders (KDD 2022) [arXiv] [GitHub]
- GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner (WWW 2023) [arXiv] [GitHub]
See the full list on the Publications page.
Education
- PhD student, Department of Computer Science and Technology, Tsinghua University (September 2021 – present)
- Bachelor’s degree, Department of Computer Science and Technology, Tsinghua University (September 2017 – July 2021)
Recent Updates
- 2026: GLM-5 released as our latest step from vibe coding toward agentic engineering.
- 2025: Does RLHF Scale? accepted at ICLR 2025; T1 accepted at ICML 2025; GLM-4.5 released.
- 2024: ChatGLM-RLHF and Harnessing LLMs for Hyperedge Prediction (AAAI 2024) published.
- 2022–2023: GraphMAE (KDD 2022), GraphMAE2 (WWW 2023), MTDiag (AAAI 2023), and CogDL (WWW 2023) published.
