Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Zhenyu Hou
PhD student at Tsinghua University working on language models, agents, and reinforcement learning.
CV
Curriculum vitae of Zhenyu Hou: education, research experience, publications, talks, and teaching.
Publications
Selected papers on language models, agents, reinforcement learning, and graph learning.
Talks and presentations
Talks, invited presentations, and research sharing by Zhenyu Hou.
Blog posts
Chronological archive of blog posts and updates.
Publications
Self-Supervised Attributed Graph Learning: A Comprehensive Review
Published in IEEE Transactions on Knowledge and Data Engineering (TKDE), 2021
A review of self-supervised learning methods on attributed graphs.
Recommended citation: Xie, Y., Xu, Z., Ji, J., Wang, Z., Wang, S., Liu, J., Ding, T., Hou, Z., and Tang, J. (2021). "Self-Supervised Attributed Graph Learning: A Comprehensive Review." IEEE TKDE.
Download Paper
Automated Unsupervised Graph Representation Learning
Published in IEEE Transactions on Knowledge and Data Engineering (TKDE), 2021
AutoProNE automatically searches for optimal graph filters to enhance existing graph representations.
Recommended citation: Hou, Z., Cen, Y., Dong, Y., Zhang, J., and Tang, J. (2021). "Automated Unsupervised Graph Representation Learning." IEEE TKDE.
Download Paper
GraphMAE: Self-Supervised Masked Graph Autoencoders
Published in ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022
KDD 2022 paper proposing masked autoencoding for self-supervised graph representation learning.
Recommended citation: Hou, Z., Liu, X., Cen, Y., Dong, Y., Yang, H., Wang, C., and Tang, J. (2022). "GraphMAE: Self-Supervised Masked Graph Autoencoders." KDD.
Download Paper
MTDiag: An Effective Multi-Task Framework for Automatic Diagnosis
Published in AAAI Conference on Artificial Intelligence (AAAI), 2023
A multi-task framework reformulating symptom checking as multi-label classification for automatic diagnosis.
Recommended citation: Hou, Z., Cen, Y., Liu, Z., Wu, D., Wang, B., Li, X., Hong, L., and Tang, J. (2023). "MTDiag: An Effective Multi-Task Framework for Automatic Diagnosis." AAAI.
Download Paper
CogDL: A Comprehensive Library for Graph Deep Learning
Published in ACM Web Conference (WWW), 2023
A unified and efficient library for graph deep learning research and applications.
Recommended citation: Cen, Y., Hou, Z., Wang, Y., Chen, Q., Luo, Y., Yu, Z., et al. (2023). "CogDL: A Comprehensive Library for Graph Deep Learning." WWW.
Download Paper
GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner
Published in ACM Web Conference (WWW), 2023
WWW 2023 paper extending masked graph autoencoding with enhanced decoding for stronger self-supervised graph representation learning.
Recommended citation: Hou, Z., et al. (2023). "GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner." WWW.
Download Paper
Harnessing Large Language Models for Hyperedge Prediction
Published in AAAI Conference on Artificial Intelligence (AAAI), 2024
Combines large language models with hypergraph learning for hyperedge prediction tasks.
Recommended citation: Hou, Z., Fang, Y., Liu, Z., Cen, Y., Zheng, V., and Tang, J. (2024). "Harnessing Large Language Models for Hyperedge Prediction." AAAI.
Download Paper
ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback
Published in arXiv, 2024
An RLHF pipeline for ChatGLM with strategies for stable large-scale training and alignment.
Recommended citation: Hou, Z., Niu, Y., Du, Z., Zhang, X., Liu, X., Zeng, A., Zheng, Q., Huang, M., Wang, H., Tang, J., and Dong, Y. (2024). "ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback." arXiv:2404.00934.
Download Paper
ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools
Published in arXiv, 2024
A technical report on the ChatGLM model family, from GLM-130B to GLM-4.
Recommended citation: Zeng, A., Liu, X., Du, Z., Wang, Z., Lai, H., Hou, Z., et al. (2024). "ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools." arXiv:2406.12793.
Download Paper
Does RLHF Scale? Exploring the Impacts From Data, Model, and Method
Published in International Conference on Learning Representations (ICLR), 2025
A systematic study of RLHF scaling properties across data, model size, and inference budget.
Recommended citation: Hou, Z., Du, P., Niu, Y., Du, Z., Zeng, A., Liu, X., Huang, M., Wang, H., Tang, J., and Dong, Y. (2025). "Does RLHF Scale? Exploring the Impacts From Data, Model, and Method." ICLR.
Download Paper
T1: Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling
Published in International Conference on Machine Learning (ICML), 2025
T1 studies reasoning improvements via reinforcement learning and inference-time scaling.
Recommended citation: Hu, W., Xu, C., Hou, Z., et al. (2025). "T1: Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling." ICML.
Download Paper
GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
Published in arXiv, 2025
GLM-4.5 introduces ARC-oriented large language models for agentic behavior, reasoning, and coding.
Recommended citation: Zhipu AI et al. (2025). "GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models." arXiv:2508.06471.
Download Paper
GLM-5: from Vibe Coding to Agentic Engineering
Published in arXiv, 2026
A technical report for GLM-5, focused on agentic engineering, long-horizon reasoning, and coding.
Recommended citation: GLM-5 Team, Zeng, A., Lv, X., Hou, Z., Du, Z., Zheng, Q., Chen, B., Yin, D., Ge, C., et al. (2026). "GLM-5: from Vibe Coding to Agentic Engineering." arXiv:2602.15763.
Download Paper
