Haozhu Wang 

I'm an AI researcher working on foundation models at AWS AI. I earned my Ph.D. in Electrical and Computer Engineering (Machine Learning track) from the University of Michigan, Ann Arbor. My research focuses on LLMs, reinforcement learning, alignment, agents, and AI for science. I'm open to collaborating with aspiring researchers and frequently mentor students on AI research projects. If we share research interests and you would like to discuss collaboration opportunities, please feel free to contact me via email!

Email / LinkedIn / Google Scholar / GitHub

News

[July-2024] Our work OptoGPT, a foundation model for optical inverse design, has been published as a cover article in Opto-Electronic Advances (IF: 14.1). The work has been reported in over 15 news outlets [News1] [News2] [News3].

[Dec-2023] Our work Graph Neural Prompting with Large Language Models has been accepted by AAAI-24.

[Dec-2023] 4 workshop papers accepted by NeurIPS'23.

[Aug-2022] Our paper Dynamic prediction of work status for workers with occupational injuries: assessing the value of longitudinal observations has been published in the Journal of the American Medical Informatics Association!

[Aug-2022] Our AWS Machine Learning blog on real-time fraud detection is online.

[July-2022] I served as a session chair at ICML 2022.

[Mar-2022] I joined AWS as a Research Scientist!

Selected Publications

The full list of my publications can be found on Google Scholar.

Graph Neural Prompting with Large Language Models [large language models, knowledge graphs]
Yijun Tian, Huan Song, Zichen Wang, Haozhu Wang, Ziqing Hu, Fang Wang, Nitesh V. Chawla, Panpan Xu
AAAI, 2024
arxiv

A knowledge graph prompting method for large language models to improve their commonsense and biomedical reasoning performance.

A Review of Reinforcement Learning for Natural Language Processing, and Applications in Healthcare [reinforcement learning, LLMs]
Ying Liu, Haozhu Wang, Huixue Zhou, Mingchen Li, Yu Hou, Sicheng Zhou, Fang Wang, Rama Hoetzlein, Rui Zhang
under review, 2023
arxiv

A comprehensive review of reinforcement learning applied to NLP and its healthcare applications.

T3GDT: Three-Tier Tokens to Guide Decision Transformer for Offline Meta Reinforcement Learning [reinforcement learning, foundation model]
Zhe Wang, Haozhu Wang, Yanjun Qi
NeurIPS Workshop on Robot Learning, 2023
paper

We developed a hierarchical prompting method for transformer-based reinforcement learning models that enables efficient few-shot policy adaptation.

Latent skill discovery for chain-of-thought reasoning [LLMs, foundation model]
Zifan Xu, Haozhu Wang, Dmitriy Bespalov, Yanjun Qi
NeurIPS Workshop on Robustness of Zero/Few-shot Learning in Foundation Models, 2023
paper

We developed an unsupervised method for discovering latent skills to guide demonstration selection for in-context learning with large language models.

Reinforcement Learning-Enabled Environmentally Friendly and Multi-functional Chrome-looking Plating [AI for science, reinforcement learning]
Taigao Ma, Anwesha Saha, Haozhu Wang, L. Jay Guo
NeurIPS AI for Science Workshop, 2023, [Oral, selection rate: 10/150=6.7%]
OpenReview

Using reinforcement learning, we designed and fabricated two multilayer thin film structures that mimic the visual appearance of decorative chrome plating, serving as an environmentally friendly and multi-functional replacement.

OptoGPT: A Foundation Model for Inverse Design in Optical Multilayer Thin Film Structures [AI for science, foundation model]
Taigao Ma, Haozhu Wang, L. Jay Guo
under review, 2023
arXiv

We developed OptoGPT, the first foundation model for the inverse design of optical thin film structures. After training on a large dataset of 10 million optical thin film designs, OptoGPT demonstrates remarkable capabilities, including: 1) autonomous global design exploration, 2) efficient designs for various tasks, 3) the ability to output diverse designs, and 4) seamless integration of user-defined constraints. We believe OptoGPT is a major leap towards accelerating optical science with foundation models.

Dynamic prediction of work status for workers with occupational injuries: assessing the value of longitudinal observations [ML for healthcare]
Erkin Ötleş, Jon Seymour, Haozhu Wang, Brian T Denton
Journal of the American Medical Informatics Association, 2022
paper

We developed a forecasting model to predict return-to-work after occupational injuries based on longitudinal claim data. The model may allow case managers to better allocate medical resources and help speed up patients' recovery process.

NEUTRON: Neural Particle Swarm Optimization for Material-Aware Inverse Design of Structural Color [AI for science]
Haozhu Wang, L. Jay Guo
iScience, 2022
paper/ code

We propose a hybrid machine learning and optimization method that combines mixture density networks and particle swarm optimization for accurate and efficient structural color inverse design.

Benchmarking Deep Learning-based Models on Nanophotonic Inverse Design Problems [AI for science]
Taigao Ma, Mustafa Tobah, Haozhu Wang*, L. Jay Guo*
Opto-Electronic Science, 2022 (*: correspondence)
paper

We provide extensive benchmarking results on the accuracy, diversity, and robustness of commonly used deep learning models for nanophotonic inverse design. The findings can help researchers select models that best suit their design problems.

Automated Optical Multi-layer Design via Deep Reinforcement Learning [AI for science, reinforcement learning]
Haozhu Wang, Zeyu Zheng, Chengang Ji, L. Jay Guo
Machine Learning: Science and Technology, 2021
paper/ code/ abridged NeurIPS workshop version/ DOI

We trained a novel sequence generation network with Proximal Policy Optimization to automatically discover near-optimal optical designs.

Learning Credible Models [Trustworthy ML]
Jiaxuan Wang, Jeeheh Oh, Haozhu Wang, Jenna Wiens
KDD, 2018
paper/ code

An expert-yielded-estimates regularizer for incorporating expert knowledge into linear models.

Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Learning [model compression]
Dejiao Zhang*, Haozhu Wang*, Mario A.T. Figueiredo, Laura Balzano
ICLR, 2018 (*: equal contribution)
paper/ code

Group-ordered-weighted lasso (GrOWL) regularization for deep model compression.

Teaching

Mentored more than 30 graduate and undergraduate students on AI research projects.

EECS 442 Computer Vision (F20), Prof. Andrew Owens
Graduate Student Instructor

EECS 504 Foundations of Computer Vision (W20), Prof. Andrew Owens
Graduate Student Instructor

EECS 545 Machine Learning (F17), Prof. Mert Pilanci
Graduate Student Instructor

Service
Conference review: ICLR'22-23, ICML'22-23, NeurIPS'20-22, AutoML-Conf'22, MLHC'18-22, AMIA Annual Symposium'20-22
Journal review: Journal of Physics Communications, AIP Advances
Workshop review: NeurIPS'20-22 Meta-Learning Workshop, NeurIPS'21-22 Machine Learning and the Physical Sciences Workshop, ICML'22 Pre-training Workshop


The source code of this website is from Jon Barron.

(last update: 2024)
