About

I'm a Research Scientist at Codec Avatars Lab, Meta, building photorealistic digital humans — from foundational avatar models to relighting, hair, faces, hands, and full-body articulation. My goal is to let anyone create a lifelike digital twin from a casual phone capture.

I received my Ph.D. from the Australian National University (advised by Hongdong Li and Yasuyuki Matsushita) and my B.Eng. from Shanghai Jiao Tong University.

Interested in interning with us? I'm always looking for motivated students working on 3D vision, neural rendering, or generative models — drop me an email!

News

  • NEW One paper (LCA) accepted at CVPR 2026!
  • One paper accepted at ICCV 2025 as an Oral Presentation!
  • Two papers accepted at SIGGRAPH 2025!
  • Three papers accepted at CVPR 2025, including Vid2Avatar-Pro!
  • One paper (URAvatar) accepted at SIGGRAPH Asia 2024!
  • One paper accepted at CVPR 2024 as an Oral Presentation!
  • Two papers accepted at CVPR 2023!

Experience

Research Scientist

Codec Avatars Lab, Meta
Pittsburgh, United States · Jul 2023 – Present

Research Intern

Tencent
Canberra, Australia · Feb 2023 – May 2023

Research Scientist Intern

Reality Labs Research, Meta
Pittsburgh, United States · Jun 2022 – Dec 2022

Publications

Large-scale Codec Avatars: The Unreasonable Effectiveness of Large-scale Avatar Pretraining

Junxuan Li, Rawal Khirodkar, et al.
CVPR 2026
LCA is a high-fidelity, full-body 3D avatar model that generalizes to world-scale populations via large-scale pre/post-training, achieving precise expressions, finger-level articulation, and emergent relightability.

Vid2Avatar-Pro: Authentic Avatar from Videos in the Wild via Universal Prior

Chen Guo*, Junxuan Li*, Yash Kant, Yaser Sheikh, Shunsuke Saito†, Chen Cao† (*, † equal contribution)
CVPR 2025
Authentic, animatable 3D avatars are generated from challenging videos captured "in the wild" by leveraging a universal prior model.

URAvatar: Universal Relightable Gaussian Codec Avatars

Junxuan Li, Chen Cao, Gabriel Schwartz, Rawal Khirodkar, Christian Richardt, Tomas Simon, Yaser Sheikh, and Shunsuke Saito
SIGGRAPH Asia 2024
We present URAvatar, a high-fidelity Universal prior for Relightable Avatars. You can create URAvatar (Your Avatar) from a phone scan.

Relightable Gaussian Codec Avatars

Shunsuke Saito, Gabriel Schwartz, Tomas Simon, Junxuan Li, and Giljoo Nam
CVPR 2024 Oral Presentation
We build high-fidelity relightable & animatable head avatars with 3D-consistent sub-millimeter details such as hair strands and pores on dynamic face sequences.

HairCUP: Hair Compositional Universal Prior for 3D Gaussian Avatars

Byungjun Kim, Shunsuke Saito, Giljoo Nam, Tomas Simon, Jason Saragih, Hanbyul Joo, Junxuan Li
ICCV 2025 Oral Presentation
A universal prior model, HairCUP, explicitly disentangles hair and face components to enable flexible hairstyle swapping and the creation of high-fidelity 3D head avatars from only a few images.

Relightable Full-body Gaussian Codec Avatars

Shaofei Wang, Tomas Simon, Igor Santesteban, Timur Bagautdinov, Junxuan Li, Vasu Agrawal, Fabian Prada, Shoou-I Yu, Pace Nalbone, Matt Gramlich, Roman Lubachersky, Chenglei Wu, Javier Romero, Jason Saragih, Michael Zollhoefer, Andreas Geiger, Siyu Tang, Shunsuke Saito
ACM Transactions on Graphics (SIGGRAPH 2025)
The first drivable, full-body avatar that can be realistically relighted is introduced, employing a new method to manage complex lighting effects on an articulated body.

3DGH: 3D Head Generation with Composable Hair and Face

Chengan He, Junxuan Li, Tobias Kirschstein, Artem Sevastopolsky, Shunsuke Saito, Qingyang Tan, Javier Romero, Chen Cao, Holly Rushmeier, Giljoo Nam
ACM Transactions on Graphics (SIGGRAPH 2025)
A novel generative model, 3DGH, creates a wide variety of 3D heads by freely composing different hair and face components.

FRESA: Feedforward Reconstruction of Personalized Skinned Avatars from Few Images

Rong Wang, Fabian Prada, Ziyan Wang, Zhongshi Jiang, Chengxiang Yin, Junxuan Li, Shunsuke Saito, Igor Santesteban, Javier Romero, Rohan Joshi, Hongdong Li, Jason Saragih, Yaser Sheikh
CVPR 2025
Personalized and animatable 3D avatars are reconstructed with a fast, feed-forward method from just a few images, removing the need for per-subject optimization.

LUCAS: Layered Universal Codec Avatars

Di Liu, Teng Deng, Giljoo Nam, Yu Rong, Stanislav Pidhorskyi, Junxuan Li, Jason Saragih, Dimitris N. Metaxas, Chen Cao
CVPR 2025
High-fidelity, real-time 3D avatars efficient enough for mobile devices are created using a layered model that separates the hair and face.

MEGANE: Morphable Eyeglass and Avatar Network

Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong Li, and Jason Saragih
CVPR 2023
A 3D compositional morphable model of eyeglasses with a hybrid surface-volumetric representation, enabling geometry modification, lens insertion, frame deformation, and relightable rendering with realistic face-glasses shadow interactions.

In-the-wild Inverse Rendering with a Flashlight

Ziang Cheng, Junxuan Li, Hongdong Li
CVPR 2023
A practical in-the-wild inverse rendering method that recovers scene geometry and reflectance from smartphone images by exploiting the built-in flashlight as a minimally controlled light source.

Self-calibrating Photometric Stereo by Neural Inverse Rendering

Junxuan Li, and Hongdong Li
ECCV 2022
A self-supervised neural network for uncalibrated photometric stereo that jointly estimates surface shape and light sources without supervision.

Neural Reflectance for Shape Recovery with Shadow Handling

Junxuan Li, and Hongdong Li
CVPR 2022 Oral Presentation
Self-supervised shape and material estimation with explicit shadow prediction, achieving state-of-the-art surface normal accuracy an order of magnitude faster than prior methods.

Neural Plenoptic Sampling: Learning Light-field from Thousands of Imaginary Eyes

Junxuan Li, Yujiao Shi, and Hongdong Li
ACCV 2022
A neural plenoptic function representation with proxy depth and color-blending, achieving higher PSNR and over 10x faster training/testing than prior methods.

Lighting, Reflectance and Geometry Estimation from 360° Panoramic Stereo

Junxuan Li, Hongdong Li, and Yasuyuki Matsushita
CVPR 2021
Joint estimation of spatially-varying lighting, reflectance, and geometry from 360° stereo images, enabling AR applications such as mirror-object insertion.

Learning to Minify Photometric Stereo

Junxuan Li, Antonio Robles-Kelly, Shaodi You, and Yasuyuki Matsushita
CVPR 2019
Dramatically reduces the demands of photometric stereo by cutting the number of input images, automatically learning the critical and informative illuminations required.

A Frequency Domain Neural Network for Fast Image Super-resolution

Junxuan Li, Shaodi You, and Antonio Robles-Kelly
IJCNN 2018 Oral Presentation
A frequency domain neural network for image super-resolution that leverages the convolution theorem, achieving one to two orders of magnitude speedup over prior methods.

Stereo Super-resolution via a Deep Convolutional Network

Junxuan Li, Shaodi You, and Antonio Robles-Kelly
DICTA 2017
A deep stereo super-resolution network that efficiently combines structural information across large regions via residual learning.