


Junxuan Li

Research Scientist, Reality Labs Research, Meta

I'm a Research Scientist at Meta Reality Labs Research, Codec Avatars Lab. Before that, I was a PhD student at the Australian National University, focusing on Computer Vision and Deep Learning, supervised by Hongdong Li and Yasuyuki Matsushita. I received my Master's degree from the Australian National University and my B.Eng. degree from Shanghai Jiao Tong University.

My main research topics are computer vision and machine learning. Specifically, I am currently doing research on 3D object reconstruction, scene reconstruction, and novel view synthesis via deep learning approaches.


Experience

Research Scientist. Codec Avatars Lab, Reality Labs Research, Meta.
Pittsburgh, United States. Jul. 2023 -- present

Working on photorealistic telepresence, Codec Avatars and AR/VR.


Research Intern. Tencent.
Canberra, Australia. Feb. 2023 -- May 2023

Worked on text-to-3D from diffusion priors.


Research Scientist Intern. Reality Labs Research, Meta.
Pittsburgh, United States. Jun. 2022 -- Dec. 2022

Worked on enabling photorealistic telepresence, digital human reconstruction, Codec Avatars, and AR/VR-related tasks using neural rendering.


Publications

Relightable Gaussian Codec Avatars
Shunsuke Saito, Gabriel Schwartz, Tomas Simon, Junxuan Li, and Giljoo Nam. Preprint 2023. [pdf] [Project Page]

We build high-fidelity relightable & animatable head avatars with 3D-consistent sub-millimeter details such as hair strands and pores on dynamic face sequences.

MEGANE: Morphable Eyeglass and Avatar Network
Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong Li, and Jason Saragih. CVPR 2023. [pdf] [Project Page]

We propose a 3D compositional morphable model of eyeglasses that accurately incorporates high-fidelity geometric and photometric interaction effects.
We employ a hybrid representation that combines surface geometry and a volumetric representation to enable modification of geometry, lens insertion and frame deformation.
Our model is relightable under point lights and natural illumination, and can synthesize cast shadows between the face and glasses.


In-the-wild Inverse Rendering with a Flashlight
Ziang Cheng, Junxuan Li, Hongdong Li. CVPR 2023. [pdf] [Project Page]

We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
We recover scene geometry and reflectance using only multi-view images captured by a smartphone.
The key idea is to exploit the smartphone's built-in flashlight as a minimally controlled light source and decompose images into two photometric components: a static appearance corresponding to the ambient flux, plus a dynamic reflection induced by the flashlight (a minimal sketch of this decomposition is shown below).
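As a rough illustration of this additive decomposition only (not the paper's actual multi-view reconstruction pipeline), the NumPy sketch below assumes a hypothetical flash-on/flash-off pair in linear RGB, in which case the flash-induced component can be isolated by subtraction.

```python
# Illustrative sketch, not the paper's method: under linear radiometry an image
# captured with the flashlight on is approximately the sum of a static ambient
# component and a dynamic flash-induced component. The flash-on/flash-off pair
# is an assumption made here for illustration.
import numpy as np

def split_flash_components(img_flash_on: np.ndarray, img_flash_off: np.ndarray):
    """Both inputs are linear-RGB images of the same view with shape (H, W, 3)."""
    ambient = img_flash_off                                        # static appearance (ambient flux)
    flash_only = np.clip(img_flash_on - img_flash_off, 0.0, None)  # dynamic flashlight reflection
    return ambient, flash_only
```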


Self-calibrating Photometric Stereo by Neural Inverse Rendering
Junxuan Li, and Hongdong Li. ECCV 2022. [pdf] [Project Page]

Introduced a self-supervised neural network for the uncalibrated photometric stereo problem.
The object surface shape and the light sources are jointly estimated by the neural network in an unsupervised manner (the image-formation model being inverted is sketched below).
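For context, a minimal sketch of the Lambertian image-formation model that photometric stereo inverts is given below; the function and variable names are hypothetical illustrations, not the paper's code.

```python
# Minimal sketch of the Lambertian image-formation model behind photometric
# stereo: intensity = albedo * max(0, normal · light_direction) * light_intensity.
# Illustrative only; not the paper's implementation.
import numpy as np

def render_lambertian(normals, albedo, light_dirs, light_intensities):
    """normals: (P, 3) unit surface normals; albedo: (P,) per-pixel albedo;
    light_dirs: (L, 3) unit lighting directions; light_intensities: (L,).
    Returns a (P, L) matrix of observations, one column per light."""
    shading = np.clip(normals @ light_dirs.T, 0.0, None)           # cosine shading with attached shadows
    return albedo[:, None] * shading * light_intensities[None, :]
```

In the uncalibrated (self-calibrating) setting, both the surface quantities and the lighting on the right-hand side are unknown and must be estimated jointly from the observed images.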


Neural Reflectance for Shape Recovery with Shadow Handling
Junxuan Li, and Hongdong Li. CVPR 2022. Oral presentation [pdf] [Project Page]

Formulated the shape estimation and material estimation in a self-supervised framework.
Explicitly predicted shadows to mitigate the errors.
Achieved state-of-the-art performance in surface normal estimation while being an order of magnitude faster than previous methods.
The proposed neural representation of reflectance also achieves higher quality in the object relighting task than prior works.


Neural Plenoptic Sampling: Learning Light-field from Thousands of Imaginary Eyes
Junxuan Li, Yujiao Shi, and Hongdong Li. ACCV 2022. [pdf]

Proposed a neural representation for the plenoptic function, which describes light rays observed from any given position in every viewing direction.
Proposed a proxy depth reconstruction and a color-blending network to achieve a good approximation of the complete plenoptic function.
The generated results are of high quality, with better PSNR than previous methods. The proposed method is also more than 10 times faster than prior works in training and testing.


Lighting, Reflectance and Geometry Estimation from 360° Panoramic Stereo
Junxuan Li, Hongdong Li, and Yasuyuki Matsushita. CVPR 2021. [pdf] [code]

Estimated high-definition spatially-varying lighting (environment maps), reflectance, and geometry of a complex indoor scene from a pair of 360° images.
Outperformed prior state-of-the-art methods in light estimation (7 dB better in PSNR) and geometry estimation.
Enabled many augmented reality applications such as mirror-object insertion.


Learning to Minify Photometric Stereo
Junxuan Li, Antonio Robles-Kelly, Shaodi You, and Yasuyuki Matsushita. CVPR 2019. [pdf] [code]

Dramatically decreased the data demands of the photometric stereo problem by reducing the number of input images.
Automatically learned the most critical and informative illuminations required at input.


A Frequency Domain Neural Network for Fast Image Super-resolution
Junxuan Li, Shaodi You, and Antonio Robles-Kelly. IJCNN 2018. Oral presentation. [pdf] [code]

A frequency domain neural network for image super-resolution.
Employs the convolution theorem to cast convolutions in the spatial domain as element-wise products in the frequency domain (see the sketch below).
The network is computationally efficient at test time, running one to two orders of magnitude faster than previous works.
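The identity the network builds on is easy to verify numerically; the snippet below is a generic NumPy check of the discrete (circular) convolution theorem, not code from the paper.

```python
# Generic check of the discrete convolution theorem (not the paper's network):
# circular convolution in the spatial domain equals an element-wise product of
# DFTs in the frequency domain.
import numpy as np

N = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(N)   # signal, e.g. one row of pixels
k = rng.standard_normal(N)   # filter, zero-padded to length N

direct = np.array([sum(x[m] * k[(n - m) % N] for m in range(N)) for n in range(N)])
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

print(np.allclose(direct, via_fft))  # True
```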


Stereo Super-resolution via a Deep Convolutional Network
Junxuan Li, Shaodi You, and Antonio Robles-Kelly. DICTA 2017. Oral presentation. [pdf]

A deep network for image super-resolution with stereo images as input. The network is designed to efficiently combine structural information across large regions of the image.
By learning the residual image, the network copes better with vanishing gradients and avoids gradient clipping operations.


Professional Activities

Reviewer.

Served as a conference reviewer for CVPR, ECCV, ICCV, ICLR, NeurIPS, 3DV, and IROS.
Served as a reviewer for TPAMI, IJCV, SIGGRAPH, and RA-L.


modified from © Yihui He 2017