Distribution-Aligned Diffusion for Human Mesh Recovery
Lin Geng Foo¹, Hossein Rahmani², Jun Liu¹#
¹Singapore University of Technology and Design
²Lancaster University
#corresponding author


Recovering a 3D human mesh from a single RGB image is a challenging task due to depth ambiguity and self-occlusion, which result in a high degree of uncertainty. Meanwhile, diffusion models have recently seen much success in generating high-quality outputs by progressively denoising noisy inputs. Inspired by this capability, we explore a diffusion-based approach to human mesh recovery and propose a Human Mesh Diffusion (HMDiff) framework, which frames mesh recovery as a reverse diffusion process. We further propose a Distribution Alignment Technique (DAT) that injects input-specific distribution information into the diffusion process, providing prior knowledge that simplifies the mesh recovery task. Our method achieves state-of-the-art performance on three widely used datasets.
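To make the high-level idea concrete, the sketch below shows a reverse-diffusion loop over mesh vertices with a distribution-alignment step folded in. This is a minimal illustrative sketch, not the paper's implementation: the names `denoise_step`, `align_to_prior`, `NUM_STEPS`, and the toy denoiser and prior are all our assumptions; in HMDiff the denoiser would be a learned network conditioned on image features, and the prior distribution would be derived from the input image.

```python
import numpy as np

NUM_STEPS = 50
NUM_VERTICES = 6890  # vertex count of the SMPL body mesh
rng = np.random.default_rng(0)

def denoise_step(x_t, t):
    """Toy stand-in for a learned denoiser: shrinks the noisy sample.
    In HMDiff this would be a network conditioned on the input image."""
    return x_t * (1.0 - 1.0 / (t + 1))

def align_to_prior(x_t, prior_mean, strength=0.05):
    """Hypothetical distribution-alignment step (our assumption):
    pull the sample toward an input-specific prior, mimicking the idea
    of injecting distribution information into the diffusion process."""
    return x_t + strength * (prior_mean - x_t)

# Start from Gaussian noise over the 3D vertex coordinates.
x = rng.standard_normal((NUM_VERTICES, 3))
prior_mean = np.zeros((NUM_VERTICES, 3))  # stand-in for an image-derived prior

# Reverse diffusion: iteratively denoise while aligning to the prior.
for t in reversed(range(1, NUM_STEPS + 1)):
    x = denoise_step(x, t)
    x = align_to_prior(x, prior_mean)

mesh = x  # final denoised mesh vertices
```

The key design point mirrored here is that alignment is applied at every diffusion step, so the prior guides the whole trajectory rather than only correcting the final output.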

Our method

Illustration of the proposed Human Mesh Diffusion (HMDiff) framework with the Distribution Alignment Technique (DAT).

Our Visualizations



Foo, L. G., Gong, J., Rahmani, H., & Liu, J. Distribution-Aligned Diffusion for Human Mesh Recovery. In ICCV, 2023.
(hosted on ICCV)



This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.