Clothed human body reconstruction is an important but challenging task for many applications, including augmented reality, virtual reality, and the metaverse. The use of deep implicit functions has sparked a new era of image-based 3D clothed human reconstruction. The vast majority of works locate the implicit surface by regressing a deterministic per-point implicit value. However, should all points, such as near-surface points and floating points far from the surface, be treated equally?
In this paper, we replace the implicit value with an adaptive uncertainty distribution, to differentiate points located in different distance fields. This simple "value-to-distribution" transition leads to significant improvements across almost all baseline approaches. Qualitative results show that models trained with our uncertainty distribution loss recover more detailed wrinkles and more human-like limbs.
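The value-to-distribution idea can be illustrated with a Gaussian negative log-likelihood objective, where the network predicts a per-point mean and log-variance instead of a single occupancy value. The sketch below is a minimal, hypothetical illustration in NumPy (the inputs and the exact loss form are assumptions for demonstration, not the paper's exact formulation):

```python
import numpy as np

def gaussian_nll_loss(mu, log_var, target):
    """Negative log-likelihood of the target occupancy under a
    per-point Gaussian N(mu, exp(log_var)). Points the network is
    uncertain about (large variance) are penalised less for the same
    error, so near-surface and floating points are no longer treated
    equally."""
    var = np.exp(log_var)
    return np.mean(0.5 * (log_var + (target - mu) ** 2 / var))

# Hypothetical example: three query points with ground-truth occupancy.
mu      = np.array([0.9, 0.4, 0.1])    # predicted mean occupancy
log_var = np.array([-2.0, 0.0, -2.0])  # predicted log-variance (uncertainty)
target  = np.array([1.0, 0.0, 0.0])    # ground-truth occupancy

loss = gaussian_nll_loss(mu, log_var, target)
```

Note how the middle point, which the network flags as uncertain (log-variance 0), contributes a smaller penalty for its 0.4 error than a confident point would.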
In this paper, we propose novel neural implicit distribution fields to enhance the performance of clothed human reconstruction.
Rather than regressing the implicit field of a clothed mesh directly, we sample the implicit value (coarse occupancy) from a distribution predicted by a neural network, which endows the model with both accuracy and uncertainty awareness. Subsequently, the Occupancy Rectifier, an additional MLP, refines the coarse occupancy field. The final clothed mesh is obtained by extracting the 0.5 level set of the rectified occupancy field.
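The pipeline above can be sketched end to end: predict a per-point occupancy distribution, sample a coarse occupancy from it, refine it with a rectifier, and threshold at 0.5. Everything below is a toy stand-in (a radial field in place of the image-conditioned network, a clamp in place of the learned rectifier MLP, and simple thresholding in place of marching-cubes extraction):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_distribution(points):
    """Stand-in for the distribution network: maps each query point to a
    mean and standard deviation of its occupancy. Here a hypothetical
    sphere of radius 0.5 plays the role of the body surface; the real
    network conditions on pixel-aligned image features."""
    dist = np.linalg.norm(points, axis=-1)
    mu = 1.0 / (1.0 + np.exp(10.0 * (dist - 0.5)))  # ~1 inside, ~0 outside
    sigma = 0.05 * np.exp(-5.0 * np.abs(dist - 0.5))  # uncertain near surface
    return mu, sigma

def occupancy_rectifier(coarse_occ):
    """Stand-in for the Occupancy Rectifier MLP: refines the sampled
    coarse occupancy (here a simple clamp; the real module is learned)."""
    return np.clip(coarse_occ, 0.0, 1.0)

points = rng.uniform(-1.0, 1.0, size=(1000, 3))  # query points in space
mu, sigma = predict_distribution(points)
coarse = rng.normal(mu, sigma)          # sample occupancy from N(mu, sigma)
occ = occupancy_rectifier(coarse)       # rectified occupancy field
inside = occ > 0.5                      # the 0.5 level set bounds the mesh
```

In a full reconstruction system, the `inside`/`outside` decision would be evaluated on a dense grid and passed to marching cubes to extract the mesh at the 0.5 level set.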
@inproceedings{yang2023dif,
  author    = {Xueting Yang and Yihao Luo and Yuliang Xiu and Wei Wang and Hao Xu and Zhaoxin Fan},
  title     = {{D-IF}: Uncertainty-aware Human Digitization via Implicit Distribution Field},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023},
}