ShapeFormer: Transformer-based Shape Completion via Sparse Representation

Shapes are typically acquired with cameras, which at best capture partial information from the visible parts of objects. Therefore, researchers have intensively studied the problem of surface completion. One promising approach to learning high-quality surface completion is the deep implicit function (DIF).

Geometric shapes – artistic impression. Image credit: PIRO4D via Pixabay, free licence

A recent study posted on arXiv.org proposes a novel DIF representation in which a short sequence of discrete variables compactly encodes a close approximation of a 3D shape.

Researchers present ShapeFormer, a transformer-based autoregressive model that learns a distribution over possible shape completions. ShapeFormer produces diverse, high-quality completions for various shape types, including human bodies.
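The core mechanism can be pictured with a short, hypothetical PyTorch sketch (not the authors' released code): a small transformer predicts the next discrete shape token given the tokens seen so far, and repeatedly sampling from its output distribution yields different plausible completions of the same partial input. All class names, layer sizes, and the sampling helper below are illustrative assumptions.

```python
# Minimal sketch (assumed names and sizes) of autoregressive completion over
# discrete shape tokens: the tokens of the partial observation form the prefix,
# and the transformer samples the remaining tokens one at a time.
import torch
import torch.nn as nn

class TinyShapeTransformer(nn.Module):
    def __init__(self, vocab_size=1024, dim=256, layers=4, heads=8, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Embedding(max_len, dim)
        block = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):  # tokens: (B, T) discrete shape codes
        T = tokens.shape[1]
        x = self.tok_emb(tokens) + self.pos_emb(torch.arange(T, device=tokens.device))
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        return self.head(self.blocks(x, mask=causal))  # (B, T, vocab_size) logits

@torch.no_grad()
def sample_completion(model, partial_tokens, num_new, temperature=1.0):
    """Append sampled tokens after the tokens encoding the partial input."""
    model.eval()
    seq = partial_tokens.clone()
    for _ in range(num_new):
        logits = model(seq)[:, -1] / temperature
        nxt = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        seq = torch.cat([seq, nxt], dim=1)
    return seq
```

Because each step samples from a predicted distribution rather than taking the single most likely token, running the sampler several times on the same partial input yields distinct completions, which is where the diversity discussed below comes from.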

The method achieves state-of-the-art results in both completion quality and diversity.

We present ShapeFormer, a transformer-based network that produces a distribution of object completions, conditioned on incomplete, and possibly noisy, point clouds. The resultant distribution can then be sampled to generate likely completions, each exhibiting plausible shape details while being faithful to the input. To facilitate the use of transformers for 3D, we introduce a compact 3D representation, vector quantized deep implicit function, that utilizes spatial sparsity to represent a close approximation of a 3D shape by a short sequence of discrete variables. Experiments demonstrate that ShapeFormer outperforms prior art for shape completion from ambiguous partial inputs in terms of both completion quality and diversity. We also show that our approach effectively handles a variety of shape types, incomplete patterns, and real-world scans.
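The vector quantized deep implicit function mentioned in the abstract rests on two ingredients: spatial sparsity (only non-empty regions contribute) and discretization (each kept feature is replaced by the index of its nearest codebook entry). The sketch below illustrates that idea under assumed shapes and names; the paper's actual encoder, codebook training, and serialization details may differ.

```python
# Hedged sketch: turn a coarse feature grid into a short sequence of
# (cell position, codebook index) pairs. The encoder producing `feat_grid`,
# the occupancy threshold, and all sizes are assumptions for illustration.
import torch

def sparse_quantize(feat_grid, occupancy, codebook, occ_threshold=0.5):
    """
    feat_grid: (D, D, D, C) per-cell features from some local encoder (assumed)
    occupancy: (D, D, D) soft occupancy used to decide which cells to keep
    codebook:  (K, C) learned embedding vectors
    Returns flattened cell positions and code indices in a fixed scan order.
    """
    keep = occupancy > occ_threshold                 # spatial sparsity
    positions = keep.nonzero(as_tuple=False)         # (N, 3) kept cell coords
    feats = feat_grid[keep]                          # (N, C)
    dists = torch.cdist(feats, codebook)             # (N, K) nearest-codebook lookup
    codes = dists.argmin(dim=1)                      # (N,) discrete variables
    D = occupancy.shape[0]
    flat = positions[:, 0] * D * D + positions[:, 1] * D + positions[:, 2]
    order = flat.argsort()                           # serialize in raster-scan order
    return flat[order], codes[order]

# Example with random data: only a small fraction of cells survive, so the
# resulting token sequence is far shorter than the full D^3 grid.
grid = torch.randn(16, 16, 16, 8)
occ = torch.rand(16, 16, 16) * (torch.rand(16, 16, 16) < 0.05)
book = torch.randn(512, 8)
pos_seq, code_seq = sparse_quantize(grid, occ, book)
print(pos_seq.shape, code_seq.shape)
```

Only occupied cells contribute tokens, so the sequence stays short enough for a transformer to model, and each token is a discrete codebook index rather than a continuous feature, matching the abstract's description of a close shape approximation by a short sequence of discrete variables.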

Research paper: Yan, X., Lin, L., Mitra, N. J., Lischinski, D., Cohen-Or, D., and Huang, H., “ShapeFormer: Transformer-based Shape Completion via Sparse Representation”, 2022. Link to the article: https://arxiv.org/abs/2201.10326
Link to the project page: https://shapeformer.github.io/
