NerVE: Neural Volumetric Edges for Parametric Curve
Extraction from Point Cloud
CVPR 2023

Abstract


Figure 1. Our NerVE compared with previous methods for parametric curve extraction from point clouds.

Extracting parametric edge curves from point clouds is a fundamental problem in 3D vision and geometry processing. Existing approaches mainly rely on keypoint detection, a challenging procedure that tends to generate noisy output, making the subsequent edge extraction error-prone. To address this issue, we propose to directly detect structured edges to circumvent the limitations of previous point-wise methods. We achieve this goal by presenting NerVE, a novel neural volumetric edge representation that can be easily learned through a volumetric learning framework. NerVE can be seamlessly converted to a versatile piece-wise linear (PWL) curve representation, enabling a unified strategy for learning all types of free-form curves. Furthermore, as NerVE encodes rich structural information, we show that edge extraction based on NerVE can be reduced to a simple graph search problem. After converting NerVE to the PWL representation, parametric curves can be obtained via off-the-shelf spline fitting algorithms. We evaluate our method on the challenging ABC dataset and show that a simple network based on NerVE already outperforms previous state-of-the-art methods by a large margin.


Methodology


Figure 2. Definition of the three attributes in NerVE (a) and the illustration of PWL curve extraction from NerVE (b).

We present a new paradigm for generating accurate parametric curves based on our neural volumetric edge representation, named NerVE. We first introduce the definition of NerVE and then propose a dedicated network to learn it, which supports direct estimation of structured 3D edges. As shown in Figure 2 (a), each cube in NerVE carries three attributes: 1) the edge occupancy o ∈ {0, 1}, a binary value indicating whether the cube contains edges; 2) the edge orientations e_i ∈ {0, 1}, i = 1, 2, 3, three binary values indicating whether the cube connects to its adjacent cubes to form pieces of edges. Since a cube has 6 faces shared with its neighbors, we only define the statuses on the left, bottom, and back faces of each cube; 3) the edge point position p, which defines an edge point inside the cube. With these three attributes, we can discretize continuous curves into a unified and regular volumetric representation that can be learned directly by neural networks. As shown in Figure 2 (b), edges can be easily extracted from the defined or learned NerVE cubes. These edges compose PWL curves that not only provide precise edge positions but also carry valuable topology information, which eases the extraction of parametric curves.
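To make the three cube attributes concrete, below is a minimal sketch of how a NerVE grid could be stored and converted to PWL segments. The class name NerveGrid, the array layout, and the mapping of the three orientation bits to the left/bottom/back neighbors are illustrative assumptions for this page, not the released implementation.

# Minimal sketch of a NerVE cube grid and its conversion to PWL edge segments.
# The names and the orientation-bit convention below are our own assumptions.
import numpy as np


class NerveGrid:
    """A K x K x K grid of NerVE cubes with the three per-cube attributes."""

    def __init__(self, resolution: int):
        K = resolution
        self.occupancy = np.zeros((K, K, K), dtype=bool)       # o in {0, 1}
        self.orientation = np.zeros((K, K, K, 3), dtype=bool)  # e_i on left / bottom / back faces
        self.point = np.zeros((K, K, K, 3), dtype=np.float32)  # edge point p inside the cube

    def to_pwl_edges(self):
        """Connect edge points of adjacent occupied cubes into PWL segments.

        Orientation bit i (assumed here) says the cube links to its neighbor
        across the left (x-1), bottom (y-1), or back (z-1) face, so every
        segment is emitted exactly once.
        """
        segments = []
        offsets = [(-1, 0, 0), (0, -1, 0), (0, 0, -1)]
        for idx in np.argwhere(self.occupancy):
            x, y, z = idx
            for i, (dx, dy, dz) in enumerate(offsets):
                nx, ny, nz = x + dx, y + dy, z + dz
                # Skip links that leave the grid or point to an unoccupied cube.
                if nx < 0 or ny < 0 or nz < 0:
                    continue
                if self.orientation[x, y, z, i] and self.occupancy[nx, ny, nz]:
                    segments.append((self.point[x, y, z], self.point[nx, ny, nz]))
        return segments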



Figure 3. Overview of our proposed network for learning NerVE.

Given a point cloud, we first use a simplified PointNet++ module and a 3D CNN module to obtain a feature grid with the same resolution as the NerVE grid. Three individual decoders then process the cube features in the grid to predict the three NerVE attributes, i.e., edge occupancy, edge orientation, and edge point position. With the PWL curves extracted from the NerVE cubes, we apply a simple post-processing procedure to correct their topology errors. Finally, parametric curves are obtained with a straightforward graph search followed by spline fitting. The pipeline of our method is illustrated in Figure 3.
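As a rough illustration of the last stage, the sketch below treats the PWL edge points as graph nodes, walks each chain between endpoints or junctions, and fits a B-spline to every chain with SciPy. The helper names, the degree-based junction test, and the choice of splprep are our assumptions; the paper only states that a graph search and off-the-shelf spline fitting are used.

# Hedged sketch: PWL segments -> graph -> per-chain spline fitting.
from collections import defaultdict

import numpy as np
from scipy.interpolate import splprep


def fit_parametric_curves(segments, smoothing=0.0):
    """segments: list of (p_a, p_b) 3D point pairs from the NerVE PWL edges."""
    # Build an adjacency map keyed by rounded point coordinates.
    adj = defaultdict(set)
    key = lambda p: tuple(np.round(p, 6))
    for a, b in segments:
        adj[key(a)].add(key(b))
        adj[key(b)].add(key(a))

    # Nodes whose degree is not 2 are curve endpoints or junctions.
    terminals = [n for n, nbrs in adj.items() if len(nbrs) != 2]
    visited = set()
    curves = []

    for start in terminals:
        for nxt in adj[start]:
            if (start, nxt) in visited:
                continue
            # Walk along degree-2 nodes until the next terminal is reached.
            path = [start, nxt]
            visited.update({(start, nxt), (nxt, start)})
            while len(adj[path[-1]]) == 2:
                a, b = adj[path[-1]]
                step = b if a == path[-2] else a
                visited.update({(path[-1], step), (step, path[-1])})
                path.append(step)
            pts = np.asarray(path, dtype=float)
            k = min(3, len(pts) - 1)  # cubic when enough points, lower degree otherwise
            tck, _ = splprep(pts.T, s=smoothing, k=k)
            curves.append(tck)
    # Note: closed loops with no junction node are omitted in this sketch.
    return curves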


Qualitative Results


Figure 4. Qualitative comparisons with DEF on parametric curve extraction.


Figure 5. Qualitative comparisons with VCM, EC-NET and PIE-NET on edge point estimation.



Figure 6. More qualitative results of our method.

BibTeX


@inproceedings{zhu2023nerve,
  title={NerVE: Neural Volumetric Edges for Parametric Curve Extraction from Point Cloud},
  author={Zhu, Xiangyu and Du, Dong and Chen, Weikai and Zhao, Zhiyou and Nie, Yinyu and Han, Xiaoguang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={13601--13610},
  year={2023}
}
              

Acknowledgements

The website template was borrowed from Michaël Gharbi.