3D Conceptual Design Using Deep Learning


Abstract

This article proposes a data-driven methodology for fast design support that generates novel designs spanning multiple object categories. The methodology implements two state-of-the-art Variational Autoencoders operating on 3D model data, together with a self-defined loss function. The loss function, which incorporates the outputs of individual layers in the autoencoder, combines latent features drawn from different 3D model categories. The article gives a detailed explanation of how the Princeton ModelNet40 database, a comprehensive, clean collection of 3D CAD models of objects, is used. After the original 3D mesh files are converted to voxel and point-cloud representations, the autoencoder is fed data of a consistent dimension. The novelty lies in leveraging deep learning as an efficient latent-feature extractor to explore unknown design spaces. The output is expected to show a clear, smooth interpolation between models from different categories, generating new shapes. The article covers (1) the theoretical ideas, (2) the implementation of the Variational Autoencoder to extract implicit features from input shapes, (3) the output shapes produced during training on selected domains of both 3D voxel data and 3D point-cloud data, and (4) conclusions and future work toward more ambitious goals.
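The pipeline the abstract describes — converting shapes to a fixed-dimension representation, then blending latent codes across categories — can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: `voxelize` stands in for the mesh-to-voxel pre-processing step, and `interpolate_latents` stands in for the cross-category latent blending; the 64-dimensional latent vectors are hypothetical placeholders for what a trained VAE encoder would output.

```python
import numpy as np

def voxelize(points, grid=32):
    """Convert an (N, 3) point cloud to a binary occupancy grid.

    Illustrative pre-processing (an assumption, not the paper's exact code):
    points are normalized into the unit cube, then binned into grid^3 voxels.
    """
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scaled = (points - mins) / np.maximum(maxs - mins, 1e-9)
    idx = np.clip((scaled * grid).astype(int), 0, grid - 1)
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vox

def interpolate_latents(z_a, z_b, steps=5):
    """Linear interpolation between two latent codes — a simple stand-in
    for the cross-category feature blending the abstract describes."""
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

# Toy usage: a random "shape" voxelized, then two hypothetical latent
# codes (as a real VAE encoder would produce) blended step by step.
rng = np.random.default_rng(0)
vox_a = voxelize(rng.random((500, 3)))
z_chair, z_table = rng.standard_normal(64), rng.standard_normal(64)
blend = interpolate_latents(z_chair, z_table, steps=5)
```

Decoding each intermediate latent code would then yield the smooth shape transition between categories that the article aims for.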

Citation (APA)

Yang, Z., Jiang, H., & Zou, L. (2020). 3D Conceptual Design Using Deep Learning. In Advances in Intelligent Systems and Computing (Vol. 943, pp. 16–26). Springer Verlag. https://doi.org/10.1007/978-3-030-17795-9_2
