GarmageNet
A Dataset and Scalable Representation for Generic Garment Modeling

¹Zhejiang Sci-Tech University,     ²Style3D Research,     ³Shanghai Jiao Tong University,     ⁴State Key Lab of CAD&CG, Zhejiang University

GarmageNet in Action: A rich and diverse array of garment assets generated by GarmageNet alongside their corresponding Garmages. GarmageNet is an advanced image-based, multi-modal garment generation framework trained on a large-scale, high-fidelity garment dataset. It enables the creation of intricate, multi-layered garments with standardized sewing patterns, precise stitching relationships, and well-defined geometry initializations. Seamlessly integrating with state-of-the-art cloth modeling software, GarmageNet supports efficient workflows for pattern editing, material refinement, and dynamic human-in-cloth animations, unlocking new possibilities for high-quality virtual garment design and simulation.

Abstract

High-fidelity garment modeling remains challenging due to the lack of large-scale, high-quality datasets and efficient representations capable of handling non-watertight, multi-layer geometries. In this work, we introduce $\textbf{\textit{Garmage}}$, a neural-network-and-CG-friendly garment representation that seamlessly encodes the accurate geometry and sewing pattern of complex multi-layered garments as a structured set of per-panel geometry images. As a dual-2D-3D representation, Garmage achieves an unprecedented integration of 2D image-based algorithms with 3D modeling workflows, enabling high-fidelity, non-watertight, multi-layered garment geometries with direct compatibility for industrial-grade simulations. Built upon this representation, we present $\textbf{\textit{GarmageNet}}$, a novel generation framework capable of producing detailed multi-layered garments with body-conforming initial geometries and intricate sewing patterns, based on user prompts or existing in-the-wild sewing patterns. Complementing this, we introduce a robust stitching algorithm that recovers per-vertex stitches, ensuring seamless integration into flexible simulation pipelines for downstream editing of sewing patterns, material properties, and dynamic simulations. Finally, we release an industrial-standard, large-scale, high-fidelity garment dataset featuring detailed annotations, vertex-wise correspondences, and a robust pipeline for converting unstructured production sewing patterns into GarmageNet-standard structural assets, paving the way for large-scale, industrial-grade garment generation systems. Our code and dataset will be publicly available.
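To make the representation concrete, here is a minimal sketch of how a Garmage could be organized in code. The container names (PanelGarmage, Garmage) and layout are illustrative assumptions, not the released API; the only structure taken from the paper is that each panel is a geometry image whose pixels carry 3D surface positions in the panel's 2D UV frame, giving both an image view and a point-cloud view of the same garment.

# Illustrative sketch of a Garmage-style container (hypothetical names,
# not the authors' API): each panel is a fixed-resolution geometry image
# whose pixels store 3D surface positions in the panel's 2D UV frame.
from dataclasses import dataclass
import numpy as np

@dataclass
class PanelGarmage:
    positions: np.ndarray  # (H, W, 3) 3D position per UV pixel
    mask: np.ndarray       # (H, W) bool, True inside the panel silhouette

@dataclass
class Garmage:
    panels: list[PanelGarmage]

    def to_point_cloud(self) -> np.ndarray:
        """Collect all valid 3D points across panels (the 3D 'view')."""
        return np.concatenate([p.positions[p.mask] for p in self.panels])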

Method


Overview of GarmageNet. During training, a sample garment (A) is rasterized into a structured set of per-panel geometry images, or Garmages (B), and encoded into a geometry latent space (C). During generation, we apply a two-stage diffusion process to reproduce the garment asset (D) from its text description or raw sewing pattern.
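As a concrete illustration of step (A) to (B), the sketch below scan-converts one panel mesh in its UV domain, interpolating 3D vertex positions barycentrically into pixels. The function name, resolution, and brute-force per-triangle loops are our assumptions for clarity, not the paper's rasterizer.

# Minimal sketch (assumptions: per-panel UV coordinates in [0,1]^2 and a
# triangle list) of rasterizing a panel mesh into a geometry image by
# barycentric interpolation of 3D vertex positions over UV triangles.
import numpy as np

def rasterize_panel(uv, xyz, faces, res=128):
    img = np.zeros((res, res, 3), dtype=np.float32)
    mask = np.zeros((res, res), dtype=bool)
    px = uv * (res - 1)                      # UV -> pixel coordinates
    for f in faces:
        a, b, c = px[f]
        lo = np.floor(np.minimum(np.minimum(a, b), c)).astype(int)
        hi = np.ceil(np.maximum(np.maximum(a, b), c)).astype(int)
        for y in range(max(lo[1], 0), min(hi[1], res - 1) + 1):
            for x in range(max(lo[0], 0), min(hi[0], res - 1) + 1):
                # Barycentric coordinates of pixel (x, y) in triangle abc
                denom = (b[1]-c[1])*(a[0]-c[0]) + (c[0]-b[0])*(a[1]-c[1])
                if abs(denom) < 1e-12:
                    continue  # degenerate UV triangle
                w0 = ((b[1]-c[1])*(x-c[0]) + (c[0]-b[0])*(y-c[1])) / denom
                w1 = ((c[1]-a[1])*(x-c[0]) + (a[0]-c[0])*(y-c[1])) / denom
                w2 = 1.0 - w0 - w1
                if min(w0, w1, w2) >= 0:     # pixel lies inside triangle
                    img[y, x] = w0*xyz[f[0]] + w1*xyz[f[1]] + w2*xyz[f[2]]
                    mask[y, x] = True
    return img, mask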



Overview of PanelJigsaw. Beginning with the generated Garmage (A), we predict point-to-point stitching using both 2D and 3D boundary point features (C-E). Simultaneously, we obtain a vectorized pattern from the Garmage (B). We then combine the two to obtain the sewing pattern (F), which can be integrated into general garment modeling workflows and simulated to produce high-fidelity garment assets (G).
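As an illustration of how per-point features could be turned into point-to-point stitches, the sketch below matches two boundaries by mutual nearest neighbors in feature space. PanelJigsaw learns its 2D and 3D boundary features; this generic helper is a hypothetical stand-in for the trained matcher, not the paper's method.

# Minimal sketch of pairing boundary points across two panels given
# per-point feature vectors (an assumption standing in for the learned
# 2D/3D boundary features).
import numpy as np

def mutual_nn_stitches(feat_a, feat_b):
    """feat_a: (N, D), feat_b: (M, D); returns list of (i, j) stitch pairs."""
    # Pairwise squared distances between the two boundaries' features
    d = ((feat_a[:, None, :] - feat_b[None, :, :]) ** 2).sum(-1)
    nn_ab = d.argmin(axis=1)   # best match in B for each point of A
    nn_ba = d.argmin(axis=0)   # best match in A for each point of B
    # Keep only pairs that agree in both directions
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

Requiring mutual agreement filters out one-sided matches, which keeps spurious stitches off boundary points that have no counterpart on the other panel.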

Results Generated with Sewing-Pattern and Text Conditions


Sewing-pattern-guided garment generation results. For clarity, we highlight specific panels in the sewing patterns to help readers identify the correspondence between each generated Garmage and its raw sewing pattern.




Text-guided garment generation results. From left to right in each example, we show the generated Garmages, predicted point-to-point stitches, line segments, and final simulation results.

Comparison with Sewing-Pattern-Based Methods

Comparison of garment assets generated by GarmageNet against the state-of-the-art sewing-pattern-based garment generation approaches DressCode and Design2GarmentCode.

Comparison with General 3D Generators


Comparison with Tripo3D and Rodin Gen-1.5. Each row corresponds to the generation results for a specific prompt, with our method's output on the left, followed by Tripo3D and Rodin Gen-1.5. For each result, we present the 3D normal and X-ray renderings (left), close-up views of key features (middle), and the normal rendering map in UV space (right). Our method produces semantically meaningful UV maps and geometrically coherent mesh structures, both essential for industrial production. The X-ray renderings highlight our method's ability to generate clean, organized internal geometries that distinctly separate cloth pieces, unlike the chaotic structures produced by Tripo3D and Rodin Gen-1.5. This precision ensures an accurate representation of intricate garment features, aligns with manufacturing standards, and makes our approach well suited to practical applications in fashion design and production.

BibTeX


@misc{li2025garmagenetdatasetscalablerepresentation,
  title={GarmageNet: A Dataset and Scalable Representation for Generic Garment Modeling},
  author={Siran Li and Ruiyang Liu and Chen Liu and Zhendong Wang and Gaofeng He and Yong-Lu Li and Xiaogang Jin and Huamin Wang},
  year={2025},
  eprint={2504.01483},
  archivePrefix={arXiv},
  primaryClass={cs.GR},
  url={https://arxiv.org/abs/2504.01483},
}