Design2GarmentCode:
Turning Design Concepts to Tangible Garments Through Program Synthesis

1Zhejiang Sci-Tech University,     2Style3D Research,     3Shanghai Jiao Tong University,     4Zhejiang University
Teaser Image

Design2GarmentCode is a modality-agnostic sewing pattern generation framework that leverages fine-tuned Large Multimodal Models to generate parametric pattern-making programs from multi-modal design concepts.

Compared with previous methods that directly synthesize vector-quantized patterns, generating parametric pattern-making programs instead allows Design2GarmentCode to produce dedicated, structurally correct patterns from multi-modal design inputs with minimal computational and data requirements.
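To make the contrast concrete, here is a minimal illustrative sketch of the two representations. The parameter names, values, and the derivation function are invented for illustration; they are not actual GarmentCode fields or the paper's implementation.

```python
# Illustrative contrast between the two pattern representations.
# All names and values below are hypothetical, not real GarmentCode fields.

# A vector-quantized pattern fixes raw panel geometry directly as
# (quantized) vertex coordinates; editing it risks breaking structure:
vector_quantized_panel = [(0.0, 0.0), (42.3, 0.1), (41.8, 60.2), (0.2, 59.9)]

# A parametric pattern-making program instead stores semantically
# meaningful design parameters from which geometry is derived, so any
# parameter edit yields a structurally valid pattern:
design = {"bust_circumference": 92.0, "ease": 8.0, "panel_count": 4}

def front_panel_width(d):
    """Derive one panel's width (cm) from a body measurement plus ease."""
    return (d["bust_circumference"] + d["ease"]) / d["panel_count"]

width = front_panel_width(design)  # 25.0 cm
```

Because the program, not the geometry, is the source of truth, changing `ease` or `panel_count` regenerates all dependent panel dimensions consistently.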

Abstract

Sewing patterns, the essential blueprints for fabric cutting and tailoring, act as a crucial bridge between design concepts and producible garments. However, existing uni-modal sewing pattern generation models struggle to effectively encode complex design concepts with a multi-modal nature and correlate them with vectorized sewing patterns that possess precise geometric structures and intricate sewing relations. In this work, we propose a novel sewing pattern generation approach, Design2GarmentCode, based on Large Multimodal Models (LMMs), to generate parametric pattern-making programs from multi-modal design concepts. LMMs offer an intuitive interface for interpreting diverse design inputs, while pattern-making programs serve as well-structured and semantically meaningful representations of sewing patterns, acting as a robust bridge that connects the cross-domain pattern-making knowledge embedded in LMMs with vectorized sewing patterns. Experimental results demonstrate that our method can flexibly handle various complex design expressions such as images, textual descriptions, designer sketches, or their combinations, and convert them into size-precise sewing patterns with correct stitches. Compared to previous methods, our approach significantly enhances training efficiency, generation quality, and authoring flexibility.


Method

Pipeline Overview


Overview of Design2GarmentCode. (1) Program Learning: we fine-tune the DSL Generation Agent (DSL-GA) on GarmentCode example programs, teaching it the GarmentCode grammar and the semantics of each design parameter. (2) Prompt Synthesis: the DSL-GA generates prompts for the Multi-Modal Understanding Agent (MMUA) to interpret the input (3) and extract the relevant design features. (4) Program Synthesis: based on the MMUA's responses, the DSL-GA synthesizes GarmentCode-compliant design configurations and garment programs, which the GarmentCode engine executes to produce sewing patterns and simulated garments (5). To enhance robustness, we incorporate two validation loops: during program synthesis, rule-based validations (7) ensure that the MMUA's outputs are sufficient to generate complete and valid garment programs and design parameters; after the initial generation, the MMUA compares the generated design with the input and suggests modifications to minimize discrepancies.
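The two-agent loop with rule-based validation can be sketched as follows. This is a minimal stand-in, assuming hypothetical agent functions and a placeholder parameter set; the real DSL-GA and MMUA are fine-tuned LMMs, and the GarmentCode engine call is omitted here.

```python
# Minimal sketch of the DSL-GA / MMUA interaction with a rule-based
# validation loop. All function bodies and parameter names are stand-ins,
# not the authors' actual agents or GarmentCode's real design parameters.

REQUIRED_KEYS = {"neckline", "sleeve", "length"}  # assumed design parameters

def mmua_extract(design_input, prompt):
    """Stand-in MMUA: extract design features from the multi-modal input.

    A real MMUA would query an LMM with the DSL-GA's prompt; here we
    simply echo whatever relevant features the input already contains.
    """
    return {k: v for k, v in design_input.items() if k in REQUIRED_KEYS}

def validate(features):
    """Rule-based validation: return the set of missing design parameters."""
    return REQUIRED_KEYS - features.keys()

def dsl_ga_synthesize(features):
    """Stand-in DSL-GA: emit a GarmentCode-style design configuration."""
    return {"design": dict(features)}

def generate_pattern(design_input, max_retries=3):
    prompt = "Describe the neckline, sleeve style, and garment length."
    features = {}
    for _ in range(max_retries):
        features.update(mmua_extract(design_input, prompt))
        missing = validate(features)
        if not missing:
            break
        # Re-prompt the MMUA only for the parameters still missing;
        # for this sketch, unresolved parameters fall back to defaults.
        prompt = f"Also describe: {', '.join(sorted(missing))}."
        design_input = {**design_input, **{k: "default" for k in missing}}
    return dsl_ga_synthesize(features)

program = generate_pattern({"neckline": "v-neck", "sleeve": "puff"})
```

In the full system, the resulting configuration would be executed by the GarmentCode engine, and a second loop would feed the rendered result back to the MMUA for comparison against the original input.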


Text Guided Generation


Quality Comparison on Text-Guided Sewing Pattern Generation. For each design, we present the pattern generated by our method (left) alongside DressCode (right), including front and back renderings of the draped garment. Design elements accurately captured by our method but missed by DressCode are highlighted in red in the input prompt.

Text Guided Generation Results

Image Guided Generation


Quality Comparison on Image-Guided Sewing Pattern Generation. We compare our method with SewFormer on Internet-collected fashion photographs (left), and AI-generated design images without human models (right). The results indicate that our method successfully captures design details from diverse styles, producing sewing patterns that accurately reflect neckline (a, d), cuffs (a, e, g), darts (c, d), and asymmetry (f). In contrast, SewFormer's results exhibit several issues, including incorrect necklines (a, d), missing components (b, g), misplaced or imaginary stitches (d, e), and extraneous pattern pieces (h). Additionally, since SewFormer’s pattern generation does not account for body shape, garments like skirts and pants frequently appear oversized around the waist, causing them to sag when draped.

Image Guided Generation Results

Sketch Guided Generation

Sketch Guided Generation Results


Examples of sketch-based sewing pattern generation. Our method generates high-quality sewing patterns from design sketches in various styles and integrates seamlessly with industrial fashion design software for (a) pattern editing, e.g., the sleeve panels in red boxes are merged from separate front/back sleeve panels; and (b) avatar posture and fabric material editing.


Applications

BibTeX

        
@misc{zhou2024design2garmentcodeturningdesignconcepts,
  title={Design2GarmentCode: Turning Design Concepts to Tangible Garments Through Program Synthesis},
  author={Feng Zhou and Ruiyang Liu and Chen Liu and Gaofeng He and Yong-Lu Li and Xiaogang Jin and Huamin Wang},
  year={2024},
  eprint={2412.08603},
  archivePrefix={arXiv},
  primaryClass={cs.GR},
  url={https://arxiv.org/abs/2412.08603},
}