Sewing patterns, the essential blueprints for fabric cutting and tailoring, act as a crucial bridge between design concepts and producible garments. However, existing uni-modal sewing pattern generation models struggle to encode complex, inherently multi-modal design concepts and to correlate them with vectorized sewing patterns, which possess precise geometric structures and intricate sewing relations. In this work, we propose Design2GarmentCode, a novel sewing pattern generation approach based on Large Multimodal Models (LMMs), which generates parametric pattern-making programs from multi-modal design concepts. The LMM offers an intuitive interface for interpreting diverse design inputs, while pattern-making programs serve as well-structured, semantically meaningful representations of sewing patterns and as a robust bridge connecting the cross-domain pattern-making knowledge embedded in LMMs with vectorized sewing patterns. Experimental results demonstrate that our method can flexibly handle various complex design expressions such as images, textual descriptions, designer sketches, or their combinations, and convert them into size-precise sewing patterns with correct stitches. Compared to previous methods, our approach significantly improves training efficiency, generation quality, and authoring flexibility.
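To make the "pattern-making programs as representations" idea concrete, here is a minimal, self-contained Python sketch of what such a program might look like. It is not the paper's actual DSL; the Panel and Stitch types, the edge-numbering convention, and the flared_skirt parameters are all hypothetical names invented for illustration. The sketch shows why programs are a natural target for an LMM: panel geometry is derived from a few semantically meaningful parameters, and stitches reference panel edges symbolically, so resizing a parameter (e.g., waist) keeps the sewing relations valid by construction.

# Minimal sketch of a parametric pattern-making "program" (hypothetical,
# not the paper's DSL): a two-panel flared skirt whose geometry and stitch
# topology are generated from a few named parameters.
from dataclasses import dataclass


@dataclass
class Panel:
    """A 2D garment panel: a closed outline, vertices in cm."""
    name: str
    vertices: list[tuple[float, float]]  # counter-clockwise; edge i runs v[i] -> v[i+1]


@dataclass
class Stitch:
    """A sewing relation joining one edge of a panel to one edge of another."""
    panel_a: str
    edge_a: int
    panel_b: str
    edge_b: int


def flared_skirt(waist: float = 70.0, length: float = 50.0,
                 flare: float = 1.4) -> tuple[list[Panel], list[Stitch]]:
    """Generate panels and stitches for a simple two-panel flared skirt."""
    half_top = waist / 4.0            # half-width of one panel at the waistline
    half_bottom = half_top * flare    # half-width at the hem

    def trapezoid(name: str) -> Panel:
        # edge 0 = waist, edge 1 = right side seam, edge 2 = hem, edge 3 = left side seam
        return Panel(name, [(-half_top, 0.0), (half_top, 0.0),
                            (half_bottom, -length), (-half_bottom, -length)])

    panels = [trapezoid("front"), trapezoid("back")]
    # Side seams: front-right joins back-left and vice versa. Stitches refer
    # to edges symbolically, so changing waist/flare cannot break them.
    stitches = [Stitch("front", 1, "back", 3), Stitch("front", 3, "back", 1)]
    return panels, stitches


if __name__ == "__main__":
    panels, stitches = flared_skirt(waist=72.0, length=55.0, flare=1.6)
    for p in panels:
        print(p.name, p.vertices)
    for s in stitches:
        print(f"stitch {s.panel_a}.edge{s.edge_a} <-> {s.panel_b}.edge{s.edge_b}")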
@misc{zhou2024design2garmentcodeturningdesignconcepts,
      title={Design2GarmentCode: Turning Design Concepts to Tangible Garments Through Program Synthesis},
      author={Feng Zhou and Ruiyang Liu and Chen Liu and Gaofeng He and Yong-Lu Li and Xiaogang Jin and Huamin Wang},
      year={2024},
      eprint={2412.08603},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2412.08603},
}