Garment Modeling from Fashion Drawings and Sketches

Garment Modeling from Fashion Drawings and Sketches by Cody John Robson

B.Sc., The University of Wisconsin, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

September 2009

© Cody John Robson 2009

Abstract

Modeling of three-dimensional garments is essential for creating realistic virtual environments and is very helpful for real-life garment design. While fashion drawings are commonly used to convey garment shape, so far little work has been done on using them as inputs to the 3D modeling process. We present a new approach for modeling garments from fashion drawings. This approach combines an analysis of the drawing, aimed at extracting major garment features, with a novel modeling method that uses the results of this analysis to create realistic-looking garments that provide a believable interpretation of the drawing. Our method can be used in a variety of setups, where users can sketch the garment on top of an existing three-dimensional mannequin, draw it free-hand, or even scan pre-existing fashion drawings. We demonstrate the robustness of our method on a variety of inputs and provide a comparison between the results it produces and those created by previous methods.


Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgments
Statement of Co-Authorship
1 Introduction
  1.1 Motivation
  1.2 Overview
  1.3 Organization
2 Related Work
  2.1 Sketch and Image Based Modeling
  2.2 Garment Modeling
    2.2.1 Traditional Garment Modeling
    2.2.2 Sketch-Based Garment Modeling
3 Virtual Garment Modeling
  3.1 Overview
  3.2 Drawing Analysis
    3.2.1 Mannequin Fitting
    3.2.2 Line Analysis and Labeling
  3.3 Garment Surface Modeling
    3.3.1 Line Interpretation
    3.3.2 Initialization and Wrapper Surface Computation
    3.3.3 Modeling Complete Garments
4 Results
5 Discussion and Future Work
Bibliography

List of Figures

1.1 Two example results from our modeling system
1.2 Process overview flowchart
2.1 Previous Work: Igarashi et al. 1999
2.2 Previous Work: Karpenko et al. 2006
2.3 MayaCloth System
2.4 Previous Work: Turquin et al. 2004
2.5 Example of offset surface from Turquin et al. 2004
2.6 Previous Work: Turquin et al. 2007
3.1 Fitting and line analysis
3.2 Region assignment pseudocode
3.3 Stages in our modeling process
3.4 Finding the perspective estimation
3.5 Impact of developability term
4.1 Results: A multi-layered outfit
4.2 Results: A princess dress
4.3 Results: A basic schoolgirl skirt
4.4 Results: A fashion illustration with shirt and pants
4.5 Results: A fashion illustration with a tight skirt
4.6 Results: Loose pajamas
4.7 Results: A Chinese dress comparison
4.8 Results: A loose doll dress comparison
4.9 Results: A one-strap dress comparison

Acknowledgments

First and foremost, I would like to thank my supervisor, Prof. Alla Sheffer, for pushing me to accomplish more than I thought was possible. I learned more from working for her than I did in any particular class, and her advice will greatly aid me in my future work. I would also like to thank Prof. Michiel van de Panne for being my second reader; he has been extremely positive and helpful. Thanks to everyone in the lab for all the help. Vladi, Tibi, Ian, James, and Xi, you guys helped me get through the difficult moments and kept morale up during the crunch times. I would like to thank my family for doing everything they could to support me in my undergraduate and graduate years. I always knew I could ask them for help at any time. Most of all, I would like to thank my wife, Amy, for venturing out to British Columbia with me for the last two years. Her unwavering love and support have been invaluable during the perils and long hours of graduate school courses and research.

Cody Robson The University of British Columbia September 19th, 2009


Statement of Co-Authorship

The garment modeling system and algorithms described in Chapter 3 were developed collaboratively by Dr. Alla Sheffer, Vladislav Kraevoy, and myself. I created the implementation for the entire garment modeling system with the exception of the process described in Section 3.2.1, which was originally authored by Vladislav Kraevoy and maintained and updated by myself.


Chapter 1

Introduction

1.1 Motivation

Dressed people are a ubiquitous part of our surroundings, making garment modeling essential for creating realistic virtual environments. Additionally, three-dimensional garment models are a valuable tool for real-life garment design and manufacturing. Despite their ubiquity, garments remain challenging to model. The traditional approach for modeling virtual garments largely follows the real-life design and tailoring workflow [8, 15]. While it enables the creation of realistic, sophisticated garments, it requires both a significant time investment and a high degree of user expertise.

For hundreds of years people have used fashion illustration to communicate garment shape to one another. The use of illustrations remains an integral part of garment design in the fashion industry today. As an intuitive means of communicating garment shape, fashion illustration could serve as a user-friendly input to a virtual garment modeling system. Unlike with the traditional approach, novice users would be capable of simply drawing the type of garment they wish to model.

Some solutions in this direction have been explored in recent years. A lightweight sketch-based approach was proposed by Turquin et al. [27] and further developed in subsequent publications [4, 23, 28]. The method uses a stroke-based interface where the user sketches the garment on top of a template mannequin model and the system generates a garment surface that reflects the sketch. This approach simplifies the modeling process, making 3D garment creation accessible to non-expert users. However, it suffers from several major drawbacks. The modeling paradigm used to interpret the sketched garment shape is fairly simplistic, often leading to the creation of unnatural-looking garments (Figure 2.5). Moreover, by requiring the sketching to be performed on top of an existing mannequin, the user is restricted to pre-defined wearer proportions and pose. Lastly, the basic system [27] is limited in the amount of garment detail it can model, leading Turquin et al. [28] to introduce specialized notations to support details such as folds, requiring additional learning effort from the users (Figure 2.6).

Figure 1.1: Two examples from our garment modeling system. Our novel modeling technique analyzes fashion drawings, fits and poses a 3D mannequin, and generates believable virtual garments consistent with the input drawings.

Our work aims to overcome these drawbacks while maintaining the ease of modeling provided by a free-hand drawing or sketching interface. Our approach enables users to freely sketch the desired garments using standard fashion drawing techniques, and introduces a sophisticated novel modeling mechanism that produces realistic-looking 3D garments that provide a believable interpretation of these free-hand inputs (Figure 1.1). Since our system is capable of processing completed garment drawings, and not just those created using a specialized interface, users can create models from pre-existing sketches, thereby taking advantage of the thousands of designs freely available in literature and online.

1.2 Overview

To achieve these goals we develop an algorithm that analyzes the particular sketch or drawing at hand using a set of general observations about typical fashion drawings and garment models. We use the analysis results to fit a 3D mannequin to the drawn outfit, adjusting the mannequin pose and proportions as necessary (Section 3.2). The results of the fitting and analysis serve as input to a novel garment modeling algorithm (Section 3.3) that can be used to create realistic-looking garments both in conjunction with our free-hand drawing analysis setup and in an interactive sketching setup where the garment is traced on top of an existing mannequin. We do not aim to create fully realistic garment models that can be manufactured from planar patterns, as this requires knowing the location of the garment seams [23]. Many fashion drawings do not contain this information, as exact seam placement typically requires expert tailoring knowledge. Moreover, seams are hard to separate from other details users may draw. Instead, our goal is to create garments that provide a believable interpretation of the user input.

1.3 Organization

This document is organized as follows. Relevant previous work in both sketch-based modeling and garment modeling is discussed in the next chapter. Chapter 3 gives a full, detailed description of our new garment modeling system. Results are shown in Chapter 4, and conclusions and future work are discussed in Chapter 5.


Figure 1.2: Our system can be run in one of two ways. An existing fashion illustration or freehand drawing can be fed into our drawing analysis process to pose a mannequin as well as extract and classify characteristic lines. Alternatively, a user can sketch characteristic lines on top of a predefined mannequin. Either method then drives our garment surface generation process, which produces the virtual garment.


Chapter 2

Related Work

This chapter reviews recent related work on sketch- and image-based modeling, as well as work specific to garment modeling, both traditional and sketch-based.

2.1 Sketch and Image Based Modeling

The pioneering work in sketch-based computer interaction comes from Sutherland [25], who developed the first system where users can interact with a computer by means of drawing strokes on a screen. Beyond the input method, this work is also extremely significant in its contributions to graphical user interfaces and object-oriented programming. In recent years, many systems have been developed using such a sketch-based interface as a means of modeling 3D geometry. These systems solve the problem of generating 3D geometry from sparse 2D input, which is inherently underconstrained. In order to successfully model something believable one must therefore regularize the problem by adding constraints based on the goals of the system. In the case of sketch-based modeling, many methods leverage constraints arising from domain-specific knowledge about the class of objects they are modeling to make their solution more plausible, more detailed, or even possible in the first place.

Some attempts at sketch-based modeling consider fairly general surfaces. Igarashi et al. [10] developed a sketch-based interface for modeling 3D objects which creates smooth 3D surfaces whose contours match the user-sketched lines. Once an initial surface is created, the user can augment or deform the surface with additional sketching gestures (Figure 2.1). Nealen et al. [18] developed a system that uses 3D curves as a basis for modeling operations, allowing the user to add fine details to their original model.

Figure 2.1: A user interactively modeling with Teddy [10] (left) by sketching 2D strokes while the system generates 3D geometry. This intuitive and effective means of 3D modeling for novice users was very successful and became the basis for commercial applications and video games (right).

In these systems, only the first curve generates 3D geometry from a 2D input from scratch; the generated surface is typically an amorphous-looking smooth, round shape. After that, additional curves are interpreted in 3D as editing operations with respect to the model in progress. With enough curves, users are able to model fairly detailed 3D geometry in only a few minutes without prior knowledge of commercial 3D modeling software. These works can be classified as interactive modeling systems, because they require a user to iteratively manipulate the model as the system updates the results after each stroke.

Karpenko et al. [12] developed a system with additional flexibility in the interpretation of the user's 2D sketched input. Their system infers hidden contours and junctions and can handle surfaces with arbitrary holes, unlike [10], which dealt only with closed curves. In contrast to the previously mentioned methods, this system creates a 3D surface to match a complete 2D sketch and does not require iterative refinement with additional 3D curves. While their system works well for generating generic 3D shapes, few of their examples resemble specific recognizable figures (Figure 2.2). This illustrates the difficulty of generating detailed surfaces from a 2D sketch without the aid of domain-specific knowledge. The authors hypothesize that more detailed results could be generated if the system had a means of predicting the type of object the user was drawing, perhaps with the aid of a database. This would allow them to maintain a system that does not constrain the user to a specific object domain, but once a domain is identified it would aid the system in interpreting the user's sketch.

Figure 2.2: Two views of three different models generated with the system of Karpenko et al. [12]. Their system makes no assumptions about the type of objects the user wishes to model. As a result, there is limited detail and shape complexity that can be achieved in their system.

Bourguignon et al. [3] created a 3D sketching system with multiple applications. The user navigates a 3D scene and augments it by sketching 2D strokes. The system actively reinterprets each stroke's shape and visibility as the viewpoint is altered, essentially extending the strokes into 3D. The process infers a 3D surface for each 2D stroke and migrates the stroke along this surface as the viewpoint changes. With this framework, they present ways to annotate preexisting 3D scenes, or sketch 3D characters and garments by adding strokes from multiple viewpoints. It is important to note that no actual 3D surface geometry is created with this method; rather, it generates viewpoint-adapting 3D lines with some artistic shading to convey the appearance of a 3D shape.

Many other sketch-based modeling techniques target a specific class of objects from the outset and leverage common attributes and principles associated with members of that class. For many of these techniques the input is too sparse to generate a perfect 3D representation of the object to be modeled; rather, the goal is to create a realistic, believable interpretation of the input [21, 22]. For instance, sketch-based garment modeling solutions leverage key geometric properties of garments, usually utilizing knowledge about how garments are worn by people [27]. This allows these systems to interpret sketched lines differently depending on their orientation and proximity to a mannequin.

Other subject domains have been explored as the basis of 3D model generation [20]. For instance, Fu et al. [5] took a series of sparse, sketched lines from a user and generated virtual hairstyles. The user only needs to draw lines characteristic of the flow of a hairstyle, and the system solves for a vector field to fill in regions between user strokes to create a dense, believable hair model. Mori et al. [16] developed a sketch-based system to create stuffed animals. This method utilizes the fact that stuffed animals tend to be smooth and do not contain sharp details like cusps and creases. They maintain a flat pattern that would be used to physically construct the stuffed toy as the user interactively models the 3D representation. This allows them to constrain the deformation or augmentation of the model based on the physical limitations of the pattern.

The system developed by Yang et al. [31] allows an arbitrary object-class template to be created and used to model a variety of objects. They provide an algorithm for processing and matching sketched strokes to this new object template. The template consists of one or more definitions of a variety of parts, and each user stroke is associated with the best matching part. Completing each template is a series of procedural modeling rules for interpreting the strokes to create a 3D model. They demonstrate templates for modeling mugs, planes, and fish. While developing a complete template with its associated modeling process may be outside the scope of novice user ability, this system could be used by application developers to provide their users with a sketch-based modeling system for objects relevant to their software. The example they give is allowing users to design airplanes for a flight simulator.

Architecture is another domain with key principles utilized by sketch-based geometry generation techniques [17]. Google's SketchUp [6] program provides a sketch-based interface to model architecture or other man-made objects typical of Computer-Aided Design. It utilizes a method of interpreting the implied depth of the user's object from their 2D sketch.

Image- and video-based techniques have also targeted specific subject domains to generate believable 3D models. Quan et al. [22] utilized known properties of plants to perform plant reconstruction from multiple images. For example, observations about the similarity among leaves aid them in constructing a generic leaf model that allows for higher quality results in the face of noise or occlusions in the input images. This lets them create very believable, realistic-looking plants even if they are not able to perfectly reconstruct the areas hidden from view. Image-based architectural modeling systems, like their sketch-based counterparts discussed earlier, use similar domain-specific attributes of man-made structures to make their modeling problems tractable. Most structures are built of mostly planar, often symmetrical, smooth surfaces, and knowing this makes interpreting images and user-drawn sketches much easier and more accurate. Sinha et al. [24] are able to detect planar surfaces and camera properties from photographs to reconstruct buildings and rooms.

2.2 Garment Modeling

2.2.1 Traditional Garment Modeling

The traditional garment modeling pipeline used in commercial software, such as [7, 15], follows a very manual approach similar to real-life garment design [8]. First the user must design the pattern that will be folded and stitched into the 3D garment surface. Specifying the correct planar pattern, with the seams and cuts in precisely the right places to produce a desired virtual garment, is a non-trivial process which requires significant tailoring expertise (Figure 2.3). With the patterns specified, the user must then dress the mannequin, which requires that the patterns be tuned to the proportions and scale of the mannequin. To make the dressed mannequin appear natural, users typically run a physical simulation to account for gravity and collisions. The simulation often requires a significant amount of trial-and-error-based parameter tuning to obtain a desired look. Despite recent advances [9], simulation interfaces remain challenging for non-experts to use.

Figure 2.3: An example of modeling a shirt with MayaCloth [15]. The proper placement of seams and cuts in the shirt pattern is critical to generating a plausible virtual garment and is not a quick or intuitive process for novice users.

The entire process is extremely labour intensive and difficult to master. It requires working knowledge in three otherwise disjoint fields: tailoring, artistic use of 3D modeling programs, and specifying and tuning physical simulations. Consequently, numerous attempts have been made to simplify the modeling process. Wang et al. [30] propose creating templates of clothing components and defining rules on how these templates fit together to create complete garments. This approach allows the user to mix and match components to generate new garment forms, assuming the desired components are already available, which is often not the case. Others [4, 23, 27, 28] propose using sketch-based methods for garment modeling, as discussed next.

2.2.2 Sketch-Based Garment Modeling

The motivation for a sketch-based system for modeling virtual garments stems directly from the complexities that traditional garment modeling systems pose for non-expert users. Novice users may have a vivid idea of the shape of the virtual garment they wish to model, but at no point in the traditional garment modeling pipeline do they get to directly specify the shape of a desired end-product. Furthermore, if they manage to run the whole pipeline and the resulting garment is not quite what they had desired, it may not be obvious how to modify the planar patterns to correct the final result.

Figure 2.4: An example from Turquin et al. [27]. In contrast to the complexities of traditional virtual garment design, this system allows the user to simply draw the outline of a shirt, and the system generates a 3D surface that satisfies the user's sketch.

Inspired by the difficulty of traditional virtual garment design, Turquin et al. [27] introduced a sketch-based system for garment modeling that drastically simplifies the process, specifically making the user input much more intuitive. The user simply draws a 2D outline of the garment they wish to model, and from it the system generates a 3D virtual garment. The user is given a virtual canvas with a 3D mannequin, and proceeds to dress the mannequin by drawing lines on the canvas. The drawn lines are interpreted as one of two categories. The first is silhouettes, with which the user specifies where the desired 3D garment wraps around the mannequin to the backside. The second is garment borders, like the neckline or hem of a dress. The classification of these lines becomes as simple as identifying which lines cross the mannequin's body (borders) and which lines do not (silhouettes).

Once the system has classified the input lines, it proceeds to generate a 3D surface. An observation made by Turquin et al. is that, given a garment sketch, viewers expect the distance from the garment to the mannequin body to be mostly consistent as we move across the body, perpendicular to the torso and limbs. Since the distance from the silhouette lines to the body can be measured, they utilize it to create an offset surface representing the virtual garment. An offset surface is a surface defined at each point by a value that specifies the distance from the surface to the mannequin. This distance-to-body value is measured for each of the silhouette lines and interpolated for each of the border lines. These distance-to-body values are then propagated to the surface interior. Additional calculations are done to simulate cloth tension, specifically in regions between limbs.
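To make the offset-surface idea concrete, the following minimal sketch (in Python, with hypothetical inputs; it illustrates the general concept only and is not Turquin et al.'s implementation) propagates measured distance-to-body values from silhouette and border vertices to the mesh interior by iterative averaging:

    import numpy as np

    def propagate_offsets(neighbors, offsets, known, iters=500):
        # neighbors[i]: index array of vertices adjacent to vertex i
        # offsets: measured distance-to-body values, valid where known[i] is True
        #          (silhouette vertices, plus interpolated border vertices)
        d = np.asarray(offsets, dtype=float).copy()
        for _ in range(iters):
            for i in range(len(d)):
                if not known[i]:  # interior: relax toward the neighbor average
                    d[i] = np.mean(d[neighbors[i]])
        return d  # surface point = closest body point + d[i] * offset direction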

Figure 2.5: An example result from Turquin et al. [27]. Notice how the surface widens at the bottom (red box) of the dress as if it were following the mannequin's leg shape, even though it is too far away from the mannequin to be expected to do so. This is because their offset surface method takes mannequin shape into account even in loose regions, leading to results not typical of real-world garments.

The resulting virtual garments are indeed consistent with the user's input sketch from the front view; however, the use of an offset surface has its own undesirable effects. Most notably, the depth of loose areas of the garment is influenced by the mannequin shape underneath, even when the garment is significantly far away from the mannequin, where the viewer does not expect the mannequin to have a direct influence on the garment shape. For instance, in Figure 2.5 the surface at the bottom of the skirt (red box) juts outward because of the mannequin's leg shape. This looks unrealistic, and the system does not have a way for the user to correct such behavior once the surface is generated. Such unintuitive influence of the mannequin body in loose regions is only magnified on extremely loose garments.

This work is extended in Turquin et al. [28] to include additional types of input lines that can model more complicated garment behavior. Figure 2.6 shows how the user can draw lines to specify a wavy border and folds. The more of these new features are added to the sketch-based system, the more complicated the sketching gesture language becomes. The original motivation for a sketch-based interface is to allow the user to easily draw a 2D representation of the 3D surface they wish to create. The more complicated that process becomes, the more time and effort the user must spend to learn the modeling interface. As a result, these methods trade off ease of input against quality of results.

Figure 2.6: In the system developed in [28], the user sketches additional lines besides the outline to produce more detailed results. In this case, the user sketches vertical strokes (green) to model folds and wavy strokes (purple) to indicate a wavy border at the bottom of the skirt.

Other works [4, 23] have focused on utilizing surface developability to model more realistic garments with a sketch-based interface. Mathematically, a surface is developable if it has zero Gaussian curvature. In layman's terms, any surface that can be flattened into a plane with no distortion is developable. Real-life garments are quasi-developable because their patterns consist of planar pieces. Once constructed, real-life garment materials often exhibit little stretch or distortion from their original pattern shape, so they largely continue to behave like a developable surface. Decaudin et al. [4] augment the sketching system from [27] by taking the resulting surface and deforming it to increase its developability. Then they procedurally model fabric buckling to create realistic folds. Rose et al. [23] replace the distance-based surface within each panel by a developable approximation. Both methods continue to use the distance-to-body metaphor to infer the seam positions, and the results rely heavily on the quality and placement of the seams.

Our method improves on the surface modeling approach proposed by these works in two ways. First, we produce a more realistic-looking interpretation of the input, especially in loose, interior regions. Second, we provide a much more intuitive method of input, allowing the user to draw, or provide a previously drawn image of, the front view of a person wearing the garments they wish to model. Unlike previous sketch-based methods, the user is not limited to a mannequin provided by the sketching canvas, and they do not need to learn a sketching gesture language to communicate with the system.

Chapter 3

Virtual Garment Modeling

3.1 Overview

The goal of our system is to create realistic-looking garment models from fashion drawings or sketches using cues similar to those used by humans when interpreting such drawings. As already observed by Turquin et al. [27], the human interpretation of a drawn garment is strongly linked to the shape of the garment wearer. When modeling a garment from a fashion drawing, in contrast to the sketching setup of Turquin et al., we do not a priori have a model of the wearer. We observe that typical fashion drawings contain the entire dressed figure [1, 11] as, predictably, it helps the viewer to interpret the garment proportions. We therefore use this drawn figure to fit a 3D human template, or mannequin, to the drawing (Section 3.2). Our fitting process starts by fitting a planar skeleton model (Figure 3.1) to the drawing, extracting the mannequin's pose and proportions, and then refining the fit using the actual mannequin. We combine the fitting process with an analysis of the drawing that identifies characteristic garment lines that are subsequently used by the modeling process (Figure 3.1). Given the set of lines and the fitted mannequin we proceed to model the actual garment. The modeling process is one of the key contributions of this work. The method is based on a new interpretation of the characteristic lines, leading to the creation of more realistic-looking garments than previous techniques.

3.2 Drawing Analysis

The drawing analysis performs two tasks necessary for the garment modeling stage: it fits a mannequin to the drawn figure and identifies the characteristic garment lines in the drawing. We assume that the drawings are purely line-based or, alternatively, that the lines have been extracted from the drawing using standard image-processing software. As a pre-processing step we close small gaps in the line drawing, extending existing lines along the tangent direction by an epsilon distance if the extension reaches another line, and extract all the connected regions (Figure 3.1 (b)). We define the initial outline of the drawn figure as the boundary of the outer region and use it to perform initial mannequin fitting (Section 3.2.1). We then perform further analysis on the drawing using the obtained mannequin fit to detect any interior loops as well as inner silhouette edges (Section 3.2.2) and use those to refine the fit (Figure 3.1 (e)). The result of the final fitting is used to identify and label the characteristic garment lines (Section 3.2.2).

Figure 3.1: Fitting and line analysis for a garment in Figure 4.4, left to right: (a) skeleton in rest pose with typical female figure proportions; (b) initial regions and outline; (c) skeleton fit to initial outline; (d) clustered regions and updated outline (extremity clusters, removed later on, shown in grey); (e) skeleton fit to new outline; (f) characteristic line labeling with silhouettes in red, borders in green, and part boundaries in yellow.
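As an illustration of the gap-closing pre-process, the sketch below (Python; the function names and the snapping policy are our own assumptions, not the thesis implementation) extends each open endpoint along its tangent by at most eps and keeps the extension only if it reaches another line:

    import numpy as np

    def _orient(u, v, w):
        # sign of the z-component of the 2D cross product (v - u) x (w - u)
        return np.sign((v[0]-u[0])*(w[1]-u[1]) - (v[1]-u[1])*(w[0]-u[0]))

    def _intersects(p, q, a, b):
        # True if segment pq crosses segment ab (proper intersection)
        return (_orient(p, q, a) != _orient(p, q, b) and
                _orient(a, b, p) != _orient(a, b, q))

    def close_gaps(polylines, eps):
        # polylines: list of (N, 2) float arrays of 2D points
        result = []
        for k, line in enumerate(polylines):
            line = np.array(line, dtype=float)
            for end in (0, -1):
                t = line[end] - (line[1] if end == 0 else line[-2])
                t /= np.linalg.norm(t) + 1e-12          # outward tangent at endpoint
                tip = line[end] + eps * t               # candidate extension
                hit = any(_intersects(line[end], tip, other[i], other[i+1])
                          for j, other in enumerate(polylines) if j != k
                          for i in range(len(other) - 1))
                if hit:                                 # gap closed: keep the extension
                    line = (np.vstack([tip, line]) if end == 0
                            else np.vstack([line, tip]))
            result.append(line)
        return result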

3.2.1 Mannequin Fitting

The implementation for this section was originally programmed by Vladislav Kraevoy and is included here for the sake of completeness. Since its original implementation, both the skeleton fitting and the mannequin refinement process have been expanded upon and maintained by Cody Robson.

Our method fits a rigged human template, or mannequin, to the outline of a dressed figure. This task is quite similar to the fitting of human templates to segmented images [2, 29]. In general, due to the vast variety of human poses, such fitting is quite challenging, with the methods above relying on images taken from multiple views to extract the template's pose and proportions. In our case, only one view is available. However, we can rely on a number of assumptions about typical character poses and view directions used in fashion drawings to simplify the problem. Specifically, we observe that the subjects nearly always face the viewer and are typically drawn in a standing pose with articulation largely limited to the image plane, as such poses tend to best showcase the garment [11]. Thus we can effectively reduce the pose-extraction component of the fitting to two dimensions, orienting the mannequin to face the viewer and restricting the articulation to the image plane.

Following [29] we use a two-step fitting approach. We first fit a 2D skeleton structure to the outline, obtaining the pose and proportions used to adjust the mannequin rig and to skin it, and then fine-tune the fit using the actual mannequin.

Skeleton Fitting: We associate a skeleton with a basic set of links (Figure 3.1(a)) with our mannequin. Each link is defined as a rectangle with an associated width and length and a specified connectivity to other links. In contrast to Vlasic et al. we do not know a priori the proportions of the drawn figure and thus cannot specify the link dimensions beforehand. Instead, to maintain realism, we rely on human proportion ratios typically used in fashion drawings [1, 11], for instance requiring thighs and calves to be the same length. The skeleton fitting is performed using non-linear optimization with the link dimensions and joint angles as unknowns and is based on the following set of considerations.

• We clearly expect the mannequin to be bounded by the outline of the dressed figure and thus expect the skeleton to be contained by it. To reflect this expectation, our non-linear optimization function contains a term measuring an asymmetric signed distance from the sides of the skeleton links to the outline. The distance is measured by sampling the link sides at equal intervals and computing the signed distance from the sampled points to the closest points on the outline. The metric becomes zero if the distance is negative.

• Since garments in general can be quite loose, we do not expect the skeleton to closely match the outline. However, we can expect the sides of the links to be close to the outline in the parts of the skeleton where the drawing typically shows the actual wearer's body, such as wrists and ankles, and along the shoulders where gravity dictates a tight garment fit. The optimization term reflecting this expectation measures the distance from these points on the skeleton links to the closest point on the outline, where we take the distance into account only if the relevant skeleton points are inside the outline. This term is necessary to prevent the mannequin from "floating inside" the drawn figure.

• For the same reason we use an anti-shrinkage term which pulls the mannequin head toward the top of the outline and the feet toward the bottom.

• We add a term aiming to maintain the typical proportion ratios [1] between link dimensions.

• Most figures in fashion drawings have fairly simple poses, as those best showcase the actual garment. Hence the optimization includes a term aiming to preserve the skeleton angles relative to a default rest pose.

We initiate the skeleton fitting by aligning the skeleton with the center of mass of the outline and scaling it to fit the outline height. To speed up convergence, as an initial guess we place arm and leg joints as far apart as the outline allows, while keeping the torso centered at the midpoint between them. We use the line-search method as implemented in Matlab to solve for the optimal skeleton placement. The distances in all the terms are normalized with respect to the outline height. The term weights are one, one, eight, five, and one hundred, respectively. The high weight for angle deviation is the result of differences in scale between distances and angles. The fitting is first performed with the initial outline and then refined using the more detailed outline computed as described in Section 3.2.2.

Local Mannequin Deformation: Once the skeleton fitting converges, we fit the actual mannequin to the drawing. We pose the mannequin and adjust its proportions based on the skeleton fitting results, using standard skinning techniques. Following the fitting the mannequin captures the overall shape of the dressed figure; however, minor local misalignments may still exist. Most are harmless from the garment modeling point of view. However, parts of the mannequin may protrude outside the outline, causing potential artifacts in the subsequent modeling stage. To improve the fit we compute matches between the outline and the mannequin and use those to deform the mannequin toward the outline, using linear Laplacian deformation [13]. For each point on the outline we find the best-fitting mannequin point using a combination of normal and position similarity, $\|p - v\|^2 + \psi(n_p \cdot n_v - 1)^2$, where $p$ and $v$ are the 2D positions of the matched points and $n_p$ and $n_v$ the respective 3D normals. We set $\psi = 100$ in all examples. The use of the normal component is critical to ensure that only points on or close to the mannequin silhouette are considered by the matching. We ignore matches where the mannequin point is already inside the outline, as well as outlier matches. To resolve inconsistent matches we use an ICP-like setting, treating matches as soft constraints and repeating the match-and-deform steps several times with increasing constraint weight. The process is repeated until the mannequin lies entirely inside the outline.
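For illustration, the matching criterion can be written directly in code. The sketch below (Python; a brute-force pairing under our own simplifying assumptions, omitting the outlier and inside-outline rejection described above) evaluates $\|p - v\|^2 + \psi(n_p \cdot n_v - 1)^2$ for every candidate pair:

    import numpy as np

    def match_cost(p, v, n_p, n_v, psi=100.0):
        # p, v: 2D positions; n_p, n_v: unit 3D normals
        return np.sum((p - v)**2) + psi * (np.dot(n_p, n_v) - 1.0)**2

    def best_matches(out_pts, out_nrm, man_pts, man_nrm, psi=100.0):
        # for each outline point, pick the mannequin point with minimal cost
        return [int(np.argmin([match_cost(p, v, n_p, n_v, psi)
                               for v, n_v in zip(man_pts, man_nrm)]))
                for p, n_p in zip(out_pts, out_nrm)]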

3.2.2 Line Analysis and Labeling

The analysis has three main goals: extracting and labeling characteristic lines, identifying non-garment drawing elements (extremities), and refining the outline. Since we assume the drawing contains a full human figure, we expect the extremities (hands, feet, head) to show up in most drawings. However, we do not expect them to correspond to actual outfit parts (we do not aim to model gloves, shoes, or hats). Thus we detect the extremities and remove them from the modeling input.

Separating Body Parts: We roughly identify the major body parts of the drawn dressed figure, associating regions in the drawing with groups of skeleton links representing body parts significant for modeling purposes (Figure 3.1 (d)) and clustering them together. Thus, shoulders and upper torso, thighs and lower torso, and head and neck are each viewed as one group. The association is loosely based on the amount of overlap in the plane between the regions and the link groups representing each part. To initiate the extremity detection process, for each of the extremity groups we locate the biggest region overlapping it and associate the region with it. We then associate the remaining regions with the parts they overlap most, initially leaving unassigned those regions that do not overlap the skeletal links or overlap only a small portion of a link (less than 30% of the region area). For regions that overlap multiple body parts, such an assignment may not be best. To improve the assignment we perform re-clustering based on cluster compactness. Regions are moved from one cluster to another if they overlap the links associated with both and the reassignment shortens the sum of cluster perimeters. The re-clustering ignores the regions assigned to extremities in the first stage. Finally we process the previously unassigned regions, testing whether they can correspond to interior outline loops. We check if they are surrounded by assigned regions and if the shared boundaries are aligned with the corresponding skeleton links. If so, the regions are classified as exterior and their boundary is added to the outline. Otherwise they are added to the best-fitting adjacent cluster based on compactness. Pseudocode for this algorithm is shown in Figure 3.2. The results of the clustering are used to refine the skeleton fit and to facilitate line labeling. Following the fitting, the clustering is performed again to capture minor changes in skeleton layout.

    // find the extremities
    for all bone ∈ E do
        bestArea ← 0; bestRegion ← Null
        for all region ∈ R do
            overlap ← AreaIntersection(region, bone)
            if overlap > bestArea then
                bestArea ← overlap; bestRegion ← region
            end if
        end for
        regionBones[bestRegion] ← bone
        regionClusters[bestRegion] ← boneCluster[bone]
    end for

    // assign bones to the remaining regions
    for all region ∈ R do
        if regionClusters[region] ≠ Null then
            continue
        end if
        bestArea ← 0; bestBone ← Null
        for all bone ∈ B do
            overlap ← AreaIntersection(region, bone)
            if overlap / Area(region) > 0.3 and overlap > bestArea then
                bestArea ← overlap; bestBone ← bone
            end if
        end for
        if bestBone ≠ Null then
            regionBones[region] ← bestBone
            regionClusters[region] ← boneCluster[bestBone]
        end if
    end for

    // test compactness
    for all cluster ∈ C do
        myPerimeter ← Perimeter(cluster)
        for all nregion ∈ GetNeighborRegions(cluster) do
            newPerimeter ← Perimeter(cluster + nregion)
            if newPerimeter < myPerimeter then
                regionClusters[nregion] ← cluster
            end if
        end for
    end for

    // process unassigned regions
    for all region ∈ R do
        if regionClusters[region] ≠ Null then
            continue
        end if
        bestPerimeter ← 0; bestCluster ← Null
        for all nregion ∈ GetNeighborRegions(region) do
            if regionClusters[nregion] = Null or
               not IsAligned(Border(region, nregion), regionBones[nregion]) then
                bestCluster ← Null; break
            end if
            myPerimeter ← Perimeter(regionClusters[nregion] + region)
            if myPerimeter < bestPerimeter or bestCluster = Null then
                bestPerimeter ← myPerimeter; bestCluster ← regionClusters[nregion]
            end if
        end for
        regionClusters[region] ← bestCluster
    end for

Figure 3.2: This algorithm assigns each enclosed line region to a body part cluster. E is the set of extremity bones, B the set of non-extremity bones, R the set of enclosed line regions, and C the set of body part clusters. regionBones is a map between regions and bones; regionClusters is the map between regions and clusters filled in by this algorithm.

Line Labeling: The labeling step extracts from the drawing the four types of lines used by our garment modeling algorithm: silhouettes, borders, part boundaries, and folds (Section 3.3). Following the clustering stage we expect the cluster boundaries to capture the lines in the drawing that fit one of the first three categories. We begin by considering the boundaries between adjacent clusters. By construction the boundaries consist of polylines that are either aligned with corresponding skeleton links or perpendicular to them. Polylines aligned with the links are likely to be model silhouettes that separate body parts, e.g., the torso from the arms or the legs from one another. The perpendicular polylines are likely to be borders or part boundaries. To extract the two types of polylines we process each boundary edge independently, labeling it as silhouette or part boundary based on the angle between it and the corresponding links. We found this simple procedure to produce correct labeling. The identified silhouettes are treated as duplicate outline edges by the fitting and modeling stages. The part boundary labeling is temporary and can be changed later on.

We now proceed to label the outlines. Outline edges can belong to one of two categories, silhouettes or borders. We expect silhouettes to be aligned with the skeleton, and borders to be roughly perpendicular to it. However, in this case edge-level labels can be inaccurate. To obtain a conclusive labeling we employ a bottom-up clustering with the expectation that an outline should be split into only a few border or silhouette edge sequences. First each outline edge is labeled independently using two criteria: the angle between the edge and the closest skeleton link, and, if the edge shares a vertex with a line classified as part boundary in the previous stage, the angle with this line. The classification error for an edge labeled as silhouette is measured as $\angle(e, s)^2 + (\pi/2 - \angle(e, b))^2$, where $e$ is the edge in question, $s$ the skeleton, and $b$ the adjacent part boundary. If no adjacent part boundaries exist, only the first term is used. The error effectively measures how far the edge is from being parallel to the skeleton and perpendicular to the part boundary. The classification error for an outline edge labeled as border is effectively inverted and is measured as $(\pi/2 - \angle(e, s))^2 + \angle(e, b)^2$. We first label each edge independently, selecting the label with the smaller error, and then incrementally invert the labeling of sequences of edges. We stop once the next label flip requires an order of magnitude jump in the error (in all our experiments this was an ideal indicator of overclustering). Once the process terminates, part boundaries adjacent to borders and aligned with them are relabeled as borders. Region boundaries interior to clusters but aligned with the newly identified border lines are similarly labeled as borders. Finally, we discard the detected extremity clusters, relabeling part boundaries adjacent to them as borders, and identify folds as relatively straight lines in the drawing that go upward from borders.

User Interaction: The analysis step is based on a number of assumptions about garment drawings which, while holding true for most models, can break down in some setups. Both part identification and line labeling errors can be trivially corrected by the user via a basic visual interface. Once the identification or labeling is corrected, the rest of the algorithm can proceed as is. Chapter 4 discusses the types of inputs where this mechanism might be utilized.
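To make the edge-labeling criterion of this section concrete, the following sketch (Python; the helper names are ours) computes the silhouette and border classification errors for a single outline edge:

    import numpy as np

    def _angle(u, v):
        # unsigned angle between 2D directions, folded into [0, pi/2]
        u = u / np.linalg.norm(u); v = v / np.linalg.norm(v)
        return np.arccos(np.clip(abs(np.dot(u, v)), 0.0, 1.0))

    def label_errors(e_dir, skel_dir, boundary_dir=None):
        # returns (silhouette_error, border_error) for one outline edge
        a_s = _angle(e_dir, skel_dir)
        sil, bor = a_s**2, (np.pi/2 - a_s)**2
        if boundary_dir is not None:                 # edge touches a part boundary
            a_b = _angle(e_dir, boundary_dir)
            sil += (np.pi/2 - a_b)**2                # silhouette: perpendicular to boundary
            bor += a_b**2                            # border: parallel to boundary
        return sil, bor

Each edge then receives the label with the smaller error, after which sequences of labels are flipped incrementally until the next flip would cause an order-of-magnitude jump in error.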

3.3 Garment Surface Modeling

Our modeling algorithm generates three-dimensional realistic-looking garment surfaces using as input the posed mannequin on which we fit the garment and a labeled set of characteristic lines describing the garment. In this context, we associate a local body-aligned frame with each surface vertex, where body-aligned refers to alignment with the corresponding link in the mannequin skeleton. The frame consists of a body-aligned vector and two cross-body vectors orthogonal to it, one of which is in the image plane. For example, on a sleeve we will have one vector aligned with the arm link, or bone, and two perpendicular to it.

Figure 3.3: Modeling stages for the suit in Figure 1.1: (top) posed mannequin and labeled lines, wrapper surface; (bottom) tightness mask and final surface.
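A minimal sketch of the frame construction described above (Python; we assume the image plane is z = 0 and that the skeleton link direction is given in that plane):

    import numpy as np

    def body_aligned_frame(link_dir_2d):
        # body-aligned vector: the skeleton link direction, lifted to 3D
        a = np.array([link_dir_2d[0], link_dir_2d[1], 0.0])
        a /= np.linalg.norm(a)
        depth = np.array([0.0, 0.0, 1.0])      # cross-body vector out of the image plane
        in_plane = np.cross(depth, a)          # cross-body vector in the image plane
        return a, in_plane, depth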

3.3.1 Line Interpretation

For modeling purposes we use four types of lines, extracted by the drawing analysis: silhouettes, borders, folds, and part boundaries (Figure 3.3, top left). The silhouettes represent the locations where the back and front garment surfaces coincide. The borders correspond to depth discontinuities in the outfit, indicating garment boundaries. Folds capture interior silhouettes on the front of the garment where the surface normal is roughly orthogonal to the viewer. Part boundaries correspond to natural boundaries between major body parts (e.g., between the upper and lower torso or between the shoulder and upper arm) present in the drawing and extracted by our line analysis algorithm, effectively as a by-product. The modeling process is based on a number of observations, or assumptions, about the human interpretation of these types of lines in fashion drawings.

Silhouettes: The silhouettes undoubtedly provide the strongest cues to the 3D shape of the garment. However, the question of how to interpret them remains open. Instead of the offset-based interpretation [27], we speculate that the human interpretation of silhouettes takes into account three potentially conflicting factors: the silhouette shape, avoidance of garment-body intersections, and lastly gravity. We speculate that when silhouettes are far from the body their shape dictates much of the garment shape. Specifically, the normals along silhouettes seem to predict the body-aligned component of the surface normal across the body, and to a lesser degree the other normal components. The silhouette influence on the normal is subject to some attenuation the further we are from the silhouettes, due to the fact that garments often have side seams, the stiffness of which counteracts gravity, while in typically seamless areas, on the back and front, gravity is likely to play a bigger role, reducing the vertical component of the surface normal. In regions where the silhouettes are close to the body, avoidance of garment-body intersections leads to the expectation of garments tightly wrapping the body, overriding other considerations.

Borders: In traditional fashion drawings the viewer is assumed to be standing at a finite distance from the drawn figure and at the same eye level. This assumption, combined with the expectation that garment borders are roughly planar, produces a subtle visual effect heavily utilized in fashion drawings that often lets viewers infer the depth profile along garment borders (Figure 3.4). Given the shape of the border, viewers appear to use it to infer the cross-body profile of the garment near the border. In other words, the cross-body components of the vertex normals at the border strongly influence the cross-body normal components along the surface. Thus, combining the silhouettes and the borders effectively provides the normals across the surface in loose garment regions, defining the surface shape. As demonstrated by the figures throughout this paper, this interpretation appears to lead to realistic-looking results consistent with the drawings.

Folds and Part Boundaries: Fashion drawings often contain multiple other lines in addition to silhouettes and borders. Many of those reflect texture rather than geometric details. Others capture details that are not front-back symmetric, namely ones showing on the front but not on the back, such as pockets or collars. We focus on modeling garment geometry and use the drawing to infer the back geometry as well. Thus we only consider lines indicating geometric features that are likely to exhibit a front-back symmetry. Both folds and part boundaries fit these criteria, with folds having an obvious geometric meaning and part boundaries typically indicating the presence of narrow grooves on the surface, usually found at seams or transitions between garments (Figure 4.3).

As noted earlier, the modeling considerations are significantly different in tight and loose regions of the garments. Hence our modeling procedure operates in two stages (Figure 3.3), first computing a tight-fitting wrapper garment that provides a feasible geometry for the regions where the silhouettes are close to the mannequin (Section 3.3.2), and then updating the geometry in loose regions to reflect the additional considerations while ensuring a smooth transition between the different regions (Section 3.3.3).

Figure 3.4: Using the viewer location to estimate border positions. We use the distance to the body $d$ and the vertical position of the vertex $h$ to obtain the ray from the eye through the point $v_i$ and intersect it with plane $P$ to obtain 3D coordinates. The computation uses a number of approximations, but the result is accurate enough for our purposes.
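The border-depth computation illustrated in Figure 3.4 (and detailed in Section 3.3.2) amounts to a ray-plane intersection. Below is a sketch under the stated assumptions (a viewer at a known eye position, image plane z = 0; the function and parameter names are ours):

    import numpy as np

    def border_vertex_3d(v_img, eye, plane_pt, plane_nrm):
        # ray from the eye through the vertex lifted to the image plane z = 0
        origin = np.asarray(eye, dtype=float)
        target = np.array([v_img[0], v_img[1], 0.0])
        d = target - origin
        denom = np.dot(plane_nrm, d)
        if abs(denom) < 1e-9:                  # ray (nearly) parallel to border plane
            return target
        t = np.dot(plane_nrm, np.asarray(plane_pt) - origin) / denom
        return origin + t * d                  # 3D border vertex position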

3.3.2 Initialization and Wrapper Surface Computation

The garment modeling process starts by generating a triangulation of the garment outline that conforms to all the characteristic lines. The triangulation is duplicated for the front and back regions, with the vertices and edges on the silhouettes shared by the two meshes and the rest moving independently. Vertices along borders between garment layers (e.g., at the waistline in Figure 3.3) are duplicated, and the bottom surface is extended to hide the visible discontinuity.

The initialization uses the likely viewer location to estimate the 3D positions of border vertices (Figure 3.4). Using the assumption that garment borders are typically planar, we fit a plane to each border line that is as-perpendicular-as-possible to the image plane. We then compute the three-dimensional positions of border vertices as intersections between this plane and a ray from the viewer location to the vertex position in the image plane. The obtained vertex depth depends on the estimated distance from the viewer to the mannequin. We set the default distance to twice the mannequin height as this appears to lead to the most plausible results. By modifying the distance the user can control the depth effect. We mirror the computed front border positions to the back. This process is deemed reliable, and the obtained positions are used for further processing, if the computed borders do not intersect the mannequin.

We now proceed to compute the actual surface geometry. As observed earlier, the garment geometry in regions with tight silhouettes is most influenced by the requirement for the garment to wrap around the body without intersecting it. The most common approach for simulating tight garments is to use a spring system with or without rest lengths [8]. When rest lengths are not used this amounts to finding a minimal surface, or minimal mean curvature surface, with body collisions as boundary conditions. However, this formulation neglects the expectation that garments are somewhat developable, i.e., have small Gaussian curvature. We observe that one way to reduce the Gaussian curvature is to minimize the squared normal curvature on the surface in a consistent direction. For garments, due to gravity and human body shape, the vertical direction is a natural choice. Thus, to obtain a plausible wrapper surface we use a modified weighted Laplacian (or equivalently, spring-based) formulation which prioritizes curvature minimization along the vertical direction. The resulting energy term is combined with the requirements to preserve the known vertex positions along the borders and silhouettes and to prevent garment intersection with the body. For the silhouettes we only preserve the positions in the image plane, letting the vertices move freely in the depth coordinate. Combining these requirements yields the following optimization functional:

$$\min \sum_i \left\| v_i - \frac{1}{\sum_{(i,j)} \phi_{ij}} \sum_{(i,j)} \phi_{ij} v_j \right\|^2 + \delta \sum_{i \in S} \left[ (v_i^x - c_i^x)^2 + (v_i^y - c_i^y)^2 \right] + \sum_{i \in B} \| v_i - c_i \|^2 \qquad (3.1)$$

where $v_i$ are the mesh vertices, $S$ and $B$ are the sets of silhouette and border vertices respectively, $\phi_{ij} = (1 - \gamma) + \gamma\, e^{-((1 - N_{ij} \cdot V)/\sigma_1)^2}$, $N_{ij}$ is the direction of the edge $v_i v_j$, and $V$ is the vertical direction. We set δ = 100, σ1 = 0.1, and γ = 0.5 in all our examples. The choice of soft rather than hard constraints creates a smoother and more natural surface shape near the constrained vertices. The intersection avoidance is incorporated explicitly through the use of a Gauss-Seidel type solver. The process is initiated by moving each interior vertex of the mesh along the depth axis to avoid intersection with the mannequin. At each iteration, if the new position, computed based on Equation 3.1, is inside the mannequin, the vertex is only moved along the vector between the old and new positions until it lies on the mannequin surface. An example result demonstrating the impact of the vertical weighting is shown in Figure 3.5. The main difference is in the amount of garment "stickiness" in concave regions, such as the small of the back, where using γ > 0 we achieve results more consistent with less stretchy fabrics and a lack of tightly-tailored seams.

During the iterations, determining whether a candidate position lies inside the mannequin can be expensive if not done properly. In the naïve case, we would need to calculate the distance from each vertex's candidate position to each triangle of the mannequin mesh, which would take prohibitively long. To make the process efficient we utilize a pre-computed distance transform function. A distance transform function computes the distance to a given mesh for each point on a regular grid. Once computed, it provides fast distance estimates for any point in the space spanned by the grid. To estimate the distance from a candidate vertex position to the mannequin mesh, we simply take a weighted average of the eight nearest grid points' distance values. Accessing a regular grid and trilinearly interpolating eight distance values is much faster than computing distances on the fly, and provides a satisfactory distance estimate for our purposes. We use the distance transform implementation introduced in [14], which has linear complexity in both the number of grid points and the size of the mesh.
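To make this concrete, here is a small sketch of the grid lookup and of the collision step inside the Gauss-Seidel iterations. This is our own Python/NumPy reconstruction, not code from the thesis; the names `distance_grid`, `grid_origin`, `cell_size`, and `sdf` are ours, and we assume a signed distance convention (negative inside the mannequin):

```python
import numpy as np

def distance_lookup(p, distance_grid, grid_origin, cell_size):
    """Trilinearly interpolate a precomputed distance-transform grid at p.

    distance_grid holds, at each regular grid node, the distance to the
    mannequin mesh (negative inside, positive outside, by assumption)."""
    g = (p - grid_origin) / cell_size          # continuous grid coordinates
    i0 = np.floor(g).astype(int)
    # Clamp so that all eight surrounding nodes stay inside the grid.
    i0 = np.clip(i0, 0, np.array(distance_grid.shape) - 2)
    t = g - i0                                 # fractional position in the cell

    # Gather the cell's eight corner values and blend them trilinearly.
    d = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - t[0]) if dx == 0 else t[0]) * \
                    ((1 - t[1]) if dy == 0 else t[1]) * \
                    ((1 - t[2]) if dz == 0 else t[2])
                d += w * distance_grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return d

def project_update(v_old, v_new, sdf, max_steps=20):
    """Gauss-Seidel collision step: if the candidate position v_new falls
    inside the mannequin, back off along the segment (v_old -> v_new)
    until the vertex lies on (or just outside) the surface."""
    if sdf(v_new) >= 0.0:
        return v_new                 # candidate is outside the body, accept it
    lo, hi = 0.0, 1.0                # bisect the step parameter on the segment
    for _ in range(max_steps):
        mid = 0.5 * (lo + hi)
        p = (1.0 - mid) * v_old + mid * v_new
        if sdf(p) >= 0.0:
            lo = mid                 # still outside: move further toward v_new
        else:
            hi = mid                 # inside: retreat toward v_old
    return (1.0 - lo) * v_old + lo * v_new
```

In the solver, `sdf` would be bound to the grid lookup, e.g. `sdf = lambda p: distance_lookup(p, grid, origin, h)`; the bisection realizes the rule of moving the vertex along the vector between its old and new positions until it reaches the mannequin surface.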

3.3.3 Modeling Complete Garments

For a tight-fitting outfit the wrapper provides a feasible interpretation. However, for outfits that have loose regions we need to incorporate additional information into the setting, specifically the shape of the silhouettes, borders and folds. We do this using a normal-based surface editing approach, as enforcing normals implicitly enforces the shape of a surface.

Tightness Mask

We start by computing a tightness mask, which for each vertex indicates how much we expect the wrapper surface solution to be preserved locally. The mask is set to zero or near zero in loose regions and is close to one in tight regions (Figure 3.3). We first compute the value of the tightness mask along the silhouettes and then propagate it inward. The mask at a silhouette vertex is a function of the planar distance d from the vertex to the body (normalized with respect to the mannequin bounding box), and is set using a Gaussian distribution to $e^{-(d/\sigma_2)^2}$ with a value of σ2 = 0.003, as we want the mask to become zero once we move away from the mannequin even slightly. To distribute the values across the mesh we solve a least-squares minimization, which assigns similar mask values to adjacent vertices. Since we expect the fit distance to remain roughly similar as we circle around the mannequin, we assign higher weights to pairs of vertices locally aligned with the 2D cross-body direction:

$$\min \sum_{ij} w_{ij}\,(m_i - m_j)^2 \quad \text{subject to} \quad m_i = M_i \;\; \forall i \in S,$$

where $m_i$ are the per-vertex masks and $M_i$ are the masks computed for the silhouette vertices. The weights are set to $w_{ij} = \frac{1}{\|v_i - v_j\|}\, e^{-|N_{ij} \cdot (L_i + L_j)/2|^2 / \sigma_3}$, where $L_i$ is the direction of the skeleton link associated with $v_i$. To better propagate mask values across the body, the sum $\sum_{ij}$ includes a two-ring neighborhood of each vertex. Based on experiments we set σ3 = 0.5. We solve the resulting linear system using Cholmod [26].
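This propagation is a constrained weighted least-squares (graph Laplacian) solve. The sketch below is our own Python/SciPy illustration, not the thesis's code: the hypothetical inputs `pairs` and `weights` hold the two-ring vertex pairs and their $w_{ij}$, and we use SciPy's default sparse solver where the thesis uses Cholmod:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def propagate_mask(n_verts, pairs, weights, fixed_idx, fixed_vals):
    """Spread the silhouette mask values M_i over the mesh by minimizing
    sum_ij w_ij (m_i - m_j)^2  subject to  m_i = M_i on the silhouettes.

    pairs   : (k, 2) integer array of vertex pairs (two-ring neighborhoods)
    weights : (k,) array of the corresponding weights w_ij
    """
    # Assemble the weighted graph Laplacian L of the pair set:
    # off-diagonals get -w_ij, diagonals accumulate +w_ij.
    i, j = pairs[:, 0], pairs[:, 1]
    rows = np.concatenate([i, j, i, j])
    cols = np.concatenate([j, i, i, j])
    vals = np.concatenate([-weights, -weights, weights, weights])
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n_verts, n_verts))

    free = np.setdiff1d(np.arange(n_verts), fixed_idx)
    m = np.zeros(n_verts)
    m[fixed_idx] = fixed_vals

    # Eliminate the constrained unknowns and solve the reduced SPD system
    # L_ff m_f = -L_fc m_c for the free mask values.
    rhs = -L[free][:, fixed_idx] @ m[fixed_idx]
    m[free] = spla.spsolve(L[free][:, free].tocsc(), rhs)
    return m
```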

Solving for Normals

We expect the general garment shape in loose regions to be determined by the normals along the silhouettes and borders. To this effect we propagate both types of normals across the body. To incorporate the observations about the asymmetric impact of the border and silhouette normals on the target surface, we separate the body-aligned and cross-body components in the computation. To initialize the computation we fix the normals at silhouette vertices to their image-plane values. If the border vertex depth information is deemed reliable by the initialization step, at each border vertex we compute and fix the cross-body normal components. We expect the body-aligned normal component to be most similar as we circle the body, or go across it, and the cross-body components to be more similar in the body-aligned direction. To this end we assign appropriate smoothing weights when propagating the two. In addition we introduce an attenuation term on the vertical component of the normal to account for gravity. The combined minimized functional is

$$\min \sum_i m_i\,(n_i - n_i^0)^2 + (1 - m_i) \sum_{ij} \big[ w_{ij}\, \| n_i^a - n_j^a \|^2 + w^0_{ij}\, \| n_i^p - n_j^p \|^2 \big] + \alpha\, (n_i^y)^2,$$

where $n_i^a$ is the body-aligned component of the normal, $n_i^p$ is the cross-body component, and $n_i^0$ are the initial normals computed on the wrapper surface. The weights $w_{ij}$ are the same as for the mask computation, propagating the body-aligned component across the body. The weights $w^0_{ij}$ are defined to identically spread the cross-body components along the body. As in the mask computation, the sum $\sum_{ij}$ includes a two-ring neighborhood of each vertex. We set α = 0.05.

We solve this non-linear problem using Gauss-Seidel iterations. Since the solution involves normals, we need to renormalize them at each iteration. However, standard renormalization produces unintuitive results in our setting. Consider two normals at the sides of a cone, propagated along a circular arc. We would naturally expect the normal component aligned with the cone axis to remain the same; however, naïve normal averaging combined with normalization will in fact change it. In our setting, when averaging normals on opposing silhouettes we have a similar expectation: the body-aligned normal component should be the average of the corresponding components at the silhouettes. We therefore explicitly separate the computation for the body-aligned and cross-body components, renormalizing only the cross-body components as necessary to preserve the overall unit length.

Folds: After we compute the basic normals across the surface, we incorporate folds into the setting as lines along which the normal is rotated away from the view direction, perpendicular to the fold axis. Starting at the bottom of each fold, we have the option of using the normal at the bottom point (recall that by construction the bottom point of each fold lies on a border) or of turning the normal further away from the view direction. We choose to deepen a fold if the current angle with the view direction is less than sixty degrees, rotating the bottom normal around the fold axis. We propagate the normals upward in a smooth manner, where the normal at the top of each fold is left unchanged and the normals in between change smoothly. We then repeat the normal propagation process keeping the fold normals fixed.
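Both of these per-normal operations are compact enough to sketch in code. The following is our own Python/NumPy reconstruction, not code from the thesis; `a_dir`, `fold_axis`, and `view_dir` are assumed to be unit vectors, and the rotation sense in the second function must be chosen per fold so that the normal turns away from the viewer:

```python
import numpy as np

def renormalize(n, a_dir):
    """Renormalize a propagated normal n while preserving its component
    along the body-aligned direction a_dir; only the cross-body part is
    rescaled so the result has unit length again."""
    a = np.clip(np.dot(n, a_dir), -1.0, 1.0)   # body-aligned component (kept)
    p = n - a * a_dir                          # cross-body remainder
    p_norm = np.linalg.norm(p)
    if p_norm < 1e-12:
        return n / np.linalg.norm(n)           # degenerate: plain normalization
    target = np.sqrt(max(1.0 - a * a, 0.0))    # length the cross-body part needs
    return a * a_dir + (target / p_norm) * p

def deepen_fold_normal(n_bottom, fold_axis, view_dir, min_angle_deg=60.0):
    """If the fold's bottom normal makes less than min_angle_deg with the
    view direction, rotate it about the fold axis (Rodrigues' formula)
    until the angle reaches min_angle_deg."""
    cos_a = np.clip(np.dot(n_bottom, view_dir), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_a))
    if angle >= min_angle_deg:
        return n_bottom                        # already turned far enough away
    theta = np.radians(min_angle_deg - angle)  # extra rotation needed
    k = fold_axis
    return (n_bottom * np.cos(theta)
            + np.cross(k, n_bottom) * np.sin(theta)
            + k * np.dot(k, n_bottom) * (1.0 - np.cos(theta)))
```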


Solving for Positions

Given the vertex normals, we search for the vertex positions that satisfy them. To optimize for normal preservation we use a quadratic term proposed in [19]:

$$P(v) = \sum_i \sum_{(i,j,k) \in T} \big( n_i \cdot (v_j - v_k) \big)^2. \tag{3.2}$$

In general, computing vertex positions from normals is an ill-posed problem with multiple solutions. To stabilize the system, in addition to preserving vertex positions in tight regions and along silhouettes and known borders, we add a surface smoothness term, minimizing:

$$\min\; P(v) + \sum_i m_i\,(v_i - v_i^0)^2 + \mu \sum_{i \in S} \big[ (v_i^x - c_i^x)^2 + (v_i^y - c_i^y)^2 \big] + \beta \sum_{i \in B} \| v_i - c_i \|^2 + \sum_i \Big( v_i - \frac{1}{|(i,j)|} \sum_{(i,j)} v_j \Big)^2.$$

We set µ = 5, letting silhouettes and boundaries move slightly to obtain a smooth garment, and use β = 10 in all our examples. To find the minimizer we first solve the corresponding linear system using Cholmod [26] and then resolve any collisions with the body using Gauss-Seidel iterations. Solving this system provides the desired three-dimensional model of the drawn outfit. As a last modeling touch, we embed part boundaries into the garment surface as narrow grooves by moving their vertices a small distance in the direction opposite to their surface normal.
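For illustration, the sketch below (our reconstruction in Python/SciPy, not the thesis's code) assembles the normal-preservation term of Equation 3.2 as sparse least-squares rows, so that $P(v) = \|Ax\|^2$ for the vector $x$ stacking all vertex coordinates; the full functional would add analogous quadratic rows for the mask, silhouette, border, and smoothness terms:

```python
import numpy as np
import scipy.sparse as sp

def normal_term_rows(normals, tris):
    """Build a sparse A with P(v) = ||A x||^2, where x stacks the vertex
    positions as (v0x, v0y, v0z, v1x, ...).  Each triangle (i, j, k)
    contributes one row n_i . (v_j - v_k) for vertex i, and similarly for
    the cyclic permutations (j, k, i) and (k, i, j)."""
    rows, cols, vals = [], [], []
    r = 0
    for (i, j, k) in tris:
        for (a, b, c) in ((i, j, k), (j, k, i), (k, i, j)):
            n = normals[a]
            for axis in range(3):  # one row couples all coordinates of v_b, v_c
                rows += [r, r]
                cols += [3 * b + axis, 3 * c + axis]
                vals += [n[axis], -n[axis]]
            r += 1
    return sp.csr_matrix((vals, (rows, cols)),
                         shape=(r, 3 * normals.shape[0]))
```

Minimizing $\|Ax\|^2$ together with the other quadratic terms then reduces to a single sparse normal-equations solve (the thesis factors it with Cholmod), followed by the Gauss-Seidel pass that resolves any remaining collisions with the body.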


Chapter 4

Results

We have tested our method on a variety of inputs, including several garments sketched on top of a mannequin and others drawn freehand and then processed by our drawing analysis code. We use the same mannequin in different poses and proportions, showcasing our method's ability to automatically adjust to differently drawn figures. We focus on women's garments, as they tend to exhibit a much wider variety of shapes than men's. We contracted an artist, Daichi Ito, from Adobe Systems Incorporated to provide us with clean, front-facing garment drawings (Figures 4.1, 4.2, 4.3, 4.6 (head/hands)). Ito is a professional artist, but not a professional fashion illustrator, and each drawing took roughly half an hour. Using previously drawn, professional fashion illustrations requires that lines can be successfully extracted from them. We chose two professional illustrations (Figures 4.4 and 4.5) that could be vectorized by Adobe Illustrator, so that the resulting Bézier curves could be used as input to our drawing analysis algorithm. The original scanned images for those two examples are omitted because of copyright concerns. Our remaining examples (Figures 4.7, 4.8, 4.9) were drawn directly on a mannequin using the interface of Turquin et al. [27].

It is difficult to devise an objective measure of evaluation for virtual garments. While we believe the comparison with previous results speaks for itself, we are planning a user study to measure the plausibility of our results given the input images. One survey method would ask the participants to draw the profile silhouette of a garment given a front view, and compare their sketches to the profile silhouette of our results.

We have demonstrated the ability of our method to create a variety of garment shapes.

The characteristic lines of the princess dress (Figure 4.2) are relatively simple, yet they very clearly illustrate the different behavior of our virtual garments in tight and loose regions. The lower part of the dress puffs outward, retaining a consistent silhouette from the front and side views. The bodice, conversely, is very tight against the mannequin in the front view, and so the garment follows the mannequin shape in the side view. The multi-layered suit (Figure 4.1), shown at multiple stages earlier in this document, demonstrates support for layered garments and asymmetric hems. Models of shirts and pants are shown in the example used to demonstrate the skeleton fitting stage (Figures 3.1 and 4.4), as well as in the loose nightgown (Figure 4.6).

In most of these examples the drawing analysis was done fully automatically. There were two instances where user intervention was needed for the drawings shown in this document. First, in the ballgown drawing (Figure 4.2), where the perspective angle is very large, the boundary between silhouette (red) and border (green) lines at the bottom of the dress becomes very fuzzy. Thus, user intervention was necessary to identify the exact transition points. The second instance was in the pajama drawing (Figure 4.6). This garment is very loose and the arms are raised fairly high, leading the automatic skeleton fitting algorithm to place the arms inside the shirt and converge to an incorrect local minimum. In this case, the user simply had to initialize the skeleton's arms near those of the drawing, and the skeleton fitting algorithm was then able to converge to a successful fit. Other ambiguous inputs can be resolved in a similar manner.

The majority of the coefficients used in the optimizations performed by our system are fixed for all examples, and we expect the specified default values to be applicable to all models. However, there are two parameters that users may want to control: the developability γ and the estimated distance d from the viewer to the drawn figure (Section 3.3.2). The first is strongly linked to the fabric and cut of the modeled garment, which can vary from model to model. For instance, for the princess dress (Figure 4.2) we turned the developability off, as we wanted the bodice to be very fitted. Figure 3.5 shows the impact of different values of γ. The distance value d effectively controls the depth of the garments at the hemlines.

Our default value works well for most models, but users may choose to modify it to obtain more or less puffy results, or to turn it off altogether if the drawing has no visible perspective.

When comparing our results to those of previous methods [23, 27, 28], the differences are most prominent on loose garments. On these inputs the garments created by our method appear noticeably more realistic and better reflect the input drawings. This is especially noticeable on the doll dress example in Figure 4.8. Notice how the garment from Turquin et al. [27] follows the mannequin's body shape in front and behind, despite the fact that it is an extremely loose garment. Our method, on the other hand, maintains the same silhouette shape in the side view as in the front view. This is what we would expect from a loose garment, because the given front-view silhouette shape is telling of the stiffness of the garment as well as of its expected behavior as it wraps around to the front and back. On tight garments the difference is more subtle; however, due to our use of mean and Gaussian curvature minimizing terms in the wrapper modeling, our garments tend to be less "sticky" and thus more realistic in concave regions. Turquin et al.'s subsequent work in 2007 [28] added new features for folds and wavy hemlines (as referenced earlier in Figure 2.6), but the underlying garment modeling procedure, specifically the use of an offset surface, remains the same and would thus create results similar to [27] given similar inputs.

Figure 4.9 compares our results to those of Rose et al. [23]. The result from Rose et al. is shown with and without a physical simulation post-process. Their system uses the predefined seam lines and darts (lines of non-zero curvature) to divide the garment surface into developable pieces which are then modeled and stitched together. The developable property of the resulting surface allows their results to be manufactured as real-world garments. Despite this attractive property, the unsimulated result appears unnaturally stiff in the side view. In contrast, in the side view of our result the garment follows the mannequin shape in the tight region and retains the garment's front-view silhouette in the loose region. The result from Rose et al. with simulation has a natural cloth look, but the garment no longer matches the given characteristic lines in the front view, making the modeling system less intuitive.

Runtimes: The overall processing of the examples shown here takes on the order of two to three minutes on a 2.13 GHz Intel Core 2 CPU, with most of the time spent in the skeleton fitting code. The modeling stage by itself takes around 20 seconds for meshes with about 50K triangles. Thus, users who run our system in an interactive setting, where the fitting step is not necessary, can expect near-interactive response times.



Figure 4.1: A multi-layered outfit. The input sketch (top left). The extracted characteristic lines (top right). The resulting 3D garment (bottom).



Figure 4.2: A princess dress. The input sketch (top left). The extracted characteristic lines (top right). The resulting 3D garment (bottom).



Figure 4.3: A basic schoolgirl skirt. The input sketch (top left). The extracted characteristic lines (top right). The resulting 3D garment (bottom).


Figure 4.4: A tight shirt and pants outfit fashion illustration. The original image is omitted because of copyright concerns. (top left) The segmented image. (top right) The extracted characteristic lines. (bottom) The resulting 3D garment.


Figure 4.5: A tight skirt with folds fashion illustration. The original image is omitted because of copyright concerns. (top left) The segmented image. (top right) The extracted characteristic lines. (bottom) The resulting 3D garment.


Figure 4.6: Loose pajamas with folds. (top left) The input sketch. (top right) The extracted characteristic lines. (bottom) The resulting 3D garment.



Figure 4.7: A tight Chinese dress: comparison of Turquin et al. 2004 [27] (center) with our result (bottom). Input lines and mannequin shown at the top.


Figure 4.8: A very loose doll dress: comparison of Turquin et al. 2004 [27] (center) with our result (bottom). Input lines and mannequin shown at the top.


Figure 4.9: A one-strap dress comparison with Rose et al. 2007 [23]. Shown are the input, Rose et al. without simulation (a), Rose et al. with simulation post-process (b), and our result (c). The black seam lines are ignored by our method.


Chapter 5

Discussion and Future Work

The primary contribution of this work is the introduction of a new garment modeling technique. It produces virtual garments that provide believable interpretations of the sketched inputs. The garments are visually consistent as they wrap around the mannequin in the front and back, as we would expect from real-world garments. To demonstrate this novel modeling technique we have built a garment modeling system which is the first to use hand-drawn fashion illustrations as input. We accomplish this with a novel drawing analysis algorithm that segments the drawing into key body and garment regions and estimates the figure's pose with a mannequin fitting procedure. This allows novice users to operate the garment modeling system without needing to learn the specifics of tailoring or of 3D modeling software. Additionally, it allows the virtual garment to be dressed on a mannequin with an arbitrary pose inferred from the drawing, unlike previous online sketch-based modeling techniques that require a predefined mannequin.

We have leveraged a number of observations about how garments behave and how people draw fashion illustrations to communicate garment shape in order to make this sketch-based modeling problem tractable. While these observations hold in most cases, exceptions can occur. For instance, some fashion illustrations may have the model posed at odd angles for stylistic reasons, or may have the model's arms or legs occluded. To support these types of inputs, the system could be extended to allow the user to adjust the skeleton in 3D during the drawing analysis phase. Later steps would only have to make minor adjustments to accommodate a non-planar mannequin pose.

Another extension could allow the modeling of garments that are not symmetric in the front and back.

Chapter 5. Discussion and Future Work back-view drawing to supplement the original front-view, or the user could manually edit the extracted characteristic lines for the back. Beyond modeling just the garment shape, one could extend our system to texture the surface from the original drawing. The front should be more or less trivial, just being a texture map of the drawing itself, but the backside would provide a few challenges. One would have to detect what features present on the front are most likely to propagate to the back. This is similar to research done in inpainting, where one must generate image details in one region given a surrounding image context. We speculate features like pockets would likely not propagate to the back, but seam lines or the shading of folds at the bottom of a skirt would be a reasonable features to extend to the back. In future years, with more advance image analysis techniques, it should be possible to extract sufficient information from a photograph of a fashion model or mannequin to generate a virtual garment. Perhaps given a video sequence of a model doing the iconic runway walk, enough views of the garment could be tracked and captured to recreate a full virtual garment along with detailed animation properties. Furthermore, additional principals of tailoring and human body shape and pose could be leveraged to create even more sophisticated garment modeling techniques from fashion drawings or sketch-based software.


Bibliography

[1] Anne Allen and Julian Seaman. Fashion Drawing: The Basic Principles. Batsford, 2000.

[2] Alexandru O. Balan, Leonid Sigal, Michael J. Black, James E. Davis, and Horst W. Haussecker. Detailed human shape and pose from images. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, 2007.

[3] David Bourguignon, Marie-Paule Cani, and George Drettakis. Drawing for illustration and annotation in 3D. Computer Graphics Forum, 20(3):114–122, 2001.

[4] Philippe Decaudin, Dan Julius, Jamie Wither, Laurence Boissieux, Alla Sheffer, and Marie-Paule Cani. Virtual garments: A fully geometric approach for clothing design. Computer Graphics Forum (Proc. Eurographics '06), 25(3):625–634, 2006.

[5] Hongbo Fu, Yichen Wei, Chiew-Lan Tai, and Long Quan. Sketching hairstyles. In SBIM '07: Proceedings of the 4th Eurographics Workshop on Sketch-Based Interfaces and Modeling, pages 31–36, New York, NY, USA, 2007. ACM.

[6] Google. Google SketchUp. http://sketchup.google.com/.

[7] Haute Couture 3D. http://www.gcldistribution.com/en/haute_couture_3d.html.

[8] Donald H. House and David E. Breen, editors. Cloth Modeling and Animation. A. K. Peters, Ltd., Natick, MA, USA, 2000.


[9] Takeo Igarashi and John F. Hughes. Clothing manipulation. In Proc. UIST '02, pages 91–100, 2002.

[10] Takeo Igarashi, Satoshi Matsuoka, and Hidehiko Tanaka. Teddy: A sketching interface for 3D freeform design. In Proc. SIGGRAPH '99, pages 409–416, 1999.

[11] P. J. Ireland. Fashion Design Drawing and Presentation. Batsford, 1989.

[12] O. A. Karpenko and J. Hughes. SmoothSketch: 3D free-form shapes from complex sketches. ACM Transactions on Graphics, 25(3):589–598, 2006.

[13] Yaron Lipman, Olga Sorkine, David Levin, and Daniel Cohen-Or. Linear rotation-invariant coordinates for meshes. In Proc. SIGGRAPH '05, pages 479–487, 2005.

[14] S. Mauch and D. Breen. A fast algorithm for computing the closest point and distance function. Technical report, 2000.

[15] MayaCloth. http://caad.arch.ethz.ch/info/maya/manual/MayaCloth (accessed 2009).

[16] Yuki Mori and Takeo Igarashi. Plushie: An interactive design system for plush toys. In Proc. SIGGRAPH '07, pages 45–54, New York, NY, USA, 2007.

[17] Pascal Müller, Peter Wonka, Simon Haegler, Andreas Ulmer, and Luc Van Gool. Procedural modeling of buildings. ACM Trans. Graph., 25(3):614–623, 2006.

[18] Andrew Nealen, Takeo Igarashi, Olga Sorkine, and Marc Alexa. FiberMesh: Designing freeform surfaces with 3D curves. ACM Trans. Graph., 26(3):41, 2007.

[19] Diego Nehab, Szymon Rusinkiewicz, James Davis, and Ravi Ramamoorthi. Efficiently combining positions and normals for precise 3D geometry. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2005), 24(3), 2005.


[20] Luke Olsen, Faramarz F. Samavati, Mario Costa Sousa, and Joaquim A. Jorge. Sketch-based modeling: A survey. Computers and Graphics, 33(1):85–103, 2009.

[21] Sylvain Paris, Will Chang, Oleg I. Kozhushnyan, Wojciech Jarosz, Wojciech Matusik, Matthias Zwicker, and Frédo Durand. Hair photobooth: Geometric and photometric acquisition of real hairstyles. In Proc. SIGGRAPH '08, pages 1–9. ACM, 2008.

[22] Long Quan, Ping Tan, Gang Zeng, Lu Yuan, Jingdong Wang, and Sing Bing Kang. Image-based plant modeling. ACM Trans. Graph., 25(3):599–604, 2006.

[23] Kenneth Rose, Alla Sheffer, Jamie Wither, Marie-Paule Cani, and Boris Thibert. Developable surfaces from arbitrary sketched boundaries. In Proc. Eurographics Symposium on Geometry Processing, 2007.

[24] Sudipta N. Sinha, Drew Steedly, Richard Szeliski, Maneesh Agrawala, and Marc Pollefeys. Interactive 3D architectural modeling from unordered photo collections. ACM Trans. Graph., 27(5):1–10, 2008.

[25] Ivan E. Sutherland. Sketchpad: A man-machine graphical communication system. In DAC '64: Proceedings of the SHARE Design Automation Workshop, pages 6.329–6.346, New York, NY, USA, 1964. ACM.

[26] Sivan Toledo, Doron Chen, and Vladimir Rotkin. TAUCS: A library of sparse linear solvers, 2003. http://www.tau.ac.il/~stoledo/taucs/.

[27] Emmanuel Turquin, Marie-Paule Cani, and John F. Hughes. Sketching garments for virtual characters. In Proc. Eurographics Workshop on Sketch-Based Interfaces and Modeling, pages 175–182, 2004.

[28] Emmanuel Turquin, Jamie Wither, Laurence Boissieux, Marie-Paule Cani, and John F. Hughes. A sketch-based interface for clothing virtual characters. IEEE Comput. Graph. Appl., 27(1):72–81, 2007.

[29] Daniel Vlasic, Ilya Baran, Wojciech Matusik, and Jovan Popović. Articulated mesh animation from multi-view silhouettes. ACM Trans. Graph., 27(3):97–106, 2008.

[30] Charlie Wang, Yu Wang, and Matthew Yuen. Feature based 3D garment design through 2D sketches. Computer-Aided Design, 35:659–672, 2003.

[31] Chen Yang, Dana Sharon, and Michiel van de Panne. Sketch-based modeling of parameterized objects. In SBIM '05: Proceedings of the 2nd Eurographics Workshop on Sketch-Based Interfaces and Modeling, Dublin, 2005.

