Dynamic Texturing of Real Objects in an Augmented Reality System

Please see the color plate on page 330.

Krešimir Matković

Thomas Psik  Ina Wagner

Denis Gračanin

VRVis Research Center Vienna, Austria

Institute of Design and Assessment of Technology Vienna University of Technology Vienna, Austria

Department of Computer Science Virginia Tech Blacksburg, VA 24061, USA

ABSTRACT

The ability to physically change properties of real objects used in augmented reality (AR) applications is limited. Geometrical properties (shape, size) and appearance (color, texture) of a real object remain unchanged during a single application run. However, an AR system can be used to provide a virtual texture for the real object. The texture can be changed dynamically based on user interactions. The developed AR system includes two components, the "3D Table" and the "Texture Painter." The 3D Table is a table where real objects are placed. The tabletop is used as a projection surface, making it possible to add a context to the real object. The Texture Painter makes it possible to paint on the real object, using a real brush and virtual ink (texture). ARToolkit markers are placed on the 3D Table tabletop to augment the environment with the virtual objects. Markers are either physical (printouts on the tabletop) or virtual (projections). The scene is recorded with a camera and the composed video is projected in real time. The projection shows a virtual environment, real objects painted with virtual ink, and virtual objects positioned where real or virtual ARToolkit markers are placed. The developed system is used in architectural design applications where, due to the different qualities of real architectural models and rendered architectural models, real models are still used. The system was tested at the Academy of Fine Arts in Vienna, where it is used as a support tool for architecture students.

CR Categories: H.5.1 [Information Interfaces and Presentation (e.g., HCI)]: Multimedia Information Systems—Artificial, augmented, and virtual realities; I.3.4 [Computer Graphics]: Graphics Utilities—Virtual device interfaces

Keywords: augmented reality, user interface, texture

1 INTRODUCTION

Virtual Reality (VR) [12] allows a user to immerse in and experience a completely synthetic virtual environment. However, the created virtual environment is still "simpler" than the real world. Unlike VR, Augmented Reality (AR) enables the user to interact with virtual objects and real objects in a real environment in real time [6]. The user can experience enhanced reality by adding virtual objects and by superimposing computer-generated information such as text or graphics onto real objects. A typical AR system combines several different components and technologies into a single system. That includes, for example, display technologies enabling the combination of real and virtual objects into a single view and a tracking system allowing real-time interaction, modeling, and calibration.


IEEE Virtual Reality 2005 March 12-16, Bonn, Germany 0-7803-8929-8/05/$20 ©2005 IEEE

One of the main problems with using real objects in an AR system is the time and effort needed to change their properties. For example, one cannot simply change the shape or size of a real object. However, some other properties like color and texture can be changed "virtually" so that all changes made in the virtual world are automatically transferred to the real world, and vice versa. Dynamic coloring and texturing of real objects in AR systems open some interesting possibilities for interactive design and modeling, especially for applications where the size and shape of objects are relatively stable, i.e., they do not change during a single application run. Architectural modeling and design is a typical example of such applications [4, 10]. Architects may prefer to keep real objects and real models instead of depending only on computers during the design process. They continuously transform the environment during the design process to experiment and create many unusual and unconventional combinations. An AR system can provide a framework where such experiments can be performed.

Two main problems need to be addressed. First, a well-defined physical environment must be provided for placing and manipulating real objects so that they can be integrated in an AR environment. Second, an "augmented brush" must be provided for interactive, dynamic "painting" of the real objects using colors, textures, or animations/video clips. Two components, the 3D Table and the Texture Painter, were created in order to address those problems.

The first component, the 3D Table, is a table where images can be projected onto the tabletop [8]. Real objects are placed on the tabletop and then "painted." An environment (e.g., a landscape) or a map is projected on the 3D Table to create a context for the placed real object. This context is easily changed to help the users evaluate different views and designs. For example, an igloo can be placed in a desert or in a medieval city center. Although such combinations seem "out of place," they support creativity and inspiration.

The second component, the Texture Painter, makes it possible to paint on a real object using a real brush and virtual ink. The painting interface is natural and straightforward, and the resulting models are significantly enriched by the textures. Virtual objects are added using ARToolkit markers in specific locations on the 3D Table. In this way the created scene combines a virtual environment, real objects "painted" with virtual ink, and virtual objects. The environment can be changed rapidly and easily.

The system was developed and tested for architectural design applications. Architectural models (real objects) used to be very important in architects' practice. They were extensively used in the design and presentation stages of a project. Real models are gradually being replaced with computer-generated and rendered models, but they are still indispensable in some situations. This makes architectural design an excellent application domain for the developed system. The system was developed together with architects and architecture students from the Academy of Fine Arts in Vienna.

The remainder of the paper is organized as follows. Section 2 describes the design of the proposed system, including the 3D Table, the Texture Painter, and the markers. Section 3 discusses some user


interface issues while Section 4 describes a case study and initial results. Section 5 concludes the paper and provides directions for future work.

2 SYSTEM DESIGN

The two main components of the developed system are the 3D Table and the Texture Painter. The Texture Painter can be used standalone or in combination with the 3D Table (Figure 1).

Figure 1: System design

2.1 3D Table

The 3D Table is the central component of the system. It is a table with a semi-transparent glass top. The tabletop serves as a back-projection surface. The main idea is to use it as a modeling table. The real objects are painted using the Texture Painter, and the corresponding environment is created by projecting images on the tabletop. Any image can be used, for example city maps, various landscapes, or abstract images. Animations and video clips are used to show the models in a dynamic context. The system is implemented so that the model can be illuminated only from one side, the side visible to a user.

The table is built using metal profiles and can be easily disassembled. A mirror inside the table makes it possible to use a beamer (video projector) placed next to the table to project images on the tabletop. USB connectors and radio frequency ID (RFID) tag readers are built into the table frame. The USB connectors are used to connect cameras, and the RFID tag readers support texture selection. ARToolKit [5] physical markers can be placed on the tabletop.

2.2 Texture Painter

The Texture Painter is an application used to paint virtual ink on real objects using a real brush. A user holds a real brush in the hand and "paints" real objects with it. Instead of real paint, textures (static images or videos) are applied. The installation includes a projector, a brush, and an object that will be painted. The brush is a slightly modified conventional brush. A retro-reflective marker placed on the brush makes it possible to track it precisely. A simple camera was used to track the brush in the first implementation. The current implementation uses the DynaSight™ sensor [1] from Origin Instruments, which provides very stable and smooth tracking. Since point-and-click is needed for painting, just as in most interfaces based on the common WIMP paradigm, an additional button is needed. During initial experiments a wireless mouse was used as a click device (only the buttons were used). The system worked well, but it was inconvenient to hold the mouse in the hand all the time. The next step was adding a button and a radio transmitter to the brush. A radio receiver is needed as well.


In the current configuration the real brush is equipped with retro-reflective markers used for optical tracking and a built-in wireless button. Figure 2 shows an example of using the texture brush.

Before a design process can start, a projector and the real object (architectural model) are positioned. The brush tracking device is placed on top of the projector. Once the equipment and the model are positioned, the system needs to be calibrated. The calibration is done by clicking on the four corners of the projection plane. A cross marker is projected in each corner, and the user places the brush at the cross marker and presses the button. All four corners have to be clicked on an (imaginary) projection plane perpendicular to the projecting direction. If the projector is placed at a proper distance from the object (about 1.5 meters), and if the object does not extend too much into depth, the calibration is quite simple. After a successful calibration, the system can be used.

Note that the real object is not tracked, i.e., if the user moves the real object, the applied textures will not follow it and therefore will not be properly placed on the real object. The perspective distortions occurring at real object surfaces that are not perpendicular to the projection axis are not taken into account. It would be possible to track the real object and use object geometry information [2, 8], but the emphasis is on the system's simplicity so it can be easily used by non-expert users.

Figure 2 also shows a system toolbar projected on the table side. The toolbar displays available textures and tools. After the user has selected a texture (using the brush and the button), the object is painted at the position of the brush when the button is pressed. Additional functionality such as polygon draw, polygon fill, and other well-known functionality from basic painting programs is implemented as well. This speeds up the painting process when large planes need to be covered with the same texture. The user points at the desired texture and then clicks using the point-and-click device.

In addition, RFID tags and tag readers are used by the students to store and retrieve images during a project. Each multimedia file (image or video) is stored in a database by the students and gets a unique ID. This ID can be assigned to RFID tags. The RFID tag is glued to a card representing the image, usually a thumbnail of the image or a screen shot from the video. A tag reader is placed near the real object and used to select textures. A texture can now be selected simply by placing a tagged card on the tag reader.
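The four-corner calibration can be read as estimating a planar homography between the brush tracker's coordinate frame and the projector image: the four clicked brush positions are matched to the four projected cross markers, and the resulting 3x3 matrix maps every subsequent brush sample into projector pixels. The paper does not spell out the math, so the direct linear transform (DLT) formulation below, along with all function and variable names, is an illustrative assumption:

```python
import numpy as np

def homography_from_corners(tracker_pts, projector_pts):
    """Estimate the 3x3 homography H (projector ~ H @ tracker) from
    four point correspondences via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(tracker_pts, projector_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of A: the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def tracker_to_projector(H, x, y):
    """Map a tracked brush position into projector pixel coordinates."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

In a setup like the one described, the homography would be estimated once after the corner clicks and then applied to every tracked brush sample before painting.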

2.3 Markers

ARToolkit markers are widely used in the AR community. They are physical objects, mostly cards, with a specific pattern on them. A virtual object is assigned to each marker. The markers are visually tracked and can be easily manipulated (moved, rotated, etc.) by a user or a group of users. The depth information obtained from visual tracking is used to calculate the size of the virtual object. An isotropic camera model is used to render the objects, so if the marker is bigger, the rendered object will be larger. Objects commonly used in architectural design (trees, park benches, cars, etc.) are used.

Although the markers were initially designed to have a graspable representation, projected (or virtual) markers were also used. As far as we know, this is the first application that uses projected ARToolKit markers. Since the 3D Table is capable of back projection, the virtual markers are superimposed on the landscape image. The physical markers can be easily manipulated by users just by physically moving them. The arranged scene consists of the painted object, the background projected on the surface of the table, and the physical markers with the superimposed virtual objects.

The scene can be explored using one or more cameras. For each camera a video stream displaying the whole scene is rendered. A snapshot function is implemented to document the current state. Video export, which can be used to generate small video clips of the scene, is implemented as well.

The physical markers have the advantage that they can be manipulated in a tangible way, just by moving them on the table. When working with physical markers, a scene can only be restored by replacing all physical markers as they were before. Physical markers can be printed in different sizes to provide for scaling, but the user has to manually remove the old marker and replace it with a different-sized one.

The virtual markers rendered on the tabletop have two advantages. The first advantage is that they can be resized, which resizes the virtual object assigned to the marker. In Figure 3 the user explored the change in scale of the scene by changing both the texture on the real object and the scale of the virtual object. When scaling the virtual marker, the position of the virtual object remains the same. The positions of virtual markers are always bound to the tabletop. However, physical markers can be lifted from the tabletop to place virtual objects upon real objects. By adding a translation between the position of the marker and the virtual object, this can also be achieved using virtual markers. However, tests showed that this is too complicated, and therefore this feature was removed.

The second advantage of virtual markers is that the current marker configuration can be saved and reloaded later. That enables a "version control" where various stages in design evolution and development can be saved and reloaded at will. The version control provides an insight into the design process and helps in the educational process.
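Saving and reloading a virtual-marker configuration amounts to serializing each marker's assigned object, tabletop position, and size. The paper gives no file format, so the JSON sketch below, including all field names, is a hypothetical illustration:

```python
import json

def save_scene(path, markers):
    """Persist the current virtual-marker configuration.
    Each marker is (object_id, x, y, size) in tabletop coordinates."""
    records = [{"object": oid, "x": x, "y": y, "size": s}
               for oid, x, y, s in markers]
    with open(path, "w") as f:
        json.dump({"markers": records}, f, indent=2)

def load_scene(path):
    """Restore a saved marker configuration for re-projection
    on the tabletop."""
    with open(path) as f:
        data = json.load(f)
    return [(m["object"], m["x"], m["y"], m["size"])
            for m in data["markers"]]
```

A plain-text format like this would also make the "version control" trivial: each saved stage of the design is just another file that can be reloaded at will.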

3 USER INTERFACE

The Texture Painter could be enhanced [5, 6], but the aim is a simple, low-cost solution which can be easily set up by non-expert users anywhere. Furthermore, the object geometry is not needed. That speeds up the process and allows rapid and straightforward experiments. Interestingly, the architects did not complain about the drawbacks; they use the tool extensively. The simplicity and mobility of the setup make it popular among the students.

As there is no real object geometry stored in the system, object polygons cannot be found automatically. The user has to specify polygon vertices using the brush, and this polygon will be filled. If the user wants to scale or rotate the texture, the user selects the rotation or scale tool from the toolbar. By moving the brush closer to or further from the projector, the texture is scaled or rotated accordingly.

Texture manipulation plays an important role in the design process. Imagine, for example, a small white block painted with a brick texture. If the texture is scaled so that the block contains only three rows of bricks, it will be perceived as relatively small. If the texture is scaled down, the same block suddenly appears to be a wall. If the process goes on in either direction (making the bricks very small or huge), the block will no longer be perceived as a block made of bricks. Playing in this way with individual textures and combinations of textures is a common and important architectural practice. The ability to combine a real object and textures so they can be easily manipulated is one of the most useful features of the system. Although texture selections using a mouse and a brush are different, the underlying metaphor remains the same.

While most tangible AR interfaces require the user to wear a head-mounted display (HMD) in order to view the AR environment [3, 9], the developed system overcomes this by using two different approaches. In the first approach, the AR environment is projected on the projection screen to provide simple visual feedback. A similar approach is used in the AR Groove application [7]. However, using a separate projection screen removed from the working environment may make it difficult for the user to observe the results of interactions with the system. The second approach uses the 3D Table projector to directly project textures on the real object (tangible interface), thus completely eliminating any need for an HMD or a projection screen. There is no separation between the display and the working environment. Interactions are very intuitive, and the user can directly observe the results.

The user interface provides for direct manipulation and alleviates some of the related concerns [11]:

• The relative simplicity of the system means that the required system resources are smaller than those of comparable systems.

• Since the user holds a real brush and interacts with the system as if painting the object's surfaces, the user's actions are natural and intuitive.

• The save and load features for virtual markers enable, to some extent, history and tracing mechanisms.

• The combination and scaling of textures with a real object provide some useful design macros.

• Direct projection of textures onto the real object makes the system more accessible to users with limited eyesight (no need to use an HMD or a projection screen).
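The brush-distance-based texture scaling described above can be sketched as a mapping from tracked brush depth to a scale factor. The paper only states that moving the brush closer or further scales the texture; the reference distance, the direction of the mapping, the linear form, and the clamp limits below are all assumptions for illustration:

```python
def texture_scale_from_depth(depth_mm, ref_depth_mm=1500.0,
                             min_scale=0.25, max_scale=4.0):
    """Map the tracked brush depth to a texture scale factor.

    At the reference distance (~1.5 m, the projector distance suggested
    for calibration) the scale is 1.0; moving the brush closer shrinks
    the texture, moving it away enlarges it. The result is clamped so
    tracker noise cannot produce absurd scales."""
    scale = depth_mm / ref_depth_mm
    return max(min_scale, min(max_scale, scale))
```

The same depth reading could drive the rotation tool instead, depending on which tool is selected in the toolbar.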

4 CASE STUDY

The system was used by approximately 20 architecture students from the Academy of Fine Arts in Vienna. Since the students do not have extensive computer skills and prefer not to use traditional computer-aided design tools, overall simplicity was imperative. The whole system is very simple, and any non-expert user can start using it immediately. Students can bring architectural models, just place them on the tabletop, and start painting. There is no need to add any kind of tracker to the model, and there is no need for 3D scanning.

The whole setup, as illustrated in Figure 1, consists of the 3D Table, where a projector projects a landscape onto the tabletop. A physical architectural model is placed on the table. The user takes the real brush, which is tracked using the DynaSight™ system. The brush has a button with a wireless link. The toolbar is projected on the table side plate. After selecting a texture from the toolbar using the brush, the user starts painting. Painted textures can be scaled and rotated. The toolbar and the texture painted on the model are projected by a second projector. Additionally, the user can place physical ARToolkit markers or use projected markers. The whole scene can then be recorded with a camera (also connected to the table) and displayed as a live video stream on a large projection screen next to the 3D Table. The projected video contains the painted architectural model, the landscape, and the virtual objects in the places where real or virtual markers are. Virtual markers simulate different scales easily, and they can be used to restore physical markers' positions after save and load.

The tests showed that the reflectivity of the table surface can be a problem for marker detection. When the model is covered with bright textures, the reflection of the texture disturbs the marker recognition. This problem can be solved by positioning the camera only in a limited part of the space surrounding the table, bounded both in height and in angle relative to the projector that creates the image on the table surface. Therefore, some considerations must be taken into account when setting up the system. The best results were produced when the Texture Painter projector and the tabletop projector were positioned so that their projection directions form a 90-degree angle.


Figure 4 shows examples of different textures applied to an architectural model. The underlying model is always the same, but due to the different textures the overall impression is quite different. Doing these experiments using only a traditional model and real textures would be much more tedious.

The model used is an existing design for which the architects originally explored design issues by producing and studying sketches and collages. The design process was re-enacted using the developed system. That included accentuating the difference between the base and the attachment as well as studying the possibilities of merging different parts of the design. The design process included painting a variety of different textures onto the base and different parts of the attachments while exploring their changing relationship. It was interesting to observe how projections of different textures charged the building with different meanings [10]. Central design issues were the choice of materials (appearance) and the duality of base (the building) and attachments (the attic). Texturing the base of the model with a video texture showing waves transformed the model into a cliff with a fortress or a concrete structure on top of it. Changing the context also changes the scale, from building to cliff. The projections helped erase preconceptions of the building and supported seeing it differently. In other studies some of the issues included how to make the complex interior 3D structure of apartments, patios, and terraces visible outside. Other concerns were the materiality of the façade and the possibilities for variation through the use of textures.

While the system is very simple and easy to use, the case studies pointed to some problems. One disadvantage is that textures do not follow the architectural model when it is moved. The textures are also sometimes distorted in terms of perspective. The experiments with students show that these disadvantages are not significant and that all of the students are very fond of using the system.

5 CONCLUSIONS AND FUTURE WORK

This paper describes an AR system built to support the architectural design process. The real object, an architectural model with its own qualities, still plays an important role in this process. Computer-generated renderings simply cannot replace the architectural model in all cases. Instead of replacing the architectural model, the AR system is used to enhance it. Actually, the enhancement goes in both directions: not only are the architectural models enriched, but the computer-generated images projected on the architectural model (instead of on a plain projection screen) are much more vivid and useful. This system makes an inspiring test bed for the architects.

The architectural models can be significantly enhanced by using the Texture Painter. The ease of painting and the natural user interface make it a very popular tool among the students. This demonstrates how an interesting combination of simple tools can be used to create an innovative AR system, and to reach a new and broader group of users. Although none of the components are novel in the field, the unique combination, low cost, and demonstrated ease of use are the main contributions of the described system.

Future work will involve studying connections and relationships among different parts of the system. If a user changes the scale of a texture on the real object, the system should be able to automatically change the size of the virtual markers. Another interesting field of exploration is "saving" the physical markers: after "telling" the system that a scene should be saved, all physical markers can be removed and virtual markers will be added to the background at the positions extracted from the physical markers. Another possible avenue for future work is integration of the developed system within the CAVE™. The projection screen (display) from Figure 1 can be replaced with the CAVE walls, thus


providing an immersive yet simple-to-use system. Obviously, the marker detection problem is even more pronounced in this case. Some related work was done using the Cave corner, a low-tech immersive environment consisting of three large projection screens, which can be fixed at different angles, and several projectors [10].

6 ACKNOWLEDGMENTS

The authors would like to thank their co-researchers from the Atelier Project, in particular Andreas Rumpfhuber. This work was partially sponsored by the European Commission, IST programme, Future and Emerging Technologies, Proactive Initiative, The Disappearing Computer II, through the ATELIER project (EU IST-2001-33064). Parts of this work were carried out at the VRVis Research Center in Vienna, Austria (http://www.VRVis.at/), which is funded by the Austrian governmental research program K plus. This work was also partially sponsored by the National Institutes of Health (NIH) grant 5R03LM008140-02.

REFERENCES

[1] Origin Instruments. The DynaSight™ sensor. http://www.orin.com/3dtrack/dyst.htm [last accessed 09/05/2004].
[2] Deepak Bandyopadhyay, Ramesh Raskar, and Henry Fuchs. Dynamic shader lamps: Painting on movable objects. In Proceedings of the IEEE and ACM International Symposium on Augmented Reality (ISAR) 2001, pages 207–216, 2001.
[3] Doug A. Bowman, Ernst Kruijff, Joseph J. LaViola, Jr., and Ivan Poupyrev. 3D User Interfaces: Theory and Practice. Addison-Wesley, Boston, 2004.
[4] Giulio Iacucci and Ina Wagner. Supporting collaboration ubiquitously: An augmented learning environment for architecture students. In Proceedings of the Eighth European Conference on Computer Supported Cooperative Work (ECSCW) 2003, pages 139–159, 2003.
[5] Hirokazu Kato, Mark Billinghurst, Ivan Poupyrev, Kenji Imamoto, and Keihachiro Tachibana. Virtual object manipulation on a table-top AR environment. In Proceedings of the IEEE and ACM International Symposium on Augmented Reality (ISAR) 2000, pages 111–118, 2000.
[6] Marie-Odile Berger, Brigitte Wrobel-Dautcourt, Sylvain Petitjean, and Gilles Simon. Mixing synthetic and video images of an outdoor urban environment. Machine Vision and Applications, 11(3):145–159, 1999.
[7] Ivan Poupyrev, Rodney Berry, Jun Kurumisawa, Keiko Nakao, Mark Billinghurst, Chris Airola, Hirokazu Kato, Tomoko Yonezawa, and Lewis Baldwin. Augmented groove: Collaborative jamming in augmented reality. In SIGGRAPH 2000 Conference Abstracts and Applications, page 77. ACM Press, 2000.
[8] Ramesh Raskar, Greg Welch, and Wei-Chao Chen. Table-top spatially-augmented reality: Bringing physical models to life with projected imagery. In Proceedings of the Second IEEE and ACM International Workshop on Augmented Reality (IWAR) 1999, pages 64–71, 1999.
[9] Mary Beth Rosson and John M. Carroll. Usability Engineering: Scenario-Based Development of Human-Computer Interaction. The Morgan Kaufmann Series in Interactive Technologies. Morgan Kaufmann Publishers, San Francisco, 2002.
[10] Andreas Rumpfhuber and Ina Wagner. Sampling mixed objects as part of architectural practice. In Proceedings of Pixelreiders 2004, 2004.
[11] Ben Shneiderman. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, Reading, Massachusetts, third edition, 1998.
[12] John Vince. Virtual Reality Systems. ACM SIGGRAPH Books Series. Addison-Wesley Publishing Company, Wokingham, England, 1995.

