Intelligent Building Simulator

Bachelor of Science Thesis

Intelligent Building Simulator IMM-B.Sc-2008-05

Authors: Mark Ludwigs (s042762) Christian Flintrup (s042169)

Supervisors: Christian D. Jensen Hans-Henrik Løvengreen

April 1, 2008

Technical University of Denmark DTU Informatics Building 321, 2800 Kongens Lyngby, Denmark

Abstract

This report covers the process of building an application for simulating an intelligent home, in which you can draw a floor plan, add sensors, and simulate walking around in your own intelligent home. Furthermore, a simple policy language for handling communication between sensors and electrical equipment in the home is developed, along with a graphical editor for these policies. The emphasis is on usability and a strong framework for future development. The program is subjected to usability tests to ensure user-friendliness. The programming is done in Java, using Java Swing to create the graphical user interface. OpenGL is used for the graphical representation of the user’s home. The end result is a user-friendly product that lets you create your own home and then add different types of sensors and appliances. The home, interior, sensors, policies, and so forth can be saved to a file using XML and loaded back into the application. All parts of the program are easily extensible, and new types of sensors, interior items, and other components can easily be implemented.

Contents

Abstract
1 Introduction
  1.1 Project Goal
  1.2 About this report
2 Problem Description
3 Analysis
  3.1 Floor plan
  3.2 Which sensors and appliances to simulate
  3.3 Degree of smartness in sensors and appliances
  3.4 Degree of smartness in central “intelligence” station
  3.5 Human interaction
4 Design
  4.1 Modelling a building
  4.2 Building rules / policies
  4.3 Simulating a person
  4.4 Drawing a building
  4.5 Undo handling
  4.6 Saving and opening data
5 Usability tests
  5.1 Approach
  5.2 Target groups
  5.3 Final definitions
  5.4 Usability test
  5.5 Further usability testing
6 Implementation
  6.1 Programming Language
  6.2 Graphical User Interface with Swing
  6.3 Graphics Rendering
  6.4 Interaction Map
  6.5 Geometry classes
  6.6 Detecting a person - line-of-sight
  6.7 Saving and opening files
  6.8 Concurrency issues
7 Test
8 Documentation
  8.1 Commenting policy
  8.2 Project management
9 Results
  9.1 Accomplishments
  9.2 Other programs
  9.3 Bugs and shortcomings
  9.4 Extension options
10 Conclusion and recommendations
A Usability
  A.1 Pre-task session interview questions
  A.2 Participant tasks
  A.3 Post-task session interview questions
  A.4 GUI prototype version 1
  A.5 GUI prototype version 2
B Glossary
C Example Project XML
D User’s Guide
  D.1 How to start IBUSI
  D.2 Step 1: Draw your house
  D.3 Step 2: Place appliances in your house
  D.4 Step 3: Place sensors in your house
  D.5 The Rule Manager
  D.6 Design a scenario
  D.7 Further information

Chapter 1: Introduction

In today’s modern society, the demand for intelligent houses is slowly increasing. One of the things holding some people back is a lack of knowledge about what they are getting themselves into. To effectively smarten up a house you might need to make large changes to your home, which few people are willing to jump blindly into. Most of today’s intelligent or programmable buildings are newly built homes, where sensors and intelligence are built in from the beginning.

A way to break down one of the barriers would be to give people an idea of the possibilities for their own home before they invest in expensive equipment and installation. This could be accomplished with a program in which you could build your home, place virtual sensors in it, and then simulate walking around in it. This way you could experience how your home would function if you introduced intelligence into it.

This intelligence, however, may simply mean that the home owner can set up rules that a central intelligence part of the home will follow. If a system had a real “artificial intelligence”, the resulting product might become more convenient for the user. One could imagine a system which automatically learns that if the bedroom door is opened at 02:00 in the night, the user is probably on the way to the bathroom. The system might then turn on the light down the hallway along the path to the bathroom (of course, an electrical motor would also raise the toilet seat and activate a gentle muzak). As will be revealed later, none of the systems examined during this project had any actual artificial intelligence built in; they were simply programmable base stations managing the home. Thus, the need for a home-managing system with an artificial intelligence should be explored further.

1.1 Project Goal

In this project we will build a user-friendly program for simulating intelligent homes, with a solid framework that allows for further development. The system will be prepared for future extensions, such as an artificial intelligence that can supervise the events that happen in the house and automatically suggest enhancements. We will name this project “IBUSI”, for “Intelligent Building Simulator”, mostly because the ibusi.org domain name was available.

1.2 About this report

This report consists of four major parts. First, an analysis of the problem will be performed. Then the application design will be detailed. After this, a comprehensive usability test will be performed, and lastly, the implementation of the application will be covered. At the end of the report, appendices with a glossary, GUI mockups, and XML schemas are added.

Chapter 2: Problem Description

Currently, most residential homes have simple electrical switches for controlling electrical appliances such as lights, kitchen equipment, home entertainment devices, and so forth. By utilizing contemporary sensor technology, one could vastly improve the intuitiveness and efficiency of everyday tasks in the home. However, many people are hesitant to switch to a new technology and to spend time and money on a solution they are unsure of.

To alleviate this situation, a simulator will be designed and developed. The simulator will enable users to place sensors and other equipment in a virtual house and visualize the resulting behavior of the house. Thus, the user can try out a configuration before having to invest in any hardware. Hopefully, this will ease the transition from a traditionally controlled house to a modern intelligent house. In order to target a broad audience, the software should be easy to use and will be subject to extensive usability tests.

The system should be able to model any given type of input device (such as sensors or push buttons) and output device (such as lamps or other electrical equipment) in a plug-in fashion. Also, it should be possible to define rules/policies for any of these different types of input and output devices, even though their physical properties might differ. Along with the simulator itself, a number of actual sensors and output devices will be modeled and visualized in the simulator. It will be statistically ensured that the sensors act accurately inside the simulator, compared to their real-world counterparts. Time permitting, an artificial intelligence will be developed, enabling automatic creation of new house policies based on the behavior of the residents.

Chapter 3: Analysis

The problem description mentions that a simulator should be designed and that it should be possible to place sensors in a house. It also implies a relationship between the sensors and appliances. However, the details of the simulator are not specified. The purpose of this chapter is to further analyze the problem and detail exactly what should be included in the application.

A simulation can be performed in many ways. It could merely be a mathematical model of the events performed in the house, which is later executed as a simulation. However, in order for the application to be usable by regular computer users, there should probably be a graphical user interface for interacting with the simulation. Therefore, in this analysis it will be assumed that the simulation will consist of a graphical view of the home, along with the person interacting with the home. The actual representation will also be discussed in this chapter.

3.1 Floor plan

It is important to provide a context for the simulation, so the user can relate the events to his or her own world. Most people are used to seeing floor plans of a home when visiting real estate agencies or when furnishing their home, or have at least seen one at some point. With the application, it should be possible to make a floor plan of the house, so the user can see the program as an abstraction of the floor plans they see on paper.


At a minimum, the basic elements of the floor plan should be designed and implemented, such as:

• The walls
• The doors
• The windows

There can also be support for:

• Decorations and other ornaments
• Furniture, such as sofas
• Different color schemes
• Multiple floors, for multi-storied buildings

It should also be possible to add sensors and appliances to the floor plan so their geometrical relationship with the rest of the floor plan can be easily seen. The actual graphical representation of the floor plan can be done in any desired way, but the different types of objects should be easily distinguishable.

3.2 Which sensors and appliances to simulate

To give some kind of idea of what the simulator is capable of, some kinds of sensors and appliances must be modelled and implemented. A very “all around” sensor is the motion sensor, as it has many use cases. Alarm systems, convenience systems, power savings systems and logistics management systems can all use a motion sensor. Hence, at least motion sensors should be implemented. The motion sensor should be adaptable to many different conditions, and it should be possible to adjust the range and spread of the motion sensor to simulate the properties of many different physical sensors. It should also be possible to add at least one appliance, such as a lamp, and determine its current status (on/off) easily.

3.3 Degree of smartness in sensors and appliances

In general, sensors themselves are useless unless they are connected to other devices. As the famous bash.org quote (http://bash.org/?240849) goes:

  “what does your robot do, sam”
  “it collects data about the surrounding environment, then discards it and drives into walls”

In this section we will try to give an idea of the possibilities that exist for connecting sensors to other devices, and which ones should be covered by this project.

3.3.1 Simple sensor with integrated device

Figure 3.1: Cheap LED lamp with built-in light and motion sensors

Devices with integrated sensors, such as motion sensors, have become very cheap. For example, an LED lamp with a built-in passive infrared (PIR) motion sensor and a light sensor (as seen in figure 3.1) was found in a local supermarket (Netto, Fiolstræde, Copenhagen, February 2008) for just 20 DKK (around 4 USD). These types of devices are limited in that they will only perform the action they are built for, with no room for extensions. Also, they cannot communicate with other devices, which makes it difficult to use them as part of an intelligent system. On the other hand, they are very cheap, so it is reasonable to expect that many people will buy them. They are also very easy to set up compared to most other systems. Therefore, the system should be able to simulate the functionality of such sensors with integrated devices, though it may be done through other means than directly simulating a sensor with built-in intelligence.

3.3.2 Simple sensor with single outlet

Figure 3.2: Motion sensor with outlet

Some sensors have power outlets for direct connection of another device. Typically these can be found in 120V and 230V versions, for direct connection of a consumer appliance. One such device can be seen in figure 3.2. Like a sensor with an integrated device, it is not suitable for use as part of a larger intelligent system, because the intelligence lies in the sensor itself, with no room for communication. These types of sensors are also fairly cheap, with prices ranging from 100 DKK to 300 DKK (20-60 USD), and therefore one might expect to find them in consumer homes. Some sensors of this kind also have multiple outlets, each of which can be configured, but the idea is still the same. The system should be able to simulate these kinds of sensors, though it may be done through other means than directly simulating a sensor with built-in intelligence.

3.3.3 Centralized intelligent control

Instead of having the sensors be intelligent, another option is to ensure they do not have any intelligence at all. Many sensor-based systems (such as alarm systems and house control systems) instead have a central station which manages input from the different sensors, and activates the correct output devices. In these systems, there is usually as little intelligence in the sensors as possible. Usually there is a set number of sensors and appliances that can be controlled. One such alarm system is “PowerMax” from “Visonic”, which has a central base station and a number of controllable sensors and output devices. Although it primarily functions as an alarm system, it does have X10 outputs which means it can also control any household electrical device. As seen in figure 3.3, the sensors communicate wirelessly with the central station, and the appliances are controlled through the X10 network, and through analog phone lines. The PowerMax system can only handle very limited “rules” about how the building should act. One of the buttons on the keyfobs (remote control units) can be programmed to a given

Figure 3.3: Visonic PowerMax system

X10 output. Input events are categorized as “alarms”, of which there are a few groups. Each group of alarms can trigger one or more X10 events. This is a bit limiting, and this project’s system should be able to do more, as detailed in section 3.4.

The simulator should be able to simulate this kind of system. Every sensor should have no built-in knowledge about anything other than its own inner workings; rather, sensors should communicate with a central station that coordinates how the devices work together. There should be no imposed limit on the number of sensors and appliances that can be controlled, except for those caused by resource limits of the system on which the application runs.
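The “dumb sensor, smart station” architecture described above can be sketched as follows. This is an illustrative sketch only; all names (Appliance, Lamp, Sensor, CentralStation) are hypothetical and not IBUSI’s actual classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface Appliance {
    void setOn(boolean on);
    boolean isOn();
}

class Lamp implements Appliance {
    private boolean on;
    public void setOn(boolean on) { this.on = on; }
    public boolean isOn() { return on; }
}

// A sensor carries no knowledge of appliances; it only reports its
// trigger to the central station.
class Sensor {
    private final String id;
    private final CentralStation station;
    Sensor(String id, CentralStation station) { this.id = id; this.station = station; }
    void trigger() { station.sensorTriggered(id); }
}

// All coordination lives in the central station: a rule table mapping
// sensor ids to the appliances they switch on. There is no fixed limit
// on the number of sensors or appliances.
class CentralStation {
    private final Map<String, Appliance> appliances = new HashMap<>();
    private final List<String[]> rules = new ArrayList<>();

    void addAppliance(String name, Appliance a) { appliances.put(name, a); }
    void addRule(String sensorId, String applianceName) {
        rules.add(new String[] { sensorId, applianceName });
    }
    void sensorTriggered(String sensorId) {
        for (String[] rule : rules) {
            if (rule[0].equals(sensorId)) {
                appliances.get(rule[1]).setOn(true);
            }
        }
    }
}
```

The design choice to keep the mapping inside the station mirrors the PowerMax approach, while lifting its fixed limits on the number of devices.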


3.3.4 Distributed intelligent control

Having a central control station also means there is a single point of failure. One could design a distributed system to avoid this problem. For example, control could be handed over to another station in the event of a breakdown, or the responsibilities could be shared across many control stations in order to avoid a total service outage in case of problems. However, in a simulation application targeted toward consumers, this issue may not be as relevant. Furthermore, it adds considerably to the amount of work required. Thus, the system does not have to be able to simulate this kind of system, although it could be a possible extension.

3.4 Degree of smartness in central “intelligence” station

Although it is established that the sensors modelled by the system should not carry any global knowledge, it is still not established exactly what kinds of relationships between sensors and appliances should be allowed. There should be a way of defining a set of policies for the central station to use, along with a policy language for this purpose. There should be a graphical user interface to manipulate these policies, each minimally consisting of a set of conditions and a set of actions to be performed when the conditions are met. There are several ways of doing this. To analyze which kinds of policies will be needed, a number of potential use cases will be reviewed.

3.4.1 Use cases

Use case: Person enters a room, and the light turns on. This scenario could be created using a motion sensor pointed at the entrance of the room, and a lamp. This fairly simple scenario should be supported by the system, as it would probably be very common and could be adapted to a lot of other scenarios.

Use case: Person enters a room, and the light turns on for 20 seconds. This scenario could also be set up with a motion sensor and a lamp, but would need some kind of timing mechanism. This scenario should also be supported by the system. The timing mechanism would be useful in many other use cases.

Use case: Person enters a room, and the light stays on while the person is inside the room. This scenario also requires a motion sensor and a lamp. It could be used in many situations to save power, and is thus very common. It should be supported.

Use case: Person enters a room. Two lamps and a TV turn on. This use case will additionally require that a given policy in the system has the ability to affect several appliances. As this is judged to be a fairly common wish, this feature should be accommodated too.


Use case: Person enters a room. Two lamps and a TV turn on. When the person leaves, only the two lamps turn off. This use case will additionally require that a policy maintains both a list of actions for when the sensor is triggered and a list for when the condition is no longer true. This should also be supported by the system.

Use case: Person walks down the hall. The light follows the person down the hall. This use case requires that each policy can operate independently of the others, and that several sensors can control the same lights (as there would be overlapping zones if lamps were mounted all the way down a hallway). This requires a lot of setup on the user’s side, but should still be supported by the system.
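The policy structure implied by the use cases above (a set of conditions, one action list for when the conditions become true, and another for when they stop being true) can be sketched as follows. The class and its fields are hypothetical names for illustration, not the policy language the project actually defines.

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// A policy holds conditions plus two action lists: one run when the
// conditions become satisfied, one run when they stop being satisfied.
// This covers e.g. "two lamps and a TV turn on; on leaving, only the
// lamps turn off".
class Policy {
    private final List<BooleanSupplier> conditions;
    private final List<Runnable> onSatisfied;
    private final List<Runnable> onReleased;
    private boolean active = false;

    Policy(List<BooleanSupplier> conditions,
           List<Runnable> onSatisfied,
           List<Runnable> onReleased) {
        this.conditions = conditions;
        this.onSatisfied = onSatisfied;
        this.onReleased = onReleased;
    }

    // Called by the central station whenever a sensor reports a change;
    // actions fire only on the transition, not on every evaluation.
    void evaluate() {
        boolean satisfied = conditions.stream().allMatch(BooleanSupplier::getAsBoolean);
        if (satisfied && !active) {
            onSatisfied.forEach(Runnable::run);
        } else if (!satisfied && active) {
            onReleased.forEach(Runnable::run);
        }
        active = satisfied;
    }
}
```

Keeping the release actions separate from the trigger actions is what lets the TV stay on after the lamps turn off.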

3.4.2 Adaptations

The use cases above used only one kind of sensor and a few different appliances. However, the system must be able to support similar use cases with any suitable appliance and any suitable sensor. Of course, for this to be simulated, a virtual person should be moved around in the simulated building.

3.5 Human interaction

To give the user a real-time experience, it should be possible to steer an on-screen person around the house and see the resulting actions of the different appliances. The controls should be similar to those of video games, for example using the arrow keys. It should be possible to easily view the whole floor plan by zooming and panning the image.

3.5.1 Pre-defined scenarios

It should also be possible to create and replay user-defined scenarios, where the route of the person is defined in advance by the user. There should be a user interface for creating and editing the route.

3.5.2 Interoperability

An important part of any application is being able to use its data with other applications. There should be a way of exporting the program data to other applications, using an easily accessible format. There can also be support for importing data into the program. The program should run on the three major platforms: Mac OS X, Windows, and Linux.

3.5.3 Summary

To sum up, we will design and implement the following:

• Basic models for a building floor plan, such as walls, windows, doors, appliances, and sensors.
• A simple policy language for communication between sensors and appliances, allowing the use cases mentioned in section 3.4.
• A mode providing a real-time experience of walking around the building.
• An editor mode for defining routes for the person to follow, and the ability to play back routes.
• Interoperability options with XML import/export.

Chapter 4: Design

Figure 4.1: Layering of code (MVC) among different groups of code

The development structure is grouped in two different ways. On one level, the application is divided into three parts:

• the model collection, describing how a building with floors, walls, windows, policies, etc. works
• the graphical user interface
• a control part handling the communication between the GUI and the model layer


On another level, the application is divided into three other parts:

• the floor plan viewer and editor
• the timeline editor
• the central intelligence part, which handles house policies

In other words, the floor plan, timeline editor, and policy handling parts each have their own code, which can be divided into model-view-controller parts. This division is shown in figure 4.1.

In this chapter, each part of the system will be reviewed. The most trivial subjects will not be discussed; rather, the focus will be on the parts that in one way or another deserve special attention. First, the base models of the system will be discussed, along with certain geometry-related issues (for example, how to represent a shape). Later, the “Policy Handler” is discussed, revealing our approach to defining house rules. Lastly, various noteworthy details, such as how “Undo” events are handled, are described. We also recommend reading the “Implementation” chapter, as it functions as a follow-up to this design chapter.

4.1 Modelling a building

A real-life building has many features. In order to model those relevant to the application, a set of definitions must be established: for example, how to describe and store the geometry of the building’s walls, along with information about the electrical appliances inside the building.

4.1.1 Shapes

To make the representation of the models used in the building as easy as possible to handle, their shapes will be represented by a general shape object, which is the superclass for several different shape types. The design allows for simple shapes such as a circle or a square, but a shape can also be composed of several other shapes. This generic shape object will also contain its color, whether it is solid (i.e. filled or only represented by its outline), where it is positioned, and its rotation. How the shapes are interconnected is best shown in figure 4.2. When the shapes are needed for a given task, such as drawing to screen, the object is passed


Figure 4.2: Class diagram showing the relationship between classes in the Shape framework.

to the method that carries out the action. This method is then responsible for gathering the necessary information from the shape object and sending it along to the draw API. Having a generalized shape concept allows for a modular design and a rich toolkit. For example, it is very easy to check whether any shape overlaps another, just by using the built-in methods that have been developed.
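The composite structure of figure 4.2 can be sketched as below. This is a minimal sketch: the real classes also carry color, solidity, position, and rotation, and a single area method stands in for the full drawing and overlap-testing toolkit.

```java
import java.util.ArrayList;
import java.util.List;

// An abstract Shape is the superclass of all concrete shapes.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Rectangle extends Shape {
    final double length, width;
    Rectangle(double length, double width) { this.length = length; this.width = width; }
    double area() { return length * width; }
}

// A composite shape is itself a Shape built from other shapes, so any
// method that accepts a Shape (drawing, overlap tests, ...) also works
// on a composition of shapes.
class CompositeShape extends Shape {
    private final List<Shape> parts = new ArrayList<>();
    void add(Shape s) { parts.add(s); }
    double area() { return parts.stream().mapToDouble(Shape::area).sum(); }
}
```

Because CompositeShape extends Shape, compositions nest arbitrarily, which is what makes the design modular.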

4.1.2 Floors

The application, IBUSI, is able to handle simple multi-storied buildings. A given floor in the building is independent of the others, and all floors are modelled identically. Using the model, an application could manipulate one floor of the building without taking the other floors into account. The advantage of this is simplicity and isolation of the objects in the building across the different floors. However, it might pose problems for buildings with interleaved floors and other special layouts (because the building is not modelled as a single large 3D object). Modelling the building as one large 3D object would create many new design and development issues, so a solution with isolated floors was chosen.

4.1.3 Object events and parameters

Objects, such as furniture, appliances, etc., can have certain events and parameters. Parameters contain information about the current state of the object. Events can be performed on an object and usually change one or more parameters. They can require that certain conditions are met before they can be performed. Some events can be performed by a person, others by the central intelligence, and others by both.

For example, a window can have the events “open” and “close” for opening and closing the window. These events can be performed both by a person and by the intelligence. The window can also have the parameter “is open”, representing a current state of the window (whether or not the window is currently open). The event “open” can require that “is open” is false, and the event “close” can similarly require that “is open” is true. When the event “open” is performed, the value of “is open” will be changed to true, and likewise with the “close” event, which will naturally cause “is open” to be false.
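The window example above can be sketched as follows. The class and method names are hypothetical illustrations of the event/parameter idea, not IBUSI’s actual model classes.

```java
import java.util.HashMap;
import java.util.Map;

// An object stores its parameters in a map; an event checks its
// precondition before mutating them, exactly as in the window example.
class HouseObject {
    private final Map<String, Boolean> parameters = new HashMap<>();

    HouseObject() { parameters.put("is open", false); }

    boolean getParameter(String name) { return parameters.get(name); }

    // "open" requires "is open" == false; "close" requires it to be true.
    // Returns false when the precondition is not met.
    boolean performEvent(String event) {
        if (event.equals("open") && !parameters.get("is open")) {
            parameters.put("is open", true);
            return true;
        }
        if (event.equals("close") && parameters.get("is open")) {
            parameters.put("is open", false);
            return true;
        }
        return false;
    }
}
```

Representing both preconditions and effects in terms of parameters is what later lets attachable sensors observe state changes without knowing anything about the object itself.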

4.1.4 Sensors

Figure 4.3: Class diagram showing the relationship between different types of sensors.

The main focus of this project is on motion sensors, but this should not be a restricting factor. The three main sensor types in IBUSI are positionable sensors, attachable sensors and pseudo sensors.


None of the modelled sensors have any built-in intelligence that knows about other parts of the system. They can notify the central intelligence of the house when they are triggered, or provide parameter values when prompted. However, micro-scale intelligence relevant only to the sensor itself might be implemented. One example is the motion sensor, which has a “timeout delay” that is reset every time the sensor is triggered. The sensor can then only be triggered again after the delay has passed. This kind of “intelligence” is allowed within sensors, as it closely models real-life sensors (many motion sensors include such a delay in order to conserve power). The class relationship between the different types of sensors can be seen in figure 4.3.
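The “timeout delay” described above can be sketched as below. This is an illustrative helper, not the real IBUSI sensor class; it assumes only a successful trigger restarts the delay, and the current time is injected to keep the logic testable without real clocks.

```java
// After a trigger fires, further triggers within the timeout delay are
// swallowed, mimicking the power-saving behavior of physical PIR sensors.
class MotionSensorTimer {
    private final long timeoutMillis;
    private Long lastTrigger = null;   // null until the first trigger

    MotionSensorTimer(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Returns true if the trigger fires; false if it is suppressed
    // because the timeout delay has not yet elapsed.
    boolean tryTrigger(long now) {
        if (lastTrigger != null && now - lastTrigger < timeoutMillis) {
            return false;
        }
        lastTrigger = now;
        return true;
    }
}
```

Keeping this logic inside the sensor is the sanctioned kind of “micro-scale intelligence”: it affects only the sensor’s own behavior, never other devices.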

4.1.4.1 Positionable sensors

Positionable sensors include motion, heat and light sensors. They are characterized by having a 2D location on the floor plan. The motion sensor is the most interesting one, because it has the most use cases. The generic motion sensor will have settings such as a range, an angle and a direction of detection. This way the user can model an actual sensor that is in his or her possession. Another motion sensor that extends the generic motion sensor can then have predefined settings that cannot be changed. For instance, if the user downloads a model of a specific garage door sensor that some company sells, it can have a fixed range and angle of detection, but the user is still able to specify which direction it should point. The motion sensor’s area of detection is represented by a shape object, namely a CircleSlice.
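The range/angle/direction detection area above amounts to a point-in-circle-slice test, which can be sketched as follows. The class is a hypothetical standalone helper; the real CircleSlice shape carries more state (position, color, rotation, etc.).

```java
// A point is detected if it lies within the sensor's range and within
// half the spread angle on either side of the direction the sensor points.
class DetectionArea {
    final double x, y;        // sensor position
    final double range;       // detection radius
    final double direction;   // radians, where the sensor points
    final double spread;      // radians, total opening angle of the slice

    DetectionArea(double x, double y, double range, double direction, double spread) {
        this.x = x; this.y = y; this.range = range;
        this.direction = direction; this.spread = spread;
    }

    boolean detects(double px, double py) {
        double dx = px - x, dy = py - y;
        if (Math.hypot(dx, dy) > range) return false;      // out of range
        double angle = Math.atan2(dy, dx);
        // normalize the angular difference into (-pi, pi]
        double diff = Math.atan2(Math.sin(angle - direction),
                                 Math.cos(angle - direction));
        return Math.abs(diff) <= spread / 2;
    }
}
```

The atan2-based normalization avoids wrap-around bugs for sensors pointing near the ±π boundary.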

4.1.4.2 Attachable sensors

Attachable sensors can, as the name implies, be attached to other things, for instance to a door or a window if the user wants to know when they are opened. As with the motion sensor, there is a generic attachable sensor that can be set up as needed. The generic attachable sensor can be attached to all objects; it can then read the parameters of the object it is attached to and detect any changes. Using the previous window example, if an attachable sensor is attached to that window, it is able to act when the window closes, because the value of the parameter will change. When another attachable sensor extends the generic one, it is possible to limit which types of objects the sensor can be attached to, and which changes in parameters should cause the sensor to react. This way it is possible to model most real-life sensors of this type. To model the sensor put on a window that detects whether it is open or closed, a sensor extending the generic sensor is made. This sensor is restricted to being attached to windows, and only has access to the parameter "is open".

4.1.4.3 Pseudo sensors

Pseudo sensors cover global events, like time or the outside weather. Like attachable sensors, they react only to changes in certain parameters. A light sensor can also be modelled as a pseudo sensor, but modelling it as a positionable sensor gives the user more options to control local events, and is also a more accurate model of real-life sensors. On the other hand, a positionable light sensor would also require that a "light map" of current light and dark spots in the building be implemented. However, in this project the focus is on motion sensors, and hence this issue is less relevant.

4.1.5 Actuators

Actuators are mechanical devices that can perform an event on an object. They do not have any logic or sensing ability of their own. Within the program, actuators are handled much like attachable sensors. There is a generic actuator that can be attached to any item, and the central intelligence is then able to perform the events of the object through it. As with the generic attachable sensor, more specific actuators can be built from the generic actuator by extending it and choosing which object, and which events on the object, the actuator has access to. For instance, if the house should be able to automatically open a window when a sensor senses that the temperature is higher than it should be, and that it is colder outside than inside, an actuator is attached to the window. The intelligence system then has access to the window's "open" and "close" events.

4.1.6 Walls

To provide a more realistic model, walls are represented as rectangles rather than simple lines. Therefore, they have a thickness as well as a length, angle, and location. A wall can either be stored as two points (start and end) together with a thickness, or as a starting point, length, angle and thickness. The latter option is chosen, because this makes things easier when it comes to adding openings (such as doors and windows) to a wall. This will be described further in the next section. The starting location of the wall is a simple 2D point. A padding of half the wall width is then added around the center line, as shown in figure 4.4. This is mostly a convention, but helps in the cases where the wall should be represented as a line (such as when building new walls in the graphical user interface). In these cases, it makes the most sense that the wall is built around the line chosen by the user. As a final note about walls, it is possible to add actuators and attachable sensors to them, although it might not make sense in most cases, since a wall usually does not have any dynamically changeable parameters. Theoretically, though, one could imagine some kind of


Figure 4.4: Graphical overview of the way walls are represented. Note that the starting point lies within the wall, at a distance of width/2 from every edge.

pressure sensor being attached to a wall, triggered whenever someone bumps into the wall. Since this is a physical possibility, it is accurate to model it this way.
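As a sketch of the chosen representation (hypothetical names, assuming the angle is stored in radians), the derived end point and points along the centre line could be computed like this:

```java
// Sketch of the wall representation described above: a starting point, a
// length, an angle and a thickness. The end point is derived from these,
// which also makes it easy to place an opening at a given distance along
// the wall. Names are illustrative, not from the actual IBUSI code.
public class Wall {
    public final double startX, startY; // starting point on the centre line
    public final double length;
    public final double angle;          // in radians
    public final double thickness;

    public Wall(double startX, double startY, double length, double angle, double thickness) {
        this.startX = startX; this.startY = startY;
        this.length = length; this.angle = angle; this.thickness = thickness;
    }

    public double endX() { return startX + length * Math.cos(angle); }
    public double endY() { return startY + length * Math.sin(angle); }

    /** Centre-line point at a given distance from the start, e.g. for an opening. */
    public double[] pointAt(double distance) {
        return new double[] { startX + distance * Math.cos(angle),
                              startY + distance * Math.sin(angle) };
    }
}
```

Storing the start point, length and angle (rather than two end points) makes the opening position a single scalar along the wall.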

4.1.6.1 Openings in walls

Figure 4.5: Class diagram showing the relationship between walls, their openings, and the general Positionable and Selectable interfaces.

A wall can have one or more openings, such as space for a door, window or the like. While these could be modelled by simply having several different sections of walls, having a special concept for openings serves several purposes. For instance, it eases the process of building the floorplan, as the user only has to think of one thing at a time: first the walls are built, then the special features of the walls are applied. Furthermore, this layered approach makes it easy for the application to gather more information about the house. For example, one could imagine a situation where a user requests information about only the walls in the building, for a simplified view. With this design, it is not possible to place doors or windows without creating a wall first. A door on its own would not make sense in an application that models whole buildings. If the opening is not just a hole in the wall, it will be mounted on the wall at some point. Although the program will initially only be 2D, mount points on the sides of an opening will be allowed. This determines which way the opening opens: if a window is mounted on the left side of the opening, it will graphically be shown opening in a different direction than if it had been mounted on the right side. A wall, an opening, and any object located within the opening each has its own class. The relationship between the different classes can be seen in figure 4.5. Openings can have events and parameters, and can therefore also have attached sensors and actuators.

4.1.7 Furniture

Furniture has no built-in intelligence, and cannot interact with other objects. Furniture is purely included for visualizing a home. However, attachable sensors and actuators can be attached to furniture, and furniture can also have events and parameters. These events are usually only available to be performed by a person. An example of an attached sensor could be a pressure sensor for a chair that is triggered when a person sits in it. Furniture is represented by a shape object, which makes new furniture types easily implementable.

4.1.8 Electrical appliances

Like furniture, electrical appliances do not have built-in intelligence; they just do as they are told. They can have events, parameters, attachable sensors and actuators. Appliances are basically furniture with predefined actuators. They could have been modelled otherwise, so that appliances were automatically controllable, but that would not model the real world closely. In the real world, different kinds of actuators and sensors can also be added to appliances in order to connect them to a network. Appliances have a shape, and a list of actuators that the intelligence system can access. An oven, for instance, can have actuators for turning on and off, and for setting the temperature. This modelling makes it easy to add new appliances.

4.1.9 Special features

While the current model does not include support for other special features of a house or floor, such as stairs, elevators, escalators, balconies, or sloped ramps, some of them can be modelled using furniture. For example, a staircase can be represented by a furniture object.

4.2 Building rules / policies

Figure 4.6: Path of messages between sensors and appliances

In order for the house to seem interactive, and for it to obtain a certain level of "intelligence", the different sensors and electrical appliances must be connected somehow. As described in the analysis, a simple language is to be designed. This language should allow comparisons between an object parameter (such as a sensor value) and a constant. It should also allow comparisons between two object parameters. In other words, such a comparison is a condition for the policy to trigger. The valid set Ao of comparison operators is defined as:

Ao = {=, <, >, ≤, ≥}

While the "greater than" operators could be represented by swapping the operands and using the "less than" operators, we have chosen to include both sets of operators in the language in order to make it easily human readable.
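A condition of the policy language could then be evaluated by a small helper like the following (a sketch; the enum and method names are not from the actual IBUSI implementation):

```java
// Sketch of how the comparison operators of the policy language could be
// evaluated against two parameter values. Names are illustrative.
public enum ComparisonOp {
    EQ, LT, GT, LEQ, GEQ;

    /** Evaluates "left op right", i.e. the condition of a policy. */
    public boolean apply(double left, double right) {
        switch (this) {
            case EQ:  return left == right;
            case LT:  return left < right;
            case GT:  return left > right;
            case LEQ: return left <= right;
            case GEQ: return left >= right;
            default:  throw new AssertionError(this);
        }
    }
}
```

A policy such as "inside temperature > outside temperature" would then reduce to one `apply` call on the two parameter values.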


The overall communication path between sensors, the policy handler, and the different appliances can be seen in figure 4.6. This design allows an artificial intelligence to observe the actions in the house and insert new rules, without altering the sensors or electrical appliances directly. This layer of separation ensures that a malfunctioning AI cannot affect the system operation. Should a great desire arise for preventing the world from being taken over by robots, one could implement a mechanism in the policy handler preventing insertion of rules about certain devices or certain conditions. For example, it might be desired that the exit door can always be operated.

4.3 Simulating a person

A person can obviously perform certain tasks in a home, so this is also modelled. This section will cover how interaction between the simulated person and the simulated home is handled.

4.3.1 Interacting with objects

On several different occasions it is necessary to determine whether a person collides with other objects on the current floor. For example, this becomes necessary when determining if a person is detected by a motion sensor, or standing "within" a wall, i.e. walking into it. The naïve approach to this problem is to check for collisions between the person and each object; however, this might consume too many resources in situations where thousands of objects are placed on a floor, or where computation resources are scarce. This problem is solved with the help of an interaction map. The floor is divided into a checkered map. Each object on the floor is then mapped according to which fields in the map the object occupies. This way, when a person moves, his position is looked up in the map, and for each field he occupies, each object mapped into that field is checked for whether it intersects with the person.

When a person and an object collide, the person will be presented with a list of events that can be performed on the object. The events to choose from are those for which the conditions are satisfied. When an event is performed, it makes the necessary changes in parameters, but also notifies each attached sensor, if any, that an event has occurred. The attached sensors will then either disregard the message, if it is out of their scope, or react to it by sending a message to the central intelligence of the house. The intelligence system, along with the policy handler, will then decide what should happen next, i.e. which actuators should perform which events. An example of a sequence could be that a person walks up to the previously mentioned window while it is closed. The only event with a satisfied condition is the "open" event, so only that event is presented to the person. When it is performed, the attached sensor is told that a change occurred and that it should react to that specific event. The sensor then lets the intelligence system know that it has detected the event "open". The intelligence system processes the event and realizes that it should turn down the heat of the nearby radiators to save energy. The system then tells the actuators attached to the radiators to turn the heat down. These actuators then perform the event "heat down" on the radiator.
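The interaction map described above amounts to a simple spatial grid. A minimal sketch (names are hypothetical, and object shapes are reduced to bounding boxes for brevity):

```java
import java.util.*;

// Sketch of the interaction map: the floor is divided into a grid of cells,
// and each object is registered in every cell its bounding box overlaps.
// A moving person then only checks the objects in his own cells instead of
// every object on the floor. Names are illustrative, not from IBUSI.
public class InteractionMap {
    private final double cellSize;
    private final Map<Long, List<Object>> cells = new HashMap<>();

    public InteractionMap(double cellSize) { this.cellSize = cellSize; }

    private long key(int cx, int cy) { return ((long) cx << 32) ^ (cy & 0xffffffffL); }

    /** Registers an object under every cell its bounding box touches. */
    public void add(Object obj, double minX, double minY, double maxX, double maxY) {
        for (int cx = (int) Math.floor(minX / cellSize); cx <= (int) Math.floor(maxX / cellSize); cx++)
            for (int cy = (int) Math.floor(minY / cellSize); cy <= (int) Math.floor(maxY / cellSize); cy++)
                cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(obj);
    }

    /** Candidate objects near a point; an exact intersection test follows. */
    public List<Object> candidatesAt(double x, double y) {
        return cells.getOrDefault(
            key((int) Math.floor(x / cellSize), (int) Math.floor(y / cellSize)),
            Collections.emptyList());
    }
}
```

The candidate list returned for a position is typically much shorter than the full object list, and only those candidates need the exact intersection test.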

4.3.2 The interaction handler

The interaction handler is the mediator between the actions performed by the user and the background models. If the user wishes to perform any change in the system, the interaction handler performs the change and reports back whether or not it was successful. The benefit of this structure is that the GUI objects need only know about the interaction handler. It also eliminates redundant code in different UI elements, and helps the program conform to the MVC pattern. A disadvantage of having a centralized handler is that it may become very general and clumsy, with methods affecting objects of different natures. However, this problem is less relevant in a small application.

4.3.3 Modelling predefined human actions

A user can define a path that they would like the person to follow, to see what happens, in a timeline environment. This path is represented by sequential frames. In each frame the person has a position, and a set of events to perform. When playing the path, the person is moved along a straight line from the position in one frame to the position in the next, activating any motion sensors that might be on the route. The path can be built so the route crosses a wall; there is nothing stopping that. Otherwise, the user would be limited when creating the floor plan: a wall would have to be denied placement if a path in a timeline already crossed that spot. However, when the path is played, the person will stop at the wall rather than go through it. If an object with events performable by a person is within reach of a position in a frame, the user can define a set of events that the person should perform when that frame is reached. If, at the time the person reaches the frame, the right conditions are not met, the event is simply not performed. The user can also specify whether the person should be presented with alternative events that can be performed, if the chosen event is not performable or no event is specified.
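Moving the person between two frames is plain linear interpolation along the straight line; a sketch (hypothetical names):

```java
// Sketch of moving a person along a straight line between two timeline
// frames. Intermediate positions are produced by linear interpolation,
// so motion sensors on the route can be checked at each step.
public class TimelinePlayer {
    /** Position at fraction t (0..1) of the way from frame A to frame B. */
    public static double[] interpolate(double ax, double ay,
                                       double bx, double by, double t) {
        return new double[] { ax + (bx - ax) * t, ay + (by - ay) * t };
    }
}
```

Stepping t from 0 to 1 in small increments yields the intermediate positions at which sensor coverage and wall collisions can be tested.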

4.4 Drawing a building

The visual representation of the building is important in order to create a good user experience. In order to provide a clean and uncluttered interface, only one floor of the building is drawn at a time. Every relevant feature of the floor is drawn, including walls, wall openings, sensors, furniture, etc., so as to resemble a real floorplan. Of course, how a floorplan should look depends on who you ask. There will therefore be a couple of color themes. The basic theme will look like the regular floorplan one would find at a real estate office, which is meant for the average user. If the user is in the construction business, he might be used to looking at blueprints, so there will be a theme with colors resembling these.

Figure 4.7: Path of draw events between the model layer, the drawing API (OpenGL), and the screen

Instead of drawing directly to the screen, an abstraction layer is used. All drawing to the screen is routed through the abstraction layer, as shown in figure 4.7. This makes it easy to implement features such as zooming and panning, simply by telling the abstraction layer to use a certain scaling factor (zoom) and x/y offset (pan). While zooming and panning might at first glance seem like "luxury" features that are not really needed, one should realise that without at least panning, the floorplan area would be restricted to the visible screen area. OpenGL is chosen as the abstraction layer; for more information, see the "Implementation" section. OpenGL, along with the previously mentioned generic shape, makes it very easy to draw multiple items of the same type, e.g. if you have several chairs: there is just one predefined shape for the chair, which is drawn at several locations on the floor.
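The zoom and pan of the abstraction layer boil down to one linear mapping from world coordinates to screen coordinates; a sketch (hypothetical names):

```java
// Sketch of the world-to-screen mapping behind zooming and panning:
// every drawing call is translated by the pan offset and scaled by the
// zoom factor before reaching the screen. Names are illustrative.
public class ViewTransform {
    public double zoom = 1.0;           // scaling factor
    public double panX = 0, panY = 0;   // pan offset in world units

    public double screenX(double worldX) { return (worldX - panX) * zoom; }
    public double screenY(double worldY) { return (worldY - panY) * zoom; }
}
```

Because every draw call goes through this one mapping, changing `zoom` or the pan offsets redraws the whole floorplan consistently.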

4.4.1 Object structure

The various items that can be placed on a floor are grouped so they are easy to find. They are kept in a simple internal folder structure from which they can be dynamically extracted. Each folder can have an arbitrary number of sub folders and objects. A class diagram depicting this relation is shown in figure 4.8. The user will be presented with a representation of the internal folder structure, and will be able to see the objects in the various folders.


Figure 4.8: Class diagram depicting the structure of folders and sub folders.
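The folder structure of figure 4.8 can be sketched as a simple composite (hypothetical names; the placeable objects are reduced to strings here):

```java
import java.util.*;

// Sketch of the folder structure shown in figure 4.8: each folder holds an
// arbitrary number of sub folders and objects. Names are illustrative.
public class Folder {
    final String name;
    final List<Folder> subfolders = new ArrayList<>();
    final List<String> objects = new ArrayList<>(); // stand-in for placeable items

    Folder(String name) { this.name = name; }

    /** All objects in this folder and, recursively, in its sub folders. */
    List<String> allObjects() {
        List<String> result = new ArrayList<>(objects);
        for (Folder f : subfolders) result.addAll(f.allObjects());
        return result;
    }
}
```

The GUI tree view can then be populated directly by walking `subfolders` and `objects`.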

4.4.2 Drawing walls

Drawing all the walls in a building can be a cumbersome task; it is therefore important to make this as easy as possible. The user can draw consecutive walls with a specified width. This width is initially set to a default thickness, so the user has less to worry about. After choosing to insert consecutive walls, the user can click anywhere on the chosen floor, and walls will be drawn between the selected points. Afterwards, the settings of the walls can be altered if the user is not completely satisfied. Originally, the user was also supposed to have the option of placing a very specific wall, where the location, angle, thickness and length could be set initially. But the aforementioned function should be sufficient, since it is possible to place a single wall approximately where it should be and then correct the specifics afterwards, ending up with the same result.

4.4.3 Drawing openings

Openings can only be located on walls, as mentioned earlier. This means that they can only be drawn on walls as well. To draw an opening, the user must select a wall to place the opening in. The user will then be shown graphically where in the wall the opening is about to be put. After placing an opening, the specifics of its location on the wall and its width can be adjusted. The opening has a width setting that can be set initially, but it has a default value, as with walls.

4.4.4 Drawing sensors

Only two of the mentioned sensor types can be drawn: the motion and the attachable sensors. The motion sensor has a dot representing its fix point, and a shape representing its range and coverage. When a user wants to draw a motion sensor, they can set the sensor's range (unless it is a predefined sensor) and then click anywhere to position it. Another solution is to let the user click anywhere to position the sensor, and then drag the mouse out to where the sensor should cover and click again to set the range. The first approach models reality better, since sensors usually have a fixed range, and a fixed range is easier and more accurately set by changing a number than by approximating the distance. Of course, after the sensor has been placed, it is possible to change its settings, and the end result will therefore be the same. To present the user with as few initial decisions as possible, the first approach is used. Attachable sensors work much like openings. When one is selected to be put on something on the floor, the user can click any object to attach it, provided that the attachable sensor and the object are compatible.

4.5 Undo handling

It is useful to be able to "undo" events such as building a wall or deleting a sensor. However, the events to undo may differ greatly in nature. One undo event could be related to the timeline, and another could be about policies. Still, it is user-friendly to keep one single "Undo" feature that applies to any event (typically, this is located in the "Edit" menu and has a shortcut key). One way to approach this would be to assemble all the required logic in an "Undo" class and let it handle everything. This would require objects of many different types to reside in the same class or package, along with a description of what events should be performed on the objects in order for them to be undone. This, naturally, is not a very good option, as it implies high coupling between the objects and introduces some redundancy. A better design is to keep the undo logic of the different objects in the model that manipulates them, and have these models implement a common interface, as shown in figure 4.9. A single class (UndoHelper) will keep a stack of the most recent events. The stack only needs to hold the object that performed the action. In case the user wishes to undo something, the UndoHelper pops an object off the stack and tells it to undo. The undo is then performed locally in the model that manipulates the objects, and yet the user is able to undo events in the reverse order of how they occurred, even though the events occurred on objects of different natures. This sequence is shown in figure 4.10.

4.5.1 Re-doing events

In addition to an “Undo” feature, many applications also have a “Redo” feature, used if an undo command is later regretted. The UndoHelper supports this by maintaining a list of redo events as well. When an action is undone, it is added to the redo stack. When an action is redone, it is popped from the redo stack and added to the undo stack again. This could also be done by having a list with a pointer to the current event, but this is up to the implementation.


Figure 4.9: Undo handling class diagram in the case of the Wall Builder

Figure 4.10: Undo handling sequence


Whenever an action is added to the undo history, the redo history is flushed. Otherwise, there could be a fork in the “timeline” of events that happened, much like in “Back To The Future”.
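The undo/redo behaviour described in this section could be sketched as follows (the UndoHandler interface here also carries a redo method; names are illustrative, not from the IBUSI source):

```java
import java.util.*;

// Sketch of the UndoHelper described above: two stacks of handler
// references. Undoing pops from the undo stack and pushes onto the redo
// stack; a brand-new action flushes the redo stack, so no "fork in the
// timeline" can occur. The UndoHandler interface is implemented by the
// models that know how to undo their own events.
public class UndoHelper {
    public interface UndoHandler { void undo(); void redo(); }

    private final Deque<UndoHandler> undoStack = new ArrayDeque<>();
    private final Deque<UndoHandler> redoStack = new ArrayDeque<>();

    /** Called whenever a model performs a new undoable action. */
    public void actionPerformed(UndoHandler handler) {
        undoStack.push(handler);
        redoStack.clear(); // a new action invalidates the redo history
    }

    public boolean undo() {
        if (undoStack.isEmpty()) return false;
        UndoHandler h = undoStack.pop();
        h.undo();          // the model undoes the event locally
        redoStack.push(h);
        return true;
    }

    public boolean redo() {
        if (redoStack.isEmpty()) return false;
        UndoHandler h = redoStack.pop();
        h.redo();
        undoStack.push(h);
        return true;
    }
}
```

The helper itself never inspects the events; it only remembers which model to ask, which keeps the coupling low as intended.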

4.5.2 Unique identifiers

A policy refers to specific sensors, and hence it is useful to be able to refer to a sensor using a unique identifier such as an ID number. If every sensor and every appliance is assigned a unique ID, it is easy to describe which sensors and appliances belong to which policy. However, it is also important to ensure that no assigned ID has been in use before, as such a collision might cause the program to malfunction. Different schemes can be used for this purpose. One scheme is simply assigning an integer number to each sensor, in natural order. The first sensor or appliance would be number one, the next would be number two, and so on. This scheme is very simple and is used by many modern databases such as MySQL (see http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html). Benefits of this scheme include ease of implementation and maintenance, and low overhead: an ID number takes up only a few bytes and is generated very quickly. Another scheme could be using globally unique identifiers (GUIDs). A GUID is a 16-byte number generated from information about the user's hardware, a random pool, and other information. The idea is that with such a large number of possible combinations (2^128), the likelihood of a collision is so small that the issue can be disregarded. For this project, a simple integer number is chosen for IDs, starting with the number 1. This causes a special condition whenever a file is opened from disk: the "next number" to be issued must match the highest ID used in the file, plus one. This special condition is taken care of whenever a file is opened. Should other special conditions arise in future development, they must be handled appropriately by setting the ID number accordingly. One primary use of the IDs is for saving and opening saved projects, where relations have to be maintained.
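The sequential ID scheme, including the special case after loading a file, could be sketched as (hypothetical names):

```java
// Sketch of the simple sequential ID scheme described above, including the
// special condition when a saved file is opened: the counter is moved past
// the highest ID already in use, so new IDs never collide with loaded ones.
// Names are illustrative, not from the actual IBUSI code.
public class IdGenerator {
    private int nextId = 1; // IDs start at 1

    public int next() { return nextId++; }

    /** Called after loading a file: continue from the highest used ID plus one. */
    public void ensureAbove(int highestUsedId) {
        if (highestUsedId >= nextId) nextId = highestUsedId + 1;
    }
}
```

After opening a file whose highest ID is 10, the next sensor placed would receive ID 11.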

4.6 Saving and opening data

It should be possible to save the current program data to a file, and load it into the active application session again at a later time. However, there are several kinds of data sets in the program. For instance, one data set is the building, including its floors, walls, sensors and other features. Moreover, the program contains policies on how the sensors work. Finally, timelines can be used to define a set of actions performed by a person in the building. One possible option is to create a file format for each of these categories of data. There could be a "building" file format, a "policy" file format, and a "timeline" file format. The advantages


of splitting up the file data into such categories would be a reduced file size for each file, and the possibility of using one type of data with data from another project. For example, a timeline defined in one building could be used together with a policy set made in another IBUSI session. However, this may not have many practical applications, as both timelines and policy sets are usually made by the user to work with one given building. Disadvantages of splitting up the files include increased complexity for the user, and the requirement of having a way to map a given set of policies or timelines to another given building. Because the use cases for a "split up" file format strategy are relatively sparse, and because of the increased complexity involved in the development and user experience, a solution where all information about a project is saved in one single file is favored over splitting the dataset across different files. However, the file format itself should be designed in such a way that it is easy to extract parts of the dataset, in case an advanced user wishes to use a policy set with another application or another building.

As for the file format itself, two overall strategies can be used: either design your own file syntax bit-for-bit, or use a pre-defined markup language such as XML. For instance, files saved by Microsoft Word XP (http://www.microsoft.com/word/) use a tightly designed binary structure, whereas OpenOffice (OpenDocument) files use the XML markup language (http://xml.openoffice.org/). Saving files in a defined language such as XML has the advantage of being very easy to parse, because many tools exist for the purpose. Hence, you will not get your hands dirty creating your own tools to manipulate strings, or using obscure bitwise operations. Furthermore, using a pre-defined language is more versatile and robust against platform-specific peculiarities such as line endings. XML files are almost human readable because of the normal-language keywords used. However, these keywords and other syntax in the file are often much longer than necessary for a program to process, and thus the file may take up a lot more disk space than needed. It may also take more time (CPU power) to process XML data than a tightly defined binary structure. Unless IBUSI is to be deployed on a very resource-limited platform, such as a slow embedded system with little memory, the disk space and CPU time involved in saving files as XML is not going to be a relevant issue, especially not if the XML files are compressed before they are saved to disk. Using the XML file format will also enable advanced users to extract partial datasets from the saved file, such as a policy set or the timelines. Because of these advantages, XML is chosen for the file format. The application will, however, use compression such as gzip to minimize file sizes.
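Writing the XML through a gzip stream is straightforward with the standard java.util.zip classes; a sketch (a plain string stands in for the real XML document, and the class name is hypothetical):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.zip.*;

// Sketch of saving the XML document compressed with gzip, as described
// above. A real implementation would stream a DOM or use an XML writer;
// here a plain string stands in for the document.
public class ProjectFile {
    public static void save(String xml, File file) throws IOException {
        try (Writer w = new OutputStreamWriter(
                new GZIPOutputStream(new FileOutputStream(file)), StandardCharsets.UTF_8)) {
            w.write(xml); // compressed transparently on the way to disk
        }
    }

    public static String load(File file) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (Reader r = new InputStreamReader(
                new GZIPInputStream(new FileInputStream(file)), StandardCharsets.UTF_8)) {
            char[] buf = new char[4096];
            int n;
            while ((n = r.read(buf)) != -1) sb.append(buf, 0, n);
        }
        return sb.toString();
    }
}
```

The XML stays extractable for advanced users: running the file through any gzip tool yields the plain document.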

4.6.1 XML scheme

An XML document, like any subset of the SGML language, is a tree structure and is defined from the root element. The XML scheme for IBUSI saved files consists of three parts: one describing the building along with its walls, sensors, and other features; one describing the


policies of the sensors; and one describing a set of actions (timelines). These are all sub elements of the root element. A more thorough, although not complete, reference can be seen in figure 4.11.

Figure 4.11: XML class overview. Each class represents an actual XML element, and the nesting corresponds to the nesting of the output XML.

A simple example of a saved project can be seen in appendix C.

Chapter 5

Usability tests

From the point of view of the user, the interface is the most important part of the program, because it represents what the application is actually able "to do". If users find the GUI hard to navigate, they will find another similar program to use. Therefore, we will focus a lot on usability when creating the GUI. In general, it is never a good idea for the programmers or designers of a program to conduct the usability test themselves, as they will have a hard time being objective when analyzing the results. However, in the interest of creating an easily usable program, and since we have limited resources, we will conduct the tests ourselves as best we can. In this chapter, a complete usability test will be performed, based on mockups of the graphical user interface. The results of the test will be used as the basis for designing the final graphical user interface.

5.1 Approach

The type of usability test that will be conducted is called a "think aloud" test. It will be performed on a prototype of the GUI. This is a commonly used method for usability testing [2], [3]. For simulating the GUI, a prototype made simply of pieces of paper will be used, each piece representing a part of the GUI. The user will also be provided with an unattached computer keyboard, since some of the keys are used in the program. Certain elements will then be drawn in by hand. In the example seen in figure 5.1, the number of the floor will be written by hand, since it can change. So will the x and y coordinates in the settings panel, the width and length, and the width field in the wall adder panel at the bottom. Also, if any walls have been "drawn" in the main area, these will be drawn in. All the parts can be found in appendix A.4.

Figure 5.1: Example of composite GUI prototype.

Not all parts of the program have been modelled in the prototype. If the user goes to these areas, they are told that the selected area is outside the scope of the test. The simplicity is required since we have no program to perform the test on. It is not possible to make "a quick working prototype" that performs as we intend the real program to perform. However, the simplicity forces the user to abstract from certain details. Because the process of performing an action is slow, due to papers having to be exchanged, the user has time to think about what he expects will happen when he pushes a button, and to react to the outcome, which is the whole point of a "think aloud" test. When using a primitive prototype, the participants are usually more prone to express how they feel, or assume, a given situation could be solved visually, which is helpful if there seems to be a general consensus on these solutions.

The test is composed of three stages. First, the general computer skills of the users will be determined in a preliminary interview. After being told what the general purpose of the program is, the user will be asked about his expectations of the program: which features he expects to see, and what he would like to be able to do. The user will then be asked to perform some predefined tasks using the program, such as placing some walls on the floor plan. The complete list of tasks can be seen in appendix A. The tasks can be marked with a comment or a condition (in parentheses) and the purpose of the task (in italics). Comments and purposes are not shown to the participant. Before beginning the tasks, the user is asked to think aloud and comment on what he encounters. The tasks are designed to find out whether the program can be used mostly by intuition. To do this, a preliminary task is sometimes given to ensure that the user is in the state he is supposed to be in. Also, it is important not to make the participant feel "stupid". Although the user is told beforehand that this is not a test of his abilities, but of the program, he or she can still become irritated or feel stupid if tasks are hard or unintuitive.

It is best not to get the user into this state of emotion, because he will then start doubting his intuitions, which is not wanted. Therefore it is important to start off with an easy task, and then once in a while throw in another easy task so the participant can feel good about his own performance. After completing the tasks, the user is asked about his experience with the program.

5.2 Target groups

The groups of people who can benefit from using this program are those who either own a home or are having one built, and would like intelligence of some sort installed into it. These are the groups to whom the GUI should be easily understandable. Home owners come in all ages, probably from about 30 and upwards; however, interest in technology tends to decline after a certain age. Therefore we narrow our target group to males and females of age 30 to 60 years.

One problem with the current situation is that very few of the people interested in having intelligent homes know much about the technology behind it. Only people with technological experience would consider that a program such as this might exist; the rest would go directly to a supplier and ask about the specifics. It can be argued that the target group should be suppliers of intelligent house systems. They would then function as mediators of the program, telling their customers about it and showing how it works. Their customers could then return to their homes to experiment with the program and simulate their intelligent home from there, and come back when they know what they want. The suppliers would even be able to give the customer a file containing models of all the various sensors they supply.

In the end, the groups actually using the program are the same as initially assumed. These are therefore the groups we will perform usability tests on.

5.3 Final definitions

When finding participants, the following demands must be met:

• 30 - 60 years of age.
• Owns a computer.


Their level of knowledge in IT is based on the following outline:

1. None - has never used a computer.
2. Observer - has seen people use computers.
3. Beginner - uses computers for a couple of simple things.
4. Moderate - uses computers regularly.
5. Experienced - uses non-basic programs without problems.
6. Very experienced - develops programs.

The participants' comments and experiences have been classified in the following categories:

Good. Participants liked this, or liked the way it was handled.
Good idea. A suggestion from either participant or conductor that could improve the user's experience.
Minor problem. The user was stalled for a short moment.
Serious problem. The problem stalled the participant for a couple of minutes, but he was able to work his way out on his own. On occasion led to a disaster.
Critical problem. Frequently led to disaster. A "disaster" being a situation where the participant is unable to perform a task, or is severely agitated.

5.4 Usability test

5.4.1 Participant profiles

The following participants have been submitted to the usability test:

No.  Sex     Age  Occupation              IT experience  Knowledge of like program
1    Male    32   Student                 Moderate       None
2    Female  52   Publishing              Beginner       None
3    Female  44   Logistics               Experienced    IKEA Home Planner ∗
4    Male    44   Electronics purchaser   Experienced    ConceptDraw ∗∗
5    Female  48   Marketing Coordinator   Moderate       None

∗ http://www.ikea.com/ms/da_DK/rooms_ideas/splashplanners.html
∗∗ http://www.conceptdraw.com/en/products/cd5/ap_floor_plan.php

5.4.2 Participant expectations of the program

As explained, the participants were asked about their expectations of the program. The interesting points are those made individually by more than just one of the participants. They expect to be able to

• draw rooms with doors and windows. (5 participants)
• "do something with sensors." (2 participants)
• place furniture. (3 participants)
• walk around in the house. (3 participants)
• tell the house to perform tasks. (2 participants)

It is clear that the participants did not quite know what to expect from the sensor part of the program.


5.4.3 Participant experiences with the program

5.4.3.1 Builder environment

Figure 5.2: Sample mockup of Builder environment GUI.

The builder is the mode in which a user can place items on the floor plan.

All participants thought that building a room was easy. Tasks involving inserting walls and openings were solved easily.

2 participants would like to be able to undo a positioning of a wall during placement of multiple walls.

3 participants expressed relief that the sensors actually made sense to them.

3 participants mentioned that it is not very clear which part of the program (which environment) you are in, and would like a more obvious indication of where they are. This problem will hopefully solve itself during implementation, where we are able to use more visual indications.


Selecting an opening
How it works: When selecting an opening (door, window or empty opening), you must click on the opening itself.
The problem: Participants do not know what to click to select an opening, since an opening is just a hole, so they click the wall it is on.
The solution: Also list wall openings under wall settings when a wall is selected.

Finding item settings
How it works: The button for enabling changes to an item is called "Item info".
The problem: Participants expect just to see some info about the item, not to be able to make changes to it. They therefore did not initially go there when they wanted to alter the settings of an item.
The solution: Rename "Item info", e.g. to "Item settings".

Understanding sensor settings
How it works: Sensor settings are called "Range", "Angle" and "Direction".
The problem: Participants are not sure what (mainly) "Angle" and "Direction" cover, and therefore do not use them.
The solution: Find other names, or add icons.


5.4.3.2 Timeline environment

Figure 5.3: Example of Timeline environment

Inserting new frames
How it works: A frame with the person's current position is added when the "add frame" button is pressed. No changes are made to the current frame, but the new frame becomes the current frame.
The problem: The participants are not aware that the frame in which the person moves around is not changed when you move around in it, and therefore lose their changes.
The solution: Move the person around in the chosen frame and add changes on the fly. When "add frame" is clicked, copy the location and change the current frame.


Changing a frame
How it works: A frame can be updated by making changes to the frame and then clicking "Save frame". (Related to the previous problem.)
The problem: Participants are confused as to what the button does. "Save the frame as a file on the computer?" "Why do you have to press the button?"
The solution: Changes are applied to the frame immediately and the button is removed, according to the new approach described above.

Understanding the concept of frames
How it works: Each stop along the simulated user's path is called a frame.
The problem: Participants do not understand the meaning of the term in this context, and therefore do not know what it covers.
The solution: Change the name, e.g. to "Stop point".


5.4.3.3 Simulator environment

Figure 5.4: Example of Simulator environment

Finding the environment
How it works: The part of the program, and the button for the part, where you can walk around the house is called "Simulator".
The problem: Participants expected a relation between "Time line" and "Simulator". When asked why, the reply was "Because of the name". Some thought that the timeline was used in the simulator, and some kept looking in "Simulator" for a way to create a predefined route.
The solution: Rename "Simulator", e.g. to "Free roam".

5.4.3.4 General comments

Finding a specific environment
How it works: The various environments are called "Builder", "Timeline" and "Simulator". (Already somewhat handled.)
The problem: Participants are not sure what the titles cover. "Builder... of what?"
The solution: Rename the environments, e.g. "Builder" to "Create floor plan", "Timeline" to "Design scenario" and the aforementioned "Simulator" to "Free roam".

5.4.4 Analysis

5.4.5 Task solving overview



∗ Only one participant used the attachable sensor. The rest placed a motion sensor in the doorway, which is also a valid solution, and therefore not marked as critical.
∗∗ Some participants solved the task by adding multiple sensors, which is an approved solution.
∗∗∗ Because of the task formulation, some participants just altered a frame so the person more or less passed through it without changing direction. This solution has not been marked as critical since it is actually a valid solution.

By analyzing the results of the usability tests we are able to get a pretty good picture of how the GUI should be built. The specific details of the functionalities are hard to assess through this particular type of usability test; however, the general positioning and labeling of buttons and texts have been the main focus, especially since they are practically all the user has had to navigate the program. The sequence of actions needed to solve a certain task is also important when talking about usability. The sequence can be too long or too complicated, which can result in the user not wanting to use the program. Whether this is the case can also be read from the test.

After testing it is clear that not all the labels are self-explanatory. For instance, looking at the tasks where the essence is simply to find the right tab, they should all ideally have been marked "Good". But in the task overview table (figure 5.5), there are a lot of minor problems. These minor problems are, in this context, actually pretty significant, mostly for first-time users, but also in general. The participants were guessing which buttons to press, without trying to use their logical sense. Some of the labels should therefore, as proposed, be changed into something less domain-specific.

The tasks given to the participants (the individual ratings per participant 1-5 are shown in figure 5.5):

1: Build room
2: Add doors & windows
3: 1.3m door
4: Alter door
5: Door sensor
6: Movement in room
7: Movement in upper half ∗∗
8: Tab for walking around
9: Walk around
10: Tab for predefined routes
11: Create predefined route
12: Add intermediate frame
13: Remove frame ∗∗∗

Figure 5.5: Overview of the tasks that the users were asked to perform.

The tasks involving long sequences of actions do not score high either. This is to be expected when dealing with a program that is not in a common-knowledge regime, but serious problems should still be considered an issue. Some participants have some knowledge of other building tools, only one of them similar in context, but this program adds a dimension that they are not familiar with.

In the case of task no. 7, the sequence of actions included multiple actions where the participant was in doubt. The labels for the settings of a sensor do not take into account that the user, in most cases, does not have any knowledge of that domain. Again, we must change the labeling, or somehow relay to the user what the settings mean. In this case it is probably best done by adding icons, since it is hard to find a single word or very short phrase that explains what the settings mean.

Task no. 11 scored really low, but this was not because of the sequence of actions; it was because of the functionality behind it. The user naturally cannot guess how the program works behind the scenes. It is therefore important to either tell the user exactly how it works, or make sure that it is as intuitive as possible. In the case of task no. 11 all the users expected the program to save their changes to a frame on the fly, and did not realize that they had to actively save them by pressing a button. The nature of this behaviour is taken under consideration, and the program should, as mentioned in the proposed solution, be altered to do what most people find intuitive.

Another common usability problem is giving the user too many options. It is common to think that the more options a user has, the better, when in reality it is the other way around, especially when dealing with a program you are not used to, or one that deals with subjects you are not familiar with. This relates directly to task no. 12. The participants identify two buttons in two different areas of the interface that they think could be the right one. In this case the extra choice just adds to their confusion, and they hesitate to use either one. The "Critical problems" in this task are due to an already covered issue regarding how frames are saved. Fortunately this problem is easily handled, simply by removing buttons so that at any given time there is only one button for each action that can be performed. The tricky part is to remove the right one. The most obvious and descriptive buttons are on the left panel, but it makes more sense to have them by the frame itself. By adding the right icons you can have the same descriptive effect as with text.

5.4.6 Changes

Based on the analysis, some changes have been made to the prototype. The parts that have been changed can be seen in appendix A.5. These are obviously only the changes visible to the user. Some underlying functionality has also been changed.

• In the main frame, the labels of the environments are changed. They are now called "Create floor plan", "Design scenario" and "Free roam". The item info bar has also been renamed; it is now called "Item settings". These names are much more descriptive and less domain-specific, and should help the user understand exactly what to expect when pressing them.

• Almost all buttons have been removed from the left menu in the scenario environment. The only one left is the one to place a person with a click of the mouse. It seems sensible that the functions are accessible from around the actual frames which they affect. Frames are now known as "Stop points".

• Icons have been added in the "Item settings" bar to underline the meanings of the words. This approach seems like the only choice, since there are no words or phrases more descriptive than the ones used.

• Under the settings for a wall, the openings of the wall are now listed, with a button you can press to go to the settings for that opening. This should help users who are not sure how to select an opening.


As mentioned, not all changes were visible. The procedure of adding and editing frames has been changed: instead of actively having to save your changes, the change is now applied immediately. The reason for the original choice was that you could perform changes to a frame (stop point), then discover that this was not what you wanted, and simply not save it; this way an undo button would not be necessary. The change is made because participants apparently find the new solution more intuitive. However, an undo function will not be added for stop points. This is a reasonable decision for now, as changes to a stop point are very limited and non-destructive (because they are easily correctable).

5.5 Further usability testing

A usability test of the finished program will not be performed. However, it is recommended that this is done at a later point in time, since the experience of using an actual program is obviously much different from pointing at pieces of paper.

Chapter 6

Implementation

6.1 Programming Language

Java is chosen as the programming language because of its availability on all major hardware and software platforms. It is also a decent object-oriented language, and the project members have prior experience with it. Java has a graphical user interface toolkit called Swing, which is used for the graphical user interface of the application.

6.1.1 Package overview

Package                  Description
org.ibusi                Main package containing common IBUSI classes
org.ibusi.models         Core models describing the different parts of a building
org.ibusi.geom           Helper classes used to aid working with 2D geometry
org.ibusi.control        User interaction
org.ibusi.creator        Classes aiding the "Builder" mode
org.ibusi.timeline       Classes aiding the "Timeline" (simulator) mode
org.ibusi.gui            Swing panels used throughout the system
org.ibusi.gui.3d         OpenGL helper classes
org.ibusi.gui.creator    Swing panels used in the "Builder" mode
org.ibusi.gui.timeline   Swing panels used in the "Timeline" mode
org.ibusi.gui.freeroam   Swing panels used in the "Free Roam" mode
org.ibusi.xml            XML import/export
org.ibusi.test           Regression tests

6.2 Graphical User Interface with Swing

Figure 6.1: IBUSI main window and two policy editor windows

Each component in the user interface is implemented as an independent JComponent object with very low coupling, which makes each component re-usable in other settings. The most important components are described in this section. On figure 6.1, the main windows of the graphical user interface are shown.

6.2.1 Main Frame

The main frame is a JFrame containing the most important parts of the graphical user interface, such as the main OpenGL floorplan view (FloorViewCanvas), the button bar (EnvironmentSelectorBar), and a side bar with an "inspector"-style pane for configuring the properties of a given object (SettingsBar). Furthermore, it contains a pane at the bottom (BottomPanel) used for different things depending on the context (for example, it is a time line when the Scenario mode is selected). The main frame can be seen on figure 6.2.

6.2.2 Menu Bar

The application has a menu bar, MainMenuBar. It has menus for file access, view options, and a list of utility tools such as the house rule editor.

Whenever an action is performed in a menu bar, an ActionListener picks it up and executes the corresponding command. Care is taken to ensure that all messages are passed solely to the InteractionHandler which, in turn, processes the request. This approach results in low coupling.

Figure 6.2: The main window

6.2.2.1 Shortcut keys

Shortcut keys are useful for allowing quick access to menu items. IBUSI has shortcut keys for most menu items. As seen in figures 6.3 and 6.4, the shortcut keys are different on various platforms. While Windows, for example, normally uses the “control” key as the modifier key, other platforms may use another modifier key. To ensure that the program adheres to the modifier key of the host platform, a special construct is used:

// "shortcut" holds the platform's menu modifier key mask, obtained
// from Toolkit.getDefaultToolkit().getMenuShortcutKeyMask()
JMenuItem openItem = new JMenuItem("Open Project...");
openItem.setAccelerator(KeyStroke.getKeyStroke(KeyEvent.VK_O, shortcut));

Combined with the platform's menu shortcut mask, the Java KeyStroke class ensures that native shortcuts are used.

Figure 6.3: IBUSI menu bar on Mac OS X

Figure 6.4: IBUSI menu bar on Windows Vista

Also note the absence of the “Exit” menu item on Mac OS X; on Macs this menu item is globally present inside the Application menu of every application, and it is not needed in the “File” menu.

6.2.3 Application Launch

The Launcher class initializes system-specific properties and launches the main application window. The main method simply creates a new MainFrame which is a JFrame. In the Launcher, some considerations were taken to make the application look more like a native application on Mac OS X:

// For Mac OS X users, use global menu bar space
System.setProperty("com.apple.macos.useScreenMenuBar", "true");
System.setProperty("apple.laf.useScreenMenuBar", "true");
System.setProperty("com.apple.mrj.application.apple.menu.about.name", "IBUSI");

This causes the application to use the main Mac OS X menu bar instead of a menu bar inside the window. Also, it sets the title in the application menu to “IBUSI”. These improvements give the application a more natural feel on Mac OS X. The difference can be seen in figures 6.3 and 6.4.

6.2.4 GUI interaction

6.2.4.1 Clickables

Figure 6.5: Bottom panel in the scenario editor. Each frame is clickable and thus implements the ClickablePanel interface.

The user interacts with the program by pushing buttons and filling in text fields. Some places where a user can click do not necessarily look clickable, for instance the bottom panel in the scenario environment shown on figure 6.5. The plus sign looks, to most people, like it can be clicked, but the stop point panels representing each stop point do not. Most people who have used computers before realize that when the mouse changes shape, it means that you can do something other than what you could before. The "pointing hand" cursor is generally known to indicate that something can be clicked, so the cursor should change to this form when hovering over a clickable place in the GUI.

To achieve this, the classes ClickableButton, ClickableComboBox, ClickablePanel and ClickableToggleButton are implemented. They all extend an appropriate JComponent subclass, and all implement MouseListener. Of the mandatory methods required when implementing MouseListener, only mouseEntered and mouseExited are given contents. When the mouse enters the clickable item, the cursor is changed to a hand using the setCursor function of AWT components. When the mouse exits, the cursor is returned to normal.

It would have been nice to have a single abstract class Clickable which all clickable objects could extend; however, this is not possible. There are two imaginable ways it could have been implemented. The first is if Clickable could extend a common super class of all the objects, so the necessary functions would be available from there. The problem is that the closest common super class of the components is JComponent, which does not provide all the necessary functions for all components. The other way is if Clickable just implements MouseListener, and the components then extend both Clickable and the appropriate JComponent; but a class can only extend one class, so this solution is not implementable either. The chosen implementation is therefore the best way to achieve the goal of having the cursor turn into a hand when hovering over something clickable.
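The approach can be sketched as follows. This is an illustrative reconstruction, not the actual IBUSI source; only the class name ClickablePanel is taken from the text.

```java
import java.awt.Cursor;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import javax.swing.JPanel;

// Sketch of a clickable panel: the cursor becomes a hand while
// the mouse hovers over the component, and reverts on exit.
public class ClickablePanel extends JPanel implements MouseListener {

    public ClickablePanel() {
        addMouseListener(this);
    }

    // Only these two MouseListener methods are given contents.
    public void mouseEntered(MouseEvent e) {
        setCursor(Cursor.getPredefinedCursor(Cursor.HAND_CURSOR));
    }

    public void mouseExited(MouseEvent e) {
        setCursor(Cursor.getDefaultCursor());
    }

    // The remaining mandatory methods are left empty.
    public void mouseClicked(MouseEvent e) {}
    public void mousePressed(MouseEvent e) {}
    public void mouseReleased(MouseEvent e) {}
}
```

The same two-method body would be repeated in ClickableButton, ClickableComboBox and ClickableToggleButton, each extending its own JComponent subclass.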

6.2.4.2 Receiving clicks

Once buttons and other clickable components are in place, the event that someone clicks one of them has to be caught. All clickables have action listeners added so that click events are caught. This listener is created and added specifically for each clickable as it is made. E.g. a button could be made like this:

ClickableButton button = new ClickableButton("Say hello");
button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent ae) {
        System.out.println("Hello world");
    }
});
add(button);

This implementation is chosen instead of having a generic action listener for a group of buttons, because it is a better way to split up functionality. This way it is easy later on to see exactly which button does what, and having to manage which button is clicked is avoided.

6.2.4.3 Arrow key control

When the person is moved around inside the building, it is controlled by the arrow keys on the keyboard. This is done by having the floor view canvas (FloorViewListener in the org.ibusi.gui.gl package) implement a KeyListener from Java's AWT framework. However, this event will only be triggered whenever the Floor View Canvas (the large drawing pad) has focus. The solution to this problem is to actively focus the canvas whenever another change has happened, such as the user switching to another mode in the program.

Another solution could be to implement the KeyListener on the main window instead of just the canvas, but this would cause any other key entry, such as text input in a text field, to be caught by the KeyListener too. Furthermore, adding the KeyListener directly on the canvas allows for a more modular design, as one might imagine the canvas view being used inside another frame at a later point in time.
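The mapping from key codes to person movement can be sketched as a small pure helper. The class name, the step size, and the {dx, dy} representation are all assumptions for illustration, not the thesis's actual code.

```java
import java.awt.event.KeyEvent;

// Hypothetical sketch: arrow-key codes are translated into a movement
// step (in millimetres) for the simulated person walking the floor plan.
public class ArrowKeyMovement {
    static final int STEP_MM = 100; // assumed step per key press

    /** Returns {dx, dy} for a given key code, or {0, 0} for other keys. */
    public static int[] step(int keyCode) {
        switch (keyCode) {
            case KeyEvent.VK_UP:    return new int[] { 0,  STEP_MM };
            case KeyEvent.VK_DOWN:  return new int[] { 0, -STEP_MM };
            case KeyEvent.VK_LEFT:  return new int[] { -STEP_MM, 0 };
            case KeyEvent.VK_RIGHT: return new int[] {  STEP_MM, 0 };
            default:                return new int[] { 0, 0 };
        }
    }
}
```

A keyPressed handler on the canvas would call such a helper and then run the new position through the interaction map's collision check.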

6.2.4.4 Mouse shapes

In many cases, having a shape attached to the mouse is needed, mostly when placing or moving objects. This is solved by having a variable mouseShape defined in the FloorViewCanvas class. The mouse shape can be set via the public function setMouseShape, which takes a Shape as argument. Whenever the canvas is repainted, this shape is drawn on top of everything else (currently only under the lines describing the paths between stop points in the scenario environment). The shape will always be positioned under the cursor. For instance, the mouse shape is set when the user chooses to insert a sensor: a Shape representing the sensor's range is then attached to the mouse by the interaction handler using setMouseShape.

6.2.4.5 Building walls

When building walls the mouse shape is also used. This is how the user is shown where the next wall is going to be placed if they click at a specific location. The interaction map asks an instance of the class WallBuilder to return a Shape representing the wall going from the last place the user clicked to the point where the cursor is currently located. This shape is then set as the shape of the mouse, but not added as an actual wall until the user clicks again.

The WallBuilder class is, as the name implies, solely for building walls. When a user chooses to build walls, a new instance of the class is instantiated, and the interaction handler is told, using the setMouseAction function, that the state is currently "wall", which means that it should provide the appropriate mouse shape, as previously mentioned. WallBuilder contains an ArrayList of Points, which are the locations the user has clicked in the current session. When the wall builder is told to add a wall, it not only adds it to its internal array, but adds it to the building right away. The internal array is used for undoing and redoing. Granted, redoing has not been implemented, but undoing has, as explained in section 4.5.
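The click bookkeeping described above can be sketched as follows. The class and method names here are illustrative, not the real WallBuilder API.

```java
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of WallBuilder's click list: each click is stored, the
// most recent click can be undone, and the wall preview attached to the
// mouse runs from the last click to the current cursor position.
public class WallBuilderSketch {
    private final List<Point> clicks = new ArrayList<Point>();

    public void click(Point p) { clicks.add(p); }

    /** Undo the most recent wall point (the undo facility mentioned above). */
    public void undo() {
        if (!clicks.isEmpty()) clicks.remove(clicks.size() - 1);
    }

    /** Length in mm of the wall preview from the last click to the cursor. */
    public double previewLength(Point cursor) {
        if (clicks.isEmpty()) return 0;
        Point last = clicks.get(clicks.size() - 1);
        return last.distance(cursor);
    }
}
```

The real class additionally converts the last-click-to-cursor segment into a Shape with the wall's thickness before handing it to setMouseShape.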

6.2.4.6 Icons

Figure 6.6: “Item info” icon in IBUSI

In order to assign a graphical icon to buttons in the application, the image files must exist in the system. The path to the image is defined when creating an icon, and could be an absolute path to the icon in the file system. However, this would require the image files to be located in the system, and would make it difficult to deploy the application (as the image files would have to be installed in the same location). Therefore, images are loaded using Java’s “resource” framework which makes it easy to get images that are located within the .jar file. This code snippet is used for this purpose:

ClassLoader classLoader = h.getClass().getClassLoader();
String iconFileName = "icons/" + name + ".png";
java.net.URL imageURL = classLoader.getResource(iconFileName);
if (imageURL != null) {
    ImageIcon icon = new ImageIcon(imageURL);
    return icon;
} else {
    return new MissingIcon();
}

Figure 6.7: Red “X” shown when an icon cannot be displayed.

The imageURL variable will point to the URL of the image, relative to the path of the class files. If the image is not found, a MissingIcon will be used instead. MissingIcon is a special icon that uses Java2D graphics to paint a red “X” indicating that the file is missing (as shown in figure 6.7).

6.3 Graphics Rendering

For drawing the floor plan, OpenGL is used. This section will cover the implementation of the OpenGL view.

6.3.1 Why OpenGL?

OpenGL provides an easy way to draw shapes on a 2-dimensional plane. It exists on all major platforms and is well supported by many different kinds of hardware. It is also hardware accelerated, which means it will perform better and faster than if the scenes were rendered in software using only the regular CPU. The OpenGL view has its own coordinate system, and the application simply refers to coordinates in this system. Many things are handled by OpenGL, for example zooming and panning the image.

A wrapper framework for OpenGL, called the Lightweight Java Game Library (LWJGL), is used to access OpenGL features. This library is very fast and gives some features for free, such as wrapper classes allowing primitives like spheres to be treated as objects. It also makes initialization of the OpenGL view a bit easier. LWJGL depends on a native library, which is available for Mac OS X, Linux and Windows. For more information, see the LWJGL "About" page [4]. Using LWJGL also makes it easier to extend the user interface to a 3D user interface at a later point in time.

6.3.2 2D objects in a 3D API

Figure 6.8: Camera setup

OpenGL can draw objects using either an orthogonal projection or a perspective projection.

When the orthogonal projection is used, the third column of the transformation matrix can be filled with zeros, effectively causing the z coordinate to be ignored. While this may at first seem suitable for 2D drawings, it also leaves some possibilities out: adding 3D-like effects would be difficult or impossible with an orthogonal projection, and there would be no way to change the perspective of the view. Thus, using an orthogonal projection would hinder future 3D extensions. Because of this, a perspective projection is used. It is initialized as follows:

GLU.gluPerspective(fieldOfWidthAngle, width/height, nearClip, farClip);

nearClip is the lower bound for the scene’s clipping, and farClip is the upper bound.

6.3.3 Zooming and panning

In order for the user to be able to "zoom" the floor plan view in or out, some OpenGL features are used. In this context, zooming means broadening the view so that more of the floor plan is visible at once. Simply by moving the OpenGL camera along the x and y axes of the OpenGL coordinate system, the illusion of panning across the building is established. Zooming is likewise implemented by moving the camera up or down (along the z axis). These ideas are shown in figure 6.9. The fact that OpenGL handles panning and zooming saves time and development costs. We have implemented zooming and panning by moving the whole scene around, using glTranslatef:

GL11.glTranslatef(eyeX, eyeY, -eyeHeight);
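Why moving the camera along z acts as a zoom follows directly from the perspective projection: at camera height h, the visible floor height is 2·h·tan(fovy/2), and the visible width is that times the aspect ratio. This back-of-the-envelope helper is not from the thesis; it only illustrates the relationship.

```java
// Sketch: visible floor width under a perspective projection, given the
// camera height, the vertical field of view, and the viewport aspect ratio.
public class ZoomMath {
    public static double visibleWidth(double cameraHeight,
                                      double fovyDegrees,
                                      double aspect) {
        double halfAngle = Math.toRadians(fovyDegrees / 2.0);
        // Doubling the camera height doubles the visible area's width,
        // which is exactly the zoom-out effect described above.
        return 2.0 * cameraHeight * Math.tan(halfAngle) * aspect;
    }
}
```

For example, with a 90-degree vertical field of view and a square viewport, a camera 1000 mm above the floor sees a strip about 2000 mm wide.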

6.4 Interaction Map

The interaction map keeps a list of every object that can be found in a given area of the floor. It is used for quickly determining whether the person intersects any object or is touching any sensors, and for determining, whenever a mouse click is performed, which objects at the cursor position should be marked as selected in the GUI.

The floor is divided into grids, and the map keeps a hashtable of every grid along with the objects it contains. Each point located on the grid is called a grid point, and every point on the floor is assigned a grid point, namely the nearest grid point down and to the left of the point inside the grid (see figure 6.10). The getGridPoint() function returns the grid point for a given point.
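The grid-point lookup amounts to rounding each coordinate down to a multiple of the grid size. This sketch uses java.awt.Point and a free-standing class for illustration; in the thesis the function lives in InteractionMap and the 1000 mm grid size is its default.

```java
import java.awt.Point;

// Sketch of getGridPoint(): every point maps to the grid corner
// below and to its left.
public class GridIndex {
    static final int GRID_SIZE_MM = 1000; // default grid size from the text

    public static Point getGridPoint(int x, int y) {
        // Math.floorDiv also rounds correctly for coordinates
        // left of or below the origin.
        return new Point(Math.floorDiv(x, GRID_SIZE_MM) * GRID_SIZE_MM,
                         Math.floorDiv(y, GRID_SIZE_MM) * GRID_SIZE_MM);
    }
}
```

For example, the point (1234, 2999) maps to the grid point (1000, 2000).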


Figure 6.9: Zooming and panning

Figure 6.10: A point and the corresponding grid point


Whenever an object, such as a wall or a sensor, is added to the map, every grid it occupies must be determined. The grid points of these grids are used as the index keys for the map, because the interaction map must return a list of objects placed in a given grid. The algorithm that determines which grids to add uses ray tracing, and how this is done depends on the geometrical shape. It is discussed in the following sections.


Figure 6.11: Tracing a line

In order to trace a line between points p1 and p2, an algorithm was created. It uses the getGridPoint() function in the InteractionMap class to determine which grid point to start at. This point, g1 in figure 6.11, is added, and the algorithm then moves towards the end grid point, following the line and adding the correct set of grids depending on the line slope. In order to find all the grid squares that a rectangle occupies, the whole shape must be traced. This is done by tracing several lines through the rectangle, as shown in figure 6.12. Other objects can be traced using their bounding rectangle; the Shape class contains a method for finding any object’s bounding rectangle. In some cases, using the bounding box means that too many grids will be added to the interaction map. This is still a safe operation, as the interaction map is just an index kept for performance reasons. However, it must always be possible to find a given object by looking up any of the points it occupies (there can be false positives but no false negatives). This ensures that an event will always be triggered if the user interacts inside the grid. The size of each grid should not be too small, as that would require many grids to house the same objects. It should not be too large either, as that would cause each grid to house many objects and thus require iteration to find the right one. The default grid size is 1000 mm, but this is easily changeable in the InteractionMap class.
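The conservative bounding-rectangle approach can be illustrated with a sketch that lists every grid point whose cell overlaps an axis-aligned bounding box. It may produce false positives, which the text above notes is safe, but never false negatives. The names are hypothetical, not the actual InteractionMap code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: conservatively enumerate the grid points whose
// cells overlap the box (x1,y1)-(x2,y2). Coordinates in mm.
public final class GridCover {
    static final int GRID = 1000; // the default grid size

    public static List<int[]> cellsForBox(int x1, int y1, int x2, int y2) {
        List<int[]> cells = new ArrayList<>();
        int gx1 = Math.floorDiv(x1, GRID), gx2 = Math.floorDiv(x2, GRID);
        int gy1 = Math.floorDiv(y1, GRID), gy2 = Math.floorDiv(y2, GRID);
        for (int gx = gx1; gx <= gx2; gx++)
            for (int gy = gy1; gy <= gy2; gy++)
                cells.add(new int[]{gx * GRID, gy * GRID});
        return cells;
    }
}
```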



Figure 6.12: Tracing a rectangle


6.5 Geometry classes

The Java package org.ibusi.geom contains classes that describe 2D geometry. Existing geometry-related classes are used wherever reasonable; however, classes representing geometrical points, lines, and shapes have been re-implemented, because the existing Java AWT classes lack features and use doubles to represent lengths. The underlying implementation often uses AWT, though.

6.5.1 Datatype overview

Only SI units or close derivatives are used throughout the system. All lengths are stored in millimeters, which gives adequate precision and allows the use of an integer datatype.

Measure  Data type              Comments
Lengths  long                   In millimeters.
Angles   double                 In degrees.
Colors   java.awt.Color
Points   org.ibusi.geom.Point   Integer-based, with extra methods.
Lines    org.ibusi.geom.Line    Integer-based, with extra methods.
Shapes   org.ibusi.geom.Shape   Not restricted to simple polygons like the Shape class in AWT.

6.5.2 Point

The java.awt.Point class is not used, because it is not integer-based. Instead, the class org.ibusi.geom.Point is implemented. It offers many of the same functions as java.awt.Point, but using different datatypes. Where viable, the underlying implementation uses AWT functions. The constructor can be given a java.awt.Point, a String containing two comma-separated integers, or a pair of integers. Point implements a function getTranslated, which is used in several places. This function returns a Point offset a given distance from the original location, with the offset rotated by a given angle around that location.
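The described behaviour of getTranslated can be sketched with plain trigonometry. The signature and return type below are assumptions for illustration; only the behaviour (offset by a distance, rotated by an angle around the original location) comes from the text:

```java
// Illustrative sketch of a getTranslated-style computation: start at
// (x, y), move dist units along the positive x axis, then rotate that
// offset by angleDeg around the original location.
public final class PointMath {
    public static long[] getTranslated(long x, long y, long dist, double angleDeg) {
        double rad = Math.toRadians(angleDeg);
        return new long[] {
            x + Math.round(dist * Math.cos(rad)),
            y + Math.round(dist * Math.sin(rad))
        };
    }
}
```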

Class           Description
Circle          Extends Shape
CircleSlice     Extends Circle
CompositeShape  Extends Shape
Line            Extends Shape
Rectangle       Extends Shape
Shape           Abstract

Figure 6.13: Shapes in org.ibusi.geom

6.5.3 Shapes

6.5.3.1 Shape

Apart from some generic get and set functions, the abstract class Shape implements an intersection function for determining whether the shape intersects another given shape. How this works is covered in section 6.5.5. The inheritance hierarchy of the shapes is shown in figure 6.13. The class also contains some abstract functions, used where a piece of functionality cannot be decided for every kind of shape; this forces the inheriting classes to implement them themselves. Among these are the functions for getting bounding boxes and circles of shapes. The specifics of these functions can be found in section 6.5.4. The last abstract function is clone, which, as the name implies, clones the shape.

6.5.3.2 Circle

The Circle shape itself is a regular circle, defined by its position and radius. The position is inherited from Shape, so the Circle class contains only the radius. Circle has a subclass called CircleSlice that represents a portion of the circle circumference, along with the lines from the circle center to the arc edges. It contains a degrees variable of type double, which determines the size of the slice, i.e. the spread angle. This angle is shown in figure 6.14 (angle II). The shape resembles a piece of a pie. The location variable still refers to the center of the complete circle.

Figure 6.14: Angle I: rotation from abstract class Shape. Angle II: degrees from CircleSlice

6.5.3.3 Line

A line can be described in many ways: by two points, by a point along with a length and an angle, or by a point and a vector. The implemented Line class can only be instantiated with two Points, but the other representations are available through get methods. Since a Line is not represented by a single location, the generic functions of the Shape class do not apply (the location variable is not used) and must be overridden. Instead, the starting point of the line is defined as its location. The functions setLocation and translate are overridden so they translate both points. The function rotate rotates the end point a given angle around the start point; setRotation resets the end point so it is back in line with the start point and then uses rotate to set the rotation to the desired angle, all using simple trigonometry. The matching get functions are also altered: getLocation returns the start point instead of the unused location variable, and getRotation calculates and returns the angle between the global x-axis and the line. Line also has a function getOrtho that returns a vector (actually a Point) that is perpendicular to the line and has a given length. The function is used when drawing wall openings, to calculate the perpendicular lines in the opening; having the orthogonal lines easily accessible makes the openings easier to draw.
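The getOrtho function described above can be sketched as a 90-degree rotation of the line's direction vector, scaled to the requested length. This is an illustrative reconstruction; the chosen rotation direction and the signature are assumptions:

```java
// Illustrative sketch of a getOrtho-style computation: a vector
// perpendicular to the line (x1,y1)-(x2,y2), scaled to `length`.
public final class LineMath {
    public static double[] getOrtho(double x1, double y1,
                                    double x2, double y2, double length) {
        double dx = x2 - x1, dy = y2 - y1;
        double len = Math.hypot(dx, dy);
        // rotate the direction vector 90 degrees counter-clockwise
        return new double[] { -dy / len * length, dx / len * length };
    }
}
```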

6.5.3.4 Rectangle

As with a line, a rectangle can be described in more than one way. The Rectangle class makes it possible to create a rectangle from either a position, a length, and a width, or from four Points representing the corners of the rectangle. The 4-point constructor is added for convenience, because many other classes return a list of corner points when returning their values. Rectangle, unlike Line, does not need to override any of the generic functions from Shape. Although a rectangle obviously has more than one point to take into account, they are all translated and rotated relative to each other, so only one reference Point is needed; every other calculation can be made from this point and the length and width of the rectangle. The point of reference is defined as the top left corner, according to the local coordinate system. The Rectangle class has a function, getCorners, which is used on various occasions. It calculates the Points of all four corners through trigonometry and returns them in an ArrayList.
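The trigonometry behind a getCorners-style function can be sketched by rotating the four local corners of a w-by-h rectangle around the reference point. The class and signature are hypothetical:

```java
// Illustrative sketch: corners of a w-by-h rectangle whose reference
// point (top-left in the local coordinate system) is at (x, y),
// rotated angleDeg around that point.
public final class RectCorners {
    public static double[][] corners(double x, double y,
                                     double w, double h, double angleDeg) {
        double rad = Math.toRadians(angleDeg);
        double cos = Math.cos(rad), sin = Math.sin(rad);
        double[][] local = {{0, 0}, {w, 0}, {w, h}, {0, h}};
        double[][] out = new double[4][2];
        for (int i = 0; i < 4; i++) {
            out[i][0] = x + local[i][0] * cos - local[i][1] * sin;
            out[i][1] = y + local[i][0] * sin + local[i][1] * cos;
        }
        return out;
    }
}
```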

6.5.3.5 CompositeShape

A composite shape is a shape that consists of one or more shapes. With CompositeShape it is possible to create advanced shapes by connecting one or more primitive shapes. The shape is initialized without any parameters; once the object is created, other Shape objects can be added using the function addShape. Each contained Shape has a location relative to the global coordinate system, not relative to the location of the composite shape (which is (0,0) initially). Whenever a CompositeShape is translated, all the contained Shape objects are translated accordingly.

6.5.3.6 New shapes

New shapes are easy to implement. Let us assume that a square is to be implemented. It would make sense to create it as a subclass of Rectangle. Here is how it could be implemented:

public class Square extends Rectangle {
    public Square(int w) {
        super(w, w);
    }

    public String toString() {
        return "[ Square @ " + getLocation() + " width " + getWidth()
                + " rotation " + getRotation() + " ]";
    }

    public boolean equals(Square s) {
        // Compare locations with equals() rather than ==, which would
        // only compare object references.
        return s.getWidth() == getWidth()
                && s.getLocation().equals(getLocation())
                && s.getRotation() == getRotation();
    }
}

This demonstrates the ease of creating new types of shapes. The Square class has been added to the program as an example, although it is currently not used.

6.5.4 Bounding shapes

Sometimes calculations have to be performed on shapes that are very complex, or that need a point on a certain part of the shape. To make these calculations easier, it helps to have just a rough sketch of where the shape might be. All Shapes implement the functions getBoundingBox and getBoundingCircle, which return a Rectangle or a Circle, respectively. This way a function does not have to distinguish between


which kind of Shape it is currently handling. getBoundingBox is used mostly in connection with the InteractionMap. getBoundingCircle was initially meant to be used for ray tracing, but that problem was solved another way, using the bounding rectangle instead. The function remains available for when it is needed. The idea behind the getBoundingBox algorithm is basically to find the x and y coordinates in a shape that are the furthest apart. CompositeShape runs through the bounding boxes of all its contained shapes and uses their coordinates to determine the minimal and maximal x and y values.
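The min/max idea behind getBoundingBox can be illustrated in a few lines. The names below are hypothetical; the real method operates on Shape objects rather than raw points:

```java
// Illustrative sketch of the min/max bounding-box idea: given a set of
// corner points, return {minX, minY, maxX, maxY}.
public final class Bounds {
    public static int[] boundingBox(int[][] points) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = Integer.MIN_VALUE, maxY = Integer.MIN_VALUE;
        for (int[] p : points) {
            minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
            minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
        }
        return new int[]{minX, minY, maxX, maxY};
    }
}
```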

6.5.5 Intersecting shapes

When it is necessary to know whether two shapes intersect, it makes sense to ask one shape if it collides with the other. However, if a Line has to know how to intersect with a Circle and vice versa, things quickly get complicated whenever some of the code has to be altered. Therefore, all the intersects functions have been collected into a helper class, ShapeIntersection. This way, only the abstract class Shape has to implement an intersection function, with just a single line of code that calls the helper class. The helper class is located in the geom package and can only be accessed by other classes in that package. For most of the simple intersections the class handles the calculation itself, such as when intersecting two circles; in that case, it is simply checked whether the two radii added together are greater than the distance between the two circle centers. For most of the more complex intersections, we have taken advantage of Java AWT’s shape functions.
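The circle-circle case described above reduces to a one-line check. This is a sketch, not the actual ShapeIntersection code:

```java
// Illustrative sketch: two circles intersect when the distance between
// their centers is no greater than the sum of their radii.
public final class CircleIntersect {
    public static boolean intersects(double x1, double y1, double r1,
                                     double x2, double y2, double r2) {
        return Math.hypot(x2 - x1, y2 - y1) <= r1 + r2;
    }
}
```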

6.6 Detecting a person - line-of-sight

For now, only MotionSensors are able to detect a person. Line-of-sight detection is used when the motion sensor has to calculate whether a person can trigger it. If the person is in range, and the detection algorithm states that there is a clear line of sight from the sensor to the person, the sensor will trigger. The “eye” of the sensor is defined as the point where it is positioned, and from there it must be possible to draw a straight line to somewhere on the person. Seen from the sensor, the person subtends a certain angle. Inside this angle, the line must end somewhere on the segment between the right-most in-line point on the person, denoted the right bound point, and the left-most point, denoted the left bound point. If that is the case, the line of sight is said to be clear. Now, a person’s shape can be a composite shape consisting of several other shapes, so it could easily prove difficult to find the bound points. This is where the bounding circle, mentioned


earlier, comes into use. The bound points of a circle are the easiest to find, since the distance from center to periphery is constant. The situation is shown graphically in figure 6.15.


Figure 6.15: Person detection issues

The primary task is to find the right bound point, so a line from the sensor center to it can be made. From there, lines can be iterated until the left bound global angle is reached. To calculate that, more information is needed. The right bound line is the tangent to the bounding circle and touches the right bound point. This means that if a line is drawn from the center of the bounding circle to the right bound point, the two lines will be perpendicular. Thus we have a right-angled triangle going from the sensor position to the center of the bounding circle to the right bound point. Since we are dealing with that specific type of triangle, all that is needed to calculate the right bound inner angle is the length of the hypotenuse, which is the distance, and the length of the opposite side, which is the radius of the bounding circle.

sin(inner) = radius / distance
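Solving this relation for the inner angle gives inner = arcsin(radius / distance), which can be sketched directly (the class name is hypothetical):

```java
// Illustrative sketch: the inner angle (in degrees) between the
// sensor-to-center line and the tangent to the person's bounding
// circle, from sin(inner) = radius / distance.
public final class BoundAngle {
    public static double innerAngle(double radius, double distance) {
        return Math.toDegrees(Math.asin(radius / distance));
    }
}
```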

Once the local inner angle is calculated, the global angle can be found by subtracting the local inner angle from the global angle of the line between the sensor position and the center of the bounding circle, which is calculated using the getAngle function of the Line object. With this angle, the right bound point can be found using Point’s function getTranslated: the point is translated the distance out along the x-axis and rotated by the right bound global angle. The distance is not exactly the distance between the sensor and the right bound point, but it is close enough for this purpose. The primary task has now been fulfilled, and the second stage of the algorithm can begin. All that is left to do is to check whether a line can be made somewhere in the angle space between the right bound global angle and the left bound global angle that does not intersect any obstacles. First, the line between the sensor and the right bound point just found is checked. From here lines


can be iterated up until the left bound is reached. Each line is checked using the function getIntersectedObjects in the interaction map, which returns all objects intersected by a shape. If any of these objects is an obstacle, the line of sight is said not to be clear. Here, the step size, i.e. the angle that is added for each iteration, should be taken into consideration. It should be small enough to find all open areas, so that a situation like figure 6.16.A does not happen. In that situation, a hole between two walls is missed because the step size is too large.

Figure 6.16: Ray tracing

The situation where a grid in the interaction map is missed, also because the step size is too large, as in figure 6.16.B, is largely irrelevant. If the circle contains the two points on either side of a grid, the closest part of the person will always be at least as close as the closest corner of that grid. That is, unless the angle between the grid’s bottom line and the right bound angle is very acute. Even in that case, the area of the grid that is not occupied by the person is so small that if that exact area is not occupied, parts of the neighbouring grids are almost certainly not occupied either, which will be caught by the algorithm. Again, for this purpose the accuracy is good enough. The conclusion of the step size discussion is that it should be at most the diameter of a person or the size of a grid in the interaction map, whichever is smaller. Once all objects in all grids found along all lines between the bounds have been checked, the ray tracing function in MotionSensor returns true if a line was found where no obstacles were in the way, and false otherwise. The method is implemented using the intersects function of the shapes.

6.7 Saving and opening files

This section covers the implementation used for saving and opening files in IBUSI. As described in the design section, the application uses XML as the format for saved files. A library called NanoXML is used to read and write the actual XML. This library is very


lightweight and simple to use, but it still has the features needed to read and write an XML file. Its object-oriented nature allows the use of inheritance, which reduces the amount of “copy and paste” programming needed. For example, the code needed to describe a motion sensor can be reduced to:

public class MotionSensorElement extends PositionableSensorElement {
    public MotionSensorElement(MotionSensor s, int range) {
        super(s, "motionSensor");
        setAttribute("range", range);
    }
}

Because the rest of the parameters are inherited from the superclasses, this could result in an XML snippet like the following:

<motionSensor range="..." ... />

The XML content is compressed with the gzip algorithm, using Java’s built-in gzip output stream:

GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(file));
XMLWriter xmlwriter = new XMLWriter(out);
xmlwriter.write(xmlElement);

In Java, I/O is handled with the stream framework. This example creates a gzip-compressing stream from a regular output stream; NanoXML’s XMLWriter then writes to this stream, causing a gzip-compressed XML file to be written to disk.
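The same stream-wrapping pattern can be demonstrated end to end with an in-memory round trip, substituting a string for the NanoXML element. This is an illustrative sketch, not IBUSI code:

```java
import java.io.*;
import java.util.zip.*;

// Illustrative sketch: write a string through a GZIPOutputStream and
// read it back through a GZIPInputStream, the same wrapping pattern
// used when saving the XML file.
public final class GzipRoundTrip {
    public static String roundTrip(String text) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(text.getBytes("UTF-8"));
        }
        try (BufferedReader r = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(buf.toByteArray())),
                "UTF-8"))) {
            StringBuilder sb = new StringBuilder();
            for (int c; (c = r.read()) != -1; ) sb.append((char) c);
            return sb.toString();
        }
    }
}
```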

6.8 Concurrency issues

On two occasions the need for threading has presented itself: when handling policies, and when handling scenarios. For the PolicyHandler, the use of the synchronized keyword is sufficient to make sure nothing can go wrong. With the scenarios, things get a little more complicated. The class TimeLine is accessed from multiple classes, for instance by TimeLinePlaybackThread, a scenario player running concurrently with everything else. However, TimeLine is also accessed from TimeLinePanel, the bottom panel that can add and remove stop points in the scenario. The problem is that, since the player runs in a thread of its own, the timeline can be modified from the bottom panel while the player is running, changing the route as the player continues on.


The damage that can be done is minor, but still a nuisance. To prevent anything from going wrong, a semaphore has been put into TimeLine. The semaphore is initialized with a single resource, so only one object at a time can hold it. The semaphore is handled through methods in the class, so all methods request it the same way. When requesting the resource, tryAcquire is called on the semaphore, which acquires the resource if it is available, without regard for any queue that might exist. This means that fairness is ignored, which is why the semaphore is not initialized with that feature. The fairness aspect can safely be ignored because the tasks performed are small and harmless, and in actual use only one user at a time accesses the feature. A graphical representation of the solution is shown in the Petri net in figure 6.17.

Figure 6.17: Time line concurrency Petri net

The player can now be started in the TimeLineLeftPanel, where the semaphore’s resource is acquired when the play button is pushed. The semaphore is returned when playback has finished, or when it is interrupted. When making changes to TimeLine from the bottom panel, the resource is likewise acquired before the changes can be performed. In both cases, if the resource is taken, nothing happens.
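The tryAcquire-based pattern described above can be sketched with java.util.concurrent.Semaphore. The class below is a simplified stand-in for TimeLine's internal handling, not the actual implementation:

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch of the described guard: a single non-fair permit,
// acquired with tryAcquire so callers that lose simply do nothing.
public final class TimeLineGuard {
    private final Semaphore lock = new Semaphore(1); // one resource, non-fair

    /** Runs the task only if the single resource is free; otherwise
     *  does nothing, like the panels described above. */
    public boolean runExclusive(Runnable task) {
        if (!lock.tryAcquire()) return false;
        try {
            task.run();
            return true;
        } finally {
            lock.release();
        }
    }
}
```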

Chapter 7

Test

The single most used tool for testing the program has been the GUI: drawing objects and making sure they look as they are supposed to. Examples of how we have tested can still be found in the org.ibusi.test package. The TestLauncher class is instantiated in FloorViewCanvas, so tests are easily performed. Each type of test has its own class, instantiated in TestLauncher, so any tests that should not be performed are simply commented out there. Most of the tests are visually verifiable. E.g., bounding boxes are tested just by making a shape, drawing it, retrieving the bounding shape, drawing that, and then looking at the GUI to verify that it actually contains the entire shape. Below is an example where the bounding box of a circle is found and drawn.

public static void circleBoundingBox() {
    Circle circle = new Circle(2000);
    circle.translate(new Point(-9000, 1000));
    GLShapeDrawer.drawShape(circle, Color.RED);
    GLShapeDrawer.drawShape(circle.getBoundingBox(), Color.GREEN);
}

Some of the models are visually verified by adding reference circles where something is supposed to be, and then confirming that it is actually there. For instance, when testing walls, small circles are drawn where the corners are expected to be. If the corners turn out not to be there, the code is debugged before trying again. This continues until the corners of the wall are centered in the circles.


Some aspects are mathematically verifiable, like some of the functions of a line. For instance, the calculated angle of a line is verified by specifying a line and an expected result, and then automatically comparing the expected result to the calculated one. An example of this is seen below.

System.out.println("\nLine parallel to y-axis, direction up");
line = new Line(new Point(0,0), new Point(0,200));
expectedAngle = 90.0;
calculatedAngle = line.getRotation();
System.out.println("Line: " + line);
System.out.println("Expected angle: " + expectedAngle);
System.out.println("Calculated angle: " + calculatedAngle);
System.out.println("Result: " + (calculatedAngle == expectedAngle ? "PASSED" : "FAILED"));

Chapter 8

Documentation

Thorough documentation is supplied with the application, both for end users, potential users, and developers. Along with this report, a complete Javadoc API reference has been created. It is located on the IBUSI web page, at http://ibusi.org/docs. The Javadoc reference provides future developers and project maintainers with detailed descriptions of the project’s packages, classes, interfaces, and methods. On the web page, a list of frequently asked questions (FAQ) can also be found. It provides potential users with answers to the most typical questions that arise when hearing about the project, and is written in a style that requires minimal technical knowledge, except for the few questions that are of a technical nature themselves. For users of IBUSI, a User’s Guide has been written. It provides information about how to start IBUSI and perform the basic tasks, and also includes a section written mainly for more advanced users, describing how to set up custom rules (policies) in the building, with a little information on how the system works internally. It is distributed in PDF and HTML formats, making it easily accessible for all users. The User’s Guide is included with this report; please see appendix D. All documentation is located at this URL:

http://ibusi.org/docs


8.1 Commenting policy

The code is commented according to the commenting policy outlined in “Proper Linux Kernel Coding Style”: [1] “Comments are good to have, but they have to be useful. Bad comments explain how the code works, who wrote a specific function on a specific date or other such useless things. Good comments explain what the file or function does and why it does it. They should also be at the beginning of the function and not necessarily embedded within the function (you are writing small functions, right?).”

8.2 Project management

Even though the development of IBUSI has been carried out by only two people, some work has been done to prepare for development on a somewhat larger scale.

8.2.1 Web Page and Online Documentation

The web page ensures that all project members are informed about the project and have access to current API documentation. The web page is hosted on a virtual server running Linux, opening future possibilities for running server socket applications or interactive development tools such as the project management system Trac.

8.2.2 Subversion Repository

The version control system Subversion has been used from the beginning of the development process. The repository is located on the IBUSI web server at this URL:

http://ibusi.org/svn

Anyone can check out the repository. On UNIX-like systems, it can be checked out with this command:

svn checkout http://ibusi.org/svn ibusi

This will place the repository in a directory named “ibusi” in the current working directory. On Windows, an SVN client such as TortoiseSVN1 can be used.

TortoiseSVN web page: http://tortoisesvn.tigris.org/

8.2.3 Milestones

Project work was initiated in the fall of 2007 and finished in April 2008. Please note that some of these dates are approximate, and may vary by up to 2-3 days from the actual dates.

Milestone  Task                                        Estimate     Actual date
1          Project start                               November 1   November 1
1.1        Requirements outlined                       November 14  November 10
1.2        Model classes designed and implemented      December 1   December 14
2          Basic GUI and OpenGL view implemented       January 1    January 5
2.1        Mockups of final GUI done                   January 1    January 14
2.2        Usability tests performed                   January 14   January 14
2.3        GUI for placing new objects implemented     January 14   January 25
3          Policy Handler implemented                  February 1   February 10
3.1        Policy Handler GUI implemented              February 14  March 5
3.2        Scenario (Timeline) editor done, with GUI   February 20  March 10
4          Feature freeze                              March 1      March 15
5          Procedural tests performed                  March 20     March 25
6          Documentation done (including this report)  March 29     April 1
7          IBUSI Released                              April 1      April 1

The choice of release date is in no way related to the common April Fools’ day.


Chapter 9

Results

9.1 Accomplishments

We have created a simulator of buildings that can have a certain intelligence. This includes:

• Comprehensive models for items in a building:
  – Walls
  – Doors, windows, and other openings
  – Furniture
  – Electrical appliances
  – Different floors
  – A person
• A graphical floor plan view of these features, made in OpenGL, with the ability to zoom and pan the view and switch color schemes
• Graphical interface for placing the person, walls, sensors, and appliances
• Graphical interface for attaching sensors to any given object (albeit these cannot, at the moment, perform any actions)
• Graphical inspector-style interface for editing any positionable object (such as the position, angle, and item-specific properties)
• Graphical interface for making a scenario (route) that the person should follow around the house


• Ability to play back scenarios, and edit stop points
• Ability to freely roam around the building and watch the results
• System for managing the policies that apply in the building
• Graphical user interface for adding new policies, along with conditions that must apply and actions that should be carried out when the conditions are met
• Export of the project to XML, and re-import from an XML file
• Tests of geometry-related classes, and further tests via the GUI
• Comprehensive usability tests based on paper mock-ups
• Adjustment of the GUI based on the above usability tests
• Documentation of the above-mentioned items
• Web page with resources, http://ibusi.org

9.2 Other programs

There are no other programs, that we know of, with the same intentions or purpose as this one. The closest things to compare it to are other programs in which you can create a floor plan of your house.

ConceptDraw

One program for creating floor plans was mentioned by a participant in the usability test. A trial version of the program, called ConceptDraw Pro, was downloaded to get a quick overview of what it can do. Initially it does not look like a program for creating floor plans; it appears to be more of a generic tool for drawing connected figures, and it took a while to figure out how to make it draw building plans. You can go to the file menu and select Template gallery, then select Building plans and choose which kind of building plan is needed. We have only tried out the home plan, which is the one that most resembles this program. Once the home plan environment is loaded, it is a pretty basic choose-and-insert program. Basic is not meant in a bad way; the program contains pretty much what one might expect from such a program. Items that can be inserted are found in a menu on the left side of the screen, split into types of items. To insert a wall, for instance, you double click the wall icon. A fixed-size wall is then inserted, on which you can change the length, but for some reason not the width.


ConceptDraw has a lot of different objects to insert, actually pretty much anything you can imagine, but some of them are rather useless. For instance, it can insert “T-rooms” and “L-rooms”, rooms shaped like the letter they are named after, but they only scale one part of the room when you try to change their size. So unless you have a room built exactly like those, they are of little use: the time it takes to break these predefined rooms apart and resize the individual parts is at least the same as the time it would take to build the room from scratch. However, most of the insertable objects that are not predefined rooms scale very well, so maybe scaling of rooms was simply never implemented. A good example of objects having no real connections is the insertion of doors and windows. To insert these you also double click an icon, after which you can move the door or window freely around the room; they do not have to be connected to a wall. All objects work exactly the same way, which is both good and bad. Good, because once you have inserted and altered one item, you pretty much know how to alter all items. Bad, because the connectivity is gone: anything can be put anywhere, which does not always correspond to the real world. The program implements snap-to, as well as fixed angles of rotation that jump 45 degrees when holding down the shift button, which is very nice. The canvas is pretty small initially, though its size can be set in the preferences, and it can only be moved around using the scrollbars. All in all it is a pretty good program, but it is very obvious that it is not specialized in any specific area, and instead tries to be as all-round as possible.

Elsparefonden

On Elsparefonden’s home page1 you can find a feature they call “Min bolig” (“My home”). Here you can draw a very simple floor plan of your home, the emphasis being on “simple”. It has one type of door, one type of window, and one type of wall. This is very suitable for its purpose, but since it is that simple, the division into types seems over the top: there is an entire menu for that one door, another entire menu for the one window, and the button for drawing walls has been put all the way on the other side of the screen. It seems like they have either taken a larger existing program and removed a lot of insertable items, or perhaps they are planning on expanding. Walls are inserted by clicking “Tegn væg” (“Draw wall”), then pressing the mouse button somewhere on the canvas where you want the wall to start and dragging to where you want it to end. If you just click somewhere, a wall with a default length and angle is inserted; it can then be altered by clicking “Vælg og flyt” (“Choose and move”), clicking the wall to select it, and then dragging either end to change its length, or somewhere in the middle to move it. A door or a window is drawn by selecting the appropriate group and then dragging the door or window from the menu onto the canvas. In this application, doors and windows can also

http://minbolig.elsparefonden.dk/

76

Results

be placed anywhere on the canvas. The canvas itself is fixed size, like in ConceptDraw, but is big enough to draw very large buildings. At the moment this application does not imlement snap-to or fixed angles.

Comparison

Each of the three programs, IBUSI, ConceptDraw and “Min bolig”, does most of the tasks differently. The most basic part of inserting is actually getting an object to insert, and even here there are three different approaches. One is to double-click an icon, after which the object is placed somewhere on the canvas; another is to drag an object from the menu onto the canvas; and the third is to click the icon and then click the canvas where the object should be placed. IBUSI implements the latter, because this is what we feel is most intuitive. ConceptDraw has chosen to insert all objects the same way, which is nice for the user. “Min bolig” has one way of inserting doors and windows and another for inserting walls, both described earlier. IBUSI, like ConceptDraw, has a single way of inserting objects: select what to insert, then click the canvas to place it. In IBUSI, this consistency is considered important, because once a user has inserted one object, they should intuitively know how to insert any other.

Walls in IBUSI are inserted by clicking a button and then clicking the canvas where the wall should start. Each subsequent click creates a new wall in a series of connected walls. This way the user does not have to worry about making the inserted walls overlap, because they automatically do.

When it comes to inserting doors and windows, ConceptDraw and “Min bolig” have very similar approaches: in both cases doors and windows can be placed anywhere, even where there is no wall, though in “Min bolig” they snap to a wall if they are close enough to it. Of these, the method from “Min bolig” would have been preferred for IBUSI. However, another method, truer to real life, was chosen: when doors and windows are selected for insertion, they can only be placed on a wall.
IBUSI does not, at the moment, implement snap-to, but it is one of the extensions mentioned later on. The canvas of IBUSI resembles that of “Min bolig” the most: it has a fixed size, but one chosen to be big enough for practically all buildings, so the user again has one less thing to worry about. All in all, IBUSI probably resembles “Min bolig” the most, mainly because of its simplicity and some of the implementation choices. However, the way doors and windows are placed is unlike anything seen, as of yet, in any of the examined programs.

9.3 Bugs and shortcomings


Not everything mentioned in the design has been implemented, mostly because of time constraints. Some omissions are more significant than others; this section describes the more important ones.

Sensors

Attachable sensors In the design phase, the concepts of attachable sensors and motion sensors were extended to more specific sensors. There was supposed to be at least one more specific attachable sensor, limited to use on only one certain item type, such as doors. However, since the focus of the project lies more on motion sensors, the specialized attachable sensors were left out.

Motion sensors There was also supposed to be an implementation of a physical motion sensor; however, time did not allow for a thorough test and analysis of such a sensor. This is not really a shortcoming as such, since the generic motion sensor can be adjusted after positioning to emulate almost any common motion sensor. For instance, to model a garage-door sensor with a range of 10 meters and a detection angle of 45 degrees, these two numbers are simply typed into the settings panel, and the model is done. However, it is not possible to set the probability of detection, which is a possible extension mentioned later on.
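As a rough illustration of the kind of check the generic motion sensor performs, the range-and-angle model described above can be sketched as follows. The class and method names are hypothetical, not IBUSI's actual API:

```java
// Hypothetical sketch of a generic motion sensor's detection test:
// a point is detected if it lies within `range` metres of the sensor
// and within `angle` degrees of the sensor's facing direction.
public class MotionSensorSketch {
    private final double x, y;        // sensor position (metres)
    private final double range;       // detection range (metres)
    private final double direction;   // facing direction (degrees)
    private final double angle;       // total opening angle (degrees)

    public MotionSensorSketch(double x, double y, double range,
                              double direction, double angle) {
        this.x = x; this.y = y;
        this.range = range; this.direction = direction; this.angle = angle;
    }

    public boolean detects(double px, double py) {
        double dx = px - x, dy = py - y;
        if (Math.hypot(dx, dy) > range) return false;   // out of range
        double bearing = Math.toDegrees(Math.atan2(dy, dx));
        // normalise the angular difference into [-180, 180)
        double diff = Math.abs(((bearing - direction) % 360 + 540) % 360 - 180);
        return diff <= angle / 2;   // inside the angular sector
    }
}
```

The garage-door example above would then be `new MotionSensorSketch(x, y, 10, direction, 45)`.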

Actuators In the design, the concept of actuators was mentioned, mainly so that it could be implemented if time permitted. Sadly, it did not.

Usability Among the issues discovered during the usability test, there was one concerning the settings of a motion sensor: users did not understand what the terms “range”, “angle” and “direction” covered, mainly the last two. The conclusion was to add icons to the settings. The icons were made, and can be seen in version two of the GUI prototype in appendix A.5. However, they were never implemented, mainly because of the difficulty of setting this up with Java AWT. After spending some time on it, it was concluded that other things were more important, so the issue was dropped.

Shapes Shapes have been implemented to the degree that we need them. This means, for instance, that ShapeIntersection cannot calculate intersections between all shapes, only the ones we actually use; if an unimplemented intersection is requested, an exception is thrown. However, the missing functions are very easily implemented if needed. The CompositeShape is also somewhat short on functionality: it cannot be rotated like the other shapes can, mainly because this is not needed in the current implementation. If the functionality is ever needed, it just has to be implemented in the setRotation function. Had it been a simple implementation, it would have been done. When rotating each shape in the composite, the shape must be rotated around its own center, otherwise the composite as a whole will not keep its appearance. However, not all shapes rotate around their center, so when rotating around the center of the composite shape, this difference has to be corrected for. Getting this right would take a lot of time, better spent on more important issues that are actually used in the program.
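The correction described above, rotating each child shape in place while its centre orbits the centre of the composite, boils down to rotating a point around an arbitrary centre. A minimal sketch with hypothetical names, not the IBUSI Shape framework:

```java
import java.awt.geom.Point2D;

// Illustrative sketch of the rotation correction described above:
// each child's centre must orbit the composite centre while the
// child also rotates in place around its own centre.
public class CompositeRotationSketch {
    /** Rotate point (px, py) by `degrees` around centre (cx, cy). */
    public static Point2D.Double rotateAround(double px, double py,
                                              double cx, double cy,
                                              double degrees) {
        double r = Math.toRadians(degrees);
        double dx = px - cx, dy = py - cy;
        return new Point2D.Double(
            cx + dx * Math.cos(r) - dy * Math.sin(r),
            cy + dx * Math.sin(r) + dy * Math.cos(r));
    }
}
```

A CompositeShape.setRotation would apply this once to each child's centre (around the composite centre) and additionally rotate each child in place, compensating for children whose own rotation pivot is not their centre.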

Multiple floors A given building was supposed to be able to have several floors. The structure of the program allows it: floors can be added and deleted, but only by changing the source code. There is also a selection box in the program that lets you choose which floor to work on, though for obvious reasons only one floor exists at the moment. A button for creating new floors has also been made, but the actual creation has not been implemented. The feature is fairly easy, though time-consuming, to add, and since it was not considered important, it has not been done.

9.3.1 General

A file structure for containing objects that can be placed inside the building was also supposed to be created, but due to the minimal number of objects, the time was better spent elsewhere.

Bugs After implementation ended, some bugs were naturally discovered. Most of them have been fixed, but a few probably remain. The bugs we are currently aware of are these:


When an opening has been added to a wall, it is possible to resize or move it so that it overlaps other openings or exceeds the ends of the wall. As long as this situation cannot arise when placing an opening, but only when changing it, it is considered a minor problem. There are also some Java GUI issues: depending on the platform the program is run on, the font size may vary, which can cause some texts to be partly cut off.
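A fix for this bug would be to run a validity check when an opening is resized or moved, not only when it is placed. A sketch of such a check, with openings represented as offset/width pairs along the wall; this is hypothetical, not the actual IBUSI code:

```java
import java.util.List;

// Hypothetical validity check for the bug described above: an opening
// (given by its offset along the wall and its width) must stay within
// the wall and must not overlap any existing opening.
public class OpeningCheckSketch {
    /** existing: list of {offset, width} pairs along the same wall. */
    public static boolean fits(double offset, double width,
                               double wallLength, List<double[]> existing) {
        if (offset < 0 || offset + width > wallLength) return false;
        for (double[] o : existing) {
            // two intervals overlap unless one ends before the other starts
            if (offset < o[0] + o[1] && o[0] < offset + width) return false;
        }
        return true;
    }
}
```

Running this check from the resize and move handlers, and rejecting the change when it returns false, would close the loophole.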

9.4 Extension options

As the project developed, more and more ideas surfaced, and a lot of them had to be left out because of time constraints. Most of these ideas were lurking in the background during the whole process, so care was taken to make sure that extending the program would be possible. Below are some of the extensions and features we would have liked to build into the program, but did not have time for.

Multiple scenarios It would be a very nice feature to have multiple scenarios, instead of having to start over each time you want to simulate a new one. As it is now, you can save the project, and the scenario will be saved with it, so you can have multiple scenarios on the same building that way. However, if the building is changed, it must then be changed in all saved projects.

Record scenario Since there are no attachable sensors as such, the scenario does not work as described in the design, where a user is prompted when in range of an object on which some events can be performed. But once this is implemented, recording the person’s path and the choices he makes would be a nice feature to have.

Handle multiple items The program currently lets you select and move a single item. But due to the implementation of shapes and points, which all support translation, it should be possible to translate several selected items together. It is already possible to select multiple items by holding down the shift key while clicking, so moving the selected items is simply a matter of implementing a settings panel for this case, and then calling the “translate” function on all selected items.
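The multi-item move described above could be sketched like this; the Translatable interface and Item class are illustrative stand-ins for IBUSI's actual types:

```java
import java.util.List;

// Sketch of the multi-item move: since every placeable item supports
// translation, moving a selection is just one translate call per item.
public class MultiMoveSketch {
    interface Translatable {
        void translate(double dx, double dy);
    }

    /** Minimal item type used for illustration. */
    static class Item implements Translatable {
        double x, y;
        public void translate(double dx, double dy) { x += dx; y += dy; }
    }

    public static void moveAll(List<? extends Translatable> selected,
                               double dx, double dy) {
        for (Translatable t : selected) {
            t.translate(dx, dy);   // every item shifts by the same offset
        }
    }
}
```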


Automatic interior catalog When more sensor types and interior items are implemented, it would be nice if they were dynamically put into the catalog in the bottom menu, instead of having to make a button for each item. This was mentioned in the design, but due to the small number of items that can be added, it was not implemented.

Dynamic sensor formulas To better model how a given motion sensor works, the area that the sensor covers, and the probability of detection, could be modelled by a mathematical function. The function for probability of detection would map the distance from the sensor to a probability of being detected, i.e. a double value between 0 and 1, where 0 means never detected and 1 means detected every time. However, it must then be assumed that the probability does not vary depending on how close the person is to the angular edges. If that assumption does not hold, the function must instead take a vector from the sensor to the person and return the probability. In the latter case, the area the sensor covers and the probability of detection can be described by the same formula; in the former case, the coverage area would have to be represented by a separate mathematical formula.
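Under the stated assumption that detection depends only on distance, such a formula could, for example, be a linear falloff between full detection and the maximum range. The shape of the falloff here is purely illustrative:

```java
// Illustrative distance-to-probability formula for the extension
// described above: certain detection up to `fullRange`, then a linear
// falloff to zero at `maxRange`. The falloff shape is an assumption,
// not measured sensor behaviour.
public class DetectionProbabilitySketch {
    /** Map distance (metres) to a detection probability in [0, 1]. */
    public static double probability(double distance, double fullRange,
                                     double maxRange) {
        if (distance <= fullRange) return 1.0;   // always detected
        if (distance >= maxRange) return 0.0;    // never detected
        return (maxRange - distance) / (maxRange - fullRange);
    }
}
```

A real sensor model would substitute a measured or manufacturer-specified curve for the linear segment.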

Keep focus on person Currently, when the person walks out of the visible area, you have to move the canvas yourself to see where you are going. It would be nice if focus were automatically put on the person when he exits the visible area: the canvas could either simply be moved, or the “zoom to fit contents” function could be applied.

Show / hide scenario path Having the ability to hide the path of the person in the scenario is not strictly necessary, but nice to have if you want to display a scenario to other people.

Snap-to / grid A very useful feature is snap-to functionality, where you limit where an item can be placed or moved to, or at which angles it can be placed. This makes adding walls to an existing building much easier, and you can ensure from the start that walls are perpendicular, if you want them to be.


In the spirit of snap-to, an option for showing and hiding a grid is very useful for gaining perspective on what you are building.
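Both flavours of snap-to mentioned above amount to rounding a coordinate or an angle to the nearest multiple of a step value. These helpers are an illustrative sketch, not actual IBUSI code:

```java
// Snap-to as rounding: coordinates snap to the grid spacing,
// angles snap to a fixed step (e.g. 45 degrees).
public class SnapSketch {
    /** Snap a coordinate to the nearest multiple of the grid spacing. */
    public static double snap(double value, double gridSpacing) {
        return Math.round(value / gridSpacing) * gridSpacing;
    }

    /** Snap an angle (degrees) to the nearest multiple of `step`. */
    public static double snapAngle(double degrees, double step) {
        return Math.round(degrees / step) * step;
    }
}
```

Applying `snap` to both coordinates of every placement click, and `snapAngle` during rotation, would give IBUSI the same behaviour ConceptDraw shows with the shift key held down.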

Shortcuts More shortcut keys would definitely be an improvement. When using programs like this, one often tends to reach for certain familiar keys. For instance, the escape key could be used as a means to cancel whatever you are doing: when placing items and wanting to stop, you press escape, and the bottom menu returns to its initial state. The delete key is also very useful, especially in the floor plan view, where pressing it could delete all selected items, or in scenario mode, where it could delete the chosen stop point. Backspace is less commonly used in these programs, but could be used for stepping back in a sequence of consecutive actions; for instance, when building consecutive walls, pressing backspace could delete the last place you clicked, removing the last wall.
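In Swing, which IBUSI uses for its user interface, such shortcuts are typically wired up with key bindings. A sketch for the escape key, where `resetBottomMenu` is a hypothetical callback standing in for whatever the real handler would do:

```java
import javax.swing.AbstractAction;
import javax.swing.JComponent;
import javax.swing.KeyStroke;
import java.awt.event.ActionEvent;

// Sketch of an Escape shortcut using Swing key bindings. The
// `resetBottomMenu` callback is illustrative, not IBUSI's actual API.
public class ShortcutSketch {
    public static void bindEscape(JComponent component, Runnable resetBottomMenu) {
        // map the ESCAPE keystroke to a named action...
        component.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
                 .put(KeyStroke.getKeyStroke("ESCAPE"), "cancel");
        // ...and the named action to the handler
        component.getActionMap().put("cancel", new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                resetBottomMenu.run();   // e.g. return the menu to its initial state
            }
        });
    }
}
```

Delete and backspace would follow the same pattern with `"DELETE"` and `"BACK_SPACE"` keystrokes and their own actions.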

Appliances At the moment only one appliance has been implemented: a floor lamp. It would be good to be able to interact with other types of appliances too. Luckily, this is not very difficult because of the way the program has been designed.


Chapter 10

Conclusion and recommendations

Through the five-month process, a user-friendly program for simulating intelligent buildings has been developed. The program allows the user to create a floor plan of the shell of their home, consisting of walls, doors and windows. It also lets the user place motion sensors and lamps, and set up relationships between these with the help of policies. The simulator has been subjected to usability tests, on the basis of which several alterations have been made to increase usability. The user is able to walk around in the home, either following a predefined route or by moving a virtual person around with the arrow keys on the keyboard. They can then trigger sensors and have lamps turned off and on, depending on how the policies are defined.

The majority of the available time and resources has gone into creating a well-designed framework that is easily extendable. This has meant that the number of insertable objects is minimal. The chosen objects are basic ones: the aforementioned walls, doors and windows. Also implemented are a generic motion sensor, an attachable sensor and a floor lamp. This, however, should be sufficient for a user to get acquainted with the Rule Manager, i.e. the part of the program where policies are set up. Other appliances and so forth can always easily be added later, because the structure invites easily added “plug-in” classes. A small framework for handling redoing and undoing has also been implemented; currently only undoing is used, and only for building walls, but the framework is ready to be applied in all parts of the program.

The IBUSI project is very well suited for further development, partly because of the framework design, but also because of the extensive documentation and API reference material, which can be found on the home page. For further development, it is recommended to look at the extension possibilities mentioned in earlier chapters. It is also recommended to subject the finished program to another usability test, as issues found in prototypes may differ from issues found in the actual program.

Bibliography

[1] Kroah-Hartman, Greg, Proper Linux Kernel Coding Style, Linux Journal, http://www.linuxjournal.com/article/5780, July 1, 2002.

[2] Molich, Rolf, Usable Web Design, 1st edition, 2007.

[3] Nielsen, Jakob, Usability Engineering, 1993.

[4] “Caspian”, About LWJGL, LWJGL Home Page, http://lwjgl.org/about.php

[5] Krzyska, Cyryl, Smart House Simulation Tool, Lyngby, DK: IMM, 2006.


Appendix A

Usability

A.1 Pre-task session interview questions

1. Do you know of any other programs resembling this?
2. Which functionalities do you expect from the program?
3. Which functionalities would you be glad to find?
4. Do you own a computer?
5. Which programs do you use (most, if many)?
6. Name three keyboard short-cuts used in most programs.

A.2 Participant tasks

Task 1 Build a room of your choice. Tests intuitiveness of room building.

Task 2 (If not done in task 1) Add doors and windows. Tests intuitiveness of door and window addition.

Task 3 Add a door somewhere that is 1.3 meters (130 cm) wide. Tests assumed unit choice, and sets up task 4.


Task 4 You are unhappy with the size of one of the doors, and would like to change it. Tests intuitiveness of how to change item settings.

Task 5 You would like to know when someone goes through a door. What do you do? Tests intuitiveness of sensor adding.

Task 6 You would like to know when someone moves anywhere inside a room. What do you do? Sets up the next task.

Task 7 You actually just want to know if anyone moves in the upper half of the room. Tests intuitiveness of sensor settings.

Task 8 You would like to see what happens when you walk around inside the room. Tests intuitiveness of button naming and positioning, and the method of moving the person (keyboard).

Task 9 (If not done in 8) “Walk” around in and out of the room. Tests intuitiveness of the method of moving the person (keyboard).

Task 10 You would now like to create a pre-defined route for the person to follow. Tests intuitiveness of button naming and positioning. (The participant just has to find the right tab, and realize it.)

Task 11 (If not done in 10) Create a pre-defined route of 4 or 5 steps for the person to follow, and start him off. Tests intuitiveness of timeline creation.

Task 12 After the person has gone to step 2, you actually want him to go somewhere else before going to step 3. Tests intuitiveness of timeline alteration.

Task 13 You would like to alter the route, so the person changes direction fewer times. Tests intuitiveness of timeline alteration.

A.3 Post-task session interview questions

1. Does the program live up to your expectations?


2. How did the program compare to other similar programs you have used?
3. Was it easy to navigate the program?
4. Was it obvious what the different buttons did, and what labels and settings represented?
5. Name three good things about the program.
6. Name three things that could be improved.

A.4 GUI prototype version 1

Figure A.1: bottom menu builder

Figure A.2: bottom menu builder app

Figure A.3: bottom menu builder app lamp

Figure A.4: bottom menu builder app oven

Figure A.5: bottom menu builder app siren

Figure A.6: bottom menu builder sensors


Figure A.7: bottom menu builder sensors attachable

Figure A.8: bottom menu builder sensors

Figure A.9: bottom menu builder sensors light

Figure A.10: bottom menu builder sensors motion

Figure A.11: bottom menu builder wo

Figure A.12: bottom menu builder wo door


Figure A.13: bottom menu builder wo opening

Figure A.14: bottom menu builder wo walls

Figure A.15: bottom menu builder wo window

Figure A.16: bottom menu simulator

Figure A.17: bottom menu timeline frame

Figure A.18: bottom menu timeline frame chosen


Figure A.19: frame

Figure A.20: left menu builder


Figure A.21: left menu simulator

Figure A.22: left menu timeline

Figure A.23: pu approve delete floor

Figure A.24: pu approve delete item


Figure A.25: right menu

Figure A.26: right menu builder object

Figure A.27: right menu builder opening


Figure A.28: right menu builder opening door

Figure A.29: right menu builder opening window

Figure A.30: right menu builder sensor


Figure A.31: right menu builder wall

Figure A.32: right menu object

Figure A.33: right menu opening


Figure A.34: right menu opening door

Figure A.35: right menu opening window


Figure A.36: right menu sensor

Figure A.37: right menu wall

A.5 GUI prototype version 2

Figure A.38: frame v. 2

Figure A.39: left menu timeline v. 2


Figure A.40: right menu builder sensor v. 2

Figure A.41: right menu builder wall openings v. 2


Appendix B

Glossary

• AWT Graphics and User Interface library part of Java. Contains many geometry-related classes.
• Actuator Device which can perform an action on an appliance or furniture.
• Affine Transform Transformation which preserves parallelism of lines.
• Alarm System System detecting the presence of intruders. These systems typically alert in case of intrusion.
• Appliance Electrical unit such as a lamp, television or oven.
• Artificial Intelligence Algorithm or procedure for making decisions in a logical and intelligent way.
• Attachable Sensor Sensor that can be attached to any positionable object, and act when events occur on the positionable object.
• Builder Program mode in which the user can place objects on the floor plan.
• Color Scheme Template for colors used in the main floorplan view.
• Composite Shape Shape consisting of several other shapes.


• ConceptDraw Diagramming software supporting the creation of flowcharts, floor plans, and lots more.
• Convenience system System making daily-life operations easier or more convenient, such as light turning on automatically.
• Copenhagen Capital of Denmark, Europe.
• Crossbow Company making networked sensors (also known as motes).
• Elsparefonden Danish foundation promoting conservation of power.
• Floorplan Map of a house, containing the walls, windows, doors, appliances and sensors.
• Free roam Mode in which the user is able to use the arrow keys to move the person freely around the floor plan.
• Grid Point Point used as a key in the interaction map.
• House Rule See Policy
• IBUSI Application used to simulate intelligent buildings.
• Intelligent Building Simulator See IBUSI
• Interaction Handler Mediator between the graphical user interface and the model classes.
• Interaction Map Cache which maps a given 2D point to a list of objects. Uses a hashtable to store the objects.
• Javadoc Tool made by Sun Microsystems for creating Java documentation in HTML format.
• Java Object-oriented programming language developed by Sun Microsystems.
• LWJGL Lightweight Java Game Library is a program library for Java, focused on game development. It is a wrapper API for OpenGL.


• LaTeX Markup language and document preparation system used for creating this report.
• Lightweight Java Game Library See LWJGL
• Linux Free, UNIX-like operating system used in many different settings and on many different platforms.
• Logistics Management System System for tracking and managing assets such as cars or deliveries.
• MSP410 Motion sensor made by Crossbow. Has mesh network capabilities.
• Mac OS X NEXTStep-based operating system used on Apple Macintosh computers.
• Min Bolig Program developed by Elsparefonden, simulating a smart house.
• Mote See MSP410
• Motion Sensor Sensor that triggers whenever a person enters its range, or moves about within the range.
• Mouse Shape 2D shape that follows the mouse cursor within the floorplan view.
• NanoXML Lightweight XML library written in Java.
• Netto Supermarket chain commonly found in Denmark and some other European countries.
• OpenGL Open API for rendering 2D and 3D graphics.
• Petri net Special type of diagram used to model concurrent systems.
• PIR Passive infrared. Term used about motion sensors.
• Policy A rule consisting of a number of conditions and a number of actions to be executed when the conditions are met.
• Positionable Sensor Sensor that can be placed in 2D space on the floor plan.


• Positionable Interface used in IBUSI. Any sensor or appliance that can be placed in 2D space is positionable.
• Power Savings System System aiding in saving power. An example could be a system turning off lamps when no motion has been detected for a while.
• Pseudo Sensor Global sensor that does not exist in the 2D space. An example could be a sensor that gathers general weather information in the area.
• Raytracing By tracing the path of a line, different properties can be calculated. IBUSI uses raytracing to find out which grids a line passes through.
• Rick Astley A man who is never gonna give you up. A man who is never gonna let you down. A man who is not gonna let you stand around, and desert you.
• SVN See Subversion
• Scenario Predefined set of actions that make up a person’s behaviour. In practice, a route for the person to walk.
• Selectable Common denominator for any object that can be selected in the graphical user interface.
• Semaphore Method with which an application can coordinate related operations using a flag.
• Sensor Device capable of sensing properties of the surroundings, and communicating them to other units.
• ShapeIntersection Utility class aiding in calculating whether two shapes intersect or not.
• Shape Class representing a shape. Main class of the IBUSI Shape framework.
• Smart House House with some sort of intelligence or automation.
• Stop point A point in the scenario where the person’s position is defined. A scenario is composed of multiple stop points.
• Subversion Free and Open Source version control system.


• Swing Java GUI toolkit developed by Sun Microsystems. Used in IBUSI for rendering the user interface.
• Theme See Color Scheme
• Timeline See Scenario
• Transformation Matrix Short way of describing an object’s position, rotation and size. Used internally in OpenGL when rendering objects.
• UML Unified Modelling Language. Standard way of drawing diagrams of different nature, e.g. class diagrams.
• UNIX Operating system. One of the first multi-user, multitasking operating systems. Commonly used on servers and in academic settings.
• Windows Widely used operating system developed by Microsoft.
• XML Standard for composing structured documents. Subset of the SGML language.
• bash.org Online database of popular or humorous quotes.
• eXtensible Markup Language See XML
• gzip Compression algorithm. Short for GNU zip, a free replacement for the former compress program used in some early UNIX systems.


Appendix C

Example Project XML

A building with two rooms. Can also be found on http://ibusi.org/resources.