Object-Oriented Specification of User Interfaces

Ari Jaaksi
Nokia Telecommunications, Network Management
Hatanpäänvaltatie 36 B, 33100 Tampere, Finland
[email protected]

Published in Software Practice & Experience, November 1995.

Address: Ari Jaaksi, Nokia Telecommunications, P.O. Box 779, FIN-33101 Tampere, Finland
Telephone: +358-31-2407 239
Fax: +358-31-2407 222
E-mail: [email protected]

Keywords: user interface specification, object orientation, OMT++, reuse, MVC++

Abstract

This paper presents an object-oriented approach to the specification of graphical user interfaces. Specification starts with the analysis of the end user's operations, and the user interface is then designed on the basis of this analysis. Operation analysis is followed by structure and component specification, which present the dialogue structure of the application and the contents of each dialogue. Visualisation produces the final screen layouts, and task specification documents the usage of the user interface for the purpose of creating user's guides. The method presented in this paper makes it easier for a designer to take the end user's needs into account, although it does not automatically guarantee good-quality user interfaces. The top-down nature of the method allows the designer to concentrate on the most important aspects of the user interface and to split the design procedure into manageable pieces. The visibility of the process also allows the designer to communicate with other people while specifying the user interface. This paper connects the method with the object-oriented specification of entire applications. It briefly explains the connections with object-oriented analysis and design, and demonstrates how to implement the specified user interface in an object-oriented fashion. The approach presented in this paper is being applied in the development of a large network management system with about two million lines of C++ code running in the X11 environment. However, the method does not require that the specification be implemented with any specific windowing system. The only requirement is that the user interface is based on graphical elements, such as dialogues, push buttons, and text fields.

Introduction

Graphical user interface libraries and user interface builders make the implementation of user interfaces easy. Windowing environments, such as Microsoft Windows, OSF/Motif, and Apple Macintosh, provide complete sets of simple user interface elements. Push buttons, text fields, bitmaps, and other devices are provided for the use of application implementors. This seeming simplicity may lead to haphazard construction of user interfaces. Instead of designing the best possible tools to help the user work with the application, it is often too easy to fill windows with arbitrary sets of controls and gadgets. This may lead to user interfaces that do not answer the user's needs, as pointed out, for example, by Johnson [1]. To avoid this we should first concentrate on interaction and only then on the possibilities of the windowing system in use. Such a top-down approach allows us to think about the main elements before focusing on details. A well-structured method for the specification of user interfaces can help to improve their quality. The final product is more likely to be well structured, simple, and to do what it is supposed to do. Object orientation, on the other hand, allows us to reuse elements of the user interface and refine them for different purposes. Also, an object-oriented user interface is easier to connect with an application that is designed and implemented in an object-oriented manner.

Leading object-oriented software development methods, such as OMT [2], Fusion [3], Booch [4], and OOSE [5], do not present detailed steps for user interface specification. Yet specifying user interfaces is more than just implementing presentations of the objects of the application. Specification must produce a usable interface between the application and the end user. Therefore, the outcome of the user interface specification should be object-oriented from the implementor's point of view, and at the same time it must answer the user's needs. The user interface specification method presented here is currently in operational use at Nokia Telecommunications, Cellular Systems (NCS), which develops one of the leading network management systems, the Nokia OMC, for digital GSM/DCS 1800 cellular networks. The size of the system is about two million lines of C++ code, and the development is carried out using UNIX workstations, the X Window System, OSF/Motif, and the C++ programming language.

Key Concepts

Interaction

When in contact with the user interface, the end user uses mental models to describe, explain, and predict the behaviour of the system [6,7,8,9]. Mental models are representations of reality, and they explain the objects and associations concerned with it. An ideal mental model is accurate and sufficiently correct to allow the end user to solve problems, operate the system, and learn more about it. In order to help the end user tune his mental models, every meaningful manipulative action should provide immediate and perceivable visual feedback. Through operation and perception, the end user can reinforce and change his mental models towards the ideal ones [10]. The user must therefore be able to see and understand the connections between his mental models and the visible user interface. The user interface is an intermediate entity between the end user and the application itself, as illustrated in figure 1. The basic elements of object-oriented computation, the objects, are linked to objects in the physical world [2,3,4]. The code of the application includes imitations or reflections of the objects of the application domain. The user interface must therefore be capable of representing the objects of the application so that the end user can understand the relationship between the representation and the real world.

[Figure 1: the end user, guided by his mental models of the real world, manipulates the user interface and receives feedback from it; the user interface passes requests to the application and receives responses.]

Figure 1. The user interface as an intermediate entity

When using an application the user has goals in mind. For example, he may want to keep a diary, write a letter, or pay a bill on a computer terminal. The user interface should provide the necessary means for the end user to reach his goals. Thus, when we specify the user interface we must concentrate on the user's goals, on the kind of means and equipment he needs to reach these goals, and on how the goals can be reached with the means provided.

Elements of the User Interface

A graphical user interface is a hierarchical construction. Figure 2 depicts the elements of the user interface using the object model notation of OMT [2]. In OMT, inheritance is depicted as a triangle, aggregation as a diamond, and other associations as lines from one class to another. The multiplicity of associations is depicted by using a filled circle.

[Figure 2: a user interface is an aggregate of dialogues, a dialogue an aggregate of components, and a component an aggregate of tools; manipulation tool, feedback tool, and combined tool inherit from tool.]

Figure 2. Elements of the user interface

At the highest level we see the graphical user interface as a collection of dialogues. To simplify the concepts, all dialogues and windows are simply called dialogues. Each dialogue contains one or more components. A component is a collection of windowing system elements which belong together semantically in the context of one application, or which provide the means for the end user to perform a meaningful set of actions. Such a component forms a cognitive unit. A cognitive unit is an elementary object through which the user and the application communicate with each other [11]. In the final implementation, components are often accentuated by using visual separators, such as borders, colours, and grouping. Used in this way, components can improve intelligibility and highlight relationships between the elements of the user interface [12]. Typically, a component will be implemented as a class. This class will include elements of the windowing system library, or other components, as aggregates [13].

Tools are the most basic elements of the graphical user interface. Each component consists of tools, which are either manipulation tools, feedback tools, or combinations of these two types. With manipulation tools the end user makes the application function. Feedback tools are used by the application to present things to the end user, and combined tools include both manipulation and feedback behaviour.

Working with the User Interface

As with the elements of the graphical user interface, the actions of the end user are seen as a hierarchical construction. Not all activities of the end user are of equal semantic importance. Therefore, these activities are divided into categories, as illustrated in figure 3.

[Figure 3: the end user performs operations, tasks, and actions.]

Figure 3. Activities

An operation is an activity that the application is implemented for. Operations, such as keeping a diary, paying bills, or drawing pictures, are on the user's mind, for example, when he is purchasing an application. Ivar Jacobson has presented the concept of a use case [5]. Each different way of using the system is called a use case. Operations can be found by investigating the use cases, and each use case is a particular sequence of operations. An operation consists of one or more tasks. A task is the smallest set of actions that is beneficial to the end user as such: an elementary procedure that the end user carries out with the application. A task is a meaningful unit of work performance, and it should be described so that a worker can be asked "how often" the task is performed, without any other job context being required to establish the meaning of that job action [14]. Saving a text file, drawing an ellipse, deleting a paragraph, etc., are typical tasks, and they are good candidates for elements to be explained in a user's guide. An action is the smallest activity that an end user can perform on the user interface. Actions performed on a modern graphical user interface, such as pressing a button or moving a slider, resemble physical actions in real life. Usually, there is no need to explain actions in a user's guide.

Object-Oriented Development of User Interfaces

We call the method presented in this paper object-oriented for several reasons. The user interface is seen as a collection of objects. These objects include data and functions, and their characteristics can be inherited by other objects. Also, the method is based on an object-oriented analysis of the problem space, and it is followed by an object-oriented design of the entire application. Finally, an object-oriented programming phase follows the analysis, user interface specification, and design phases.

We should choose an object-oriented programming language, such as C++, if we want to fully exploit the phase products of the user interface specification. The method presented in this paper is not used in isolation. Instead, it is an essential part of the OMT++ method [15], which is based on OMT [2], Fusion [3], and OOSE [5]. OMT++, like OMT, divides the construction of applications into object-oriented analysis, design, and programming, as illustrated in figure 4.

[Figure 4: starting from the customer requirements, OOA comprises Object Analysis (producing the Analysis Object Model), Behaviour Analysis (producing Operation Specifications), and User Interface Specification (producing Structure Specifications and Task Specifications); OOD comprises Object Design (producing the Design Object Model) and Behaviour Design (producing Event Traces and State Machines); OOP comprises Class Specification (producing Class Declarations) and Class Implementation (producing the Implementation of Methods).]

Figure 4. The Main Phases of OMT++

In OMT++, the object analysis process produces an analysis object model that documents the key concepts and their relations in the problem domain. The analysis object model does not contain any user interface elements. After the object analysis, we understand the key concepts of the problem domain. The next step is to define the user's operations in relation to the concepts that we have obtained. In OMT++, we first specify the user's operations without going into details concerning the user interface. Only after we have gained an understanding of the key concepts and the operations related to them can we start the user interface specification phase. The design phase follows the analysis phase. Object design refines the analysis object model: implementation issues are addressed, and classes representing the user interface are added to the object model. In OMT++, the architecture of interactive applications is organised according to a modified Model-View-Controller paradigm [15,13]. Behaviour design follows the object design. Behaviour design defines the operations for the classes of the design object model. In OMT++, we use event traces and state machines to illustrate the behaviour of the classes. The purpose of the design phase is to produce an object model and behaviour descriptions that can be transformed into programming language code.

User Interface Specification Process

The OMT++ software development process is described in its entirety by Aalto and Jaaksi [15,13]. Therefore, the rest of this paper concentrates on user interface specification only. Specification starts with the analysis of the end user's operations. This operation analysis is followed by structure and component specification, which produce the dialogue structure of the application and determine the contents of each dialogue. Visualisation produces the final screen layouts, and task specification provides documentation on how operations are performed with the user interface.

Prior Activities

According to OMT++, object and behaviour analysis precede user interface specification [15]. Let us suppose that we are specifying an application that allows the user to pay bills on a computer terminal. Such an application gives the user access to his bank accounts. It also allows him to pay bills by transferring money from one of his accounts to the accounts of other people and companies. The operations listed in table 1 were discovered during the behaviour analysis phase.

1. The user checks the balances in his accounts.
2. The user pays bills, i.e., he puts bills on the waiting list to be paid on the due date.
3. The user modifies the bills that are waiting to be paid on the due date.
4. The user removes bills from the waiting list.
5. The user checks recent transactions.

Table 1. Operations of the Bank Terminal application

Object analysis produces the analysis object model, which documents the key concepts in the problem domain. The analysis object model of the example application is illustrated in figure 5 using the object modelling notation of OMT [2].


[Figure 5: the analysis object model contains the classes User (accessID), Bank (name), Account (balance, number), Transaction (date), Deposit (value), Withdrawal (value), Payment, Bill (code, amount, information, dueDate), WaitList, and Creditor (name), connected by associations such as 'owns', 'pays', 'manages', 'launches', 'affects', 'relates to', 'sets', and 'is in business relation with'.]

Figure 5. The analysis object model of the Bank Terminal application, where the system boundary highlights the system interfaces

Operation Analysis

The first phase of user interface specification is operation analysis. In the operation analysis phase we analyse each of the end user's operations and create tasks to match them. Tables 2, 3, 4, 5, and 6 show the tasks corresponding to the operations in table 1.

1. Select a user's account.
2. Read the balance of the selected account.

Table 2. Tasks needed by the user to check the balances in his accounts

1. Select a user's account.
2. Modify information concerning the bill to be paid.
3. Put the bill on the waiting list for payment on the due date.

Table 3. Tasks needed by the user to pay bills

1. Select a bill that is waiting to be paid on the due date.
2. Modify information concerning the bill to be paid.

Table 4. Tasks needed by the user to modify a bill that is waiting to be paid on the due date

1. Select a bill that is waiting to be paid on the due date.
2. Remove the bill from the waiting list.

Table 5. Tasks needed by the user to remove a bill from the waiting list

1. Select a user's account.
2. Read transactions related to the selected account.

Table 6. Tasks needed by the user to check recent transactions

After we have analysed the operations and divided them into tasks, we typically notice that some operations include common tasks. For example, tables 3 and 4 both contain the task "Modify information concerning the bill to be paid". This kind of situation is fortunate: we need to implement less functionality in the application, and the end user does not need to learn as many different tasks. We can now collect and list all the tasks that will be performed with the Bank Terminal application. The tasks are listed in table 7.

1. Select a user's account.
2. Read the balance of the selected account.
3. Read transactions related to the selected account.
4. Modify information concerning the bill to be paid.
5. Put the bill on the waiting list for payment on the due date.
6. Remove the bill from the waiting list.
7. Select a bill that is waiting to be paid on the due date.

Table 7. Tasks of the Bank Terminal application

Structure Specification

After we have analysed the operations and divided them into tasks, we concentrate on the structure of the user interface. Structure specification aims at identifying the dialogues needed for the application. We place the tasks into dialogues and illustrate the relationships between the dialogues. As already stated, all windows and dialogues are simply called dialogues. Dialogue diagrams, such as the one shown in figure 6, visualise the structure of the user interface. The notation of dialogue diagrams is modified from the statechart notation [16]. Here, a state stands for a dialogue, and an event refers to a selection made by the user. The dialogue diagram thus gives us a visible image of the structure of the graphical user interface. In dialogue diagrams, a 'do:' statement places tasks from a numbered task list, such as the one in table 7, into dialogues. A line inside a dialogue indicates the main window, and a star denotes a modeless dialogue. For example, in figure 6, although the modeless Account Selection dialogue is visible, the end user can still operate with the other dialogues. Thus, dialogue diagrams only resemble state diagrams, and the dialogues cannot be considered independent states. Parallel dialogues can occur, and the dialogue diagram shows only the structure of the dialogue network: how the dialogues are opened and closed, and how tasks are placed in the dialogues. Selections may have additional qualifiers. Sometimes the movement from one dialogue to another may take place only under certain conditions. These guards are written inside square brackets. For example, in figure 6, movement from the 'Bank Terminal' dialogue to the 'Account Selection' dialogue is possible only if there are selectable accounts available. A slash mark is followed by operations that take place during the movement from one dialogue to another. In figure 6, when the 'Ok' selection causes the movement from the 'Bill Selection' dialogue back to the 'Bank Terminal' dialogue, the information of the selected bill is loaded.

[Figure 6: the Bank Terminal main dialogue (do: 2,4,5,6) opens the modeless Account Selection dialogue (do: 1) via 'Select Account' [no. of accounts > 1], closed with 'Close'; it opens the Bill Selection dialogue (do: 7) via 'Select Bill' [no. of bills > 0], which returns with 'Ok / load selected bill' or 'Cancel'; and it opens the Transactions dialogue (do: 3) via 'History' [no. of transactions > 0], closed with 'Close'.]

Figure 6. Dialogue diagram of the Bank Terminal application

When constructing the dialogue diagram we still do some operation analysis. The most important and common tasks should be easy to perform on the user interface. Therefore, we rate tasks according to their importance and group them to form dialogues. The main window is typically the place for the most important, primary tasks. Secondary tasks are typically performed to allow some primary task to be carried out. Secondary tasks are often placed in individual dialogues. By analysing tasks and dividing them among separate dialogues, we minimise the total amount of information needed in each dialogue. Also, information that is only rarely used is available only upon request. These aspects can improve the usability of the final user interface [12].

Component Specification

When specifying the structure of the user interface we placed the end user's tasks inside dialogues. In the component specification phase we specify what kind of user interface components the user needs to perform these tasks. Thus, we concentrate on one dialogue at a time.


As already stated, dialogues are made of components, and components are made of manipulation tools, feedback tools, or combinations of these. Components should provide user-friendly means for the end user to manipulate the objects of the application and perform different operations. Manipulation tools, such as push buttons and sliders, are used for this purpose. Components are also used to inform the end user about the states of the application objects and about ongoing operations. This is done with feedback tools, such as labels and bitmaps. Sometimes we want to achieve both manipulation and feedback behaviour with a single tool. In this case, we use combined tools, such as lists, text fields, and interactive graphics. To depict the components of the dialogues, we draw dialogues using the notation illustrated in figures 7, 8 and 9. A component, which is illustrated by a grey, rounded rectangle, consists of tools. An ellipse stands for a manipulation tool, a rectangle denotes a feedback tool, and a rounded rectangle indicates a combined tool. When drawing the components, we have not yet made any decision concerning the final layout of the user interface. Instead, we concentrate on providing valid equipment for the end user to perform all tasks in the dialogue.

[Figure 7: the BankTerminal dialogue contains a Balance component (SelectedAccountNo, Value), a Bill component (CreditorAccount, CreditorName, Information, Value, Date::DueDate with DDMMYY fields, PutOnWaitingList, Remove), and an ApplicationControl component (Quit, Transactions, SelectAccount, SelectBill) leading to Exit and to the Transactions, AccountSelection, and BillSelection dialogues.]

Figure 7. The Bank Terminal dialogue

The Bank Terminal main window must support four different tasks, as the "do:" clause of the Bank Terminal dialogue in figure 6 shows. These tasks are: "Read the balance of the selected account", "Modify information concerning the bill to be paid", "Put the bill on the waiting list for payment on the due date", and "Remove the bill from the waiting list". Let us suppose that a bill consists of the following information: the number of the creditor's account, the name of the creditor, textual information in free form, the due date, and the amount of money to be paid. Now we can depict the contents of the Bank Terminal dialogue by using the component notation, as illustrated in figure 7. The dialogue consists of four components. There is an instance of the Balance component. Also, there is an instance of the Bill component, which includes an instance of the Date component. In addition, we need an instance of the Application Control component to open other dialogues and enable the user to exit the application. The general Date component, which can be used whenever the user needs to manipulate dates, is here used for entering the due date. Thus, the Date component class is instantiated as the Due Date component object. Instantiation can be marked with two colons: Date::DueDate.
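
To make the component concept concrete, the following C++ fragment sketches how a dialogue class such as the Bank Terminal dialogue of figure 7 might aggregate its components as members. The class names mirror the figure, but the member names and the empty bodies are illustrative assumptions, not the actual OMC code.

// Illustrative sketch only: a dialogue built from reusable component classes.
// The general Date component is instantiated as the due-date member,
// corresponding to the Date::DueDate notation of figure 7.

class DateViewComponent { /* general date-entry component, see figure 9 */ };

class BalanceViewComponent { /* SelectedAccountNo and Value tools */ };

class BillViewComponent {
    DateViewComponent dueDate_;   // Date::DueDate
    // CreditorAccount, CreditorName, Information, Value,
    // PutOnWaitingList and Remove tools would be further members.
};

class ApplicationControlViewComponent {
    // Quit, Transactions, SelectAccount and SelectBill tools.
};

class BankTerminalView {          // the Bank Terminal dialogue
    BalanceViewComponent            balance_;
    BillViewComponent               bill_;
    ApplicationControlViewComponent applicationControl_;
};

Because the Account Selection and Bill Selection dialogues of figure 8 instantiate the same Selection Dialogue class, the same kind of aggregation also gives the reuse described below.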


In figure 8, we see the rest of the dialogues of the Bank Terminal application. As we can see, there are two instances of the Selection Dialogue in the user interface. Obviously, the Selection Dialogue is reused. With the first one, named Account Selection, the user selects one of his accounts. With the second one, named Bill Selection, he selects a bill to be edited. Once again, the class-instance relationship is marked by two colons.

[Figure 8: the DataDisplayDialogue class is instantiated as the Transactions dialogue, with a TextField component showing Strings and a DialogueWiper component with Close; the SelectionDialogue class is instantiated twice, as the modeless Account Selection dialogue and as the Bill Selection dialogue, each with a TextList component showing SelectableStrings and a DialogueControl component with CarryOut and Cancel; all three dialogues are connected to the Bank Terminal dialogue.]

Figure 8. Other dialogues in the Bank Terminal application

The functionality of a complex component must be explained. The explanation specifies the manipulation and feedback behaviour of the component. The explanations should also clarify implementation aspects, such as the updating policy, ordering issues, default values, legal and illegal values, masking and value checking, and selection styles. Figure 9 illustrates the explanation of the Date component used in the Bank Terminal dialogue.

Date (DDMMYY)
The Date component is used for entering a date.
Manipulation: The user can enter the date by entering text in the appropriate text fields. Invalid numbers are not accepted in the text fields.
Feedback: The format of the date information is "dd-mm-yy". The default value is the current date. If the user tries to enter an invalid character, nothing appears in the text field, but the application is informed.

Figure 9. Explanation of the Date component

The concept of a component increases the reusability and consistency of user interfaces. Well-designed compositions of windowing system elements can be implemented as classes within class libraries. Instead of creating user interfaces from scratch with push buttons, sliders, lists, etc., and implementing the different functions connected to these elements, we can exploit the component and dialogue classes that are readily available, and use them over and over again. Layout and functionality can be modified and changed by exploiting or overriding the operations of the component classes [13]. If the same component is used every time the same objects of the application are manipulated or presented, or the same functionality is needed, the user will soon learn this relationship. This kind of connection increases the familiarity and learnability of large systems. Our experiences also show that components seem to be the only way to harmonise the layout and functionality of the user interfaces of such systems.
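
As an example of such a reusable component class, the Date component of figure 9 might be declared roughly as follows. The method names, parameter types, and return values are assumptions made purely for illustration, and the underlying text-field widgets are omitted; only the behaviour stated in figure 9 is taken from the paper.

// A hedged sketch of the reusable Date component of figure 9.
class DateViewComponent {
public:
    DateViewComponent();                       // defaults to the current date

    // Feedback: show a date in the "dd-mm-yy" format used by the component.
    void ShowDate(int day, int month, int year);

    // Manipulation: called from the text-field callbacks. Invalid characters
    // are rejected (nothing appears in the field) and the application is
    // informed, here through the return value (an assumption).
    bool CharacterEntered(char c);

    // Query: let the rest of the application read the entered date.
    void GetDate(int& day, int& month, int& year) const;

private:
    int day_, month_, year_;                   // the dd, mm and yy text-field
                                               // widgets themselves are omitted
};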

Visualisation

The visualisation phase aims at visualising each dialogue of the application, and it defines the exact layout and wording of the user interface. Figure 10 illustrates the final user interface of the Bank Terminal application. When visualising the user interface we should follow the style guides of the windowing system used, such as the Motif Style Guide [17] for OSF/Motif. We should also consider the artistic and aesthetic aspects of the user interface, although these are not among the topics of this paper. We can use GUI builders, such as X-Designer [18], when specifying the final appearance of the user interface. GUI builders also generate code implementing the layout of the user interface. By using the generated code we can implement prototypes and check the validity of the specification with the end users.

Figure 10. Visualised dialogues


If we started directly from the visualisation phase, we would be more likely to forget the main elements and concentrate on less meaningful details. Visualisation can only be done well if the preceding steps are completed properly. Prototypes alone, for example, cannot guarantee consistency of the user interfaces of large systems. Only by thoroughly designing all dialogues and components can we collect a library of highly reusable interface elements, which leads to harmonised user interfaces. Also, arbitrary prototyping without systematic consideration of the end user's tasks and the user interface structure may lead to shortcomings in usability. After the first prototype is introduced, the end user and the designer start to think in terms of the prototype. They can slightly modify the details of the prototype, but they may no longer be capable of rethinking the very basics of the user interface. Therefore, we believe that it is important that the fundamentals of the user interface are well thought out before the introduction of the first prototypes. This is especially true of inexperienced end users and designers.

Task Specification

The last phase of the user interface specification is task specification. It specifies in detail how tasks are performed on the user interface. In the task specification phase we go through every task by using the specified user interface. This kind of activity is called a specification walkthrough [19]. In OMT++, we can perform these walkthroughs by using dialogue diagrams, component specifications, visualised dialogues, and prototypes together. By doing this we not only document the steps for the user's guide but also verify that the user interface fulfils all requirements. In many organisations, software designers both specify the user interfaces and write the user's guides. In this case, task specification can produce the final user's guides. The designer performs the tasks specified in the operation analysis phase by exploiting the final user interface specification. These sessions are documented for the purpose of writing the user's guide. Pictures, such as dialogue diagrams and final visualised dialogues, can also be used in the user's guides. Some organisations, such as ours, have separated the design of the user interfaces from the construction of the user's guides. In this case, task specification is the main source of information for the final user's guides. The designer of the user interface performs the operations and tasks by using the final user interface. At the same time, he documents the necessary steps by using event traces, which are very similar to the interaction diagrams of OOSE [5]. The user's guides will be written on the basis of these steps. We are also studying the automatic generation of user interface tests based on the event traces. In event traces the end user, the system, and the elements of the user interface communicate with one another, as illustrated in figure 11, in which the user selects an account. The end user, the elements of the user interface, and the rest of the system are depicted in the upper part of the graph. Interaction between these objects is illustrated above the arrows with a notation resembling C++, where the active part of the interaction is where the arrow starts and the target is where it ends. The time axis runs vertically from top to bottom.


[Figure 11: the event trace shows the User, the BankTerminal.Balance and BankTerminal.ApplicationControl components, the AccountSelection dialogue with its TextList and DialogueControl components, and the System. The user presses the Bill button in the menu bar (Bill->Press) and the Account Selection dialogue opens. The user selects an account from the list by pointing at it (SelectableStrings->Select) and presses the Apply button (CarryOut->Press). The system then shows the number of the selected account and its balance in the Balance frame of the Bank Terminal window (SelectedAccountNo->Show(account number), Value->Show(balance)).]

Figure 11. User selects an account
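
To illustrate how such an event trace relates to the eventual implementation, the fragment below sketches, in C++, one way the interaction of figure 11 might be distributed over the view and controller classes of figures 13 and 14. The method and class names come from the figures, but the wiring between the objects, including the controller-to-controller call, is an assumption made purely for illustration.

#include <string>

// Hypothetical wiring for the 'user selects an account' trace of figure 11.

class BankTerminalView {
public:
    // Feedback methods of the Balance component of the Bank Terminal window.
    void ShowSelectedAccountNoFM(const std::string& accountNo);
    void ShowBalanceFM(double balance);
};

class BankTerminalController {
public:
    // Called when another controller reports that an account was chosen.
    void AccountSelected(const std::string& accountNo) {
        double balance = 0.0;   // ...would be fetched from the Account model object
        view_->ShowSelectedAccountNoFM(accountNo);
        view_->ShowBalanceFM(balance);
    }
private:
    BankTerminalView* view_;
};

class AccountSelectionView {
public:
    std::string GetSelectedQM() const;   // query: the selected account string
    void StringSelectedMM();             // user points at an account in the list
    void CarryOutPressedMM();            // user presses the Apply button
private:
    class AccountSelectionController* controller_;
};

class AccountSelectionController {
public:
    // Invoked from AccountSelectionView::CarryOutPressedMM(): the controller
    // interprets the user's action and passes the result on.
    void CarryOutRequested(const std::string& accountNo) {
        bankTerminalController_->AccountSelected(accountNo);
    }
private:
    BankTerminalController* bankTerminalController_;
};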

Scaling Up

Real systems are always larger and more complex than examples such as the Bank Terminal application used in this paper. Therefore, methods must scale up to cope with complex application domains. The method presented in this paper matured during the development of large systems, and suitability for large systems has been one of its main goals. It would be overkill to apply it in full detail to small user interfaces. Still, particularly during operation analysis and structure specification, we may need to do additional work in order to handle really large user interfaces. In the development of large systems, where we may have a great number of end user tasks, a flat task list is not enough. We can then group tasks into clusters, where each cluster consists of consecutive or closely related tasks. Instead of analysing each task separately, we can analyse the purpose and importance of each cluster, and construct dialogue diagrams and the contents of dialogues based on these clusters. Clustering can also help us to concentrate on work flows rather than individual tasks.

Using the User Interface Specification in Object-Oriented Design

In the OMT++ method, the design phase follows the user interface specification, as illustrated in figure 4. The design of interactive software is based on the MVC++ approach [13], which is a modification of the original Model-View-Controller approach of the Smalltalk environment [20]. Each application consists of three types of objects. The objects of the model are domain objects. They represent "the real world" and they are based on the analysis object model. The model objects are managed by the controller objects, and the model objects are not "aware" of the objects of the view. The view objects form the outer layer, which is visible to the end user and contains the user interface components and callback functions. Although the view objects know how to present things to the end user and how to receive the user's actions, they do not decide how to respond to these actions. Instead, the view objects pass the user's requests to the controller objects, which make all the application-specific decisions, since the objects of the controller know how this application should work. Thus, the controller objects integrate the model and the view objects in an application-specific way, as illustrated in figure 12.

[Figure 12: the view objects form the user interface, receiving manipulation from the end user and giving feedback; they pass action requests to the controller objects, which make the decisions; the controller objects pass interpreted actions and requests to the domain-specific model objects and receive results back.]

Figure 12. An MVC++ application developed according to OMT++

Object design starts by transforming the analysis object model into the design object model according to the rules of OMT++ [15]. Typically, every dialogue of the dialogue diagram forms a view object in the design object model. Each view object contains, as object members, the components that were specified in the component specification phase. Controller objects are connected to view and model objects, and there is a single controller object for every view object. A controller object may have associations with multiple model objects; on the other hand, a single model object may be connected to multiple controller objects. The tools illustrated in the component specification phase have counterparts in the attributes of the view component classes. The manipulation and feedback behaviour explained in the component and task specification phases is implemented as member functions of the view and view component classes. There are three main categories of methods in a view class: manipulation methods (MM) are used to capture the user's actions, feedback methods (FM) present things to the end user, and query methods (QM) allow the controller to investigate the state of the view. Typically, we implement manipulation methods for manipulation tools and feedback methods for feedback tools. For combined tools, we usually implement all three types of methods. While the application-specific logic is implemented within the controller classes, the model classes include the domain-specific knowledge. OMT++ uses event traces and state machines to find the final member functions for the classes during the behaviour design phase [15]. The final design object model depicts all classes of the application. After the design phase is completed, the implementation can start. Figure 13 illustrates the classes corresponding to the Bill Selection dialogue, and figure 14 depicts all the user interface classes of the Bank Terminal application.


[Figure 13: the TextListViewComponent (attribute string: widget; methods StringSelectedMM, ShowTextFM(text), GetSelectedQM: text) and the DialogueControlViewComponent (attributes carryOutButton: widget, closeButton: widget; methods CarryOutPressedMM, ClosePressedMM) are aggregated by the SelectionView (attribute dialogue: widget; methods ShowTextFM(text), GetSelectedQM: text, ShowDialogueFM(), HideDialogueFM()), which is connected to the BillSelectionController (methods InitializeView, GetSelectedBill: bill), which in turn uses the Bill model class (code: string, amount: number, information: strings, dueDate: date).]

Figure 13. Design object model of the Bill Selection dialogue
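
Read as C++, the design object model of figure 13 could be declared along the following lines. The class, attribute, and method names are taken from the figure; the concrete parameter and return types, and the use of a generic Widget handle, are assumptions added for illustration and are not the actual OMC declarations.

#include <string>
#include <vector>

typedef void* Widget;     // stand-in for the windowing-system widget handle

struct Bill {                               // model class
    std::string              code;
    double                   amount;
    std::vector<std::string> information;
    std::string              dueDate;
};

class TextListViewComponent {               // view component class
public:
    void StringSelectedMM();                        // manipulation: an entry was picked
    void ShowTextFM(const std::string& text);       // feedback: show text in the list
    std::string GetSelectedQM() const;              // query: the selected entry
private:
    Widget string_;                                 // the list widget
};

class DialogueControlViewComponent {        // view component class
public:
    void CarryOutPressedMM();
    void ClosePressedMM();
private:
    Widget carryOutButton_;
    Widget closeButton_;
};

class SelectionView {                       // view class aggregating the components
public:
    void ShowTextFM(const std::string& text);
    std::string GetSelectedQM() const;
    void ShowDialogueFM();
    void HideDialogueFM();
private:
    Widget dialogue_;
    TextListViewComponent        list_;
    DialogueControlViewComponent control_;
};

class BillSelectionController {             // controller class
public:
    void InitializeView();
    Bill GetSelectedBill() const;
};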

[Figure 14: the view component classes are BalanceViewComponent, TextFieldViewComponent, ApplControlViewComponent, DialogueControlViewComponent, BillViewComponent, DialogueWiperViewComponent, and TextListViewComponent; the view classes are DataDisplayView, BankTerminalView, and SelectionView; the controller classes are TransactionsController, BankTerminalController, AccountSelectionController, and BillSelectionController; these are connected to the model classes.]

Figure 14. User interface classes of the Bank Terminal application

Conclusions

The method presented in this paper allows the top-down specification of user interfaces. Instead of just combining windowing system elements into dialogues, we first concentrate on the end user's operations. We design the interaction between the user and the application and think about the main lines before we start thinking about the details. The final appearance and details are considered only after we have analysed the purpose, operations, and structure of the user interface. We focus on providing valid and consistent means for the end user to interact with the application.

The object orientation of the method increases reusability. All elements of the user interface are objects, which can be reused and refined using inheritance [13]. Reusability also increases the consistency and learnability of the user interface. Since the same user interface elements are always used in the same context, the user can learn this relationship. This allows the end user to make predictions and interpret his perceptions correctly. One objective of the method is to reduce the need for iteration. Iteration itself is injurious, expensive, and time consuming. A systematic method with carefully carried out phases gives us a chance to make better guesses from the outset. The better these first guesses are, the less iteration we need. The method presented in this paper does not guarantee good usability of the final user interface. Usability can be improved by validating the specification with future end users. Phase products, such as dialogue diagrams, component specifications, visualised dialogues, and prototypes, can be used for this purpose. We can test the completeness and usefulness of the user interface by using these artefacts. Still, some additional means, such as style guides, are needed to set rules for the visualisation and functionality of the user interfaces. Style guides, along with term, phrase, and icon banks, raise the consistency of the user interfaces. Still, only reusable user interface components seem to harmonise the user interfaces of large systems. When user interfaces are constructed from library classes, each individual feature is implemented only once. This helps us to achieve a consistent look and functionality. The approach described in this paper is used in the development of a large network management system, and the user interface specification of the different applications is done by individual designers. The method enables other designers, user interface specialists, marketing representatives, customers, and other interested parties to contribute to the user interface specification and give comments during the process. Object orientation makes it possible to connect the method smoothly to our object-oriented software development process [15]. Object orientation also forms the basis for the reusability of user interface components [13]. Our well-structured method of user interface specification can also be used repeatedly. Therefore, we can tune and develop the method based on our experiences and learn to create better and better user interfaces.

Acknowledgements

This paper has been reviewed by Ilkka Haikala, Reino Kurki-Suonio and Kari Systä from the Tampere University of Technology, by Matti Pettersson from the University of Tampere, and by Juha-Markus Aalto and Sari Hänninen from our own organisation. I would like to thank them all for their valuable comments on earlier versions of this paper.


References

1. J. Johnson, 'Selectors: Going Beyond User-Interface Widgets', in P. Bauersfield, J. Bennett and G. Lynch (eds), HCI '92 Conference Proceedings, ACM Conference on Human Factors in Computing Systems, Addison-Wesley, 1992, pp. 273-279.
2. J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy and W. Lorensen, Object-Oriented Modeling and Design, Prentice-Hall, 1991.
3. D. Coleman, P. Arnold, S. Bodoff, C. Dollin, H. Gilchrist, F. Hayes and P. Jeremaes, Object-Oriented Development: The Fusion Method, Prentice Hall, 1994.
4. G. Booch, Object Oriented Design with Applications, Benjamin/Cummings, 1991.
5. I. Jacobson, M. Christerson, P. Jonsson and G. Övergaard, Object-Oriented Software Engineering: A Use Case Driven Approach, Addison-Wesley, 1992.
6. D. A. Norman, 'Some Observations on Mental Models', in D. Gentner and A. L. Stevens (eds), Mental Models, Lawrence Erlbaum Associates, 1983, pp. 7-14.
7. W. Rouse and N. Morris, 'On Looking Into the Black Box: Prospects and Limits in the Search for Mental Models', Psychological Bulletin, 100(3), 349-363 (1986).
8. J. Carroll and J. Olson, 'Mental Models in Human-Computer Interaction', in M. Helander (ed), Handbook of Human-Computer Interaction, Elsevier Science Publishers B.V., 1988, pp. 45-65.
9. Y. Waern, Cognitive Aspects of Computer Supported Tasks, John Wiley & Sons, 1989.
10. U. Neisser, Cognition and Reality, W. H. Freeman and Company, 1980.
11. M. Kitajima, 'A formal representation system for the human-computer interaction process', International Journal of Man-Machine Studies, 30, 669-696 (1989).
12. T. Tullis, 'Screen Design', in M. Helander (ed), Handbook of Human-Computer Interaction, Elsevier Science Publishers B.V., 1988, pp. 377-411.
13. A. Jaaksi, 'Implementing Interactive Applications in C++', Software Practice & Experience, 25(3), 271-289 (1995).
14. M. Phillips, H. Bashinski, H. Ammerman and C. Fligg, Jr., 'A Task Analytic Approach to Dialogue Design', in M. Helander (ed), Handbook of Human-Computer Interaction, Elsevier Science Publishers B.V., 1988, pp. 835-857.
15. J-M. Aalto and A. Jaaksi, 'Object-Oriented Development of Interactive Systems with OMT++', in R. Ege, M. Singh and B. Meyer (eds), TOOLS 14: Technology of Object-Oriented Languages & Systems, Prentice Hall, 1994, pp. 205-218.
16. D. Harel, 'Statecharts: A Visual Formalism for Complex Systems', Science of Computer Programming, 8 (1987).
17. OSF/Motif Style Guide, Revision 1.2, Open Software Foundation, 1992.
18. Imperial Software Technology Ltd., X-Designer Release 3 User's Guide, 1993.
19. E. Yourdon, Structured Walkthroughs, 4th edition, Yourdon Press, Englewood Cliffs, NJ, 1989.
20. G. E. Krasner and S. T. Pope, 'A Cookbook for Using the Model-View-Controller User Interface Paradigm in Smalltalk-80', JOOP, August/September 1988.
