System Description – QuAMobile

Sten Amundsen, Ketil Lund, Frank Eliassen
Simula Research Laboratory
{stena,ketillu,[email protected]}

Draft – 6 October 2005

Abstract

QuAMobile has a small core with hooks for QoS management plug-ins. The plug-ins manage QoS on behalf of the application, provided that the application is deployed together with information about alternative application configurations and their QoS characteristics. Applications are modelled as compositions of services, where each service is defined by a service type. For each service type there are one or more alternative implementations, i.e., component compositions and parameter configurations.

1. INTRODUCTION

The objective of our research, within the area of QoS management and dynamic mobile middleware, is to show how a mobile middleware platform can manage QoS on behalf of the application. This we refer to as platform-managed QoS. A series of concepts is needed to provide the required properties. To validate and refine these we have developed a prototype, called Quality of service-aware component Architecture for MOBILE computing (QuAMobile). QuAMobile has six key properties that together give us a mobile middleware with platform-managed QoS:
- Component-based; simplifies the development and maintenance of the middleware itself.
- Openness; gives system engineers the means to deploy and replace middleware services, QoS management mechanisms, and application configurations.
- Small core; makes the middleware deployable on both telco-server clusters and devices with very limited memory space and processing power.
- Separation of concerns; between application logic and QoS mechanisms, which simplifies the design and improves reuse of both application components and QoS mechanisms.
- Safe (re)configuration; forces application designers to specify each application configuration in detail, ensuring that the application and middleware services are stable after the initial configuration and following reconfigurations.
- Hooks for QoS elements; to configure the mobile middleware with the mechanisms and frameworks needed to provide QoS awareness for different application types and execution environments.


This technical report describes the concepts and architecture of QuAMobile, and how they are designed.
- Section 2 describes the various concepts implemented in QuAMobile. The description goes into great detail, since the concepts are the main scientific contributions from our work within the areas of QoS management and dynamic mobile middleware.
- Section 3 presents the architecture, its entities, and their purpose.
- Section 4 describes how to configure QuAMobile with plug-ins, the component model, and the format of the files that must be deployed onto QuAMobile to describe the services it provides and their configuration, dependencies on context elements in the environment, and QoS characteristics.
- Section 5 presents the extensions that must be made to component-based software engineering methods when designing QoS-sensitive applications for QuAMobile.
- Section 6 contains the design models of the prototype.
- Section 7 discusses the validation of the concepts and lists observations that were made during the design, implementation, and testing of the QuAMobile design.
- Section 8 concludes and lists topics for future research in the area of QoS management and dynamic mobile middleware.

1.1 QoS Terms and Definitions

- QoS management: Any set of activities performed by a system or communication service to support QoS monitoring, control and administration [1].
- QoS characteristic: A "quantifiable aspect of QoS, which is defined independently of the means by which it is represented and controlled" [1].
- QoS dimension: A "dimension of the quantification of QoS characteristics" [2]. Provides details describing the characteristic at a level suitable for QoS management.
- QoS value: A number that quantifies the QoS characteristic along a QoS dimension.
- QoS abstraction level: The level at which the QoS characteristics are viewed.
- QoS model: A model of the QoS characteristics, which is defined using QoS dimensions independently of the execution environment.
- QoS mapping: A logical relationship between QoS at different abstraction levels.


2. CONCEPTUAL MODELS

2.1 QoS-framework

To guide the design and implementation of a QoS-aware mobile middleware, we have defined a behavioural model of the general QoS framework. The objective is to bridge the gap between loosely coupled QoS mechanisms and principles at a conceptual level, and QoS behavioural elements executing on commercial OSes and mobile communication systems.

2.1.1 Design Principles

A behavioural element is here an active software element that implements one or more QoS sub-elements. To ensure that the behavioural model is suitable for mobile computing, general design principles are used to guide the design of the behavioural elements. The emphasis is put on the three principles considered most relevant for the implementation of a QoS framework:
• Resilience; towards temporary loss of connectivity.
• Small memory footprint; to allow for deployment on mobile terminals.
• Modularity; to allow the middleware to be tuned to the context and application types.
Resilience is achieved by instantiating behavioural elements on both mobile terminals and servers. This requires that the QoS elements interact over the mobile communication system to coordinate QoS decisions and synchronise system states. A small memory footprint is best achieved by optimising the code: using short message paths, minimising the number of active objects/components, and avoiding recursive structures and unnecessary encapsulation. Behavioural elements should therefore include QoS elements that interact often and are tightly coupled. In our opinion, it is neither feasible nor sensible to design a mobile middleware that can support all different application types, since this would give a monolithic platform with inflexibility and maintenance problems. Therefore, the proposed QoS framework behavioural model is modular and defines hooks where middleware developers can plug in behavioural elements. In an implementation of the QoS framework behavioural model these hooks are interfaces that specify the communication between behavioural elements, without defining how to implement the behavioural elements. This enables middleware designers to plug in existing solutions and to combine their implementation of a QoS element with others.
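The hooks described above can be sketched as interfaces that fix the communication between behavioural elements while leaving their implementation open. The following is a minimal Python sketch under assumed, hypothetical names (ContextEvent, ServicePlannerHook, the on_* methods); QuAMobile's actual hook interfaces are not reproduced here.

```python
from typing import Protocol


class ContextEvent:
    """A hypothetical aggregated context event (names are illustrative)."""
    def __init__(self, source: str, value: object):
        self.source = source
        self.value = value


class ServicePlannerHook(Protocol):
    """Hook through which other behavioural elements reach the service planner.
    It specifies the communication, not the implementation."""
    def on_context_change(self, event: ContextEvent) -> None: ...
    def on_resource_change(self, resource: str, capacity: float) -> None: ...


class LoggingPlanner:
    """A trivial plug-in satisfying the hook, used only to exercise the interface."""
    def __init__(self):
        self.events = []

    def on_context_change(self, event: ContextEvent) -> None:
        self.events.append(("context", event.source))

    def on_resource_change(self, resource: str, capacity: float) -> None:
        self.events.append(("resource", resource))


# Any object that implements the hook can be plugged in behind the interface.
planner: ServicePlannerHook = LoggingPlanner()
planner.on_context_change(ContextEvent("network", "WLAN"))
planner.on_resource_change("cpu", 0.4)
```

Because the hook is a structural interface, an existing solution can be plugged in without inheriting from any middleware base class.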

2.2 Plug-ins

QoS elements have been grouped and associated with behavioural elements. The grouping was done according to the memory and modularity design principles. The behavioural elements defined are: context awareness, resource management, dynamic reconfiguration, and service planning. These elements interact with each other locally or remotely, and are self-contained, to adhere to the resilience design principle. Figure 1 shows the relationship between the QoS elements in the generalised QoS framework and the behavioural elements. The overall responsibility of each behavioural element in the QoS framework follows.


Context manager. This behavioural element provides aggregated context information about the mobile terminal, execution environment, and mobile communication system to the service planner. It implements the QoS elements context information and context information management. The behavioural element should have sensing agents that provide and update context information, which is processed, aggregated, and sent as events to the service planner (via a publish-subscribe interface, to provide the needed decoupling). The MobiPADS and CARISMA prototypes are examples of possible implementations of this behavioural element. In addition to traditional context information, the context manager receives QoS values from QoS monitors (in the resource manager behavioural element). This enables the context manager to store statistical QoS data for the different contexts. Examples of this are found in the L2imbo prototype and the Barwan project, which use QoS monitors to acquire information about the context and store it for later use.

Resource manager. This behavioural element incorporates the QoS mechanisms resource availability, admission control, resource reservation, QoS monitoring, and data flow control. To what extent each of these mechanisms shall be included in the plug-in is governed by the application type and the support for reservation and monitoring in the OS and the mobile communication system. For instance, ANSAware's resource manager measures round-trip times (QoS monitoring) and has a real-time thread scheduler (admission control), while Prayer's resource manager combines monitoring with network reservation (resource reservation). At a minimum, the resource manager shall provide updated information about remaining resource capacities to the service planner.
A resource model should be used to structure the information and to map between different abstraction levels, making the information available at the level needed by the service planner. In particular, the mobile communication system is of major concern. If the communication system has QoS management mechanisms, the resource manager shall use these to reserve capacity. In cases where the QoS characteristic of the data bearer is known, the data traffic flow shall be controlled according to the negotiated QoS contract. When the OS or mobile communication system does not provide any guarantees, the resource manager shall use QoS monitors to detect changes in resource availability and report over-utilisation to the service planner. Furthermore, in contexts where the middleware controls all applications running on the mobile terminal or server, the resource manager can perform admission control and negotiation, and allocate resources between the applications, e.g., the loose contracts in Prayer's adaptive resource management algorithm.

Service planner. This is the core behavioural element, which implements the QoS elements user QoS requirements, system QoS requirements, QoS mapping, QoS-driven service planning, QoS maintenance, and QoS degradation. It identifies and chooses between alternative application or middleware service configurations. To be able to make sound QoS decisions, the service planner needs information from 1) the resource manager about resource availability, and 2) the context manager about the current context (mobile terminal, execution environment, and communication system). It then maps between the user level, application level, and resource level (merged QoS specification) to identify a configuration that meets the end-user QoS requirements for the given context and resource availability.
When the resource manager and/or context manager plug-in detects context changes or resource fluctuations, information about the changes is used by the service planner to decide whether, and if so, which actions are appropriate. For instance, the service planner may decide to re-allocate resources, alter configuration parameters in the running service, or change the way the service is delivered.
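The decoupled event flow from the context manager to the service planner can be sketched with a tiny publish-subscribe channel. This is only an illustration of the pattern named above; the topic names and payload fields are assumptions, not QuAMobile's actual interface.

```python
from collections import defaultdict


class EventBus:
    """Minimal publish-subscribe channel decoupling the context manager
    from the service planner (a sketch, not QuAMobile's implementation)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)


bus = EventBus()
received = []

# The service planner subscribes to aggregated context events ...
bus.subscribe("context.network", received.append)

# ... and the context manager publishes changes detected by its sensing agents.
bus.publish("context.network", {"bearer": "GPRS", "bandwidth_kbps": 40})
```

Neither side holds a direct reference to the other, which is the decoupling the publish-subscribe interface is meant to provide.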


Reconfiguration manager. This behavioural element shall include the functionality for dynamic reconfiguration of application and middleware services. When the QoS framework is executing in a context where resources can be reserved, the reconfiguration manager can be omitted. What to reconfigure and the new configuration are input data from the service planner. Alternative concepts can be used in the implementation of the reconfiguration manager. Open implementation together with reflection is a strong candidate, implemented in MobiPADS, ReMMoC, and UIC, but content adaptation, as in Odyssey, is also viable.

[Figure: the behavioural elements ServicePlanner, Reconfiguration manager, ResourceManager, and ContextManager, together with their associated QoS elements (system and user QoS requirements, QoS-driven service planning, QoS mapping, QoS maintenance, QoS degradation, admission control, resource availability, resource reservation, data flow control, QoS monitoring, context information, and context information management), the end-user, the application developer, the application, and the middleware services.]

Figure 1. QoS framework behavioural model

2.3 Combined Resource and Context Model

- Model

2.4 QoS-Driven Service Planning

Service planning is a concept that we have defined as a process that identifies components, component configurations, and how to compose them, to form a suitable service composition that will function correctly. The basis for this definition is the general idea that instantiated components offer services through their local and remote interfaces, and that they can be composed into new services that are again used in other compositions, creating a hierarchy of services. QoS-driven service planning extends the process with the notion of QoS requirements, such as accuracy, security, and real-time. This gives the following definition of QoS-driven service planning: "A process that identifies components, component configurations, and how to compose them, to form a functionally correct service composition which meets a set of QoS requirements." Intuitively, one may view the definition of QoS-driven service planning as a system development process, where application developers identify components suitable for a particular service, make the appropriate deployment descriptors, and deploy the service. This is, however, insufficient when user QoS requirements are unknown at design time and both system context and available resource capacities fluctuate during runtime. Hence, service planning is extended to runtime and includes support for adaptation, making service planning a three-phased process, as illustrated in Figure 2a. In the first phase, service design, application developers perform two tasks: i) service modelling, identifying and specifying compositions of services, components implementing the services, and parameter configurations; and ii) QoS modelling, identifying the QoS dimensions and specifying the QoS mapping functions between the different abstraction levels. The results are stored in service plans, which are referred to as partial, since choosing the implementation of each service is postponed to runtime.
General principles for service and QoS modelling are described in chapters 2.4.2 and 5.2, respectively. Users that want to use the application send a service request to QuA's service planner, which starts initial service planning. The model for this interaction is illustrated in Figure 2b. The information elements needed for initial service planning are: service type, a predicate characterising one or a set of services made available to the client/user, and utility functions, a set of dimensional utility functions, one for each user QoS dimension of the service(s) made available to the user. The outcome of initial service planning is a complete service plan specifying the application configuration. The basis for the service planner's QoS decisions is the resulting application QoS characteristics for the current system context, the resource availability, and the user's minimum-maximum utility values. The utility functions and the service plan are described in more detail in chapters 2.4.1 and 2.4.2. If context changes or resource fluctuations make it impossible for the QoS mechanisms to maintain the QoS level, the service planner will start the service re-planning phase. This phase is similar to initial service planning, except that the service planner now has monitored QoS values and updated resource-context information available for its QoS decisions.

[Figure: a) the service planning phases on a timeline: 1. service design (design time) produces a partial service plan; 2. initial service planning (runtime) uses it and produces a complete service plan; 3. service re-planning produces a new complete service plan. b) The user invokes QoS-driven service planning with the precondition {service type, overall utility function}, and the planning process uses the service plan.]

Figure 2. a) Service planning phases, and b) model for initial service planning

2.4.1 Utility Function

Users in general do not think in terms of QoS characteristics when specifying their preferences. They are interested in using the application, not in fine-tuning it. So one must employ a solution that users find easy and fast to use, and that operates at an abstraction level they are comfortable with. For QuA we chose the notion of utility together with utility functions, since this gives the abstraction we are seeking [8]. A utility value is a measure of usefulness, represented by real numbers in the range [0, 1], where 0 represents useless and 1 represents as good as perfect [7]. Utility is a function of QoS. In QuA there is a one-to-one relationship between utility and a user QoS dimension, i.e., dimensional utility functions, where usefulness, Ui, is a function, gi, of the user QoS value, qi, along the user QoS dimension, i:

Ui = gi(qi)    (1)

The shape of the utility functions is the result of user tests or psychophysical theory, which is outside our scope, but it is possible to make some general observations about the function's form:
- There is a minimum qi. Below this the service is, in general, considered useless (Ui = 0).
- There is a maximum qi. Above this an increase has, in general, no impact on perceived quality (Ui = 1).
- Between minimum and maximum, the function gi is strictly monotonically increasing.

[Figure: Ui plotted against qi; Ui is 0 ("useless") below min_qi, strictly increasing between min_qi and max_qi, and 1 ("no improved perceived quality") above max_qi.]

Figure 3. Generic dimensional utility function
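The generic shape above can be sketched directly in code. The linear interpolation between min_q and max_q is an assumption for illustration; the text only requires gi to be strictly increasing on that interval.

```python
def make_utility(min_q: float, max_q: float):
    """Generic dimensional utility function Ui = gi(qi):
    0 below min_q, 1 above max_q, and (assumed here) linear,
    strictly increasing, in between."""
    def g(q: float) -> float:
        if q <= min_q:
            return 0.0          # service considered useless
        if q >= max_q:
            return 1.0          # no further improvement in perceived quality
        return (q - min_q) / (max_q - min_q)
    return g


# A hypothetical dimension where utility rises between qi = 4 and qi = 16.
g = make_utility(min_q=4.0, max_q=16.0)
```

For instance, g(10) falls exactly halfway between the two thresholds under the linear assumption.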

We assume that the dimensional utility function is part of the user interface, e.g., a graphical user interface (GUI) or a remote method invocation/remote procedure call (RMI/RPC) interface. Users are then free to either:
i. Specify minimum and maximum utility for the provided dimensional utility functions, or
ii. Alter the dimensional utility functions.
During initial and service re-planning, the service planner uses the dimensional utility functions to calculate the user's QoS requirements, min_qi and max_qi, from the specified minimum (mandatory) and maximum (optional) utility values (see Figure 4 for an illustration). Likewise, the application-user and {resource, context}-application QoS mapping functions, described in chapter 5.2.5, are used to predict the maximum QoS at the user level for all user QoS dimensions, i, in the current context and availability of resources. If the predicted maximum user QoS, QoSi, is at least as large as the specified min_qi, the service planner has identified an application configuration (composition and parameters) that meets the user's QoS requirements.

[Figure: the dimensional utility function with the user's specified minimum utility, minUi, and maximum utility, maxUi, mapped to the corresponding QoS requirements min_qi and max_qi on the qi axis.]

Figure 4. User QoS requirements

The service planner also uses the utility functions to choose between alternative (complete) service plans, i.e., plans that specify different configurations of the same service. There are two alternative selection methods: one is fast but sub-optimal, and the other is slow but optimal. Common to both selection methods is that they rely on two properties of the dimensional utility functions: i) they have the same user QoS dimensions, and ii) the functions are strictly monotonically increasing. This makes it possible to compare two alternative complete service plans, and one knows that an increase in resource allocation will improve utility. We describe the sub-optimal selection method first. The starting point is the current availability of resources, R, and context information, C, which is mapped to the application level and then to the user level. The result is the predicted maximum user QoS for the i dimensions, max userQoSi (equation (2)). The predicted QoS values are used to make a short-list of complete service plans that meet the user's QoS requirements, i.e., max userQoSi >= min_qi. From the list, the plan with the highest total utility is chosen. Total utility is the average of the utilities of the i dimensional utility functions, Ū = (1/i) Σn=1..i Un. This selection method gives a result fast, but the risk is that the chosen service plan does not give the highest utility if resource availability decreases. For services that have a short lifetime, this risk is very small, so we argue that the fast selection method is a viable alternative for many application types.

{R, C} → max appQoS → max userQoS    (2)

where
R = [R1, R2, ..., Rl]
C = [C1, C2, ..., Ct]
appQoS = [QoS1, QoS2, ..., QoSk]
userQoS = [QoS1, QoS2, ..., QoSi]
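The fast, sub-optimal selection method can be sketched as follows. The plan representation, the stored predicted QoS values, and the linear utility functions are all assumptions made for illustration.

```python
def fast_select(plans, min_q, predict_user_qos):
    """Sub-optimal but fast selection (a sketch): short-list the complete
    service plans whose predicted maximum user QoS meets min_q in every
    dimension, then pick the plan with the highest total (average) utility."""
    short_list = [
        p for p in plans
        if all(q >= m for q, m in zip(predict_user_qos(p), min_q))
    ]
    if not short_list:
        return None  # no plan meets the user's QoS requirements

    def total_utility(p):
        utilities = [g(q) for g, q in zip(p["utility_fns"], predict_user_qos(p))]
        return sum(utilities) / len(utilities)

    return max(short_list, key=total_utility)


# Two hypothetical plans over two user QoS dimensions; predicted QoS is
# stored directly on each plan for simplicity.
linear = lambda lo, hi: (lambda q: max(0.0, min(1.0, (q - lo) / (hi - lo))))
plans = [
    {"name": "low-res",  "qos": (5.0, 8.0),   "utility_fns": [linear(4, 16), linear(4, 16)]},
    {"name": "high-res", "qos": (12.0, 14.0), "utility_fns": [linear(4, 16), linear(4, 16)]},
]
best = fast_select(plans, min_q=(5.0, 6.0), predict_user_qos=lambda p: p["qos"])
```

Both plans meet the requirements here, and the one with the higher average utility is chosen.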

The second selection method, on the other hand, will find the optimal complete service plan, but it requires more time to make the selection. The starting point is identical to that of the first selection method: the current availability of resources, R, and information about the current context, C. For those service plans that are on the short-list, i.e., plans where predicted QoS is equal to or larger than the specified QoS requirements, the service planner maps from the specified min_qi to the minimum required allocation of resources. The selection method then goes into an incremental allocation loop that increases the availability of resources, and:
- Allocates the increase in resource capacity to the user QoS dimension that gives the highest increase in utility. We employ the gradient of the utility function to identify which dimension will result in the largest increase in usefulness. Mathematically, the gradient is the first derivative of the utility function:

Ui' = gi'(qi)    (3)

- Orders the complete service plans according to increasing total utility and gives each plan points equal to its position.
The loop stops when one of the utility functions reaches the specified maximum utility, maxUi, or the allocation of a resource equals the maximum resource availability. The service planner then chooses the complete service plan with the highest number of points, knowing that this plan has QoS characteristics within the acceptable QoS region and on average has the highest utility.

2.4.2 Service Plan

Service plans play a central role in QuA, and serve three purposes: i) provide the link between a service type and an implementation of the type; ii) specify the service composition, component, and parameter configuration of the implementation; and iii) describe the QoS characteristics of the implementation. An application is modelled as recursive services in a hierarchy, see chapter 5.1. At the service level and sub-service level, implementations are compositions of service types, while at the atomic service level, implementations are components, which include alternative parameter configurations of the components. A service plan contains information elements. QuA does not specify what these information elements are, but a middleware implementation will. Hence, the nine elements illustrated in Figure 5 are a result of analysing our scenario. The composition specification is the element for specifying the composition of service types, or the component, that implements the service type the plan is associated with. Both compositions and single components offer services to the user. These services are either stateful or stateless, where the behaviour and QoS characteristics of a stateful service are governed by its state machine. States with similar QoS characteristics are specified inside the ServiceState element, which is attached to the offered service element. QoS characteristics of the offered services for the different states (one or more) are specified inside the QoS model and QoS mapping function(s) elements.
- Dependencies; requirements on the execution environment, libraries, and static dependencies to front-end or back-end systems.
- Parameter configuration; parameter(s) that the component composition or component is to be configured with.
- Composition specification; a graph specifying the construction of the service, i.e., the composition of service types and the bindings between them.
- Role; a role name space and role names for service types and component types in the composition. The same role name in two service plans will, during reconfiguration, be interpreted as identical services and, hence, not be replaced during dynamic reconfiguration.
- Offered services; services/operations of the composition/component, whose QoS characteristics are specified by the associated QoS model and QoS mapping functions.
- ServiceState; state(s) of the offered service.
- Input QoS contract; QoS values along QoS dimensions that users of this composition/component must adhere to.
- QoS model; a model of the QoS characteristics, which is defined using QoS dimensions independently of the execution environment. The model specifies the acceptable range of QoS values along the QoS dimensions.
- QoS mapping; functions that establish the logical relationships between QoS characteristics at different QoS abstraction and service levels.


[Figure: class diagram of the service plan, where a ServicePlan is associated with a ServiceType, Dependencies, a ParameterConfiguration, a CompositionSpecification or QuAComponent, Roles, and OfferedServices; each OfferedService has ServiceStates and is related to an InputQoSContract, a QoSModel, and a QoSMapping.]

Figure 5. Service plan
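The information elements of a service plan can be sketched as a data structure. The field names, types, and multiplicities below follow the element list above but are assumptions for illustration, not QuA's actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class OfferedService:
    """A service/operation offered by the composition or component."""
    name: str
    service_states: List[str] = field(default_factory=list)  # states with similar QoS
    input_qos_contract: Optional[str] = None   # reference to a contract id
    qos_model: Optional[str] = None            # reference to a QoS model id
    qos_mapping: Optional[str] = None          # reference to a mapping id


@dataclass
class ServicePlan:
    """A sketch of the service plan's information elements (Figure 5)."""
    service_type: str
    composition_spec: str                      # component, or a graph of service types
    role: str
    dependencies: List[str] = field(default_factory=list)
    parameter_configuration: dict = field(default_factory=dict)
    offered_services: List[OfferedService] = field(default_factory=list)


# A hypothetical plan for a "Movie" service type.
plan = ServicePlan(
    service_type="Movie",
    composition_spec="Source->Decoder->Sink",
    role="movie.main",
    offered_services=[OfferedService(name="play", service_states=["Connected"])],
)
```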

2.4.3 Initial Service Planning

- Describe the activities that QuA must perform during initial service planning for a user in the context PDA-GPRS.
- Emphasise the QoS decisions, and validate that the application configuration designed for this context is chosen, i.e., composition, parameter configuration, and implementation.
- Model the information flow between plug-ins.

2.4.4 Service Re-planning

- Describe the activities that QuA must perform during service re-planning for a user changing between the contexts PDA-GPRS and PDA-WLAN.
- Describe the QoS decisions, and validate that the application configuration designed for this context is chosen, i.e., new composition, parameter configuration of Sink/Source, and new implementation of Movie.
- Describe how one decides the gap between the old and new application configurations, and how meta-data is used by the reconfiguration manager.
- Describe how the reconfiguration manager closes the gap.
- Model the information flow between plug-ins.
- Model the information flow between the configurator and the application components.


3. SYSTEM ARCHITECTURE

3.1 Core

We have designed and implemented a new component-based QoS-aware middleware, called QuA (Quality of service-aware component Architecture), which has hooks where QoS management components can be inserted as plug-ins [3][4]. To make the middleware executable on both mobile terminals and large servers, the architecture has a small core. The architecture has evolved during our research, where the interfaces in the core for the QoS management plug-ins are the most prominent change. Figure 6 shows the core architecture in its current form. The middleware has been developed in both Java and Smalltalk, and we have published the Smalltalk source code and a platform-independent model (PIM) [5]. QuAMobile has basic functionality for component life-cycle management, from deployment to removing the component instance from the runtime environment. The core also has functionality for finding the …

[Figure: class diagram of the QuA core, with the classes QuA, Capsule, Repository, Broker, ServiceContext, QuAType, and QuA-Component, and the plug-ins ContextManager, ResourceManager, ServicePlanner, ConfigurationManager, and AdaptationManager.]

Figure 6. Middleware overview

Fundamental to our approach is letting the middleware take those QoS decisions that rely on runtime information about context and resources (CPU, storage, network, etc.). We achieve this by allowing application developers to deploy, in a repository, alternative application configurations with different QoS characteristics. Users specify their QoS requirements using utility functions. These functions are a means of specifying the users' priorities with respect to different QoS characteristics, and provide the guidelines for the middleware when configuring a service. The service planner is part of our QoS-driven service planning concept, described in chapter 2.4. When the user requests a service, the QuA core invokes the service planner with the utility function and service name as input parameters. The planner asks the broker for a list of published service plans that implement the advertised service. Each service plan in the list is then pulled out of the repository. It should be noted that since most services are compositions consisting of multiple components, the returned plans normally refer to a number of sub-plans in a recursive structure. These plans are merged into complete service plans. The service planner then uses the utility function from the user and the QoS mapping functions from the plans to assess the suitability of each complete service plan. From the context manager plug-in it receives information about the current execution environment (terminal, execution environment, and mobile communication system), while the resource manager plug-in provides information about the availability of the different resources (CPU, memory, storage, and network). Thus, the service planner is able to select the plan that provides the best utility for the user, based on the actual context and resource situation [6]. Next, the configuration manager uses the selected plan to instantiate and configure the application. Finally, to enable dynamic QoS management, sensing agents and resource monitors are deployed as components in the repository. The sensing agents and monitors needed for the application are requested by the context and resource managers, and configured to report context changes and resource fluctuations to their respective manager. This enables QuA to discover important context changes and resource fluctuations, and to re-plan the service if necessary, in order to maintain QoS. Such dynamic reconfiguration is managed by the adaptation manager. More details about dynamic reconfiguration are given in chapter 5.2.
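The broker lookup and recursive merging of sub-plans described above can be sketched as follows. The repository contents and plan structure are hypothetical; only the flow (broker lists published plans, sub-plans are resolved recursively into complete plans) follows the text.

```python
# A toy repository mapping service types to published (partial) service plans.
repository = {
    "Movie":  [{"type": "Movie", "parts": ["Source", "Sink"]}],
    "Source": [{"type": "Source", "parts": []}],
    "Sink":   [{"type": "Sink", "parts": []}],
}


def broker_lookup(service_type):
    """Return the published service plans implementing the advertised type."""
    return repository.get(service_type, [])


def merge(plan):
    """Recursively resolve referenced sub-plans into one complete service plan.
    For simplicity this sketch always takes the first published alternative;
    the real planner would enumerate and compare all combinations."""
    return {
        "type": plan["type"],
        "parts": [merge(broker_lookup(part)[0]) for part in plan["parts"]],
    }


complete = [merge(p) for p in broker_lookup("Movie")]
```

The resulting complete plans are what the service planner scores against the user's utility functions.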

3.2 Plug-in interfaces

3.3 Packages


4. TECHNICAL DESCRIPTION

4.1 Plug-ins

4.2 Presentation-Business Layer Interworking

4.3 Service type – WSDL-file


4.4 Service plan – XML-file

In the XML file, the top-level tag is servicePlan, under which all other tags fall. The XML file can be represented as a tree with a root, children, and leaf nodes.

[Figure: tree structure of the servicePlan XML file, with nodes including serviceType, implementation, classFile, composition, serviceName, serviceState, receptical, facet, setMethod, dependencies, contextElement, propertyType, contextValue, parameterConfiguration, parameter, configValue, offeredServices, operation, state, inputqoscontract_id, qosmodel_id, qosmapping_id, inputQoSContract, contract_id, inDimension, qosCharacteristics, qosModel, model_id, lowestQoS, dimension, unit, direction, minQoSvalue, maxQoSvalue, qosMapping, mapping_id, funcDimension, and function.]

Figure 7. Service plan tree structure
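A minimal service-plan document following the tree in Figure 7 might be read like this. The document below is hypothetical and only uses tag names taken from the figure; the exact nesting in QuAMobile's schema may differ.

```python
import xml.etree.ElementTree as ET

# A hypothetical, minimal service plan; tag names follow Figure 7, the
# nesting is an assumption for illustration.
doc = """
<servicePlan>
  <serviceType>Movie</serviceType>
  <dependencies>
    <contextElement name="ExecutionEnvironment">
      <propertyType>envirType</propertyType>
      <contextValue>PDA</contextValue>
    </contextElement>
  </dependencies>
  <offeredServices>
    <operation>sendRTSPmsg</operation>
  </offeredServices>
</servicePlan>
"""

root = ET.fromstring(doc)
service_type = root.findtext("serviceType")
operations = [op.text for op in root.iter("operation")]
```

Parsing the plan into such a structure is the first step the middleware would take before merging plans and assessing QoS characteristics.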


Here is a list of the required elements, each with a brief description, an example, and whether it is optional or mandatory.

- serviceType (Mandatory): Tag denoting the type of the implementation specified by the service plan.
- implementation (Mandatory): Describes the implementation of the type, either by referring to a component or by specifying a composition of service types.
- composition (Optional): Specifies a component composition.
- serviceName (Optional): Gives the name of the services in the composition, and has two leaf nodes to specify local to-from bindings.
- dependencies (Mandatory): Lists all the context elements that the specified implementation depends on.
- contextElement (Required; example: ExecutionEnvironment): The tag is the name of the context type, and relates it to the leaf nodes specifying the property types and context values. A classification of context elements; the classification corresponds to the model of the environment, which is reflected in the design of the sensors and the storage of context information.
- propertyType (Optional; example: envirType): A property of the classified context element that the implementation depends on.
- parameterConfigurations (Optional): Lists all parameters that the implementation must be configured with.
- parameter (Optional): Specifies the parameter to be set.
- qosCharacteristics (Mandatory): Lists the mapping functions (context-resource ↔ application QoS) for the different QoS dimensions of the implementation.
- offeredServices (Mandatory): Lists all the operation names of the service type and relates them to the QoS characteristics.
- operation (Mandatory; examples: sendRTSPmsg, connectTo, etc.): The tag is the operation name, and relates the operation to an input QoS contract (if relevant), QoS models, and QoS mapping functions.
- stateName (Optional; examples: Disconnected, Connected, etc.): Specifies the states of an operation, and relates an input QoS contract, QoS model, and QoS mapping to these states.
- inputQoSContract (Mandatory): Lists the input QoS contracts for this service.
- contract_id (Mandatory; example: contractNo4): Defines a unique identifier for the input QoS contract, and relates the leaf nodes defining the contract to it.
- qosModel (Mandatory): Lists the QoS models for this service.
- model_id (Mandatory; examples: modelDelay, modelAvail, etc.): Defines a unique identifier for the QoS model, and relates the leaf nodes defining the model to it.
- qosMapping (Mandatory): Lists the QoS mapping functions for this service.
- mapping_id (examples: trtspTransCon, availNet, etc.): Defines a unique identifier for the QoS mapping function, and relates the leaf nodes defining the function to it.

Table 1. Children


| Element | Description | Example | Status |
|---|---|---|---|
| serviceName | Defines the name of the service of which the service plan specifies an implementation. | RTSPTransport | Mandatory |
| state | When at least one of the operations implements a state machine, so that the resulting QoS characteristics of the operations are state dependent, this leaf node is specified to be stateful. | Stateful or stateless | Mandatory |
| classFile | The path to the location of the byte-encoded component. | /com/signalling/RTSPTransportQuAM.class | Optional |
| fromPort | Specifies the name of the port (as given in the WSDL file) that has the role of a receptacle in the component composition. | | Optional |
| toPort | Specifies the name of the port (as given in the WSDL file) that has the role of a provider in the component composition. | | Optional |
| contextValue | A value (possibly textual) quantifying the property of the context element on which the implementation depends. | J2RE | Mandatory |
| setMethodName | Name of the setter operation on the service for configuring a parameter. | | Optional |
| configValue | Parameter value that is to be set in the service. | | Optional |
| inputqoscontract_id | The identifier of the input QoS contract that must be adhered to when using the operation given by the child node to which this leaf node is attached. | | Optional |
| qosmodel_id | The identifier of the QoS model of the operation. | | Mandatory |
| qosmapping_id | The identifier of the QoS mapping function of the operation. | | Mandatory |
| inDimension | QoS dimension for which the input contract has a QoS requirement. | | Optional |
| lowestQoS | The QoS requirement of the input contract. | | Optional |
| dimension | QoS dimension of the QoS model. | | Mandatory |
| unit | Metric for the QoS values along the QoS dimension. | | Mandatory |
| direction | Increasing or decreasing. | | Mandatory |
| minQoSvalue | Minimum QoS value along the dimension. | | Mandatory |
| maxQoSvalue | Maximum QoS value along the dimension. | | Mandatory |
| funcDimension | QoS dimension of the mapping function. | | Mandatory |
| function | The mapping function expression. | | Mandatory |

Table 2. Leaf nodes
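To make the element tables concrete, the following is a hypothetical, abbreviated service plan skeleton. The element names follow Tables 1 and 2 and the example values above, but the exact nesting and the parameter values (e.g., setBufferSize, 1024) are illustrative assumptions, not the normative schema.

```xml
<servicePlan>
  <serviceType>RTSPTransport</serviceType>
  <serviceName>RTSPTransport</serviceName>
  <state>Stateful</state>
  <implementation>
    <classFile>/com/signalling/RTSPTransportQuAM.class</classFile>
  </implementation>
  <dependencies>
    <contextElement name="ExecutionEnvironment">
      <propertyType>envirType</propertyType>
      <contextValue>J2RE</contextValue>
    </contextElement>
  </dependencies>
  <parameterConfiguration>
    <parameter>
      <setMethodName>setBufferSize</setMethodName>
      <configValue>1024</configValue>
    </parameter>
  </parameterConfiguration>
  <qosCharacteristics>
    <offeredServices>
      <operation name="sendRTSPmsg">
        <inputqoscontract_id>contractNo4</inputqoscontract_id>
        <qosmodel_id>modelDelay</qosmodel_id>
        <qosmapping_id>trtspTransCon</qosmapping_id>
      </operation>
    </offeredServices>
    <inputQoSContract contract_id="contractNo4">
      <inDimension>frame rate</inDimension>
      <lowestQoS>5</lowestQoS>
    </inputQoSContract>
    <qosModel model_id="modelDelay">
      <dimension>delay</dimension>
      <unit>ms</unit>
      <direction>decreasing</direction>
      <minQoSvalue>1000</minQoSvalue>
      <maxQoSvalue>0</maxQoSvalue>
    </qosModel>
    <qosMapping mapping_id="trtspTransCon">
      <function>t = h()</function>
    </qosMapping>
  </qosCharacteristics>
</servicePlan>
```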


4.5 QuAMCom Component Model

5. SOFTWARE ENGINEERING METHODS

5.1 Service Modelling

QuA views the application as a service constructed from sub-services, which again can be constructed from other sub-services. If a service or sub-service cannot be broken into services of finer granularity, it is considered an atomic service. Each service has a type, and its implementation can be a service, a sub-service, or an atomic service. This representation of the application creates a service graph, illustrated in Figure 8. The objective of this approach is twofold: firstly, it is suitable for designing component-based applications, and secondly, it gives application developers the means to design alternative application configurations. The service type defines the service name and the provided functional services in the form of operation signatures and a semantic description. There are different ways to represent a service type, such as the Web Service Description Language (WSDL), OMG's Interface Definition Language (IDL), and the Java interface description language (JIDL). Implementations of the service types at the different abstraction levels are: - Service and sub-service levels → compositions of service types. - Atomic service level → a component with the operations specified in the service type. There can be one or more alternative implementations of a service type, where the implementations have the same functional properties but different QoS characteristics. In the illustrated service model (see Figure 8) there are two alternative implementations of the service types ii1 and iii2, located at the sub-service and atomic service levels. To specify the implementation of a service type and its QoS characteristics, QuA uses the service plan, described in the previous chapter.

[Figure 8 shows the hierarchical service model: ServiceType_i at the service level, ServiceType_ii1 to ServiceType_iik at the sub-service level, and ServiceType_iii1 to ServiceType_iiil at the atomic service level, with OR branches indicating alternative implementations of ServiceType_ii1 and ServiceType_iii2.]

Figure 8. Service model

When application developers model the services of an application, they should start by identifying the alternative application configurations and then specify these alternatives in the service model (see Figure 9). Results from the service modelling are service types, compositions of service types, atomic services, alternative compositions, and alternative atomic services: - Service types; the interfaces to the different parts of the application, deployed as WSDL files. - Compositions of service types are specified in service plans (the composition specification information element) that during deployment are associated with the service type they implement. - Atomic services are specified in service plans (relating the type to the component that implements it) that during deployment are associated with service types at the atomic service level. - Alternative compositions are specified in different service plans that during deployment are associated with the same service type. - Alternative atomic services are either components or parameter configurations of the same type. The alternatives are specified in different service plans, which during deployment are associated with the same service type at the atomic service level.

[Figure 9 shows how two application configurations in the service model, each implementing ServiceType_i with different atomic services (AtomicService_1, AtomicService_2), are deployed as ServicePlan_1 and ServicePlan_2 referring to the components C1 and C2.]

Figure 9. From application configurations to deployment
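The deployment step above — several service plans registered against one service type, with the planner choosing among them — can be sketched as follows. This is an illustrative data-structure sketch, not QuA's actual API; the repository, the per-plan utility predictor, and the example utility values are all assumptions.

```python
# Illustrative sketch (not QuA's actual API): alternative service plans are
# registered per service type, and the planner picks the highest-utility one
# for the current context/resource snapshot.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ServicePlan:
    name: str
    service_type: str
    # Hypothetical per-plan utility predictor given a context snapshot.
    utility: Callable[[dict], float]

@dataclass
class PlanRepository:
    plans: dict = field(default_factory=dict)  # service type -> [ServicePlan]

    def deploy(self, plan: ServicePlan) -> None:
        # Deployment associates the plan with the service type it implements.
        self.plans.setdefault(plan.service_type, []).append(plan)

    def select(self, service_type: str, context: dict) -> ServicePlan:
        # Service planning: choose the alternative with best predicted utility.
        return max(self.plans[service_type], key=lambda p: p.utility(context))

repo = PlanRepository()
repo.deploy(ServicePlan("ServicePlan_1", "ServiceType_i",
                        utility=lambda ctx: 0.9 if ctx["net"] == "WLAN" else 0.2))
repo.deploy(ServicePlan("ServicePlan_2", "ServiceType_i",
                        utility=lambda ctx: 0.6))
best = repo.select("ServiceType_i", {"net": "GPRS"})
```

On a GPRS link the first plan's predicted utility drops below the second's, so the planner selects ServicePlan_2.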

5.2 QoS Modelling

5.2.1 Introduction

The availability of resources (memory, CPU, network, secondary storage, etc.) and the context (hardware, OS, network, runtime environment) govern the QoS characteristics of each atomic service, sub-service, and service, and thus the application. QuA does not define how the QoS characteristics are to be modelled, but a middleware that implements QuA will specify the semantics (meaning) and syntax (rules) for modelling QoS. At design time, knowledge about the execution environment and communication system, i.e., the different system contexts, is very limited. One must therefore be pragmatic when modelling the QoS characteristics. For each implementation of a service type, one models the QoS and maintains the QoS mapping between the atomic, sub-service, and service levels for the QoS abstraction levels user, application, and resource. First, the relationships between input messages to the service, i.e., operations, and the output messages are established, including states if relevant for the QoS characteristics. From the input-output relationships, QoS models are defined, which give the QoS dimensions and the acceptable range of QoS values along the dimensions. In some cases the QoS model assumes that the input messages have certain QoS characteristics. These are input QoS guarantees that the user of the service must give. Lastly, the QoS mapping functions are specified, which establish the logical relationships between the abstraction levels relevant for the service. Figure 10a shows the tasks involved in QoS modelling, and Figure 10b the parts of the service plan where results from the QoS modelling are stored. More details on QoS modelling of components, atomic services, and sub-services are provided in chapters 5.2.2 to 5.2.5.

[Figure 10a depicts the modelling tasks for a component C1 (type Stream, stateful, with dependencies on C2 and JRE1.4): establishing input-output relationships, identifying offered services and states, specifying input QoS guarantees, and defining the QoS model and QoS mapping, all against the resources (CPU, memory, network, etc.) and the context (hardware, OS, network, runtime environment, etc.). Figure 10b shows the corresponding service plan entries: offered services related to input QoS contracts, states, QoS models, and QoS mappings (e.g., service1 inQConI {idle} modelA mapK; service1 inQConI {active} modelB mapL; service5 {active} modelC mapM); the input QoS contract inQConI (frame rate, frames/s, range [5, 50]); the QoS models modelA (delay, ms, decreasing, [0, 1000]), modelB (delay, ms, decreasing, [0, 90]), and modelC (availability, p, increasing, [0.90, 1.0]); and the mapping functions mapK, mapL (delay, t = h()) and mapM (availability, A = h()).]

Figure 10. a) QoS modelling tasks and b) relevant parts in the service plan for storing results

5.2.2 Services and States

Components in a composition both offer and require services from each other, as illustrated in Figure 11 for a composition with three components. Furthermore, a component often offers multiple services, which again may have different QoS characteristics. Hence, separate QoS models and QoS mapping functions may be required for each offered service. States, “the binding of values to mutable variables in any given point in time” [9], typically have an effect on the component’s QoS characteristics. Due to the component’s encapsulation property, states are considered an abstract aspect of the service. Hence, when modelling the QoS characteristics, one must consider each service and its states, and take state transitions into account.


[Figure 11 shows three components C1, C2, and C3 with numbered service bindings. The offered and required services are:]

| Component | Offered | Required |
|---|---|---|
| C1 | 2, 5 | 4, 1 |
| C2 | 3 | 2 |
| C3 | 4 | 3 |

Figure 11. Offered and required services
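The per-service, per-state selection of QoS models and mapping functions described above can be sketched as a simple lookup, mirroring the service-plan entries shown in Figure 10b. The table layout and the helper function are illustrative assumptions; the model/mapping identifiers are taken from Figure 10b.

```python
# Hypothetical sketch: per-operation, per-state selection of QoS model and
# QoS mapping function, mirroring the service-plan entries of Figure 10b.
QOS_TABLE = {
    ("service1", "idle"):   {"model": "modelA", "mapping": "mapK"},
    ("service1", "active"): {"model": "modelB", "mapping": "mapL"},
    ("service5", "active"): {"model": "modelC", "mapping": "mapM"},
}

def qos_entry(operation: str, state: str) -> dict:
    """Return the QoS model/mapping ids valid for an operation in a state."""
    return QOS_TABLE[(operation, state)]

# A state transition (idle -> active) changes which model/mapping applies.
entry = qos_entry("service1", "active")
```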

5.2.3 Input QoS Contract

In cases where the input messages must have certain QoS characteristics, one specifies these QoS requirements in an input QoS contract. Requirements are defined at the application or user QoS level, depending on the abstraction level of the QoS model. Any client that uses the offered service associated with an input QoS contract must adhere to these requirements; otherwise the QoS model and mapping functions are not valid. The design of the service plan allows application developers to associate an input QoS contract with each single service or with a set of services. The latter is useful when the component implements multiple interfaces for different users/clients.

5.2.4 QoS Model

The objective of the QoS model is to express the QoS characteristics in a manner that ensures the quantification is interpreted correctly and independently of the service and middleware platform. In addition, the quantification should use concepts and terms specified in the UML profile for QoS Modelling [2], a standard from the Object Management Group (OMG), for two reasons: it uses widely accepted terms and meta-models, and the quantification can be included in the design models of the application. The foundation for the QoS model is the causalities between input messages and the corresponding output messages of the service. Causality is the transformation that the service performs, which is observed by studying the time the output message is sent and the content of the message. The message content in input and output messages is associated with QoS, which is used to identify the QoS dimensions and quantify the reduction or increase in quality. When studying causality, one must include all possible combinations of input and output messages, to ensure that one both identifies and quantifies the maximum and minimum QoS for the identified QoS dimensions, QoS_k: - maxQoS_k is the quality of a service with ideal behaviour, i.e., when the service executes completely and correctly in an execution environment with unlimited resources. - minQoS_k is the lowest quality that the service can provide and still execute completely and correctly, when the execution environment has limited resources. The QoS of input and output messages can be viewed at two abstraction levels, user or application, depending on the position the service has in the service hierarchy. In general, QoS models for atomic and sub-services are at the application QoS level, while (higher-order) services are at the user level. When modelling services with asynchronous interfaces, it is easy to identify the input and output messages. For synchronous interaction it is more difficult to establish the causal relationship, since there is no clearly defined output message. The practical solution is to consider return objects and out parameters as messages. This is illustrated in Figure 12, for a composition of two atomic services with synchronous interaction.

[Figure 12 shows, for a composition of two components C1 and C2 interacting synchronously via operationX():ObjectType and operationY(ObjectType):ObjectType, how the input-output relationships are established at the atomic service and service levels by treating return objects as output messages.]

Figure 12. Modelling of synchronous operations

There are three basic relationships one can encounter when studying input and output messages: one-to-one, many-to-one, and one-to-many. These can be combined into other forms of relationships, e.g., many-to-many where n in ≠ j out.

One-to-one. For each input message there is one output message. The difference in QoS between n input and output messages gives the QoS dimensions, k, and the minimum-maximum QoS values, {minQoS_k, maxQoS_k}, for these dimensions. To measure the QoS of a message, we define a point in a k-dimensional space: one dimension for the delay, and one dimension for each QoS dimension of the message content. Figure 13 illustrates the input and output message sets for a two-dimensional space. The x-axis is the time when the service received and sent out the messages; the y-axis is the QoS characteristics of the message content. Arrows between the points are the QoS degradation, deltaQoS, or improvement as the case might be, which gives the QoS model:

$$deltaQoS = QoS_{out} - QoS_{in} \rightarrow \{k, \min QoS_k, \max QoS_k\} \quad (4)$$
where
$$QoS_{in} = [QoS^1_{in}, QoS^2_{in}, \ldots, QoS^n_{in}], \quad QoS_{out} = [QoS^1_{out}, QoS^2_{out}, \ldots, QoS^n_{out}]$$

Figure 13. One-to-one
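The one-to-one case of equation (4) can be sketched numerically: paired per-message QoS values for one dimension, from which the dimension's minimum and maximum deltas are derived. The delay values below are illustrative, not measurements from the document.

```python
# Sketch of the one-to-one case in equation (4): per-message QoS values for
# n matched input/output pairs, from which {minQoS_k, maxQoS_k} are derived.
def delta_qos(qos_in, qos_out):
    """Element-wise QoS difference for matched input/output messages."""
    return [out - inp for inp, out in zip(qos_in, qos_out)]

# Illustrative delay timestamps (ms) for n = 4 messages along one dimension.
qos_in = [10.0, 12.0, 9.0, 11.0]
qos_out = [40.0, 35.0, 50.0, 42.0]

deltas = delta_qos(qos_in, qos_out)
# {minQoS_k, maxQoS_k} for this dimension:
min_qos, max_qos = min(deltas), max(deltas)
```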

Many-to-one. In some cases a service receives more than one input message before sending out the resulting output message, i.e., concatenation. This is captured in the model by identifying the subset of m input messages in the total set of n input messages. For each subset m, one identifies the difference in quality between the m input messages and the one output message, as illustrated in Figure 14. This is done for all n/m subsets, giving all QoS dimensions and minimum-maximum QoS values:

$$deltaQoS = QoS_{out} - QoS_{in} \rightarrow \{k, \min QoS_k, \max QoS_k\} \quad (5)$$
where
$$QoS_{in} = \begin{bmatrix} QoS^{11}_{in} & QoS^{12}_{in} & \ldots & QoS^{1n}_{in} \\ QoS^{21}_{in} & QoS^{22}_{in} & \ldots & QoS^{2n}_{in} \\ \ldots \\ QoS^{m1}_{in} & QoS^{m2}_{in} & \ldots & QoS^{mn}_{in} \end{bmatrix}, \quad QoS_{out} = [QoS^1_{out}, QoS^2_{out}, \ldots, QoS^n_{out}]$$

Figure 14. Many-to-one

One-to-many. Likewise, one input message may result in m output messages, i.e., segmentation. The difference in QoS between one input message and m output messages defines the QoS dimensions and m possible QoS values, see Figure 15. This is done for all n input messages, to give all QoS dimensions and minimum-maximum QoS values:

$$deltaQoS = QoS_{out} - QoS_{in} \rightarrow \{k, \min QoS_k, \max QoS_k\} \quad (6)$$
where
$$QoS_{in} = [QoS^1_{in}, QoS^2_{in}, \ldots, QoS^n_{in}], \quad QoS_{out} = \begin{bmatrix} QoS^{11}_{out} & QoS^{12}_{out} & \ldots & QoS^{1n}_{out} \\ QoS^{21}_{out} & QoS^{22}_{out} & \ldots & QoS^{2n}_{out} \\ \ldots \\ QoS^{m1}_{out} & QoS^{m2}_{out} & \ldots & QoS^{mn}_{out} \end{bmatrix}$$

Figure 15. One-to-many

5.2.5 QoS Mapping

QoS abstraction levels. QoS is traditionally defined at different abstraction levels: user, application, middleware, and resources. The logical relationships between these levels are expressed by the QoS mapping functions. We have combined the four QoS abstraction levels with QuA's hierarchical service model, to establish relationships between QoS at the different service levels while remaining conformant with the traditional QoS abstraction layers. Openness is one key property that QuA provides. Part of our approach to achieving the openness we seek is to deploy middleware services as atomic services alongside the applications' business logic. This removes the traditional strict split between middleware and application QoS, and replaces it with a smooth transition. A QoS-aware application together with the middleware platform is a system (often distributed), which depends upon and is influenced by the resources' QoS characteristics and the technical aspects of the context. QoS mapping between the resource and middleware/application QoS layers therefore includes context information (static information about hardware, communication system, OS, and runtime execution environment) and resource QoS characteristics (dynamic information about memory, CPU, network, and secondary storage). Figure 16 shows the QoS mapping, including context information, together with the service model and the four QoS abstraction levels. In the three paragraphs that follow, we describe how one, at design time, defines the user-application, application-application, and application-{resource, context} QoS mapping functions.

[Figure 16 relates the four QoS abstraction levels (user, application, middleware, resource) to the service model levels (service, sub-service, atomic service, and resource/context information types) via the application-user, application-application, and {resource, context}-application QoS mapping functions.]

Figure 16. QoS mapping of services in the hierarchical service model

Application-User QoS Mapping Functions. User QoS characteristics are specific to the application domain, just like the application QoS characteristics. To be able to define mapping functions between these two QoS levels, one must therefore have knowledge about the application domain and how an increase/decrease in application QoS influences the user experience. A powerful and useful approach is to perform experiments where users rate the different combinations of QoS values, e.g., frame rate versus quantisation scale [13]. Mapping between application and user QoS is usually positioned at the service level, as illustrated in Figure 16. However, it can be placed at the sub-service level for a step-wise instantiation process of the application (this is exemplified in the scenario).


Application-Application QoS Mapping Functions. Inside the application QoS level one must map between the QoS characteristics of the sub-services and atomic services. In its simplest form, the mapping is a summation or multiplication of QoS values along the QoS dimensions of the application QoS characteristics. The application-application QoS mapping functions are also a useful tool for reducing the number of QoS dimensions.

{Resource, Context}-Application QoS Mapping Functions. Since middleware services are deployed alongside the (application) atomic services, the middleware QoS level is omitted, and mapping is instead done directly between application and resource level QoS. Formally, the application QoS characteristics for an atomic or sub-service is a QoS vector of length k, where k is the number of QoS dimensions at the application level. QoS values in the vector are related to i) the input QoS vector, ii) the resource availability, R, of the l resource types, and iii) the context information, C, of the t context types. The general {resource, context}-application QoS mapping function for each single output message is thus:

$$QoS^1_{out} = h(QoS^1_{in}, R, C) \quad (7)$$
where
$$QoS^1_{out} = [QoS^1_{out\,1}, QoS^1_{out\,2}, \ldots, QoS^1_{out\,k}], \quad QoS^1_{in} = [QoS^1_{in\,1}, QoS^1_{in\,2}, \ldots, QoS^1_{in\,k}]$$
$$R = \begin{bmatrix} R_{11} & \ldots & R_{1k} \\ \ldots & \ldots & \ldots \\ R_{l1} & \ldots & R_{lk} \end{bmatrix}, \quad C = \begin{bmatrix} C_{11} & \ldots & C_{1k} \\ \ldots & \ldots & \ldots \\ C_{t1} & \ldots & C_{tk} \end{bmatrix}$$

The function is extended to cover n input-output relationships. This, plus the number of k dimensions for complex message structures, makes it very difficult to define the functions. Fortunately, we are frequently concerned only with i) an overall measure of the relationship and ii) one QoS dimension for each mapping function. We therefore define a {resource, context}-application QoS mapping function over a series of n relationships for a single QoS dimension. Each function is typically defined as an aggregating statistical measure such as the maximum, mean, or variance, making the mapping functions useful and suitable for computations:

$$QoS_{out\,k} = h(QoS_{in\,k}, R_k, C_k) \quad (8)$$
where
$$QoS_{in\,k} = [QoS^1_{in\,k}, \ldots, QoS^n_{in\,k}], \quad R_k = [R_{1k}, \ldots, R_{lk}], \quad C_k = [C_{1k}, \ldots, C_{tk}]$$
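A per-dimension mapping function in the spirit of equation (8) — an aggregating statistical measure over n relationships — might look like the following. The concrete form of h (mean input delay plus a processing term shrinking with CPU share), the dictionary parameters, and the base-cost constant are all illustrative assumptions, not functions defined by the document.

```python
# Sketch of equation (8): a per-dimension {resource, context}-application
# mapping defined as an aggregating statistical measure (here: the mean).
from statistics import mean

def map_delay(qos_in_k, r_k, c_k):
    """Hypothetical mapping h for the delay dimension: mean input delay
    plus a processing term that shrinks with the available CPU share."""
    cpu_share = r_k["cpu"]            # resource availability for this dimension
    base_cost = c_k["base_cost_ms"]   # context-derived constant (illustrative)
    return mean(qos_in_k) + base_cost / cpu_share

# Aggregate over n = 3 input-output relationships along the delay dimension.
delay = map_delay([5.0, 7.0, 6.0], {"cpu": 0.5}, {"base_cost_ms": 10.0})
```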

To define {resource, context}-application QoS mapping functions one must take into account i) the different combinations of hardware, OS, and runtime platform, ii) non-linear resource types, and iii) that resources are shared with unknown software entities. These practical considerations must be addressed in a manner that enables application developers to put forward computable mapping functions. Our approach is pragmatic, and employs classification and mean statistical measures. The following sections describe our approach for four resource (and context) types: memory, CPU, network, and secondary storage. These four are chosen since they are important entities in the video streaming scenario, which is used to define the requirements of the mobile middleware and to test and validate concepts and solutions.

Memory. Local memory must have sufficient space for the application and middleware to avoid out-of-memory faults and disk swapping. The total memory requirement is the sum over all application components (plus the middleware), but an allocation beyond what is required will not improve QoS. The minimum memory requirements of a component are, in most cases, easy to identify by employing a tool or by gathering statistics from the runtime environment.

CPU. The mapping between CPU and QoS is difficult to specify, since the processing power of any computer is determined by the chip set, motherboard, and OS. Furthermore, the fundamental problem with compute-intensive functionality is that the context switches between internal processes give a non-linear relationship between an increase in CPU availability and performance. A pragmatic approach is thus needed that gives us reasonable mapping functions that apply to different computers. When the component is running on the target platform, the middleware can then improve the mapping functions with monitored data about resource availability and achieved QoS, i.e., self-learning mapping functions. To define these rough mapping functions, we have two simple rules: i) the intersection between minimum acceptable QoS and minimum CPU availability is one point only, and ii) the relationship between a delta increase in CPU and QoS follows a negative exponential distribution. These two rules give equation (9), where minQoS is the achieved quality when the CPU availability is equal to the minimum required allocation of CPU, minCPU. To tune the mapping function for different combinations of chip set, motherboard, and OS, it has a correction factor, Cf. The effect different correction factors have on QoS is illustrated in Figure 17, for an arbitrary QoS dimension that has decreasing values for higher quality, e.g., delay. The two rules have been partially validated by employing them on performance tests of an FEC Reed-Solomon algorithm that replaces dropped streaming symbols [21]. Test configurations included different computers (hardware and OS combinations). From the measured maximum throughput on the different computers we were able to classify the computers into three groups, each with their own correction factor, and put forward a general function that predicts the throughput depending on the available CPU and the correction factor. But it still remains to prove the rules through empirical studies of other types of functionality.

$$QoS_{out\,k} = h(R_{CPU\,k}, C_{CPU\,k}) \rightarrow \min QoS_{out\,k}\, e^{-(CPU_\Delta C_f)} \quad \forall\; R_{CPU} \geq \min CPU \quad (9)$$
where
$$R_{CPU\,k} = [CPU_\Delta], \quad C_{CPU\,k} = [C_f]$$


[Figure 17 plots QoS_out_k against the available CPU, CPU_a, for three correction factors corresponding to the platform classes mobile, laptop, and server: QoS starts at minQoS at minCPU and decays exponentially towards maxQoS as the CPU availability increases.]

Figure 17. Negative exponential function for dimensions with decreasing QoS-values
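The rough CPU mapping of equation (9) can be sketched directly. The numeric values (minQoS = 1000 ms, minCPU = 0.1, Cf = 5.0) are illustrative assumptions chosen to show the shape of the curve, not the correction factors reported for the three platform classes.

```python
# Sketch of equation (9): rough CPU-to-QoS mapping for a decreasing-value
# dimension (e.g. delay), with a per-platform correction factor Cf.
import math

def cpu_to_qos(cpu_avail, min_cpu, min_qos, cf):
    """QoS improves (the value decreases) exponentially as CPU beyond the
    minimum allocation becomes available; cf tunes the curve per platform."""
    if cpu_avail < min_cpu:
        raise ValueError("below minimum CPU allocation; QoS model not valid")
    cpu_delta = cpu_avail - min_cpu
    return min_qos * math.exp(-cpu_delta * cf)

# Illustrative: delay dimension with minQoS = 1000 ms at minCPU = 0.1.
at_minimum = cpu_to_qos(0.1, 0.1, 1000.0, cf=5.0)  # rule i): exactly minQoS
improved = cpu_to_qos(0.5, 0.1, 1000.0, cf=5.0)    # rule ii): exponential gain
```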

Network. The resource and context types of the network(s) are mapped to application QoS dimensions such as throughput, packet drop rate, delay, and availability. Networks have three key properties that make it difficult to define accurate but general mapping functions: data traffic shares a finite network capacity, access networks have different access methods and protocol overhead, and the bit error rate (BER) varies over time. Therefore we use, as for the CPU resource, rough mapping functions that at runtime are updated with monitored data about resource availability and resulting QoS. The foundation for our rough mapping functions is two rules: i) the end-to-end data rate is identical to the data rate in the access network, since the access network has the lowest data rate along the route, and ii) the sharing of the medium/channel/network, as well as layer-two protocol overhead, is expressed by a network utilisation factor, Nf. By applying these two rules we define the general mapping functions for the QoS dimensions throughput, packet drop rate, and delay. - Throughput at the application level is thus the data rate in the access network, R_acc_net, multiplied by the network utilisation factor, Nf, adjusted for the protocol overhead of layers three to seven, Po, see equation (10). The size of the transported message, Ms, is application dependent. During runtime the middleware can then improve the mapping function with monitored throughput data by adjusting the network utilisation factor.

$$QoS_{throughput} = h(R_{net}, C_{net}) \rightarrow R_{data\_rate} = \frac{M_s}{M_s + P_o}\, R_{acc\_net}\, N_f \quad (B/s) \quad (10)$$
where
$$R_{net} = [R_{acc\_net}], \quad C_{net} = [P_o, N_f]$$

- Packet drop rate is primarily governed by the packet size (message size, Ms, plus protocol overhead, Po) and the BER in the wireless access networks, see equation (11). Network congestion may also lead to packets being dropped, but this has less impact on the total probability of dropping a packet.

$$QoS_{packet\,drop\,rate} = h(R_{net}, C_{net}) \rightarrow PDR_{error} = (M_s + P_o)\, 8\, BER \quad (p) \quad (11)$$
where
$$R_{net} = [BER], \quad C_{net} = [P_o]$$

- Time delay over the network is the message size, Ms, plus protocol overhead, Po, divided by the predicted throughput, see equation (12). If the application or layer-4 protocols have error detection and retransmission mechanisms, the delay will increase. Equation (12) is therefore extended with the notion of BER and retransmission, which gives equation (13). Figure 18 illustrates how an increasing network data rate improves delay for three different combinations of BER and network utilisation, Nf. Delay cannot be modelled solely by mean values, due to network jitter. Since there is no general method to represent this variance in delay [10][11][12], jitter is modelled as a QoS dimension alongside delay, where the mapping between network and application QoS is expressed by standard deviation, confidence levels, or a table relating the variation in delay to another system condition (like signal strength).

$$QoS_{delay} = h(R_{net}, C_{net}) \rightarrow t_{net} = \frac{M_s + P_o}{R_{acc\_net}\, N_f} \quad (s) \quad (12)$$
where
$$R_{net} = [R_{acc\_net}], \quad C_{net} = [P_o, N_f]$$

$$QoS_{delay} = h(R_{net}, C_{net}) \rightarrow t_{net} = \frac{(M_s + P_o)\,(1 + (M_s + P_o)\, 8\, BER)}{R_{acc\_net}\, N_f} \quad (s) \quad (13)$$
where
$$R_{net} = [R_{acc\_net}, BER], \quad C_{net} = [P_o, N_f]$$

[Figure 18 plots the mean network delay t_net (ms) against the available data rate R_acc_net (kB/s) for Ms = 1000 B and Po = 60 B (Mobile IP 20 B, IP 20 B, TCP 20 B), for the combinations Nf = 0.1 with BER = 0, Nf = 0.3 with BER = 0, and Nf = 0.1 with BER = 10^-4.]

Figure 18. Mean delay versus available data rate for Nf = 0.1, Nf = 0.3, and Nf=0.1& BER= 10-4
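Equations (10)–(13) can be sketched as a few small functions. The message and overhead sizes follow the figures quoted for Figure 18 (Ms = 1000 B, Po = 20 B Mobile IP + 20 B IP + 20 B TCP); the data rate and utilisation factor below are illustrative assumptions.

```python
# Sketch of equations (10)-(13): rough network mapping functions.
# Ms = message size (B), Po = protocol overhead (B),
# r_acc = access-network data rate (B/s), nf = network utilisation factor.
def throughput(ms, po, r_acc, nf):
    """Eq. (10): application-level throughput in B/s."""
    return ms / (ms + po) * r_acc * nf

def packet_drop_rate(ms, po, ber):
    """Eq. (11): drop probability from bit errors over the whole packet."""
    return (ms + po) * 8 * ber

def delay(ms, po, r_acc, nf, ber=0.0):
    """Eq. (12), extended to eq. (13) when retransmissions (BER > 0) apply."""
    t = (ms + po) / (r_acc * nf)
    return t * (1 + packet_drop_rate(ms, po, ber))

# Ms = 1000 B, Po = 60 B as in Figure 18; 10 kB/s access rate is illustrative.
tp = throughput(1000, 60, 10_000, 0.3)
t0 = delay(1000, 60, 10_000, 0.3)
t1 = delay(1000, 60, 10_000, 0.3, ber=1e-4)  # retransmissions increase delay
```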


- Availability at the application level, A_net, is governed by the probability that the terminal is within the coverage area of the access network, Ac, and the availability of the access network infrastructure, Ai, see equation (14).

$$QoS_{availability} = h(R_{net}) \rightarrow A_{net} = A_c\, A_i \quad (p) \quad (14)$$
where
$$R_{net} = [A_c, A_i]$$

Secondary storage. The secondary-storage QoS characteristics are typically mapped to application QoS dimensions such as delay and availability. When secondary storage is used to store media files, like video, the mapping functions must take into consideration that media files are not transferred in their entirety. Instead, the secondary storage reads i seconds of the media at a time, allowing it to serve multiple users. This implies that the secondary storage moves its disk head between requests. To estimate the delay for reading a part of the media, t_store, we therefore divide the size of the requested data, S, by the disk block size, B_size, to get the number of disk requests. Each disk-block request waits a time, queueDelay, in a queue before being processed. We assume random allocation of disk blocks, and therefore add to the reading delay the seek time, t_s, the rotational latency, t_r, and the transfer time for a disk block (the block size, B_size, divided by the transfer rate, T_tfr). Equation (15) shows the resulting mapping function. Note that this function is only valid as long as the total load of all requests, α_r·S, is below the transfer rate of the secondary storage, T_tfr.

QoS_delay = f(R_storage, C_storage) → t_store = ⌈S / B_size⌉ · (queueDelay + t_s + t_r + B_size / T_tfr)   (ms),   ∀ α_r·S < T_tfr   (15)

where R_storage = [queueDelay] and C_storage = [B_size, t_s, t_r, T_tfr]

Availability at the application level, A_stor, is mapped to the availability of the secondary storage, A_stor_sys, in equation (16). The availability is the result of the reliability (mean time between failures, MTBF_stor) and the maintainability (mean time to repair, MTTR_stor) of the secondary storage.

QoS_availability = f(R_storage) → A_stor = A_stor_sys = MTBF_stor / (MTBF_stor + MTTR_stor)   (p)   (16)

where R_storage = [MTBF_stor, MTTR_stor]
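The two storage mapping functions can be sketched directly in code. The sketch below uses our own (hypothetical) names and unit conventions, and leaves the validity check α_r·S < T_tfr to the caller:

```python
import math

def storage_delay_ms(size_bytes, block_size_bytes, queue_delay_ms,
                     seek_ms, rot_latency_ms, transfer_rate_bytes_per_s):
    """Sketch of equation (15): reading S bytes issues ceil(S / B_size)
    disk-block requests; each request waits queueDelay in a queue, then
    pays the seek time t_s, the rotational latency t_r, and the block
    transfer time B_size / T_tfr. Returns the total estimated delay in ms."""
    n_requests = math.ceil(size_bytes / block_size_bytes)
    transfer_ms = (block_size_bytes / transfer_rate_bytes_per_s) * 1000.0
    per_request_ms = queue_delay_ms + seek_ms + rot_latency_ms + transfer_ms
    return n_requests * per_request_ms

def storage_availability(mtbf, mttr):
    """Sketch of equation (16): A_stor = MTBF / (MTBF + MTTR).
    MTBF and MTTR must use the same time unit."""
    return mtbf / (mtbf + mttr)
```

For example, reading a 1 MB media segment in 250 kB blocks at 25 MB/s, with a 2 ms queueing delay, 5 ms seek time, and 3 ms rotational latency, yields 4 requests of 20 ms each, i.e., an estimated 80 ms.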


6. DESIGN MODELS

6.1 Use Cases

6.2 Interaction Models

6.3 Structural Models


7. DISCUSSION AND OBSERVATIONS

Strengths and weaknesses of the research method. How useful prototyping was, and …


8. CONCLUSIONS AND FUTURE WORK

We consider a mobile middleware with platform-managed QoS to be a useful execution environment for applications that must meet a set of system and/or user QoS requirements. The proposed solution consists of the component architecture QuA, its QoS management plug-ins, and service plans. A video streaming scenario has shown that a QoS-aware mobile middleware can manage QoS on behalf of the application. The scenario describes and exemplifies the principles and models that the application developer must take into consideration at design time, such as the service model and the QoS model. The results are specifications and predictions of end-to-end QoS and resource requirements for a fixed set of application configurations suitable for different system contexts. An important part of QuA is the mapping functions that specify the relationship between QoS at different abstraction levels. It proved difficult to make accurate and general mapping functions between application QoS and resources; hence, there is a need for self-learning mapping functions. Furthermore, future work should address runtime considerations, such as initial service planning, service re-planning, and dynamic reconfiguration of parameter configurations and component compositions.

9. REFERENCES

[1] International Organization for Standardization, CD 15935 Information Technology: Open Distributed Processing Reference Model – Quality of Service, ISO document ISO/IEC JTC1/SC7 N1996, October 1998.

[2] UML Profile for Modelling Quality of Service and Fault Tolerance Characteristics and Mechanisms, OMG Adopted Specification, June 2004.

[3] A. Solberg, S. Amundsen, J. Ø. Aagedal, F. Eliassen, "A Framework for QoS-Aware Service Composition", In Proceedings of the 2nd ACM International Conference on Service Oriented Computing, 2004.

[4] S. Amundsen, K. Lund, F. Eliassen, R. Staehli, "QuA: Platform-Managed QoS for Component Architectures", In Proceedings of the Norwegian Informatics Conference (NIK), November/December 2004, pp. 55-66.

[5] Simula Research Laboratory: QuA documentation, http://www.simula.no:8888/QuA/55.

[6] R. Staehli, F. Eliassen, S. Amundsen, "Designing Adaptive Middleware for Reuse", In Proceedings of the 3rd International Workshop on Reflective and Adaptive Middleware, 2004.

[7] J. Walpole, C. Krasic, L. Liu, D. Maier, C. Pu, D. McNamee, D. Steere, In Database Semantics: Semantic Issues in Multimedia Systems, R. Meersman, Z. Tari, S. Stevens (eds.), Kluwer Academic Publishers, January 1999.

[8] D. McNamee, C. Krasic, K. Li, A. Goel, D. Steere, J. Walpole, "Control challenges in multi-level adaptive video streaming", In Proceedings of the 39th IEEE Conference on Decision and Control (CDC 2000), Australia, December 2000.

[9] E. Bertino, A. K. Elmagarmid, M. Hacid, "A Logical Approach to Quality of Service Specification in Video Databases", Multimedia Tools and Applications, 23 (2), Kluwer Academic Publishers, June 2004, pp. 75-101.

[10] J. Chesterfield, R. Chakravorty, J. Crowcroft, P. Rodriguez, "Experiences with multimedia streaming over 2.5G and 3G Networks", In Proceedings of the First Annual International Conference on Broadband Networks (BROADNETS 2004), October 2004.

[11] Y. Koucheryavy, D. Moltchanov, J. Harju, "Performance evaluation of live video streaming service in 802.11b WLAN environment under different load conditions", In Proceedings of MIPS 2003, November 2003.

[12] D. Loguinov, H. Radha, "Measurement Study of Low-bitrate Internet Video Streaming", In Proceedings of the ACM SIGCOMM Internet Measurement Workshop (IMW), November 2004.

[13] J. McCarthy, M. A. Sasse, D. Miras, "Sharp or Smooth? Comparing the effects of quantization vs. frame rate for streamed video", In Proceedings of CHI 2004, April 2004, pp. 20-24.

[14] IETF RFC 3344, "IP Mobility Support for IPv4", C. Perkins (ed.), August 2002.

[15] R. M. Soley, D. S. Frankel, J. Mukerji, E. H. Castain, "Model Driven Architecture – The Architecture of Choice For A Changing World", OMG, 2001.

[16] S. Hallsteinsen, J. Floch, E. Stav, "An architecture centric approach to building adaptive systems", In Proceedings of Software Engineering and Middleware, September 2004.

[17] L. Capra, S. Zachariadis, C. Mascolo, "Q-CAD: QoS and Context Aware Discovery Framework for Adaptive Mobile Systems", In Proceedings of the IEEE International Conference on Pervasive Services, July 2005.

[18] IETF RFC 2326, "Real Time Streaming Protocol (RTSP)", H. Schulzrinne, A. Rao, R. Lanphier, April 1998.

[19] M. Zink, C. Griwodz, R. Steinmetz, "KOM Player – A Platform for Experimental VoD Research", In Proceedings of the 6th IEEE Symposium on Computers and Communications (ISCC'01), July 2001.

[20] M. Zink, C. Griwodz, J. Schmitt, R. Steinmetz, "Scalable TCP-friendly Video Distribution for Heterogeneous Clients", In Proceedings of Multimedia Computing and Networking 2003 (MMCN'03), January 2003, pp. 102-113.

[21] P. Halvorsen, T. Plagemann, V. Goebel, "Integrated Error Management for Media-on-Demand Services", In Proceedings of the 20th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2001), April 2001, pp. 621-630.
