Delivering Presentations from Multimedia Servers*

Nevzat Hurkan Balkir, Gultekin Ozsoyoglu
Department of Computer Engineering and Science
Case Western Reserve University
Cleveland, OH 44106

Abstract

Most multimedia servers reported in the literature are designed to serve multiple and independent video/audio streams. We think that, in the future, multimedia servers will also serve complete presentations. A multimedia presentation contains explicitly specified synchronization information describing when each multimedia data item needs to be delivered in the playout of the presentation. Multimedia presentations provide unique opportunities to develop algorithms for buffer management and admission control, as execution-time consumption requirements of presentations are known a priori. In this paper, we examine presentations in three different domains (heavyweight, middleweight, and lightweight presentations) and provide buffer management and admission control algorithms for the three domains. We propose two improvements (flattening and dynamic adjustments) on the schedules created for heavyweight presentations. Results from a simulation environment for the proposed algorithms are also presented.

1. Introduction

Today's increasing capabilities of computers allow users to create advanced multimedia presentations [Magel97, Oeft97]. Many of these presentations contain continuous multimedia objects with real-time consumption requirements as well as images, text, tables, charts, etc. Possible multimedia data types in a given presentation include a variety of text data in different formats (txt, doc, ps, ppt, etc.), images in different formats (jpeg, tif, gif, etc.), as well as video and audio data of different types (avi, mpeg2, etc.). Multimedia presentations also contain explicitly specified synchronization information describing when each data type needs to be delivered in a given presentation. In this paper, we model a multimedia presentation as a presentation graph, which is an acyclic directed graph that defines the playout order of objects within the presentation.

Continuous multimedia server research in the literature exclusively deals with the delivery of on-demand video/audio streams, e.g., movie-on-demand systems. We think that, especially in the context of digital libraries, electronic books, and distance learning, delivery of multimedia presentations will be very important as well. This paper deals with buffer management and admission control issues in on-demand multimedia presentation servers. One important advantage of admitting and serving presentations over admitting multimedia streams (e.g., movies) is that execution-time consumption requirements of presentations are known a priori.

* A preliminary version of this paper was published in the IEEE International Workshop on Multimedia DBMS [BalO98]. This research is supported by National Science Foundation Grants IRI 96-31214 and CDA 95-29503.

We make use of this information and introduce buffer management and stream prefetching techniques to improve the delivery of presentations from multimedia servers. Under various workload characterizations, we analyze the buffer requirements of a multimedia server delivering presentations, and we investigate efficient algorithms for the admission control of presentations.

The rest of this section (i) defines an abstract multimedia presentation server model, (ii) classifies presentation domain types as lightweight, middleweight, and heavyweight, and (iii) briefly lists related work. Section 2 describes the presentation admission problem in the lightweight presentation domain. Section 3 investigates the issues introduced by the middleweight domain and gives a buffer management and admission control algorithm based on prefetching. Section 4 gives a variation of the prefetching-based buffer management and admission control algorithm and two other algorithms that improve on prefetching for heavyweight presentations with variable production rates. Section 5 experimentally evaluates the algorithms of Sections 2, 3, and 4. Section 6 briefly describes ViSiOn, an OODBMS that implements the algorithms described in this paper. Section 7 concludes.

1.1 An abstract model for a multimedia presentation server

First we give a multimedia presentation server model for investigating the buffer requirements of serving multimedia presentations (as opposed to serving multimedia streams). In our model, scheduling is done in rounds. A round is a short time duration in which each admitted presentation is served some amount of data. The round duration is constant, while the amount of data served per presentation clearly varies.


Figure 1.1 An abstract model of a multimedia presentation server

We use an abstract model of a multimedia presentation server, which is similar to multimedia stream server models and is shown in Figure 1.1. The presentation server in Figure 1.1 uses the double-buffering scheme; i.e., while the production buffers are filled, the consumption buffers are emptied. This is a well-accepted scheme for serving streams as well as presentations that require real-time deadlines for playback [NgY96]. Two different types of sources, namely disk servers and network sources, can produce the input to a multimedia server. Disk servers are specialized servers that deliver objects required in a presentation and are local to the multimedia server; i.e., disk servers are connected to the multimedia server through a high-bandwidth local bus. Network sources are connected to the multimedia server through the network. In the literature, there are many detailed studies of both types of sources; disk scheduling studies for disk servers include [CheKY93, KenS97], and network traffic modeling studies for network sources include [RanRVK93]. In this paper, we assume that the input to the multimedia presentation server comes only from the disk servers. We do not investigate the effects of unreliable network connections (packets lost in the network) or unreliable servers (server site failures). Similarly, the unreliability of sources introduced in Section 4 does not refer to disk crashes due to malfunctions; it refers to actions of other disk users which lower the production rate of the disks available to the presentation management system.
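For illustration only, the following is a minimal Python sketch of the round-based double-buffering loop described above; all names (serve, produce_into, consume_from) are hypothetical, and the per-round data amounts are assumed to come from a precomputed schedule.

    # Minimal sketch of round-based double buffering (hypothetical names).
    # Two buffer sets alternate roles: while sources fill the production set,
    # the network drains the consumption set; the sets are swapped every round.
    from typing import Dict, List

    ROUND_DURATION = 1.0  # seconds per round (constant, as in the model)

    def produce_into(buffers: Dict[str, int], schedule: Dict[str, int]) -> None:
        # Sources deliver the amount scheduled for each presentation this round.
        for presentation, amount in schedule.items():
            buffers[presentation] = buffers.get(presentation, 0) + amount

    def consume_from(buffers: Dict[str, int]) -> None:
        # The network/clients drain everything buffered in the previous round.
        buffers.clear()

    def serve(per_round_schedule: List[Dict[str, int]]) -> None:
        production: Dict[str, int] = {}
        consumption: Dict[str, int] = {}
        for schedule in per_round_schedule:
            produce_into(production, schedule)   # fill one buffer set...
            consume_from(consumption)            # ...while the other is emptied
            production, consumption = consumption, production  # swap roles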

1.2 Levels of abstraction

Since our interest in this paper is mainly in admission control and buffer management for presentations in a multimedia presentation server, we use an abstract model for disk servers. To clarify the need for different levels of abstraction in this paper, we now describe the different types of presentation domains. We classify presentation domains depending on the amount of workload they create on the system.

The first domain is the lightweight presentation domain. A slide show is an example of this domain. Slide shows require relatively low data bandwidth and have no continuous-media requirements. Currently available disks can easily deliver much higher data bandwidths. Thus, for the lightweight presentation domain, we assume that the production and consumption rates are very large (infinite, for all practical purposes).

The second type of presentation belongs to the middleweight domain. Educational (classroom) presentations or News-on-Demand are examples of presentations from the middleweight domain. These presentations may contain still images, slides, audio, and short video segments. Clearly, the unlimited production and consumption rate assumptions of the lightweight domain are too optimistic for these presentations. On the other hand, since objects in these presentations have low interaction with each other and we expect objects to be well distributed over disk servers, we assume that disk servers for this domain have a constant production rate.

Finally, the third domain is for heavyweight presentations. Video-on-Demand is an example of the heavyweight domain. Large production rates and real-time consumption deadlines are properties of heavyweight presentations. For the heavyweight presentation domain, we assume that production rates of disk servers vary depending on the number of streams they serve.
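As an illustration only (not part of the original model), the three domains and the production-rate assumption made for each of them can be captured in a small Python structure; the names below are hypothetical.

    # Hypothetical encoding of the three presentation domains and the
    # production-rate assumption made for each of them.
    from enum import Enum

    class Domain(Enum):
        LIGHTWEIGHT = "lightweight"    # slide shows
        MIDDLEWEIGHT = "middleweight"  # classroom lectures, News-on-Demand
        HEAVYWEIGHT = "heavyweight"    # video-on-demand

    # Production-rate model assumed per domain.
    PRODUCTION_MODEL = {
        Domain.LIGHTWEIGHT: "unlimited",   # rates treated as infinite
        Domain.MIDDLEWEIGHT: "constant",   # fixed rate per disk server
        Domain.HEAVYWEIGHT: "variable",    # rate depends on #streams served
    }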

1.3 Related Work

Recent research on multimedia presentations has concentrated on synchronization models for presentations [Stein90, RanRVK93, BlaS96, Haindl96], presentation querying languages [LSBBOO97, ACCKS96], systems for authoring, retrieval, and scheduling of presentations [CandPS96, Dalal96], and automated construction of presentations [HKO97, HO97].

The area of continuous media storage servers is currently a very active research area [OBRS94, GarOS98, GhKS95]. Multimedia storage servers [GVKRR95] deal with file management [AOG92], disk management [RangV93, CheL96, KwonCS97], I/O scheduling [RedW93, CheKY93, GemmH94, KenS97, WolfYS97], buffer management [NgY96], network management, resource scheduling [VinGG95, DanSS96, GarOS98], and admission control and QoS guarantees [VinGGG94]. Anderson [And91] defines the problem as a metascheduling problem for continuous media resources. CHIMP [CandPS96] is a multimedia authoring and presentation system that deals with authoring, retrieval, and scheduling of presentations. Nerjes et al. [NMPRTW98] introduce scheduling of continuous and "discrete" data for multimedia servers. Little [Little93] also deals with the management of presentations and, in that sense, is close to our work; however, serving presentations from a server, buffer management, and presentation admission control issues are not dealt with. We are not aware of any studies that investigate scheduling requirements of different presentation domains and provide solutions such as flattening and dynamic adjustments. Also, scheduling studies (of multimedia streams) have been limited mostly to prefetching.

In "An Optimal Resource Scheduler for Continuous Display of Structured Video Objects", Escobar et al. [EGS96] deal with the delivery of independently requested streams and suggest a greedy resource scheduling algorithm which is similar to the prefetching algorithm in Section 3.1. The greedy resource scheduling algorithm uses prefetching based on the "current" knowledge of future consumption requirements. In comparison, in our work all consumption requirements of a presentation are known a priori (we have total knowledge of the admitted presentations and, thus, of their consumption requirements). Also, Escobar et al. do not analyze the effects of the placement strategies of multimedia objects on the disks for the greedy algorithm, as is done in this paper for the algorithm Prefetch for Sources. In "On Coordinated Display of Structured Video", Escobar et al. [EG97] extend their work with replication techniques to avoid bottlenecks caused by disk data bandwidth. In our paper, we use a different method, flattening, to avoid bottlenecks caused by disk data bandwidth. Flattening does not employ data replication, which may bring additional workload to disk and network resources; instead, flattening uses a memory-based rescheduling algorithm. In "Avoiding Retrieval Contention for Composite Multimedia Objects", Chaudhuri et al. [CGS95] focus on a special case of delivering multimedia objects where all multimedia objects have the same bandwidth requirements. Similar to Escobar et al., this study centers on prefetching algorithms; sliding buffers up and down (in time) is proposed as a prefetching technique. In comparison, in our study we examine prefetching strategies for different presentation domains (presentation domains with objects of variable bandwidth requirements). The flattening technique is not used by Chaudhuri et al.

2. Unlimited level of production: lightweight domain

In this section, we investigate the buffering requirements of lightweight presentations. For the lightweight domain, we make the following assumptions for all rounds to simplify the model of the multimedia presentation server: 1) the production rate, i.e., the data production rate of the underlying disk server, is more than the sum of the playback rates of all presentations, and 2) the consumption rate, i.e., the network data delivery rate, is more than the sum of the playback rates of all presentations. In addition, each client has enough local buffer space for the presentation that is currently playing out. We relax these assumptions later for the middleweight and heavyweight domains. With the above assumptions, we model a multimedia presentation server as in Figure 2.1.


Figure 2.1 A simplified model of a multimedia server

2.1 The presentation admission problem and space constraints

Since the buffer size in a multimedia presentation server is the only limitation in this abstraction, an admission decision for a new presentation delivery request is based solely on the satisfaction of buffer requirements. We first define some terminology:

T : Unit time duration for each round.
start : An integer value which corresponds to the first round [T0, T1). This is also called the initialization round. There is no consumption in this round; only production buffers are filled.
start+k-1 : An integer value which corresponds to the last round. This is also called the ending round. There is no production in this round; only the last consumption buffers are consumed.
G1, G2, ..., GN : Presentations (graphs) that are already admitted to the system.
GN+1 : Presentation that is questioned for admission.
FinTi : Finish time of presentation Gi according to the current schedule, 1 ≤ i ≤ N.
FinTN+1 : Duration of GN+1 + now.
k : The number of rounds needed to serve all of the currently admitted presentations (including the initialization round and the ending round).
B : Total amount of buffers available (in terms of bytes).
b : Unit buffer size (in terms of bytes).

Let T0 = now, and let Tk = T0 + k * T, where k = ⌈ (max1≤i≤N+1 FinTi − now) / T ⌉.
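To illustrate these definitions, here is a minimal Python sketch of how the number of rounds k and the last round boundary Tk could be computed. The function names are hypothetical, and the per-round buffer check is only an assumed form of the lightweight admission test, since the text above states the test solely in terms of buffer requirements.

    import math

    def number_of_rounds(finish_times, now, T):
        # k = ceil( (max_i FinT_i - now) / T ), taken over the admitted
        # presentations plus the one being considered for admission.
        return math.ceil((max(finish_times) - now) / T)

    def admit_lightweight(finish_times, per_round_requirement, now, T, B):
        # Hypothetical admission test for the lightweight domain: with
        # production and consumption rates treated as unlimited, we assume the
        # only check is that the total buffer requirement of every round stays
        # within the available buffer space B.
        k = number_of_rounds(finish_times, now, T)
        Tk = now + k * T  # end of the last round
        return all(per_round_requirement(t) <= B for t in range(k)), k, Tk

    # Example: three admitted presentations plus a candidate, rounds of 2 s:
    # finish_times = [130.0, 95.5, 180.0, 150.0]; now = 90.0; T = 2.0
    # number_of_rounds(finish_times, now, T) == math.ceil(90.0 / 2.0) == 45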

2   while (number of nodes in UtilizationHeap > 0)        {For all nodes in UtilizationHeap}
3   begin
4      node := HEAP-EXTRACT-MAX(UtilizationHeap);          {node has the highest PerRoundUtil}
5      X := Find_A_Prefetchable_Data_Buffer(node.RSs,t);
6      if (X ≠ NULL)
7      begin
8         node.RSs,t := node.RSs,t - X;
9         node.PerRoundUtil := Recalculate_Utilization(node.RSs,t);
10        HEAP-INSERT(node);
11        Update_Where_X_is_Prefetchable();
12     end
13  end

Figure 4.5. Algorithm Flatten

At line 5 of the Flatten algorithm in Figure 4.5, only those buffers that do not decrease the capacity of a source are returned. In other words, a data buffer b of a stream s in round r is "prefetchable" if the following conditions hold (r' is a round):

•	Stream s is fetched in round r',
•	Start time of r' < start time of r,
•	PerRoundUtils,r' < 1,
•	Total buffer requirement of all rounds between r' and r is less than B.
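The following is a minimal Python sketch of this flattening idea, not the exact Flatten procedure of Figure 4.5: it keeps rounds in a max-heap keyed by per-round utilization and, for the most loaded round, tries to move a stream's whole per-round request (rather than a single unit buffer) to an earlier round that already fetches that stream, has enough spare capacity for the moved request, and can hold the prefetched data in memory until it is needed. All names and the data layout are assumptions.

    import heapq

    # rounds[t] maps stream_id -> bytes requested from the source in round t;
    # 'capacity' is the assumed number of bytes a source can produce per round,
    # and 'memory' is the total server buffer space B.

    def per_round_util(requests, capacity):
        return sum(requests.values()) / capacity

    def flatten(rounds, capacity, memory):
        # Max-heap of rounds by utilization (heapq is a min-heap, so negate).
        heap = [(-per_round_util(reqs, capacity), t) for t, reqs in enumerate(rounds)]
        heapq.heapify(heap)
        # in_memory[t]: bytes resident in buffers during round t.
        in_memory = [sum(reqs.values()) for reqs in rounds]

        while heap:
            neg_util, t = heapq.heappop(heap)
            if -neg_util <= 1.0:
                break                      # no remaining round exceeds capacity
            moved = False
            for stream, size in list(rounds[t].items()):
                # Search earlier rounds r where the stream is already fetched,
                # the source still has room for the moved request, and holding
                # the data until round t does not overflow memory in between.
                for r in range(t - 1, -1, -1):
                    if stream not in rounds[r]:
                        continue
                    if sum(rounds[r].values()) + size > capacity:
                        continue
                    if any(in_memory[x] + size > memory for x in range(r, t)):
                        continue
                    rounds[r][stream] += size      # prefetch earlier...
                    del rounds[t][stream]          # ...instead of at round t
                    for x in range(r, t):
                        in_memory[x] += size       # data stays buffered until t
                    moved = True
                    break
                if moved:
                    break
            if moved:
                # Re-insert the round with its recalculated utilization; stale
                # keys of destination rounds only affect processing order.
                heapq.heappush(heap, (-per_round_util(rounds[t], capacity), t))
        return rounds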

4.3 Execution time adjustments

We now give an algorithm which modifies the schedule created by the Prefetch for Sources algorithm at each round, while the presentations are served. The Dynamic-Adjustment algorithm works during presentation delivery time. At the end of each round, it recalculates the buffers that need to be fetched in all future rounds. We assume that, when a request for data is made at a source, it is possible to specify priorities; furthermore, the source delivers high-priority data before low-priority data.

Algorithm: Dynamic-Adjustment
Input: Distribution of presentation objects at sources. Data rates for sources. A schedule RSstart, ..., RSstart+k which is created by the Prefetch for Sources algorithm.
Output: Modifications to RSstart, ..., RSstart+k at each round.

1   BuffersInMemory = {}; LeftOverBuffers = {};
2   for t = (start+k) down to start                       // From end to start
3   begin
4      CandidateSet = (RSt ∪ LeftOverBuffers) – BuffersInMemory;
5      Preempt prefetched buffers for future rounds if there is not enough space for CandidateSet;
6      Mark buffers already in CandidateSet as high priority;
7      while ((SizeCandidateSet + SizeBuffersInMemory) < B) or
           (CandidateSet contains all buffers required by presentations that are not in BuffersInMemory)
8      begin
9         X = closest future buffer that is not in BuffersInMemory;
10        if (the source that X comes from is not maximized) then
11           add X to CandidateSet with low priority;
12     end
13     fetch buffers;
14     if all high priority buffers are not fetched then
15        LeftOverBuffers = non-fetched high priority buffers;
16  end

Figure 4.6. Algorithm Dynamic-Adjustment

The Dynamic-Adjustment algorithm attempts to increase or decrease the prefetching of streams depending on the workload of the sources. If a source cannot deliver as expected in a round t, it is given a chance to make up for the loss of delivery in the rounds immediately following round t. Assume that a data buffer b, which has to be delivered at round t+m, is scheduled to be prefetched at round t. If b cannot be prefetched at round t, it can still be fetched at any of the rounds t+1, ..., t+m-1. Similarly, if a source delivers better than expected, the Dynamic-Adjustment algorithm tries to utilize the increased production rate by prefetching more buffers. However, prefetching more than scheduled may fill the memory prematurely. We overcome this problem by introducing priorities between data buffers and preempting lower-priority data buffers when needed (line 5 of algorithm Dynamic-Adjustment).
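As an illustration of the per-round fetch with priorities, the following is a minimal Python sketch under assumed data structures (buffers are objects with a size and a due round; 'fetch' stands in for sources that serve high-priority requests first). It is a simplification of Figure 4.6, and the preemption of already-prefetched low-priority buffers (line 5 of Figure 4.6) is omitted for brevity.

    from dataclasses import dataclass
    from typing import Set

    @dataclass(frozen=True)
    class Buffer:
        stream: str
        due_round: int
        size: int

    def adjust_round(t, schedule, in_memory, leftovers, memory, fetch):
        # schedule[t] is the set of Buffers the static schedule assigns to
        # round t; 'fetch' is a callable(high, low) -> set of delivered Buffers.
        # Buffers that must be fetched now: this round's schedule plus anything
        # left over from earlier rounds, minus what is already buffered.
        high = (schedule[t] | leftovers) - in_memory
        used = sum(b.size for b in in_memory) + sum(b.size for b in high)

        # Opportunistically add the closest future buffers as low priority,
        # as long as they fit in memory.
        low: Set[Buffer] = set()
        future = sorted((b for r in range(t + 1, len(schedule)) for b in schedule[r]
                         if b not in in_memory), key=lambda b: b.due_round)
        for b in future:
            if used + b.size > memory:
                break
            low.add(b)
            used += b.size

        delivered = fetch(high, low)          # sources favor high-priority data
        in_memory |= delivered
        new_leftovers = high - delivered      # retry missed high-priority data
        return in_memory, new_leftovers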

5. Simulations

This section describes the experiments conducted to evaluate the performance of seven schedules created by using different combinations of the presentation admission and dynamic adjustment algorithms described in the previous sections. The schedules, and the algorithms used to create each schedule, are shown in Table 5.1.


To evaluate the algorithms, we have simulated an electronic classroom. An electronic classroom is an education environment where students decide on the length and the content of a presentation about a lecture using various constraints. The system decides on the streams included in a lecture (a presentation) as well as the graph structure of the presentation using the constraints [HKO97, HO97]. Since all properties of streams are known a priori, admission of a requested lecture into the system is equivalent to the admission of a presentation into the presentation server.

Name of schedule (abbreviation): algorithms used to create the schedule
Unlimited (U): Formulas from Section 2
Constant With No Prefetching (CWNP): Formulas from Section 2 with constant production rate
Constant With Prefetching (CWP): Algorithm Prefetch
Variable With Prefetching (VWP): Algorithm Prefetch for Sources
Flat Variable With Prefetching (FVWP): Algorithms Prefetch for Sources and Flatten
Dynamic Adjustment Variable With Prefetching (DAVWP): Algorithms Prefetch for Sources and Dynamic-Adjustment
Flat Dynamic Adjustment Variable With Prefetching (FDAVWP): Algorithms Prefetch for Sources, Flatten, and Dynamic-Adjustment

Table 5.1 Schedules compared in the simulations

5.1 Simulation environment

We have used five components (users, sources, streams, presentations, and simulations) to model the electronic classroom environment. The parameters are summarized in Table 5.2, Table 5.3, and Table 5.4.

Users: This component represents the users of the system (students). Users awake periodically to request presentations from the system. Users have two parameters. The first parameter is the initial sleep time, which defines when the first presentation request by a particular user will be initiated. The second parameter is the sleep time, which defines the period the user is inactive between two presentation requests. In our simulations we have three different user profiles, namely, frequent users, semi-frequent users, and infrequent users.

Sources: The sources for presentations are the disk drives that contain the streams included in the presentations. Sources have four parameters. The first three parameters define the speed of a source; these parameters are seek time, rotational latency time, and data transfer rate. The last parameter defines the expected reliability of a source (expectancy). Expectancy models the delays or performance decreases caused by operating system delays or other users accessing the same disk drive. We assumed a Zipf distribution for production rate variations; the Zipf distribution is commonly used to model user requests and data distribution in databases, and both of these can be causes of production rate variations. In our simulations, there are six different source profiles, namely, fast and reliable sources, medium speed and reliable sources, slow and reliable sources, fast and unreliable sources, medium speed and unreliable sources, and slow and unreliable sources.
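For illustration, here is a minimal Python sketch of how a source's per-round production rate might be degraded with Zipf-distributed variations. The exact parameterization used in the simulator is not given in the text, so the skew value and the scaling below are assumptions.

    import random

    def zipf_weights(n, skew=1.5):
        # P(rank k) proportional to 1 / k**skew, for k = 1..n.
        return [1.0 / (k ** skew) for k in range(1, n + 1)]

    def effective_rate(nominal_rate, expectancy, n_levels=10, skew=1.5):
        # 'expectancy' in (0, 1] is the source's reliability; rank 1 (most
        # likely under the Zipf weights) means no slowdown, higher ranks mean
        # larger slowdowns for unreliable sources.
        rank = random.choices(range(1, n_levels + 1),
                              weights=zipf_weights(n_levels, skew), k=1)[0]
        slowdown = 1.0 - (1.0 - expectancy) * (rank - 1) / (n_levels - 1)
        return nominal_rate * slowdown

    # e.g. a fast but unreliable source: effective_rate(80e6, expectancy=0.6)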

Table 5.2. Simulation parameters
  Simulation time: 5-20 hours
  Memory size: 32-256 Mbytes
  User initial sleep time: 1-60 min. (exponentially distributed)
  User sleep time: 1-120 min.
  Source seek time: 10-40 milliseconds
  Source latency: 0.3-1.5 milliseconds
  Source transfer rate: 10-80 Mbytes/sec.
  Number of users: 50-150 users

Table 5.3. Simulation parameters for streams (stream type: maximum duration, average data requirement)
  text: 1 hr., 1 KB/min
  audio: 20 min., 120 KB/sec
  slideshow: 30 min., 180 KB/sec
  animation: 10 min., 256 KB/sec
  mpeg: 2 hr., 5 MB/sec
  mpeg2: 2 hr., 25 MB/sec

Table 5.4. Simulation parameters for number of streams per presentation type (min-max, for lightweight / middleweight / heavyweight presentations)
  text streams: 3-10 / 2-5 / 0-5
  audio streams: 1-2 / 1-5 / 0-3
  slideshow streams: 0-2 / 0-3 / 0-2
  animation streams: 0-2 / 1-2 / 0-2
  mpeg streams: 0-0 / 0-0 / 1-2
  mpeg2 streams: 0-0 / 0-0 / 0-2

Streams: Presentations are composed of streams. Each stream has a length and a type which defines its profile. Each stream profile contains a minimum and a maximum number of buffers per round (the range of the number of buffers allowed for that stream profile). The stream profiles defined in our simulations are of type text, audio, slideshow, animation, mpeg, or mpeg2.

Presentations: Presentations are lecture entities that can be requested by users from the system. They consist of streams. The number of streams included in each presentation is random; however, each presentation profile controls the number of streams of each stream profile included in a presentation. There are three different presentation profiles, namely, lightweight, middleweight, and heavyweight presentations.

Simulations: Each simulation has a time parameter which defines the duration of the simulation. Before each simulation is started, the number of users and the number of sources of each type, as well as the simulation time and the available memory size, are decided. In our experiments we have created simulations which (i) lasted between 5 and 20 hours, (ii) contained 50 to 150 users, and (iii) had 10 to 50 sources. The memory sizes varied between 32 Mbytes and 256 Mbytes. We have repeated each simulation 30 times with different random seeds to eliminate outliers. Each simulation collects statistics about the seven schedules described in Table 5.1. For every schedule, the following statistics are accumulated (see the sketch after this list):

- number of presentation admission requests,
- number of presentation admissions,
- number of admitted streams (number of streams in admitted presentations),
- number of buffer elements successfully delivered,
- number of dropped buffer elements.
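A minimal sketch of the per-schedule bookkeeping implied by this list; the field and function names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ScheduleStats:
        # One instance per schedule in Table 5.1 (U, CWNP, CWP, VWP, ...).
        admission_requests: int = 0
        admissions: int = 0
        admitted_streams: int = 0
        delivered_buffers: int = 0
        dropped_buffers: int = 0

    def record_admission_attempt(stats, presentation, admitted):
        stats.admission_requests += 1
        if admitted:
            stats.admissions += 1
            stats.admitted_streams += len(presentation.streams)

    def record_round(stats, delivered, dropped):
        stats.delivered_buffers += delivered
        stats.dropped_buffers += dropped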

5.2 Results: Effect of memory size

The first set of results shows the effects of memory size variations. The simulation time is 10 hours, and each simulation contains 30 sources and 50 users. The numbers of presentations admitted by the system for only heavyweight, only middleweight, and only lightweight presentations are shown in Figures 5.1a, 5.1b, and 5.1c, respectively. Figure 5.1d shows the number of presentations admitted by the system if the presentations are distributed evenly over all presentation profiles.

Figure 5.1a. Heavyweight presentations with memory size variations (number of admitted presentations vs. memory size, 32-256 MB, for schedules U, FVWP, VWP, CWP, and CWNP)
Figure 5.1b. Middleweight presentations with memory size variations
Figure 5.1c. Lightweight presentations with memory size variations
Figure 5.1d. Mix of heavyweight, middleweight, and lightweight presentations with memory size variations

Among the five schedules compared in Figure 5.1, the Unlimited schedule outperforms all other schedules. This is expected, as sources in the Unlimited schedule have no bounds. Hence, this schedule represents the ideal case and forms an upper bound on all possible schedules.


Observation 5.1: For heavyweight presentations, the Flat Variable With Prefetching schedule outperforms the Constant With No Prefetching, Constant With Prefetching, and Variable With Prefetching schedules (Figure 5.1a). This can be attributed to the low probability of having overlapping production peaks for two presentations, and hence the lower chance for the second presentation to be rejected.

Observation 5.2: Flattening reduces the peak production requests in presentations, but it requires more memory. This can also be observed from Figure 5.1a; as the memory size increases, the Flat Variable With Prefetching schedule outperforms the Variable With Prefetching schedule by a larger margin (up to 15% at 256 Mbytes).

Observation 5.3: For lightweight and middleweight presentations, neither the Flat Variable With Prefetching nor the Variable With Prefetching schedule increases the number of admitted presentations significantly (Figure 5.1b and Figure 5.1c). When only lightweight and middleweight presentations are admitted, sources never become a bottleneck for the system.

Observation 5.4: Simulations that contain heavyweight presentations require the Flat Variable With Prefetching schedule to admit the largest number of presentations. This can be observed from Figure 5.1d, where a mix of lightweight, middleweight, and heavyweight presentations is considered for admission. The mixed set of presentations includes 1/3 lightweight presentations, 1/3 middleweight presentations, and 1/3 heavyweight presentations. Table 5.5 verifies Observations 5.1 through 5.4.

Presentation type: U / CWNP / CWP / VWP / FVWP / DAVWP / FDAVWP
Lightweight presentations: 99.30 / 98.17 / 99.03 / 99.11 / 99.25 / 99.11 / 99.25
Middleweight presentations: 99.20 / 98.00 / 98.81 / 98.90 / 99.10 / 98.90 / 99.10
Heavyweight presentations: 76.77 / 18.84 / 21.94 / 63.55 / 66.45 / 63.55 / 66.45
Mix of heavyweight, middleweight, and lightweight presentations: 97.00 / 72.26 / 76.77 / 89.52 / 96.13 / 89.52 / 96.13

Table 5.5 Percentage of admitted presentations to total presentation requests for different schemes and presentation types.


5.3 Results: Dynamic-Adjustment gains

Figure 5.2a. Percentile gains in dropped frames for heavyweight presentations (% gain in dropped buffers vs. memory size, 32-256 MB)
Figure 5.2b. Percentile gains in dropped frames for a mix of different weight presentations

The second set of results evaluates the effectiveness of the dynamic adjustment approach. We have measured the effectiveness of the Dynamic-Adjustment algorithm on the Flat Dynamic Adjustment Variable With Prefetching schedule and on the Dynamic Adjustment Variable With Prefetching schedule. The goal of the dynamic adjustment algorithm is to decrease the amount of dropped buffers. Note that buffers are dropped because sources are not reliable. We demonstrate the performance changes in the two schedules by plotting the percentile gains (decreases) in the amount of dropped buffers as the memory size changes. Since the schedules in question are only beneficial for heavyweight presentations, we plot the percentile gains for simulations that contain only heavyweight presentations in Figure 5.2a and for simulations that contain a mix of different-weight presentations in Figure 5.2b.

Observation 5.5: In both Figure 5.2a and Figure 5.2b, a bigger portion of the dropped buffers is saved as the memory size increases. As expected, a larger portion of the dropped buffers is saved in the flattened schedule, as lower (flattened) production peaks allow more flexibility for dynamic adjustments.

We have repeated the above experiments by changing the number of users, the number of sources, and the simulation time. Increasing the number of users or decreasing the number of sources has exactly the same effect as decreasing the memory size. Similarly, decreasing the number of users or increasing the number of sources has the same effect as increasing the memory. As expected, changing the simulation time does not affect the shape of the curves in Figures 5.1 and 5.2.

6. ViSiOn

ViSiOn is an object-oriented distributed database management system with a graphical user interface and an efficient object storage and retrieval mechanism for multimedia databases. ViSiOn utilizes a client-server architecture (Figure 6.1).


The client portion of ViSiOn (the ViSiOn client) contains two major components. The Presentation Manager is the user's access point to the system; it communicates with the Client Scheduler (for presentation scheduling and presentation data). The second major component of the ViSiOn client is GVISUAL. GVISUAL [LSBBOO97] provides an iconic graphical language to formulate queries, and supports graphical temporal operators (such as NEXT, CONNECT, and UNTIL) to query presentations. There are three different servers in ViSiOn. The first is the Disk Server, which stores and retrieves objects to and from the physical disks. Disk Servers are capable of delivering multimedia streams. The control of data flow between the server and the client is crucial when jitter-free playout of multimedia is desired. For this reason, Disk Servers contain a component called the Client Scheduler. The Client Scheduler controls the data flow between ViSiOn clients and the Disk Server. The algorithms described in Sections 3 and 4 will be incorporated into the Client Scheduler. The second type of server is the VStore server. VStore servers execute GVISUAL queries; an object-oriented algebra, O-algebra, is defined for ViSiOn. Disk Servers and VStore servers are multi-threaded applications. The last kind of server is the ViSiOn Administrative Server, which holds the information necessary to distribute queries and presentation requests among the ViSiOn servers.

Figure 6.1. ViSiOn System.

7. Conclusion and Future Work

In this paper we have addressed buffer management and admission control in multimedia presentation servers. We have proposed three domains for multimedia presentations: heavyweight, middleweight, and lightweight presentations. To find suitable algorithms for each of the multimedia presentation categories, we have used the idea of prefetching data, similar to many studies on multimedia servers. The observation that data requirements of presentations are known a priori led us to the idea of flattening in admission control. We have also modified prefetching with dynamic adjustments at execution time to decrease the number of buffer units lost when presentations are played out. Finally, we have presented a set of simulation results that verify the validity of the proposed algorithms.

For our future research, we plan to relax some of the assumptions in the presentation admission decision. We have assumed that all presentation streams have strict synchronization requirements. However, many presentations contain streams with more flexible synchronization deadlines. We can utilize the synchronization flexibility to lower the load on the multimedia server and even to increase the number of presentations admitted to the system. A slack time (a time gap) can be introduced between streams as an approach to synchronization flexibility. However, the flexibility introduced by the use of slack time creates a search space which is exponential in the number of streams. We have also assumed that each presentation stream has a strict playout duration. We plan to investigate the effects of adjusting the playout duration of a stream (which is equivalent to changing the QoS requirements for that stream) to achieve better schedules for multimedia presentations.

In our simulations, we assume that no user interaction occurs on a presentation that is already admitted to the system. One easy solution to handle user interactions would be, for each user interaction, to (i) release all the resources for the presentation, and (ii) attempt to readmit the presentation with the new requirements. However, this scheme may cause a presentation to be dropped when a user interaction occurs, which is not acceptable. In general, the presentation admission schemes described in this paper can be extended to support user interaction. For example, extra memory can be reserved for every admitted presentation, which will lower the chance of a presentation being dropped when a user interaction occurs.

Please note that our algorithms are not compute-heavy, and do not necessitate the investigation of incremental versions. For example, the Prefetch algorithm is an O(k) algorithm, where k is the number of rounds needed to play all the presentations. The computation of the algorithm will not be a bottleneck even for large k values (such as k = 10000 rounds). We plan to investigate incremental solutions in the future. The Prefetch algorithm can be modified to reconsider presentations that are rejected at time t; rejected presentations can be resubmitted to the Prefetch algorithm after a short delay t'. Acceptable delays (maximum t') can be defined by the user.

8. References

[ACCKS96] Adali, S., et al., "The Advanced Video Information System: Data Structures and Query Processing", ACM Multimedia Systems Journal, 1996.
[And91] Anderson, D., "Metascheduling for Continuous Media", ACM Trans. Computer Systems, Aug. 1993.
[AOG92] Anderson, D., Osawa, Y., Govindan, R., "A File System for Continuous Media", ACM Trans. Comp. Systems, Nov. 1992.
[BalO98] Balkir, N.H., Ozsoyoglu, G., "Multimedia Presentation Servers: Buffer Management and Admission Control", IEEE International Workshop on Multimedia DBMS, 1998.
[BlaS96] Blakowski, G., Steinmetz, R., "A Media Synchronization Survey: Reference Model, Specification, and Case Studies", IEEE Jour. on Sel. Areas in Communications, Jan. 1996.
[CandPS96] Candan, K.S., Prabhakaran, B., Subramanian, V.S., "CHIMP: A Framework for Supporting Distributed Multimedia Document Authoring and Presentation", ACM Multimedia Conf., 1996.
[CGS95] Chaudhuri, S., Ghandeharizadeh, S., Shahabi, C., "Avoiding Retrieval Contention for Composite Multimedia Objects", Proceedings of the VLDB Conference, 1995.
[CheKY93] Chen, M-S., Kandlur, D., Yu, P.S., "Optimization of the Grouped Sweeping Scheduling with Heterogeneous Multimedia Streams", ACM Multimedia Conf., 1993.
[CheL96] Chen, H-J., Little, T.D.C., "Storage Allocation Policies for Time-Dependent Multimedia Data", IEEE TKDE, Oct. 1996.
[Dalal96] Dalal, M., et al., "MAGIC", AMIA Fall Symp., Oct. 1996.
[DanSS96] Dan, A., Sitaram, D., Shahabuddin, P., "Dynamic Batching Policies for an On-Demand Video Server", ACM Multimedia Systems Journal, 1996.
[EGS96] Escobar-Molano, M.L., Ghandeharizadeh, S., Ierardi, D., "An Optimal Resource Scheduler for Continuous Display of Structured Video Objects", IEEE Transactions on Knowledge and Data Engineering, Vol. 8, No. 3, June 1996.
[EG97] Escobar-Molano, M.L., Ghandeharizadeh, S., "On Coordinated Display of Structured Video", IEEE Multimedia, Vol. 4, No. 3, July-September 1997.
[GarOS98] Garofalakis, M., Ozden, B., Silberschatz, A., "On Periodic Resource Scheduling for Continuous Media Databases", IEEE RIDE'98 Conf., 1998.
[GeistD87] Geist, R., Daniel, S., "A Continuum of Disk Scheduling Algorithms", ACM Transactions on Computer Systems, Vol. 5, No. 1, Feb. 1987, pp. 77-92.
[GemmH94] Gemmell, J., Han, J., "Multimedia Network File Servers: Multichannel Delay-sensitive Data Retrieval", ACM Multimedia Systems Journal, 1994.
[GhKS95] Ghandeharizadeh, S., Kim, S.H., Shahabi, C., "On Configuring a Single Disk Continuous Media Server", ACM Sigmetrics Conf., 1995.
[GVKRR95] Gemmell et al., "Multimedia Storage Servers: A Tutorial", IEEE Computer, May 1995.
[Haindl96] Haindl, M., "A New Synchronization Model", IEEE J. on Sel. Areas in Communications, Jan. 1996.
[HKO97] Hakkoymaz, V., Kraft, J., Ozsoyoglu, G., "Constraint-Based Automation of Multimedia Presentation Assembly", ACM Multimedia Systems Journal, to appear.
[HO97] Hakkoymaz, V., Ozsoyoglu, G., "A Constraint-Driven Approach to Automate the Organization and Playout of Presentations in Multimedia Databases", Jour. of Multimedia Tools and Applications, 1997.
[KenS97] Kenchammana-Hosekote, D.R., Srivastava, J., "I/O Scheduling for Digital Continuous Media", ACM Multimedia Systems Journal, 1997.
[KwonCS97] Kwon, T-G., Choi, Y., Lee, S., "Disk Placement for Arbitrary-Rate Playback in an Interactive Video Server", ACM Multimedia Systems Journal, 1997.
[Little93] Little, T.D.C., "A Framework for Synchronous Delivery of Time-Dependent Multimedia Data", ACM Multimedia Systems Journal, 1993.
[LSBBOO97] Lee, T., et al., "Querying Multimedia Presentations Based on Content", submitted for journal publication, 1997.
[Magel97] Magel, M., "Comparative Review of Authoring Tools", http://www.allencomm.com/p&s/software/quest/whtpgs/quwhite.html
[NgY96] Ng, R.T., Yang, J., "An Analysis of Buffer Sharing and Prefetching Techniques for Multimedia Systems", ACM Multimedia Systems Journal, 1996.
[NMPRTW98] Nerjes et al., "Scheduling Strategies for Mixed Workloads in Multimedia Information Servers", IEEE RIDE'98 Workshop, 1998.
[OBRS94] Ozden et al., "A Low-Cost Storage Server for Movie on Demand Databases", VLDB Conf., 1994.
[Oeft97] Oeftering, A., "Picking a Multimedia Authoring Tool", http://www.datatech.com/hot/s96_3.htm
[OzdRS97] Ozden, B., Rastogi, R., Silberschatz, A., "Periodic Retrieval of Videos from Disk Arrays", IEEE ICDE Conf., 1997.
[RangV93] Rangan, V., Vin, H., "Efficient Storage Techniques for Digital Continuous Multimedia", Multimedia Information Systems, Aug. 1993.
[RanRVK93] Rangan, P.V., et al., "Techniques for Multimedia Synchronization in Network File Systems", Comp. Comm. Journal, March 1993.
[RedW93] Reddy, N., Wyllie, J., "Disk Scheduling in a Multimedia I/O System", ACM Multimedia Conf., 1993.
[ShahG95] Shahabi, C., Ghandeharizadeh, S., "Continuous Display of Presentations Sharing Clips", ACM Multimedia Systems Journal, May 1995.
[Stein90] Steinmetz, R., "Synchronization Properties in Multimedia Systems", IEEE Journal on Selected Areas in Communications, Vol. 8, No. 3, April 1990.
[VinGG95] Vin, H., Goyal, A., Goyal, P., "Algorithms for Designing Large-Scale Multimedia Servers", Computer Communications, March 1995.
[VinGGG94] Vin, H., Goyal, P., Goyal, A., Goyal, A., "A Statistical Admission Control for Multimedia Servers", ACM Multimedia Conf., 1994.
[WolfYS97] Wolf, J.L., Yu, P.S., Shachnai, H., "Disk Load Balancing for Video-On-Demand Systems", ACM Multimedia Systems Journal, 1997.
