Scheduling Real-Time Transactions: A Performance Evaluation

ROBERT K. ABBOTT
Digital Equipment Corp.
and
HECTOR GARCIA-MOLINA
Stanford University
Managing transactions with real-time requirements presents many new problems. In this paper we address several: How can we schedule transactions with deadlines? How do the real-time constraints affect concurrency control? How should overloads be handled? How does the scheduling of I/O requests affect the timeliness of transactions? How should exclusive and shared locking be handled? We describe a new group of algorithms for scheduling real-time transactions that produce serializable schedules. We present a model for scheduling transactions with deadlines on a single processor disk resident database system, and evaluate the scheduling algorithms through detailed simulation experiments.

Categories and Subject Descriptors: H.2.4 [Database Management]: Systems—concurrency; transaction processing; D.4.1 [Operating Systems]: Process Management—concurrency; scheduling

General Terms: Algorithms, Performance

Additional Key Words and Phrases: Deadlines, locking protocols, real-time systems
1. INTRODUCTION

Transactions in a database system can have real-time constraints. Consider for example program trading, or the use of computer programs to initiate trades in a financial market with little or no human intervention [34]. A financial market (e.g., a stock market) is a complex process whose state is
This research was supported by the Defense Advanced Research Projects Agency of the Department of Defense and by the Office of Naval Research under contracts N00014-85-C-0456 and N00014-85-K-0465, and by the National Science Foundation under Cooperative Agreement DCR-8420948. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.
Authors' addresses: R. K. Abbott, Digital Equipment Corp., 151 Taylor St. TAY 1, Littleton, MA 01460; H. Garcia-Molina, Department of Computer Science, Stanford University, Stanford, CA 94305.
Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
© 1992 ACM 0362-5915/92/0900-0513 $01.50
ACM Transactions on Database Systems, Vol. 17, No. 3, September 1992, Pages 513-560.
514 - R. K. Abbott and H. Garcia-Molina
partially captured by variables such as current stock prices, changes in prices, volume of trading, trends, and composite stock indexes. These variables and others can be stored and organized in a database to model a financial market. One type of process in this system is a sensor/input process which monitors the state of the physical system (i.e., the stock market) and updates the database with new information. If the database is to contain an accurate representation of the current market then this monitoring process must meet certain real-time constraints.
A second type of process is an analysis/output process. In general terms this process reads and analyzes database information in order to respond to a user query or to initiate a trade in the stock market. An example of this is a query to discover the current bid and ask prices of a particular stock. This query may have a real-time response requirement of say 2 seconds. Another example is a program that searches the database for arbitrage opportunities. Arbitrage trading involves finding discrepancies in prices for objects, often on different markets, and to exploit them one must trade large volumes on a moment's notice. For example, an ounce of silver might sell for $10 in London and fetch $10.50 in Chicago. Price discrepancies are normally very short-lived. Thus the detection and exploitation of these arbitrage opportunities is certainly a real-time task.
Another kind of real-time database system involves threat analysis. For example, a system may consist of a radar to track objects and a computer to perform image processing and control. A radar signature is collected and compared against a database of signatures of known objects. The data collection and signature database look up must be done in real-time.

A real-time database system (RTDBS) has many similarities with conventional database management systems and with so called real-time systems. However, a RTDBS lies at the interface and is not quite the same as either type of conventional system. Like a database system, a RTDBS must process transactions and guarantee that the database consistency is not violated. However, conventional database systems do not emphasize the notion of time constraints or deadlines for individual transactions. The performance goal of a conventional system is usually expressed in terms of desired average response times rather than individual time constraints. Thus, when the system makes scheduling decisions (e.g., which transaction gets a lock, which transaction is aborted), individual real-time constraints are ignored.
Conventional real-time systems do take into account individual transaction constraints but ignore data consistency problems. Furthermore, real-time systems typically deal with simple transactions (called processes) that have simple and predictable data (or resource) requirements. For a RTDBS we assume that transactions make unpredictable data accesses (by far the more common situation in a database system). This makes the scheduling problem much harder, and this leads to another difference between a conventional real-time system and a RTDBS. The former usually attempts to ensure that no time constraints are violated, i.e., constraints are viewed as "hard" [27]. In a RTDBS, on the other hand, it is very difficult to guarantee all time constraints, so we strive to minimize the ones that are violated.
In the previous paragraphs we have "defined" what we mean by a RTDBS (our definition will be made more precise in Section 2). However, note that other definitions and assumptions are possible. For instance, one could decide to have hard time constraints and instead minimize the number of data consistency violations. However, we believe that the type of RTDBS that we have sketched earlier better matches the needs of applications like the ones mentioned. For instance, in the financial market example, it is probably best to miss a few good trading opportunities rather than permanently compromise the correctness of the database, or restrict the types of transactions that can be run.
We should at this point make two comments about RTDBS applications. It may be argued that real-time applications do not access databases because they are "too slow." This is a version of the "chicken and the egg" problem. Current database systems have few real-time facilities, and hence cannot provide the service needed for real-time applications. The way to break the cycle is by studying a RTDBS, designing the proper facilities, and evaluating the performance (e.g., what is the price to be paid for serializability?). It is also important to note that with good real-time facilities, even applications one does not typically consider "real-time" may benefit. For example, consider a banking transaction processing system. In addition to meeting average response time requirements, it may be advantageous to tell the system the urgency of each transaction so it can be processed with the corresponding priority. As a matter of fact, a "real" banking system may already have some of these facilities, but not provided in a coherent fashion by the database management system.

The design and evaluation of a RTDBS presents many new and challenging problems: What is the best data model? What mechanisms are needed for describing and evaluating transaction time constraints? How can we model triggers (a trigger is an event or a condition in the database that causes some action to occur)? How are transactions scheduled? How do the real-time constraints affect concurrency control? Should transaction time constraints be considered when scheduling I/O requests? In this paper we focus on the last three questions.
In particular, if several transactions are ready to execute at a given time, which one runs first? If a transaction requests a lock held by another transaction, do we abort the holder if the requester has greater urgency? If transactions can provide an estimate of their running time, can we use it to tell which transaction is closest to missing its deadline and hence should be given higher priority? If we do use runtime estimates, what happens if they are incorrect? How are the various strategies affected by the load, the number of database conflicts, and the tightness of the deadlines? How should disk requests be scheduled? Should we use the same real-time priorities for disk scheduling as we use for CPU scheduling?

After a short survey of related work, we summarize our transaction model and basic assumptions in Section 2. In Section 3 we develop a group of new scheduling/concurrency control algorithms for RTDBS. The performance of the various algorithms has been studied via detailed event driven simulations. Section 4 explains the simulation model that we use in our experiments and Sections 5 and 6 present our experimental results. Finally, Section 7 presents answers to the questions posed in the previous paragraph.
1.1 Related Work

In recent years, a number of papers on real-time database systems have appeared [1-4, 6, 9, 13-17, 20, 23-25, 29, 31, 32]. (The current paper is an integration and extension of our own work published in [2, 3].) The subjects addressed in these papers include the scheduling of transactions with deadlines and the description of real-time database systems [1, 9, 16, 24, 31, 32], concurrency control for real-time transactions [2, 3, 13, 14, 17, 25, 29, 31], buffer management [19, 20], and I/O scheduling [2, 8]. Work on related issues has also been published, including time-constrained communications [6, 15], timed atomic commitment [23], and transaction recovery. The scheduling of processes with hard deadlines, where resource requirements are known a priori and feasibility can be determined before execution, has also been examined [22].

The work that is most similar to our own is by Harista et al. [13, 14]. Their concurrency control studies used a simulation model similar to ours. There are some important differences, however: their model assumes that transactions have no a priori knowledge of their resource requirements, and that transactions which miss their deadlines are discarded, i.e., deadlines are strictly enforced. The techniques used for concurrency control include locking protocols, such as High Priority [2, 3], and optimistic methods. Their work shows that in an environment where late transactions are discarded, optimistic concurrency control can perform better than locking.

Real-time concurrency control is also discussed by Sha et al. [29, 31]. Their priority ceiling protocol is a locking protocol that prevents the formation of deadlocks and bounds the time a transaction can be blocked by lower priority transactions. The price, however, is that the protocol assumes a system of periodic transactions whose priorities are assigned by the rate-monotonic algorithm and whose resource requirements are known a priori. The use of the priority ceiling algorithm in a real-time database environment is explored by Son [33].

A concurrency control algorithm that combines locking and optimistic techniques is presented by Lin and Son [25]. In their algorithm, transaction priorities guide the serialization order, which is determined dynamically during execution rather than being fixed before the commit point. No performance evaluation was presented.

An experimental evaluation of scheduling and concurrency control techniques for real-time transactions is presented by Huang et al. [17]. Their model uses value functions to capture transaction time constraints: each transaction is assigned a value function derived from its criticalness and its deadline. Three priority policies for CPU scheduling are studied: earliest deadline first, most critical first, and highest value first. Five techniques for conflict resolution (concurrency control) are considered. One protocol makes use of a virtual clock that is assigned to each transaction; the rate of the clock depends on the criticalness of the transaction, and the clock value is used to compute the transaction's priority when conflicts must be resolved. The other four techniques make conflict resolution decisions based on information about transaction deadlines and criticalness. In Section 7 we compare the model and results of Huang et al. to our own.
2. MODEL AND ASSUMPTIONS

In this section we describe our basic model and assumptions. In this paper we examine a disk-based system consisting of a single processor, multiple disks, and a main memory buffer pool; the database itself is kept on disk. Transactions access data at the granularity of pages, and a page is the unit of transfer between the disks and the buffer pool. To read or write a data page, a transaction must have the page in the buffer pool; if the page is not found in the pool, a disk access is initiated to fetch it. A disk can transfer only one page at a time. Modified pages are kept in the buffer pool until the transaction commits; we assume that the buffer pool is large enough that modified pages never have to be flushed to disk before commit. When a transaction commits, its modified pages are written to a log that is kept on a separate disk, and the pages are unloaded to the database disks after the transaction completes. (In the terminology of [12], this is a ¬STEAL, FORCE strategy with ¬ATOMIC propagation.)

Each transaction is characterized by three values: a release time r, a deadline d, and a runtime estimate E. The release time is simply the arrival time of the transaction; the system has no knowledge of a transaction until it arrives. The deadline d is the time by which the user desires the transaction to be completed. The estimate E approximates the execution time of the transaction and is given to the system when the transaction arrives. The system has no knowledge of the data access pattern of a transaction, i.e., which pages it will read or write; E is simply an estimate of total execution time. (We assume that it is easier for a user to estimate the running time of a transaction than to predict its access pattern, and that this is the common case in practice.) Of course, the estimate could be wrong, and one question of interest is what happens to the scheduling decisions when it is.

The first question we must address is what happens when a transaction misses its deadline. There are at least two reasonable alternatives. One is to assume that a transaction that has missed its deadline is worthless and can be aborted. Consider the arbitrage example: the decision to buy and sell silver may be submitted to the system as a transaction with a deadline of, say, 11:00 am. If by 11:00 am the transaction has not finished, it may be best not to perform the operation at all; the conditions on which the decision was based may have changed, and the user who submitted the transaction may wish to reconsider before letting it go ahead. In this case tardy transactions are simply aborted, and a reasonable objective for the system is that of minimizing the number of missed deadlines, i.e., the number of aborted transactions.
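The transaction parameters of Section 2 can be captured in a small sketch. This is our own illustration (the class and field names follow the paper's r, d, E notation, but the code itself is not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    # Parameter names follow Section 2; the class is our illustration.
    r: float        # release time: when the transaction arrives at the system
    d: float        # deadline: the time by which the user wants it completed
    E: float        # runtime estimate supplied when the transaction arrives
    P: float = 0.0  # service time received so far (used by predictive policies)

# The transactions of Table I (Section 3) expressed in this form:
A = Transaction(r=0, d=7.5, E=2)
B = Transaction(r=1, d=4, E=2)
C = Transaction(r=2, d=7, E=3)
```

Note that the system tracks P as a transaction executes; r, d, and E are fixed on arrival.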
A second option is to assume that tardy transactions must still be completed, eventually. Consider, say, a banking system: if transactions that missed their deadlines were simply aborted, some customers would not be served at all. Tardy transactions must be executed, but they could be postponed to a later, more convenient time (e.g., executed at night). (Of course, a user may decide to abort his own tardy transaction, but this is another matter.) If tardy transactions must complete, there is still the question of whether their urgency increases or decreases as their tardiness increases. On the one hand, a tardy transaction has already missed its deadline, so there may be no point in hurrying anymore (it can be put off); on the other hand, the longer it remains unfinished, the more urgent completing it may become. We will study both cases: tardy transactions whose priority increases as the deadline passes, and tardy transactions whose priority does not. (Incidentally, Abbott and Garcia-Molina [1] discuss a more detailed deadline model where users can specify how the "value" of a transaction changes when its deadline is missed.)

In this paper we will assume that transaction executions must be serializable [10]. For most applications we believe that it is desirable to maintain database consistency. It is possible to maintain consistency without serializable schedules, but this requires more specific information about the kinds of transactions executed [11]. Since we have assumed very little knowledge about transactions, serializability is the best way to achieve consistency. Finally, we assume that serializability is enforced by a locking protocol. Our purpose is not to explore different concurrency control mechanisms. Instead, by using a well-understood and widely-used mechanism like a locking protocol, we have chosen to study the different ways transactions can be scheduled under that mechanism. Of course, it is conceivable that some other concurrency control mechanism, like an optimistic algorithm, may be better suited for a RTDBS, but this comparative study is to be addressed by further research.
3. SCHEDULING ALGORITHMS

Our scheduling algorithms have four components: a policy for assigning priorities to tasks, a policy to manage overloads, a concurrency control mechanism, and a policy for scheduling I/O requests. In a real-time system, we say that an overload occurs whenever transaction timing constraints are violated. The overload management policy is used to detect when overloads occur and to initiate actions to handle the overload. The priority assignment policy controls how transaction time constraints are used to assign a priority to a transaction. The concurrency control mechanism can be thought of as a policy for resolving conflicts between two (or more) transactions that want to lock the same data object. Some concurrency control mechanisms permit deadlocks to occur; for these a deadlock detection and resolution mechanism is needed. The fourth component controls how scheduling of the I/O queue is done, i.e., whether a transaction's real-time constraints are used to decide which I/O request is serviced next.

Each component may use only some of the available information about a transaction. In particular we distinguish between policies which do not make
use of E, the runtime estimate, and those that do. A goal of our research is to understand how the accuracy of the runtime estimate affects each method and to determine which methods can best use it.

3.1 Managing Overloads

There are a number of ways to handle overloads. The first concern is how to detect an overload. An observant method simply examines all unfinished transactions and determines if any has missed its deadline. A predictive technique would build a candidate schedule and determine if any transaction, if executed under that schedule, will miss its deadline. A second concern is what actions to take when an overload is detected; possibilities include aborting the transactions that are thought to "cause" the overload. Finally, there is the question of how often the overload detector is invoked; this paper does not examine that issue. Our solution is to call the overload management module whenever the scheduler is invoked. The overload detection routine runs first, and when it finishes, control passes to the scheduler, which chooses a new job for the CPU. We consider three different policies for managing overloads.

3.1.1 All Eligible. Under this alternative no overload detection is performed and no transaction is unilaterally aborted; all jobs are eventually executed.
3.1.2 Not Tardy. An overload is detected if an unfinished transaction has missed its deadline. Transactions which have missed their deadlines are aborted; jobs which are not tardy remain eligible for service. Note that this overload detection method is observant.

3.1.3 Feasible Deadlines. An overload is detected if an unfinished transaction has an infeasible deadline. A transaction T has an infeasible deadline at time t if t + E - P > d, where P is the amount of service time that T has received. In other words, based on the runtime estimate there is not enough time to complete the transaction before its deadline. Jobs with infeasible deadlines are aborted; transactions with feasible deadlines remain eligible for service. Note that this policy is predictive and uses E, the runtime estimate.
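The three policies above can be sketched as filters over the set of unfinished transactions. This is our own illustration (dict-based transactions and our function names), not code from the paper:

```python
# Each transaction is a dict with deadline d, runtime estimate E, and
# service time received so far P.

def all_eligible(transactions, now):
    """All Eligible: no overload detection; every job stays eligible."""
    return list(transactions)

def not_tardy(transactions, now):
    """Not Tardy (observant): abort any transaction past its deadline."""
    return [t for t in transactions if now <= t["d"]]

def feasible_deadlines(transactions, now):
    """Feasible Deadlines (predictive): keep T only if now + E - P <= d,
    i.e., the runtime estimate says T can still finish before its deadline."""
    return [t for t in transactions if now + t["E"] - t["P"] <= t["d"]]
```

For example, at time 3 a transaction with d = 4, E = 2, P = 0.5 is not yet tardy, but it is infeasible (3 + 2 - 0.5 = 4.5 > 4), so Feasible Deadlines aborts it earlier than Not Tardy would.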
3.2 Assigning Priorities

There are many ways to assign priorities to real-time tasks [21, 26]. We have studied three.

3.2.1 First Come First Serve. This policy assigns the highest priority to the transaction with the earliest release time. If release times equal arrival times then we have the traditional version of FCFS. The primary weakness of FCFS is that it does not make use of deadline information: FCFS will discriminate against a newly arrived task with an urgent deadline in favor of an older task that may not have such an urgent deadline. This is not desirable for real-time systems.

3.2.2 Earliest Deadline. The transaction with the earliest deadline has the highest priority. A major weakness of this policy is that it can assign a high priority to a task which has already missed, or is about to miss, its deadline. By assigning resources to a transaction that cannot meet its deadline anyway, we deny resources to transactions that still have a chance to meet their deadlines, and may cause them to be late as well. One way to solve this problem is to use the overload management policy Not Tardy or Feasible Deadlines to screen out transactions that have missed or are about to miss their deadlines.
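The first two policies reduce to choosing a sort key for the ready queue. A minimal sketch under our own conventions (smaller key = higher priority; not code from the paper):

```python
def fcfs_key(t):
    """First Come First Serve: earliest release time r wins."""
    return t["r"]

def earliest_deadline_key(t):
    """Earliest Deadline: earliest deadline d wins."""
    return t["d"]

# Transactions A and B from Table I: FCFS favors the older A even though
# B's deadline (4) is far more urgent than A's (7.5).
ready = [{"name": "A", "r": 0, "d": 7.5}, {"name": "B", "r": 1, "d": 4}]
assert min(ready, key=fcfs_key)["name"] == "A"
assert min(ready, key=earliest_deadline_key)["name"] == "B"
```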
3.2.3 Least Slack. We define the slack time of a transaction T as S = d - (t + E - P). (Recall that P is the amount of service time received by T so far.) The slack time is an estimate of how long we can delay the execution of T and still meet its deadline. If S > 0 then we can expect T to meet its deadline, provided it is executed without interruption from time t + S on. A negative slack time means that it is impossible for T to meet its deadline. (We consider all transactions with negative slack equally.) Under the Least Slack policy, the transaction with the least slack has the highest priority. Note that the slack time of a transaction does not change while the transaction is executing, since t and P then increase at the same rate; when the transaction is not executing, its slack decreases.
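The slack computation can be sketched directly from the definition above (our own dict-based rendering, not code from the paper):

```python
def slack(t, now):
    """Slack S = d - (now + E - P): how long T can still be delayed and,
    by the estimate E, meet its deadline d. P is service time received."""
    return t["d"] - (now + t["E"] - t["P"])

def least_slack_pick(ready, now):
    """Least Slack: the transaction with the smallest slack runs next."""
    return min(ready, key=lambda t: slack(t, now))

t0 = {"d": 10, "E": 4, "P": 0}
assert slack(t0, 0) == 6
# Running for 2 time units advances now and P together: slack is unchanged.
assert slack({"d": 10, "E": 4, "P": 2}, 2) == 6
# Waiting for 2 time units advances only now: slack drops.
assert slack(t0, 2) == 4
```

The two assertions at the end illustrate the closing remark of the subsection: slack is invariant while executing and shrinks while waiting.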
A natural question is how often the priority of a transaction should be evaluated. The slack of a transaction depends on the time t at which it is evaluated, so a transaction's priority can change over time. We consider two methods. In the first, static evaluation, the slack is evaluated only once, when the transaction first arrives. If a transaction is rolled back and restarted, its slack is recalculated; a restarted transaction is treated as a new arrival. (The ramifications of this for the concurrency control algorithms are discussed in Section 3.3.3.) Under the second method, continuous evaluation, the slack is recalculated whenever we must make a scheduling decision. This method yields more up-to-date priority information but also incurs more overhead. Our studies have shown that sometimes it is better to use static evaluation and sometimes it is better to use continuous evaluation. (See Section 5.) The majority of our experimental results use static evaluation. We chose this because static evaluation performed better than continuous at higher load settings, which is where we performed many of our experiments.
3.3 Concurrency Control

If transactions are executed concurrently then we need a mechanism to order the updates to the database so that the final schedule is serializable.
Our mechanisms are based on two-phase locking and allow shared and exclusive locks; shared locks permit multiple concurrent readers. Before presenting the algorithms we introduce some terminology and explain the conventions we use to implement shared and exclusive locks.

The priority of a data object O is defined to be the maximum priority of all transactions which hold a lock on object O. If O is not locked then its priority is undefined. Let T be a transaction requesting a shared lock on object O which is already locked in shared mode by one or more transactions. Transaction T is allowed to join the read group only if the priority of T is greater than the maximum priority of all transactions, if any, which are waiting to lock O in exclusive mode. In other words, a reader can join a read group only if it has a higher priority than all waiting writers. Otherwise T must wait.

Conflicts arise from incompatible locking modes: a shared lock request conflicts with an exclusive lock, and an exclusive lock request conflicts with both shared and exclusive locks. We are particularly interested in conflicts that lead to priority inversions. A priority inversion occurs when a transaction T of high priority requests and blocks on a lock for an object O which has a lesser priority than T. This means that the lock on O is held by a transaction with a lesser priority than T, and T must wait until the holder releases the lock, either voluntarily or involuntarily. Waiting may also lead to a deadlock if the waiting transactions form a cycle. In the following discussion, let TR be a transaction requesting a lock on a data object that is already locked by transaction TH in an incompatible mode. A conflict is priority inverting if TR has a higher priority than TH, i.e., the priority of the requester is greater than the priority of the holder. We now discuss four techniques for resolving priority inverting conflicts; conflicts which do not invert priorities are handled in the usual way, by having the requester wait.
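The read-group admission rule above can be sketched as follows (a minimal illustration with our own data structures; priorities are plain numbers, larger = more urgent):

```python
def object_priority(holder_priorities):
    """Priority of a locked object = max priority of its lock holders;
    undefined (None) if the object is not locked."""
    return max(holder_priorities) if holder_priorities else None

def may_join_read_group(reader_priority, waiting_writer_priorities):
    """A reader joins the read group on O only if its priority exceeds
    that of every writer waiting to lock O in exclusive mode."""
    return all(reader_priority > w for w in waiting_writer_priorities)

# A reader of priority 5 may pass writers of priority 3 and 4, but a
# reader of priority 4 must wait behind the priority-4 writer.
assert may_join_read_group(5, [3, 4])
assert not may_join_read_group(4, [3, 4])
```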
3.3.1 Wait. Under the Wait policy, priority inverting conflicts are handled exactly as nonpriority inverting conflicts. That is, the requesting transaction waits until the data object becomes free. This is what a standard DBMS does: the concurrency control mechanism makes no effective use of transaction real-time priorities, and the Wait policy implements FCFS scheduling for access to data items. The algorithm is shown in Figure 1.

To illustrate the Wait policy, consider the set of transactions with release time r, deadline d, runtime estimate E, and data requirements as shown in Table I. Note that transactions A and B both update item X; therefore these transactions must be serialized. If we use Earliest Deadline to assign priorities and Wait to resolve conflicts then the schedule shown in Figure 2 is produced. A time line is shown at the bottom of the figure.
Fig. 1. Wait conflict resolution policy.

Table I. Example 1 transactions.

Transaction   r   E   d     Data
A             0   2   7.5   X
B             1   2   4     X
C             2   3   7     Y

Fig. 2. Example 1 schedule: Wait with Earliest Deadline.

A scheduling profile is shown for each transaction. An elevated line means the transaction is executing on the CPU; a lowered line means the transaction is not executing. Cross hatching begins when a lock on a data object is granted and ends when the lock is released. Finally, to keep the example schedules simple, we assume that the runtime estimates are perfect and we ignore the time required to make scheduling decisions or to roll back transactions.

Transaction A arrives at time 0, so it gains the processor and executes until time 1, when transaction B arrives. During this time it requests and gains an exclusive lock on data object X. Since B has an earlier deadline than A, B preempts A and begins to execute. At time 1.5, B attempts to lock data object X, which is already locked by A. Under the Wait strategy B must wait until A finishes and releases the lock on X. Thus B
loses the processor and A resumes execution. At time 2, transaction C arrives and preempts A because C has an earlier deadline than A. Transaction C executes its 3 units of computation and commits at time 5, releasing its lock on item Y. Transaction A then resumes and completes its remaining 0.5 units of computation at time 5.5. When A commits, it releases its lock on X and B is unblocked. B resumes execution and completes its remaining 1.5 units of computation at time 7. Under this schedule transactions A and C both meet their deadlines, but B misses its deadline of 4 by 3 units of time. The length of the entire schedule is 7.

Note that transactions can wait for locks; thus deadlock is a possibility. Deadlock detection can be done using one of the standard algorithms [18], e.g., by maintaining a wait-for graph and searching for cycles whenever a new arc is added. This is what we do in our simulations, and when a deadlock is detected we select as victim the lowest priority transaction in the cycle. Other methods for selecting a victim are possible, e.g., choosing the transaction with the least remaining computation, but victim selection should take into consideration the time constraints of the transactions involved in the deadlock; we do not consider these other methods here.
e.g.,
3.3.2
time
X,
deadline
to the
by choosing
that
a victim
should
time
523
.
of computation
on item
schedule
tasks involved by maintaining
is selected
at
its remaining
B misses
Evaluation
deadline
0.5 units
Note that transactions can wait for Deadlock detection can be done using Victim
At
C has an earlier
its remaining
it releases
execution.
7. Under
resumes
completion
and completes
it commits,
meet
A
A because
preempts
APetiormance
T~
A pure of
T~
is restarted, implementaif
T~
were
aborted before T~ finished. We chose not to implement demotion. Our tests showed that it occurs so seldom that any difference in overall performance is not measurable.) This method for handling priority inversions was proposed by Sha et al. [30]. The higher order
reason priority
for promoting transaction.
to get it done
TH is that it is blocking the execution of T~, a Thus T~ should execute at an elevated priority in
and removed
ensures preempt
that only a transaction T~ from the CPU.
priority
greater
T~. But with now executing Figure
than
so the T~ can execute.
with priority A transaction
T~ and less than
priority inheritance, on behalf of T~.
3 shows
the Wait
Promote
Priority
inheritance
greater than T~ will be able to TI of intermediate priority, a
T~, would
TI has a lesser
normally priority
be able to preempt than
T~
which
is
algorithm.
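The promotion rule can be sketched as follows. This is our own minimal illustration, not the paper's code: the Txn class, the use of deadlines directly as priorities, and the example deadline values are all assumptions made for the sketch.

```python
# Hedged sketch of the Wait Promote rule: the requester always blocks
# (as under Wait), but every lower-priority lock holder inherits the
# requester's priority. Earlier deadline = higher priority here.

class Txn:
    def __init__(self, name, deadline):
        self.name = name
        self.deadline = deadline      # illustrative deadline value
        self.inherited = None         # deadline inherited from a blocked requester

    def effective_deadline(self):
        # Own deadline, improved by any inherited (earlier) one.
        if self.inherited is None:
            return self.deadline
        return min(self.deadline, self.inherited)

def wait_promote(requester, holders):
    """Resolve a lock conflict; returns the action for the requester."""
    for h in holders:
        if requester.effective_deadline() < h.effective_deadline():
            h.inherited = requester.effective_deadline()
    return "block"  # under Wait Promote the requester waits in all cases

# B (deadline 4) blocks on A (deadline 8, an assumed value); A is
# promoted to B's priority, so an intermediate transaction cannot
# preempt it.
a, b = Txn("A", 8), Txn("B", 4)
print(wait_promote(b, [a]), a.effective_deadline())  # block 4
```

Note that the inherited value is kept on the holder until commit, matching the no-demotion choice described above.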
Figure 4 shows the schedule that is produced when Wait Promote is used to schedule the transactions of Table 1. As before, transaction A gains a lock on object X and computes and then is preempted by transaction B at time 1. A conflict arises when B requests a
IF P(TR) > P(TH)
THEN TH inherits the priority of TR; TR blocks
ELSE TR blocks

Fig. 3. Wait Promote conflict resolution policy.
Fig. 4. Example 1 schedule: Wait Promote.

lock on item X at time 1.5. Transaction B waits and A inherits the priority of B. Transaction C cannot preempt A when it arrives at time 2, because A now has the same priority as B, namely a deadline of 4, which is earlier than the deadline of C. A commits at time 2.5 and releases its locks. Transaction B is unblocked and resumes execution to finish at time 4, and C then enters the processor and executes to finish at time 7. In this schedule all transactions meet their deadlines, and the overall schedule length is also 7.

What if the object is locked in the read mode by some group of transactions? In this event every transaction in the read group will inherit the priority of the requesting transaction TR. Note that a transaction in the read group inherits a priority only if the priority of TR is greater than its own; thus the priority of every transaction holding the lock remains at least as high as that of the highest priority transaction waiting for it. Finally, priority inheritance is transitive. If, for example, TH is itself blocked waiting for a lock on some object O, and the priority of the transaction holding the lock on O is less than that of TH, then that transaction inherits the priority of TH, which may in turn have been inherited from TR.
Fig. 5. Example 1 schedule: High Priority.
3.3.3 High Priority. An alternative approach is to resolve a conflict in favor of the transaction with the higher priority. We implement this policy, High Priority, by comparing the priorities of the transactions involved when a conflict occurs. If the priority of the requesting transaction is greater than that of every transaction holding the lock, then the holders are rolled back and restarted, thereby releasing their locks and freeing the contested object; the requester then acquires the lock and continues. Otherwise the requesting transaction is allowed to wait for the lock. Note that under this policy even a transaction that is executing continuously and holding locks can lose them and be rolled back, if a transaction of higher priority requests one of the objects it has locked.

Figure 5 shows the schedule that is produced when High Priority is used to schedule the transactions of Table 1. As before, transaction A gains a lock on item X during its first unit of processing and is preempted by B at time 1. The conflict that occurs when B requests a lock on X at time 1.5 is resolved by rolling back A, because B has an earlier deadline and thus a higher priority. This frees the lock on X, which B acquires, and B continues executing. Transaction C, which arrives at time 2, causes a conflict for the processor; it is resolved in favor of B, the transaction with the earlier deadline. B completes at time 3 and meets its deadline. Then C executes its 3 units of computation and finishes at time 6. Finally, A regains the processor, executes again starting from its beginning, and completes at time 8, missing its deadline. In this schedule B and C meet their deadlines, and the overall schedule length is 8 because a portion of the processing of transaction A is executed twice.
IF for all TH holding the lock: P(TR) > P(TH) AND P(TR) > P(TA)
THEN Abort each lock holder
ELSE TR blocks

Fig. 6. High Priority conflict resolution policy.
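The rule of Figure 6 can be illustrated with a short sketch. This is our own illustration, not the paper's implementation; priorities are abstract numbers here (larger = higher), with p_th standing for P(TH) and p_ta for P(TA).

```python
# Hedged sketch of the modified High Priority rule (Figure 6): the
# requester aborts the holders only if its priority beats both each
# holder's current priority and the priority the holder would have
# if it were aborted.

def high_priority(p_tr, holders):
    """holders: list of (p_th, p_ta) pairs, one per holder of the lock."""
    if all(p_tr > p_th and p_tr > p_ta for p_th, p_ta in holders):
        return "abort each lock holder"
    return "requester blocks"

# Under Least Slack an abort can raise a holder's priority
# (p_ta > p_th); the extra P(TA) test then makes the requester wait
# instead of triggering a cycle of mutual aborts.
print(high_priority(5, [(3, 7)]))   # requester blocks
print(high_priority(5, [(3, 4)]))   # abort each lock holder
```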
An interesting problem can arise if transactions are prioritized under the Least Slack policy. Recall that the slack of a transaction depends on the amount of service time it has received. Suppose a conflict arises and the lock holder TH is aborted because the requester TR has a higher priority. Rolling back TH to its beginning reduces the amount of service it has received, and under Least Slack this raises its effective priority. The next time the scheduler is invoked, TH may immediately preempt TR; when TH again requests a lock on O it may again conflict with TR, and this time TH can win the conflict and cause TR to be aborted. Our solution to this problem is to compare the priority of the requester against the priority the holder would have if it were aborted. We use P(TH) to denote a transaction's priority given the service it has received, and P(TA) to denote the priority it would have assuming it were aborted. Using this notation we can write the High Priority algorithm as in Figure 6. For the FCFS and Earliest Deadline policies it does not matter if we use the original resolution rule (abort the holder whenever P(TR) > P(TH)) or the modified one, because P(TH) = P(TA). Since the modified rule is clearly superior for Least Slack priority assignment, we will use it for our performance evaluations.

3.3.4 Conditional Restart. Sometimes we can be more clever when we resolve conflicts. Let us assume that the requester TR has a greater priority than all the lock holders, i.e., that the first branch of the High Priority algorithm has been chosen. Aborting the holder TH may be too conservative, because we lose the service time that it has already consumed. The idea here is to estimate whether TH, the transaction holding the lock, can be finished within the amount of time that TR can afford to wait. Let SR be the slack of TR and let EH - PH be the estimated remaining service time of TH. If SR >= EH - PH then we estimate that TH can finish within the slack of TR. If so, we let TH proceed to completion and release its locks, and then let TR execute; this saves us from restarting TH. If TH cannot be finished in the slack time of TR, then we restart TH (as in the previous algorithm). This modification yields the algorithm of Figure 7. Note that if TR blocks in the inner branch, then TH inherits the priority of TR.
This is exactly the same priority inheritance as described in the Wait Promote algorithm.

IF P(TR) > P(TH)
THEN IF SR >= EH - PH
     THEN TH inherits the priority of TR; TR blocks
     ELSE Abort TH
ELSE TR blocks

Fig. 7. Conditional Restart conflict resolution policy.

Figure 8 shows the schedule that is produced when Conditional Restart is used to schedule the transactions of Table 1.

Fig. 8. Example 1 schedule: Conditional Restart.

As before, transaction A gains a lock on item X and a conflict occurs at time 1.5 when B requests a lock on X. At this time the policy calculates the slack for B as S = 4 - 1.5 - 1.5 = 1. This equals the estimated remaining run time of A, so B waits and A inherits the priority of B. A runs to completion without preemption and finishes at time 2.5, releasing the lock on X. (Transaction C does not preempt A because A executes with the inherited priority of B, which has the earliest deadline.) Transaction B is unblocked and resumes execution to finish at time 4, and then C executes to finish at time 7. All transactions meet their deadlines. Note that this schedule is exactly the same as the one produced by Wait Promote. In fact, it is easy to see that Conditional Restart behaves exactly like Wait Promote when the first branch of the inner condition is taken, and exactly like High Priority when the second branch, the restart, is taken.
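The decision of Figure 7 can be sketched directly. This is our own illustration, not the paper's code; abstract numeric priorities (larger = higher) and the argument names are assumptions.

```python
# Hedged sketch of the Conditional Restart decision (Figure 7).
# slack_r is S_R, the requester's slack; remaining_h is E_H - P_H,
# the holder's estimated remaining service time.

def conditional_restart(p_tr, p_th, slack_r, remaining_h):
    if p_tr > p_th:
        if slack_r >= remaining_h:
            # Holder fits within the requester's slack:
            # behave like Wait Promote.
            return "TH inherits priority of TR; TR blocks"
        # Otherwise behave like High Priority and restart the holder.
        return "abort TH"
    return "TR blocks"

# The situation of Figure 8: at time 1.5, B's slack is
# 4 - 1.5 - 1.5 = 1 and A's remaining time is 1, so B waits
# and A is promoted.
print(conditional_restart(2, 1, slack_r=1.0, remaining_h=1.0))
```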
Furthermore, we make the special Conditional Restart decision only if the lock conflict is one-on-one, i.e., the requester conflicts with exactly one lock holder, and the lock holder is not itself blocked waiting for some other lock, i.e., chained blockings are not involved. Experience with our simulations has indicated that chained blockings are rare, so the payoff for handling them in a clever way is limited.

Finally, we caution that the examples presented in this section are obviously greatly simplified. They were used to illustrate the different algorithms and to help us motivate the various scheduling options; they do not prove that one algorithm is better than another. In reality some transactions may update several items and others none at all (i.e., may be read-only), and the way blockings occur can greatly affect the performance of the concurrency control algorithms. In Section 4 we present a detailed simulation model that is used to compare the algorithms.
3.4 I/O Scheduling

In a nonmemory resident database system, the I/O requests may be managed so that transactions can meet their deadlines. This goal is somewhat different from that of conventional systems, in which the I/O system is usually managed to optimize throughput. One way this is accomplished is by using the SCAN algorithm [28] to order a batch of requests so that the seek time of the disk head is minimized. While maximizing resource throughput may be good for performance criteria such as mean response time, it may be bad for transaction deadlines. For example, SCAN may order a sequence of I/O requests so that the request from the transaction with the earliest deadline is serviced last.

In this paper we look at two ways to schedule I/O requests which are based on transaction priorities, as opposed to the cylinder positions of the requests on the disk.

3.4.1 FIFO. Under this policy, I/O requests are serviced in the order in which they are generated. Because each transaction generates a sequence of requests that is interleaved in the I/O queue with the requests of other transactions, this ordering is essentially random with respect to transaction priority. It is also random with respect to cylinder position on the disk.

3.4.2 Priority. Under this policy, the waiting request whose issuing transaction has the highest priority is scheduled for service next. Thus a request issued by a transaction with a high priority can leapfrog over requests that have been waiting longer in the queue. We expect this ordering also to be essentially random with respect to cylinder positions on the disk.

In our model there are two types of I/O requests for the database disks: reads, which are issued by unfinished transactions, and writes, which flush the updates of committed transactions back to disk. (The log resides on a separate device, so log writes do not compete with these requests; they are sequential writes and are serviced FCFS.) Reads are favored over writes. This is desirable because reads are issued by transactions that are still trying to meet their deadlines, so servicing them quickly directly speeds their completion, while writes are issued on behalf of transactions which have already committed. Writes are ordered by cylinder position, which enhances performance because they can then be serviced with less seek time. The priority given to the writes cannot be too low, however; as our experiments have shown, performance may in fact decrease if committed writes are delayed excessively, because writes must be completed in order to free buffer space in memory.
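The Priority I/O discipline of Section 3.4.2 can be sketched with a small priority queue. This is our own minimal sketch, not the simulator's code; using the transaction deadline as the priority key and FIFO tie-breaking are assumptions for the example.

```python
# Hedged sketch of Priority I/O scheduling: the pending request whose
# issuing transaction has the highest priority (earliest deadline here)
# is serviced next, so a newly issued high-priority request can
# leapfrog older ones.

import heapq

class PriorityIOQueue:
    def __init__(self):
        self._heap = []   # (deadline, seq, request)
        self._seq = 0     # FIFO tie-break among equal priorities

    def add(self, deadline, request):
        heapq.heappush(self._heap, (deadline, self._seq, request))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._heap)[2]

q = PriorityIOQueue()
q.add(9.0, "read page 12 (txn A)")
q.add(4.0, "read page 3 (txn B)")   # arrives later, earlier deadline
print(q.next_request())             # B's request leapfrogs A's
```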
4. SIMULATION MODEL

The RTDB simulation program was built using SIMPAS, an event-oriented discrete simulation language [5]. Table II contains the names and meanings of the parameters that control the configuration and resources of the system. The system that we simulate has a single CPU, a buffer pool of MemSize pages, and a database of DBsize pages stored on NumDisks disks. The log is maintained on a separate disk device, and each disk has its own input queue of I/O requests.

We do not model the buffer pool as a set of specific pages; that is, we keep track neither of which pages are in the buffer pool nor of which objects each transaction has read or modified. Instead, when a transaction attempts to read an object, we model the buffer pool with a boolean variable that has the value true with probability MemSize/DBsize. If the value is true, then the page is in memory and the transaction can continue processing. If the value is false, then an I/O service request is created and placed in the input queue of the appropriate disk. The database is partitioned equally over the disks, and we use the function

D = ceil(i x NumDisks / DBsize)

to map an object i to the disk where it is stored.

Transactions enter the system with interarrival times chosen from an exponential distribution, and they are ready to execute when they enter the system (i.e., release time equals arrival time). Transaction characteristics are controlled by the parameters listed in Table III. The actual number of objects accessed by a transaction is chosen from a normal distribution with mean Pages, and the pages themselves are chosen uniformly from the database. Each page accessed is updated with probability Update.
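The buffer-pool and disk-mapping model just described can be sketched as follows. This is our own illustration; the parameter values and the use of Python's random generator are assumptions, not the simulator's code.

```python
# Hedged sketch of the probabilistic buffer-pool model and the
# object-to-disk mapping D = ceil(i * NumDisks / DBsize).

import math
import random

DBsize, MemSize, NumDisks = 400, 100, 4   # illustrative values

def page_in_memory(rng):
    # A read hits the buffer pool with probability MemSize/DBsize.
    return rng.random() < MemSize / DBsize

def disk_of(i):
    # Objects are numbered 1..DBsize and partitioned equally over disks.
    return math.ceil(i * NumDisks / DBsize)

rng = random.Random(42)
if not page_in_memory(rng):
    print("I/O request queued at disk", disk_of(250))
```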
Table II. System Resource Parameters

Parameter   Meaning
DBsize      Number of pages in database
MemSize     Number of pages in memory
NumDisks    Number of disks
IOtime      Time to perform a disk access (read or write)
Table III. Transaction Parameters

Parameter   Meaning
ArrRate     Mean arrival rate of transactions
Pages       Mean number of pages accessed per transaction
Compactor   CPU computation per page accessed
Update      Probability that a page is updated
MinSlack    Minimum slack time
MaxSlack    Maximum slack time
EstErr      Error in runtime estimate
Rebort      Time needed to rollback and abort a transaction

A transaction has an execution profile that alternates chunks of computation with I/O requests. (We use Pages' to denote the actual number of pages accessed by a specific transaction; Pages is the mean.) Pages that are updated are locked exclusively; the other pages accessed are locked in share mode. The total I/O service requirement of a transaction consists of three components: the time needed to read the accessed pages into the buffer pool; the time needed to write a log record, which can be done directly since the log resides on its own disk; and the time needed to write the modified pages back to disk. Since the modified pages are flushed out after a transaction commits, this third component is not included in the runtime requirement of the transaction.

Let C denote the total computation requirement of a transaction:

C = Pages' x Compactor.

The expected I/O service time needed by a transaction is

I = IOtime x Pages' x (1 - MemSize/DBsize) + IOtime,

where the last term accounts for writing the log record. Thus the total expected runtime of a transaction executing in an unloaded system is R = C + I. The accuracy of a transaction's runtime estimate E with respect to R is controlled by the parameter EstErr:

E = R x (1 + EstErr).

How we choose values of EstErr is explained later with the experimental results.

The parameters MinSlack and MaxSlack specify respectively a lower and an upper bound on the slack time assigned to a transaction. When a transaction enters the system, a slack time is chosen uniformly from the range given by these two bounds, and the transaction's deadline is set by adding R and the chosen slack to its arrival time.

Aborting a transaction consists of rolling back its updates and removing it from the system; the amount of time this takes is controlled by the parameter Rebort. A transaction that is aborted by the overload management policy is not executed again. A transaction that is rolled back because of a deadlock, however, is restarted: after the rollback it is placed again in the ready queue. Note that aborts result from the overload management policy, while restarts result from lock conflicts.
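The formulas above can be checked with a short worked example using the base values of Table V (memory resident, so MemSize = DBsize and there is no read I/O). IOtime is not listed in Table V; the 25 ms used here is our placeholder, as is the fixed random seed.

```python
# Hedged worked example of C, I, R, E and the deadline assignment.

import random

PagesActual = 12          # Pages' for this transaction
Compactor   = 0.010       # 10 ms of CPU per page accessed
IOtime      = 0.025       # assumed disk access time (placeholder)
MemSize, DBsize = 400, 400
MinSlack, MaxSlack, EstErr = 0.1, 1.0, 0.0

C = PagesActual * Compactor                                  # computation
I = IOtime * PagesActual * (1 - MemSize / DBsize) + IOtime   # reads + log write
R = C + I                  # expected runtime in an unloaded system
E = R * (1 + EstErr)       # the estimate seen by the scheduler

arrival = 0.0
slack = random.Random(7).uniform(MinSlack, MaxSlack)
deadline = arrival + R + slack

print(round(C, 3), round(R, 3), round(E, 3))   # 0.12 0.145 0.145
```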
The simulator does not explicitly execute transactions. Calls to the lock manager, the conflict manager, and the routines needed to perform data object accesses are simulated, and the CPU time needed to execute these functions is included in the amount of time that a transaction is allowed to execute; context switching costs are ignored. As mentioned earlier, locks are acquired on a per data object basis. Deadlocks are detected by maintaining a wait-for graph and searching for cycles whenever a new arc is added to the graph. When a cycle is detected, a victim is selected by choosing the transaction with the lesser priority in the cycle; the victim is rolled back, which breaks the deadlock. A node is added to the graph when a transaction enters the system and removed when the transaction either commits or is aborted.
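The deadlock handling just described can be sketched in a few lines. This is our own minimal implementation, not the simulator's; it assumes, for simplicity, that each transaction waits for at most one other transaction.

```python
# Hedged sketch: keep a wait-for graph, check for a cycle whenever a
# new arc is added, and pick the lowest-priority transaction in the
# cycle as the victim.

def find_cycle(graph, start):
    """Follow wait-for edges from `start`; return the cycle or None."""
    path, seen = [], set()
    node = start
    while node is not None and node not in seen:
        seen.add(node)
        path.append(node)
        node = graph.get(node)          # at most one outgoing edge here
    if node is None:
        return None
    return path[path.index(node):]      # the cycle portion of the walk

def add_arc(graph, waiter, holder, priority):
    graph[waiter] = holder
    cycle = find_cycle(graph, waiter)
    if cycle:
        return min(cycle, key=priority)  # victim: lowest priority in cycle
    return None

wait_for = {"A": "B"}                         # A already waits for B
prio = {"A": 5, "B": 2, "C": 9}.__getitem__   # illustrative priorities
print(add_arc(wait_for, "B", "A", prio))      # B->A closes a cycle; victim B
```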
We use several metrics to evaluate the algorithms. In particular, we measured how many transactions missed their deadlines, by how much time they missed them, the number of restarts, and the number of deadlocks that formed. We also study how the algorithms perform when the system experiences a sudden load increase, i.e., a step function in the input arrival rate; this experiment will be explained in Section 6. The percentage of missed deadlines is calculated with the following equation:
%Missed = (Tardy Jobs + Aborts) / (Jobs Processed) x 100.

A job is tardy if either it is processed completely but misses its deadline, or it is aborted. The tardy time of a transaction that commits before or on its deadline is simply zero. Aborted transactions do not contribute to the mean tardy time metric; in this study the mean tardy time is the average of the tardy times of all committed transactions.
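The %Missed metric above is straightforward to compute; the run counts below are hypothetical.

```python
# The %Missed equation, applied to a hypothetical simulation run.

def pct_missed(tardy_jobs, aborts, jobs_processed):
    # %Missed = (Tardy Jobs + Aborts) / Jobs Processed * 100
    return (tardy_jobs + aborts) / jobs_processed * 100.0

print(pct_missed(30, 12, 700))   # 6.0
```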
Table IV. Summary of Scheduling Policies

Component            Policies
Overloads            AE - All Eligible; NT - Not Tardy; FD - Feasible Deadlines
Priority             FCFS - First Come First Serve; ED - Earliest Deadline;
                     LS - Least Slack (Static evaluation); LSC - Least Slack (Continuous evaluation)
Concurrency Control  W - Wait; WP - Wait Promote; HP - High Priority; CR - Conditional Restart
I/O Scheduling       FIFO; Priority
For some applications it may be useful to describe the tardy jobs and the aborted jobs together in a single metric, but for other applications the reasons why a job did not complete in time are critical; we therefore keep the tardy and aborted jobs separate. (Unlike a tardy job, which is processed completely even though it is late, an aborted job is never completed.)

Table IV summarizes the different methods that we have presented for assigning priorities, managing overloads, managing concurrency, and scheduling I/O requests. Taking the cross product of these components yields many different scheduling algorithms, and due to space considerations we cannot present results for all of them; we use selected results that best illustrate the main performance differences. We denote an algorithm by combining the abbreviations of its components: for instance, LS/WP denotes the algorithm formed by combining Least Slack priority assignment with Wait Promote concurrency control. Other combinations are denoted similarly.

Section 5 presents the results of our performance evaluation for the main memory resident case, and Section 6 presents the results for the disk resident case. Section 7 summarizes our main conclusions. For each experiment and each algorithm tested, we ran the simulation 20 times with different random seeds, and each run continued until at least 700 transactions were executed. The results plotted in the graphs are averages over these runs, and 90% confidence intervals (shown as vertical bars) were computed for each point.
5. EXPERIMENTAL RESULTS: MEMORY RESIDENT DATABASE

We begin our experiments by studying the performance of the scheduling algorithms in a memory resident database system. This case is interesting in its own right, since memory sizes are steadily growing and memory prices dropping, and many real-time database applications are currently memory resident. Furthermore, studying the memory resident case first simplifies the analysis and makes it somewhat easier to understand how each scheduling option impacts performance. In Section 6 we return to the disk resident case.

The base parameter values for this case are shown in Table V. They were chosen as reasonable values for a specific application within the wide range of possible real-time database systems; they are not meant to model any one existing system. We chose values that load the system rather heavily: for example, at the base arrival rate of 7 transactions per second, the CPU utilization due to computation alone is 0.84. We also note that under the base values the probability that an accessed page is updated is 1. This means that all locks are exclusive, so that lock conflicts are frequent; this makes an interesting first test of the scheduling algorithms, and it is less of a restriction than it may seem, since transactions in a memory resident system hold their locks for less time. We will relax this assumption later and allow read
locks.

Logging activity induces two types of I/O requests: writes to the log device, which are sequential, and writes of modified pages back to the disk resident copy of the database, which occur after a transaction commits in order to keep that copy up-to-date. Since these writes occur after commit, they cannot interfere with the deadlines of the transactions that generated them, and they have a relatively low effect on transaction tardiness.

5.1 Effect of Increasing Load

In this experiment we varied the transaction arrival rate from 6 jobs/sec to 8 jobs/sec in increments of 0.5. The other parameters had the base values given in Table V. Under these settings the CPU utilization due to computation ranges from 0.72 to 0.96.

5.1.1 Overload Management. Our first experiment compared the three overload management policies, AE, NT and FD, using FCFS priority assignment and the Wait concurrency control policy. Figure 9 shows the percentage of missed deadlines as a function of the arrival rate. Aborting a few of the late transactions substantially helps the other transactions meet their deadlines.
Table V. Base Parameters: Memory Resident Database

Parameter   Value        Parameter   Value
DBsize      400 Pages    MemSize     400 Pages
ArrRate     7 Jobs/sec   Pages       12 Pages
Compactor   10 ms        Update      1
MinSlack    0.1 sec      MaxSlack    1 sec
Rebort      5 ms         EstErr      0
Fig. 9. Main memory; Overload management.
The three overload management algorithms perform comparably when the load is lowest. As the load increases, NT and FD yield significantly better performance than AE; at the highest arrival rate setting they miss approximately 50 percent fewer deadlines than AE. This same approximate behavior holds true for the other priority assignment policies as well. Unless otherwise noted, all the remaining graphs show FD overload management, and the overload code will be omitted from the legend.

5.1.2 Priority Assignment. To eliminate concurrency control as a factor in performance, we performed an experiment with concurrency control turned off, i.e., all lock requests were granted immediately and
Fig. 10. Main memory; Priority assignment (No CC).
the resulting schedule was nonserializable. (A similar effect can be achieved by setting all transactions to be read-only, thus there will be no lock conflicts. However, this will slightly alter the runtime and deadline characteristics of transactions
since no logging would be necessary.) Figure 10 shows the performance of the priority assignment algorithms. There are four lines in the graph because the Least Slack algorithm appears once with static evaluation and once with continuous evaluation. As expected, all the algorithms miss a greater number of deadlines as the load increases. Algorithm FCFS misses the most deadlines for all load settings. This is not surprising since it does not use transaction time
constraints when assigning priority. At lower load settings ED performs best. As the load increases, the performance margin of ED and LSC over FCFS narrows. As mentioned earlier (Section 3), ED performs poorly at higher load settings because it assigns high priorities to transactions which have missed or are about to miss their deadlines. This causes other transactions that could meet their deadlines to be tardy. The same is true for LSC, whose performance curve follows that of ED very closely. We will see repeatedly that ED (or LSC) is usually a good performing algorithm when the load is low, but that it loses its performance margin over other algorithms when the load increases. At higher load settings LS is clearly the superior policy. Because the slack time is evaluated only once, when the transaction enters the system, this algorithm avoids the weakness that is common to both ED and LSC. Since LS (static evaluation) is dramatically better than LSC, we will use it as the
Fig. 11. Main memory; Priority assignment (No CC).

preferred version of LS for the remainder of the experiments unless noted otherwise.

Ideally we want to schedule transactions such that all deadlines are met. However, if this is not possible, then we would want to minimize the amount by which tardy transactions miss their deadlines.
Figure 11 graphs the mean tardy time in seconds against arrival rate. (Concurrency control is still turned off.) It is interesting to note that ED has
the least mean tardy time, then LSC, then FCFS and finally LS. These results are not surprising for it is known that ED minimizes the maximum task tardiness and LS maximizes the minimum task tardiness [7]. 5.1.3
Concurrency Control. We now examine the performance of the four concurrency control mechanisms when they are paired with the three priority policies. (We do not consider LSC.) Since we are in the main memory
database case, using FCFS to schedule transactions results in a serial execution of transactions. The currently executing transaction can never be preempted by an arriving transaction. Thus there is no difference in performance when FCFS is paired with the different concurrency control mechanisms. Figure 12 graphs the Wait and Wait Promote concurrency control strategies for each of the three priority policies. For reasons of clarity, only one curve is shown for FCFS. At lower load settings ED/W and ED/WP perform better than both FCFS and LS. As the load increases, the performance
margin of ED over FCFS narrows. Again we see the problem with ED at higher load settings. Although LS/WP is not as good as either ED/W or
Scheduling 50 45 40 35 30
Real-Time Transactions:
A Performance
‘$
537
-50
I FCFS/W
-45
1: +-+
ED/W
❑ .9
ED/WP
V.-v
LS/W
-40 -35 -30
LSIWF’
25
Evaluation
/; -25
20
-20
15
-15
10
-10 -5
5 I
0
I
I
I
1
I
6.0
6.5
7.0
7.5
8.0
(jobsJsec)
ArrRate Fig. 12.
ED/WP
at the
lowest
Mainmemory;
settings,
it
-o
Concurrency
is clearly
control.
the
superior
policy
at higher
loads. Figure algorithms
Figure 13 shows the results for the High Priority and Conditional Restart algorithms for each of the three priority policies. (Again we show only one graph for FCFS.) The results are similar to Figure 12 except that ED/HP and ED/CR lose their performance margin over FCFS even sooner. This occurs because both HP and CR will abort transactions only when conflicts occur. When the load is high and conflicts are frequent, these aborts effectively increase the transaction arrival rate. Under increased load, the performance of ED degrades as explained earlier.

In Figure 14 we plot ED and LS with each concurrency control policy. No algorithm is best at all load settings. Algorithms ED/WP and ED/CR are best at the lowest load settings while LS/WP and LS/CR perform better at higher loads. The ED algorithms are bunched closely, with ED/WP and ED/CR performing slightly better than ED/W and ED/HP. Finally, the worst combinations are LS/W and LS/HP.

An obvious question raised by Figure 12 is why LS/WP is so much better than LS/W, particularly under high loads. The reason is that LS scheduling permits a greater degree of concurrency (average number of active jobs in the system at any time) than does ED scheduling. By contrast, the performance gap between ED/W and ED/WP is small. (Remember that the database is memory resident. Preemptions occur only when a higher priority transaction joins the ready queue. The current job never gives up the processor to wait for I/O.)
[Fig. 13. Main memory; Concurrency control. Curves: FCFS/HP, ED/HP, ED/CR, LS/HP, LS/CR vs. ArrRate (jobs/sec).]

[Fig. 14. Main memory; Priority assignment and concurrency control. Curves: ED/W, ED/WP, ED/HP, ED/CR, LS/W, LS/WP, LS/HP, LS/CR vs. ArrRate (jobs/sec).]

[Fig. 15. Main memory; Priority assignment. Average number of active jobs vs. ArrRate (jobs/sec); curves: FCFS/W, ED/W, ED/WP, LS/W, LS/WP.]
To confirm this, Figure 15 graphs the average number of active jobs for the LS and ED algorithms in the arrival rate experiment. We see that both LS algorithms produce higher levels of concurrency than the FCFS and ED algorithms. We also see that the average number of active jobs for LS/WP is less than that of LS/W for nearly all load settings.

To explain the results of Figure 15, we note that with LS priority assignment there is no correlation between a transaction's arrival time and its priority, or slack. By contrast, with Earliest Deadline there is a direct correlation between a transaction's arrival time and its priority. Namely, a transaction TB which arrives much later than transaction TA is more likely to have a deadline which is later than TA's. Thus it is less likely that transaction TB will preempt transaction TA. For example, transaction TA arrives at time 4, has a slack time of 3, and a deadline of 10. Transaction TB arrives at time 8, has a slack of 2, and a deadline of 16. If we use LS to determine priority, then TB will preempt TA when it arrives. However, if we use ED to determine priority, then TB will not preempt TA.
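The TA/TB example can be checked mechanically. Below is a sketch of just the two priority comparisons (our own code; the slack values are taken as given at the moment TB arrives, rather than recomputed):

```python
# Checking the TA/TB preemption example with the numbers from the text.

TA = {"name": "TA", "slack": 3, "deadline": 10}
TB = {"name": "TB", "slack": 2, "deadline": 16}

def preempts_ls(new, running):
    # Least Slack: smaller slack means higher priority.
    return new["slack"] < running["slack"]

def preempts_ed(new, running):
    # Earliest Deadline: earlier deadline means higher priority.
    return new["deadline"] < running["deadline"]

print(preempts_ls(TB, TA))  # True: slack 2 < 3, so TB preempts TA under LS
print(preempts_ed(TB, TA))  # False: deadline 16 > 10, so TA keeps the processor under ED
```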
One consequence of this higher level of concurrency is that there are more lock conflicts. Furthermore, there are more conflicts where a lock held by a lower priority transaction blocks a high priority requester, i.e., priority inversion. When a priority inversion occurs, the algorithm LS/WP takes the right action by promoting the priority of the lock holder to be as high as the priority of the lock requester. This shortens the waiting time for a high
priority transaction and increases the chances that it will meet its deadline. The Wait strategy does not do this. This demonstrates the importance of handling priority inverting conflicts correctly.

This same characteristic of LS, namely a higher average level of concurrency, also makes it more likely that a deadlock will occur. Although we do not include the graphs, our experiments show that LS algorithms do produce more deadlocks than ED algorithms. Furthermore, LS/W suffers significantly more deadlocks than LS/WP.

5.2 Biasing the Runtime Estimate

The runtime estimate E is used by three of the scheduling policies that we presented in Section 3. Of the three overload management policies, Feasible Deadlines is the only one which makes use of the runtime estimate E. The priority policy Least Slack also uses E, as does the concurrency control policy Conditional Restart.
For the policies FD and CR it is relatively easy to predict how they will respond to error in the runtime estimate (see below). The case for LS, however, is not as simple. To study how error in the runtime estimate E affects the different components of scheduling, we devised three experiments, each of which biased the runtime estimate in a different way. The first experiment was designed to introduce a random amount of error (within a certain range) into the estimate E. The second experiment was designed to bias all the runtime estimates in the same direction and by proportionally equal amounts. In both experiments the priority policy LS performed nearly equally well under high error and low error settings. This finding motivated a final experiment where half of the transactions were made to overestimate their runtime and the other half underestimated. This technique for biasing E did yield changes in performance for the LS policy.
Thus it is this technique that is used in the experimental results reported below. For half of the transactions E = R x (1 + EstErr); for the other half E = R x (1 - EstErr). The value of EstErr was varied from 0 to 4 in increments of 1. Thus when EstErr = 0, E = R and there is no deliberate bias in the runtime estimate. When EstErr = 1, half the transactions have E = 0 and half have E = 2 x R. (For values of EstErr greater than 1, negative runtime estimates are converted to 0.)
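The biasing rule can be sketched as follows (our own helper code; the coin flip stands in for the experiments' half-and-half split of transactions):

```python
import random

def biased_estimate(R, est_err, overestimate):
    """E = R*(1 + EstErr) for overestimators, E = R*(1 - EstErr) otherwise;
    negative estimates are converted to 0, as in the text."""
    sign = 1 if overestimate else -1
    return max(0.0, R * (1 + sign * est_err))

rng = random.Random(42)
R = 10.0
for est_err in (0, 1, 2):
    # flip a coin per transaction to decide which half it falls in
    sample = [biased_estimate(R, est_err, rng.random() < 0.5) for _ in range(4)]
    print(est_err, sample)
```

At EstErr = 0 every estimate equals the true runtime R; at EstErr = 1 the estimates split into 0 and 2R, matching the description above.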
5.2.1 Overload Management. We would expect the accuracy of the runtime estimate to have a large effect on the policy FD. The overload management policy is responsible for aborting transactions, and since aborted transactions are counted as having missed their deadlines, the policy directly affects the performance. It is easy to see how this policy is affected by error in the runtime estimate. When the runtime estimate for jobs is zero, FD behaves like NT, aborting jobs only if they have missed their deadlines. When the estimate is high, FD thinks that jobs are much longer than they are and will judge, incorrectly, that they have infeasible deadlines. Thus jobs with feasible deadlines are unnecessarily aborted. The predicted behavior is confirmed in Figure 16.

[Fig. 16. Main memory; Overload management. Curves: AE/ED/WP, NT/ED/WP, FD/ED/WP vs. EstErr.]
The overload management policies AE and NT are not affected by changes in EstErr.

5.2.2 Priority Assignment. Figure 17 graphs the results for the LS policies. Note that the difference in performance between when the error is low and when the error is high (EstErr = 4) is only a few percentage points. LS with continuous evaluation (graph is not shown) is more sensitive to error in the runtime estimate. This is understandable, since continuous evaluation means that the inaccurate runtime estimate will be used many times to make scheduling decisions.

5.2.3 Concurrency Control. Only the concurrency control strategy Conditional Restart uses the runtime estimate to make scheduling decisions. Algorithm CR uses the estimate to decide if a low priority transaction holding a lock can finish within the slack time of the higher priority transaction requesting the lock. We can easily describe the behavior of CR for the extreme values of E. When E = 0, CR will always judge that the lock holder can finish within the slack of the lock requester (assuming that the requester has a positive slack, i.e., has not missed its deadline). Since promotion is then used by CR, the behavior is nearly always the same as WP.
[Fig. 17. Main memory; Priority assignment. Curves: LS/W, LS/WP, LS/HP, LS/CR vs. EstErr.]
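The decision CR makes at a lock conflict can be sketched like this (our own formulation; treating E minus the processor time already consumed as the holder's estimated remaining work is our assumption, as are the function and parameter names):

```python
def cr_action(E_holder, P_holder, slack_requester):
    """Conditional Restart sketch: promote the holder (as WP would) if its
    estimated remaining work fits in the requester's slack, else restart it
    (as HP would)."""
    remaining = max(0.0, E_holder - P_holder)  # estimate minus work done (assumed)
    return "promote" if remaining <= slack_requester else "restart"

# E = 0: remaining work is judged to be 0, so CR always promotes (behaves like WP)
print(cr_action(E_holder=0.0, P_holder=0.0, slack_requester=5.0))   # promote
# E much larger than the real runtime: CR restarts the holder (behaves like HP)
print(cr_action(E_holder=100.0, P_holder=2.0, slack_requester=5.0)) # restart
```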
At the other extreme, when E is much larger than R, the algorithm will nearly always judge that the lock holder cannot finish within the slack time of the lock requester. Thus the lock holder will be rolled back and restarted. This behavior is the same as the concurrency control strategy HP. When the value of E is not so extreme, CR will behave somewhere between WP and HP.

5.3 Cost of Serializability

We can use the results of the experiment where concurrency control was turned off to understand how the enforcement of serializable schedules affects performance in terms of missed deadlines. Figure 18 shows the performance of the serialized and unserialized versions for one of the better versions of ED and LS, namely ED/WP and LS/WP. The unserialized versions perform better than the serialized version for each algorithm. Thus serializability does cause the algorithms to miss more deadlines. However, missed deadlines is only one cost metric. Database inconsistency occurs as a result of unserialized schedules. For some applications the cost of database inconsistency may far outweigh the performance benefit in terms of missed deadlines
gained by ignoring concurrency control.

5.4 Increasing Conflicts

In this experiment we varied the value of DBsize from 200 to 400 in increments of 50. The parameter MemSize was varied in the same way so that we remained in the main memory case. The other parameters had the values shown in Table V. In this kind of experiment, the overall load and
[Fig. 18. Main memory; Serialized versus unserialized. Curves: ED/WP, ED/W, LS/WP, LS/W vs. ArrRate (jobs/sec).]

transaction characteristics remain constant. However, since transactions access the same number of objects, the probability of conflict is higher when the size of the database is small. The probability of conflict decreases as the number of objects in the database increases. Thus we can compare how the various concurrency control strategies perform as the number of conflicts changes. Figure 19 shows the results for all four concurrency control strategies each paired
with ED and LS. It confirms our expectation that, as DBsize increases, all scheduling algorithms will perform better. One noteworthy observation is that the curves for the ED algorithms are remarkably flat, i.e., there is only a small difference between the small database and the large database performance values. Recall from Section 5.1.3 that ED scheduling in a main memory database results in a very low level of concurrency. If the average number of active jobs is very low (less than two) then it doesn't matter how small the database is, because there are not enough active jobs requesting locks to conflict. However, LS scheduling results in higher levels of concurrency (Figure 15), and algorithms using LS are more sensitive to changes in database size (Figure 19). In fact LS/W and LS/HP perform very poorly when the database is small. By contrast, LS/WP and LS/CR, which use priority inheritance to manage priority inversions, maintain good performance even when the database is small. Again this demonstrates the importance of managing priority inversions correctly.
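The conflict-probability claim is easy to check back-of-the-envelope. In a rough model of our own (not the paper's), two transactions each access k objects drawn uniformly without replacement from a DBsize-object database, and the chance that they touch a common object shrinks as DBsize grows:

```python
def p_conflict(db_size, k):
    """Probability that two k-object access sets drawn without replacement
    from db_size objects intersect."""
    p_disjoint = 1.0
    for i in range(k):  # second transaction's picks must avoid the first's k objects
        p_disjoint *= (db_size - k - i) / (db_size - i)
    return 1.0 - p_disjoint

for db_size in (200, 300, 400):  # the DBsize range used in this experiment
    print(db_size, round(p_conflict(db_size, 8), 3))
```

The probability falls monotonically over the 200-400 range, which is the effect the experiment exploits to vary the conflict rate while holding load constant.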
[Fig. 19 (partial): the four concurrency control strategies paired with ED and LS vs. DBsize; curves: ED/W, ED/WP, ED/HP, ED/CR, LS/W, LS/WP, LS/HP, LS/CR.]