An Analysis of Strategic Behavior in ebay Auctions

An Analysis of Strategic Behavior in eBay Auctions.∗ Raul Gonzalez Facultad de Economías Universidad Autónoma de Nuevo León Monterrey, N.L. México. A....
Author: Daniel Hudson
3 downloads 1 Views 332KB Size
An Analysis of Strategic Behavior in eBay Auctions.∗ Raul Gonzalez Facultad de Economías Universidad Autónoma de Nuevo León Monterrey, N.L. México. A.P. 288 [email protected]

Kevin Hasker Department of Economics Bilkent University 06800 Bilkent, Ankara, Turkey [email protected]

Robin C. Sickles Department of Economics Rice University Houston, Texas 77005-1892 [email protected] May 25, 2004

Abstract This paper presents structural estimates of bidding behavior in eBay computer monitor auctions. We use our estimates to reject the use of Jump Bidding (Avery 1998) or “Snipe or War” bidding (Roth and Ockenfels, 2000). We also find that longer auctions only have a small effect on price and experienced auctioneers respond to this incentive. JEL Classification Numbers: C1, C7, D44, L1. Keywords: Internet auctions, English auctions, structural econometrics, simulated nonlinear least squares. Running Title: Strategy in eBay Auctions

1

Introduction

Perhaps the most exciting economic innovation on the Internet is consumer to consumer auctions. In September of 1995 eBay used the power of the Internet to create a marketplace for consumers to auction goods.

This combined the power of a “for sale” add with millions of viewers and the

price discovery power of auctions. At last a mechanism well understood by economists was used not just for a few expensive items, but for the sale of everyday items to everyday consumers. This market’s success and growth rate have been breathtaking. In 1998 Reiley (2000) estimated that the ∗ We would like to thank Chad Dacus, Ali Hortaçsu, Al Ockenfels, and Alexandar Ruiz for their valuable comments, and would like to acknowledge the financial support from the Rice University Center for the Study of Institutions and Values. For their assistance with data collection we would like to thank Wes Brown and Jason Murasko, with special thanks to Robert Hasker. Earlier drafts of this paper were given at Rice University, WZB (Berlin) and the 2003 North American Winter Meetings of the Econometric Society.

1

industry was going to produce over one billion dollars worth of sales, in 1999 eBay alone reported $2.82 billion. Last year (in 2002) they had $14.88 billion in sales and this year they can expect $25 billion. Their international membership is currently over 75 million, and will probably surpass the population of the United States next year. What is known about this market? There have been exploratory papers but there has not been a thorough structural analysis. The basic theory of how the market operates is understood since the goods are sold by auction. Since the transactions take place on the Internet it is possible in theory to monitor these transactions and develop data retrieval protocols facilitating in-depth empirical analysis.

In fact data sets of very large sizes can be assembled because of the enormous volume

of transactions and their public domain nature. We have collected and processed records of over 10,000 auctions of computer monitors, allowing a thorough structural analysis. In this paper we estimate structural bidding functions and then test which bidding strategy the bidders use. This question is commonly overlooked in empirical research despite the unique insight empirical analysis can give on this question.

Many well-specified economic models have multiple

equilibria, and only through empirical analysis can we discover which equilibrium people are using. We fail to reject the null hypothesis that they are using competitive bidding strategies instead of the alternative “snipe-or-war” or “jump-call” bidding strategies. eBay has two different auction formats. The common format is an English auction with a hard stop time. This is the type of auction used in 87 percent of our data set and the type of auctions on which we focus. Bidding goes on from three to ten days and stops at a preset time. This hard stop time does have a significant effect because bidders frequently wait until the last minute to post their bid, and there are reports of bids not getting in on time. While there are many papers that analyze Internet auctions none use structural techniques on such a comprehensive data set. Reiley et al. (2000) did collect a large data set, but limited the reduced form analysis to 461 data points. Bajari and Hortaçsu (2003) used parametric structural analysis with endogenous entry, but only had 418 observations to base their estimates on and were forced to make some relatively strong assumptions. A consistent finding in Bajari and Hortaçsu (2003), Houser and Wooders (2001), Reiley et al. (2000), and Melnik and Alm (2001) is that the auctioneer’s reputation matters. On eBay reputation is captured in a “feedback rating” for both bidders and sellers. After a sale, sellers and bidders can rate each other as positive (+1), neutral (0), or negative (-1). A participant’s feedback rating is the sum of the evaluations after each auction. The feedback rating can be manipulated by both bidders and auctioneers, but bidders react to it indicating it is valuable information. However, as Melnik and Alm (2001) point out, the economic impact of the feedback rating is not significant. Both Melnik and Alm and Reiley et al. (2000) also analyze how the length of auction affects sales price. Melnik and Alm find that longer auctions do not significantly affect price, while Reiley et al. find they do.

2

As Melnik and Alm point out this is not surprising since Reiley et al. analyze a rarer good and thus there could be fewer bidders per auction.

However, the reduced form methodology prevents this

structural issue from being thoroughly understood. What the coefficient measures is the increase in sales price. This could be because many more bidders come to the auction or it could be because there were only a few bidders in the first place. In the latter case even the addition of one more bidder would raise price significantly. Melnik and Alm argue there are fewer bidders in three day auctions Is this true? With our methodology we estimate the number of bidders for all lengths of auction, thus we know both how competitive the market is and how the number of bidders varies with auction length. Bajari and Hortaçsu (2003) do not provide these estimates. Roth and Ockenfels (2002) point out an interesting facet of the auctions we will be analyzing. In eBay auctions a large percentage of the bids are submitted in the last minute–over 11 percent in our data set–and most auctions have some bidders bidding more than once–around 77 percent in our data. This last minute bidding happens despite the well documented evidence that these bids sometimes do not get registered. They show that since these bids are not always registered there is an equilibrium where bidders bid seriously only in the last minute. In essence the risk of not getting a bid in allows bidders to “collude” by implicitly agreeing not to bid until this time, resulting in less competition and higher returns for the bidders. Currently this is the only theory which explains both the last minute and multiple bidding behavior on eBay. The theory predicts that in Amazon auctions there should be less last minute bidding, since the auction does not close until ten minutes after the last bid is received and they verify that this does, in fact, occur. We develop a within sample test of their equilibrium and can not accept that bidders are using this strategy. Another interesting class of equilibria are strategic jump bids (Avery, 1998). These strategies call for bidders to place high bids early in the auction to intimidate their competitors–much like bluffing in poker. As discussed in Avery (1998) this type of bidding is well documented in auctions, but previously it had not been shown to be equilibrium behavior. Despite the importance of this class of equilibria has not to our knowledge been tested.

Due to institutional rules on eBay the

strategies Avery specifies are not equilibria. We modify his equilibria for the eBay environment and call the modified strategy jump-call bidding. However, even though we prove that there are equilibria of this type in eBay auctions, we do not verify that bidders are using these strategies. Our estimation techniques are based on methods developed by Laffont et al. (1995).

Their

method utilizes the simulated method of moments estimator developed by McFadden (1989) and Pakes and Pollard (1989). It is framed in terms of simulated nonlinear least squares wherein the bidders’ private values are simulated on the basis of an assumed distribution. Distance between these simulated bids and the true bids is minimized. Their techniques were developed for first price auctions but can be modified for our use in English auctions. Two alternative structural methodologies are maximum likelihood and Bayesian estimation with linearly scalable bidding functions.

3

Maximum likelihood estimation (see, for example, Donald and Paarsch, 1993) is problematic for two reasons. First it requires solving a high dimensional integral. Second it must address violations of regularity assumptions used to justify standard maximum likelihood asymptotics since boundary conditions on the random variables are functions of the parameters. Bajari and Hortaçsu (2003) used a Bayesian methodology, but require the bidding functions to be linearly scalable, a restriction unnecessary with our approach and violated by our structural form. In the next section we briefly introduce the eBay marketplace. In section 3 we describe our data collection methodology. The structural model and estimates are presented in Section 4. In Section 5 we test against the alternative equilibria. Section 6 concludes.

2

The eBay Auction Market

In September 1995 eBay opened the first Internet based consumer to consumer auction.

The

corporate model was to provide a central market for the sale of goods. Independent sellers use eBay to sell their goods through auctions lasting from three to ten days so that bidders can bid at a convenient time. eBay’s revenues are primarily from the posting and sales of goods. They extract two primary fees, a listing fee and a sales fee. These fees are increasing in the reservation price the auctioneer sets and the final sales price, with a maximum of 5 percent of the final sales price and listing fees under two dollars. The mean listing fee was one dollar for the monitors in our data set, and final sale fee was $2.50, with a median final sales fee of $1.50. For all of eBay at the time our data was collected the average fee per item auctioned–not all of which sell–was $1.41, and 7 cents in fees were generated for every dollar of sales. eBay is operating two basic types of auctions. If an auctioneer wishes to sell two or more items in the same auction she has to use a “Dutch” auction. Bidders enter the number of items they want to buy and the price they were willing to pay for them, and then the good is sold to all bidders at the price offered in the highest losing bid. This type of auction has been studied by EngelbrechtWiggans and Kahn (1998) and Ausubel and Cramton (1996). The equilibrium is different than in the English auctions we study, thus we drop these observations. If an auctioneer has one item to sell she uses an “English” auction. An auctioneer interested in selling a monitor in an English auction could first look at the monitors currently available and at all auctions that closed in the last thirty days to get a sense of the market. Then the auctioneer had to make three primary decisions at the time our data was collected. First the standard reservation price was set, a publicly visible amount below which the good will not be sold. The auctioneer had the option of setting a secret reservation price. This is a reservation price that is not revealed to the bidders–though they know if there is one and if it has been met. The final decision was how long the auction should run; this could be either three, five, seven, or ten days.

4

If, for example,

seven days is selected then the auction will end precisely seven days to the second after it opens on eBay. Note that unlike traditional English auctions the bidding does not continue if someone wants to raise his bid, and that the auctioneer can end the auction early though this is rarely done. When a bidder enters eBay he firsts come to a summary page listing all the computer monitors available–around 580 on an average day in our data set–sorted by the length of time until that auction closes. He can then click on an item to see a full description of the monitor, and with a few more clicks see the bidding history; information about the auctioneer’s feedback; etcetera. The bidding history shows how many people have bid and their ranking–though not their actual bid amounts. A bid is registered by entering a maximum price into a proxy bidding program.

The

computer program then bids for the bidder as if he was in an English auction–raising this bidder’s bid until either the program hits his maximum or no one else raises their bid.

If the price rises

above this bidder’s maximum the bidder is notified by e-mail and can raise his maximum price if he so chooses. A significant problem that eBay faces is moral hazard. If there is no way to differentiate duplicitous from scrupulous auctioneers and bidders, then the usual lemons problem will select the scrupulous ones out of the market.

To correct for this, eBay has instituted a “feedback rating”

mechanism. After a transaction, bidders and auctioneers can rate each other. While such a system may have its drawbacks, it does not seem to be significantly manipulated and eBay members do actually pay attention to this.

We have observed bidders who have retracted their bids with

comments like “not a single positive comment on his feedback page,” “too many negatives on his feedback page.” eBay makes it quite clear that an auctioneer can request to have a bidder not bid on their item–with sanction by eBay if the bidder does anyway. Furthermore, many studies have found that this feedback rating does affect price. It would seem that this system has the intended effect.

3

The Data Set and Our Collection Techniques.

eBay saves all information about closed auctions on their website for a month after the auction closes.

This allows people who participated in the auction to verify the outcome, and provides

the source for our data set. Our data was collected using a “spider” program which periodically searches eBay for recently closed computer monitor auctions and downloads the pages giving the item description and the bid history. Software development was done in Python–a multi-platform, multi-OS, object-oriented programming language.

It is divided into three parts. It first goes to

eBay’s site and collects the item description page and the bidding history page. It next parses the web pages, and makes a database entry for each closed auction. The final part iterates through the database entries stored, and creates a tab-delimited ASCII file. This method has allowed us to

5

collect information on approximately 9000 English auctions of PC computer monitors. The original data processing program did not process all of the data. It provided us with the core of the data which was augmented with further processing of the raw html files. Using string searches we have managed to collect extensive descriptive information for the entire data set. With further data processing we have managed to collect all of the bidding histories. This process provided us with information on the 6543 auctions that are used in the estimates. Our data set consists of PC color computer monitors with a size between 14 and 21 inches which were auctioned between February 23, 2000 and June 11, 2000. All monitors are in working order, and we ignored touch screen monitors, LCD monitors, Apple monitors, and other types of monitors that are bought for different purposes than the monitors in our sample. Also, if there were any bid retractions or cancellations (this happened in 7.4 percent of the auctions) we dropped the observation because the retractions might indicate collusion.1 Descriptive variables except for monitor size were constructed using string searches. In Appendix B the strings that were used for each variable are detailed. This allowed us to collect data on whether there was a secret reservation price, whether it was met, monitor resolutions, dot pitch, whether a warranty was offered, several different brand names, whether the monitor was new, Like-New, or refurbished, and whether it was a flat screened monitor. “Brand name” is used for monitors that are from one of the ten largest firms represented in our data set. These firms are Sony, Compaq, NEC, IBM, Hewlett Packard, Dell, Gateway, Viewsonic, Sun, and Hitachi in order of size.

Sony

has around a 10 percent market share, the smallest are all around 3 percent, in total these 10 firms represent 57 percent of the market. Dot pitch (DPI) and resolution are not reported in all of the auctions. DPI is reported in 35 percent of the auctions, resolution in 58 percent. In Table 5 the descriptive statistics of variables of interest are presented. Note that some of our auctions last less than three days. The auctioneer has a (rarely exercised) right to end the auction early. It is not uncommon within this group for someone to put up an item and then recall it within ten minutes or so. In Table 5 we also report the correlation matrix between the variables we uses in our structural model, and five subsidiary variables–Number of Bids, Number of Bidders, Secret Reserve Price (Price Met), Secret Reserve Price (Not Met), and Auction Length. Secret reserve price is positively 1 It is possible for the seller to have a colleague bid to raise the price the winner pays (“shill bidding”). Sometimes this will result in the shill bidder bidding too high, and then he or she would have to cancel their bid. Likewise, one bidder can bid low and then have his colleague bid preposterously high (“bid shielding”). In this case the high bidder always has to withdraw at the last moment, letting the original bidder get the item for a song. Because of these possibilities we dropped these auctions. However we do not believe there is a serious amount of fraud on eBay. Bid retractions almost always do not seem to have affected the final price paid. In almost every case there were multiple bidders. Explanations for bid retractions frequently seem reasonable. Examples range from not trusting the seller to getting information via e-mail from the seller which made them not interested. Also the various “I just won another monitor,” “my job won’t let me buy a used monitor,” “my kid/roommate was bidding using my account,” “I got a speeding ticket.” Then there is the most widely cited, “whoops, I didn’t mean to.”

6

related to a high winning bid. This is consistent with an observation of Bajari and Hortaçsu (2003) that secret reservation prices are used more frequently for valuable goods.

Sellers with higher

feedback rating (more experience) are more likely to report both resolution and DPI, less likely to use a secret reservation price, and prefer shorter auctions. One interesting feature of the data is the near independence of the length of auction and most other variables. For example the most important variable–winning bid–has a very weak correlation with the length, and so does the number of bidders. This seems to suggest that a knowledgeable auctioneer would always prefer short auctions, which is verified by the correlation between seller’s feedback rating and length of auction. Our estimation methodology allows us to find out the number of bidders in the auction (which includes the people who thought about bidding but did not) and we will use this to see whether the true number of bidders also follows this trend.

4

The Model and Estimates

In this section we present our model and estimates.

We simplify the market environment with

several standard assumptions. The nature of the eBay marketplace makes some of these assumptions stronger than in much empirical analysis, but weakens others. For example since the marketplace is so competitive–we have over 17,000 individual bidders–we do not worry about possible market power effects or widespread collusion. However since the market is thick–with items being sold at all times of the day and night–we must be more concerned about the effects of entry and exit. Consistent with most of the literature this paper will assume that entry and exit are exogenous, in other words we will study each auction in isolation. Entry is not problematic for our estimation, but exit might be. However since eBay is a saturated marketplace, with at least 484 monitors for sale on every day for which we have a complete data set, we feel comfortable assuming that the market is in steady state. In this case the approximate effect of exit will be to change the constant term in our regression. We also assume that computer monitors have a private, or use driven, value. This is a standard simplifying assumption and in this market is easily defensible.

The usual reason to assume a

common value is that bidders intend to resell the good in the future, and this is falsified both by the data and casual intuition. Due to the rapid rate of technological advance in computer monitors the value of monitors falls precipitously, decreasing the benefit of resell. We also do not see many buyers who buy in large volume, only 1% of our winners buy more than 3 monitors. Thus it seems that the average buyer is using the monitor to fulfill a personal need. Post estimate tests (described below) also led us to drop some of the auctions.

To assure

independent observations, we dropped auctions where the price setting bidder has set price in more than one auction. To be certain bidders have unitary demand, we dropped auctions where the price

7

setting bidder had bought more than one monitor. In the model we have I bidders (which may be a function of the length of the auction) each of whom draws an independent private value v i i ∈ {1, 2, 3, ...I} from the same distribution. These

bidders bid for 3, 5, 7 or 10 days using increasing bidding strategies. Let bi be their final bid. By the rules of proxy bidding, the sales price (bw ) will be equal to the second highest bid, b(2:I) , plus the bidding increment ∆. In our data set ∆ is small, almost always between 1% and 2.5%, and thus we assume ∆ = 0. Following Haile and Tamer (2000) we assume that the bidding strategy follows two intuitive rules: 1. No bidder ever bids more than he is willing to pay. 2. No bidder allows opponents to win at a price he is willing to pay. We call a bidding strategy that follows these two rules a competitive bidding strategy.

These

rules are weakly dominant but there are equilibria that violate these two assumptions, like jump bidding (Avery 1998) and snipe-or-war bidding (Roth and Ockenfels 2002). After developing our estimates under the null of competitive bidding we will test against these alternatives. Haile and Tamer show that these two assumptions imply that if v (2:I) is the second highest value among the I bidders then bw = b(2:I) = v (2:I) . This is an intentionally incomplete model of bidding. A complete model would also inform us about how every bid is calculated, b(k:I) for k ∈ {3, 4, 5, ...I}. Unfortunately the only known model

for these bids is falsified by the data–the equilibrium is to have everyone bid their true value or

b(k:I) = v (k:I) . In this equilibrium everyone must bid only once, and the median number of bids per bidder in our data is 1.6 with a maximum of eleven. To understand why bidders must enter their true value as their first bid consider a bidder who bids $50 for a monitor instead of his true value of $100 when the auction opens.

If two other bidders immediately bid more than $100 then the

first bidder will not update the $50 bid and his final bid will not be his value. On the other hand if only one person bids more than $100 then the first bidder will update the $50 bid and bid twice. Thus observing bidders who bid more than once indicates bidders’ final bids might not be their true value. vni

For auction n there will be a known set of exogenous variables, xn and we will assume that ¡ ¢ 0 = exn β ρin , where ln ρin is distributed N (0, σ). Thus in auction n if there is no secret reservation

price the equation we estimate will be:

n o 0 (2:I) ln bw = max ln r , x β + ln ρ n n n n

(1)

³ ´ ³ ´ (2:I) (2:I+1) If there is a secret reservation price then ln ρn will be replaced with ln ρn , or we will use the assumption of Bajari and Hortaçsu (2003) that the secret reservation price is equivalent to having an extra bidder in the auction. 8

´ ³ ¡ ¢ (2:I) Note that while ln ρin is distributed normally the error term in the above equation, ln ρn ,

is not since it is the second order statistic from a sample of I bids. One method to estimate (1)

is to use a Tobit model controlling for heteroscedasticity. However while this method is consistent it is a reduced form approach, leaving the analyst with no information about the true distribution of private values or how competitive the auction is.

An auction is competitive if the number of

potential bidders, I, is large, and this variable can not be estimated using reduced form techniques. (2:I)

A second methodology is to use maximum likelihood. However, ρn

is the second highest of I

draws from the log-normal distribution and thus characterization of its distribution requires the calculation of a rather complex integral.

In general even if one calculates this integral standard

maximum likelihood methods can not be used since the boundary conditions on the bids are a function of the coefficients, thus violating standard regularity conditions. An elegant solution to this quandary proposed by Laffont, Ossard, and Vuong (1995) is to simulate the auction. Although Laffont et al. focused on first price auctions but the methodology is the same for the auctions analyzed here. Imagine running S auctions with I bidders in each auction. In each simulation the second highest value is selected (Xsn (β, I)) and these values are averaged to £ ¤ ¯ n (β, I) = 1 ΣSs=1 Xsn (β, I). If S is large then the distance between X ¯ n (β, I) and E v (2:I) form X S ¯ n (β, I) and will be small, and assuming one has the correct value for I then the distance between X E [bw n ] will be small. However, an unbiased methodology must take into account that in practice S is not large, and thus the objective function should compensate for the variance of the simulated ¢ ¡ 1 ¯ n (β, I) 2 . Estimation of β and estimator. This variance is VSn (β, I) = S(S−1) ΣSs=1 Xsn (β, I) − X

I are then given by:

arg min QS,N (β, I) = β,I

where N is the number of auctions.2

i ¢ 1 N h¡ w ¯ n (β, I) 2 − VSn (β, I) Σn=1 bn − X N

(2)

Note that the distribution of v (2:I) will be a non-degenerate function of {σ, β} and I. This allows

identification of I.

On an intuitive level this is because I determines the amount of “skewness”

in the observed prices. More precisely the log of the sales price is the second order statistic of I draws from a normal distribution. As I increases this distribution is skewed more strongly to the right, allowing identification. Notice that I in our model is not the observed number of bidders in an eBay auction.

This value is unimportant in the description of how competitive an auction

is, what we estimate is the number of people who consider bidding.

For example, assume that

1

I = 3, and consider an auction where these bidders have values v = 10, v 2 = 20, and v 3 = 30. If bidders 2 and 3 bid before bidder 1 the observed number of bidders will be two since bidder 1 will 2 Note that our asymptotics are not based on Laffont et. al. since they use importance sampling to construct Xsn (β, I), which we found did not perform well with our data. We can, however, satisfy the assumptions required for Pakes and Pollard (1989), thus we use this as the basis for our asymptotics.

9

not bid. However the number of people who actually bid is unimportant, the auction will always have the same outcome, and will always be equally competitive. The true measure of competition in an auction is the number of potential bidders; we use a grid search to estimate this number and establish the competitiveness of eBay auctions.

4.1

The Results

Results are presented in Tables 1 and 2.

Since we have missing observations on dot pitch and

resolution for our monitors in Table 1 we focus on different treatments for these variables.

In

Table 2 we allow the number of bidders to vary with the length of auction and estimate our bidding function on subsets of the data. Note that in all treatments except (6) most of the coefficients are stable. The exception is the standard deviation, or its inverse which is denoted 1/Std. Dev. in the tables. Like Laffont et. al. (1995) in previous versions of this paper we were unable to estimate the standard deviation. We found this was due to numerical problems in GAUSS and were able to get conversion for the inverse, but the coefficient is still not stable. The coefficients on variables dealing with missing observations of dot pitch (DPI) and resolution change in treatments 1-4, but this is to be expected since this is the focus of analysis. As expected, size (the diagonal length of the screen) has a positive coefficient, and “new” monitors are more valuable than “like-new” monitors, which are more valuable than used monitors–the zero in this set of dummies. Somewhat surprisingly Refurbished monitors are less valuable than used monitors, but since a Refurbished monitor (in general) was once not functional it seems reasonable that consumers could be hesitant to buy them. Having a warranty or a flat screen both increase the value of the monitor, but contrary to expectations brand name monitors do not have a higher price, and in fact the point estimate is negative (but insignificant). This is because“brand name” really means “common brand.”

Many of these are monitors that were originally sold with a computer

and are of lower quality. Results for the seller’s feedback rating and its square, to allow for diminishing marginal benefit, are quite interesting. The feedback rating is a proxy for both experience and trustworthiness. For example in order for it to be ten the auctioneer must have been in at least ten auctions and always received positive feedback. The positive coefficient on the seller’s feedback reflects these two good effects. The negative coefficient on the squared term indicates decreasing marginal benefit of such information. Other papers (Bajari and Hortaçsu, 2003; Houser and Wooders, 2001; Reiley et al., 2000) have shown that bidders react to the amount of negative feedback an auctioneer has had independently of the total feedback rating. However, we were unable to collect this information for the auctions in our sample. The primary focus of treatments one to four is the missing observations of dot pitch (DPI, the number of dots per inch on the screen) and Resolution (the number of pixels shown on the

10

Treatment (1)

Treatment (2)

Treatment (3)

Treatment (4)

Constant

-12.1101∗

-12.2219∗

-12.0316∗

-11.7382∗

(0.1945)

(0.0135)

(0.0177)

(0.0429)

ln(DPI)

-1.2186∗

-1.2648∗

-1.2029∗+

-1.2648∗

(0.0294)

(0.0206)

(0.0134)

(0.0519)

ln(E[DPI])

-0.9798∗

–—

-1.2029∗+

-0.3261∗

(0.0134)

(0.0462)

1.3693∗

-0.3185∗

0.9367∗

(0.0637)

D No DPI ln(Resolution) ln(E[Resolution])

–—

(0.2236)

(0.0126)

(0.0826)

0.1184∗

0.1164∗

0.1159∗+

0.119∗

(0.0027)

(0.0033)

(0.0016)

(0.0089)

0.1148∗

–—

0.1159∗+

0.7086∗

(0.0016)

(0.0165)

0.7966∗

-0.0257∗

-4.2183∗ (0.1094)

(0.0057)

D No Resolution

–—

(0.3844)

(0.0125)

4.6578∗

4.6874∗

4.6585∗

4.5271∗

(0.0811)

(0.0043)

(0.0052)

(0.0129)

0.3745∗

0.377∗

0.3743∗

0.3746∗

(0.0178)

(0.0162)

(0.0151)

(0.0176)

0.3353∗

0.3354∗

0.3351∗

0.3311∗

(0.0400)

(0.0382)

(0.0268)

(0.0398)

-0.0488

-0.0478

-0.0485

-0.051

(0.0273)

(0.0263)

(0.0256)

(0.0267)

Warranty

0.1235∗

0.1243∗

0.1236∗

0.1188∗

(0.024)

(0.0243)

(0.0223)

(0.0251)

Brand Name

-0.0088

-0.0078

-0.0092

-0.0092

(0.0142)

(0.0132)

(0.0098)

(0.0218)

0.2279∗

0.2274∗

0.2279∗

0.2286∗

ln(Size) New Like-new Refurbished

Flat

(0.0143)

(0.0159)

(0.011)

(0.0148)

ln(Seller’s FB)

0.3901∗

0.3809∗

0.3893∗

0.367∗

(0.0432)

(0.003)

(0.0042)

(0.0111)

ln(Seller’s FB Squared+1)

-0.1969∗

-0.1927∗

-0.1965∗

-0.1856∗

(0.0234)

(0.0015)

(0.002)

(0.0041)

1.6667∗

1.702∗

1.7256∗

1.7344∗

1/Std. Dev.

(0.0790) (0.0943) (0.0324) (0.0580) Num. of Bidders 57 57 57 57 Num. of Auctions 6543 6543 6543 6543 Obj. Function 4004.98 4004.5 4007.96 3995.97 ∗ Indicates the coefficient is significantly different from zero at the 5% level. (two tail test.) + The coefficients on DPI and E(DPI) (and Res./E(Res.) ) are constrained to be the same.

Table 1: Various Treatments of Missing DPI and Resolution Observations.

11

screen).

Since these could be missing for strategic reasons various methods are used to analyze

this problem.

Treatment (2) uses a dummy variable to indicate that either DPI or Resolution

was not reported. Treatment (1) uses the more sophisticated approach of developing expectations of the missing observations based on other right hand side variables. Treatment (3) uses both a dummy and expectations, but constrain the coefficient on the variable and its expectation to be equal. Treatment (4) uses both and does not impose any constraints. The evidence indicates sellers are not reporting dot pitch or resolution strategically. Dot pitch (DPI) is frequently not reported but it would always increase revenue to report it. Resolution is commonly reported when it slightly decreases revenues. This is most clearly illustrated in treatment (3). Reporting dot pitch will increase the sales price by 37.5% while resolution will increase it by 2. 6%.

Treatment (1) reinforces the importance of reporting dot pitch because we can analyze

the worst case.

This is when the expected dot pitch is worst and the real dot pitch is the best.

In this case the auctioneer would still get a 3% increase in revenue by reporting dot pitch.

In

contrast in a similar situation the auctioneer would lose 5% by reporting the resolution. Based on treatment (2) we estimate that resolution should be reported if it is 1024 × 768 or higher, thus 29%

of the auctioneers who reported their resolution actually decreased their revenue by reporting it. Since auctioneers sometimes reduce their revenue by reporting resolution and could always increase revenue by reporting dot pitch it they can not be behaving strategically. Auctioneers report dot pitch and resolution if they are known. In choosing our core treatment for the rest of our analysis we first note that standard tests almost always reject with such a large data set. The standard F-test can be written as the percentage change in the objective function divided by the percentage change in the degrees of freedom.3 Thus small changes in the objective function will lead to rejections since with a data set of more than six thousand percentage changes in the degrees of freedom are essentially zero.

For example, in

comparing treatment (4) to treatments 1-3 the objective function never increases by more than .3%, but in order to fail to reject the objective function can only increase by .1%.

Treatment one,

however, does have some appealing theoretical and practical advantages. It is more parsimonious than treatment (4).

It does not require the imputation of missing values as does treatment (2).

Treatment (3) further requires that our expectations are optimal. Finally there is little economically significant differences between treatments (1) and (2)-(4). 3 The

F-test is usually written as:

(Q∗c − Q∗u ) /J Q∗u / (N − K) ∗ where Qc is the value of the constrained objective function, Q∗u is the value of the unconstrained, J is the number of constraints and N − K is the unconstrained degrees of freedom. One can write this as: ∗ Q∗ c −Qu Q∗ u (N −(K−J))−(N −K) (N −K)

12

=

%∆Q∗ %∆DoF

In Table 2 we investigate our assumption that all auctions have the same number of bidders and check our analysis on various subsets of the data. We use the results from treatment (1) for comparability. Treatment (5) provides estimates from the structural model modified to allow the number of bidders to be a function of the length of the auction. We find that the optimum is 45 bidders in 3 day auctions, 57 in 5 day auctions, 61 in 7 day auctions, and 61 in 10 day auctions. This indicates longer auctions do have a greater number of bidders, as expected, but the additional difference is not significant. In this regression we constrained the number of bidders to be increasing with the length of auction, without this constraint we found that 10 day auctions had 54 bidders, but the monotonicity restriction is not rejected at the 1% significance level.

Restricting the number of

bidders to be the same in all auctions is rejected, but the improvement is trivial and we prefer the more parsimonious model. From our structural estimates we can see that eBay auctions are very competitive. Like Melnik and Alm (2001), we find that the benefit of longer auctions is small and due to our structural estimates we know this is because short auctions have so many bidders.

The change in the final

sales price achieved by extending the auction from three to ten days is only 10.9%, and 8.4% percent of this gain can be captured by extending the auction from three to five days. An auctioneer choosing the length of his auction should respond to these incentives by only selling valuable items in long auctions. The data suggests that experienced auctioneers do respond to these incentives. Table 3 summarizes these results. Since an auctioneer must have been involved in more auctions than her feedback rating, the feedback rating (FB) is a good proxy for experience.

In the fourth line of table 3 you can see

that auctioneers with a high feedback rating sell more valuable items in longer auctions.

Thus

experienced auctioneers respond rationally to the incentives. Less experienced auctioneers do not, thus it seems to take time for auctioneers to learn market conditions. In Treatments (6) and (7) we test our assumption that a secret reservation price is equivalent to having the auctioneer as an additional bidder (Bajari and Hortaçsu, 2003). We do this by breaking the data into subsets with a secret reservation price [treatment (6)] and without [treatment (7)]. In treatment (6) notice that the coefficients on feedback and its square have a significantly larger magnitude than in all other regressions–at least 4 times–and that many of the other coefficients are changed. It seems clear that trust is a larger issue if the auctioneer is using a secret reserve, and the change in the coefficients reflects this. However we still estimate that there are 57 bidders in each auction. In treatment (7) we look at auctions with no secret reservation price, and find that the point estimate on the number of bidders is 56, but the null hypothesis of 57 bidders is not rejected at the 5% level. Therefore we fail to reject Bajari and Hortaçsu’s model that a secret reservation price is equivalent to an extra bidder in the auction. Given the number of reasonable restrictions

13

Treatment (1)

Treatment (5)

Treatment (6)

Treatment (7)

Treatment (8)

Constant

-12.1101∗

-11.9017∗

-12.7946∗

-11.9020∗

-11.8335∗

(0.1945)

(0.0064)

(0.0015)

(0.0163)

(0.003)

ln(DPI)

-1.2186∗

-1.2668∗

-0.9867∗

-1.2440∗

-1.2954∗

(0.0294)

(0.0096)

(0.0025)

(0.0117)

(0.0022)

ln(E[DPI])

-0.9798∗

-1.0365∗

-0.7783∗

-1.0010∗

NA

(0.0637)

(0.0305)

(0.0022)

(0.0149)

0.1184∗

0.0910∗

0.1291∗

0.1204∗

0.1197∗

(0.0027)

(0.0023)

(0.0017)

(0.0041)

(0.0018)

0.1148∗

0.0862∗

0.1158∗

0.1170∗

NA

(0.0057)

(0.0021)

(0.002)

(0.0043)

4.6578∗

4.6845∗

5.1116∗

4.5680∗

4.6535∗

(0.0811)

(0.0075)

(0.0003)

(0.0054)

(0.0051)

0.3745∗

0.3549∗

0.5484∗

0.3392∗

0.3513∗

ln(Resolution) ln(E[Resolution]) ln(Size) New

(0.0178)

(0.0059)

(0.0014)

(0.0173)

(0.002)

Like-new

0.3353∗

0.3327∗

0.5423∗

0.3136∗

0.2247∗

(0.04)

(0.0064)

(0.0016)

(0.0277)

(0.002)

Refurbished

-0.0488∗

-0.0638∗

-0.0607∗

-0.0489∗

-0.0346∗

(0.0273)

(0.0067)

(0.0016)

(0.0191)

(0.002)

Warranty

0.1235∗

0.1131∗

0.0008

0.1444∗

0.1357∗

(0.024)

(0.0064)

(0.0016)

(0.0218)

(0.0021)

Brand Name

-0.0088

-0.0042

-0.0265∗

-0.0103

-0.0013

(0.0142)

(0.0068)

(0.0016)

(0.0103)

(0.002)

0.2279∗

0.2536∗

0.1875∗

0.2292∗

0.2000∗

Flat

(0.0143)

(0.0057)

(0.0016)

(0.0118)

(0.0019)

ln(Seller’s FB)

0.3901∗

0.3200∗

1.1800∗

0.2253∗

0.2835∗

(0.0432)

(0.0044)

(0.0014)

(0.0035)

(0.002)

ln(Seller’s FB Sqrd+1)

-0.1969∗

-0.1629∗

-0.5737∗

-0.1162∗

-0.1507∗

(0.0234)

(0.0023)

(0.0013)

(0.0017)

(0.0012)

1.6667∗

1.8923∗

3.5913∗

1.6101∗

2.2524∗

1/Std. Dev.

(0.079) (0.0204) (0.0014) (0.0245) (0.002) Num. of Bidders 57 — 57 56 57 Num. Bdrs - 3 Day — 45 — — — Num. Bdrs - 5 Day — 57 — — — Num. Bdrs - 7 Day — 61 — — — Num. Bdrs - 10 Day — 61 — — — Num. of Auctions 6543 6191 624 5919 1805 Obj. Function 4004.98 4003.12 3576.17 4013.53 6047.1 ∗ Indicates the coefficient is significantly different from zero at the 5% level. (two tail test.)

Table 2: Treatements on different data sets and allowing I to vary with D.

14

The Value of Monitors, Experience, and the Length of Auction Length of Auction 3 days 5 days 7 days 10 days All Number Auctioned 2460 1210 1833 688 6191 Average Monitor Value 34.78 40.23 36.60 39.75 37.11 Ave. Val., Above Median FB 30.39 36.51 40.36 46.51 35.05 Ave. Val., Below Median FB 43.83 43.54 34.63 35.85 39.09 Table 3: The corrleation between the length of auction and the monitor’s value. The median seller’s feedback (FB) is 62. rejected this is strong evidence in favor of this model. Treatment (8) allows one final check on our basic model by estimating the bidding function only using auctions that have both dot pitch and resolution reported. Except for the inverted standard deviation the coefficients are essentially identical to those in treatment (1).

5

Testing against Alternative Equilibria.

It is common for well specified models–such as our model of eBay bidding–to have multiple equilibria. Different equilibria often have different empirical and policy implications. Thus an important post-estimation test is to check which strategy bidders use. In our case we test two alternative strategies. These are the “snipe or war” strategy from Roth and Ockenfels (2002) and the “jump-call” strategy which is a variation on jump bidding (Avery 1998). Both of these strategies support large sets of equilibria. The competitive bidding strategy (our null for estimation) is an equilibrium in both of these classes, thus the null is nested in the alternative and we can proceed with formal testing. The difficulty, however, is to find a test which is viable against all other equilibria in each class. We have devised two tests based on a well know characteristic of eBay. Due to congestion on the Internet and at eBay, entering bids takes an uncertain amount of time. Thus if you bid with five minutes left in an auction your bid might not get in until there are four and a half minutes left, or even three minutes, or perhaps four minutes and fifty five seconds. Roth and Ockenfels (2002) model this as a “sniping window.” A period of s∗ seconds where if you bid with less than s∗ seconds left in the auction your bid only gets in with probability p < 1; furthermore you can only bid once in this window. The existence of this sniping window has been verified by Roth and Ockenfels in a survey and backed up by solid circumstantial evidence. eBay chat groups often share strategies on how to snipe successfully. In addition eBay has a web page devoted to convincing those upset that their bids failed to register to discontinue their sniping. Our experience is that many bidders feel a bid must be entered with thirty to forty seconds left in the auction. Different aspects of each class of equilibria allows us to develop a test based on the sniping

15

window.

A jump-call strategy is essentially like bluffing in poker, and the last bid of the winner

is equivalent to putting his money on the table.

If other bidders think he might not put down

his money they will ignore the bluff and bid like usual; thus the winner’s last bid must be sure to be entered.

In a snipe-or-war strategy bidders are supposed to only enter trivial bids before the

sniping window, if anyone bids too seriously this triggers a bidding war.

Thus the behavior of

bidders depends on whether the auction ends before or during the sniping window. For our tests we first construct the private values using our estimates. We assume that all auctions have the same number of bidders. Thus: ρ ˆn(2:I) =

bn 0 x e n βe

(3)

always has the same distribution for any subset of bids under the null. Under the alternatives this distribution will depend on either when the second highest or the highest bid was placed. Thus we will break the data set into different subsets, the “late” data set will be auctions where the critical bid was placed in the sniping window.

Since we are unsure how long the sniping window is we

test auctions where the critical bid was entered with 15, 30, and 60 seconds left in the auction for the late window. We then compare this distribution with various “early” subsets, where either the second highest or highest bid was entered with between t and t¯ minutes left in the auction. For our tests we use a different data set than the one we used in our estimates. For example we drop all auctions where there was a secret reservation price since the observed bid price might be either the third or first moment among the bidders. We also drop auctions that ended unexpectedly since in all cases our tests assume bidders knew when the end of the auction was. In our estimates we also dropped auctions where the same bidder was the price setter in more than one auction. Here we do not, since these people probably have the same value in multiple auctions if we observe any difference between auctions it must be for strategic reasons, providing the purest tests of the alternatives.

5.1

Testing the Snipe-or-War Strategy.

The first alternative equilibria we will test for was designed to explain an interesting phenomena in Internet auctions. It is very common to observe bidders waiting until the last minute to bid, a strategy referred to as “sniping.” This behavior is surprising since sniping bids sometimes do not get entered, thus someone can lose an auction they should win. Roth and Ockenfels (2002) show that, surprisingly, this is equilibrium behavior. In a snipe-or-war equilibrium people only bid within the sniping window from a fear of others’ reactions if they don’t. In essence players cooperatively agree to wait until there is a chance that their bids will not get in before bidding in earnest.

If

they bid high early then their behavior triggers others to bid high early. A formal description of a strategy of this type is: 16

A Snipe or War Strategy. At the beginning of the auction bidders with values greater than the reservation price bid the reservation price. In the sniping window they bid their true value. If they observe anyone bidding above the reservation price or bidding at any time except the beginning of the auction they bid their true value immediately. Roth and Ockenfels (2002) prove this strategy is an equilibrium if there are two bidders whose reservation prices are independent and can take on only two values. Our data contains more than two bidders and we wish to allow for more general distribution functions. We thus generalize their results in the following proposition. Recall that p is the probability a bid is recorded in the sniping window, I is the number of bidders, let r be the reservation price and F (·) be the distribution of the bidder’s private values. Proposition 1 If

p 1+p

≥ F (r) then there is an equilibrium where all bidders with values ρ ≤ ρ∗

follow a snipe-or-war strategy and bidders with higher values bid their true value at time zero. Where ρ∗ is the maximal ρ ˆ such that for all J ∈ [2, I]:

R ρˆ J−1 F (z) dz p − pJ r ≥ R ρ ˆ J−2 1 − pJ F (z) dz r

(4)

The most surprising implication of this proposition is that it shows snipe-or-war is an equilibrium strategy even with a large number of bidders.

With I = 57, p = .95 and r = 0, 9% of auctions

will end with a sniping bid. Based on the empirical distribution of reservation prices and various guesses at the probability a sniping bid is entered we find that if our casual a priori of a sniping window of 30-40 seconds is correct this implies approximately 95% of snipe bids are entered. Percent of Snipe Bids that are Entered Percent of Auctions that end with Snipe Bids Length of Sniping Window (in minutes)

90% 0.42% 0.02

95% 7.67% 0.43

97.5% 24.34% 8.93

99% 41.04% 64.72

Table 4: Empirical probability of auctions ending with a snipe bid and the length of the sniping window. However, reading the survey information provided by Roth and Ockenfels, it seems unlikely that one out of twenty snipe bids does not get in.

Therefore it seems unlikely that bidders are using

this snipe-or-war strategy, but there are many others.

Can we be sure that they aren’t using a

snipe-or-war strategy? The answer is a qualified yes. We are able to develop a general test that gives us a clear picture of bidder behavior and decisively rejects that bidders are using the snipe-or-war strategy. Since we want to test against all snipe or war strategies our test might be weak against any particular strategy. We use tests for the equality of two distributions, which is known to perform poorly in

17

small data sets. However our data set is large enough to overcome these two problems. Our test results do not support the snipe-or-war strategy. We are interested in the distribution of sales price conditional on whether the price is set in the sniping window or not. If the price is set in the sniping window we call this a “Snipe” event and the other event is “No Snipe.” Let H (ρ|·) be the distribution of the sales prices given the event “Snipe” or “No Snipe,” then the next corollary establishes that these distributions will be different no matter which snipe-or-war equilibrium is being used. Corollary 1 If strategies are Lebesque measurable then there exists ρ such that H (ρ|N o Snipe) 6= H (ρ|Snipe)

(5)

Notice that this is essentially the only test which can be used against every strategy in this class. Alternatives like checking for differences in the means will not be true for some strategies, though they might be true for many others. Furthermore, this test directly addresses the incentive to use a snipe-or-war strategy. The benefit of sniping is essentially based on a weaker distribution of bids if everyone follows that strategy. Thus in general the “No Snipe” prices should always be compressed at the top and bottom of the support of prices, with the “Snipe” prices distributed in between. If these two distributions are nearly equal, then the incentive to snipe must be minimal. We transform the corollary into a test in the following manner. Take a subset of auctions where price is set with s < s∗ seconds left in the auction.

The distribution of this subset should be

different from all auctions where the price was set with more than s∗ seconds left in the auction. Thus we should be able to quickly find s∗ based on systematic rejections of all distributions where the price was set with more than s∗ second left For the late data set we consider auctions where the last bid was placed in the last 15, 30, or 60 seconds. These data sets have 154, 287 and 414 bids respectively.

We use two standard tests for difference of distributions, the Kolmogorov-Smirnov

and the Rank-Sum test. The full results are in Appendix C, Tables 7 and 8, in Figure 1 we graph the 60 second late window versus earlier periods for the entire data set (“All”) and auctions with experienced price setters (“Exp”).

We report the Rank-Sum statistic which has a normal distribution, thus the

appropriate rejection levels are ±1.96.

The darkened icons are rejections at the 5% level. Notice that we clearly reject the 1-1.5 minute

window and all windows from 15 minutes on when we use all of our data set (the “All” test series.) This pattern is also confirmed by the Kolmogorov-Smirnov test. The rejection of the 1-1.5 minute window is found in all of the test series we ran but it is inconsistent with the snipe-or-war strategy. The rejection from 15 minutes on only appears if we look at all of the data and test with a thirty or sixty second late window. However even if we assume 18

3.92

1.96

All

0

Exp.

-1.96

-3.92 45-60

30-45

20-30

15-20

10-15

7-10

5-7

4-5

3-4

2-3

1.5-2

1-1.5

Figure 1: 60 Second Late Window vs. Earlier Windows, All and Experienced subsets of the data. Darkened icons indicate rejections of the null that the distributions are equal at the 5% level. that this pattern is correct it implies a 15 minute sniping window and this is untenable.

Other

auction sites have a policy of not ending the auction until no one has bid in five to ten minutes. The intention of this policy is to eliminate sniping by giving people time to respond to other bids. Reiley (2000) found that the length of this window depended on auction site but was never more than ten minutes.

Thus an upper bound on the period which we consider is ten minutes, and a

rejection from 15 minutes on does not indicate that bidders are using a snipe-or-war strategy. One final argument could be made in favor of snipe-or-war: only experienced bidders use the snipe-or-war strategy. We test this by censoring the data for bids where the price setting bidder has a feedback rating of 4 or more (50 percent of the data). These bidders will be more experienced and therefore we should see the predicted pattern more clearly. We do not as is illustrated above with the 60 second late window (the “Exp.” test series.). There is an occasional rejection of some very early windows but we have no consistent pattern. This could be due to having a smaller data set for our tests, but the rejection levels of the 1-1.5 minute window and the occasional rejection of 30-60 second window do not change meaningfully.

19

5.2

Testing the Jump-Call Strategy.

Jump-Call Bidding is variety of Jump Bidding (Avery, 1998). A “Jump bid” is when one bidder bids a large amount early in the auction to intimidate his opponents.

On eBay it could also be

used to coordinate bidding besides possibly lowering the price. If two bidders with high values are both bidding for the same monitor it would be better for both if one went to a different auction, and jump-call bidding can facilitate this. We formally analyze strategies that assume exogenous entry but variations of these strategies will almost certainly work in an environment with entry. Furthermore these variants would be sensitive to the same test. Jump bidding is like bluffing in poker. In poker players often bluff by placing large bets early in the game in order to intimidate their opponents. Avery (1998) analyzed auctions in which players paid their bid, and he showed that players could place large bids early in order to scare away their opponents. The only difference between bluffing and jump bidding is that with jump bidding one has to pay the bet even if nobody matches it. In poker one only pays if someone else calls the bluff by betting an equal amount. On eBay no matter how high the winner bids he only pays the second highest bid, so to make the winner’s bluff stick the price setter has to call it. The price setter starts a bidding war, one that sometimes ends with him bidding above his value. We note that “bidding wars” are a commonly reported phenomena in Internet auctions. The desire to win overcomes bidders’ common sense and at the end of the auction bidders sometimes wish they could back out of their last bid.

While this might seem irrational, Jump-Call bidding takes this behavior and shows that it can be part of an equilibrium. The price setter does not want to pay more than his value, but he wants to make the winner pay. As we said, there is a large variety of behaviors that are all jump-call equilibria, but they all have two essential stages. The first stage is the signal stage, where players signal that they want to jump bid. The second stage is the bidding war stage, where the highest and some lower bidder drive the price up to the appropriate level. Obviously there are many different ways the signal can be sent. It could be a bid that has a certain odd value, a bid at a certain time, or a combination of both. There also could be multiple signal stages, with bids in different stages driving the price to different levels. The bidding war also could take place in multiple stages, with the price setter and the winner bidding back and forth for many periods. We will describe a very simple member of this class, and then show that the winner's last bid is never during the sniping window in any of these equilibria.

A Jump-Call Strategy. Select a window with between $\bar{j}$ and $j$ minutes left in the auction; if anyone bids in this window then he is signalling that he wants to jump bid. Anyone who bids in this window should bid the current price plus the bidding increment. If only one person bids in this window, then when there are less than $j$ minutes left in the auction he bids his true value, and all other bidders bid $B_j$ immediately afterwards. If no one, or two or more people, bid in this window, everyone immediately bids their true value. Only bidders with values above $\rho_j$, where $\rho_j$ solves $B_j = E\left[\rho_{(2:I)} \mid \rho_{(2:I)} \leq \rho_j\right]$, are to bid in the sniping window.
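To make the mechanics concrete, the following is a small simulation sketch of this strategy. The parameter values ($I$, $r$, $\rho_j$), the uniform value distribution, and the probability that the winner's last bid lands in the sniping window when there is no successful signal are all our own illustrative choices, not estimates from the data; the sketch only illustrates the price pattern the strategy generates.

```python
# Illustrative simulation of the jump-call strategy with I bidders and
# independent private values drawn uniformly on [0, 1].
import numpy as np

rng = np.random.default_rng(1)
I, r, rho_j, n = 5, 0.10, 0.80, 200_000

vals = rng.uniform(0.0, 1.0, size=(n, I))
second_highest = np.sort(vals, axis=1)[:, -2]

# B_j = E[rho_(2:I) | rho_(2:I) <= rho_j], estimated by simulation.
B_j = second_highest[second_highest <= rho_j].mean()

n_signals = (vals > rho_j).sum(axis=1)        # only types above rho_j signal
one_signal = n_signals == 1

# Price: B_j when exactly one bidder signals (the others call with B_j);
# otherwise everyone bids truthfully and the second-highest value sets the price.
price = np.where(one_signal, max(B_j, r), np.maximum(second_highest, r))

# The winner never snipes after a successful signal; otherwise assume his last
# bid lands in the sniping window with some arbitrary probability (0.5 here).
winner_snipes = ~one_signal & (rng.uniform(size=n) < 0.5)

keep = price >= B_j                            # censor low-price auctions, as in the test
print("B_j:", round(B_j, 3))
print("mean price | winner snipes   :", round(price[keep & winner_snipes].mean(), 3))
print("mean price | winner does not :", round(price[keep & ~winner_snipes].mean(), 3))
```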

This strategy is quite similar to the strategy introduced in Avery (1998), and has only been altered to fit the eBay environment. In Avery's equilibrium the jump bid is a signal, just as in our modified environment. In both cases, if other bidders ignore the signal then the strategy is not an equilibrium. The jump-call strategy is more appealing to the winner since he has not risked anything until the others respond to his signal; in the original strategy the signal and the jump bid are the same action, thus the action is riskier. As well, if the utility functions are affiliated then the jump-call strategy reveals worse information to a naive bidder.4 Such a bidder might assume that the statement-of-intent bid represents the bidder's value, and since this is low he will be less optimistic about the value of the good.

The following proposition establishes that this is an equilibrium, and that the winner must bid his true value before the sniping window.

Proposition 2 In an eBay Auction there are jump-call equilibria, and the winner will not bid in the sniping window in any of these equilibria.

It is actually very important that the winner's last bid is not a snipe. If he does not bid before the sniping window then the price setter will win the auction part of the time. Thus the price setter faces two outcomes: most of the time he will just set the price, but some of the time he will win. If he is only the price setter he does not care what the price is, but if he is the winner he has a strict incentive to bid his true value. Thus if the winner snipes, the price setter will not bid $B_j$, and the strategy falls apart. This fact allows us to test for these equilibria: if we censor auctions where the price is extremely low, then auctions have higher prices if the winner snipes. Consider what it means when the winner snipes given the last proposition. It means that either no bidder wanted to jump bid or two or more bidders wanted to.

If it is the former case then the price will be very low; in the latter case the price will be very high. Thus if we drop low price auctions, those with prices below $B_j$, the price in auctions that end in the sniping window must be higher on average.

Corollary 2 Assume that if there is no successful signal then the timing of the last bids is independent of bidders' values. Then if $\underline{B}_j$ is the lowest jump-call bid:

$$E\left[b^{(w)} \mid b^{(w)} \geq \underline{B}_j,\ \text{winner snipe}\right] - E\left[b^{(w)} \mid b^{(w)} \geq \underline{B}_j,\ \text{no winner snipe}\right] > 0$$

4 Affiliation is defined in Milgrom and Weber (1982). Loosely speaking it means that if one person thinks an item is valuable then so do other people.


To carry out this test we have to find the unobserved $\underline{B}_j$, and we reject if we find some candidate value such that the above statistic is significantly positive. Since we drop all auctions with a price below $\underline{B}_j$, if we used the 15 second late window the data set for the test would sometimes be very small, so we use the 30 and 60 second late windows. The only requirement on the early data set is that it contain no bids in the sniping window; thus we use all auctions where the last bid by the winner was placed with more than ten minutes left in the auction, allowing for a large early data set. We do find some evidence that supports jump-call bidding, but in the end we fail to reject the null hypothesis. The test statistic should be positive and significant under the alternative, thus we reject if the test statistic is greater than 1.65. The graph below summarizes the findings; darkened icons indicate rejection at the 5% significance level.
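A minimal sketch of how such a test can be computed is given below. The data layout (a winning-bid array and an indicator for whether the winner's last bid was a snipe), the grid of percentiles, and the use of Welch's t-statistic are our own assumptions for illustration; the paper's tables may use a different pooled statistic, and the data here are invented.

```python
# Jump-call test sketch: for each candidate percentile of B_j, censor auctions
# with prices below it and compare mean winning bids in winner-snipe versus
# no-winner-snipe auctions with a one-sided test (reject if t > 1.645).
import numpy as np
from scipy import stats

def jump_test(winning_bid, winner_sniped, percentiles=range(0, 100, 5)):
    results = []
    for q in percentiles:
        cutoff = np.percentile(winning_bid, q)   # candidate for the unobserved B_j
        keep = winning_bid >= cutoff
        snipe = winning_bid[keep & winner_sniped]
        no_snipe = winning_bid[keep & ~winner_sniped]
        res = stats.ttest_ind(snipe, no_snipe, equal_var=False)  # Welch t-statistic
        results.append((q, res.statistic, res.statistic > 1.645))
    return results

# Hypothetical data, for illustration only.
rng = np.random.default_rng(2)
bids = rng.lognormal(4.7, 0.8, 2000)
sniped = rng.uniform(size=2000) < 0.3
for q, t, reject in jump_test(bids, sniped):
    print(f"{q:>3}%  t = {t:6.3f}  reject = {reject}")
```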

[Figure 2 plots the jump-test statistic against the percentile of the data used for $\underline{B}_j$ for the "All", "FB 2", "FB 5", and "FB 14" test series, with the 5% significance level marked.]

Figure 2: Jump Test, 30 Second Late Window. Darkened icons indicate rejection of the null that the means are the same at the 5% level (one-sided test).

With the "All" test series we clearly reject when $\underline{B}_j$ is less than or equal to the sixtieth percentile of the data. However, this is a sophisticated strategy: signals must be both sent and understood, and one might doubt that inexperienced bidders can carry it out. Dropping inexperienced bidders does not change the test and should make the results stronger. They are not; instead we fail to reject the null. We drop auctions in which either the winner or the price setter is inexperienced, and when we look at the most experienced 10% we fail to reject at all percentiles. When we look at the top 43% of auctions or the top 25% we do reject at some percentiles, but on balance the evidence is that bidders are not using a jump-call strategy. We observe some odd patterns of behavior among inexperienced bidders, but this pattern is not generated by jump-call bidding. The full results appear in Tables 9 and 10 in Appendix C. We also examined other early windows (more than one minute and more than thirty minutes) and other subsets of the data, and the same pattern is always repeated: when we look at all auctions we reject for low percentiles; when we censor inexperienced bidders we fail to reject. We include the subset without repeat bidders for illustration.

6 Discussion and Conclusion

Our research is part of a maturing literature on Internet auctions, which has made strides on both the theoretical and empirical fronts. Ours is the second paper in this growing literature, after Bajari and Hortaçsu (2003), to use a structural methodology. With such large and rich data sets these techniques are both feasible and rewarding, and they allow us to test which strategy bidders are using. We show that the simple competitive bidding strategy explains the data better than the alternatives, in our case the snipe-or-war and jump-call strategies.

In some ways our analysis has brought out more perplexing observations than it has explained. For example, the tendency of bidders to bid only the reservation price: is this an optimal strategy in an auction marketplace like eBay, or simply suboptimal behavior? We also find some evidence that indicates learning by both bidders and auctioneers. Experienced auctioneers respond to incentives when deciding on the optimal length of an auction, but inexperienced auctioneers do not. Also, when we test over all bidders the average price paid is higher in auctions that end in the last seconds, but for experienced bidders it is not. This could be evidence of inexperienced bidders getting excited or panicked, but we do not analyze these issues here. Our goal is to look at a few key issues in eBay auctions; providing a complete understanding of the market is too large a task.

We notice that some of the puzzles we observe can be explained by a relatively small change in the model. For example, last minute bidding could be explained by small common values and the inability of some people to bid at the end of the auction. Since these auctions end at all times, it is undeniably true that bids near the end of the auction are less likely to be responded to. When there is a common value component, this means that late high bids will not drive up the price as much. The key evidence in favor of this relatively simple model is the consistent rejection for bids entered right before the true sniping window, with between one and one and a half minutes left. If we extend this model by assuming that bidding near the end of the auction is more costly, it might be able to explain even more. But we must leave this interesting and complex hypothesis to future analyses.

Internet auctions are a rich environment that can be analyzed in detail, and the benefit to both the theory and empirics of auctions from this resource is only beginning to be tapped. Internet auctions are an exciting market mechanism, and combined with the development of this market is the development of a methodology that can thoroughly study it. The sort of data collection protocols we have utilized allow one to gather potentially enormous data sets, which in turn allows analysts to develop realistic models with substantial data requirements. Recent advances in structural estimation theory provide researchers with techniques that can precisely estimate bidders' behavior and test it against alternatives. At the same time, as first pointed out by Roth and Ockenfels (2002), the natural variation of auction mechanisms on the Internet makes possible empirical comparisons of format effects.


References

[1] Ausubel, Larry and Peter Cramton (1996). "Demand Reduction and Inefficiency in Multi-Unit Auctions." mimeo.
[2] Avery, Christopher (1998). "Strategic Jump Bidding in English Auctions." Review of Economic Studies, 65:185-210.
[3] Bajari, Patrick and Ali Hortaçsu (2003). "Winner's Curse, Reserve Prices and Endogenous Entry: Empirical Insights from eBay Auctions." RAND Journal of Economics, 329-355.
[4] Donald, Stephen and Harry Paarsch (1993). "Piecewise Maximum Likelihood Estimation in Empirical Models of Auctions." International Economic Review, 34:121-148.
[5] Donald, Stephen G., Harry J. Paarsch, and Jacques Robert (2002). "An Empirical Model of Multi-Unit, Sequential, Oral, Ascending-Price Auctions." mimeo.
[6] Engelbrecht-Wiggans, Richard and Charles M. Kahn (1998). "Multi-Unit Auctions with Uniform Prices." Economic Theory, 12:227-258.
[7] Haile, Philip A. and Elie T. Tamer (2003). "Inference with an Incomplete Model of English Auctions." Journal of Political Economy, 111:1-51.
[8] Houser, David and John Wooders (2001). "Reputation in Auctions: Theory, and Evidence from eBay." mimeo.
[9] Laffont, Jean-Jacques (1997). "Game Theory and Empirical Economics: The Case of Auction Data." European Economic Review, 41:1-35.
[10] Laffont, Jean-Jacques, Herve Ossard, and Quang Vuong (1995). "Econometrics of First-Price Auctions." Econometrica, 63:953-980.
[11] McFadden, Daniel (1989). "A Method of Simulated Moments for Estimation of Discrete Response Models without Numerical Integration." Econometrica, 57:995-1026.
[12] Melnik, Mikhail and James Alm (2001). "Does a Seller's eCommerce Reputation Matter?" The Journal of Industrial Economics, 50:337-350.
[13] Milgrom, Paul R. and Robert J. Weber (1982). "A Theory of Auctions and Competitive Bidding." Econometrica, 50:1089-1122.
[14] Pakes, Ariel and David Pollard (1989). "Simulation and the Asymptotics of Optimization Estimators." Econometrica, 57:1027-1057.
[15] Reiley, David (2000). "Auctions on the Internet: What's Being Auctioned, and How?" Journal of Industrial Economics, 48:227-252.
[16] Reiley, David, Doug Bryan, Naghi Prasad, and Daniel Reeves (2000). "Pennies from eBay: The Determinants of Price in Online Auctions." mimeo.
[17] Riley, John G. and William F. Samuelson (1981). "Optimal Auctions." American Economic Review, 71:381-392.
[18] Roth, Alvin and Axel Ockenfels (2002). "Last Minute Bidding and the Rules for Ending Second-Price Auctions: Theory and Evidence from a Natural Experiment on the Internet." American Economic Review, 92:1093-1103.

A Proofs.

Proposition 1 If $\frac{p}{1+p} \geq F(r)$ then there is an equilibrium where all bidders with values $\rho \leq \rho^*$ follow a snipe-or-war strategy and bidders with higher values bid their true value at time zero, where $\rho^*$ is the maximal $\hat{\rho}$ such that for all $J \in [2, I]$:

$$\frac{p - p^{J}}{1 - p^{J}} \;\geq\; \frac{\int_r^{\hat{\rho}} F(z)^{J-1}\,dz}{\int_r^{\hat{\rho}} F(z)^{J-2}\,dz} \qquad (6)$$

Proof. First consider an agent deviating from equilibrium at the beginning of the auction, when bidders post their first bid. Without loss of generality assume that this agent bids his private value. Let $E(\pi\,|\,\text{snipe})$ be the expected payoff given that a player gets his bid in during the sniping window, and $E(\pi\,|\,\text{war})$ be a player's expected payoff if he deviates. The benefit from cooperating is:

$$H(\rho) = pE(\pi\,|\,\text{snipe}) + (1-p)^{I}\frac{1}{I}(\rho - r) - E(\pi\,|\,\text{war})$$

Note that in all cases they bid their true value. The only difference is that when others snipe this induces a binomial distribution over the number of their competitors. Thus their expected winnings given that $k$ people submit bids is $E(\pi\,|\,k) = \int_r^{\rho} F(z)^{k}\,dz$ where $k \in \{0, 1, 2, \ldots, I\}$ (see John Riley and William Samuelson, 1981). After some algebra $H(\rho)$ is equal to:

$$H(\rho) = \sum_{k=0}^{I-2} \binom{I-1}{k} p^{k} (1-p)^{I-1-k} \int_r^{\rho} G(z,k,I)\,dz + (1-p)^{I}\frac{1}{I}(\rho - r)$$

where

$$G(z,k,I) = pF(z)^{k} - \frac{1-p^{I}}{1-p^{I-1}} F(z)^{I-1}$$

This is decreasing in $k$, so if we show that $\int_r^{\rho} G(z, I-2, I)\,dz$ is weakly positive then the above must be positive. It is immediate that $\int_r^{\rho} G(z, I-2, I)\,dz$ is weakly positive if (6) holds. Next consider deviating at any time between the beginning of the auction and the sniping window. At this time a bidder will know that only $J$ ($J \leq I$) agents have values above $r$. Again condition (6) is sufficient.
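As an illustration of condition (6), the following sketch computes, for a uniform value distribution and parameter values ($p$, $r$, $I$) that we chose ourselves, the largest $\hat{\rho}$ satisfying the condition for every $J \in \{2, \ldots, I\}$, i.e. the cutoff $\rho^*$. It is only a numerical check of the reconstructed inequality, not part of the paper's estimation.

```python
# Numerical check of condition (6) for F uniform on [0, 1]:
#   (p - p^J) / (1 - p^J)  >=  (∫_r^rho z^{J-1} dz) / (∫_r^rho z^{J-2} dz)
# rho* is the largest rho satisfying this for all J in {2, ..., I}.
import numpy as np

def integral_ratio(rho, r, J):
    # For F(z) = z: ∫_r^rho z^{J-1} dz = (rho^J - r^J)/J, and similarly for J-2.
    num = (rho**J - r**J) / J
    den = (rho**(J - 1) - r**(J - 1)) / (J - 1)
    return num / den

def rho_star(p, r, I, grid=np.linspace(0.0, 1.0, 100_001)):
    ok = np.ones_like(grid, dtype=bool)
    for J in range(2, I + 1):
        rhs = integral_ratio(np.maximum(grid, r + 1e-9), r, J)
        ok &= (p - p**J) / (1 - p**J) >= rhs
    candidates = grid[ok & (grid > r)]
    return candidates.max() if candidates.size else r

# Illustrative parameters (they satisfy p/(1+p) >= F(r) = r for the uniform case).
print(rho_star(p=0.8, r=0.1, I=5))   # cutoff below which snipe-or-war is sustainable
```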

Corollary 1 If strategies are Lebesgue measurable then there exists $\rho$ such that

$$H(\rho\,|\,\text{No Snipe}) \neq H(\rho\,|\,\text{Snipe}) \qquad (7)$$

Proof. Let $H(z\,|\,r, \cdot)$ be the distribution of the sales price given reservation price $r$. It is relatively simple to show that given $r$ the condition holds. Assume, first of all, that all bidders enter bids in the sniping window. Then it must be that $H(\rho\,|\,r, \text{No Snipe}) > H(\rho\,|\,r, \text{Snipe})$, since if a sniping bid is recorded the bidder must be updating her bid. On the other hand, assume that there is an interval of types $[\underline{\rho}, \bar{\rho}]$ that do not enter bids in the sniping window. Then in this case $H(\bar{\rho}\,|\,r, \text{Snipe}) = H(\underline{\rho}\,|\,r, \text{Snipe})$ and $H(\bar{\rho}\,|\,r, \text{No Snipe}) > H(\underline{\rho}\,|\,r, \text{No Snipe})$. Now to show that it holds for all $r$: since the strategy is Lebesgue measurable it is a continuous function almost everywhere. Thus there exist $\rho^* > 0$ and $r^* > 0$ such that if $\rho \leq \rho^*$ and $r \leq r^*$ either all bidders enter bids in the sniping window or none do. Therefore it follows that for $z = \min\{\rho^*, r^*\}$, $H(z\,|\,\text{No Snipe}) > H(z\,|\,\text{Snipe})$.

Proposition 2 In an eBay Auction there are jump-call equilibria, and the winner will not bid in the sniping window in any of these equilibria.

Proof. First, if no one, or two or more people, signal they want to jump bid, then the strategy is the competitive equilibrium strategy and is an equilibrium. If one person signals he wants to jump bid then that bidder is supposed to immediately bid his value. If he does this then all other bidders will be willing to bid $B_j$ since they know someone has bid a higher amount already. As well, only bidders with types greater than $\rho_j$ will signal they want to jump, because for all lower bidders, if they signal this they will either win a negative amount (they bid something greater than $B_j$) or will win zero. Since in the competitive equilibrium strategy they can expect a weakly positive payoff, they will not signal they want to jump bid. Those with values greater than $\rho_j$ will signal, since this will not increase their expected price.

Finally assume the high bidder has not bid more than $\hat{B}_j$ and the sniping window has started, where $\hat{B}_j$ is the final jump bid that was signaled in an arbitrary jump bid strategy. In this case we want to prove that this is not an equilibrium. Assume it is, and that $\hat{B}_j$ is above the value of the second highest bidder. Then this bidder knows that with positive probability he might win the auction at a price between $\hat{B}_j$ and his value, and this would give him a negative payoff, so he will not do this. Thus with positive probability the second highest bidder will not be willing to bid $\hat{B}_j$, and we are done.

Corollary 2 Assume that if there is no successful signal then the timing of the last bids is independent of bidders' values. Then if $\underline{B}_j$ is the lowest jump-call bid:

$$E\left[b^{(w)} \mid b^{(w)} \geq \underline{B}_j,\ \text{winner snipe}\right] - E\left[b^{(w)} \mid b^{(w)} \geq \underline{B}_j,\ \text{no winner snipe}\right] > 0$$

Proof. Let $\rho$ be a value at which bidders signal they want to jump bid and $\tilde{B}_j$ be the highest jump bid price this bidder will pay. Assume that a bidder with this value is the highest value bidder in an auction. Then there are two classes of auctions. In the first class he is the only person to signal he wants to jump bid and he wins at some $B_j \leq \tilde{B}_j$; in the second case there is some other

bidder with a signal greater than $\tilde{\rho}_j$, where $\tilde{B}_j = E\left[\rho_{(2:I)} \mid \rho_{(2:I)} \leq \tilde{\rho}_j\right]$. In the latter class he will pay at least $\tilde{\rho}_j > \tilde{B}_j$. Also in the latter case there is a positive probability of the winner sniping, whereas in the former there is not. Taking the expectation over $\rho$ finishes the proof.

B Data Collection Technique and Descriptive Statistics.

In this appendix we describe how we collected the data and present descriptive statistics.

B.1 Strings Used for Collecting Data.

We collected descriptive data for the items auctioned via string searches. Since all of our descriptive variables have at most a finite number of possible values, all descriptive variables can be constructed using this technique. There are three data sources. The "short item description" is the 50 character description put by eBay on the main item listing page. This short item description links to the "full item description," which is the full text describing the monitor. Some information for these variables was also found on the "bid history," which has the history of bidding on the item. (A small sketch of this string-matching logic is given after the lists below.)

• Like-New–Searched the short item description for "Like new, Excellent cond, Hardly used, Barely used, Perfect, Pristine, Near new (note the grammatically correct "Nearly new" did not find any items, but "Near new" did), Excellent Shape, Ex cond, Mint" and if the item also came up as "Refurbished" then it was declared not "Like-New."
• New–Searched the short item description for "New, Never used, Brand new, Still in box, In box" and if the item also came up as "Like-New" or "Refurbished" then it was declared not "New."
• Refurbished–Searched the short item description for "Refurbished, Refirb, Ref (sometimes), Refurb" and the full item description for "Refurbished."
• Used–Not "Like-New," "New," or "Refurbished."
• Warranty–Searched the short item description for "Warranty, Warr, Wrnty, War" and checked to make sure they always were advertising a warranty. Also searched the full item description for "Warranty."
• Secret Reserve Price–Searched the bid history for "(Reserve Price Not Met Yet)" and "(Reserve Price Met)." These are specific eBay designated strings and one is attached to every item with a secret reservation price.
• Secret Reserve Price Not Met–Searched the bid history for "(Reserve Price Not Met Yet)."
• Flat Screen–Searched the item description for "Flat."
• Resolution–Searched the full item description for both dimensions; for example, for 1600x1200 searched for "1600" and "1200."
• Dot Pitch–Searched the full item description for ".XX" and "0.XX" where XX ranges over the values .15 to .40. When multiple dot pitches were reported (approximately 150 items) we went back and looked at these item descriptions. In cases where multiple dot pitches were mentioned, either the diagonal or mid-screen dot pitch was used.


• Brand Name–Searched the short item description for the brand name.
• Dropped Items–
  — Items that had a screen size under thirteen inches or over 22 inches (the maximum was 96).
  — Items that did not say "There have been no bid retractions or cancellations" on the bid history page. This is an eBay string, and any items which did have retractions or cancellations were dropped due to possibilities of collusion.
  — Items that said "Monochrome" or "Greyscale" in the full item description.
  — Items that said "Macintosh" or "Apple" in the short item description.
  — Items that said "touch" in the full or short item description. These are touch screen monitors.
  — Items with "LCD" in the full item description.
  — Items with "for parts" or "not working" in the full item description.
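The classification logic above is simple substring matching with a precedence rule. A minimal sketch of that logic is given below; the function names, the example descriptions, and the (partial) lists of search strings are our own illustrative choices and only cover the condition variables.

```python
# Sketch of the condition classification described above: substring search on the
# short description, with "Refurbished" taking precedence over "Like-New", and
# both taking precedence over "New". Only a subset of the search strings is shown.
LIKE_NEW = ["like new", "excellent cond", "hardly used", "barely used",
            "perfect", "pristine", "near new", "mint"]
NEW = ["new", "never used", "brand new", "still in box", "in box"]
REFURBISHED = ["refurbished", "refurb", "refirb"]

def contains_any(text, patterns):
    text = text.lower()
    return any(p in text for p in patterns)

def condition(short_desc, full_desc=""):
    refurb = contains_any(short_desc, REFURBISHED) or "refurbished" in full_desc.lower()
    like_new = contains_any(short_desc, LIKE_NEW) and not refurb
    new = contains_any(short_desc, NEW) and not (like_new or refurb)
    if refurb:
        return "Refurbished"
    if like_new:
        return "Like-New"
    if new:
        return "New"
    return "Used"

# Invented example descriptions:
print(condition("17in monitor, hardly used, great shape"))   # Like-New
print(condition("BRAND NEW 19in CRT monitor in box"))         # New
print(condition("Refurbished 17 inch Dell monitor"))          # Refurbished
```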


Descriptive statistics:

Num.  Variable Name           Mean      Median    Mode      Std Dev   Kurt.   Skew.   Min.    Max.
(1)   Winning Bid             134.12    99.00     24.99     138.61    17.21   2.74    0.01    2227.00
(2)   Size of Monitor         16.79     17.00     17.00     2.38      -1.07   0.48    14.00   21.00
(3)   New                     0.08      0.00      0.00      0.27      8.00    3.16    0.00    1.00
(4)   Like New                0.03      0.00      0.00      0.17      29.86   5.64    0.00    1.00
(5)   Refurbished             0.12      0.00      0.00      0.32      3.60    2.37    0.00    1.00
(6)   Used                    0.78      1.00      1.00      0.42      -0.22   -1.33   -1.00   1.00
(7)   Warranty                0.03      0.00      0.00      0.16      32.23   5.85    0.00    1.00
(8)   Brand Name              0.56      1.00      1.00      0.50      -1.94   -0.25   0.00    1.00
(9)   Flat Screen             0.18      0.00      0.00      0.38      0.84    1.68    0.00    1.00
(10)  Seller's Feedback       198.51    62.00     1.00      394.82    27.04   4.41    1.00    5791.00
(11)  Seller's FB, squared    1.95E+05  3.72E+03  1.00      9.71E+05  321.77  13.83   1.00    3.35E+07
(12)  DPI+                    0.26      0.27      0.28      0.02      -0.14   -0.77   0.20    0.31
(13)  Expectation of DPI+     0.27      0.27      0.28      0.01      -0.13   -0.65   0.24    0.34
(14)  No DPI reported         0.65      1.00      1.00      0.48      -1.63   -0.61   0.00    1.00
(15)  Resolution+             1105.30   1024.00   1280.00   268.40    -0.80   0.37    640.00  1600.00
(16)  Exp. of Resolution+     1074.57   1011.36   966.26    104.61    -0.88   0.66    966.26  1281.93
(17)  No Res. reported        0.41      0.00      0.00      0.49      -1.86   0.38    0.00    1.00
(18)  Secret Reserve (Met)    0.08      0.00      0.00      0.27      7.77    3.13    0.00    1.00
(19)  Secret Res. (Not Met)   0.09      0.00      0.00      0.29      6.18    2.86    0.00    1.00
(20)  Number of Bids          7.26      4.00      0.00      8.53      1.84    1.38    0.00    55.00
(21)  Number of Bidders       3.81      3.00      0.00      4.00      0.74    1.08    0.00    23.00
(22)  Auction Length          5.38      5.00      3.00      2.31      -0.76   0.54    0.00    10.00
+ Statistics for these variables are reported only for non-zero observations.

Correlation matrix (lower triangle; column (j) lists the correlations of variable (j) with variables (j) through (22), in order):
(1):  1.00 0.73 0.20 0.01 -0.02 -0.12 0.13 0.01 0.27 0.01 0.02 0.33 -0.37 -0.37 0.23 -0.11 -0.17 0.20 0.11 0.29 0.26 0.06
(2):  1.00 0.05 -0.01 0.01 -0.03 0.08 0.01 0.19 -0.03 -0.02 0.23 -0.27 -0.27 0.22 -0.06 -0.14 0.15 0.13 0.26 0.24 0.03
(3):  1.00 -0.05 -0.11 -0.54 0.11 0.00 0.13 0.04 0.06 0.12 -0.12 -0.12 0.03 -0.02 -0.02 0.03 0.10 0.07 0.06 0.09
(4):  1.00 -0.06 -0.32 -0.02 -0.03 -0.02 0.01 0.01 0.00 0.01 0.01 -0.02 0.01 0.01 0.02 0.01 0.09 0.09 0.05
(5):  1.00 -0.68 0.01 0.01 0.01 -0.01 0.00 0.01 -0.01 -0.01 -0.02 0.03 0.02 0.00 0.01 -0.01 -0.02 0.01
(6):  1.00 -0.07 0.01 -0.08 -0.03 -0.04 -0.08 0.08 0.08 0.01 -0.01 -0.01 -0.02 -0.08 -0.07 -0.06 -0.08
(7):  1.00 0.00 0.13 -0.01 -0.02 0.11 -0.11 -0.11 0.07 -0.05 -0.05 0.00 0.02 0.01 0.00 -0.01
(8):  1.00 -0.01 -0.02 -0.01 0.00 0.00 0.00 0.01 0.00 -0.01 0.03 -0.02 -0.01 0.00 -0.01
(9):  1.00 0.03 0.04 0.33 -0.32 -0.32 0.19 -0.17 -0.18 0.13 0.19 0.15 0.16 -0.01
(10): 1.00 0.87 0.17 -0.17 -0.17 0.04 -0.08 -0.07 -0.10 -0.10 -0.04 -0.04 -0.16
(11): 1.00 0.14 -0.14 -0.14 0.03 -0.05 -0.05 -0.05 -0.05 -0.01 -0.02 -0.08
(12): 1.00 NA NA 0.28 -0.26 -0.27 0.09 0.05 0.14 0.14 -0.02
(13): 1.00 1.00 -0.30 0.27 0.28 -0.09 -0.05 -0.14 -0.14 0.01
(14): 1.00 -0.29 0.27 0.28 -0.09 -0.05 -0.14 -0.14 0.01
(15): 1.00 NA NA 0.03 0.03 0.05 0.04 -0.17
(16): 1.00 1.00 0.02 0.01 -0.01 0.00 0.15
(17): 1.00 0.01 0.00 -0.03 -0.02 0.15
(18): 1.00 -0.09 0.20 0.21 0.03
(19): 1.00 0.14 0.14 0.00
(20): 1.00 0.92 0.09
(21): 1.00 0.09
(22): 1.00

Table 5: Descriptive Statistics and the Correlation Matrix. The numbers by each row and column in the correlation matrix correspond to the variable numbers given in the descriptive statistics.
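Statistics of this kind are straightforward to reproduce from the auction-level data set. The sketch below is only illustrative: the column names and miniature data are invented, and pandas is used in place of whatever software produced the original table.

```python
# Sketch: descriptive statistics and a lower-triangular correlation matrix for
# auction-level variables (column names here are illustrative).
import numpy as np
import pandas as pd

def describe_auctions(df: pd.DataFrame) -> pd.DataFrame:
    stats = df.agg(["mean", "median", "std", "kurt", "skew", "min", "max"]).T
    stats["mode"] = df.mode().iloc[0]      # first mode per column
    return stats

def lower_triangular_corr(df: pd.DataFrame) -> pd.DataFrame:
    corr = df.corr()
    upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)   # blank the upper triangle
    return corr.mask(upper)

# Hypothetical miniature data set:
df = pd.DataFrame({
    "winning_bid": [134.0, 99.0, 24.99, 250.0],
    "monitor_size": [17, 17, 15, 19],
    "num_bids": [7, 4, 0, 12],
})
print(describe_auctions(df))
print(lower_triangular_corr(df).round(2))
```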

C Complete test results

Early window                          Late window < 30 sec. (23 late bids)   Late window < 60 sec. (43 late bids)
at least   at most    # early bids    Rank Sum     K-S Stat.                 Rank Sum     K-S Stat.
30 Sec.    60 Sec.    20              -1.3636      0.400                     --           --
1 Min.     2 Min.     46              1.1584       0.979                     2.1593*      0.329
2 Min.     3 Min.     35              -0.1669      0.878                     0.4571       0.164
3 Min.     4 Min.     27              0.2239       0.648                     0.9954       0.220
4 Min.     5 Min.     18              -0.4466      0.559                     0.1898       0.769
5 Min.     7 Min.     30              0.2654       0.702                     0.4236       0.156
7 Min.     10 Min.    28              0.6436       0.070                     1.3530       0.006**
10 Min.    15 Min.    33              0.3914       0.515                     1.2209       0.091
15 Min.    20 Min.    42              1.5777       0.335                     2.5666*      0.041*
20 Min.    30 Min.    69              0.3832       0.781                     1.3969       0.095
30 Min.    45 Min.    95              0.9952       0.549                     2.1170*      0.050*
45 Min.    60 Min.    94              1.5706       0.405                     2.7458**     0.042*
1 Hr.      1.5 Hr.    148             1.7973       0.242                     3.0680**     0.020*
1.5 Hr.    2 Hr.      120             1.7693       0.221                     3.1105**     0.008**
2 Hr.      3 Hr.      200             1.7063       0.147                     3.1446**     0.002**
3 Hr.      4 Hr.      171             1.4537       0.345                     2.7951**     0.022*
4 Hr.      8 Hr.      440             2.8615**     0.060*                    4.6602**     0.000**
8 Hr.      12 Hr.     255             2.6688**     0.031*                    4.3113**     0.000**
12 Hr.     24 Hr.     460             3.1811**     0.035*                    4.9832**     0.000**
1 Min.     3 Min.     81              0.8246       0.935                     1.6196       0.124
1 Min.     15 Min.    215             0.7218       0.537                     1.5010       0.028*
1 Min.     30 Min.    326             1.013        0.544                     1.8192       0.023*
1 Min.     60 Min.    515             0.8358       0.487                     2.3809*      0.018*
* Indicates a rejection at the 5% confidence level (two tail test).
** Indicates a rejection at the 1% confidence level (two tail test).

Table 6: Sales Price / Third Highest Bid. Early data sets versus last minute bids.


Early window                          < 15 sec. (154 late bids)    < 30 sec. (287 late bids)    < 60 sec. (414 late bids)
at least   at most    # early bids    Rank Sum     K-S Stat.       Rank Sum     K-S Stat.       Rank Sum     K-S Stat.
15 Sec.    30 Sec.    133             0.849        0.634           --           --              --           --
30 Sec.    1 Min.     127             1.866        0.079           1.631        0.050           --           --
1 Min.     90 Sec.    85              3.555**      0.010*          3.485**      0.003**         3.134**      0.011*
90 Sec.    2 Min.     46              1.452        0.206           -1.2092      0.194           0.856        0.343
2 Min.     3 Min.     61              0.049        0.709           -0.341       0.556           -0.692       0.512
3 Min.     4 Min.     51              0.629        0.677           0.304        0.936           -0.077       0.892
4 Min.     5 Min.     32              1.418        0.235           1.285        0.248           0.951        0.299
5 Min.     7 Min.     71              -0.026       0.989           -0.419       0.9             -0.834       0.578
7 Min.     10 Min.    59              -0.368       0.881           -0.75        0.714           -1.153       0.621
10 Min.    15 Min.    83              -0.126       0.697           -0.538       0.627           -1.026       0.233
15 Min.    20 Min.    69              -1.42        0.441           -1.836*      0.274           -2.273*      0.099
20 Min.    30 Min.    100             -1.296       0.518           -1.82        0.343           -2.364*      0.089
30 Min.    45 Min.    146             -0.899       0.372           -1.457       0.05*           -2.189*      0.016*
45 Min.    60 Min.    105             -1.091       0.474           -1.614       0.31            -2.163*      0.091
15 Sec.    1 Min.     260             1.581        0.234           --           --              --           --
1 Min.     2 Min.     131             3.344**      0.009**         3.288**      0.002**         2.882**      0.015*
2 Min.     5 Min.     144             0.866        0.771           0.482        0.84            -0.057       0.996
5 Min.     10 Min.    130             -0.232       0.932           -0.747       0.628           -1.307       0.506
10 Min.    30 Min.    252             -1.253       0.371           -2.009*      0.188           -2.831**     0.021*
30 Min.    60 Min.    251             -1.155       0.46            -1.889*      0.052           -2.768**     0.016*
1 Min.     10 Min.    405             1.643        0.328           1.352        0.14            0.706        0.747
10 Min.    60 Min.    503             -1.338       0.377           -2.276*      0.088*          -3.373**     0.003**
* Indicates a rejection at the 5% confidence level (two tail test).
** Indicates a rejection at the 1% confidence level (two tail test).

Table 7: Snipe or War tests, All bidders. These tests check for the Snipe or War strategy by comparing the distribution of last minute bids to earlier periods.


Early window                          < 15 sec. (93 late bids)     < 30 sec. (170 late bids)    < 60 sec. (252 late bids)
at least   at most    # early bids    Rank Sum     K-S Stat.       Rank Sum     K-S Stat.       Rank Sum     K-S Stat.
15 Sec.    30 Sec.    77              1.454        0.191           --           --              --           --
30 Sec.    1 Min.     82              1.827        0.019*          1.463        0.038*          --           --
1 Min.     90 Sec.    45              3.395**      0.000**         3.261**      0.000**         2.94**       0.002**
90 Sec.    2 Min.     21              0.391        0.772           -0.017       0.645           -0.322       0.591
2 Min.     3 Min.     32              0.283        0.505           -0.122       0.417           -0.487       0.347
3 Min.     4 Min.     27              0.229        0.895           -0.254       0.612           -0.58        0.503
4 Min.     5 Min.     19              1.500        0.186           1.216        0.348           0.944        0.383
5 Min.     7 Min.     42              0.770        0.816           0.278        0.903           -0.094       0.826
7 Min.     10 Min.    29              0.394        0.430           0.084        0.383           -0.309       0.737
10 Min.    15 Min.    46              0.423        0.750           -0.233       0.984           -0.697       0.597
15 Min.    20 Min.    37              -0.601       0.937           -1.181       0.505           -1.513       0.339
20 Min.    30 Min.    48              -0.063       0.809           -0.626       0.503           -1.038       0.349
30 Min.    45 Min.    80              -0.137       0.987           -0.842       0.733           -1.396       0.542
45 Min.    60 Min.    49              -0.384       0.434           -0.985       0.098           -1.442       0.072
15 Sec.    1 Min.     159             1.926        0.021*          --           --              --           --
1 Min.     2 Min.     66              2.800**      0.002**         2.562*       0.001**         2.178*       0.011*
2 Min.     5 Min.     78              0.868        0.388           0.32         0.591           -0.181       0.619
5 Min.     10 Min.    71              0.755        0.670           0.249        0.593           -0.253       0.948
10 Min.    30 Min.    131             -0.076       0.905           -0.96        0.506           -1.619       0.325
30 Min.    60 Min.    129             -0.285       0.831           -1.126       0.262           -1.817       0.134
1 Min.     10 Min.    215             1.824        0.054           1.381        0.030*          0.784        0.258
10 Min.    60 Min.    260             -0.202       0.847           -1.233       0.261           -2.098*      0.121
* Indicates a rejection at the 5% confidence level (two tail test).
** Indicates a rejection at the 1% confidence level (two tail test).

Table 8: Snipe or War tests, Experienced Bidders. These tests check for the Snipe or War strategy by comparing the distribution of last minute bids to earlier periods. In this series of tests we drop bids by inexperienced bidders (with a feedback rating of less than four).


Panel A: Late window has at most 30 seconds. T-statistics by minimal feedback rating (share of data set retained).
            # Early Bids      # Late Bids       T-Stat       T-Stat      T-Stat      T-Stat
Percentile  Min.    Max.      Min.    Max.      0 (100%)     2 (43%)     5 (25%)     14 (10%)
0%          190     2042      64      418       4.316**      2.307*      1.716*      1.341
5%          180     1932      61      400       4.276**      2.381**     1.579       1.317
10%         164     1818      58      389       3.619**      2.119*      1.102       1.070
15%         149     1703      57      372       3.329**      1.868*      0.656       0.628
20%         140     1597      54      352       3.317**      1.965*      0.991       0.609
25%         127     1499      51      337       3.065**      1.470       0.694       0.429
30%         117     1395      46      318       2.954**      1.464       0.920       0.544
35%         104     1283      41      298       2.790**      1.227       0.746       0.544
40%         96      1178      38      279       2.596**      1.586       1.085       0.539
45%         89      1071      36      263       2.166*       1.144       0.925       0.461
50%         82      960       32      245       1.716*       0.925       0.867       0.594
55%         73      860       28      223       1.579        0.348       0.411       0.670
60%         62      770       28      197       1.849*       0.420       0.142       0.176
65%         52      666       25      175       1.648        0.615       0.152       0.010
70%         47      561       23      155       1.222        0.465       0.004       -0.030
75%         32      460       21      141       0.296        -0.456      -0.836      -0.603
80%         25      363       20      120       -0.337       -1.112      -1.278      -0.904
85%         18      268       15      91        -0.582       -1.198      -1.442      -0.987
90%         14      190       11      61        -0.274       -1.026      -0.836      -0.971
95%         5       90        6       34        -1.200       -1.352      -1.005      -1.308

Panel B: Late window has at most 60 seconds (early-bid counts as in Panel A).
            # Late Bids       T-Stat       T-Stat      T-Stat      T-Stat
Percentile  Min.    Max.      0 (100%)     2 (43%)     5 (25%)     14 (10%)
0%          98      626       4.665**      1.973*      1.119       0.646
5%          94      599       4.540**      2.024*      0.876       0.531
10%         87      581       3.840**      1.824*      0.581       0.437
15%         82      555       3.503**      1.653*      0.272       0.221
20%         78      528       3.312**      1.649       0.556       0.162
25%         70      498       3.342**      1.495       0.568       0.235
30%         64      470       3.193**      1.491       0.758       0.286
35%         59      442       2.908**      1.153       0.452       0.118
40%         55      412       2.754**      1.298       0.620       0.081
45%         50      387       2.304*       0.950       0.578       0.155
50%         46      359       1.849*       0.641       0.488       0.157
55%         41      328       1.626        0.115       0.081       0.153
60%         40      293       1.753*       0.100       -0.115      -0.264
65%         35      260       1.502        0.222       -0.096      -0.382
70%         32      236       0.695        -0.081      -0.333      -0.407
75%         28      211       -0.185       -0.794      -1.029      -0.858
80%         24      174       -0.616       -1.203      -1.288      -1.006
85%         18      127       -0.587       -1.098      -1.486      -1.083
90%         12      84        -0.168       -0.932      -0.872      -1.000
95%         7       48        -1.188       -1.440      -1.146      -1.358
* Indicates a rejection at the 5% confidence level (one tail test).
** Indicates a rejection at the 1% confidence level (one tail test).

Table 9: Jump Test, All bidders. This test checks whether bidders are using a Jump-Call strategy by comparing the mean of last minute bids to earlier bids.

Panel A: Late window has at most 30 seconds. T-statistics by minimal feedback rating (share of data set retained).
            # Early Bids      # Late Bids       T-Stat      T-Stat      T-Stat      T-Stat
Percentile  Min.    Max.      Min.    Max.      0 (70%)     1 (40%)     2 (28%)     9 (10%)
0%          183     1435      75      301       3.979**     2.595**     2.072*      1.098
5%          171     1353      70      288       3.889**     2.683**     2.200*      1.211
10%         155     1274      66      279       3.451**     2.453**     2.084*      0.981
15%         142     1196      64      266       3.313**     2.090*      1.849*      0.637
20%         134     1128      61      258       2.949**     1.724*      1.523       0.601
25%         126     1073      58      247       2.939**     1.551       1.400       0.546
30%         117     1000      56      238       2.540**     1.191       1.083       0.366
35%         99      917       52      224       2.293*      0.890       0.835       -0.046
40%         91      844       49      211       2.071*      0.903       1.065       -0.153
45%         83      766       47      199       1.668*      0.473       0.653       -0.343
50%         78      693       41      186       1.340       0.217       0.570       -0.095
55%         67      621       36      169       1.237       -0.285      0.209       -0.178
60%         58      558       36      149       1.519       -0.105      0.270       -0.594
65%         48      482       28      133       1.267       -0.086      0.482       -0.450
70%         40      406       26      121       0.645       -0.439      0.141       -0.721
75%         31      341       23      110       0.033       -1.205      -0.591      -0.998
80%         25      273       23      94        -0.437      -1.638      -1.222      -1.344
85%         21      202       19      72        -0.675      -2.025      -1.271      -1.352
90%         18      141       12      49        -0.603      -1.827      -1.196      -1.062
95%         10      70        8       26        -1.025      -1.516      -1.290      -1.198

Panel B: Late window has at most 60 seconds (early-bid counts as in Panel A).
            # Late Bids       T-Stat      T-Stat      T-Stat      T-Stat
Percentile  Min.    Max.      0 (70%)     1 (40%)     2 (28%)     9 (10%)
0%          101     447       4.527**     2.944**     1.965*      0.797
5%          95      426       4.497**     3.121**     2.168*      0.806
10%         87      411       4.107**     3.072**     2.197*      0.786
15%         84      396       3.690**     2.571**     1.834*      0.448
20%         80      383       3.306**     2.252*      1.525       0.405
25%         75      365       3.380**     2.109*      1.461       0.430
30%         73      352       2.886**     1.748*      1.114       0.197
35%         69      335       2.394**     1.218       0.691       -0.314
40%         65      316       2.099*      1.075       0.737       -0.422
45%         60      296       1.719*      0.659       0.401       -0.475
50%         54      276       1.372       0.185       0.142       -0.334
55%         48      251       1.237       -0.286      -0.189      -0.460
60%         46      225       1.337       -0.305      -0.179      -0.752
65%         37      201       1.012       -0.232      -0.052      -0.699
70%         34      185       0.171       -0.625      -0.391      -0.936
75%         30      167       -0.468      -1.357      -1.023      -1.195
80%         27      138       -0.752      -1.596      -1.405      -1.386
85%         22      99        -0.593      -1.625      -1.276      -1.377
90%         14      69        -0.615      -1.513      -1.283      -1.104
95%         10      38        -1.162      -1.333      -1.534      -1.282
* Indicates a rejection at the 5% confidence level (one tail test).
** Indicates a rejection at the 1% confidence level (one tail test).

Table 10: Jump Test, No repeat bidders. This test checks whether bidders are using a Jump-Call strategy by comparing the mean of last minute bids to earlier bids.
