Best paper awards and uncertainty of innovation (Preliminary and Incomplete)

Jorge Lemus∗

December 30, 2014

Abstract

Uncertainty plays an important role in the initial stages of innovative activities. Predicting the success of new ideas is a difficult task, even for experts in the field. This paper presents an empirical setting that allows us to quantify the uncertainty of novel research. We construct a dataset covering a large number of important conferences in computer science. Using citation rankings for "best paper" and "classic paper" awards, we build an ex-ante and an ex-post measure of uncertainty, respectively. We find that papers chosen ex-ante as "best papers" rank, on average, at the 75th percentile of the distribution of citations. Ex-post, when papers are judged 10 years after their publication, "classic paper" award winners rank at the 95th percentile of the distribution of citations.

I always avoid prophesying beforehand because it is much better to prophesy after the event has already taken place. (Winston Churchill)

1 Introduction

Evaluating new ideas is crucial for making innovative decisions. Uncertainty makes it difficult to identify the most promising research directions, the startups that will become profitable, or the books and movies that will be popular. Many authors, including Knight (1921) and Arrow (1963), recognize that uncertainty is in many situations an unavoidable element of decision making.

∗ Email: [email protected]. I appreciate the comments and suggestions by Ben Jones and Emil Temnyalov.

How hard is it to predict innovation success? This paper presents an empirical setting to quantify the answer to this question.

There is a large amount of evidence showing that ex-ante uncertainty affects the selection of new ideas. Many best-selling authors, for example, had a hard time getting a publishing contract: Agatha Christie took over 5 years to sign a publishing contract,1 and J.K. Rowling, author of the popular "Harry Potter" saga, was rejected 12 times before finding a publisher.2 Gans and Shepherd (1994) surveyed the authors of some of the most influential papers in economics and found that, perhaps surprisingly, many of these influential contributions were initially rejected by journals, and some of them had a hard time getting published at all.

1 Only William Shakespeare has sold more books than Christie.
2 These examples are extracted from www.literaryrejection.com, visited on August 20, 2014.

Predicting future events is hard. Silver (2012) collects overwhelming evidence of inaccurate predictions, many of them made by experts in the field. Consider, for example: "Louis Pasteur's theory of germs is ridiculous fiction," said Pierre Pachet, Professor of Physiology at Toulouse, in 1872; "X-rays will prove to be a hoax," said Lord Kelvin, President of the Royal Society, in 1883.3 Tetlock (2005) studies "expert political judgment" and finds that in many cases experts do worse than actuarial formulas and suffer from cognitive biases. The paradox of why experts predict so poorly when they know so much is further explored in Camerer and Johnson (1997) and Ericsson and Charness (1994).

3 Source: http://www.rinkworks.com/said/predictions.shtml, visited on August 20, 2014.

Some researchers have questioned the ability of experts to make accurate predictions and have offered alternatives. Surowiecki (2005) shows that experts' predictions are, under some conditions, surpassed by the "wisdom of crowds," the average prediction of many agents. This has led to the rise of prediction markets, which try to aggregate information and make better predictions. Another way of improving on experts' predictions is to collect a large amount of data and use sophisticated algorithms to aggregate this information and make inferences about the future. Many researchers have followed this methodology, developing complex models to improve predictions of natural disasters, crime, or even weekly weather. Computer scientists at Stony Brook University, for example, have recently developed a highly accurate algorithm to predict book sales.

An emerging area of interest is to study, and perhaps predict, the success of researchers.

The metric of success is often based on the citations of articles published in peer-reviewed journals. Citations are a widely used measure of scientific impact: they are used to study trends in a field, to determine promotions and grants, and so on. As in other fields, such as finance and meteorology, researchers have been developing algorithms that attempt to predict the impact of scientific publications. Sinatra et al. (2014), for example, present an algorithm that "reads" a paper and, based on some parameters, predicts whether better-written articles are cited more often than badly written ones. Wang et al. (2013) present an algorithm that estimates long-run success by learning the pattern of citations of a paper over a short period of time. By analyzing the network of citations, Shi et al. (2010) measure (and predict) the success of research articles.

This paper presents a setting where it is possible to quantify innovation uncertainty. We build a new dataset of top conferences in several fields of computer science that give a "best paper award" to a small subset of the accepted papers. In each conference, a committee of experts selects the winning paper and recognizes it as the best contribution among all the papers submitted to the conference. We explore the accuracy of the ex-ante predictions made by these experts by computing the citations of winning and non-winning papers.

Because best paper awards are chosen by a committee of experts, our approach measures how well experts in a field can evaluate ex-ante the quality of innovation. Conference committees try to pick the best paper among the papers submitted. There is no evidence that they use sophisticated algorithms to pick winners; selection is based solely on the opinions of the experts on the conference committees.4

Our setting offers several advantages for quantifying ex-ante uncertainty. First, none of the papers submitted to the conference has been published yet; accepted papers will appear in the conference proceedings.5 At the moment of evaluation, no paper has a history of citations, so all papers start the "race" for citations from the same point. Second, we observe citations for winners and non-winners, since both are published in the same conference proceedings. This is an advantage relative to a setting where only accepted papers are observed: rejected papers might not get published at all, or might be published in different journals, introducing selection issues.

4 In Appendix A, we provide a description of the awards and selection processes for all conferences for which we could find information.
5 Most of these conferences have a low acceptance rate (about 25%).

A third fact is that all the papers accepted to these conferences are peer-reviewed. Peer review can be questionable, as exposed in Bohannon (2013), but it is widely accepted as a selection mechanism for conferences and scientific journals.

To corroborate that citations provide a good measure of success and a correct measure of impact, we also present an ex-post measure of uncertainty. We consider conferences that give a "classic paper award," also called a "test of time award." This award recognizes papers published in the conference proceedings 10 years earlier that have had the biggest impact on the field. If we accept that the ranking of papers (by citations) is relatively stable 10 years after the initial publication, then we can evaluate how citations correlate with what experts in the field consider important, by comparing how many citations "classic paper" award winners have received relative to non-winners.

Our empirical evidence shows that "best papers" rank, on average, at the 75th percentile of the distribution of citations, while "classic papers" rank at the 95th percentile. If experts' knowledge were useless for evaluating the ex-ante importance of papers, because of future uncertainty and quick changes in the field, we should expect the citations of "best papers" to be closer to the 50th percentile. This finding does not directly imply that experts have predictive power. It might be that, just by receiving the award, winning papers receive more citations due to increased attention from other researchers. That alone could direct future research and push the ranking of winning papers far above the 50th percentile, which would imply that experts' predictions are even worse than they appear. We provide some evidence to control for this bias by looking at "runner-up" papers or "honorable mentions." These papers were on the shortlist of candidates to win the best paper award. It is reasonable to think that they were "better" than the papers that did not make the list and "worse" than the actual winners, although the difference with the winners should not be large. We find that winning the award gives a positive boost to citations and that "honorable mentions" rank on average at the 62nd percentile.

If experts care about citations, then we should expect the "classic paper" award to be given to the most cited paper in each conference. We find this is generally the case, although the ex-post prize is awarded to papers that rank, on average, at the 95th percentile of the distribution of citations.

In the next section, we describe our dataset and how it was collected. In section 3, we present our main empirical findings. Finally, in section 5, we summarize our findings and provide some thoughts for future research.

2 Data

We collected data for a total of 21 large conferences in computer science across different years, for a total of 194 conference-year observations and 14,563 papers. The details are provided in Appendix A. These conferences were selected because they are important in computer science6 and because each of them gives a "best paper" award, or an equivalent award (which does not include "best student paper"). These awards are typically granted by a committee of experts, which usually includes the program chair, to the "best" paper submitted to the conference. The definition of "best" varies somewhat among conferences, but in the majority of cases it corresponds to the paper whose quality stands out from the rest and whose contribution to the field is significant. In Appendix A we also provide a brief description of the purpose of the award for each conference for which the description was publicly available.

6 Rankings: http://www.conference-ranking.com/CS_Conference_Ranking.html

To construct our database, we manually searched for and collected papers from the proceedings of each conference. We then used a script that queries Google Scholar and extracts the total number of citations that each paper has received since its publication date.7

7 The script is a combination of routines in Python and iMacros. For questions about the code, please email [email protected]. It takes only about 10 minutes to download all the citations for the papers in a particular conference, so our results are not biased by the timing of data collection.

We also collected information on "classic paper" awards, or equivalent awards, which are given to the paper that has proved to be the "best" among all papers published in the conference proceedings 10 years prior. Unfortunately, we have a limited number of conferences with both "classic" and "best" paper awards, and we can only collect "classic paper" awards up to 2004. Our "classic paper" award data consist of 9 different conferences across different years, with a total of 103 conference-year combinations. Finally, we found 24 conference-years with information about "runner-up" or "honorable mention" papers, which we use to control for the effect that winning the award has on future citations.

Combining all this information, we constructed our database, in which an observation is defined as:

conference, year, paper title, authors, citations, type of award.
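As a concrete illustration of this record structure, the following minimal Python sketch stores a few observations in a CSV file. The field names, file name, and sample rows are hypothetical and only mirror the observation format described above; they are not taken from the actual dataset or the collection script.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class PaperRecord:
    """One observation: a paper presented at a given conference-year."""
    conference: str   # e.g. "ICSE" (hypothetical example)
    year: int
    title: str
    authors: str
    citations: int    # total citations at the time of data collection
    award: str        # "best", "classic", "honorable mention", or "none"

# Hypothetical rows for illustration only; not real data points.
records = [
    PaperRecord("ICSE", 2005, "An example paper title", "A. Author; B. Author", 120, "best"),
    PaperRecord("ICSE", 2005, "Another example paper", "C. Author", 45, "none"),
]

with open("citations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[k for k in asdict(records[0])])
    writer.writeheader()
    for r in records:
        writer.writerow(asdict(r))
```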

According to the descriptions provided by each conference, the award is chosen by a committee that tries to avoid conflicts of interest. The review process is done by multiple members of the program committee, and sometimes the final decision is put to a vote. There is no evidence that committees use sophisticated algorithms to select the winners. For most of the conferences, we were not able to find information about the precise committee that selected each award, but in general these committees are formed by the members who reviewed papers for the conference, keynote speakers, and/or the program chair.

3 Results

3.1 Best Paper Awards

We construct an ex-ante measure of uncertainty by computing the percentile rank of each paper in the distribution of citations of its conference and then identifying the percentile that corresponds to the winning papers. Formally, let c_{i,j} be the number of citations of paper i in conference j, and let N_j be the total number of papers in conference j. We order the papers by number of citations in increasing order and define n(i) as the position of paper i in this order. The percentile rank of each paper is then

perc_{i,j} = n(i) / N_j ∈ (0, 1].
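For concreteness, the computation of this percentile rank can be sketched as follows; the citation counts below are hypothetical, and ties are broken by input order, a detail the formula above leaves unspecified.

```python
def percentile_ranks(citations):
    """Return n(i)/N_j for each paper, where n(i) is the paper's position
    when papers are sorted by citation count in increasing order."""
    n_total = len(citations)
    # Indices of papers sorted by citations (ascending); the sort is stable, so ties keep input order.
    order = sorted(range(n_total), key=lambda i: citations[i])
    ranks = [0.0] * n_total
    for position, paper in enumerate(order, start=1):
        ranks[paper] = position / n_total   # value in (0, 1]
    return ranks

# Hypothetical conference with 5 papers; suppose the winner is paper index 2.
cites = [3, 40, 15, 0, 7]
ranks = percentile_ranks(cites)
print(ranks)      # [0.4, 1.0, 0.8, 0.2, 0.6]
print(ranks[2])   # percentile rank of the hypothetical winner: 0.8
```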

In Figure 1, we present the histogram of perc_{i,j} for the papers i that won an award in conference j, pooled across all conferences; in other words, the frequency distribution of the percentile ranks of all award-winning papers.


Figure 1: Distribution of the percentiles corresponding to best paper awards.

From Figure 1 we can see that about 25% of the time the award is given to a paper that ranks among the 5% most cited papers presented at the same conference. About 40% of the time the winning paper ranks between the 70th and 95th percentiles. The rest of the time, the winner is almost uniformly distributed between the 5th and 70th percentiles. The next figure shows the histograms disaggregated by conference. Predictions in some conferences, for example ACL, KDD, and STOC, are relatively better than in others, but in general there is a lot of variation even within conferences:


Figure 2: Distribution of the percentiles corresponding to best paper awards, by conference.

[TO DO: Why is this? Can we classify conferences by "areas"? Can we study the journals in those areas? Is this saying that it is harder to predict in some areas, or that the committees and selection methods differ across conferences?]

If the committee that selects the best paper award cannot judge ex-ante the quality of the papers, and if obtaining the award does not influence future citations, then we should expect the best paper award winner to rank, on average, at the 50th percentile. In our data, however, the best paper award winner ranks on average at the 75th percentile of the distribution of citations. Table 1 reports aggregate statistics:

        Mean    Q25     Median  Q75     sd      Obs
perc    0.755   0.608   0.815   0.95    0.224   335

Table 1: Statistics for winner papers.


Below, we report the same statistics by conference:

Conference    Mean        Q25         Median      Q75         sd          Obs
AAAI          .7040915    .527027     .7248908    .9189189    .2217871    37
ACL           .9244294    .9428571    .9661478    .9752066    .1305164    14
CHI           .7789879    .7127072    .8311259    .9277979    .1894366    41
CIKM          .7788339    .5970819    .8026005    .9744769    .2101981    8
FSE           .7445542    .6363636    .7755102    .96         .2310258    23
ICML          .910889     .9050633    .9214286    .9924812    .1025624    5
ICSE          .7782057    .6811688    .8145833    .9433333    .198667     32
KDD           .8111078    .6666667    .9090909    1           .2424881    13
MOBICOM       .6741162    .4375       .7666667    .8181818    .2065281    3
OSDI          .806209     .6842105    .8076923    .9583333    .1701462    15
PLDI          .6653449    .5357143    .668554     .9230769    .2744651    10
PODS          .6564441    .4452381    .7321429    .8766603    .2588485    16
SIGCOMM       .6473064    .3030303    .7777778    .8611111    .3010492    3
SIGIR         .789371     .7391304    .8378378    .8974359    .1675459    15
SIGMETRICS    .7544107    .7857143    .8333333    .875        .2198791    9
SIGMOD        .6918259    .5116279    .7480392    .8541667    .2276331    18
SOSP          .6431062    .4210526    .7309211    .8695652    .3003972    22
STOC          .9042497    .8571429    .9631777    .987013     .1269121    14
UIST          .8285278    .6842105    .88         .9677419    .1810233    13
VLDB          .6550766    .3359375    .7120717    .9          .2946979    10
WWW           .7341697    .5625       .7854664    .9350649    .2240166    14
Total         .755205     .6082474    .8148148    .95         .2237426    335

Table 2: Statistics for winner papers by conference.

3.2 Classic Paper Awards

Using the same methodology described in section 3.1, we compute the percentile rank of papers that received a "classic paper" award. If experts in the field cared only about citations, then ex-post the most cited papers should be the ones that obtain the classic paper award; in hindsight, we should expect the winners of the "classic paper" award to be highly cited and close to the 100th percentile. We find that the "classic paper" award winner is, on average, at the 95th percentile of the distribution of citations. The figure below shows the distribution of percentiles of classic paper awards.

Figure 3: Distribution of the percentiles corresponding to classic paper awards.

We can see that about 55% of the winning papers are at the 95th percentile or higher, more than twice the share observed for "best paper" award winners. About 20% of the remaining papers are located between the 85th and 95th percentiles. We also find some variation between conferences, although much less than for the best paper awards.


Figure 4: Distribution of the percentiles corresponding to classic paper awards, by conference.

        Mean       p25        Median     p75   sd         Obs
perc    .9364757   .9183673   .9863014   1     .1055418   115

Table 3: Statistics for classic paper award winners.

Below, we report the same statistics by conference:


Conference    Mean       p25        p50        p75       sd         N
AAAI          .9655945   .9750031   .9905318   1         .0929256   16
ICML          .9986755   1          1          1         .0029617   5
ICSE          .9473434   .9354839   .98        1         .0782506   23
PLDI          .9599715   .96        1          1         .0822854   15
SIGCOMM       .8663245   .7083333   .9590301   1         .1718917   10
SIGMETRICS    .9512811   .9130435   .96875     1         .0593879   6
SIGMOD        .9162821   .8648649   .9330357   .974359   .0734246   18
UIST          .984127    .952381    1          1         .0274929   3
VLDB          .9077343   .877193    .9767442   1         .1508844   19
Total         .9364757   .9183673   .9863014   1         .1055418   115

Table 4: Statistics for classic paper award winners by conference.

3.3 Impact of the award on future citations

A key question is whether a paper obtains more citations just for winning the "best paper award," rather than because of its intrinsic quality. One reason why winning papers may get more citations is that the award gives the paper more visibility: award winners usually have a special session during the conference and are highlighted on the conference website. Since many researchers cannot attend the conference, or have limited time, they may selectively choose to read the award-winning paper over other papers. This, in turn, can spawn more future research building on winning papers, which has an impact on future citations.

To measure the impact of winning the award on citations, we use the conferences that give a "runner-up" or "honorable mention" award.8 These papers, a selected subset of the papers presented at the conference, had a high chance of winning but did not win. For other conferences, we know which papers were candidates for the best paper award. We use these papers as a control group, although not a perfect one, because we do not have information on how close the contest for the award was. But we should at least expect candidates for the "best paper award" to have more citations than non-candidates.

8 We have information about the runner-ups for 31 conference-year pairs.


We should also expect winning papers to have more citations than the candidates that did not win. If the contest for the award was close and winning the award does not have a large impact on future citations, then candidates and winners should have a similar number of citations. Table 5 below shows the statistics for papers that won the "best paper award" in conferences for which we have information about the candidates for the award. Table 6 shows the statistics for the candidates, and Table 7 for non-candidates.

        mean       p25        p50        p75        sd         N
perc    .7862741   .6896869   .8375084   .9637701   .2029687   68

Table 5: Statistics for ‘best paper award’ winners in conferences for which we know the candidates.

        mean       p25        p50        p75        sd         N
perc    .6072197   .3693694   .6461539   .8380952   .2717286   219

Table 6: Statistics for ‘best paper award’ candidates in conferences for which we know the candidates.

        mean       p25        p50        p75        sd         N
perc    .4932942   .2431813   .4915254   .7382784   .2873218   3944

Table 7: Statistics for non-candidates in conferences for which we know the candidates.

Looking at these three tables, we can see that winners are more cited on average than candidates, and candidates are more cited than non-candidates: non-candidates rank on average at the 50th percentile, candidates at the 61st percentile, and winners at the 78th percentile.
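A minimal sketch of this comparison is shown below, assuming the percentile ranks from section 3.1 have been paired with a hypothetical award-status label for each paper; the numbers are illustrative, not the actual data.

```python
import statistics
from collections import defaultdict

# Hypothetical (percentile rank, status) pairs; status is "winner",
# "candidate" (runner-up / honorable mention), or "non-candidate".
papers = [
    (0.97, "winner"), (0.84, "winner"),
    (0.81, "candidate"), (0.55, "candidate"), (0.62, "candidate"),
    (0.48, "non-candidate"), (0.30, "non-candidate"), (0.71, "non-candidate"),
]

groups = defaultdict(list)
for perc, status in papers:
    groups[status].append(perc)

# Average percentile rank by award status, mirroring Tables 5-7.
for status in ("winner", "candidate", "non-candidate"):
    percs = groups[status]
    print(f"{status:>14}: mean={statistics.mean(percs):.3f}  n={len(percs)}")
```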


3.4 Predicting ranking or predicting impact?

In this section, we construct a measure of the impact of the papers forecasted by the experts. For each conference that awards k "best papers," we add up the total number of citations of the k most cited papers in that conference; we call this total ST_j for conference j. We also add up the total number of citations of the k award winners of conference j; we call this total WT_j. Experts make "mistakes" by picking papers that are not the most cited ones, but this might be because all the papers in the conference are of similar "quality" in terms of citations. Even if a winning paper is at the 60th percentile, the impact it generates might be similar to the impact of the most cited paper. To measure the difference in impact between the winning papers and the most cited papers in each conference, we construct the following index:

I_j = WT_j / ST_j ∈ [0, 1].

This index captures the share of the conference's "impact" explained by the "best paper award" winners. When I_j = 1, the winning papers generate the same total number of citations as the top-cited papers in conference j. The smaller I_j, the smaller the impact generated by the winners relative to the top-cited papers in conference j. Table 8 reports the average index by conference, and Table 9 by year.
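A short sketch of the index computation for a single conference follows; the citation counts and the identity of the winners are hypothetical.

```python
def impact_index(citations, winner_indices):
    """I_j = WT_j / ST_j: total citations of the k award winners divided by
    total citations of the k most cited papers in the same conference."""
    k = len(winner_indices)
    st = sum(sorted(citations, reverse=True)[:k])      # ST_j
    wt = sum(citations[i] for i in winner_indices)     # WT_j
    return wt / st if st > 0 else 0.0

# Hypothetical conference with 6 papers and one best paper award (paper index 4).
cites = [12, 250, 40, 5, 90, 61]
print(impact_index(cites, [4]))   # 90 / 250 = 0.36
```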


Conference    mean       p50        sd         N
AAAI          .2267883   .2105263   .18986     15
ACL           .4793555   .5318927   .2494875   9
CHI           .3384879   .3715888   .1431374   6
CIKM          .3111218   .1265823   .3563189   7
FSE           .6657616   .646789    .2258219   9
ICML          .3781676   .3497409   .2372553   5
ICSE          .5502875   .5363583   .1987319   8
KDD           .4776333   .273546    .4381985   12
MOBICOM       .2306952   .215873    .0569375   3
OSDI          .4835012   .4368308   .3120911   9
PLDI          .4257182   .1759997   .4302906   8
PODS          .329942    .2440037   .2843554   14
SIGCOMM       .2625332   .3198146   .1543615   3
SIGIR         .3362752   .2923976   .269836    15
SIGMETRICS    .366385    .2554406   .2837375   8
SIGMOD        .2471462   .1356934   .2332092   14
SOSP          .4494676   .327459    .3612703   7
STOC          .5620487   .5721591   .1958025   8
UIST          .551723    .4605789   .3021346   12
VLDB          .310584    .1322849   .3970679   9
WWW           .2885236   .1365114   .341506    13
Total         .39197     .3046837   .3049864   194

Table 8: Index impact by conference


Year    mean       p50        sd         N
1987    .0877052   .0877052   .          1
1988    .2683307   .2683307   .          1
1994    .5677253   .7805964   .3688563   3
1996    .3472758   .2776967   .243475    6
1997    .4079077   .3202622   .3648317   6
1998    .1045036   .1045597   .0975238   5
1999    .4008158   .1539179   .4146108   9
2000    .5911678   .4368308   .3990891   9
2001    .3569635   .2420986   .3330154   9
2002    .2938519   .2166781   .3012394   9
2003    .3383464   .2634449   .2756203   12
2004    .4248194   .3181358   .3355619   16
2005    .4376837   .4541034   .3088906   17
2006    .482122    .4477848   .3504406   17
2007    .40307     .3540354   .2553728   18
2008    .4331351   .3497409   .3052036   19
2009    .337557    .3198146   .2472052   19
2010    .3195437   .217404    .2452843   18
Total   .39197     .3046837   .3049864   194

Table 9: Index impact by year

Notice in Table 9 that the index is very low in 1998, driven in part by the WWW conference. This is largely explained by the paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine," by Sergey Brin and Lawrence Page, which has over 12,000 citations and whose abstract begins: "In this paper, we present Google, a prototype of a large-scale search engine..." The "best paper award" winner of the 1998 WWW conference has only 53 citations, so its impact is very low compared to the most cited paper of that conference, which explains why our impact index is very low.


4 Next Steps

There are some parts of the paper that will be improved, and others that will be completed.

1. Increase the database. Several conferences that give awards have not been collected yet. These include conferences with "best paper" awards (CVPR, FOCS, ICCV, IJCAI, INFOCOM, NSDI, S&P, SODA) and conferences with "classic paper" awards (POPL, OOPSLA, ICFP, PODS, SIGIR).

2. Study variation across different areas. The data suggest that some areas are more "certain" than others; in particular, theory conferences seem to have better success predicting citations. Is this because these areas are less innovative and therefore less uncertain? How uncertain is technological progress in different fields?

3. Look at the conference proceedings and see how they rank among all the journals in computer science.

4. Quantify how stable the rankings are over time. Some papers may accumulate citations steadily from the beginning until they become a success; others may have almost no citations for some time and then suddenly be "discovered" and become a big success. That is, we want a measure of citation volatility over time.

5 Conclusion

TBA


References

Kenneth J. Arrow. Uncertainty and the welfare economics of medical care. The American Economic Review, pages 941–973, 1963.

Amar Bhide. The Origin and Evolution of New Businesses. Oxford University Press, 2000.

John Bohannon. Who's afraid of peer review? Science, 342(6154):60–65, 2013.

Colin F. Camerer and Eric J. Johnson. The process-performance paradox in expert judgment: How can experts know so much and predict so badly? In Research on Judgment and Decision Making: Currents, Connections, and Controversies, page 342, 1997.

K. Anders Ericsson and Neil Charness. Expert performance: Its structure and acquisition. American Psychologist, 49(8):725, 1994.

Joshua S. Gans and George B. Shepherd. How are the mighty fallen: Rejected classic articles by leading economists. The Journal of Economic Perspectives, pages 165–179, 1994.

Frank Knight. Risk, Uncertainty and Profit, 1921.

Ramesh Sharda and Dursun Delen. Predicting box-office success of motion pictures with neural networks. Expert Systems with Applications, 30(2):243–254, 2006.

Xiaolin Shi, Jure Leskovec, and Daniel A. McFarland. Citing for high impact. In Proceedings of the 10th Annual Joint Conference on Digital Libraries, pages 49–58. ACM, 2010.

Nate Silver. The Signal and the Noise: Why So Many Predictions Fail-but Some Don't. Penguin, 2012.

Roberta Sinatra, Dashun Wang, Pierre Deville, Chaoming Song, and Albert-László Barabási. Scientific impact: the story of your big hit. Bulletin of the American Physical Society, 2014.

James Surowiecki. The Wisdom of Crowds. Random House, 2005.

Philip Tetlock. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press, 2005.

Dashun Wang, Chaoming Song, and Albert-László Barabási. Quantifying long-term scientific impact. Science, 342(6154):127–132, 2013.


A Appendix: Conferences Information

Conference    Number of years
AAAI          15
ACL           9
CHI           6
CIKM          7
FSE           9
ICML          5
ICSE          8
KDD           12
MOBICOM       3
OSDI          9
PLDI          8
PODS          14
SIGCOMM       3
SIGIR         15
SIGMETRICS    8
SIGMOD        14
SOSP          7
STOC          8
UIST          12
VLDB          9
WWW           13
Total         194

Table 10: Conferences with “best paper awards”


Conference    Number of years
AAAI          13
ICML          5
ICSE          22
PLDI          15
SIGCOMM       7
SIGMETRICS    5
SIGMOD        15
UIST          3
VLDB          18
Total         103

Table 11: Conferences with “classic paper awards”

We collected data on conferences that give a "best paper award" or an equivalent award. Some of the awards are listed on Jeff Huang's website,9 and we completed some years by manually finding the recipients of the awards.

9 http://jeffhuang.com/best_paper_awards.html

A.1 List of conferences, brief descriptions, and meaning of the awards

AAAI (Association for the Advancement of Artificial Intelligence): The AAAI National Conference on Artificial Intelligence honors papers that exemplify the highest standards in technical contribution and exposition. During the blind review process, members of the Program Committee recommend papers to consider for the Outstanding Paper Award. A subset of the Senior Program Committee, carefully chosen to avoid conflicts of interest, reviews all nominated papers and selects the winning paper(s).

ACL (Association for Computational Linguistics): In the ACL tradition, ACL 2010 has several Best Paper awards: best long paper, best short paper, and best long paper authored by a student, sponsored by IBM. The best long paper receives its own plenary session at the end of the conference, and the recipients of the prizes each receive a certificate and cash award. The Best Paper awards are decided by a Best Paper committee, consisting of some of the members of the Program Committee and additional members drawn from the ACL community.

CHI (Human Factors in Computing Systems): The ACM Conference on Human Factors in Computing Systems (CHI) series of academic conferences is generally considered the most prestigious in the field of human-computer interaction and is one of the top-ranked conferences in computer science. The "best paper award" corresponds to outstanding work in the field of human-computer interaction, honoring exceptional technical papers, notes, and case studies submitted to CHI. Papers and Notes committees take part in this program, nominating up to 5% of their submissions as Award Nominees. A separate awards committee then chooses a select group of these submissions, no more than 1% of the total submissions, to receive a "Best" designation. "Getting a paper into CHI is incredibly competitive, and all accepted papers are thoroughly vetted and of high quality. Each winning paper and technical note therefore represents exemplary work in the field of HCI," noted Dr. Wendy Kellogg, Chair of the Best of CHI 2007 Committee.

CIKM (Conference on Information and Knowledge Management): The Conference on Information and Knowledge Management (CIKM) provides an international forum for presentation and discussion of research on information and knowledge management, as well as recent advances on data and knowledge bases. The purpose of the conference is to identify challenging problems facing the development of future knowledge and information systems, and to shape future directions of research by soliciting and reviewing high-quality, applied and theoretical research findings. The conference solicits high-quality papers on all topics in the general areas of databases, information retrieval, and knowledge management. Papers that bridge across these areas are of special interest and are considered for a Best Interdisciplinary Paper award. The Best Multidisciplinary Paper Award goes to the paper that would have the most significant impact in the DB+IR+KM areas. Each of the PC and industry track chairs (DB/IR/KM/Industry) recommended one to three candidate papers for the Best Multidisciplinary Paper Award and the Best Student Paper Award, respectively, to the award committee, which consists of nine Award Committee Members, including two keynote speakers and several senior researchers recommended by the PC Chairs and the Conference Chair. The Award Committee members reviewed all the papers and voted for the best papers. An ordering was made based on the votes of all the members of the committee. The committee went through a discussion by email and came to a final result that was recommended to the general chair. The general chair made the final decision.

FSE (Foundations of Software Engineering): ACM SIGSOFT Distinguished Paper Awards are awarded only for full-length technical papers accepted by the program committee for the main track of a SIGSOFT-sponsored meeting. They are not intended for abstracts or short papers, for papers from satellite or colocated events such as workshops and doctoral symposia, or for demo papers, panel summaries, invited papers, and other such supplementary contributions. The program committee takes a weighted vote, respecting the conflict of interest rules in place for the conference, to identify the top candidates among the papers. The program committee chair(s) use the results of the weighted votes as a primary basis for selecting the award papers.

ICML (International Conference on Machine Learning): Every year, ICML honors its best contributions with best paper awards. All best paper award winners receive a certificate and a check for $1000, and all runner-ups receive a check for $500. For the Best Paper Award, a list of papers that received excellent reviews from the reviewers and the area chair is collected, and the winning paper emerges from a two-step selection process.

ICSE (International Conference on Software Engineering): ACM SIGSOFT encourages SIGSOFT-sponsored conferences to designate a number of accepted papers for ACM SIGSOFT Distinguished Paper Awards for the conference. In addition to presenting certificates to the authors of awarded papers, two awarded papers from each ICSE, FSE, and ESEC are invited for presentation at the following India Software Engineering Conference (ISEC), which is sponsored by SIGSE, the Special Interest Group on Software Engineering of the Computer Society of India (CSI).

KDD (Knowledge Discovery and Data Mining): KDD 2010 is the biggest data mining conference of the year, and the data mining community uses the opportunity to present some of its most prestigious awards. The award recognizes papers presented at the annual SIGKDD conference that advance the fundamental understanding of the field of knowledge discovery in data and data mining.

MOBICOM: Each year since 2008, the MobiCom Technical Program Committee has awarded the MobiCom Best Paper Award for the best paper from among all papers submitted to the annual MobiCom conference that year. All papers submitted to the conference are considered for the award. The MobiCom Technical Program Committee forms the Selection Committee for this award. The award includes a plaque and a cash prize.

OSDI (Operating Systems Design and Implementation): The program committee, at its discretion, determines which paper(s) should receive the Jay Lepreau Award for the Best Paper.

PLDI (Programming Language Design and Implementation): PLDI is a top-tier conference in the area of programming languages and systems, typically with an acceptance rate around 20%. Less than 10% of the papers win a distinguished paper award. The committee tries to avoid conflicts of interest when selecting the winners. The classic paper award is presented annually to the author(s) of a paper presented at the PLDI held 10 years prior to the award year. The award includes a prize of $1,000 to be split among the authors of the winning paper. The papers are judged by their influence over the past decade: the award given in year N is for the most influential paper presented at the conference held in year N − 10. The selection committee consists of the following members: the current SIGPLAN Chair, ex officio; a member of the SIGPLAN EC appointed as committee Chair by the SIGPLAN Chair; the General Chair and Program Chair for PLDI N − 10; the General Chair and Program Chair for PLDI N − 1; and a member of the SIGPLAN EC appointed by the committee Chair. The SIGPLAN Chair adjudicates conflicts of interest, appointing substitutes to the committee as necessary.

PODS: The annual ACM SIGMOD/PODS conference is a leading international forum for database researchers, practitioners, developers, and users to explore cutting-edge ideas and results, and to exchange techniques, tools, and experiences. This is an award for the best of all papers submitted, as judged by the program committee.

SIGCOMM: The SIGCOMM conference gives an award to the best paper. If a student is not the main author of that paper, then the conference also gives a best student paper award to the best paper whose primary author is a student. Student authors of the best student paper (there may be more than one) receive travel grants; these are handled by the travel grant chairs, although they are awarded from the SIG budget. There is also an honorarium for the best paper award and the best student paper award, currently set to $500 per paper. By default, the PC chairs are the selection committee chairs, but they may transfer the job of selection committee chair to someone selected by the SIGCOMM EC if they so choose. The selection committee chairs may appoint additional committee members if they so choose.

SIGIR: SIGIR is the major international forum for the presentation of new research results and for the demonstration of new systems and techniques in information retrieval. Selection of the best papers: during the paper review process, program committee members and senior PCs nominate papers. These are discussed at the PC meeting, and the Program Chairs, with the input of the SPCs, finalize the list of nominated papers. Then, a best paper awards committee is formed. The Program and Conference Chairs select members for this committee from the pool of SPCs that have no conflicts of interest. The nominated papers are then forwarded to the awards committee for their deliberations. The committee is charged to identify the best paper. If the best paper happens to be written by a student first author, then only one award is given that covers both the best paper and the best student paper; otherwise, two awards are given.

SIGMETRICS (Special Interest Group for the computer systems performance evaluation community): The conference accepts high-quality papers on the development and application of state-of-the-art, broadly applicable analytic, simulation, and measurement-based performance evaluation techniques. Test of Time Award: this annual award recognizes an influential paper from the ACM SIGMETRICS conference proceedings 10-12 years prior, based on the criterion of identifying the paper that has had the most impact (research, methodology, application, transfer) in the intervening time period. Each year, the author(s) of the winning paper receive a recognition plaque and a check for $1,000 US, presented at the annual ACM SIGMETRICS conference.

SIGMOD (Special Interest Group on Management of Data): The ACM Special Interest Group on Management of Data is concerned with the principles, techniques, and applications of database management systems and data management technology. Its members include software developers, academic and industrial researchers, practitioners, users, and students. SIGMOD sponsors the annual SIGMOD/PODS conference, one of the most important and selective in the field. The Best Paper Award is granted to the paper that represents an excellent research contribution.

SOSP (Symposium on Operating Systems Principles): The biennial ACM Symposium on Operating Systems Principles is the world's premier forum for researchers, developers, programmers, and teachers of computer systems technology. Academic and industrial participants present research and experience papers that cover the full range of theory and practice of computer systems software.

STOC (Symposium on Theory of Computing): STOC is an academic conference in the field of theoretical computer science. It has been organized annually since 1969, typically in May or June, and is sponsored by the Association for Computing Machinery special interest group SIGACT. The acceptance rate of STOC, averaged from 1970 to 2012, is 31%, with a rate of 29% in 2012. Since 2003, STOC has presented one or more Best Paper Awards to recognize the highest-quality papers at the conference.

UIST (User Interface Software and Technology): The ACM Symposium on User Interface Software and Technology (UIST) is the premier forum for innovations in human-computer interfaces. Sponsored by the ACM special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together people from diverse areas including graphical and web user interfaces, tangible and ubiquitous computing, virtual and augmented reality, multimedia, new input and output devices, and CSCW. The intimate size and intensive program make UIST an ideal opportunity to exchange research results and ideas. The acceptance rate is less than 20%.

VLDB (Very Large Data Bases): VLDB is a premier annual international forum for data management and database researchers, vendors, practitioners, application developers, and users. The conference features research talks, tutorials, demonstrations, and workshops. It covers current issues in data management, database, and information systems research. Data management and databases remain among the main technological cornerstones of the applications of the twenty-first century, and with the emergence of Big Data, data-related technologies are becoming more important than ever before. Test of time award: a paper is selected from the VLDB proceedings from 10 years earlier that has best met the "test of time," that is, that has had the most influence since its publication. We are especially interested in first-hand accounts of ways in which the ideas of a paper have been used in practice.

WWW: The Program Committee uses votes by the paper reviewers to select the Best Paper. The WWW Conference series aims to provide the world a premier forum for discussion and debate about the evolution of the Web, the standardization of its associated technologies, and the impact of those technologies on society and culture. The conferences bring together researchers, developers, users, and commercial ventures, indeed all who are passionate about the Web and what it has to offer.
