Data Envelopment Analysis as Least-Squares Regression

Timo Kuosmanen*
Economic Research Unit, MTT Agrifood Research Finland

Abstract

Data envelopment analysis (DEA) is an axiomatic, mathematical programming approach to productive efficiency analysis and performance measurement. This paper shows that DEA can be interpreted as a nonparametric least squares regression subject to shape constraints on the production frontier and sign constraints on the residuals. Thus, DEA can be seen as a nonparametric counterpart of the corrected ordinary least squares (COLS) model. This result bridges the conceptual and philosophical gap between DEA and the regression approach to frontier estimation, and paves the way to a stochastic nonparametric methodology for frontier estimation.

Key Words: frontier estimation, nonparametric regression, productive efficiency analysis.


* Address: MTT, Luutnantintie 13, 00410 Helsinki, Finland. Tel. +358956086309. Fax: +358956086264. E-mail: [email protected]

1. Introduction

Data envelopment analysis (DEA) is an axiomatic, mathematical programming approach to productive efficiency analysis and comparative performance assessment of firms and other decision making units. DEA originates from the work by Farrell (1957), but its current popularity is largely due to the seminal paper by Charnes, Cooper and Rhodes (1978). Thousands of DEA studies have been reported in application areas such as agriculture, education, financial institutions, health care, and public sector firms, among many others. DEA's vitality, real-world relevance, diffusion and global acceptance are evident from literature surveys such as Seiford (1996) and Gattoufi et al. (2004).

DEA's main advantage over econometric, regression-based tools is its nonparametric treatment of the frontier: DEA does not assume any particular functional form but relies on more general axioms of production theory such as monotonicity, convexity, and homogeneity. This direct, data-driven approach is essential for communicating the results of efficiency analysis to decision makers. However, econometricians often criticize DEA for attributing all deviations from the frontier to inefficiency, which completely ignores any stochastic noise in the data. While the sampling properties of the DEA efficiency estimates are nowadays well understood (e.g., Simar and Wilson, 2000), the vulnerability to data errors and outliers remains a weakness of DEA (see e.g. Kuosmanen et al., 2007, for discussion).

While there exists a broad consensus about the merits (and demerits) of the alternative methods, the field of productive efficiency analysis remains divided between the DEA and regression schools of thought, with the main demarcation line along the disciplinary boundary between operations research (preference for DEA) and economics (preference for regression techniques) (see e.g. Førsund and Sarafoglou, 2002, for insightful discussion).
Although exchange of ideas and healthy debate does take place between the DEA and regression schools, the lack of a common conceptual framework and research agenda impedes the development of the field to its full potential. The aim of this paper is to bridge the conceptual and philosophical gap between DEA and econometric approaches to frontier estimation by showing that the output-oriented DEA model has an equivalent interpretation as a least squares regression model. More specifically, we show that the standard output-oriented variable returns to scale DEA model (Banker et al., 1984) is a nonparametric variant of Aigner and Chu's (1968) corrected ordinary least squares (COLS) model. Although this interpretation is by no means the only correct perspective on DEA, it enables us to place both DEA and regression techniques as particularly interesting special cases within the same unified framework for productive efficiency analysis. In our view, such a unified framework is important for a better understanding of the frontier estimation methodology and a further integration of the field. Besides establishing new methodological links, the results of this paper pave the way for the development of a truly stochastic nonparametric methodology for frontier estimation along the lines of Banker and Maindiratta (1992) (see also Kuosmanen, 2006a; and Kuosmanen and Kortelainen, 2007).

The rest of the paper is organized as follows. Section 2 introduces a classification of deterministic frontier models and reviews four classic approaches to production analysis. Section 3 presents the main result. Section 4 presents the concluding discussion. The proof of the main result is presented in the Appendix.

2. Models of production

2.1 Classification

Consider the standard multiple-input, single-output, cross-sectional production model from economics:

  y_i = f(x_i) + ε_i,  i = 1,…,n,        (1)


where y_i denotes the output of firm i, f: ℝ₊^m → ℝ₊ is the production function that characterizes the production technology, x_i = (x_{i1}, …, x_{im})′ is the input vector of firm i, and ε_i represents the deviation of firm i from the frontier. It is illuminating to classify different models of productive efficiency analysis according to how the production function f is specified and how the deviations ε_i are interpreted.

Firstly, models can be classified as parametric or nonparametric depending on the specification of the production function f. Parametric models postulate a priori some specific functional form for f (e.g., Cobb-Douglas or translog) and subsequently estimate its unknown parameters. By contrast, nonparametric models are not restricted to any single functional form, but assume only that f satisfies certain regularity axioms (e.g., monotonicity and concavity).

Secondly, models can be classified as neoclassical or frontier models depending on the interpretation of the deviation term ε_i. In the neoclassical model, all firms are efficient, and the deviations ε_i are seen as random, uncorrelated noise terms that satisfy the Gauss-Markov assumptions. In the frontier models, all deviations from the frontier are attributed to inefficiency, which implies that ε_i ≤ 0 ∀i = 1,…,n. For brevity, we leave aside the stochastic frontier models (Aigner et al., 1977; Meeusen and van den Broeck, 1977), where ε_i are interpreted as composite error terms that include both inefficiency and noise components.

Combining these two criteria gives us four different model variants, which are listed in Table 1 together with some canonical references. We next discuss each of these model types in more detail: we start from the parametric ordinary least squares (OLS) approach and its frontier variant, corrected ordinary least squares (COLS), and then proceed to the nonparametric approaches: convex nonparametric least squares (CNLS) and DEA.

2.2 Ordinary Least Squares (OLS)

OLS is a classic curve estimation method dating back to the work by Legendre and Gauss in the early 19th century. OLS is the most traditional approach to the parametric estimation of the neoclassical production model in economics (see e.g. Cobb and Douglas, 1928). OLS estimation of the production function is based on the assumption that, given some appropriate data transformations, the function f can be expressed as a linear function of the estimated parameters. The name of the technique refers to the principle of determining the intercept and slope coefficients of f through minimization of the sum of squares of the residuals ε_i.


If we assume (for simplicity) that f is a linear function of inputs (i.e., f(x) = α + β′x), then the OLS problem can be formally stated as the quadratic programming (QP) problem

  min_{α,β,ε}  Σ_{i=1}^{n} ε_i²        (2)
  s.t.  y_i = α + β′x_i + ε_i  ∀i = 1,…,n

Coefficients α, β represent the intercept and slope coefficients of the linear production function, and β′x_i = Σ_{j=1}^{m} β_j x_{ij} [note: we use liberally both sum operators and scalar products in the same equations, because strict adherence to either convention would make the notation of this paper unnecessarily cryptic]. For brevity, we here abstract from the statistical interpretation of the OLS regression (see any post-graduate econometrics textbook for details) and focus on its mathematical programming formulation that links it with DEA.
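For concreteness, problem (2) can be solved with any standard least squares routine; the following sketch uses NumPy on made-up data (the data and variable names are illustrative, not from the paper) and solves the problem in closed form rather than as an explicit QP.

```python
import numpy as np

# Hypothetical data: n = 5 firms, m = 2 inputs, one output.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [2.0, 3.0],
              [3.0, 2.0],
              [4.0, 4.0]])
y = np.array([2.0, 2.5, 4.0, 4.5, 6.0])

# OLS problem (2): choose (alpha, beta) to minimize the sum of squared
# residuals eps_i = y_i - alpha - beta'x_i.
Z = np.column_stack([np.ones(len(y)), X])   # design matrix [1, x_i1, x_i2]
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
alpha, beta = coef[0], coef[1:]
eps = y - Z @ coef                          # residuals may take both signs

print("alpha:", alpha, "beta:", beta)
```

At the OLS optimum the residuals are orthogonal to the regressors and sum to zero, which is exactly the neoclassical (two-sided noise) interpretation of ε discussed above.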

2.3 Corrected Ordinary Least Squares (COLS)

The previous OLS model is neoclassical in the sense that all firms are rational (there is no inefficiency), and thus all deviations from the frontier are attributed to random noise. Aigner and Chu (1968) were the first to estimate a parametric frontier model where the deviations ε_i are interpreted as inefficiency (see also Timmer, 1971). Aigner and Chu's corrected OLS (COLS) model can be seen as a frontier variant of the OLS model (2). In practice, the COLS model is obtained from (2) by imposing the additional constraint ε_i ≤ 0 ∀i = 1,…,n (motivated by the interpretation of ε_i as inefficiency terms), which gives the COLS problem

  min_{α,β,ε}  Σ_{i=1}^{n} ε_i²        (3)
  s.t.  y_i = α + β′x_i + ε_i  ∀i = 1,…,n
        ε_i ≤ 0  ∀i = 1,…,n


Note that the constraint ε_i ≤ 0 ∀i = 1,…,n does not merely shift the OLS regression line upwards to the frontier; it also influences the coefficients: the parameter estimates α, β obtained from (2) may generally differ from those obtained from (3).
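The sign-constrained QP (3) can be handed to a general-purpose nonlinear solver; the sketch below uses SciPy's SLSQP method on hypothetical single-input data (a dedicated QP solver would be the more natural choice in serious work).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-input data (m = 1).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 2.5, 2.8, 3.5, 3.6])

# COLS problem (3): minimize the sum of squared residuals subject to
# eps_i = y_i - alpha - beta*x_i <= 0, i.e. the fitted line must lie
# on or above every observation.
def sse(p):
    alpha, beta = p
    return np.sum((y - alpha - beta * x) ** 2)

# eps_i <= 0  <=>  alpha + beta*x_i - y_i >= 0
cons = [{"type": "ineq", "fun": lambda p, xi=xi, yi=yi: p[0] + p[1] * xi - yi}
        for xi, yi in zip(x, y)]

res = minimize(sse, x0=[y.max(), 0.0], constraints=cons, method="SLSQP")
alpha, beta = res.x
eps = y - alpha - beta * x   # all non-positive at the optimum
print("COLS frontier: alpha =", round(alpha, 3), "beta =", round(beta, 3))
```

At the optimum the frontier touches at least one observation; all residuals are non-positive, consistent with their interpretation as inefficiency.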

2.4 Convex Nonparametric Least Squares (CNLS)

In both the OLS and COLS models the functional form of the production function is postulated a priori. If the functional form of the regression function is not known, one can resort to nonparametric regression techniques (see e.g. Yatchew, 1998, 2003, for a comprehensive survey). Nonparametric least squares subject to continuity, monotonicity, and concavity constraints [henceforth referred to as Convex Nonparametric Least Squares (CNLS)] is one of the oldest approaches, dating back to the work by Hildreth (1954). Interestingly, Hildreth illustrated his method by estimating the neoclassical production function for cotton production using data from field experiments. CNLS does not assume any particular functional form for f. Rather, it postulates that f belongs to the set of continuous, monotonic increasing and globally concave functions, denoted henceforth by F₂. In contrast to kernel regression and spline smoothing techniques, CNLS does not require specification of a smoothing parameter. The CNLS problem is to find f ∈ F₂ that minimizes the sum of squares of the deviations, formally:

  min_{f,ε}  Σ_{i=1}^{n} ε_i²        (4)
  s.t.  y_i = f(x_i) + ε_i  ∀i = 1,…,n
        f ∈ F₂

The essential statistical properties of the CNLS estimators are nowadays well understood. The maximum likelihood interpretation of CNLS was already noted by Hildreth (1954), and Hanson and Pledger (1976) have proved its consistency. More recently, Nemirovskii et al. (1985), Mammen (1991) and Mammen and Thomas-Agnan (1999) have shown that CNLS achieves the standard nonparametric rate of convergence O_P(n^(−1/(2+m))), where n is the number of observations and m is the number of regressors. Imposing further smoothness assumptions or derivative bounds can improve the rate of convergence and alleviate the curse of dimensionality (see, e.g., Mammen, 1991; Yatchew, 1998; Mammen and Thomas-Agnan, 1999). Groeneboom et al. (2001) have derived the asymptotic distribution of the CNLS estimator at a fixed point.

The CNLS problem (4) selects the best-fit function f from the family F₂, which includes an infinite number of functions. This makes problem (4) generally hard to solve. Existing single-regressor algorithms (e.g., Fraser and Massam, 1989; Meyer, 1999) require that the data are sorted in ascending order according to the scalar-valued regressor x. However, such a sorting is not possible in the general multiple regression setting where x is a vector. To solve the CNLS problem (4) in the general multi-input setting, we apply insights from the celebrated Afriat's Theorem (Afriat, 1967, 1972; compare with Banker and Maindiratta, 1992; and Matzkin, 1994). Specifically, we model the concavity constraints by means of the Afriat inequalities to rewrite problem (4) equivalently as the following QP problem:

  min_{α,β,ε}  Σ_{i=1}^{n} ε_i²        (5)
  s.t.  y_i = α_i + β_i′x_i + ε_i  ∀i = 1,…,n
        α_i + β_i′x_i ≤ α_h + β_h′x_i  ∀h,i = 1,…,n
        β_i ≥ 0  ∀i = 1,…,n

Proposition 1: Let s²_CNLS be the minimum sum of squares of problem (4) and let s²_Afriat be the minimum sum of squares of problem (5). Then, for any real-valued data set, s²_CNLS = s²_Afriat.

Note that, in contrast to problems (2) and (3), in problem (5) the intercept and slope coefficients vary from one firm to another. Instead of fitting one regression line to the cloud of observed points as in OLS, we fit n different regression lines that can be interpreted as tangent lines to the unknown production function f. The slope coefficients β_i represent the marginal products of inputs (i.e., the subgradients ∇f(x_i)). The second constraint imposes concavity by applying a system of Afriat inequalities: these inequalities are the key to modeling concavity constraints in the general multiple-regressor setting. The third constraint imposes monotonicity. Given the estimated coefficients (α_i, β_i) from (5), we can construct the following explicit estimator of f:

  f_CNLS(x) = min_{i} {α_i + β_i′x}        (6)

In principle, estimator f_CNLS consists of n hyperplane segments. In practice, however, the estimated coefficients (α_i, β_i) are clustered in a relatively small number of alternative values: the number of different hyperplane segments is usually much lower than n. Importantly, if we denote the set of functions that minimize the original CNLS problem (4) by F₂*, then it is easy to show that f_CNLS ∈ F₂* for any finite real-valued data set.
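To make the Afriat formulation (5) and the estimator (6) concrete, here is a small single-input sketch that passes the QP to SciPy's SLSQP solver; the data are made up, and a dedicated convex QP solver would be preferable for real problems.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-input data.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.4, 3.1, 3.3])
n = len(x)

# Decision variables p = (alpha_1..alpha_n, beta_1..beta_n);
# eps_i is eliminated by substituting eps_i = y_i - alpha_i - beta_i*x_i.
def sse(p):
    a, b = p[:n], p[n:]
    return np.sum((y - a - b * x) ** 2)

# Afriat inequalities: alpha_i + beta_i*x_i <= alpha_h + beta_h*x_i for all h, i.
cons = [{"type": "ineq",
         "fun": lambda p, i=i, h=h: (p[h] + p[n + h] * x[i]) - (p[i] + p[n + i] * x[i])}
        for i in range(n) for h in range(n) if h != i]
bounds = [(None, None)] * n + [(0.0, None)] * n   # monotonicity: beta_i >= 0

res = minimize(sse, x0=np.r_[np.full(n, y.mean()), np.zeros(n)],
               constraints=cons, bounds=bounds, method="SLSQP",
               options={"maxiter": 500, "ftol": 1e-10})
a, b = res.x[:n], res.x[n:]

# Explicit CNLS estimator (6): f(x) = min_i {alpha_i + beta_i*x}.
f_cnls = lambda z: np.min(a + b * z)
print("fitted values:", [round(float(f_cnls(xi)), 3) for xi in x])
```

Note that feasibility of the Afriat inequalities guarantees that, at each observed point, the lower envelope min_i {α_i + β_i′x} coincides with the firm's own tangent line, which is exactly the representation property behind estimator (6).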

2.5 Data Envelopment Analysis (DEA)

DEA is a nonparametric frontier approach that builds on exactly the same regularity properties as CNLS (monotonicity, concavity), but attributes all deviations from the frontier to inefficiency. DEA is not just one model, but a large family of models that differ in the orientation of the efficiency measure, the specification of returns to scale, and many other features. The foundation of all DEA models is the minimum extrapolation principle (Banker et al., 1984), which defines the DEA production possibility set as the minimum set that contains all observed input-output vectors and satisfies the postulated properties. Under the usual assumptions of monotonicity and concavity, the production possibility set is the convex monotonic hull of the observed input-output vectors. Thus, under the maintained assumptions of monotonicity and concavity of f, the DEA estimator of f can be stated as


n n n   f DEA ( x) = maxn  y y = ∑ λh y h ; x ≥ ∑ λh x h ; ∑ λh = 1 . λ∈ + h =1 h =1 h =1  


Multipliers λ_i are referred to as intensity weights, which are used for constructing convex combinations of the observed firms. Substituting f in (1) by the DEA estimator (7), we see that the DEA efficiency estimate ε_i^DEA for firm i is obtained as the optimal solution to the following linear programming (LP) problem

  ε_i^DEA = min_{λ,ε}  ε        (8)
  s.t.  y_i = Σ_{h=1}^{n} λ_h y_h + ε
        x_i ≥ Σ_{h=1}^{n} λ_h x_h
        Σ_{h=1}^{n} λ_h = 1
        λ_h ≥ 0  ∀h = 1,…,n

It is illustrative to compare (8) with the standard output-oriented variable returns to scale (VRS) DEA model by Banker et al. (1984) [see also Afriat, 1972]. We find that the only differences are that Banker et al. operate in the multi-output setting and measure efficiency (θ) in the multiplicative form y_i = f(x_i)·θ_i^(−1), whereas (8) is consistent with the additive single-output specification of (1). The equivalence of problem (8) and the standard output-oriented DEA problem can be formally stated as follows:
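Problem (8) is an ordinary LP that is solved once per firm; the sketch below uses scipy.optimize.linprog on hypothetical single-input data and also reports the associated Farrell scores θ_i = 1 − ε_i^DEA / y_i.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-input, single-output data for four firms.
x = np.array([1.0, 2.0, 4.0, 2.0])
y = np.array([1.0, 3.0, 4.0, 2.0])
n = len(x)

def dea_eps(i):
    """LP (8) for firm i: variables (lambda_1..lambda_n, eps), minimize eps."""
    c = np.r_[np.zeros(n), 1.0]                    # objective: eps
    A_eq = np.array([np.r_[y, 1.0],                # sum λ_h y_h + eps = y_i
                     np.r_[np.ones(n), 0.0]])      # sum λ_h = 1 (VRS)
    b_eq = np.array([y[i], 1.0])
    A_ub = np.array([np.r_[x, 0.0]])               # sum λ_h x_h <= x_i
    b_ub = np.array([x[i]])
    bounds = [(0, None)] * n + [(None, None)]      # λ >= 0, eps free
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                   bounds=bounds).fun

eps = np.array([dea_eps(i) for i in range(n)])
theta = 1.0 - eps / y     # Farrell efficiency scores
print("eps  :", eps.round(3))
print("theta:", theta.round(3))
```

In this toy data set the first three firms span the convex monotonic hull (ε = 0), while the fourth firm lies below the frontier.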

Proposition 2: In the single-output setting, the DEA model (8) is equivalent to the standard output-oriented variable returns to scale DEA model by Banker et al. (1984) in the sense that

  θ_i = 1 − ε_i^DEA / y_i  ∀i = 1,…,n,        (9)

where ε_i^DEA is the optimal solution to (8) and θ_i is the Farrell efficiency score obtained from

  max_{λ,θ}  θ        (10)
  s.t.  θ·y_i ≤ Σ_{h=1}^{n} λ_h y_h
        x_i ≥ Σ_{h=1}^{n} λ_h x_h
        Σ_{h=1}^{n} λ_h = 1
        λ_h ≥ 0  ∀h = 1,…,n

3. Least squares interpretation of DEA

From the outset, the DEA problems (8) and (10) appear structurally very different from the least squares regression problems (2), (3), and (5). The main result of this paper is to show that the DEA problem (8) can be equivalently formulated as a least squares regression problem that can be qualified as a "corrected" CNLS model.

Main Theorem: The output-oriented DEA model (8) can be equivalently presented as a constrained variant of the CNLS problem, augmented by the COLS constraint ε_i ≤ 0 ∀i = 1,…,n. Specifically, the DEA efficiency scores ε_i^DEA for all i = 1,…,n are obtained as the optimal solution to the following nonparametric least squares problem

  min_{α,β,ε}  Σ_{i=1}^{n} ε_i²        (11)
  s.t.  y_i = α_i + β_i′x_i + ε_i  ∀i = 1,…,n
        α_i + β_i′x_i ≤ α_h + β_h′x_i  ∀h,i = 1,…,n
        β_i ≥ 0  ∀i = 1,…,n
        ε_i ≤ 0  ∀i = 1,…,n

It is worth noting that the DEA model (8) is usually computed separately for each firm in the sample by solving n different LP problems, whereas the QP problem (11) computes the efficiency scores simultaneously for all firms (compare with Kuosmanen et al., 2006). Obviously, solving problem (11) is not the most efficient way of computing the DEA model; we only present it to reveal the methodological links. Note that the DEA efficiency scores are obtained as the optimal ε_i from (11) (i.e., not the squared terms ε_i²). In fact, the squared terms ε_i² are only used for establishing a link to the CNLS problem; the same result is obtained even if we substitute the objective function by Σ_{i=1}^{n} |ε_i|, which can be interpreted as the least absolute deviations (LAD) model. It is also worth emphasizing that the result is not restricted to the VRS technology: one can derive analogous results for technologies exhibiting constant [add constraint α_i = 0 ∀i = 1,…,n in (11)], non-increasing [add constraint α_i ≥ 0 ∀i = 1,…,n], or non-decreasing [add constraint α_i ≤ 0 ∀i = 1,…,n] returns to scale. Finally, the result can be extended to input-oriented efficiency measures and multi-output technologies in a straightforward fashion: we have here restricted attention to the single-output setting with an additive inefficiency term because this is the standard model in the parametric frontier estimation literature.
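The equivalence asserted by the Main Theorem can be checked numerically on a toy data set: solving the per-firm DEA LPs and solving the sign-constrained QP (11) with a general-purpose solver should return the same efficiency scores. The sketch below (made-up data; SLSQP used purely for illustration) performs this check.

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Hypothetical data shared by both formulations.
x = np.array([1.0, 2.0, 4.0, 2.0])
y = np.array([1.0, 3.0, 4.0, 2.0])
n = len(x)

# Per-firm output-oriented VRS DEA, problem (8), solved as n LPs.
def dea_eps(i):
    c = np.r_[np.zeros(n), 1.0]
    A_eq = np.array([np.r_[y, 1.0], np.r_[np.ones(n), 0.0]])
    b_eq = np.array([y[i], 1.0])
    return linprog(c, A_ub=np.array([np.r_[x, 0.0]]), b_ub=np.array([x[i]]),
                   A_eq=A_eq, b_eq=b_eq,
                   bounds=[(0, None)] * n + [(None, None)]).fun

eps_dea = np.array([dea_eps(i) for i in range(n)])

# Sign-constrained CNLS, problem (11), with eps_i eliminated through
# eps_i = y_i - alpha_i - beta_i*x_i; variables p = (alpha_1..n, beta_1..n).
def sse(p):
    a, b = p[:n], p[n:]
    return np.sum((y - a - b * x) ** 2)

cons = ([{"type": "ineq",   # Afriat: alpha_i + beta_i*x_i <= alpha_h + beta_h*x_i
          "fun": lambda p, i=i, h=h: (p[h] + p[n + h] * x[i]) - (p[i] + p[n + i] * x[i])}
         for i in range(n) for h in range(n) if h != i] +
        [{"type": "ineq",   # eps_i <= 0  <=>  alpha_i + beta_i*x_i >= y_i
          "fun": lambda p, i=i: p[i] + p[n + i] * x[i] - y[i]} for i in range(n)])

res = minimize(sse, x0=np.r_[np.full(n, y.max()), np.zeros(n)],
               bounds=[(None, None)] * n + [(0.0, None)] * n,
               constraints=cons, method="SLSQP",
               options={"maxiter": 500, "ftol": 1e-10})
eps_qp = y - res.x[:n] - res.x[n:] * x

print("DEA LP  eps:", eps_dea.round(4))
print("QP (11) eps:", eps_qp.round(4))
```

The coefficients (α_i, β_i) of (11) are not unique, but the residuals ε_i are, and they match the per-firm DEA scores up to solver tolerance.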

4. Discussion

We have shown that the standard output-oriented DEA model can be re-interpreted as a nonparametric least squares regression subject to shape constraints (concavity, monotonicity) and a sign restriction on the residuals (analogous to the COLS model by Aigner and Chu, 1968). This result reveals interesting links between DEA, the parametric COLS model, and the nonparametric CNLS regression by Hildreth (1954), which appear to be unknown in the literature. Importantly, these linkages provide a conceptual bridge between the parametric and nonparametric approaches to productive efficiency analysis. Despite the numerous alternative interpretations, our results show that the different approaches to frontier estimation can be ultimately understood as variants of the same unified estimation framework. We hope that the links established in this paper can stimulate the scattered field of productive efficiency analysis to develop towards a more coherent and unified paradigm.


From a practical perspective, our main theorem opens up new possibilities for adapting tools and concepts of regression analysis to the DEA framework. For example, DEA currently lacks a meaningful goodness-of-fit statistic. Given the least squares formulation derived in this paper, we could apply the standard coefficient of determination

  R² = 1 − Σ_{i=1}^{n} (ε_i^DEA)² / Σ_{i=1}^{n} (y_i − ȳ)² = 1 − Σ_{i=1}^{n} ((1 − θ_i)·y_i)² / Σ_{i=1}^{n} (y_i − ȳ)²

for measuring the proportion of output variation that is explained by the DEA frontier.

Finally, the results of this paper pave the way for a truly stochastic nonparametric approach to frontier estimation that can combine the virtues of the nonparametric DEA and the parametric regression approaches. Banker and Maindiratta (1992) were the first to envision a frontier model that combines a nonparametric DEA-like frontier with stochastic inefficiency and noise terms (à la Aigner et al., 1977; and Meeusen and van den Broeck, 1977). Unfortunately, the practical estimation of their model has proved extremely difficult; there are no reported empirical applications of Banker and Maindiratta's method. The links between DEA and CNLS derived in this paper can help us to operationalize the method of Banker and Maindiratta for practical applications (see the working papers Kuosmanen, 2006a, and Kuosmanen and Kortelainen, 2007, for further discussion and examples).
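The goodness-of-fit statistic above is plain arithmetic once the DEA residuals are available; a minimal sketch with made-up residuals:

```python
import numpy as np

# Hypothetical outputs and DEA residuals (eps_i <= 0) from problem (11).
y   = np.array([1.0, 3.0, 4.0, 2.0])
eps = np.array([0.0, 0.0, 0.0, -1.0])

# Coefficient of determination: share of the output variation
# around the sample mean explained by the DEA frontier.
r2 = 1.0 - np.sum(eps ** 2) / np.sum((y - y.mean()) ** 2)
print("R^2 =", r2)   # 1 - 1/5 = 0.8 for these numbers
```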

Appendix: Proofs

Proposition 1: A more detailed proof of this proposition is presented in the unpublished paper Kuosmanen (2006b) [available from the author by request]; we here outline the main argument. Note that the objective functions of (4) and (5) are equivalent. Afriat's Theorem (Afriat, 1967, 1972) implies that the following conditions are equivalent:

(1) there exist sets of coefficients α_i and β_i ≥ 0 that satisfy the Afriat inequalities

  α_i + β_i′x_i ≤ α_h + β_h′x_i  ∀h,i ∈ {1,…,n};

(2) there exists a continuous, monotonic increasing, concave function f ∈ F₂ such that f(x_i) = α_i + β_i′x_i ∀i = 1,…,n.

The objective function of (4) depends on the value of f only at the n points x_i, i = 1,…,n. Thus, it suffices to evaluate the function f at the n observed points by using a system of supporting hyperplanes that satisfy the shape constraints. Hence, the equality s²_CNLS = s²_Afriat holds for any real-valued data set (X, y). ∎

Proposition 2: Using equation (7), it is easy to verify that

  ε_i^DEA = y_i − f_DEA(x_i)        (13)

  θ_i = f_DEA(x_i) / y_i.        (14)

Solving f_DEA(x_i) from (13) and substituting it into (14) yields θ_i = 1 − ε_i^DEA / y_i. ∎

Main Theorem: Applying the duality theory of linear programming, we find that the LP problem (8) has the following (equivalent) dual problem

  max_{α,β}  y_i − (α + β′x_i)        (15)
  s.t.  y_h ≤ α + β′x_h  ∀h = 1,…,n
        β ≥ 0

where the coefficients α, β can be interpreted as the multiplier weights (compare with the DEA multiplier problem derived by Banker et al., 1984). Introducing the inefficiency term ε, we can rewrite the LP problem (15) equivalently as

  max_{α,β,ε}  ε        (16)
  s.t.  y_i = α + β′x_i + ε
        y_h ≤ α + β′x_h  ∀h = 1,…,n
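This primal-dual pair can be verified numerically on made-up data: by strong LP duality, the envelopment problem (8) and the multiplier problem (15) attain the same optimal value for each evaluated firm. A sketch with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data; firm i0 is the one being evaluated.
x = np.array([1.0, 2.0, 4.0, 2.0])
y = np.array([1.0, 3.0, 4.0, 2.0])
n, i0 = len(x), 3

# Primal envelopment LP (8): min eps over (lambda_1..lambda_n, eps).
primal = linprog(np.r_[np.zeros(n), 1.0],
                 A_ub=np.array([np.r_[x, 0.0]]), b_ub=np.array([x[i0]]),
                 A_eq=np.array([np.r_[y, 1.0], np.r_[np.ones(n), 0.0]]),
                 b_eq=np.array([y[i0], 1.0]),
                 bounds=[(0, None)] * n + [(None, None)])

# Dual multiplier LP (15): max y_i - (alpha + beta*x_i) subject to
# y_h <= alpha + beta*x_h and beta >= 0.  linprog minimizes, so we
# minimize alpha + beta*x_i and flip the sign afterwards.
dual = linprog(np.array([1.0, x[i0]]),
               A_ub=np.column_stack([-np.ones(n), -x]), b_ub=-y,
               bounds=[(None, None), (0, None)])

print("primal optimum:", primal.fun)
print("dual optimum  :", y[i0] - dual.fun)
```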



Next, instead of solving the LP problem (16) for each firm separately, we can combine the n LP problems into one big problem and compute the efficiency scores simultaneously for all firms (cf. Kuosmanen et al., 2006). Since in DEA the efficiency scores and the multiplier weights are independently determined for each firm, we can harmlessly minimize the sum of the absolute deviations |ε_i| as

  min_{α,β,ε}  Σ_{i=1}^{n} |ε_i|        (17)
  s.t.  y_i = α_i + β_i′x_i + ε_i  ∀i = 1,…,n
        y_h ≤ α_i + β_i′x_h  ∀h,i = 1,…,n
        β_i ≥ 0  ∀i = 1,…,n

Since by construction

  ε_i ≤ 0  ∀i = 1,…,n        (18)

(note that y_i ≤ α_i + β_i′x_i for all i), we can harmlessly impose (18) as an additional constraint in (17). Clearly, the inefficient firms (for which ε_h < 0) do not influence the shape of the DEA frontier, so we can add the inefficiency components to the second constraint and write (17) equivalently as

  min_{α,β,ε}  Σ_{i=1}^{n} |ε_i|        (19)
  s.t.  y_i = α_i + β_i′x_i + ε_i  ∀i = 1,…,n
        y_h ≤ α_i + β_i′x_h + ε_h  ∀h,i = 1,…,n
        β_i ≥ 0  ∀i = 1,…,n
        ε_i ≤ 0  ∀i = 1,…,n

Now, since y_h = α_h + β_h′x_h + ε_h, we can rewrite the second constraint as

  α_h + β_h′x_h ≤ α_i + β_i′x_h  ∀h,i = 1,…,n.        (20)

As the indices i, h refer to an arbitrary pair of firms in the sample, we can harmlessly swap the subscripts and rewrite (20) as

  α_i + β_i′x_i ≤ α_h + β_h′x_i  ∀h,i = 1,…,n.        (21)


Thus, the LP problem (19) can now be equivalently written as

  min_{α,β,ε}  Σ_{i=1}^{n} |ε_i|        (22)
  s.t.  y_i = α_i + β_i′x_i + ε_i  ∀i = 1,…,n
        α_i + β_i′x_i ≤ α_h + β_h′x_i  ∀h,i = 1,…,n
        β_i ≥ 0  ∀i = 1,…,n
        ε_i ≤ 0  ∀i = 1,…,n

Finally, we note that the distance metric used for gauging the deviations from the frontier does not influence the shape or location of the frontier in any way (see Korhonen and Luptacik, 2004, for discussion). Therefore, any monotonic transformation of the deviations in the objective function does not have any impact on the optimal values of α, β, ε. To derive a least squares interpretation analogous to (5), we can apply a quadratic transformation to the deviations in the objective function to obtain the expression (11). ∎

References

Afriat, S.N., 1967, The Construction of a Utility Function from Expenditure Data, International Economic Review 8, 67-77.
Afriat, S.N., 1972, Efficiency Estimation of Production Functions, International Economic Review 13, 568-598.
Aigner, D.J., and S. Chu, 1968, On Estimating the Industry Production Function, American Economic Review 58, 826-839.
Aigner, D.J., C.A.K. Lovell, and P. Schmidt, 1977, Formulation and Estimation of Stochastic Frontier Models, Journal of Econometrics 6, 21-37.
Banker, R.D., A. Charnes, and W.W. Cooper, 1984, Some Models for Estimating Technical and Scale Inefficiencies in Data Envelopment Analysis, Management Science 30(9), 1078-1092.
Banker, R.D., and A. Maindiratta, 1992, Maximum Likelihood Estimation of Monotone and Concave Production Frontiers, Journal of Productivity Analysis 3, 401-415.
Charnes, A., W.W. Cooper, and E. Rhodes, 1978, Measuring the Efficiency of Decision Making Units, European Journal of Operational Research 2(6), 429-444.
Cobb, C.W., and P.H. Douglas, 1928, A Theory of Production, American Economic Review 18(Supplement), 139-165.
Farrell, M.J., 1957, The Measurement of Productive Efficiency, Journal of the Royal Statistical Society, Series A (General) 120(3), 253-282.
Førsund, F.R., and N. Sarafoglou, 2002, On the Origins of Data Envelopment Analysis, Journal of Productivity Analysis 17(1-2), 23-40.
Fraser, D.A.S., and H. Massam, 1989, A Mixed Primal-Dual Bases Algorithm for Regression Under Inequality Constraints: Application to Convex Regression, Scandinavian Journal of Statistics 16, 65-74.
Gattoufi, S., M. Oral, A. Kumar, and A. Reisman, 2004, Epistemology of Data Envelopment Analysis and Comparison with Other Fields of OR/MS for Relevance to Applications, Socio-Economic Planning Sciences 38(2-3), 123-140.
Groeneboom, P., G. Jongbloed, and J.A. Wellner, 2001, Estimation of a Convex Function: Characterizations and Asymptotic Theory, Annals of Statistics 29, 1653-1698.
Hanson, D.L., and G. Pledger, 1976, Consistency in Concave Regression, Annals of Statistics 4(6), 1038-1050.
Hildreth, C., 1954, Point Estimates of Ordinates of Concave Functions, Journal of the American Statistical Association 49(267), 598-619.
Korhonen, P.J., and M. Luptacik, 2004, Eco-efficiency Analysis of Power Plants: An Extension of Data Envelopment Analysis, European Journal of Operational Research 154(2), 437-446.
Kuosmanen, T., 2006a, Stochastic Nonparametric Envelopment of Data: Combining Virtues of SFA and DEA in a Unified Framework, MTT Discussion Paper 3/2006.
Kuosmanen, T., 2006b, Representation Theorem for Convex Nonparametric Least Squares, unpublished manuscript, available from the author by request.
Kuosmanen, T., L. Cherchye, and T. Sipiläinen, 2006, The Law of One Price in Data Envelopment Analysis: Restricting Weight Flexibility across Firms, European Journal of Operational Research 170(3), 735-757.
Kuosmanen, T., and M. Kortelainen, 2007, Stochastic Nonparametric Envelopment of Data: Cross-sectional Frontier Estimation Subject to Shape Constraints, University of Joensuu, Economics Discussion Paper 46.
Kuosmanen, T., G.T. Post, and S. Scholtes, 2007, Testing for Productive Efficiency in Case of Errors-in-Variables, Journal of Econometrics 136, 131-162.
Mammen, E., 1991, Nonparametric Regression under Qualitative Smoothness Assumptions, Annals of Statistics 19, 741-759.
Mammen, E., and C. Thomas-Agnan, 1999, Smoothing Splines and Shape Restrictions, Scandinavian Journal of Statistics 26, 239-252.
Matzkin, R.L., 1994, Restrictions of Economic Theory in Nonparametric Methods, Chapter 42 in R.F. Engle and D.L. McFadden (eds.), Handbook of Econometrics, Vol. IV, Elsevier.
Meeusen, W., and J. van den Broeck, 1977, Efficiency Estimation from Cobb-Douglas Production Functions with Composed Error, International Economic Review 18, 435-444.
Meyer, M.C., 1999, An Extension of the Mixed Primal-Dual Bases Algorithm to the Case of More Constraints than Dimensions, Journal of Statistical Planning and Inference 81, 13-31.
Nemirovskii, A.S., B.T. Polyak, and A.B. Tsybakov, 1985, Rates of Convergence of Nonparametric Estimates of Maximum Likelihood Type, Problems of Information Transmission 21, 258-271.
Seiford, L.M., 1996, Data Envelopment Analysis: The Evolution of the State of the Art (1978-1995), Journal of Productivity Analysis 7, 99-137.
Simar, L., and P. Wilson, 2000, Statistical Inference in Nonparametric Frontier Models: The State of the Art, Journal of Productivity Analysis 13, 49-78.
Timmer, P., 1971, Using a Probabilistic Frontier Production Function to Measure Technical Efficiency, Journal of Political Economy 79, 776-794.
Yatchew, A.J., 1998, Nonparametric Regression Techniques in Economics, Journal of Economic Literature 36, 669-721.
Yatchew, A., 2003, Semiparametric Regression for the Applied Econometrician, Cambridge University Press.


Table 1: Classification of production models and the links established in this paper

                   parametric                   nonparametric
  neoclassical     OLS                          CNLS
                   Cobb and Douglas (1928)      Hildreth (1954)
                                                Hanson and Pledger (1976)
  frontier         COLS                         DEA
                   Aigner and Chu (1968)        Farrell (1957)
                   Timmer (1971)                Charnes, Cooper, Rhodes (1978)