New Economics Papers on Econometrics
By: | Genaro Sucarrat; Álvaro Escribano Sáez |
Abstract: | Exponential models of autoregressive conditional heteroscedasticity (ARCH) are attractive in empirical analysis because they guarantee the non-negativity of volatility and because they enable richer autoregressive dynamics. However, the currently available models exhibit stability only for a limited number of conditional densities, and the available estimation and inference methods for the case where the conditional density is unknown hold only under very specific and restrictive assumptions. Here, we provide results and simple methods that readily enable consistent estimation and inference of univariate and multivariate power log-GARCH models under very general and non-restrictive assumptions when the power is fixed, via vector ARMA representations. Additionally, stability conditions are obtained under weak assumptions, and the power log-GARCH model can be viewed as nesting certain classes of stochastic volatility models, including the common ASV(1) specification. Finally, our simulations and empirical applications suggest the model class is very useful in practice.
Keywords: | Power ARCH, Exponential GARCH, Log-GARCH, Multivariate GARCH, Stochastic volatility |
JEL: | C22 C32 C51 C52 |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:we1013&r=ecm |
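A minimal sketch of the ARMA-representation idea in the univariate case: because log(sigma_t^2) is linear in past log(y^2), log(y_t^2) follows an ARMA(1,1) whose AR coefficient equals alpha + beta. All parameter values and the statsmodels-based estimation below are illustrative assumptions, not the authors' procedure, which covers vector models and the corrections needed for full consistency.

```python
# Sketch: univariate log-GARCH(1,1) estimated via its ARMA(1,1) representation.
# Synthetic data; illustrative only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
n = 2000
omega, alpha, beta = 0.05, 0.05, 0.90

# Simulate: log(sigma_t^2) = omega + alpha*log(y_{t-1}^2) + beta*log(sigma_{t-1}^2)
logsig2 = np.zeros(n)
y = np.zeros(n)
y[0] = rng.standard_normal()
for t in range(1, n):
    logsig2[t] = omega + alpha * np.log(y[t-1]**2 + 1e-12) + beta * logsig2[t-1]
    y[t] = np.exp(0.5 * logsig2[t]) * rng.standard_normal()

# ARMA representation: x_t = log(y_t^2) is ARMA(1,1) with AR root alpha + beta
x = np.log(y**2 + 1e-12)
fit = ARIMA(x, order=(1, 0, 1)).fit()
print(fit.params)  # AR(1) coefficient should be near alpha + beta = 0.95
```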
By: | Boldea, Otilia; Hall, Alastair R. |
Abstract: | In this paper, we extend Bai and Perron's (1998, Econometrica, pp. 47-78) method for detecting multiple breaks to nonlinear models. To that end, we consider a nonlinear model that can be estimated via nonlinear least squares (NLS) and features a limited number of parameter shifts occurring at unknown dates. In our framework, the break-dates are estimated simultaneously with the parameters via minimization of the residual sum of squares. Using new uniform convergence results for partial sums, we derive the asymptotic distributions of both break-point and parameter estimates and propose several instability tests. We provide simulations that indicate good finite sample properties of our procedure. Additionally, we use our methods to test for misspecification of smooth-transition models in the context of an asymmetric US federal funds rate reaction function and conclude that there is strong evidence of sudden change as well as smooth behavior. |
Keywords: | Multiple Change Points; Nonlinear Least Squares; Smooth Transition |
JEL: | C32 C13 C12 C22 |
Date: | 2010–05–20 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:23150&r=ecm |
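A sketch of the core computational step: the break date is estimated jointly with the parameters by minimizing the residual sum of squares over candidate split points, fitting the nonlinear model on each side. This toy single-break version (an exponential regression function and a plain grid search, both assumptions for illustration) omits the paper's multiple-break machinery and asymptotics.

```python
# Sketch: single break-date estimation in a nonlinear regression by minimizing
# the residual sum of squares over candidate break dates.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
n, true_break = 200, 120
x = np.linspace(0.1, 3.0, n)
f = lambda x, a, b: a * np.exp(b * x)          # nonlinear regression function
y = np.where(np.arange(n) < true_break, f(x, 1.0, 0.5), f(x, 2.0, 0.3))
y = y + 0.2 * rng.standard_normal(n)

def segment_ssr(xs, ys):
    p, _ = curve_fit(f, xs, ys, p0=[1.0, 0.1], maxfev=10000)
    return np.sum((ys - f(xs, *p))**2)

# Trim 15% at each end; fit NLS on each side of every candidate break date
candidates = range(int(0.15 * n), int(0.85 * n))
ssr = [segment_ssr(x[:k], y[:k]) + segment_ssr(x[k:], y[k:]) for k in candidates]
k_hat = list(candidates)[int(np.argmin(ssr))]
print("estimated break date:", k_hat)          # should be close to 120
```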
By: | Andrew Gordon Wilson; Zoubin Ghahramani |
Abstract: | We define a copula process which describes the dependencies between arbitrarily many random variables independently of their marginal distributions. As an example, we develop a stochastic volatility model, Gaussian Copula Process Volatility (GCPV), to predict the latent standard deviations of a sequence of random variables. To make predictions we use Bayesian inference, with the Laplace approximation, and with Markov chain Monte Carlo as an alternative. We find both methods comparable. We also find our model can outperform GARCH on simulated and financial data. And unlike GARCH, GCPV can easily handle missing data, incorporate covariates other than time, and model a rich class of covariance structures. |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1006.1350&r=ecm |
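A generative sketch of the idea behind GCPV: a Gaussian process drives the latent log-standard-deviations of a return series. The squared-exponential kernel and all parameter values are assumptions for illustration, not the authors' exact specification or their Laplace/MCMC inference.

```python
# Sketch: returns with a GP-driven latent volatility path.
import numpy as np

rng = np.random.default_rng(1)
n = 300
t = np.linspace(0, 10, n)

# Squared-exponential covariance for the latent GP f, with sigma_t = exp(f_t)
lengthscale, gp_var = 1.0, 0.25
K = gp_var * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / lengthscale**2)
f = np.linalg.cholesky(K + 1e-8 * np.eye(n)) @ rng.standard_normal(n)

sigma = np.exp(f)                      # latent standard deviations
y = sigma * rng.standard_normal(n)     # observed returns
print(sigma[:5], y[:5])
```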
By: | Marc Hallin; Yves-Caoimhin Swan; Thomas Verdebout; David Veredas |
Abstract: | Linear models with stable error densities are considered. The local asymptotic normality of the resulting model is established. We use this result, combined with Le Cam's third lemma, to obtain local powers of various classical rank tests (Wilcoxon's and van der Waerden's tests, the median test, and their counterparts for regression and analysis of variance) under α-stable laws. The same results are used to construct new rank tests achieving parametric optimality at specified stable densities. A Monte Carlo study is conducted to compare their relative performances.
Keywords: | Stable distributions; local asymptotic normality; rank tests; asymptotic relative efficiencies
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/57641&r=ecm |
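A quick Monte Carlo sketch of why rank tests are attractive under stable laws: with alpha = 1.7 the variance is infinite, and a rank-sum test tends to retain power where the t-test degrades. Sample sizes, the shift, and the choice of tests are illustrative assumptions; the paper's optimal rank tests are not implemented here.

```python
# Sketch: power of a rank test vs the t-test under alpha-stable noise.
import numpy as np
from scipy.stats import levy_stable, ranksums, ttest_ind

rng = np.random.default_rng(2)
alpha, shift, n, reps = 1.7, 0.5, 100, 500
rej_rank = rej_t = 0
for _ in range(reps):
    x = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
    y = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng) + shift
    rej_rank += ranksums(x, y).pvalue < 0.05
    rej_t += ttest_ind(x, y).pvalue < 0.05
print("power, rank test:", rej_rank / reps, " t-test:", rej_t / reps)
```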
By: | Wang, Yafeng; Graham, Brett |
Abstract: | We propose simulation-based estimation for discrete sequential-move games of perfect information, relying on simulated moments and importance sampling. We use importance sampling techniques not only to reduce the computational burden and simulation error, but also to overcome non-smoothness problems. The model is identified with only weak scale and location normalizations, and Monte Carlo evidence demonstrates that the estimator can perform well in moderately sized samples.
Keywords: | Game-Theoretic Econometric Models; Sequential-Move Game; Method of Simulated Moments; Importance Sampling; Conditional Moment Restrictions. |
JEL: | C13 C35 C01 C72 |
Date: | 2010–07–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:23153&r=ecm |
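A toy sketch of the importance-sampling trick for smoothing simulated moments: in a latent-variable model y = 1{z > 0}, z ~ N(mu, 1), drawing z once from a fixed proposal and re-weighting makes the simulated choice probability smooth in mu, so the simulated-moments objective can be minimized with standard optimizers. The model, proposal, and parameter values are assumptions for illustration; the paper's game-theoretic setting is far richer.

```python
# Sketch: importance sampling yields a simulated moment that is smooth in mu.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
mu_true, n, S = 0.7, 5000, 20000
y = (rng.standard_normal(n) + mu_true > 0).astype(float)

z = rng.normal(0.0, 2.0, size=S)            # fixed proposal draws, N(0, 4)
g = norm.pdf(z, 0.0, 2.0)                   # proposal density

def p_sim(mu):
    w = norm.pdf(z, mu, 1.0) / g            # importance weights, smooth in mu
    return np.mean(w * (z > 0))             # unbiased for P(z > 0 | mu)

obj = lambda mu: (y.mean() - p_sim(mu))**2  # simulated moment condition
res = minimize_scalar(obj, bounds=(-3, 3), method="bounded")
print("mu_hat:", res.x)                     # close to 0.7
```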
By: | Yoshitsugu Kitazawa (Faculty of Economics, Kyushu Sangyo University) |
Abstract: | In this paper, a simple transformation is proposed for the fixed effects logit model, from which valid moment conditions can be generated, including the first-order condition for one of the conditional MLEs proposed by Chamberlain (1980). Some Monte Carlo experiments are carried out for the GMM estimator based on the transformation.
Keywords: | fixed effects logit; conditional logit estimator; hyperbolic transformation; moment conditions; GMM; Monte Carlo experiments |
JEL: | C23 |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:kyu:dpaper:43&r=ecm |
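For context, a sketch of the Chamberlain (1980) conditional logit that the paper's moment conditions relate to, in the simplest T = 2 case: conditioning on y_i1 + y_i2 = 1 eliminates the fixed effect. The data-generating values are assumptions for illustration; the paper's transformation and GMM estimator are not reproduced here.

```python
# Sketch: conditional logit for T = 2 panels; only "switchers" are informative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
N, beta_true = 3000, 1.0
a = rng.standard_normal(N)                       # fixed effects
x = rng.standard_normal((N, 2))
p = 1 / (1 + np.exp(-(a[:, None] + beta_true * x)))
yy = (rng.random((N, 2)) < p).astype(float)

switch = yy.sum(axis=1) == 1
d = yy[switch, 1]                                # 1 if (0,1), 0 if (1,0)
dx = x[switch, 1] - x[switch, 0]

# P(y = (0,1) | y_1 + y_2 = 1) = exp(b*dx) / (1 + exp(b*dx))
def negll(b):
    eta = b * dx
    return -np.sum(d * eta - np.log1p(np.exp(eta)))

print("beta_hat:", minimize(negll, x0=[0.0]).x)  # close to 1.0
```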
By: | Browning, Martin; Carro, Jesus |
Abstract: | We consider dynamic discrete choice models with heterogeneity in both the levels parameter and the state dependence parameter. We first present an empirical analysis that motivates the theoretical analysis which follows. The theoretical analysis considers a simple two-state, first-order Markov chain model without covariates in which both transition probabilities are heterogeneous. Using such a model we are able to derive exact small sample results for bias and mean squared error (MSE). We discuss the maximum likelihood approach and derive two novel estimators. The first is a bias-corrected version of the Maximum Likelihood Estimator (MLE), while the second, which we term MIMSE, minimizes the integrated mean squared error. The MIMSE estimator is always well defined, has a closed-form expression and inherits the desirable large sample properties of the MLE. Our main finding is that in almost all short panel contexts the MIMSE significantly outperforms the other two estimators in terms of MSE. A final section extends the MIMSE estimator to allow for exogenous covariates.
JEL: | C23 |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:ner:oxford:http://economics.ouls.ox.ac.uk/14764/&r=ecm |
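A sketch of the problem the paper addresses: unit-by-unit MLEs of heterogeneous transition probabilities are biased in short panels. The chain length and transition probabilities below are assumptions for illustration; the paper's bias-corrected and MIMSE estimators are not implemented.

```python
# Sketch: small-T bias of the MLE of a transition probability in a
# two-state Markov chain (each chain mimics one unit of a short panel).
import numpy as np

rng = np.random.default_rng(5)
G, H, T, reps = 0.3, 0.6, 8, 20000     # P(0->1) = G, P(1->1) = H

est = []
for _ in range(reps):
    y = np.zeros(T, dtype=int)
    for t in range(1, T):
        p = H if y[t-1] == 1 else G
        y[t] = rng.random() < p
    prev, curr = y[:-1], y[1:]
    est.append(curr[prev == 0].mean())  # per-unit MLE of G (y[0]=0, so defined)

print("mean MLE of G:", np.mean(est), "true G:", G)  # noticeable upward bias
```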
By: | Dominique Guegan (Centre d'Economie de la Sorbonne - Paris School of Economics); Pierre-André Maugis (Centre d'Economie de la Sorbonne - Paris School of Economics) |
Abstract: | We present a new recursive algorithm to construct vine copulas based on an underlying tree structure. This new structure is useful for computing multivariate distributions of dependent random variables. We prove the asymptotic normality of the vine copula parameter estimator and show that all vine copula parameter estimators have comparable variance. Both results are crucial to motivate any econometric work based on vine copulas. We provide an application of vine copulas to estimating the VaR of a portfolio, and show that they offer a significant improvement over a benchmark estimator based on a GARCH model.
Keywords: | Vine copulas, conditional copulas, risk management.
JEL: | D81 C10 C40 C52 |
Date: | 2010–05 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:10040&r=ecm |
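A sketch of the two-dimensional building block: a full vine stacks pair-copulas along a tree, but the mechanics of copula-based VaR already show up in the bivariate case. The Gaussian pair-copula, the toy "returns," and the equal-weight portfolio are assumptions for illustration, not the authors' recursive vine construction.

```python
# Sketch: portfolio VaR from a bivariate Gaussian copula with empirical marginals.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
# Toy "returns" standing in for real data
r = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)

# Fit the Gaussian pair-copula via the normal scores of the ranks
u = (np.argsort(np.argsort(r, axis=0), axis=0) + 0.5) / len(r)
rho = np.corrcoef(norm.ppf(u).T)[0, 1]

# Simulate from the copula, map back through the empirical quantiles
sim = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100000)
u_sim = norm.cdf(sim)
port = sum(np.quantile(r[:, j], u_sim[:, j]) for j in range(2)) / 2
print("1% VaR:", -np.quantile(port, 0.01))
```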
By: | Carlos Capistrán; Allan Timmermann; Marco Aiolfi |
Abstract: | We consider combinations of subjective survey forecasts and model-based forecasts from linear and non-linear univariate specifications as well as multivariate factor-augmented models. Empirical results suggest that a simple equal-weighted average of survey forecasts outperforms the best model-based forecasts for a majority of macroeconomic variables and forecast horizons. Additional improvements can in some cases be gained by using a simple equal-weighted average of survey and model-based forecasts. We also provide an analysis of the importance of model instability for explaining gains from forecast combination. Analytical and simulation results uncover break scenarios in which forecast combinations outperform the best individual forecasting model.
Keywords: | Factor Based Forecasts, Non-linear Forecasts, Structural Breaks, Survey Forecasts, Univariate Forecasts. |
JEL: | C53 E |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:bdm:wpaper:2010-04&r=ecm |
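The equal-weighted combination at the heart of the paper's empirical finding is a one-liner; a minimal sketch with hypothetical forecasters (all parameter values assumed for illustration):

```python
# Sketch: equal-weighted forecast combination vs individual forecasts by RMSE.
import numpy as np

rng = np.random.default_rng(7)
T = 200
target = rng.standard_normal(T)

# Three hypothetical forecasters: target plus idiosyncratic (and biased) noise
f1 = target + 0.8 * rng.standard_normal(T)
f2 = target + 0.8 * rng.standard_normal(T)
f3 = target + 0.3 + 0.8 * rng.standard_normal(T)

combo = (f1 + f2 + f3) / 3.0
rmse = lambda f: np.sqrt(np.mean((target - f)**2))
print([round(rmse(f), 3) for f in (f1, f2, f3, combo)])  # combo usually wins
```

Averaging diversifies the forecasters' idiosyncratic errors, which is why the simple mean is hard to beat when individual forecast errors are weakly correlated.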
By: | Nezar Bennala; Marc Hallin; Davy Paindaveine |
Keywords: | Random Effects; Panel Data; Rank Tests; Local Asymptotic Normality |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/57643&r=ecm |
By: | Carolina Rodríguez Zamora; Jean Lim |
Abstract: | This paper extends recent research on the behaviour of the t-statistic in a long-horizon regression (LHR). We assume that the explanatory and dependent variables are generated according to the following models: a linear trend stationary process, a broken trend stationary process, a unit root process, and a process with a double unit root. We show that, both asymptotically and in finite samples, the presence of spurious LHR depends on the assumed model for the variables. We propose an asymptotically correct inferential procedure for testing the null hypothesis of no relationship in a LHR, which works whether the variables have a long-run relationship or not. Our theoretical results are applied to an international data set on money and output in order to test for long-run monetary neutrality. Under our new approach and using bootstrap methods, we find that neutrality holds for all countries. |
Keywords: | Long-horizon regression, asymptotic theory, deterministic and stochastic trends, unit roots, structural breaks, long-run monetary neutrality. |
JEL: | H21 J22 D13 |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:bdm:wpaper:2010-05&r=ecm |
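A sketch of the spurious-LHR phenomenon the paper studies: regressing k-period overlapping sums of one independent random walk on another and reading off the naive t-statistic. Horizon and sample size are assumptions for illustration; the paper's corrected inferential procedure is not implemented here.

```python
# Sketch: spurious t-statistics in a long-horizon regression of two
# independent random walks, using k-period overlapping sums.
import numpy as np

rng = np.random.default_rng(8)
T, k = 500, 30
y = np.cumsum(rng.standard_normal(T))    # independent unit-root series
x = np.cumsum(rng.standard_normal(T))

# k-period overlapping long-horizon changes
yk = y[k:] - y[:-k]
xk = x[k:] - x[:-k]

X = np.column_stack([np.ones_like(xk), xk])
b = np.linalg.lstsq(X, yk, rcond=None)[0]
e = yk - X @ b
s2 = e @ e / (len(yk) - 2)
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
print("naive t-stat:", b[1] / se)  # often far outside +/- 2 despite no relation
```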
By: | Ayesha, Nazuk; Sadia, Nadir; Javid, Shabbir
Abstract: | In this paper we weight and transform various estimators due to Prasad (1986) and Lui (1991). We introduce some ratio and ratio-type estimators under a weighting, transformation, and model-based approach. The proposed estimators are more efficient than those of Chakrabarty (1979), Singh and Singh (1997), Singh (2002) and Singh et al. (2006).
Keywords: | model based approach; percent relative efficiency; product estimator; ratio estimator; regression estimator; simple mean unit estimator |
JEL: | C13 C42 |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:23243&r=ecm |
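For readers outside survey sampling, a sketch of the basic object being weighted and transformed: the classical ratio estimator of a population mean, which exploits a correlated auxiliary variable with known mean. The population and sampling design below are assumptions for illustration, not the paper's proposed estimators.

```python
# Sketch: ratio estimator of a population mean vs the plain sample mean.
import numpy as np

rng = np.random.default_rng(9)
Npop, n, reps = 10000, 100, 5000
x = rng.gamma(2.0, 2.0, Npop)                 # auxiliary variable, mean known
y = 3.0 * x + rng.normal(0, 2.0, Npop)        # study variable, correlated with x
Xbar, Ybar = x.mean(), y.mean()

plain, ratio = [], []
for _ in range(reps):
    idx = rng.choice(Npop, n, replace=False)
    plain.append(y[idx].mean())
    ratio.append(y[idx].mean() * Xbar / x[idx].mean())  # ratio estimator

mse = lambda est: np.mean((np.array(est) - Ybar)**2)
print("MSE plain:", mse(plain), " MSE ratio:", mse(ratio))  # ratio is smaller here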
By: | Doris, Aedin (National University of Ireland, Maynooth); O'Neill, Donal (National University of Ireland, Maynooth); Sweetman, Olive (National University of Ireland, Maynooth) |
Abstract: | In this paper we study the performance of the GMM estimator in the context of the covariance structure of earnings. Using analytical and Monte Carlo techniques we examine the sensitivity of parameter identification to key features such as panel length, sample size, the degree of persistence of earnings shocks and the evolution of inequality over time. We show that the interaction of transitory persistence with the time pattern of inequality determines identification in these models and offer some practical recommendations that follow from our findings. |
Keywords: | GMM, permanent and transitory inequality |
JEL: | J31 D31 |
Date: | 2010–05 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4952&r=ecm |
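A sketch of how covariance-structure-of-earnings models are identified in practice: the model-implied T x T covariance matrix is matched to its empirical counterpart by minimum distance. The permanent-plus-AR(1) specification and all parameter values are assumptions for illustration, simpler than the models whose identification the paper analyzes.

```python
# Sketch: equally weighted minimum distance for y_it = a_i + v_it, v_it AR(1).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(16)
N, T = 2000, 8
sig_a, sig_e, rho = 1.0, 0.8, 0.6

a = sig_a * rng.standard_normal((N, 1))
v = np.zeros((N, T))
v[:, 0] = sig_e / np.sqrt(1 - rho**2) * rng.standard_normal(N)
for t in range(1, T):
    v[:, t] = rho * v[:, t-1] + sig_e * rng.standard_normal(N)
y = a + v

C_emp = np.cov(y.T)                          # empirical T x T covariance matrix

def C_theo(p):
    sa2, se2, r = p
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    return sa2 + se2 / (1 - r**2) * r**lags  # model-implied covariances

obj = lambda p: np.sum((C_emp - C_theo(p))**2)
res = minimize(obj, x0=[0.5, 0.5, 0.3],
               bounds=[(0, None), (0, None), (-0.95, 0.95)])
print("sigma_a^2, sigma_e^2, rho:", res.x)   # near (1.0, 0.64, 0.6)
```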
By: | Marina Theodosiou (Central Bank of Cyprus) |
Abstract: | Combination techniques and decomposition procedures have been applied to time series forecasting to enhance prediction accuracy and to facilitate the analysis of data, respectively. However, the restrictive complexity of some combination techniques and the difficulties associated with applying the decomposition results to the extrapolation of data, mainly due to the large variability involved in economic and financial time series, have limited their application and compromised their development. This paper is a re-examination of the benefits and limitations of decomposition and combination techniques in the area of forecasting, and a contribution to the field with a new forecasting methodology. The new methodology is based on the disaggregation of time series components through the STL decomposition procedure, the extrapolation of linear combinations of the disaggregated sub-series, and the reaggregation of the extrapolations to obtain an estimate of the global series. Applying the methodology to data from the NN3 and M1 Competition series, the results suggest that it can outperform other competing statistical techniques. The power of the method lies in its ability to perform consistently well, irrespective of the characteristics, underlying structure and level of noise of the data.
Keywords: | ARIMA models, combining forecasts, decomposition, error measures, evaluating forecasts, forecasting competitions, time series |
JEL: | C53 |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:cyb:wpaper:2010-4&r=ecm |
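A simplified sketch of the decompose-extrapolate-reaggregate mechanics: STL-decompose a monthly series, extrapolate the trend (drift) and the seasonality (seasonal naive), and recombine. The paper extrapolates linear combinations of the sub-series; the extrapolation rules below are assumptions chosen for brevity.

```python
# Sketch: STL decomposition, component-wise extrapolation, reaggregation.
import numpy as np
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(10)
n, period, h = 144, 12, 12
t = np.arange(n)
y = 0.05 * t + 2 * np.sin(2 * np.pi * t / period) + 0.5 * rng.standard_normal(n)

res = STL(y, period=period).fit()
trend, seasonal = res.trend, res.seasonal

drift = (trend[-1] - trend[0]) / (n - 1)
trend_fc = trend[-1] + drift * np.arange(1, h + 1)      # trend: drift extrapolation
season_fc = seasonal[-period:][np.arange(h) % period]   # seasonal naive
forecast = trend_fc + season_fc                         # reaggregate
print(forecast)
```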
By: | Theodore Panagiotidis (Department of Economics, University of Macedonia) |
Abstract: | This paper employs a local-information, nearest neighbour forecasting methodology to test for evidence of nonlinearity in financial time series. Evidence from well-known data generating processes is provided and compared with returns from the Athens stock exchange, given the in-sample evidence of nonlinear dynamics that has appeared in the literature. Nearest neighbour forecasts fail to produce more accurate forecasts than a simple AR model. This does not substantiate the presence of in-sample nonlinearity in the series.
Keywords: | nearest neighbour, nonlinearity |
JEL: | C22 C53 G10 |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:mcd:mcddps:2010_08&r=ecm |
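A sketch of the nearest-neighbour forecasting idea: embed the series in delay vectors, find the k historical patterns closest to the current one, and average their successors. Embedding dimension, k, and the placeholder series are assumptions for illustration.

```python
# Sketch: k-nearest-neighbour one-step-ahead forecasting via delay embedding.
import numpy as np

rng = np.random.default_rng(11)
x = np.diff(rng.standard_normal(600).cumsum() * 0.1)  # placeholder "returns"

m, k = 3, 10                                 # embedding dimension, neighbours

def knn_forecast(history, m, k):
    # library of delay vectors and their one-step-ahead successors
    lib = np.array([history[i:i + m] for i in range(len(history) - m)])
    succ = history[m:]
    dist = np.linalg.norm(lib - history[-m:], axis=1)
    return succ[np.argsort(dist)[:k]].mean()  # average successor of k nearest

test_idx = range(500, len(x))
preds = np.array([knn_forecast(x[:i], m, k) for i in test_idx])
actual = x[500:]
print("kNN RMSE:", np.sqrt(np.mean((preds - actual)**2)))
```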
By: | Greg Hannsgen |
Abstract: | The process of constructing impulse-response functions (IRFs) and forecast-error variance decompositions (FEVDs) for a structural vector autoregression (SVAR) usually involves a factorization of an estimate of the error-term variance-covariance matrix V. Examining residuals from a monetary VAR, this paper finds evidence suggesting that all of the variances in V are infinite. Specifically, this study estimates alpha-stable distributions for the reduced-form error terms. The ML estimates of the residuals' characteristic exponents "alpha" range from 1.5504 to 1.7734, with the Gaussian case lying outside 95 percent asymptotic confidence intervals for all six equations of the VAR. Variance-stabilized P-P plots show that the estimated distributions fit the residuals well. Results for subsamples are varied, while GARCH(1,1) filtering yields standardized shocks that are also all likely to be non-Gaussian alpha stable. When one or more error terms have infinite variance, V cannot be factored. Moreover, by Proposition 1, the reduced-form DGP cannot be transformed, using the required nonsingular matrix, into an appropriate system of structural equations with orthogonal, or even finite-variance, shocks. This result holds with arbitrary sets of identifying restrictions, including even the null set. Hence, with one or more infinite-variance error terms, structural interpretation of the reduced-form VAR within the standard SVAR model is impossible. |
Keywords: | Structural Vector Autoregression; VAR; Levy-stable Distribution; Infinite Variance; Monetary Policy Shocks; Heavy-tailed Error Terms; Factorization; Impulse Response Function; Transformability Problem |
JEL: | C32 C46 E30 E52 |
Date: | 2010–05 |
URL: | http://d.repec.org/n?u=RePEc:lev:wrkpap:wp_596&r=ecm |
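A sketch of what infinite variance looks like in finite samples, using an alpha in the range the paper estimates: the running sample variance of alpha-stable draws never settles down as the sample grows, which is why V cannot be factored. Parameter values are illustrative assumptions.

```python
# Sketch: the sample variance of alpha-stable draws (alpha < 2) keeps drifting.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(12)
x = levy_stable.rvs(1.65, 0.0, size=200000, random_state=rng)

for n in (1000, 10000, 50000, 200000):
    print(n, np.var(x[:n]))  # keeps growing; compare with a Gaussian sample
```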
By: | J Paul Dunne (Department of Economics, University of the West of England); Ron P. Smith (Birkbeck College, University of London) |
Abstract: | A large literature has used tests for Granger (1969) non-causality, GNC, to examine the interaction of military spending with the economy. Such tests answer a specific though quite limited question: can one reject the null hypothesis that one variable does not help predict another? If this null is rejected, there is said to be Granger causality, GC. Although the limitations of GNC tests are well known, they are often not emphasised in the applied literature and so may be forgotten. This paper considers the econometric and methodological issues involved and illustrates them with data for the US and other countries. There are three main issues. First, the tests may not be informative about the substantive issue, the interaction of military expenditure and the economy. The difficulty is that Granger causality, incremental predictability, does not correspond to the usual notion of economic causality. To determine the relationship of the two notions of causality requires an identified structural model. Second, the tests are very sensitive to specification. GNC testing is usually done in the context of a vector autoregression, VAR, and the test results are sensitive to the variables and deterministic terms included in the VAR, lag length, sample or observation window used, treatment of integration and cointegration and level of significance. Statistical criteria may not be very informative about these choices. Third, since the parameters are not structural, the test results may not be stable over different time periods or different countries. |
Keywords: | Military spending; economic growth; causality; VAR; conflict-affected states
JEL: | H56 C01 O40 |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:uwe:wpaper:1007&r=ecm |
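A sketch of one specification issue the paper stresses, the sensitivity of GNC tests to lag length, using statsmodels' standard test on simulated data in which the effect operates only at lag 4 (the DGP and lag choices are assumptions for illustration):

```python
# Sketch: Granger non-causality test p-values vary sharply with lag length.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(13)
T = 200
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(4, T):
    y[t] = 0.4 * y[t-1] + 0.25 * x[t-4] + rng.standard_normal()  # effect at lag 4

data = np.column_stack([y, x])   # tests whether x Granger-causes y
for lags in (1, 2, 4, 8):
    res = grangercausalitytests(data, maxlag=[lags], verbose=False)
    print(lags, round(res[lags][0]["ssr_ftest"][1], 4))  # p-value by lag choice
```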
By: | Kajal Lahiri; Antony Davies; Xuguang Sheng |
Abstract: | With the proliferation of quality multi-dimensional surveys, it becomes increasingly important for researchers to employ an econometric framework in which these data can be properly analyzed and put to their maximum use. In this chapter we have summarized such a framework developed in Davies and Lahiri (1995, 1999), and illustrated some of the uses of these multi-dimensional panel data. In particular, we have characterized the adaptive expectations mechanism in the context of broader rational and implicit expectations hypotheses, and suggested ways of testing one hypothesis over the others. We find that, under the adaptive expectations model, a forecaster who fully adapts to new information is equivalent to a forecaster whose forecast bias increases linearly with the forecast horizon. A multi-dimensional forecast panel also provides the means to distinguish between anticipated and unanticipated changes in the forecast target as well as the volatilities associated with the anticipated and unanticipated changes. We show that a proper identification of anticipated changes and their perceived volatilities is critical to the correct understanding and estimation of forecast uncertainty. In the absence of such rich forecast data, researchers have typically used the variance of forecast errors as proxies for shocks. It is the perceived volatility of the anticipated change, and not the (subsequently observed) volatility of the target variable or the unanticipated change, that should condition forecast uncertainty. This is because forecast uncertainty is formed when a forecast is made, and hence anything that was unknown to the forecaster when the forecast was made should not be a factor in determining forecast uncertainty. This finding has important implications for how to estimate forecast uncertainty in real time and how to construct a measure of average historical uncertainty, cf. Lahiri and Sheng (2010a). Finally, we show how the Rational Expectations hypothesis should be tested by constructing an appropriate variance-covariance matrix of the forecast errors when a specific type of multi-dimensional panel data is available.
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:nya:albaec:10-07&r=ecm |
By: | Pfarr, Christian; Schmid, Andreas; Schneider, Udo |
Abstract: | Estimation procedures for ordered categories usually assume that the estimated coefficients of independent variables do not vary between the categories (the parallel-lines assumption). This view neglects possible heterogeneous effects of some explanatory factors. This paper describes the use of an autofit option for identifying variables that meet the parallel-lines assumption when estimating a random effects generalized ordered probit model. We combine the test procedure developed by Richard Williams (gologit2) with the random effects estimation command regoprob by Stefan Boes.
Keywords: | generalized ordered probit; panel data; autofit; self-assessed health |
JEL: | C87 C23 C25 I10 |
Date: | 2010–06–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:23203&r=ecm |
By: | Pierre Del Moral (INRIA Bordeaux - Sud-Ouest - ALEA - INRIA - Université de Bordeaux - CNRS : UMR5251); Peng Hu (INRIA Bordeaux - Sud-Ouest - ALEA - INRIA - Université de Bordeaux - CNRS : UMR5251); Nadia Oudjane (LAGA - Laboratoire Analyse, Géométrie et Applications - Universit´e Paris 13 - Université Paris-Nord - Paris XIII, EDF R&D - EDF); Bruno Rémillard (MQG - Méthodes Quantitatives de Gestion - HEC-Montréal) |
Abstract: | We analyze the robustness properties of the Snell envelope backward evolution equation for discrete time models. We provide a general robustness lemma and apply this result to a series of approximation methods, including cut-off type approximations, Euler discretization schemes, interpolation models, quantization tree models, and the Stochastic Mesh method of Broadie-Glasserman. In each situation, we provide non-asymptotic convergence estimates, including Lp-mean error bounds and exponential concentration inequalities. In particular, this analysis allows us to recover existing convergence results for the quantization tree method and to significantly improve the rates of convergence obtained for the Stochastic Mesh estimator of Broadie-Glasserman. In the final part of the article, we propose a genealogical tree-based algorithm, based on a mean field approximation of the reference Markov process in terms of a neutral-type genetic model. In contrast to Broadie-Glasserman Monte Carlo models, the computational cost of this new stochastic particle approximation is linear in the number of sampled points.
Keywords: | Snell envelope; optimal stopping; American option pricing; genealogical trees; interacting particle model |
Date: | 2010–05–28 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:inria-00487103_v3&r=ecm |
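A sketch of the Snell envelope backward recursion U_n = max(payoff_n, E[U_{n+1} | F_n]) in its simplest concrete setting, a binomial tree for an American put. The tree model and parameters are assumptions for illustration; the paper studies the robustness of this recursion under much richer approximation schemes.

```python
# Sketch: Snell envelope backward induction on a binomial tree (American put).
import numpy as np

S0, K, r, sigma, T, N = 100.0, 100.0, 0.05, 0.2, 1.0, 200
dt = T / N
u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
q = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
disc = np.exp(-r * dt)

# terminal stock prices and payoffs
S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
U = np.maximum(K - S, 0.0)

for n in range(N - 1, -1, -1):
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    cont = disc * (q * U[:-1] + (1 - q) * U[1:])   # E[U_{n+1} | F_n]
    U = np.maximum(K - S, cont)                    # Snell envelope step
print("American put value:", U[0])
```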
By: | Samantha Kleinberg; Petter N. Kolm; Bud Mishra |
Abstract: | We describe a new framework for causal inference and its application to return time series. In this system, causal relationships are represented as logical formulas, allowing us to test arbitrarily complex hypotheses in a computationally efficient way. We simulate return time series using a common factor model, and show that on this data the method described significantly outperforms Granger causality (a primary approach to this type of problem). Finally we apply the method to real return data, showing that the method can discover novel relationships between stocks. The approach described is a general one that will allow combination of price and volume data with qualitative information at varying time scales (from interest rate announcements, to earnings reports to news stories) shedding light on some of the previously invisible common causes of seemingly correlated price movements. |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1006.1791&r=ecm |
By: | Matteo Barigozzi; Christian T. Brownlees; Giampiero M. Gallo; David Veredas |
Abstract: | When observed over a large panel, measures of risk (such as realized volatilities) usually exhibit a secular trend around which individual risks cluster. In this article we propose a vector Multiplicative Error Model achieving a decomposition of each risk measure into a common systematic component and an idiosyncratic component, while allowing for contemporaneous dependence in the innovation process. As a consequence, we can assess how much of the current asset risk is due to a system-wide component, and measure the persistence of the deviation of an asset-specific risk from that common level. We develop an estimation technique, based on a combination of seminonparametric methods and copula theory, that is suitable for large-dimensional panels. The model is applied to two panels of daily realized volatilities between 2001 and 2008: the SPDR Sectoral Indices of the S&P500 and the constituents of the S&P100. Similar results are obtained on the two sets in terms of the reverting behavior of the common nonstationary component and of the idiosyncratic dynamics, with a variable speed that appears to be sector dependent.
Keywords: | Systematic risk; Multiplicative Error Model; idiosyncratic risk; copula; seminonparametric |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/57645&r=ecm |
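A sketch of the univariate building block: a Multiplicative Error Model for a positive-valued series such as realized volatility, estimated by exponential quasi-maximum likelihood. The parameter values and gamma innovation are assumptions for illustration; the paper's vector MEM with a common component and copula-linked innovations is considerably richer.

```python
# Sketch: MEM x_t = mu_t * eps_t, mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(17)
n = 1500
omega, alpha, beta = 0.1, 0.2, 0.7
x = np.zeros(n); mu = np.zeros(n)
mu[0] = omega / (1 - alpha - beta)
x[0] = mu[0] * rng.gamma(4.0, 0.25)
for t in range(1, n):
    mu[t] = omega + alpha * x[t-1] + beta * mu[t-1]
    x[t] = mu[t] * rng.gamma(4.0, 0.25)      # unit-mean multiplicative error

def negql(p):
    w, a, b = p
    m = np.empty(n); m[0] = x.mean()
    for t in range(1, n):
        m[t] = w + a * x[t-1] + b * m[t-1]
    return np.sum(np.log(m) + x / m)         # exponential quasi-likelihood

res = minimize(negql, x0=[0.2, 0.1, 0.6], bounds=[(1e-6, None)] * 3)
print(res.x)                                 # near (0.1, 0.2, 0.7)
```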
By: | David Brée; Damien Challet; Pier Paolo Peirano
Abstract: | We show that log-periodic power-law (LPPL) functions are intrinsically very hard to fit to time series. This comes from their sloppiness: the squared residuals depend very much on some combinations of parameters and very little on others. The time of singularity, which is supposed to give an estimate of the day of the crash, belongs to the latter category. We discuss in detail why and how the fitting procedure must take into account the sloppy nature of this kind of model. We then test the reliability of LPPLs on synthetic AR(1) data replicating the Hang Seng 1987 crash and show that even this case is borderline as regards predictability of the divergence time. We finally argue that current methods used to estimate a probabilistic time window for the divergence time are likely to be over-optimistic.
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1006.2010&r=ecm |
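A sketch that makes the sloppiness easy to experience: fitting an LPPL with generic nonlinear least squares, where convergence depends delicately on starting values and bounds, and the critical time tc is only weakly identified. True values, starting values, and bounds are all assumptions for illustration, not the fitting protocol the paper analyzes.

```python
# Sketch: LPPL fit with scipy's nonlinear least squares on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, A, B, C, m, omega, phi, tc):
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)

rng = np.random.default_rng(14)
t = np.linspace(0, 9, 400)
true = (2.0, -0.5, 0.05, 0.5, 8.0, 1.0, 10.0)
y = lppl(t, *true) + 0.01 * rng.standard_normal(len(t))

p0 = (2.0, -0.4, 0.1, 0.6, 7.0, 0.5, 10.5)     # starting values matter a lot
lb = [0.0, -2.0, -1.0, 0.1, 1.0, 0.0, 9.05]    # keep tc beyond the sample end
ub = [5.0,  0.0,  1.0, 0.9, 15.0, 6.3, 12.0]
popt, _ = curve_fit(lppl, t, y, p0=p0, bounds=(lb, ub), maxfev=50000)
print("tc estimate:", popt[-1], "(true 10.0)")
```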
By: | Gianni Amisano (European Central Bank, DG Research, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Gabriel Fagan (European Central Bank, DG Research, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) |
Abstract: | We develop a time-varying transition probabilities Markov Switching model in which inflation is characterised by two regimes (high and low inflation). Using Bayesian techniques, we apply the model to the euro area, Germany, the US, the UK and Canada for data from the 1960s up to the present. Our estimates suggest that a smoothed measure of broad money growth, corrected for real-time estimates of trend velocity and potential output growth, has important leading indicator properties for switches between inflation regimes. Thus money growth provides an important early warning indicator for risks to price stability.
Keywords: | money growth, inflation regimes, early warning, time varying transition probabilities, Markov Switching model, Bayesian inference.
JEL: | C11 C53 E31
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101207&r=ecm |
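A simplified sketch of the two-regime setup: a Markov switching model for inflation with regime-specific mean and variance, estimated by maximum likelihood in statsmodels. Note the hedge: statsmodels supports only fixed transition probabilities, whereas the paper's Bayesian model lets them vary with money growth; all simulated values are assumptions.

```python
# Sketch: two-regime Markov switching model with switching mean and variance.
import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(15)
# Simulate inflation-like data: low regime around 2, high regime around 8
n = 400
s = np.zeros(n, dtype=int)
for t in range(1, n):
    stay = 0.97 if s[t-1] == 0 else 0.95
    s[t] = s[t-1] if rng.random() < stay else 1 - s[t-1]
y = np.where(s == 0, 2.0, 8.0) + np.where(s == 0, 0.5, 1.5) * rng.standard_normal(n)

mod = MarkovRegression(y, k_regimes=2, switching_variance=True)
res = mod.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:10, 1])  # P(high-inflation regime)
```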
By: | Martha Banbura; Domenico Giannone; Lucrezia Reichlin |
Abstract: | Economists have imperfect knowledge of the present state of the economy and even of the recent past. Many key statistics are released with a long delay and are subsequently revised. As a consequence, unlike weather forecasters, who know what the weather is today and only have to predict the weather tomorrow, economists have to forecast the present and even the recent past. The problem of predicting the present, the very near future and the very recent past is labelled nowcasting and is the subject of this paper.
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/57648&r=ecm |
By: | Aldrich, John |
Abstract: | At the beginning of the 20th century economists and psychologists began to use the statistical methods developed by the English biometricians. This paper sketches the development of psychometrics and econometrics out of biometry and makes some comparisons between the three fields. The period covered is 1895-1925.
Keywords: | History of econometrics, statistics, biometry, factor analysis, path analysis
JEL: | B816
Date: | 2010–06–01 |
URL: | http://d.repec.org/n?u=RePEc:stn:sotoec:1010&r=ecm |