
Integration and Cointegration of Variables (ICV)



Abstract 

In this report, we present an overview of the notions of stationarity (integration) and cointegration of variables. Although these concepts occupy an important place in statistics and econometrics, they remain somewhat vague for many researchers. This report therefore sets out only the essentials, from both a theoretical and an empirical point of view, on the integration and cointegration of variables.

Keywords: ARDL, Box-Jenkins, Engle & Granger (1987), Johansen (1988)

 Introduction

A temporal variable, whether a short- or long-memory process, requires, before being analyzed, a preliminary study that identifies its main statistical characteristics: among others, its trend, its seasonality, its stationarity, and its probability law or density. One of the most important of these questions, in most cases, is that of the stationarity of the series, and several procedures are available for examining it. In addition, the study of cointegrated variables, in the sense of a stationary linear combination, has become pervasive in econometric work. To avoid presenting an illusory model with non-standard coefficients, the literature offers a number of tools to circumvent this major obstacle in econometrics. In the following sections, we first present the stationarity of a variable in a concise manner and then discuss the notion of cointegration.



I. Integration of Variables (IV)

One of the fundamental concepts in statistics and econometrics is that of the integration of variables. What exactly is the integration of variables? In statistics, the idea of non-stationarity is more commonly used than that of integration, and the meaning of the latter varies from one discipline to another. A temporal process whose behaviour does not depend on time is said to be stationary. Graphically (chronogram and correlogram), this appears as a concentration of the series around its mean value (mathematical expectation) and a rapid decay of the autocorrelation function. In terms of probability, the joint distribution (law or density) of any k consecutive observations is the same as that of the k observations shifted by one period; in this sense, the process is stationary in the strict sense (also called strong stationarity). Put differently, whatever the moment t considered, the behaviour of the series between t and t + 1 is not influenced by the temporal reference. From this point of view, its properties, also known as the "stationarity conditions", namely a constant mean (expectation), a constant variance and an autocovariance cov(x_t, x_{t-k}) depending only on the lag k, are all finite and independent of time.

Given the difficulty of estimating the full probability law of a distribution, a second notion of stationarity has been proposed, called second-order stationarity (weak or covariance stationarity). Only the conditions on the expectation and on the variance-covariance structure of the variable are required to judge whether it is stationary; note that the covariance condition includes the variance condition when k is zero (cov(x_t, x_{t-k}) with k = 0). This notion plays a pioneering role, especially in the forecasting of a time series: in principle, the confidence interval of the forecast values depends on its validity.

The literature mentions several forms of non-stationarity of a temporal sequence, among them trend stationarity (TS: Trend Stationarity, Nelson and Plosser (1982)), in which the series graphically exhibits a growing evolution over time, so that the stationarity conditions are not met. To circumvent this and obtain a series free of its trend component, one of three techniques can be used: regression on a linear time trend by ordinary least squares, a simple moving average, or the Hodrick-Prescott filter. By one of these means the series is reduced to a white noise in the sense of Hamilton (1994).
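To make the three detrending options above concrete, here is a minimal sketch in Python using statsmodels, applied to a simulated trend-stationary series; the simulated data, the moving-average window and the Hodrick-Prescott smoothing parameter are illustrative assumptions, not prescriptions.

```python
# Illustrative sketch: three ways of removing a deterministic trend.
# The simulated series, the moving-average window and the HP smoothing
# parameter (lambda) are assumptions chosen only for the example.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
y = pd.Series(0.5 * t + rng.normal(scale=3.0, size=n))  # TS process: linear trend + noise

# 1) Ordinary least squares on a constant and a linear time trend
trend_resid = sm.OLS(y, sm.add_constant(t)).fit().resid

# 2) Simple moving average (window of 12 periods, chosen arbitrarily)
ma_detrended = y - y.rolling(window=12, center=True).mean()

# 3) Hodrick-Prescott filter (lambda = 1600 is the usual quarterly convention)
cycle, hp_trend = hpfilter(y, lamb=1600)

# Each of the three detrended series should now fluctuate around zero.
print(trend_resid.mean(), ma_detrended.mean(), cycle.mean())
```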


A white noise is, by definition, stationary, but only in the second-order sense. In the statistical vocabulary, one also speaks of an independent white noise when the variables are independent, which implies that the covariance is zero, i.e. that the variables are uncorrelated (the converse being false in general). When it is Gaussian, the white noise is both independent and normally distributed, and under these conditions it is strictly stationary (stationary in the strict sense). By contrast, an arbitrary stochastic process is not necessarily stationary.

The second form of non-stationarity is difference stationarity (DS: Difference Stationarity, Nelson and Plosser (1982)). It is due, in particular, to the influence on the series of its own past values. In this case, differencing the series at the appropriate (non-zero) order makes it stationary.

In practice, two families of stationarity tests are available: on the one hand, tests whose null hypothesis is stationarity (the KPSS test) and, on the other hand, tests whose null hypothesis is non-stationarity (the Dickey-Fuller and Phillips-Perron tests). Moreover, a variable can be stationary with a constant and a trend, with a constant only, or with neither, depending on the significance of these parameters. A nuance worth clarifying is how a series can be said to be "stationary with a trend" when it is precisely the trend component that must be removed: in fact, the two statements mean the same thing and should not be seen as contradictory.

In a word, the integration of a variable consists in ridding the time series of any trend or historical disturbance in order to obtain a stable series for use in various analyses. It is only after this primordial and imperative step that one can begin the estimation of the usual models, in particular the Box-Jenkins family (ARIMA, SARIMA, VAR, ARMAX, etc.) or the Engle family (ARCH, GARCH, etc.).
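As a brief illustration of the two families of tests just described, the following sketch runs the augmented Dickey-Fuller test (null hypothesis: unit root) and the KPSS test (null hypothesis: stationarity) on a simulated random walk and on its first difference; the simulated series is an assumption made only for the example.

```python
# Illustrative sketch: ADF test (H0: unit root) versus KPSS test
# (H0: stationarity) on a simulated random walk and its first difference.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=300))   # DS process: random walk, I(1)
diffed = np.diff(walk)                   # first difference, expected I(0)

for name, series in [("level", walk), ("first difference", diffed)]:
    adf_stat, adf_p, *_ = adfuller(series, regression="c")           # constant only
    kpss_stat, kpss_p, *_ = kpss(series, regression="c", nlags="auto")
    print(f"{name}: ADF p-value = {adf_p:.3f}, KPSS p-value = {kpss_p:.3f}")
# Expected pattern: in level, ADF does not reject and KPSS rejects;
# after differencing, ADF rejects and KPSS does not.
```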


II. Cointegration of Variables (CV)

Cointegration is indispensable in multivariate analysis, both in statistics and in econometrics. We are aware of the serious error that can result from considering two or more variables simultaneously without first studying each variable on its own: better to evaluate each variable before considering them jointly. Introduced into economics by Granger and Newbold (1974), then developed by Engle and Granger (1987) and extended in 1991 by Johansen, the notion of cointegration concerns the integration properties of the linear combination of two (or more) variables taken together; the tests proposed in the literature allow us to assert the presence of a cointegrating vector. These are the tests of Engle and Granger (1987), Johansen (1988), Johansen and Juselius (1990), etc.

It is common to run into the problem of spurious regression when modelling the variables: a linear regression on non-stationary variables can exhibit a very appreciable R² and Student t-statistics when in reality there is no relationship between them. In that situation, the distribution of the estimated parameters no longer follows a Student law but involves a Wiener process (Brownian motion); this occurs when the variance of at least one of the variables diverges because of its dependence on time, something that can be identified with a recursive procedure. A careful analysis of the residuals is therefore required for the final validity of the model. When the residuals are not stationary, the situation is comparable to the presence of autocorrelation among the residuals of the model. The order of integration of the residuals is not necessarily lower than that of the variables of the model; clearly, a model whose residual component is stationary only at an order different from 0 behaves like a model in which autocorrelation of the residuals is present. In other words, autocorrelation and stationarity are indirectly linked through the covariance condition of weak stationarity.

The goal of cointegration is to obtain a stationary residual while working on two variables that are non-stationary in level. The idea proposed by Engle is that, in the short run, the variables may diverge, but in the long run there exists a stable equilibrium between them: a common evolution of the variables. Knowing that cointegration at a non-zero order is also possible, this long-run relationship exists only under conditions that must be verified both upstream and downstream.
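The residual-based idea sketched above can be illustrated with a minimal Engle-Granger two-step example in Python: regress the level series on each other by OLS, then test the stationarity of the residual. The two simulated series (built around a common random-walk component) are an illustrative assumption; note that the proper critical values for the residual test are the Engle-Granger ones, which statsmodels' coint function applies.

```python
# Illustrative sketch of the Engle and Granger (1987) two-step idea:
# (1) regress the levels by OLS, (2) test the residual for a unit root.
# The two cointegrated series are simulated around a common random walk.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(2)
common = np.cumsum(rng.normal(size=400))             # shared I(1) component
x = common + rng.normal(scale=0.5, size=400)
y = 2.0 + 1.5 * common + rng.normal(scale=0.5, size=400)

# Step 1: long-run (level) regression
resid = sm.OLS(y, sm.add_constant(x)).fit().resid

# Step 2: stationarity of the residual (a stationary residual signals
# cointegration); coint() applies the appropriate Engle-Granger critical values.
print("ADF p-value on the residual:", adfuller(resid)[1])
print("Engle-Granger test p-value :", coint(y, x)[1])
```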


Upstream, the variables must have the same order of integration, which makes cointegration possible; more precisely, the linear combination must be integrated at an order less than or equal to the common order of integration of the variables. Downstream, the error-correction (equilibrium-restoring) coefficient, also called the speed of adjustment, must be negative and significant, and the stationarity of the residuals obtained must be verified. Granger's (1983) representation theorem establishes the link between the cointegration relationship and the error-correction model. The short-run relationship can be estimated by OLS only when the variables are differenced, in other words by including the lagged variables as explanatory variables in the model, whereas the long-run relationship is estimated in level by OLS.

No method is perfect; the disadvantage of the Engle-Granger (1987) approach is that it does not distinguish between several cointegration relations. In principle it identifies only one cointegration relation, whereas there may be up to k - 1 such relations, with k the number of variables. Johansen (1988) provides a solution to this problem with his multivariate maximum-likelihood approach. For the validity of the VECM, the cointegration rank must be non-zero and strictly less than the number of variables, which is determined by maximizing the log-likelihood. Otherwise, a VAR(p) model is estimated instead of a VECM. As in the vector autoregressive model, the specification of the model according to the absence or presence of a constant and a trend is necessary; these can be determined from their significance once the model is estimated.

The one-step method of Banerjee et al., or the error-correction model (ECM) à la Hendry, makes it easier to interpret the long-run relationship. Moreover, for estimating long-run relationships in small samples, the two-step procedure can lead to estimation bias according to Banerjee, Dolado, Hendry and Smith (1986). The ARDL (AutoRegressive Distributed Lag) model and the bounds cointegration test of Pesaran et al. (2001) offer a new approach that handles the case where the variables have different orders of integration (a mixture of I(0) and I(1) series). On the other hand, when the order of integration of a variable is greater than 1, the application of the ARDL model poses a problem.
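As a rough sketch of the Johansen route described above, the following example selects the cointegration rank with the trace statistic and, if the rank is non-zero, fits a VECM (falling back on a VAR in differences otherwise); the simulated bivariate system, the lag order and the deterministic specification are illustrative assumptions.

```python
# Illustrative sketch of the Johansen approach: choose the cointegration
# rank with the trace statistic, then fit a VECM if the rank is non-zero.
# The simulated system, lag order and deterministic terms are assumptions.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(3)
common = np.cumsum(rng.normal(size=500))
data = np.column_stack([
    common + rng.normal(scale=0.4, size=500),
    0.8 * common + rng.normal(scale=0.4, size=500),
])

# Trace test (det_order=0: constant term, k_ar_diff: lagged differences)
rank_res = select_coint_rank(data, det_order=0, k_ar_diff=1, method="trace")
print(rank_res.summary())

if rank_res.rank > 0:
    vecm = VECM(data, k_ar_diff=1, coint_rank=rank_res.rank,
                deterministic="co").fit()
    print(vecm.alpha)   # adjustment (error-correction) coefficients
else:
    print("Rank 0: estimate a VAR on the differenced series instead.")
```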


Conclusion 

Stationarity is more than ever a prerequisite for studying variables before introducing them into larger studies. It consists in ridding the temporal process of trend and historical disturbances, using appropriate procedures, in order to conduct statistical and econometric analyses. Cointegration proves even more delicate, because one wants to work on the variables in level while avoiding a misleading (spurious) regression. Several approaches are available, some more efficient than others. In both cases, these notions cannot be treated with negligence, because any scientific production based on temporal processes inevitably depends on them.

