# Serial Correlation in Time Series

In statistics, the serial correlation of a random process describes the correlation between values of the process at different points in time, as a function of the two times or of the time difference. Let X be some repeatable process, and i be some point in time after the start of that process. (i may be an integer for a discrete-time process or a real number for a continuous-time process.) Then ${\displaystyle X_{i}}$ is the value (or realization) produced by a given run of the process at time i. Suppose that the process is known to have a well-defined mean ${\displaystyle \mu _{i}}$ and variance ${\displaystyle \sigma _{i}^{2}}$ for all times i. Then the serial correlation between any two times s and t is defined as
${\displaystyle R(s,t)={\frac {E[(X_{t}-\mu _{t})(X_{s}-\mu _{s})]}{\sigma _{s}\sigma _{t}}}}$
where ${\displaystyle E}$ is the expected value operator. When it is well defined, the value of R lies in the range [−1, 1], with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
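The definition above is an ensemble quantity: the expectation runs over many independent runs of the process. A minimal sketch of estimating R(s, t) this way is shown below; the AR(1)-style process and the function names are illustrative assumptions, not part of the definition itself.

```python
import random

def ensemble_serial_correlation(runs, s, t):
    """Estimate R(s, t) across many independent runs of a process.

    The means and variances at times s and t are estimated from the
    ensemble, matching the definition R(s, t) = E[(X_t - mu_t)(X_s - mu_s)] / (sigma_s * sigma_t).
    """
    n = len(runs)
    xs = [run[s] for run in runs]
    xt = [run[t] for run in runs]
    mu_s = sum(xs) / n
    mu_t = sum(xt) / n
    var_s = sum((v - mu_s) ** 2 for v in xs) / n
    var_t = sum((v - mu_t) ** 2 for v in xt) / n
    cov = sum((a - mu_s) * (b - mu_t) for a, b in zip(xs, xt)) / n
    return cov / (var_s ** 0.5 * var_t ** 0.5)

# Illustrative process (an assumption): X_i = 0.8 * X_{i-1} + Gaussian noise.
def run_process(length=20):
    x, out = 0.0, []
    for _ in range(length):
        x = 0.8 * x + random.gauss(0, 1)
        out.append(x)
    return out

random.seed(0)
runs = [run_process() for _ in range(5000)]
print(ensemble_serial_correlation(runs, 10, 11))
```

For this process the correlation between adjacent times settles near the coefficient 0.8, and R(s, s) is exactly 1 by construction.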
If X is a wide-sense stationary process, then the mean ${\displaystyle \mu }$ and the variance ${\displaystyle \sigma ^{2}}$ are time-independent, and the serial correlation depends only on the lag ${\displaystyle \tau =t-s}$ between the two times:
${\displaystyle R(\tau )={\frac {E[(X_{t}-\mu )(X_{t+\tau }-\mu )]}{\sigma ^{2}}}}$
For a stationary process, the serial correlation is an even function of the lag:
${\displaystyle R(\tau )=R(-\tau )}$

It is common practice in some disciplines, other than statistics and time series analysis, to drop the normalization by ${\displaystyle \sigma ^{2}}$ and use the term "serial correlation" interchangeably with "serial covariance". However, the normalization is important both because the interpretation of the serial correlation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization affects the statistical properties of the estimated serial correlations.
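For a single observed stationary series, a common estimator replaces the ensemble expectation with an average over time. The sketch below, under the assumption of stationarity, illustrates both properties just discussed: the symmetry R(τ) = R(−τ) and the scale-free character that the normalization by the variance provides (rescaling the data leaves the correlation unchanged, whereas the unnormalized serial covariance would scale with the square of the data).

```python
def sample_autocorrelation(x, lag):
    """Sample serial correlation of a stationary series at a given lag.

    Normalizes the serial covariance by the overall sample variance,
    so the result is scale-free and lies in [-1, 1].
    """
    n = len(x)
    lag = abs(lag)  # R(tau) = R(-tau) for a stationary series
    mu = sum(v for v in x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    cov = sum((x[i] - mu) * (x[i + lag] - mu) for i in range(n - lag)) / n
    return cov / var

# Illustrative data (an assumption): a periodic series with period 4.
data = [2.0, 4.0, 6.0, 4.0, 2.0, 4.0, 6.0, 4.0]
print(sample_autocorrelation(data, 2))    # → -0.75

# Rescaling leaves the *correlation* unchanged; the unnormalized
# serial covariance would be 100 times larger.
scaled = [10 * v for v in data]
print(sample_autocorrelation(scaled, 2))  # → -0.75
```

Note that this estimator divides the lagged sum by n rather than n − τ; both conventions appear in practice, and the choice affects the statistical properties of the estimate, which is one reason the normalization question matters.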