
Saturday 1 February 2014

Implementing Linear Prediction Coefficients in Julia

This post will discuss the autocorrelation or Yule-Walker method of computing LPCs. Other methods include the covariance method, the modified covariance method (also called the forward-backward method), and the Burg method.

Linear prediction coefficients (LPCs) are used a lot in signal processing, and the theory behind them is covered elsewhere (see here for an intuitive explanation). In this post I will go over how to implement LPCs in Julia. Given a signal \(x\), the job of the LPCs is to predict the current sample, \(x(n)\), using a weighted sum of previous samples:

\[ x(n) = \sum_{i=1}^L a(i) x(n-i) + e(n)\]

Ideally the error \(e(n)\) would be zero, but usually it is not, so we want the values of \(a(i)\) that make \(e(n)\) as small as possible. Formally, we try to minimise the prediction error power:

\[P = \dfrac{1}{N}\sum_{n=-\infty}^{\infty} e(n)^2 = \dfrac{1}{N}\sum_{n=-\infty}^{\infty} \left[ x(n) - \sum_{k=1}^{L} a(k) x(n-k) \right]^2\]

What we want to do is minimise \(P\) with respect to the \(a(i)\): differentiate \(P\) with respect to each coefficient and set the result to zero.
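Writing that condition out (a brief sketch of the step, using the error power above):

\[\dfrac{\partial P}{\partial a(k)} = -\dfrac{2}{N}\sum_{n=-\infty}^{\infty} \left[ x(n) - \sum_{i=1}^{L} a(i) x(n-i) \right] x(n-k) = 0 \qquad 1 \leq k \leq L\]

Recognising each sum over \(n\) as an autocorrelation, we get a set of equations that must be satisfied for \(P\) to be minimised, the well-known Yule-Walker equations: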

\[\sum_{i=1}^L R_{xx}(i-k) a(i) = R_{xx}(k) \qquad 1 \leq k \leq L\]

where \(R_{xx}\) is the autocorrelation sequence of our signal \(x\). We can write out the Yule-Walker equations as a matrix equation, so solving for the LPCs (values of \(a(i)\)) becomes easy:

\[ \left[\begin{array}{cccc} R(0) & R(1) & \ldots & R(L-1) \\ R(1) & R(0) & \ldots & R(L-2) \\ \vdots & \vdots & \ddots & \vdots\\ R(L-1) & R(L-2) & \ldots & R(0) \end{array}\right] \left[\begin{array}{c} a(1) \\ a(2) \\ \vdots \\ a(L) \end{array}\right] = \left[\begin{array}{c} R(1) \\ R(2) \\ \vdots \\ R(L) \end{array}\right] \]

where \(R(k)\) is shorthand for the \(R_{xx}(k)\) in the previous equations, and the \(a(i)\) are the LPCs we want to compute. The naive way of solving the Yule-Walker equations is to invert the square matrix of autocorrelation coefficients, but this has \(O(L^3)\) complexity. There is a faster way that exploits the special structure of the \(R\) matrix (it is Toeplitz): the Levinson-Durbin algorithm, which finds the values of \(a(i)\) with \(O(L^2)\) complexity.
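The naive solve is still handy as a reference for small \(L\). A minimal sketch in Julia (the name `lpc_naive` is just illustrative, and the biased autocorrelation estimate \(R(k) = \frac{1}{N}\sum_n x(n)x(n-k)\) is assumed):

```julia
# Naive Yule-Walker solve: build the Toeplitz matrix of autocorrelation
# coefficients and solve it directly. O(L^3), fine as a reference for small L.
function lpc_naive(x::Vector{Float64}, L::Int)
    N = length(x)
    # Biased autocorrelation estimate R(0), ..., R(L)
    R = [sum(x[1:N-k] .* x[1+k:N]) / N for k in 0:L]
    A = [R[abs(i - j) + 1] for i in 1:L, j in 1:L]  # Toeplitz: A[k,i] = R(i-k)
    return A \ R[2:L+1]                             # solve for the LPCs
end
```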

The Levinson-Durbin algorithm looks a little like the following (the signs here follow the prediction convention of the first equation; some texts define \(e(n)\) with the opposite sign, which flips the signs of the \(a\) terms):

For \(m = 1\):

\[a_{1,1} = \dfrac{R(1)}{R(0)} \] \[P_1 = R(0)(1-a_{1,1}^2) \]

For \(m = 2,3,\ldots,L\):

\[a_{m,m} = \dfrac{R(m) - \sum_{i=1}^{m-1} a_{m-1,i}R(m-i)}{P_{m-1}} \] \[a_{m,i} = a_{m-1,i} - a_{m,m}a_{m-1,m-i} \qquad i=1,2,\ldots,m-1\] \[P_m = P_{m-1}(1-a_{m,m}^2) \]

Our final LPCs are \(a(i) = a_{L,i}\) for \(i = 1, \ldots, L\), and the gain term is \(P = P_L\), where \(L\) is the number of LPCs we want. In Julia code, this might look something like the following:
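(A sketch only: the function names `levinson` and `lpc` are illustrative, and the autocorrelation is the same biased estimate as above.)

```julia
# Levinson-Durbin recursion: solves the Yule-Walker equations in O(L^2).
# R holds the autocorrelation sequence [R(0), R(1), ..., R(L)] (1-indexed),
# L is the number of LPCs to compute. Returns the LPCs and the gain term.
function levinson(R::Vector{Float64}, L::Int)
    a = zeros(L, L)
    P = zeros(L)
    # m = 1
    a[1, 1] = R[2] / R[1]
    P[1] = R[1] * (1 - a[1, 1]^2)
    # m = 2, 3, ..., L
    for m in 2:L
        a[m, m] = (R[m+1] - sum(a[m-1, i] * R[m-i+1] for i in 1:m-1)) / P[m-1]
        for i in 1:m-1
            a[m, i] = a[m-1, i] - a[m, m] * a[m-1, m-i]
        end
        P[m] = P[m-1] * (1 - a[m, m]^2)
    end
    return a[L, :], P[L]   # final LPCs a(1:L) and gain P_L
end

# LPCs of a signal x via the autocorrelation method.
function lpc(x::Vector{Float64}, L::Int)
    N = length(x)
    R = [sum(x[1:N-k] .* x[1+k:N]) / N for k in 0:L]  # biased autocorrelation
    return levinson(R, L)
end
```

For example, `a, P = lpc(randn(1000), 10)` returns ten coefficients and the prediction error power, and the coefficients should match `lpc_naive` from earlier up to floating-point error.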

And that is it.

2 comments:

  1. Hi Jim,

    nice post! We are working on a DSP library for Julia (https://github.com/JuliaDSP/DSP.jl) and I was going to implement levinson and lpc functions but you got there first! Would you like to contribute this code to our library? Thanks!

    1. Well, I just noticed you have a whole series of posts on LPC/autocorrelation implementations in Julia, so this question extends to all the other code as well. We would be happy to have you as a contributor!
