On-Line Geometric Modeling Notes
EIGENVALUES AND EIGENVECTORS
OF THE REFINEMENT MATRIX


Overview

The basic operation in the development of subdivision curves is the refinement procedure. In the cubic case, the points produced by the refinement can be classified as vertex and edge points, and the refinement procedure can be written as a matrix operation. In this case the refinement matrix $ A$ is defined to be

$\displaystyle A \: = \:
\frac{1}{8}
\left[
\begin{array}{rrr}
6 & 1 & 1 \\
4 & 4 & 0 \\
4 & 0 & 4
\end{array}\right]
$
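As a concrete illustration, here is a minimal NumPy sketch that builds $ A$ and applies one refinement step to a local neighborhood of three points, one point per row; the example points are hypothetical and chosen only for illustration, not part of these notes.

import numpy as np

# The cubic refinement matrix A, as defined above.
A = np.array([[6.0, 1.0, 1.0],
              [4.0, 4.0, 0.0],
              [4.0, 0.0, 4.0]]) / 8.0

# A hypothetical local neighborhood: the vertex point and its two
# neighboring points, stored one point per row.
points = np.array([[0.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 1.0]])

# One refinement step: row 1 is the new vertex point, rows 2 and 3
# are the new edge points (the midpoints of the two edges).
print(A @ points)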

In these notes, we develop the eigenvalues and eigenvectors of the refinement matrix which play an important role in the analysis of subdivision curves. This matrix is also diagonalizable and we use the eigenvalues and eigenvectors to calculate this diagonal decomposition. Finally, the diagonal decomposition allows an easy calculation of the inverse of the matrix.



The Eigenvalues of the Refinement Matrix

One of the eigenvalues is easy to calculate. Since the rows of $ A$ all sum to one, we have

$\displaystyle A
\left[
\begin{array}{c}
1 \\  1 \\  1
\end{array}\right]
\: = \:
\left[
\begin{array}{c}
1 \\  1 \\  1
\end{array}\right]
$

and so $ 1$ is an eigenvalue of $ A$, with corresponding eigenvector $ \left[ \begin{array}{c} 1 \\  1 \\  1 \end{array} \right]$. The other eigenvalues are the remaining roots of the characteristic polynomial $ \det ( A - t I ) = ( t - 1 )( t - \frac{1}{2} )( t - \frac{1}{4} )$ and turn out to be $ \frac{1}{2}$ and $ \frac{1}{4}$.
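This can be corroborated numerically; a short NumPy sketch (assuming the matrix $ A$ defined above) computes all three eigenvalues directly.

import numpy as np

A = np.array([[6.0, 1.0, 1.0],
              [4.0, 4.0, 0.0],
              [4.0, 0.0, 4.0]]) / 8.0

# The eigenvalues are the roots of the characteristic polynomial
# det(A - tI); NumPy computes them directly.
print(np.sort(np.linalg.eigvals(A))[::-1])   # prints [1.   0.5  0.25]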

The (right) eigenvectors for the eigenvalues $ 1, \frac{1}{2}, \frac{1}{4}$ can be calculated to be

$\displaystyle {\vec r }_1 =
\left[
\begin{array}{r}
1 \\  1 \\  1
\end{array}\right], \:
{\vec r }_2 =
\left[
\begin{array}{r}
0 \\  1 \\  -1
\end{array}\right], \:
{\vec r }_3 =
\left[
\begin{array}{r}
-1 \\  2 \\  2
\end{array}\right]
$

respectively, and in a similar fashion the left eigenvectors turn out to be

$\displaystyle {\vec l }_1 =
\left[
\begin{array}{rrr}
\frac{2}{3} & \frac{1}{6} & \frac{1}{6}
\end{array}\right], \:
{\vec l }_2 =
\left[
\begin{array}{rrr}
0 & \frac{1}{2} & -\frac{1}{2}
\end{array}\right], \:
{\vec l }_3 =
\left[
\begin{array}{rrr}
-\frac{1}{3} & \frac{1}{6} & \frac{1}{6}
\end{array}\right]
$

where each $ {\vec l }_i$ has been scaled so that $ {\vec l }_i \, {\vec r }_i = 1$.
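These eigenvectors can be checked numerically as well; the following NumPy sketch (again assuming the $ A$ above) verifies $ A \, {\vec r }_i = \lambda_i {\vec r }_i$ and $ {\vec l }_i \, A = \lambda_i {\vec l }_i$ for each pair.

import numpy as np

A = np.array([[6.0, 1.0, 1.0],
              [4.0, 4.0, 0.0],
              [4.0, 0.0, 4.0]]) / 8.0

eigenvalues = [1.0, 1 / 2, 1 / 4]
rights = [np.array([1.0, 1.0, 1.0]),
          np.array([0.0, 1.0, -1.0]),
          np.array([-1.0, 2.0, 2.0])]
lefts = [np.array([2 / 3, 1 / 6, 1 / 6]),
         np.array([0.0, 1 / 2, -1 / 2]),
         np.array([-1 / 3, 1 / 6, 1 / 6])]

for lam, r, l in zip(eigenvalues, rights, lefts):
    assert np.allclose(A @ r, lam * r)   # right eigenvector: A r = lambda r
    assert np.allclose(l @ A, lam * l)   # left eigenvector:  l A = lambda l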


Diagonalization of the Matrix

We will let $ L$ be the $ 3 \times 3$ matrix whose rows are the left eigenvectors of $ A$,

$\displaystyle L \: = \:
\left[
\begin{array}{rrr}
\frac{2}{3} & \frac{1}{6} & \frac{1}{6} \\
0 & \frac{1}{2} & -\frac{1}{2} \\
-\frac{1}{3} & \frac{1}{6} & \frac{1}{6}
\end{array}\right]
$

and $ R$ be the $ 3 \times 3$ matrix whose columns are the right eigenvectors of $ A$,

$\displaystyle R \: = \:
\left[
\begin{array}{rrr}
1 & 0 & -1 \\
1 & 1 & 2 \\
1 & -1 & 2
\end{array}\right]
$

noting that $ R = L^{-1}$, which can be checked directly by computing $ LR = I$. Then, since the $ 3 \times 3$ matrix $ A$ has 3 distinct eigenvalues, it is diagonalizable [1] and can be written as

$\displaystyle A \: = R \Lambda L
$

where $ \Lambda$ is the diagonal matrix whose diagonal elements are the eigenvalues of $ A$:

$\displaystyle \Lambda \: = \:
\left[
\begin{array}{rrr}
1 & 0 & 0 \\
0 & \frac{1}{2} & 0 \\
0 & 0 & \frac{1}{4}
\end{array}\right]
$
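As a numerical check, the following sketch confirms both $ LR = I$ and $ A = R \Lambda L$ for the matrices above.

import numpy as np

A = np.array([[6.0, 1.0, 1.0],
              [4.0, 4.0, 0.0],
              [4.0, 0.0, 4.0]]) / 8.0
L = np.array([[2 / 3, 1 / 6, 1 / 6],
              [0.0, 1 / 2, -1 / 2],
              [-1 / 3, 1 / 6, 1 / 6]])
R = np.array([[1.0, 0.0, -1.0],
              [1.0, 1.0, 2.0],
              [1.0, -1.0, 2.0]])
Lam = np.diag([1.0, 1 / 2, 1 / 4])

assert np.allclose(L @ R, np.eye(3))   # R is the inverse of L
assert np.allclose(R @ Lam @ L, A)     # the diagonal decomposition of A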


The Inverse of the Refinement Matrix

Since the refinement matrix can be written as $ A = R \Lambda L$, it is easy to generate the inverse of $ A$.

$\displaystyle A^{-1} \: = \: ( R \Lambda L )^{-1} \: = \: L^{-1} \Lambda^{-1} R^{-1} \: = \: R \Lambda^{-1} L
$

since $ R=L^{-1}$. The matrix $ \Lambda^{-1}$ is just

$\displaystyle \Lambda^{-1} \: = \: \left[
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 4
\end{array}\right]
$

The inverse of $ A$ then works out to be

$\displaystyle A^{-1} \: = \: \frac{1}{2} \left[
\begin{array}{rrr}
4 & -1 & -1 \\
-4 & 5 & 1 \\
-4 & 1 & 5
\end{array}\right]
$
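As before, a short NumPy sketch can confirm this: it forms $ R \Lambda^{-1} L$ and compares the result against both np.linalg.inv and the matrix just stated.

import numpy as np

A = np.array([[6.0, 1.0, 1.0],
              [4.0, 4.0, 0.0],
              [4.0, 0.0, 4.0]]) / 8.0
L = np.array([[2 / 3, 1 / 6, 1 / 6],
              [0.0, 1 / 2, -1 / 2],
              [-1 / 3, 1 / 6, 1 / 6]])
R = np.array([[1.0, 0.0, -1.0],
              [1.0, 1.0, 2.0],
              [1.0, -1.0, 2.0]])

# Inverting Lambda just inverts the diagonal entries: 1, 2, 4.
A_inv = R @ np.diag([1.0, 2.0, 4.0]) @ L

assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A_inv, np.array([[4.0, -1.0, -1.0],
                                    [-4.0, 5.0, 1.0],
                                    [-4.0, 1.0, 5.0]]) / 2.0)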


Summary

The eigenvalues and eigenvectors of the refinement matrix are straightforward to calculate. Since this matrix has three distinct eigenvalues, it can be diagonalized and written in the form $ R \Lambda L$, which is very useful when analyzing subdivision curves.


Bibliography

1
GOLUB, G. H., AND VAN LOAN, C. F.
Matrix Computations, 2nd ed.
Johns Hopkins University Press, 1989.

2
HALSTEAD, M., KASS, M., AND DEROSE, T.
Efficient, fair interpolation using Catmull-Clark surfaces.
In Computer Graphics (SIGGRAPH '93 Proceedings) (Aug. 1993), J. T. Kajiya, Ed., vol. 27, pp. 35-44.




Ken Joy
2000-11-28