In textbooks, authors usually move quickly from SO(3) to the covering group SU(2) and derive the (complex) representations of the corresponding Lie algebra \(\mathfrak{su}(2)\) using the quantum-mechanical angular momentum algebra and ladder operators.

There appears to be surprisingly little in the published literature on real-valued representations of \(\mathfrak{so}(3)\). What do the real-valued representations look like? A Google search led me to

Matrix entries of real representations of the groups O(3) and SO(3)

Siberian Mathematical Journal (2002), 43(1):36-46

DOI: 10.1023/A:1013816403253

in which Gordienko describes their construction; see also

A note on the real representation of SU(2,C)

Journal of Pure and Applied Algebra (1990), 69:285-294

DOI: 10.1016/0022-4049(91)90023-U

Gordienko works with the \((2N+1)\)-dimensional vector spaces of homogeneous polynomials of degree \(2N\) in the two variables \(x\) and \(y\):

\[

\begin{align}

P^{(1)}(x,y) =& a_1 x^2 + a_2 x y + a_3 y^2 \\

P^{(2)}(x,y) =& a_1 x^4 + a_2 x^3 y + a_3 x^2 y^2 + a_4 x y^3 + a_5 y^4 \notag\\

& \ldots \notag\\

P^{(N)}(x,y) =& a_1 x^{2N} + a_2 x^{2N-1} y + a_3 x^{2N-2} y^2 \notag\\

&+ \ldots + a_{N+1} x^{N} y^{N} + \ldots + a_{2N} x y^{2N-1} + a_{2N+1} y^{2N} \notag\\

\end{align}

\] Gordienko chooses the following basis in the vector space of polynomials \(P^{(N)}(x,y)\)

\[

\begin{align}

e^{(N)}_n(x,y) =& i^{N+1}\sqrt{\frac{(2N+1)!}{2(N-n)!(N+n)!}} \\

&\times(x^{N-n}y^{N+n} - (-1)^{-n}x^{N+n}y^{N-n}) \quad \mathrm{if}\; n \le -1 \notag\\

e^{(N)}_{n=0}(x,y) =& i^{N} \frac{\sqrt{(2N+1)!}}{N!} x^N y^N \quad \mathrm{if}\; n = 0 \notag\\

e^{(N)}_n(x,y) =& -i^{N}\sqrt{\frac{(2N+1)!}{2(N+n)!(N-n)!}} \notag\\

& \times(x^{N+n}y^{N-n} + (-1)^{n}x^{N-n}y^{N+n}) \quad \mathrm{if}\; n \ge 1 \notag

\end{align}

\] with \(n = -N,\ldots,N\). Now let's take a generic element of the group SU(2), parametrized by the three real numbers \(\alpha\), \(\beta\) and \(\gamma\), and write, with \(\delta\equiv\sqrt{1-\alpha^2-\beta^2-\gamma^2}\),

\[

M =

\begin{pmatrix}

\delta - i\gamma & \beta -i\alpha\\

-\beta - i\alpha &\delta + i\gamma

\end{pmatrix}

\equiv

\begin{pmatrix}

A^* & B^*\\

-B & A

\end{pmatrix}

\] and apply it to the two-dimensional complex row vector \((x,y)\), i.e.

\[

(x',y') \equiv (x,y)

\begin{pmatrix}

A^* & B^*\\

-B & A

\end{pmatrix}=(A^*x-By, B^*x+Ay)

\] For the "rotated" basis \(e^{(N)}_n(x',y')\) we obtain

\[

\begin{align}

e^{(N)}_n(x',y')

=& e^{(N)}_n(A^*x-By, B^*x+Ay) \\

=& \sum_{m=-N}^{+N} T^{(N)}_{n,m}(A,B) \; e^{(N)}_m(x,y)\notag

\end{align}

\] The \((2N+1)\times(2N+1)\) matrix \(T^{(N)}_{n,m}(A,B)\) is the \((2N+1)\)-dimensional representation matrix of SO(3) that we are looking for.
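The construction above can be turned into code directly. Below is a rough Python sketch (the helper names `basis`, `transform`, `solve` and `representation` are mine, not from Gordienko or LieTools): it stores each \(e^{(N)}_n\) as its coefficients over the monomials \(x^{2N-q}y^q\), expands \(e^{(N)}_n(A^*x-By,\,B^*x+Ay)\), and solves a small linear system to read off the rows of \(T^{(N)}\).

```python
# Rough numerical sketch of the construction above; helper names are my own.
import math
from math import comb, factorial

def basis(N):
    """Gordienko basis e^(N)_n, n = -N..N, as dicts {q: coeff} for x^(2N-q) y^q."""
    e = {}
    for n in range(-N, N + 1):
        r = math.sqrt(factorial(2 * N + 1)
                      / (2 * factorial(N - abs(n)) * factorial(N + abs(n))))
        if n < 0:
            c = 1j ** (N + 1) * r
            e[n] = {N + n: c, N - n: -c * (-1) ** (-n)}
        elif n == 0:
            e[n] = {N: 1j ** N * math.sqrt(factorial(2 * N + 1)) / factorial(N)}
        else:
            c = -(1j ** N) * r
            e[n] = {N - n: c, N + n: c * (-1) ** n}
    return e

def transform(poly, N, A, B):
    """Coefficients of poly(A* x - B y, B* x + A y), indexed by the power of y."""
    out = [0j] * (2 * N + 1)
    for q, c in poly.items():
        p = 2 * N - q
        for i in range(p + 1):        # binomial expansion of (A* x - B y)^p
            for j in range(q + 1):    # ... and of (B* x + A y)^q
                out[(p - i) + (q - j)] += (c * comb(p, i) * comb(q, j)
                                           * A.conjugate() ** i * (-B) ** (p - i)
                                           * B.conjugate() ** j * A ** (q - j))
    return out

def solve(E, rhs):
    """Gauss-Jordan elimination with partial pivoting, complex entries."""
    n = len(rhs)
    M = [row[:] + [rhs[k]] for k, row in enumerate(E)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[k][n] / M[k][k] for k in range(n)]

def representation(N, A, B):
    """Rows T[n+N][m+N] such that e_n(x', y') = sum_m T_nm e_m(x, y)."""
    e = basis(N)
    # E[q][m+N] = coefficient of x^(2N-q) y^q in e_m
    E = [[e[m].get(q, 0) for m in range(-N, N + 1)] for q in range(2 * N + 1)]
    return [solve(E, transform(e[n], N, A, B)) for n in range(-N, N + 1)]

# sample SU(2) element: alpha^2 + beta^2 + gamma^2 < 1
alpha, beta, gamma = 0.3, 0.1, 0.2
delta = math.sqrt(1 - alpha ** 2 - beta ** 2 - gamma ** 2)
A, B = delta + 1j * gamma, beta + 1j * alpha
T = representation(2, A, B)
print(max(abs(t.imag) for row in T for t in row))  # tiny: T is real up to round-off
```

Up to round-off, \(T\) also comes out orthogonal, as it should for a real representation obtained from a unitary one.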

Why is \(T^{(N)}_{n,m}(A,B)\) real-valued? It turns out that the basis vectors \(e^{(N)}_n(x,y)\) are constructed in such a way that

\[

e^{(N)}_m(x,y) = (e^{(N)}_m(-y,x))^*

\] holds, where the asterisk denotes complex conjugation (with \(x\) and \(y\) treated as real variables, so that conjugation acts only on the numerical coefficients). Gordienko shows that for basis vectors with this property the entries of the representation matrices \(T^{(N)}_{n,m}(A,B)\) are indeed real. Specifically,

\[

\begin{align}

e^{(N)}_m(x',y') =& e^{(N)}_m(A^*x-By, B^*x+Ay) \\

=& (e^{(N)}_m(-B^*x-Ay,A^*x-By))^* \notag\\

=& (e^{(N)}_m(B^*(-x)+A(-y),A^*x-By))^* \notag\\

=& (e^{(N)}_m(-y',x'))^* \notag\\

=& (\sum_{n=-N}^N T^{(N)}_{m,n}(A,B)\;e^{(N)}_n(-y,x))^* \notag\\

=& \sum_{n=-N}^N (T^{(N)}_{m,n}(A,B))^*\;(e^{(N)}_n(-y,x))^* \notag\\

=& \sum_{n=-N}^N (T^{(N)}_{m,n}(A,B))^*\;e^{(N)}_n(x,y) \notag

\end{align} \] Thus \(T^{(N)}_{m,n}(A,B) = (T^{(N)}_{m,n}(A,B))^*\), since the \(e^{(N)}_n(x,y)\) are linearly independent; in other words, the matrix \(T^{(N)}_{m,n}(A,B)\) is real.
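The conjugation property is easy to spot-check numerically. Here is a small Python sketch (the helper name `e` is my own) that evaluates both sides of \(e^{(N)}_m(x,y) = (e^{(N)}_m(-y,x))^*\) at a real sample point:

```python
# Spot-check e^(N)_m(x, y) = (e^(N)_m(-y, x))^* at a real point;
# the helper name `e` is mine, the formulas are Gordienko's basis from above.
import math
from math import factorial

def e(N, n, x, y):
    """Evaluate the basis polynomial e^(N)_n at the real point (x, y)."""
    if n == 0:
        return 1j ** N * math.sqrt(factorial(2 * N + 1)) / factorial(N) * x ** N * y ** N
    r = math.sqrt(factorial(2 * N + 1)
                  / (2 * factorial(N - abs(n)) * factorial(N + abs(n))))
    if n < 0:
        return 1j ** (N + 1) * r * (x ** (N - n) * y ** (N + n)
                                    - (-1) ** (-n) * x ** (N + n) * y ** (N - n))
    return -(1j ** N) * r * (x ** (N + n) * y ** (N - n)
                             + (-1) ** n * x ** (N - n) * y ** (N + n))

x, y = 0.7, -1.3
for N in range(1, 6):
    for n in range(-N, N + 1):
        assert abs(e(N, n, x, y) - e(N, n, -y, x).conjugate()) < 1e-9
```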

representationofso3.m is a rough-and-ready MATLAB programme (included in LieTools) that outputs numerical approximations of the \(n\)-dimensional representations of \(\mathfrak{so}(3)\) (for \(n\) below about 90) using the method described above. For dimensions 3, 5 and 7 the output of representationofso3.m yields the following matrices. The 3-dimensional representation is the familiar one:

\[\begin{equation}

t_1 =

\begin{pmatrix}

0 & 1 & 0 \\

-1 & 0 & 0 \\

0 & 0 & 0

\end{pmatrix}\\

t_2 =

\begin{pmatrix}

0 & 0 & -1 \\

0 & 0 & 0 \\

1 & 0 & 0

\end{pmatrix}\\

t_3 =

\begin{pmatrix}

0 & 0 & 0 \\

0 & 0 & 1 \\

0 & -1 & 0

\end{pmatrix}

\end{equation}\] the 5-dimensional representation is \[\begin{equation}

t_1 =

\begin{pmatrix}

0 & 0 & 0 & -1 & 0 \\

0 & 0 & \sqrt{3} & 0 & 1 \\

0 & -\sqrt{3} & 0 & 0 & 0 \\

1 & 0 & 0 & 0 & 0 \\

0 & -1 & 0 & 0 & 0

\end{pmatrix}\\

t_2 =

\begin{pmatrix}

0 & 0 & 0 & 0 & -2 \\

0 & 0 & 0 & 1 & 0 \\

0 & 0 & 0 & 0 & 0 \\

0 & -1 & 0 & 0 & 0 \\

2 & 0 & 0 & 0 & 0

\end{pmatrix}\\

t_3 =

\begin{pmatrix}

0 & -1 & 0 & 0 & 0 \\

1 & 0 & 0 & 0 & 0 \\

0 & 0 & 0 & -\sqrt{3} & 0 \\

0 & 0 & \sqrt{3} & 0 & -1 \\

0 & 0 & 0 & 1 & 0

\end{pmatrix}

\end{equation}\] and the 7-dimensional representation turns out to be

\[\begin{equation}

t_1 =

\begin{pmatrix}

0 & 0 & 0 & 0 & 0 & -\sqrt{3/2} & 0 \\

0 & 0 & 0 & 0 & \sqrt{5/2} & 0 & \sqrt{3/2} \\

0 & 0 & 0 & \sqrt{6} & 0 & -\sqrt{5/2} & 0 \\

0 & 0 & -\sqrt{6} & 0 & 0 & 0 & 0 \\

0 & -\sqrt{5/2} & 0 & 0 & 0 & 0 & 0 \\

\sqrt{3/2} & 0 & \sqrt{5/2} & 0 & 0 & 0 & 0 \\

0 & -\sqrt{3/2} & 0 & 0 & 0 & 0 & 0

\end{pmatrix}\\

t_2 =

\begin{pmatrix}

0 & 0 & 0 & 0 & 0 & 0 & -3 \\

0 & 0 & 0 & 0 & 0 & 2 & 0 \\

0 & 0 & 0 & 0 & -1 & 0 & 0 \\

0 & 0 & 0 & 0 & 0 & 0 & 0 \\

0 & 0 & 1 & 0 & 0 & 0 & 0 \\

0 & -2 & 0 & 0 & 0 & 0 & 0 \\

3 & 0 & 0 & 0 & 0 & 0 & 0

\end{pmatrix}\\

t_3 =

\begin{pmatrix}

0 & -\sqrt{3/2} & 0 & 0 & 0 & 0 & 0 \\

\sqrt{3/2} & 0 & -\sqrt{5/2} & 0 & 0 & 0 & 0 \\

0 & \sqrt{5/2} & 0 & 0 & 0 & 0 & 0 \\

0 & 0 & 0 & 0 & \sqrt{6} & 0 & 0 \\

0 & 0 & 0 & -\sqrt{6} & 0 & -\sqrt{5/2} & 0 \\

0 & 0 & 0 & 0 & \sqrt{5/2} & 0 & -\sqrt{3/2} \\

0 & 0 & 0 & 0 & 0 & \sqrt{3/2} & 0

\end{pmatrix}

\end{equation}\] The pattern of zeros away from the diagonal may look somewhat strange, but the matrices are indeed real and skew-symmetric, and they obey the \(\mathfrak{so}(3)\) commutation relations

\[
\begin{align}
[t_1,t_2] =& t_3 \\
{}[t_2,t_3] =& t_1 \notag\\
{}[t_3,t_1] =& t_2 \notag
\end{align}
\]
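As a quick sanity check, the commutation relations can be verified numerically, e.g. for the 5-dimensional matrices above (a Python sketch, helper names mine):

```python
# Check the so(3) commutation relations for the 5-dimensional matrices above.
import math

s3 = math.sqrt(3)
t1 = [[0, 0, 0, -1, 0], [0, 0, s3, 0, 1], [0, -s3, 0, 0, 0],
      [1, 0, 0, 0, 0], [0, -1, 0, 0, 0]]
t2 = [[0, 0, 0, 0, -2], [0, 0, 0, 1, 0], [0, 0, 0, 0, 0],
      [0, -1, 0, 0, 0], [2, 0, 0, 0, 0]]
t3 = [[0, -1, 0, 0, 0], [1, 0, 0, 0, 0], [0, 0, 0, -s3, 0],
      [0, 0, s3, 0, -1], [0, 0, 0, 1, 0]]

def comm(a, b):
    """Commutator [a, b] = ab - ba for square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] - b[i][k] * a[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def close(a, b, tol=1e-12):
    return all(abs(x - y) < tol for ra, rb in zip(a, b) for x, y in zip(ra, rb))

assert close(comm(t1, t2), t3)
assert close(comm(t2, t3), t1)
assert close(comm(t3, t1), t2)
print("so(3) commutation relations hold")
```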