Efficient SU(3) calculation
2022-04-07
Last time I shared some numerical code for generating the projector onto the SU(3) irrep $(p, q)$ within its ambient tensor-product space. This is an incredibly inefficient way to work with SU(3) irreps, though, since the dimension of the ambient vector space in which everything is described is exponential in $p$ and $q$. This post is going to take a step towards understanding a more intrinsic way of representing SU(3) irreps by focusing on the simplest irreps, which are of the form $(p, 0)$. These irreps are simply defined as the totally symmetric subspaces of $(\mathbb{C}^3)^{\otimes p}$, and we can define a concrete basis for that subspace, identifying each element of that basis by the number of times 0, 1, and 2 show up in the tensor product:

$$ |n_0, n_1, n_2\rangle = \frac{1}{\sqrt{p!\, n_0!\, n_1!\, n_2!}} \sum_{\sigma \in S_p} P_\sigma \Big( |0\rangle^{\otimes n_0} \otimes |1\rangle^{\otimes n_1} \otimes |2\rangle^{\otimes n_2} \Big), \qquad n_0 + n_1 + n_2 = p. $$

$S_p$ is the symmetric group on $p$ elements, and $P_\sigma$ is the linear operator that permutes the tensor-product components according to the permutation $\sigma$. The various factors in the denominator are there to normalize these vectors, so $\langle n_0, n_1, n_2 | n_0, n_1, n_2 \rangle = 1$.
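As a quick illustration (this is my own sketch, not code from the repo), the basis labels for a given $p$ are easy to enumerate, and counting them recovers the familiar dimension of the symmetric subspace:

```python
# A minimal sketch (my own helper, not the repo's code) of enumerating the
# symmetric-basis labels (n0, n1, n2) with n0 + n1 + n2 = p for the (p, 0) irrep.
def symmetric_basis_labels(p):
    """All (n0, n1, n2) with n0 + n1 + n2 == p, one label per basis vector."""
    return [(n0, n1, p - n0 - n1)
            for n0 in range(p + 1)
            for n1 in range(p + 1 - n0)]

p = 5
labels = symmetric_basis_labels(p)
# The (p, 0) irrep has dimension (p + 1)(p + 2)/2, versus 3**p for the ambient space.
assert len(labels) == (p + 1) * (p + 2) // 2
```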
With a concrete basis in hand for the irrep, the next question becomes how we are going to calculate matrix elements for the unitaries and Lie-algebra elements. The Lie-algebra elements are the easiest to figure out:

$$ \Big\langle m_0, m_1, m_2 \Big| \sum_{k=1}^{p} X_k \Big| n_0, n_1, n_2 \Big\rangle = \begin{cases} n_0 X_{00} + n_1 X_{11} + n_2 X_{22} & m_i = n_i \text{ for all } i, \\ \sqrt{(n_i + 1)\, n_j}\; X_{ij} & m_i = n_i + 1,\ m_j = n_j - 1,\ i \neq j, \\ 0 & \text{otherwise,} \end{cases} $$

where $X_k$ acts as $X$ on the $k$th tensor-product factor and as identity on the rest. The reasoning behind this is that $\sum_k X_k$ is made of terms that act as identity on all but one of the tensor-product factors, so any matrix elements that are non-zero must be between symmetric states where 0, 1, and 2 appear either an identical number of times ($m_i = n_i$ for all $i$) or a single basis element has been exchanged for another ($m_i = n_i + 1$ and $m_j = n_j - 1$ for some $i \neq j$). Any other combination will result in two orthogonal basis elements sandwiching an identity, which leads to 0.
The diagonal case where 0, 1, and 2 show up in equal numbers on the left and the right works out like

$$ \langle n_0, n_1, n_2 | \sum_k X_k | n_0, n_1, n_2 \rangle = \frac{1}{p!\, n_0!\, n_1!\, n_2!} \sum_{\sigma, \tau \in S_p} \langle \psi | P_\sigma^\dagger \Big( \sum_k X_k \Big) P_\tau | \psi \rangle = n_0 X_{00} + n_1 X_{11} + n_2 X_{22}, $$

where $|\psi\rangle = |0\rangle^{\otimes n_0} \otimes |1\rangle^{\otimes n_1} \otimes |2\rangle^{\otimes n_2}$, using the fact that $P_\sigma^\dagger \big(\sum_k X_k\big) P_\tau = \big(\sum_k X_k\big) P_{\sigma^{-1}\tau}$ (summing over the $\sigma$ then cancels the $p!$ in the denominator), counting all the permutations that keep the basis elements paired up (which cancels the $n_0!\, n_1!\, n_2!$ in the denominator), and finally tallying up all the terms in $\sum_k X_k$ (each paired-up 0, 1, or 2 contributes an $X_{00}$, $X_{11}$, or $X_{22}$ respectively).
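Before moving on to the exchanged case, here is a quick brute-force check of that diagonal formula in the full $3^p$-dimensional space; this is a sketch with my own helper names, separate from the repo code:

```python
import numpy as np
from itertools import permutations

def symmetric_state(n0, n1, n2):
    """Normalized |n0, n1, n2> built the slow way inside (C^3)^{otimes p}."""
    p = n0 + n1 + n2
    word = [0] * n0 + [1] * n1 + [2] * n2
    vec = np.zeros(3 ** p)
    for perm in permutations(word):            # plays the role of the sum over S_p
        idx = 0
        for letter in perm:                    # base-3 index of the product state
            idx = 3 * idx + letter
        vec[idx] += 1.0
    return vec / np.linalg.norm(vec)

def represented_generator(X, p):
    """sum_k 1 x ... x X x ... x 1 acting on the full 3**p-dimensional space."""
    total = np.zeros((3 ** p, 3 ** p), dtype=complex)
    for k in range(p):
        term = np.array([[1.0]])
        for slot in range(p):
            term = np.kron(term, X if slot == k else np.eye(3))
        total += term
    return total

X = np.diag([1.0, -2.0, 1.0])   # any 3x3 matrix works; a diagonal one keeps the check easy to read
n0, n1, n2 = 2, 1, 1
v = symmetric_state(n0, n1, n2)
lhs = v @ represented_generator(X, n0 + n1 + n2) @ v
assert np.isclose(lhs, n0 * X[0, 0] + n1 * X[1, 1] + n2 * X[2, 2])
```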
Calculating the surviving off-diagonal terms proceeds similarly (to make the notation easier I'll take the left multiplicities to be $n_0 + 1$, $n_1 - 1$, and $n_2$):

$$ \begin{aligned} \langle n_0{+}1, n_1{-}1, n_2 | \sum_k X_k | n_0, n_1, n_2 \rangle &= \frac{1}{\sqrt{(n_0{+}1)!\,(n_1{-}1)!\,n_2!\; n_0!\, n_1!\, n_2!}} \sum_{\pi \in S_p} \langle \psi' | \Big( \sum_k X_k \Big) P_\pi | \psi \rangle \\ &= \frac{n_1 \cdot n_2!\,(n_1{-}1)!\,(n_0{+}1)!}{\sqrt{(n_0{+}1)!\,(n_1{-}1)!\,n_2!\; n_0!\, n_1!\, n_2!}}\; X_{01} = \sqrt{(n_0{+}1)\, n_1}\; X_{01}, \end{aligned} $$

where $|\psi'\rangle = |0\rangle^{\otimes (n_0+1)} \otimes |1\rangle^{\otimes (n_1-1)} \otimes |2\rangle^{\otimes n_2}$ and the $p!$ from summing over $\sigma$ has already been cancelled as before. Going from the first to the second line above, you can see there are $n_1$ different choices for the 1 on the right which we will match up with a 0 on the left. For each of those choices, there are $n_2!$ permutations permuting the 2s such that they match up with the 2s on the left, $(n_1{-}1)!$ permuting the remaining 1s such that they match up with the 1s on the left, and $(n_0{+}1)!$ permuting the remaining 1 and all the 0s among the 0s on the left. Each of these permutations determines a single term in $\sum_k X_k$ that is non-zero, and that's the one where the $X_k$ sits between the 0 and the 1, contributing $\langle 0 | X | 1 \rangle = X_{01}$.
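Putting the diagonal and single-exchange matrix elements together, here is a sketch (again with my own helper names, not the repo's API) of how one could build the matrix of $\sum_k X_k$ directly in the symmetric basis:

```python
# A hedged sketch of building the Lie-algebra representative directly in the
# symmetric basis, using the diagonal and single-exchange matrix elements above.
import numpy as np

def symmetric_basis_labels(p):
    return [(n0, n1, p - n0 - n1)
            for n0 in range(p + 1)
            for n1 in range(p + 1 - n0)]

def lie_algebra_rep(X, p):
    """Matrix of sum_k 1 x ... x X x ... x 1 restricted to the symmetric basis."""
    labels = symmetric_basis_labels(p)
    index = {lab: a for a, lab in enumerate(labels)}
    d = len(labels)
    out = np.zeros((d, d), dtype=complex)
    for n in labels:
        col = index[n]
        # Diagonal term: 0, 1, and 2 appear the same number of times on both sides.
        out[col, col] = sum(n[i] * X[i, i] for i in range(3))
        # Off-diagonal terms: one j on the right exchanged for an i on the left.
        for i in range(3):
            for j in range(3):
                if i == j or n[j] == 0:
                    continue
                m = list(n)
                m[i] += 1
                m[j] -= 1
                out[index[tuple(m)], col] = X[i, j] * np.sqrt((n[i] + 1) * n[j])
    return out

# e.g. a Hermitian Gell-Mann-like generator should stay Hermitian in the symmetric basis.
lam1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
rep = lie_algebra_rep(lam1, p=5)
assert np.allclose(rep, rep.conj().T)
```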
The unitary matrix elements are a little more involved, since there aren't so many terms that are identically zero like for the Lie-algebra elements. To deal with the calculation I've introduced a sum over matrices $M$ having the property that their rows sum up to the left multiplicities and their columns sum up to the right multiplicities:

$$ \langle m_0, m_1, m_2 | U^{\otimes p} | n_0, n_1, n_2 \rangle = \frac{1}{\sqrt{m_0!\, m_1!\, m_2!\; n_0!\, n_1!\, n_2!}} \sum_{M} N(M) \prod_{i,j} U_{ij}^{M_{ij}}, \qquad \sum_j M_{ij} = m_i, \quad \sum_i M_{ij} = n_j, $$

with $M_{ij}$ counting the tensor-product positions where an $i$ on the left meets a $j$ on the right. Saying much in general about these matrices seems hard (see this answer to a Mathematics Stack Exchange question for a link to a review paper), but we can fairly easily enumerate all these matrices with a bit of Python, like I do in my recently-added function generate_row_col_sum_constrained_posint_matrices.
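For illustration, here is one straightforward way to enumerate such matrices by recursing over the rows; it is only a sketch, and not necessarily how generate_row_col_sum_constrained_posint_matrices is implemented in the repo:

```python
# Enumerate non-negative integer matrices with prescribed row and column sums,
# one row at a time. Sketch only; the repo's implementation may differ.
from itertools import product

def constrained_matrices(row_sums, col_sums):
    """Yield tuples of rows (each a tuple of ints >= 0) with the given row/column sums."""
    if sum(row_sums) != sum(col_sums):
        return

    def rows_summing_to(total, bounds):
        # All rows with entries between 0 and the remaining column sums, summing to `total`.
        for row in product(*(range(b + 1) for b in bounds)):
            if sum(row) == total:
                yield row

    def recurse(remaining_rows, remaining_cols, acc):
        if not remaining_rows:
            if all(c == 0 for c in remaining_cols):
                yield tuple(acc)
            return
        for row in rows_summing_to(remaining_rows[0], remaining_cols):
            new_cols = [c - r for c, r in zip(remaining_cols, row)]
            yield from recurse(remaining_rows[1:], new_cols, acc + [row])

    yield from recurse(list(row_sums), list(col_sums), [])

# Example: all matrices with row sums (2, 1, 1) and column sums (1, 2, 1).
mats = list(constrained_matrices((2, 1, 1), (1, 2, 1)))
```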
The factor $N(M)$ counts the number of permutations that result in $j$s matched up with $i$s in the same tensor-product position, since each of these will give a factor of $U_{ij}$. We calculate $N(M)$ by first finding how many ways there are to permute the groups of identical elements on the right among themselves (which is $n_0!\, n_1!\, n_2!$), and then how many ways there are to split each of the groups on the left into the appropriately sized groups matched with 0, 1, and 2 (which are the multinomial coefficients $m_i! / (M_{i0}!\, M_{i1}!\, M_{i2}!)$). Putting these together gives us

$$ N(M) = n_0!\, n_1!\, n_2! \prod_{i} \frac{m_i!}{M_{i0}!\, M_{i1}!\, M_{i2}!}, $$

and substituting that back into the unitary-matrix-element expression gives us

$$ \langle m_0, m_1, m_2 | U^{\otimes p} | n_0, n_1, n_2 \rangle = \sqrt{m_0!\, m_1!\, m_2!\; n_0!\, n_1!\, n_2!}\; \sum_{M} \prod_{i,j} \frac{U_{ij}^{M_{ij}}}{M_{ij}!}. $$
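As a sketch of how that final formula could be used in practice (reusing the constrained_matrices helper from above; the function name and interface here are my own, not the repo's), a single unitary matrix element in the symmetric basis looks like this:

```python
# <m|U^{x p}|n> = sqrt(prod_i m_i! prod_j n_j!) * sum_M prod_ij U_ij^{M_ij} / M_ij!,
# summing over non-negative integer matrices M with row sums m and column sums n.
import numpy as np
from math import factorial, prod, sqrt

def symmetric_unitary_element(U, m, n):
    total = 0.0
    for M in constrained_matrices(m, n):       # enumerator sketched above
        term = 1.0
        for i in range(3):
            for j in range(3):
                term *= U[i, j] ** M[i][j] / factorial(M[i][j])
        total += term
    norm = sqrt(prod(factorial(x) for x in m) * prod(factorial(x) for x in n))
    return norm * total

# Example: a random unitary and p = 3.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)                         # QR of a random matrix gives a unitary Q
elem = symmetric_unitary_element(U, (2, 1, 0), (1, 1, 1))
```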
The code for calculating explicit matrices for the representative unitaries in the symmetric basis is now in my GitHub repo (the slow code using the exponentially growing ambient space is in irrep_codes.su3.symm_tensor_prod.py, and the fast code using the intrinsic symmetric-space basis is in irrep_codes.su3.efficient_symm_rep.py). Even for p=5, using the intrinsic basis makes a huge difference (cutting down the calculation time for a single matrix from 20 s to 90 ms). My code for the Lie-algebra elements, and for doing these calculations analytically using sympy, is still in a Jupyter notebook, which I intend to push into the same repository. After finishing up that bit of code release, we'll be ready to try tackling a more intrinsic approach to the general (p, q) irreps.