We study the expressive power of deep polynomial neural networks through the
geometry of their neurovarieties. We introduce the notion of the activation degree
threshold of a network architecture to express when the dimension of the
neurovariety achieves its theoretical maximum. We prove the existence of the
activation degree threshold for all polynomial neural networks without width-one
bottlenecks and demonstrate a universal upper bound that is quadratic in the largest
width of the network. In doing so, we prove the high activation degree conjecture of Kileel,
Trager, and Bruna. Certain structured architectures have exceptionally low activation
degree thresholds, making them especially expressive in the sense of their
neurovariety dimension. In this direction, we prove that polynomial neural networks
with equiwidth architectures are maximally expressive by showing their activation
degree threshold is one.