Neural codes, represented as collections of binary strings called codewords, are used
to encode neural activity. A code is called convex if it can be realized by an
arrangement of convex open sets in Euclidean space. Previous work has focused on
addressing the question: how can we tell when a neural code is convex? Giusti and
Itskov
(Neural Comput. 26:11 (2014), 2527–2540) identified local obstructions to convexity and
proved that convex neural codes have no local obstructions. The converse is true for
codes on up to four neurons, but false in general. Nevertheless, we prove
that this converse holds for codes with up to three maximal codewords,
and, moreover, that the minimal embedding dimension of such codes is at most
two.
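
For concreteness, here is a small illustrative example of the definition (ours, not drawn from the paper): the code $\mathcal{C} = \{00,\, 10,\, 11,\, 01\}$ on two neurons is convex, since it is realized by the arrangement of convex open sets
\[
  U_1 = (0,2), \qquad U_2 = (1,3) \subset \mathbb{R},
\]
whose regions yield exactly the codewords $10$ (points of $(0,1)$, in $U_1$ only), $11$ (points of $(1,2)$, in both sets), $01$ (points of $(2,3)$, in $U_2$ only), and $00$ (points outside $[0,3]$, in neither set). In particular, this code is convex with minimal embedding dimension $1$.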