
Shifting: one-inclusion mistake bounds and sample compression. (English) Zbl 1158.68452

Summary: We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class \(\mathcal F\), Haussler et al. achieved a \(\mathrm{VC}(\mathcal F)/n\) bound for the natural one-inclusion prediction strategy. The key step in their proof is a \(d=\mathrm{VC}(\mathcal F)\) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of \(n\binom{n-1}{\leqslant d-1}\big/\binom{n}{\leqslant d}<d\), which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension \(d\): they are \(d\)-contractible simplicial complexes, extending the well-known characterization that maximum classes of VC-dimension \(d=1\) are trees.
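The improved density bound can be checked concretely. As an illustrative aside (not taken from the paper, with function names of our own choosing): the Hamming ball \(\{x\in\{0,1\}^n : |x|\leqslant d\}\) is a maximum class of VC-dimension \(d\), and a brute-force count shows its one-inclusion graph attains the density \(n\binom{n-1}{\leqslant d-1}\big/\binom{n}{\leqslant d}\) exactly, sitting strictly below \(d\). A minimal Python sketch:

```python
from itertools import combinations
from math import comb

def le_binom(n, d):
    """Partial binomial sum binom(n, <=d) = C(n,0) + ... + C(n,d)."""
    return sum(comb(n, i) for i in range(d + 1))

def density_bound(n, d):
    """The paper's improved density bound: n * binom(n-1, <=d-1) / binom(n, <=d)."""
    return n * le_binom(n - 1, d - 1) / le_binom(n, d)

def hamming_ball_density(n, d):
    """Density |E|/|V| of the one-inclusion graph of the Hamming ball
    {x in {0,1}^n : |x| <= d}, a maximum class of VC-dimension d.
    Vertices are concepts (encoded as index sets); edges join concepts
    at Hamming distance 1."""
    vertices = {frozenset(c) for r in range(d + 1)
                for c in combinations(range(n), r)}
    # Each edge is counted once, from its smaller endpoint u to u | {i}.
    edges = sum(1 for u in vertices for i in range(n)
                if i not in u and (u | {i}) in vertices)
    return edges / len(vertices)

if __name__ == "__main__":
    for n, d in [(6, 2), (8, 3), (10, 4)]:
        bound = density_bound(n, d)
        actual = hamming_ball_density(n, d)
        print(f"n={n}, d={d}: ball density={actual:.4f}, bound={bound:.4f} < d={d}")
        assert abs(actual - bound) < 1e-9 and bound < d
```

The agreement is no accident: each concept of weight \(k<d\) has exactly \(n-k\) upward neighbours, so the edge count is \(\sum_{k<d}\binom{n}{k}(n-k)=n\binom{n-1}{\leqslant d-1}\) over \(\binom{n}{\leqslant d}\) vertices.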
We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling), which states that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a \(k\)-class analogue of the \(d/n\) mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of \(O(\log n)\) and is shown to be optimal up to an \(O(\log k)\) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
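To make the role of graph density concrete: the one-inclusion strategy predicts by orienting the edges of the one-inclusion graph so that out-degrees stay small, and roughly speaking, by the leave-one-out symmetrization of Haussler et al., a maximum out-degree of \(D\) translates into an expected risk of at most \(D/n\). The sketch below is not the paper's construction (which achieves out-degree bounded by the density itself) but a simpler greedy peeling any reader can run: if every subgraph has density \(|E|/|V|\leqslant c\), then every subgraph has a vertex of degree at most \(2c\), so the peeling achieves maximum out-degree at most \(2c\).

```python
from collections import Counter

def greedy_orientation(vertices, edges):
    """Orient an undirected graph by repeatedly removing a vertex of
    minimum degree and directing its remaining edges away from it.
    If every subgraph has density |E|/|V| <= c, the peeled vertex always
    has degree <= 2c at removal time, so the max out-degree is <= 2c."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(vertices)
    oriented = []                      # directed edges (tail, head)
    while alive:
        u = min(alive, key=lambda v: len(adj[v] & alive))
        oriented.extend((u, w) for w in adj[u] & alive)
        alive.remove(u)
    return oriented

if __name__ == "__main__":
    # Toy example: a 4-cycle has all subgraph densities <= 1,
    # so the greedy orientation has max out-degree <= 2.
    V = [0, 1, 2, 3]
    E = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(Counter(tail for tail, _ in greedy_orientation(V, E)))
```

Closing the factor-of-two gap, i.e. orienting a density-\(d\) graph with out-degree exactly \(d\), requires a matching or flow argument rather than greedy peeling, and sharpening the density constant itself below \(d\) is what the paper's shifting technique delivers.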

MSC:

68T05 Learning and adaptive systems in artificial intelligence

References:

[1] N. Alon, D. Haussler, E. Welzl, Partitioning and geometric embedding of range spaces of finite Vapnik-Chervonenkis dimension, in: Proceedings of the 3rd Annual ACM Symposium on Computational Geometry, 1987, pp. 331-340
[2] Ben-David, S.; Cesa-Bianchi, N.; Haussler, D.; Long, P. M., Characterizations of learnability for classes of \(\{0, \ldots, n \}\)-valued functions, J. Comput. System Sci., 50, 1, 74-86 (1995) · Zbl 0827.68095
[3] Ben-David, S.; Litman, A., Combinatorial variability of Vapnik-Chervonenkis classes with applications to sample compression schemes, Discrete Appl. Math., 86, 1, 3-25 (1998) · Zbl 0905.68048
[4] Blumer, A.; Ehrenfeucht, A.; Haussler, D.; Warmuth, M. K., Learnability and the Vapnik-Chervonenkis dimension, J. Assoc. Comput. Mach., 36, 4, 929-965 (1989) · Zbl 0697.68079
[5] Dudley, R. M., The structure of some Vapnik-Chervonenkis classes, (Le Cam, L. M.; Olshen, R. A., Proceedings of the Berkeley Conference in Honor of Jerzy Neyman, vol. II (1985), Wadsworth, New York), 495-507 · Zbl 1372.28001
[6] Ehrenfeucht, A.; Haussler, D.; Kearns, M.; Valiant, L., A general lower bound on the number of examples needed for learning, Inform. and Comput., 82, 3, 247-261 (1989) · Zbl 0679.68158
[7] Firsov, F., On isometric embeddings of a graph into a Boolean cube, Cybernetics, 1, 112-113 (1965)
[8] S. Floyd, On space-bounded learning and the Vapnik-Chervonenkis dimension, PhD thesis, Technical report TR-89-061, International Computer Science Institute, Berkeley, CA, 1989 · Zbl 0747.68041
[9] Floyd, S.; Warmuth, M. K., Sample compression, learnability, and the Vapnik-Chervonenkis dimension, Machine Learning, 21, 3, 269-304 (1995)
[10] Gärtner, B.; Welzl, E., Vapnik-Chervonenkis dimension and (pseudo-)hyperplane arrangements, Discrete Comput. Geom., 12, 399-432 (1994) · Zbl 0813.52013
[11] Hall, P., On representatives of subsets, J. London Math. Soc., 10, 26-30 (1935) · Zbl 0010.34503
[12] Haussler, D., Sphere packing numbers for subsets of the boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension, J. Combin. Theory Ser. A, 69, 2, 217-232 (1995) · Zbl 0818.60005
[13] Haussler, D.; Littlestone, N.; Warmuth, M. K., Predicting \(\{0, 1 \}\) functions on randomly drawn points, Inform. and Comput., 115, 2, 284-293 (1994) · Zbl 0938.68785
[14] Havel, I.; Morávek, J., B-valuations of graphs, Czechoslovak Math. J., 22, 338-351 (1972) · Zbl 0247.05148
[15] Hlavička, J., Race-free assignment in asynchronous switching circuits, Inform. Process. Mach., 13 (1967)
[16] Kuzmin, D.; Warmuth, M. K., Unlabeled compression schemes for maximum classes, (Auer, P.; Meir, R., Proceedings of the 18th Annual Conference on Learning Theory, Lecture Notes in Comput. Sci., vol. 3559 (2005)), 591-605 · Zbl 1137.68547
[17] Kuzmin, D.; Warmuth, M. K., Unlabeled compression schemes for maximum classes, J. Mach. Learn. Res., 8, 2047-2081 (2007) · Zbl 1222.68239
[18] Li, Y.; Long, P. M.; Srinivasan, A., The one-inclusion graph algorithm is near optimal for the prediction model of learning, IEEE Trans. Inform. Theory, 47, 3, 1257-1261 (2002) · Zbl 0998.68095
[19] Littlestone, N.; Warmuth, M. K., Relating data compression and learnability (1986), unpublished manuscript
[20] Livingston, M.; Stout, Q. F., Embeddings in hypercubes, Math. Comput. Model., 11, 222-227 (1988)
[21] Rubinstein, B. I.P.; Bartlett, P. L.; Rubinstein, J. H., Shifting, one-inclusion mistake bounds and tight multiclass expected risk bounds, (Schölkopf, B.; Platt, J.; Hoffman, T., Advances in Neural Information Processing Systems 19 (2007), MIT Press: MIT Press Cambridge, MA), 1193-1200
[22] B.I.P. Rubinstein, J.H. Rubinstein, Geometric & topological representations of maximum classes with applications to sample compression, in: Proceedings of the 21st Annual Conference on Learning Theory, 2008, pp. 299-310
[23] Sauer, N., On the density of families of sets, J. Combin. Theory Ser. A, 13, 145-147 (1972) · Zbl 0248.05005
[24] Warmuth, M., Sample compression, learnability, and the Vapnik-Chervonenkis dimension, (Computational Learning Theory: Third European Conference, EuroCOLT '97, Lecture Notes in Comput. Sci. (1997), Springer) · Zbl 0697.68079
[25] M.K. Warmuth, Compressing to VC dimension many points, in: Proceedings of the 16th Annual Conference on Learning Theory, COLT03, open problem paper, 2003
[26] E. Welzl, Complete range spaces, unpublished notes, 1987