ISSN 2157-5452 (electronic)
ISSN 1559-3940 (print)
 
Entropic perspective on AI resilience, data-minimal learning and optimal control

Illia Horenko

Vol. 20 (2025), No. 1, 231–251
Abstract

Behind general terms like adversarial attacks and generative adversarial networks, criteria such as AI resilience and AI robustness have gained crucial importance in the effort to make emergent AI tools reliable, energy-efficient, and safe. From a mathematical perspective, these problems boil down to finding a mathematically sound quantification of the smallest sufficient perturbations of a function's arguments that lead to the largest possible perturbations of the approximated function's values.
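The perturbation problem described above can be illustrated with a minimal sketch: a projected gradient search for a perturbation of bounded norm that maximizes the change in a black-box scalar function. The finite-difference gradients and the toy function below are illustrative assumptions, not the entropic criterion developed in the paper.

```python
import numpy as np

def adversarial_perturbation(f, x, eps=0.1, steps=100, lr=0.05):
    """Projected gradient ascent: find delta with ||delta||_2 <= eps
    that maximizes |f(x + delta) - f(x)|. Gradients are estimated by
    forward finite differences, so f may be any black-box scalar map."""
    delta = np.zeros_like(x, dtype=float)
    h = 1e-5
    for _ in range(steps):
        base = abs(f(x + delta) - f(x))
        g = np.zeros_like(delta)
        for i in range(len(delta)):
            d = delta.copy()
            d[i] += h
            g[i] = (abs(f(x + d) - f(x)) - base) / h
        delta += lr * g
        # project back onto the eps-ball constraint
        n = np.linalg.norm(delta)
        if n > eps:
            delta *= eps / n
    return delta

# toy smooth target function (an assumption for illustration only)
f = lambda x: np.sin(3 * x[0]) + 0.5 * x[1] ** 2
x0 = np.array([0.2, 1.0])
delta = adversarial_perturbation(f, x0, eps=0.1)
```

For a smooth target, the search aligns the perturbation with the local gradient direction, so the resulting change in f approaches the largest achievable within the norm budget.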

Here, an entropy-optimising perspective on adversarial algorithms from AI is proposed to attack this problem. It is shown that adopting this perspective helps prove computational conditions for the global optimality and uniqueness of adversarial attacks, based on cheaply verifiable mathematical criteria. Further, it is shown how this perspective can be used to develop self-attacking learning algorithms that generate optimal new data points for training, replacing one of the trained agents with this mathematical criterion. On a broad selection of synthetic and real-life problems from hydrodynamics and biomedicine, it is shown that such self-attacking learning algorithms allow training orders-of-magnitude simpler and cheaper models with superior performance, and can be used to directly train optimal controls in complex systems. They require only a small fraction of the training data and outperform a set of contemporary AI tools as comprehensive as the author was able to assemble, including boosted random forests, deep neural networks, and foundation models based on transformer architectures, both in terms of complexity (measured as the model descriptor length) and predictive performance.
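The self-attacking training loop can be sketched as an active-learning iteration: the current model proposes its own next training point, so no second adversarial agent is needed. The sensitivity-times-novelty score used below is a simple illustrative stand-in for the paper's entropic criterion, and the target function and polynomial model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
target = lambda x: np.sin(4 * x)          # "unknown" function to learn

def fit(X, y, deg=5):
    """Least-squares polynomial fit; returns a prediction callable."""
    c = np.polyfit(X, y, deg)
    return lambda x: np.polyval(c, x)

# start from a tiny data set and grow it by self-attack: each round,
# query the candidate where the current model is most sensitive to a
# small input perturbation AND far from existing training points
X = rng.uniform(-1, 1, 6)
y = target(X)
candidates = np.linspace(-1, 1, 201)
for _ in range(10):
    model = fit(X, y)
    h = 1e-3
    sensitivity = np.abs(model(candidates + h) - model(candidates - h)) / (2 * h)
    novelty = np.min(np.abs(candidates[:, None] - X[None, :]), axis=1)
    x_new = candidates[np.argmax(sensitivity * novelty)]
    X = np.append(X, x_new)
    y = np.append(y, target(x_new))       # label the self-proposed point

final = fit(X, y)
err = np.max(np.abs(final(candidates) - target(candidates)))
```

The novelty factor prevents the loop from querying the same point twice, so each round spends its single label where the model is both steep and under-sampled, mirroring the data-minimal flavor of the approach.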

Keywords
AI resilience, AI control, adversarial network, adversarial attack, adversarial learning, entropy
Mathematics Subject Classification
Primary: 68Q32, 68T01, 68T99
Secondary: 92C50
Milestones
Received: 7 October 2024
Revised: 13 August 2025
Accepted: 13 August 2025
Published: 30 August 2025
Authors
Illia Horenko
Faculty of Mathematics
RPTU Kaiserslautern-Landau
67663 Kaiserslautern
Germany