AI with a focus on security


As AI becomes a bigger part of national security and critical infrastructure, there's a growing urgency to not just use it, but to use it securely. Tutus is the main sponsor of Secure AI, the leading conference on AI security. In this regard, Yordan Lazarov, Senior Software Engineer and System Architect at Tutus, highlights one of the most critical questions right now: How can AI become a powerful tool even in high-assurance environments?

“AI is becoming a strategic advantage because it helps defense and security organizations process vast amounts of information, detect patterns, and respond faster than humans alone could. But in this context, speed without security can create new risks,” says Yordan Lazarov.

Secure use of AI requires more than efficiency. National security demands AI that is resilient, trusted, and tightly controlled.

Challenges in classified environments
“Nothing can be left to chance. AI brings a unique complexity because it doesn’t just involve protecting data; the models, training processes, and inference systems all need to be secured,” says Yordan.
AI in classified environments requires a completely different approach than in commercial contexts. Everything must run on-premises, with layered protection: strict authorization and authentication, accountability through logging, and secure connections, even between clients and AI applications.
“Most AI solutions are designed for the cloud, but in secure environments the cloud is not an option. AI must be trained, tested, and deployed entirely on-premises, within trusted infrastructure. That raises issues of scalability and performance, but it also creates the opportunity to design AI with complete security assurance,” Yordan explains.

At Tutus, we develop, test, and certify security software—including AI—to meet high-assurance requirements before deploying it on customer premises or running it on our own highly assured on-premises hardware.


Risks of unsecured AI
Without strong protection, AI stops being an advantage and becomes a vulnerability. It can become an invisible attack vector.
“Data, decision chains, and systems can be manipulated or compromised, causing serious harm. In critical infrastructure, this could mean disruptions of power grids, communication systems, or even defense readiness,” says Yordan. Secure AI must therefore be built with the same multi-layered protection already used in classified networks.

The future of secure AI
As AI adoption accelerates, the critical question is not what AI can do, but how AI can be trusted.
“Tutus’s role is to ensure that AI in sensitive environments is deployed safely, with layered protections and complete accountability. That’s how AI becomes not just a powerful tool, but a trusted foundation for national security and critical infrastructure,” Yordan concludes.

Tutus has decades of experience in building secure communication solutions for environments where compromise is not an option. That expertise is being extended into AI: developed, tested, and assured to meet the highest security requirements.

As the main sponsor of Secure AI 2025, we aim to take another step in advancing the development of secure AI, where the full potential of the technology is combined with the assurance required by national security.