Perimeter defenses like firewalls, authentication and DDoS protection are crucial, but they only control who can access a system and how much data flows in or out. A newer set of options governs what models can actually see and do: running inference inside secure enclaves, dynamic PII scrubbing, role-based data filtering and least-privilege access controls for agents. From my experiments, two strategies stand out: confidential compute paired with policy-driven PII protection, and fine-grained agent permissions.
Confidential compute + policy-driven PII protection
In fintech, healthtech, regtech and other regulated domains, LLMs routinely process sensitive data such as contracts, patient records and financials. Even if you trust the cloud provider, regulators may not. Confidential computing protects data in use, shielding it even from the cloud operator, and gives you a much stronger compliance story.
But there is a trade-off: the technology is still maturing and can add significant cost and operational complexity. It is best reserved for narrow use cases involving regulated data, and it delivers the most value when paired with dynamic PII scrubbing tools like Presidio or Immuta, which adapt protection to geography, user role or data classification, as in the sketch below.
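To make the policy-driven part concrete, here is a minimal sketch using Microsoft Presidio to scrub prompts before they reach the model. The POLICIES table and the scrub_for_role helper are illustrative assumptions, not part of Presidio's API; in practice the policy would be driven by your own role, geography and data-classification systems.

```python
# Sketch: policy-driven PII scrubbing before a prompt reaches the model.
# Requires: pip install presidio-analyzer presidio-anonymizer
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine
from presidio_anonymizer.entities import OperatorConfig

# Hypothetical policy table: which entity types to scrub, and how, per user role.
POLICIES = {
    "analyst": {
        "entities": ["PERSON", "EMAIL_ADDRESS", "US_SSN", "CREDIT_CARD"],
        "operator": OperatorConfig("replace", {"new_value": "<REDACTED>"}),
    },
    "auditor": {
        "entities": ["US_SSN", "CREDIT_CARD"],
        "operator": OperatorConfig(
            "mask", {"masking_char": "*", "chars_to_mask": 8, "from_end": True}
        ),
    },
}

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

def scrub_for_role(text: str, role: str) -> str:
    """Detect PII and anonymize it according to the caller's role policy."""
    policy = POLICIES[role]
    findings = analyzer.analyze(text=text, language="en", entities=policy["entities"])
    result = anonymizer.anonymize(
        text=text,
        analyzer_results=findings,
        operators={entity: policy["operator"] for entity in policy["entities"]},
    )
    return result.text

if __name__ == "__main__":
    prompt = "Summarize the dispute raised by John Doe (SSN 078-05-1120, john.doe@example.com)."
    print(scrub_for_role(prompt, "analyst"))   # names, SSNs and emails replaced before inference
    print(scrub_for_role(prompt, "auditor"))   # only financial identifiers are masked
```

The same pattern extends to geography or data classification: swap the role key for a tuple like (role, region, sensitivity) and the scrubbing rules tighten or relax without touching the inference code.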
