Encryption scheme enabling arbitrary computation on encrypted data without decryption.
Fully Homomorphic Encryption (FHE) is a cryptographic technique that allows arbitrary mathematical operations to be performed directly on encrypted data, producing an encrypted result that, when decrypted, exactly matches what would have been obtained by performing the same operations on the original plaintext. Unlike conventional encryption, which requires data to be decrypted before any computation can occur, FHE keeps data encrypted throughout the entire processing pipeline; only the holder of the secret key can recover the final result. This property makes it possible, in principle, to delegate computation to an untrusted third party, such as a cloud provider, without ever exposing the underlying sensitive information.
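The homomorphic property can be made concrete with a toy symmetric-key scheme in the style of the DGHV integer scheme: a bit is hidden under small even noise plus a random multiple of a secret odd integer, and adding or multiplying ciphertexts adds or multiplies the hidden bits modulo 2. The parameter sizes below are arbitrary toy choices, far too small to be secure; this is a sketch of the idea, not a usable cipher.

```python
import random

# Toy DGHV-style scheme (insecure, illustration only): encrypt a single bit
# as c = bit + 2*r + p*q, where p is the secret key, q is a large random
# mask, and 2*r is small even noise.
def keygen():
    # Secret key: a large-ish odd integer p (toy size).
    return 2 * random.randrange(10**6, 10**7) + 1

def encrypt(p, bit):
    q = random.randrange(10**8, 10**9)  # multiple of p hides the bit from anyone without p
    r = random.randrange(1, 50)         # small noise; must stay well below p
    return bit + 2 * r + p * q

def decrypt(p, c):
    # Reduce mod p to strip the mask, then mod 2 to strip the even noise.
    return c % p % 2

p = keygen()
a, b = encrypt(p, 1), encrypt(p, 0)
print(decrypt(p, a + b))  # 1 : ciphertext addition = plaintext XOR
print(decrypt(p, a * b))  # 0 : ciphertext multiplication = plaintext AND
```

Correctness holds only while the noise term stays below p; as the next paragraph explains, that constraint is exactly what made fully general schemes hard to build.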
The mechanics of FHE rely on algebraic structures, typically lattice-based constructions, that preserve certain mathematical relationships through the encryption process. Early homomorphic schemes supported only limited operations (either addition or multiplication, but not both), making them only partially homomorphic. Craig Gentry's 2009 dissertation introduced the first complete FHE construction by using a "bootstrapping" procedure to periodically refresh ciphertexts and prevent noise accumulation from corrupting results — a fundamental obstacle that had blocked fully general schemes for decades. Subsequent work produced more practical variants such as BGV, BFV, CKKS, and TFHE, each optimized for different computational workloads.
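A back-of-the-envelope model shows why noise accumulation caps computation without bootstrapping: in integer- and lattice-based schemes, multiplying two ciphertexts roughly multiplies their noise magnitudes, doubling the noise bit-length, and decryption fails once the noise reaches the size of the secret modulus. The parameter values below are hypothetical, chosen only to make the arithmetic visible.

```python
# Hypothetical parameters: a 2048-bit secret modulus and 20 bits of noise
# in a fresh ciphertext. Each homomorphic multiplication ~doubles the
# noise bit-length; decryption breaks once noise reaches the modulus size.
modulus_bits = 2048
noise = 20   # noise bit-length of a fresh ciphertext

depth = 0
while noise * 2 < modulus_bits:
    noise *= 2   # one more homomorphic multiplication
    depth += 1

print(depth)  # 6 : multiplicative depth supported before noise overwhelms the modulus
# Bootstrapping homomorphically re-encrypts the ciphertext, resetting the
# noise to roughly the fresh level so evaluation can continue indefinitely.
```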
In machine learning, FHE has attracted significant interest as a path toward privacy-preserving inference and training. Models can, in principle, evaluate predictions on a user's encrypted input without ever learning the underlying data, and federated or cloud-based training pipelines can process sensitive records — medical, financial, biometric — without exposing them to the compute infrastructure. Frameworks such as Microsoft SEAL, OpenFHE, and Concrete have lowered the barrier to integrating FHE into ML workflows.
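The flavor of encrypted inference can be sketched without a full FHE stack, because a linear model needs only additions and plaintext-by-ciphertext scalings, operations that even the older, merely *additively* homomorphic Paillier scheme supports. The sketch below uses toy hardcoded primes and made-up weights and features, so it is wildly insecure and purely illustrative; the server computes a weighted sum on ciphertexts without ever seeing the client's inputs.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic) with tiny hardcoded
# primes -- insecure, illustration only.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                           # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:                      # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# Hypothetical scenario: the server holds plaintext model weights, the
# client's features arrive encrypted, and all sums must stay below n.
weights = [3, 5, 2]
features = [10, 20, 3]     # client's private input
cts = [encrypt(x) for x in features]

acc = encrypt(0)           # running encrypted total
for w, c in zip(weights, cts):
    acc = (acc * pow(c, w, n2)) % n2   # Enc(a) * Enc(b)^w = Enc(a + w*b)

print(decrypt(acc))        # 3*10 + 5*20 + 2*3 = 136
```

A full FHE scheme removes Paillier's restriction to additions, which is what lets nonlinear layers (activations, comparisons) run under encryption as well, at the performance cost described below.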
Despite its promise, FHE remains computationally expensive compared to plaintext operations, often by several orders of magnitude, and ciphertext sizes are substantially larger than their plaintext equivalents. Active research focuses on hardware acceleration, algorithmic improvements, and compiler toolchains that automatically translate standard ML models into FHE-compatible circuits, steadily narrowing the gap between theoretical capability and practical deployment.