Deploying Stable Diffusion Model on OBLV Deploy
In this tutorial, we showcase the steps we followed to deploy a Stable Diffusion model in a Trusted Execution Environment.
6 minute read
Oct 31, 2024

As artificial intelligence continues to shape the future of technology, the deployment of AI models has become increasingly important. It’s not just businesses that are looking to leverage machine learning for competitive advantage. Developers, too, play a crucial role in bringing these models to life, navigating the complexities of deployment to ensure their success.
However, deploying these models, especially when dealing with sensitive data, comes with significant challenges, particularly around data privacy and security. We need to be certain that the data we provide in our prompts persists only for our session and is not used to train future models.
In this article, we’ll showcase how to deploy a Stable Diffusion model using OBLV Deploy, highlighting the steps we followed to create a secure image-generation application using the Diffusers library from Hugging Face. This demonstration emphasises how your data and AI models remain secure throughout the process using Trusted Execution Environments (TEEs).
Whether you're a business protecting sensitive information or a developer tackling the complexities of secure deployment, this guide offers practical insights to help you innovate while maintaining the highest privacy standards.
What Is Stable Diffusion?
Stable Diffusion refers to a category of deep learning models specifically designed to generate high-quality images from textual descriptions. These models have diverse applications, ranging from art generation and content creation to more sophisticated use cases like visual storytelling.
Their importance extends beyond producing visually compelling images; they also have the potential to drive innovation across various industries by transforming how we create and consume visual content.
Popular Models for Image Generation
DALL-E 2: Developed by OpenAI, known for generating images from textual descriptions with high fidelity.
Imagen: A text-to-image diffusion model developed by Google Research.
Stable Diffusion: Developed by Stability AI, this model is designed for high-quality image generation with an open-source approach.
Common Libraries for Stable Diffusion
Hugging Face Transformers: A popular library offering state-of-the-art models, including support for diffusion models.
Diffusers Library: Another Hugging Face library specifically tailored for diffusion models.
OpenAI’s CLIP: Used in conjunction with diffusion models to align textual and visual representations.
PyTorch and TensorFlow: General-purpose deep learning libraries that support implementing and fine-tuning diffusion models.
In this blog, we will focus on the Diffusers library from Hugging Face, a comprehensive and modular framework that has gained popularity for its ease of use and extensive support for diffusion models.
This library provides access to a variety of pre-trained models while also seamlessly integrating with the Hugging Face ecosystem, including the Transformers library. This integration allows developers to fine-tune and customise models according to specific needs, all while benefiting from robust documentation and community support.
Why Deploy In a “Trusted Execution Environment”?
The primary motivation for using Trusted Execution Environments (TEEs) is straightforward: you cannot always trust the inference provider. Deploying AI models in a TEE ensures that your data is not being misused for purposes like retraining or other unauthorised activities. TEEs provide verifiable proof that your data is secure and handled in accordance with your governance requirements.
In addition, TEEs are essential for organisations where a central AI team manages policies across business units that have different data and privacy rules. They enable secure computation in environments where data controllers are federated but compute resources are centralised, ensuring consistent adherence to privacy and security standards across the board.
By isolating the execution of AI models, TEEs prevent unauthorised access and tampering, protecting both the data and the model itself. This is particularly critical for sensitive data where regulatory compliance and data integrity are non-negotiable. TEEs allow organisations to maintain control over their intellectual property, safeguarding proprietary algorithms and model weights from reverse engineering or theft.
Why OBLV Deploy Simplifies Secure AI Deployment
Deploying AI models in environments requiring stringent data security can be complex, but OBLV Deploy is designed to simplify this process for developers through these functionalities:
Seamless Integration:
OBLV Deploy integrates effortlessly with Kubernetes and the rest of your existing technology stack, allowing you to deploy secure enclaves without altering your current workflows or CI/CD pipelines.
Simplified Management:
The platform’s intuitive use of manifests and policies lets you easily define and enforce security protocols, resource allocations, and networking rules, making the management of secure environments straightforward.
Robust Security:
Leveraging AWS Nitro Enclaves, OBLV Deploy ensures data processed within the enclave remains secure, with end-to-end encryption and attestation processes that verify the integrity of both data and processing environments. This robust security framework is enforced without adding unnecessary complexity, allowing you to focus on development rather than security intricacies.
Scalability and Flexibility:
OBLV Deploy supports autoscaling, load balancing, and persistent sessions, ensuring your applications can handle varying levels of demand without compromising performance or security.
In our setup, we used the pre-trained model runwayml/stable-diffusion-v1-5, but any model compatible with a CPU could be used.
Prerequisites
OBLV-Deploy: https://docs.oblv.oblivious.com/home/getting-started/prerequisites/
Basic understanding of Python
We built a demo application to demonstrate the deployment. This simple Flask application allowed us to:
Enter prompts for image generation.
Process those prompts and track their status through different processing steps.
Download the generated images once processing is complete.
The front end of this application is user-friendly, offering a straightforward interface for interacting with the model.

For the prompt above, the generated image looks like this:

Deploying the Application with OBLV Deploy
Here’s how we deployed the application:
Docker Image: We used a pre-built Docker image available on Docker Hub, containing all the necessary dependencies for the Stable Diffusion model. This approach ensures smooth deployment, consistency across environments, and quick setup.
Kubernetes Manifest: We used the Kubernetes manifest file, which contains configuration details specifying how the application should be deployed, including resource allocation and networking settings.
Install OBLV Deploy: Before deployment, we made sure OBLV Deploy was installed following this guide.
Update the Kubernetes Manifest: We modified the stable-diffusion.yaml file to include the dnsHostName field, which points to the domain where we hosted the application. This ensures the application is accessible at the desired domain.
Deploy the Application: Finally, we deployed the application by running the following command:
This command initiated the enclave instances, pulled in the Docker image, and started the application, enabling us to securely deploy the model within the Trusted Execution Environment.
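The manifest change described above amounts to adding a single field. The fragment below is illustrative only: the dnsHostName key comes from the steps above, while the example domain and file layout are placeholders; consult the actual stable-diffusion.yaml for the surrounding structure.

```yaml
# Illustrative fragment of stable-diffusion.yaml: only the dnsHostName
# key is taken from the steps above; the domain is a placeholder.
dnsHostName: stable-diffusion.example.com
```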
Connecting and Using the Application
Once the application was packaged and deployed within the Trusted Execution Environment using OBLV Deploy, we needed to establish a secure connection to interact with it. This involved configuring the CLI, connecting to the enclave, and finally accessing the application to generate images.
Below are the steps we followed to securely connect and use the deployed Stable Diffusion model:
Step 1: Obtaining the Configuration File
First, we obtained the configuration file for the Oblivious CLI. This file contains key information such as authentication credentials and deployment details that allow the CLI to communicate with the enclave. We used the following command to extract it:
Step 2: Obtaining the User Manifest
This file includes configuration details specific to the user's environment and is required to establish a secure session with the deployed enclave. To generate it, we used the oblv get-config command:
Step 3: Connect to the Enclave
With the necessary manifests in place, we connected to the Trusted Execution Environment using the CLI. This step ensures that all communication with the enclave remains secure and that the execution of the AI model is isolated from unauthorised access:
Step 4: Accessing the Application
Finally, we accessed the application through a web browser. By navigating to the specified URL, we were able to securely input prompts and generate images with the deployed Stable Diffusion model:
Here’s one more example output screenshot:

Final Thoughts
Deploying Stable Diffusion models in Trusted Execution Environments with OBLV Deploy provides a robust solution for ensuring data privacy and model integrity. By following the steps in this article, you can integrate strong security measures into your AI deployment process without sacrificing performance or usability.
Using the Hugging Face Diffusers library makes the process simple and flexible, allowing you to fine-tune and customise your models according to your specific requirements. OBLV Deploy reduces the complexity of setting up a secure execution environment, making it an ideal tool for developers who need to balance innovation with security.
While this article focused on CPU-compatible models, it’s worth noting that GPU support for enclaves is on the horizon with AWS, which will further enhance the capabilities of OBLV Deploy. This upcoming feature will enable even more efficient processing, particularly for resource-intensive models like Stable Diffusion.
This approach ensures the secure deployment of AI models today as well as offering the flexibility to scale and adapt as your projects grow. Whether you’re handling sensitive data or looking to strengthen the security of your deployments, OBLV Deploy offers a reliable and future-proof solution.
If you're interested in exploring OBLV Deploy and how it can help you deploy AI models securely, please get in touch at hello@oblivious.com.
Tags: secure enclaves, ai, stable diffusion, trusted execution environment, pets