Deploying a Full-Stack Application on Anvil Composable with Kubernetes¶
A complete, user-friendly guide to making your application live and accessible to anyone with internet access. We will walk through an example application with 3 separate components (Svelte frontend, FastAPI backend, and MongoDB database) on Anvil Composable using Kubernetes.
To follow along with code, please clone the corresponding GitHub repository.
Who this tutorial is for
This tutorial is meant for everyone, from new users to veterans. It will not explain every concept comprehensively, but it will cover the main ideas. Veteran users can jump straight to Deploying to Anvil Composable.
Goal of this tutorial¶
The goal is to showcase how to take a web-app you made on your computer and make it live and accessible to anyone in the world with internet. We call this process 'deployment', as we want to deploy our app onto Anvil Composable.
Intuition¶
Accessing our app¶
When people access our web-app (an app viewed in the browser), Anvil Composable takes care of all the minutiae. Imagine if you tried to use your personal computer to host your app: each time someone accessed the app, your computer would receive network traffic and have to provide, or serve, the appropriate content to that user.
Further, your computer would have to be powered on and connected to the internet at all times, otherwise people couldn't access our app. Anvil Composable, among many other things, provides a stable server to ensure users of our app can always access our app.
Kubernetes is the software layer Anvil Composable (AC) uses that automates deployment, scaling, and other aspects of our app. For example, if millions of people start using our app, we most likely want to replicate more instances of our app so we can serve more people concurrently (at the same time). This would require adding compute resources (CPUs, perhaps) so we can replicate many copies of our app for people to use.
Further, Anvil Composable is a great place to host your app because it can easily connect to the compute power of the Anvil supercompute cluster.
Architecture of an example app (Services)¶
A common format for an app is to have 3 separate components: 1 for the frontend, 1 for the backend, and 1 for the database. This keeps logic separate and allows the developer to make incremental changes to each independent component separately. Let's dive into an example.
Say you create a python script that takes in an argument for your full name and then prints out a greeting to you.
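As a concrete sketch (the repository's actual script may differ; this version just matches the behavior described here), you could create and run such a script like this:

```shell
# Create a hypothetical greet.py matching the description above.
cat > greet.py <<'EOF'
import argparse

parser = argparse.ArgumentParser(description="Print a personalized greeting.")
parser.add_argument("--full-name", required=True, help="Your full name")
args = parser.parse_args()

print(f"Hello, {args.full_name}! It is nice to meet you.")
EOF

# Run it from the command line:
python3 greet.py --full-name "Wintermute"
# prints: Hello, Wintermute! It is nice to meet you.
```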
Great, you can run this on the command line and it works. But what if we want people to be able to interact with this script in the browser? We could create what we call a 'frontend', which defines what we see in our browser (in our example of a web-app). We will dive into the details later, but essentially the frontend is called a service because it needs to always monitor the url in our browser and serve the appropriate content.
For example, if we navigate to http://localhost:5173/home we want to show our home page. Let's imagine we have a frontend and we run it via some magic command (we will learn about this later); now we have a frontend service running.
This is what listens on a specific url and port (e.g., http://localhost:5173) and will provide the content for each route (e.g., /home).
Imagine our frontend has a home page with a text box for entering a name and a Submit button. Once you click Submit, the browser prints a nice text blob saying Hello, <name>! It is nice to meet you. Instead of writing the logic in our frontend app (JavaScript, for example), we can reuse the logic from our greet.py to produce this message.
Our frontend will be in charge of collecting input from our user, who types into a box and clicks Submit. Then the frontend will talk to our backend, essentially inserting the name into the script:
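Assuming greet.py accepts the name as a command-line flag, the backend would effectively run:

```shell
python3 greet.py --full-name "Wintermute"
```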
So we outlined how a user goes to a browser page (frontend), writes their name in a text box, the frontend communicates this information to the backend, and then the backend runs the program and provides the greeting.
Next, we need the backend to respond, or communicate back to the frontend with the output from running greet.py, which is Hello, Wintermute! It is nice to meet you. Finally, the frontend receives this data and displays it for the user.
Just like our frontend, we call our backend a service (even if it consists of just 1 Python script) because it needs a mechanism to listen and respond to requests to do work. In our example, doing work means listening for a --full-name to be provided and then running the greet.py script with that --full-name.
Now let's add 1 more service to complete our 3-service app: a database service. Let's say we want to store the --full-name every time people come to our app, where do we put it? This is where the database comes into play. Our database service will simply be a database where we can store and retrieve data as requests come in (this is why we call it a service, as just like our frontend and backend, it must have a way to listen to requests).
Our workflow thus will be:
- user goes to a browser page and our frontend serves them the home page
- they type in their name
- frontend communicates name to the backend
- backend listens and receives the request to run greet.py with the name data
- backend runs the code
- communicates output to database, database receives request, and stores the output
- communicates output to frontend
- frontend receives output and serves the content to the web browser, showing the user their greeting!
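In web terms, the frontend-to-backend hop above is just an HTTP request. As an illustration (the endpoint path /api/greet and the JSON shape here are hypothetical, not from the repository):

```shell
# Hypothetical request the frontend might send when Submit is clicked:
curl -X POST http://localhost:8080/api/greet \
  -H "Content-Type: application/json" \
  -d '{"full_name": "Wintermute"}'
# hypothetical response: {"greeting": "Hello, Wintermute! It is nice to meet you."}
```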
Imagine the frontend has another page, called /old-greetings. Here, the frontend could bypass the backend and simply communicate with the database, asking for all previously stored greetings. Once it receives a response from the database, it can render all the previous greetings on the page. In practice, though, the database service is often accessible only to the backend for security reasons.
In this case, the logic would flow as follows:
- User navigates to /old-greetings in the browser (frontend serves content)
- Frontend requests the old greetings from the backend
- Backend requests database to query old greetings
- Database responds to the backend with all the old greetings
- Backend responds to frontend with all the data from old greetings
- Frontend serves the old greetings data to the web page for viewer to see
Kubernetes to Host our Services¶
In the previous sections, we introduced the different services that make up our application. Now, we need to talk about where those services actually run. This is where you’ll start hearing a lot of Kubernetes-related terms, such as: containers, Docker, Pods, Services, Kubernetes, Ingress, and more. Don’t worry if these sound like buzzwords at first. We’ll introduce only what you need to understand to get up and running. Each part of our application - frontend, backend, and database - runs as its own container. This separation is intentional and powerful: it allows us to develop, update, and deploy each component independently without affecting the others.
Kubernetes Pods
In Kubernetes, containers don’t run on their own. Instead, each container runs inside a Pod, which is the smallest deployable unit in Kubernetes. For our app, we’ll have:
- A Pod for the frontend
- A Pod for the backend
- A Pod for the database
If you want a deeper dive into how Kubernetes works under the hood, The Kubernetes Book by Nigel Poulton is an excellent (and free) resource. This tutorial focuses on the concepts you need to build and deploy an app, not on covering every Kubernetes feature.
How Pods Communicate: Kubernetes Services¶
Now that each service lives in its own Pod, the next question is: how do they talk to each other?
When the frontend needs data, it doesn’t communicate directly with the backend container - it communicates with the backend Pod. To make this communication reliable, Kubernetes provides an abstraction called a Service.
service vs Service
To avoid confusion:

- We'll use “service” (lowercase) to refer to parts of our application (frontend, backend, database)
- We'll use “Service” (capital S) to refer to the Kubernetes object
Think of it this way:
- Each Pod is a house
- A Kubernetes Service is a telephone
- The Service gives Pods a stable “phone number” they can use to reach each other
By defining Services, Kubernetes “wires up” our Pods so they can communicate without needing to know where the other Pods are physically running.
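Concretely, the “phone number” is a stable DNS name. Assuming the Service name and port used later in this tutorial (fastapi-svc on port 8080), a Pod could reach the backend like this:

```shell
# From inside any Pod in the same namespace, the Service's DNS name resolves
# to a stable virtual IP, no matter which node the backend Pods are on:
curl http://fastapi-svc:8080/api/names
```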
Exposing the Application: Ingress¶
Finally, we need a way for users outside the cluster to access our app. This is where Ingress comes in.
An Ingress is a Kubernetes resource that defines:
- Which URLs your application is available at
- Which Services handle incoming requests
For example, we might configure an Ingress so that requests to: wintermutant.anvilcloud.rcac.purdue.edu are routed to our frontend Service, which then serves the homepage.
Using our analogy:

- Pods are houses
- Services are telephones
- Ingress is the public phone number that lets the outside world find your app
For security reasons, we typically expose only the frontend through the Ingress. The backend and database remain internal to the cluster, protected from direct access.
At a high level, our Kubernetes setup looks like this:
- Each app component runs in its own container
- Each container lives inside a Pod
- Pods communicate with each other through Kubernetes Services
- The outside world accesses the app through an Ingress
With these building blocks in place, we can now focus on deploying and managing our application inside Kubernetes.
Local Kubernetes Development & Deployment¶
Note
Before deploying to Anvil Composable, it's helpful to test your Kubernetes setup locally. This lets you iterate quickly and catch issues before deploying to production.
We'll deploy:
- Frontend: Svelte application (port 3000)
- Backend: FastAPI Python application (port 8080)
- Database: MongoDB (port 27017)
- Ingress: HTTP routing to frontend and backend
Prerequisites¶
Required installations
Before you begin, ensure you have:
- Docker installed: Get Docker
- kubectl installed: Install kubectl
- minikube installed: Install minikube
- Basic understanding of Docker and command line
Setting Up Local Kubernetes with Minikube¶
Minikube runs a single-node Kubernetes cluster on your local machine.
1. Start minikube:
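A minimal start command (defaults are fine for this tutorial; add flags like --cpus or --memory if you want more resources):

```shell
minikube start
```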
This may take a few minutes on first run as it downloads the Kubernetes components.
2. Verify the cluster is running:
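For example:

```shell
minikube status
kubectl get nodes
```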
3. Enable the Ingress addon:
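Minikube ships an NGINX-based ingress controller as an addon:

```shell
minikube addons enable ingress
```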
This allows us to test Ingress routing locally.
4. (Optional) Use minikube's Docker daemon:
To avoid pushing images to a registry during local development, you can build images directly in minikube's Docker:
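On macOS/Linux:

```shell
eval $(minikube docker-env)
```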
Now any docker build commands will build images inside minikube, making them immediately available to your pods.
Note
For this tutorial, I do not personally like to do this, as I prefer to point to Docker Hub. Later in the tutorial, I will reference pointing to Docker Hub. If you choose minikube's Docker, you may have to slightly adjust your commands when pushing/pulling and pointing to your containers.
Architecture Overview¶
Our application follows a three-tier architecture:
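A simplified sketch (the frontend-svc and fastapi-svc names match the Services described below; the database Service name depends on the repository manifest):

```
               Internet
                  │
               Ingress
            /          \
          /              /api
          │               │
    frontend-svc     fastapi-svc
          │               │
   Frontend Pods     Backend Pods
                          │
                   MongoDB Service
                          │
                     MongoDB Pod
```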
Key Components:
- Ingress: Routes external traffic to frontend (/) and backend (/api)
- Services: Provide stable networking for pod communication
- Deployments: Manage pod lifecycle and scaling
- Namespace: All resources deployed in your chosen namespace (e.g., <username>-tutorial)
Project Structure¶
Directory Overview:
- docker/backend/: FastAPI backend application and Dockerfile
- frontend/: Svelte frontend application and Dockerfile
- k8s/: Kubernetes manifests that we'll apply using kubectl
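Roughly, the layout looks like this (a sketch based on the files referenced in this tutorial; the repository may contain more):

```
.
├── docker/
│   └── backend/              # FastAPI app + Dockerfile
├── frontend/                 # Svelte app + Dockerfile
└── k8s/
    ├── backend/
    │   ├── deployment.yaml
    │   └── service.yaml
    ├── database/
    │   ├── deployment.yaml
    │   └── service.yaml
    ├── frontend/
    │   ├── deployment.yaml
    │   └── service.yaml
    ├── ingress-local.yaml
    └── ingress-prod.yaml
```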
Note
In this tutorial, we'll walk through deploying each component step-by-step using kubectl commands. This hands-on approach helps you understand how each piece works. Later, you could automate the full deployment with a single manifest or a script.
Understanding the Kubernetes Manifests¶
Namespace¶
All resources are deployed in a namespace you create (e.g., <username>-tutorial). We will create it later. For now, let's walk through all the manifest (AKA config or YAML) files that specify the details of our Kubernetes cluster.
Database Layer¶
File: k8s/database/deployment.yaml
Key features:
- Single replica (stateful workload)
- Resource limits: 500m CPU, 512Mi memory
- Uses emptyDir volume (data lost on pod restart)
- Environment variables for authentication
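A sketch of what such a manifest looks like (the repository's file is authoritative; the metadata names and example credentials here are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1                      # single replica: stateful workload
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:7.0
          ports:
            - containerPort: 27017
          env:                     # example credentials; prefer a Secret in practice
            - name: MONGO_INITDB_ROOT_USERNAME
              value: admin
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: changeme
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
          volumeMounts:
            - name: data
              mountPath: /data/db
      volumes:
        - name: data
          emptyDir: {}             # data is lost when the Pod restarts
```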
File: k8s/database/service.yaml
Exposes MongoDB on port 27017 within the cluster:
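A ClusterIP Service for this might look like (the name mongodb-svc and the app label are assumptions; check the repository file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-svc        # assumed name; must match what MONGO_URI references
spec:
  selector:
    app: mongodb           # routes traffic to Pods carrying this label
  ports:
    - port: 27017
      targetPort: 27017
```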
Note: For production, consider using a PersistentVolume instead of emptyDir.
Backend Layer¶
File: k8s/backend/deployment.yaml
Key features:
- 2 replicas for high availability
- Resource limits: 250m CPU, 256Mi memory
- Environment variable MONGO_URI connects to the MongoDB service
- Container runs on port 80, exposed via service on port 8080
File: k8s/backend/service.yaml
Exposes the FastAPI backend as fastapi-svc:
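A sketch of that Service (the app label is an assumption; the port mapping matches the features listed above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fastapi-svc
spec:
  selector:
    app: fastapi           # assumed label; must match the Deployment's Pod template
  ports:
    - port: 8080           # port other Pods use to reach the backend
      targetPort: 80       # port the container actually listens on
```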
Frontend Layer¶
File: k8s/frontend/deployment.yaml
Similar structure to backend:
- 2 replicas
- Resource limits: 250m CPU, 256Mi memory
- Runs on port 3000
File: k8s/frontend/service.yaml
Exposes the frontend as frontend-svc on port 3000:
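A sketch of that Service (the app label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend          # assumed label; must match the Deployment's Pod template
  ports:
    - port: 3000
      targetPort: 3000
```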
Ingress¶
Files: k8s/ingress-local.yaml and k8s/ingress-prod.yaml
Routes external HTTP traffic:
- / → Frontend service
- /api → Backend service
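A sketch of the production variant (the rule structure is typical for an NGINX ingress; the exact annotations and resource name in the repository files may differ):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress              # assumed name
spec:
  rules:
    - host: wintermutant.anvilcloud.rcac.purdue.edu   # production host
      http:
        paths:
          - path: /api           # backend traffic
            pathType: Prefix
            backend:
              service:
                name: fastapi-svc
                port:
                  number: 8080
          - path: /              # everything else goes to the frontend
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 3000
```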
Deploying to Local Minikube¶
Now let's deploy the application to your local minikube cluster.
1. Make sure minikube is running:
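```shell
minikube status
```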
2. Build images in minikube's Docker (recommended for local dev):
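For example (the image names and :local tags are examples; use whatever you reference in your manifests):

```shell
eval $(minikube docker-env)          # build inside minikube's Docker daemon
docker build -t anvil-tutorial-backend:local docker/backend
docker build -t anvil-tutorial-frontend:local frontend
```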
3. Update the Kubernetes manifests to use local images:
Edit k8s/backend/deployment.yaml:
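Something like the following (the image/tag must match what you built; imagePullPolicy: Never tells Kubernetes to use the local image instead of pulling from a registry):

```yaml
image: anvil-tutorial-backend:local   # assumed local tag from the build step
imagePullPolicy: Never
```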
Edit k8s/frontend/deployment.yaml:
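Same pattern for the frontend image:

```yaml
image: anvil-tutorial-frontend:local  # assumed local tag from the build step
imagePullPolicy: Never
```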
4. Create namespace and deploy:
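For example (the namespace name is just an example):

```shell
kubectl create namespace tutorial-local
kubectl config set-context --current --namespace=tutorial-local
kubectl apply -f k8s/database/
kubectl apply -f k8s/backend/
kubectl apply -f k8s/frontend/
kubectl apply -f k8s/ingress-local.yaml
```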
5. Access the application:
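Run the tunnel in a separate terminal (it needs to keep running, and may ask for your password):

```shell
minikube tunnel
```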
With minikube tunnel running, access the app at:
- Frontend: http://localhost/
- Backend API: http://localhost/api/names
At this point, you should see a little guest book app where the frontend displays the interface and the backend takes in the name and stores it in the database.
6. Verify everything is working:
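```shell
kubectl get pods      # all pods should be Running
kubectl get svc
kubectl get ingress
```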
Test the Backend¶
TODO: Ensure a stable return for this API endpoint.
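One way to hit the backend directly is to port-forward its Service and curl the endpoint the app uses:

```shell
kubectl port-forward svc/fastapi-svc 8080:8080 &
curl http://localhost:8080/api/names
```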
Test the Frontend¶
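Same idea for the frontend Service:

```shell
kubectl port-forward svc/frontend-svc 3000:3000 &
curl -I http://localhost:3000/      # a healthy frontend returns HTTP 200
```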
Test via Ingress¶
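With minikube tunnel running:

```shell
curl http://localhost/              # frontend via the Ingress
curl http://localhost/api/names     # backend via the Ingress
```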
Once you've verified locally, you're ready to deploy to Anvil Composable.
Deploying to Anvil Composable¶
Building and Pushing Docker Images¶
Before deploying to Kubernetes, we need to containerize our backend and frontend applications and push them to a container registry. We'll use Docker Hub in this tutorial. Unlike our local deployment, we cannot use our local minikube Docker daemon; we must push to a registry accessible to Anvil Composable.
Prerequisites
- Create a Docker Hub account at hub.docker.com if you don't have one
- Log in to Docker Hub from your terminal:
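```shell
docker login
```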
Enter your Docker Hub username and password when prompted. For the sake of this tutorial, I will be using the username wintermutant.
Backend Image¶
The backend Dockerfile (docker/backend/Dockerfile) creates a Python 3.11 container running FastAPI with Uvicorn. We need to build this image and push it to Docker Hub so that Kubernetes can pull it when creating Pods.
AMD64
Anvil Composable runs on AMD64 (x86_64) servers. If you're building on an Apple Silicon Mac (ARM64), you must build for the correct architecture or the image won't run.
Build and push (multi-architecture):
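For example (the :latest tag is an example; substitute your own Docker Hub username for wintermutant):

```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t wintermutant/anvil-tutorial-backend:latest \
  --push \
  docker/backend
```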
Note
If docker buildx isn't available, you can build for AMD64 only:
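```shell
docker build --platform linux/amd64 -t wintermutant/anvil-tutorial-backend:latest docker/backend
docker push wintermutant/anvil-tutorial-backend:latest
```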
Verify the push:
Visit https://hub.docker.com/r/YOUR_DOCKERHUB_USERNAME/anvil-tutorial-backend to confirm your image is uploaded.
Update the Kubernetes deployment:
Edit k8s/backend/deployment.yaml and update the image field to reference your image:
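```yaml
image: wintermutant/anvil-tutorial-backend:latest   # substitute your username
```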
Frontend Image¶
The frontend is a SvelteKit application located in frontend/. It has its own Dockerfile that creates a Node.js container.
Build and push (multi-architecture):
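Same pattern as the backend (the image name anvil-tutorial-frontend is an example):

```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t wintermutant/anvil-tutorial-frontend:latest \
  --push \
  frontend
```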
docker buildx reminder...
If docker buildx isn't available, you can build for AMD64 only:
Update the Kubernetes deployment:
Edit k8s/frontend/deployment.yaml and update the image field:
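```yaml
image: wintermutant/anvil-tutorial-frontend:latest  # substitute your username
```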
Database Image¶
For MongoDB, we use the official public image from Docker Hub: mongo:7.0. No build step is required - Kubernetes will pull this image automatically when deploying the database Pod.
Step 1: Configure kubectl¶
Warning
You must complete this step before running any kubectl commands. Without proper configuration, you'll get authentication errors like User "system:unauthenticated".
1. Log in to Anvil Composable:
Visit composable.anvil.rcac.purdue.edu and log in with your Purdue credentials.
2. Download your kubeconfig:
- Once logged in, click on Anvil (AVL) in the left navigation bar
- In the top right corner, you will see an icon that looks like a blacked-out piece of paper with a corner folded over. When you hover over it, it should say 'Download KubeConfig'
- Select "Download kubeconfig" (or navigate to the Kubernetes section)
- Save the file to ~/.kube/anvil.yaml (create the ~/.kube folder if it doesn't exist)
3. Set the KUBECONFIG environment variable:
In your terminal, run:
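```shell
export KUBECONFIG=~/.kube/anvil.yaml
kubectl get namespaces    # sanity check: should list cluster namespaces
```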
Note
This sets the config for your current terminal session only. If you open a new terminal, you'll need to run this command again. Alternatively, add it to your ~/.bashrc or ~/.zshrc to make it permanent.
Success
If you see cluster information and namespaces listed, you're ready to deploy. If you get an "unauthenticated" error, double-check that:
- The KUBECONFIG variable is set: echo $KUBECONFIG
- The file exists at ~/.kube/anvil.yaml
- Your token hasn't expired (try downloading a fresh kubeconfig)
Step 2: Create Namespace¶
For this part, due to permissions, we will need to create our namespace using the Anvil Composable Rancher interface. Rancher is a GUI for Kubernetes and other functionality.
1. Create the namespace in Rancher:
Visit Anvil Rancher and click Cluster > Projects/Namespaces. Create a namespace with:
- Project = Select your project from the dropdown (IMPORTANT: Do not leave this blank! You need to assign the namespace to a project to have deployment permissions)
- Your project should be the same as the one you use for research computing jobs with SLURM
- Name = <your-username>-tutorial (e.g., wintermutant-tutorial)
- CPU reservation = 1000 mCPUs
- CPU limit = 1000 mCPUs
- Memory reservation = 128 MiB
- Memory limit = 128 MiB
Selecting a Project
If you don't select a project, you will be able to create the namespace but you won't have permissions to deploy anything to it. If you get "Forbidden" errors when running kubectl apply, delete the namespace and recreate it with a project selected.
2. Switch to your Anvil kubeconfig (if you were using minikube for local development):
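```shell
export KUBECONFIG=~/.kube/anvil.yaml
```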
3. Set your namespace as the default so you don't need -n <namespace> on every command:
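```shell
kubectl config set-context --current --namespace=<your-username>-tutorial
```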
4. Verify you have access:
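```shell
kubectl get pods    # "No resources found" is fine; a Forbidden error is not
```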
Default Namespace
The manifest files do not specify a namespace, so they will deploy to whatever namespace you have set as your current context. If you get an error while running this command, it means your kube config is not pointing to your namespace + project correctly and you have permission issues. Please try to go back and figure out the issue or contact Anvil Support.
Step 3: Deploy the Database¶
Note
Since you set your default namespace in Step 2, you don't need the -n flag.
The commands below will deploy to your current namespace context.
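```shell
kubectl apply -f k8s/database/deployment.yaml
kubectl apply -f k8s/database/service.yaml
kubectl get pods -w    # watch until the MongoDB pod is Running (Ctrl+C to stop)
```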
Wait for the MongoDB pod to be Running before proceeding.
Step 4: Deploy the Backend¶
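```shell
kubectl apply -f k8s/backend/deployment.yaml
kubectl apply -f k8s/backend/service.yaml
kubectl get pods    # expect 2 FastAPI pods once ready
```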
Step 5: Deploy the Frontend¶
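```shell
kubectl apply -f k8s/frontend/deployment.yaml
kubectl apply -f k8s/frontend/service.yaml
```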
Step 6: Deploy the Ingress¶
In k8s/ingress-prod.yaml, change line 8 (- host: wintermutant.anvilcloud.rcac.purdue.edu) so it points to a domain of your choice that ends with .anvilcloud.rcac.purdue.edu.
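Then apply the production ingress manifest:

```shell
kubectl apply -f k8s/ingress-prod.yaml
kubectl get ingress    # wait for an external address to appear
```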
Note
If you are following along on a different cluster or service, you'll have to adjust the suffix as necessary.
Step 7: Verify All Resources¶
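```shell
kubectl get pods
kubectl get svc
kubectl get ingress
```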
Expected output:
- 1 MongoDB pod (Running)
- 2 FastAPI pods (Running)
- 2 Frontend pods (Running)
- 3 ClusterIP services
- 1 Ingress with an external address
Verification and Testing¶
Check Pod Status¶
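```shell
kubectl get pods    # every pod should show STATUS Running
```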
Success
Visit <YOUR_URL>.anvilcloud.rcac.purdue.edu and see your app live in action! Congrats, you just deployed a multiservice cloud application on your own!
Troubleshooting¶
Pods Not Starting¶
Check pod status:
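```shell
kubectl get pods
kubectl describe pod <pod-name>    # the Events section at the bottom is usually the clue
```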
Common issues:
- Image pull errors: Verify image name and registry access
- Resource limits: Check cluster has available resources
- Configuration errors: Review environment variables
Database Connection Errors¶
Check MongoDB logs:
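Assuming the database Deployment is named mongodb (check with kubectl get deploy):

```shell
kubectl logs deploy/mongodb
```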
Verify service DNS:
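One option is a throwaway busybox Pod (the Service name mongodb-svc is an assumption; use yours):

```shell
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup mongodb-svc
```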
Check MONGO_URI environment variable:
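Assuming the backend Deployment is named fastapi:

```shell
kubectl exec deploy/fastapi -- printenv MONGO_URI
```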
Ingress Not Working¶
Check ingress controller:
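On minikube, the addon's controller runs in the ingress-nginx namespace:

```shell
kubectl get pods -n ingress-nginx
```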
Verify ingress configuration:
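```shell
kubectl get ingress
kubectl describe ingress <ingress-name>
```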
Useful Debug Commands¶
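A few commands worth keeping at hand:

```shell
kubectl get events --sort-by=.lastTimestamp       # recent cluster events
kubectl logs -f deploy/<deployment-name>          # stream a deployment's logs
kubectl exec -it <pod-name> -- /bin/sh            # open a shell inside a pod
kubectl rollout restart deploy/<deployment-name>  # restart after pushing a new image
```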
Updates¶
Additional Resources¶
Questions or Issues? Check the troubleshooting section or review the Kubernetes events for your namespace.