How to Dockerize a Python App for Your Kubernetes Cluster
“OBJECTIVE: This project shows you how to Dockerize a simple Python app and deploy it on a bare-metal Kubernetes cluster running on AWS EC2 instances.”
Tools Used: AWS EC2, shell scripts, Kubernetes, Docker and Docker Hub, YAML manifest files (Deployment and Service)
STEPS:
Step 1: Create an EC2 instance and install Kubernetes Cluster
Step 2: Create a Python App
Step 3: Dockerize the Python App
Step 4: Create Kubernetes Deployment and Service Manifests
Step 5: Deploy to Kubernetes
Step 6: Access the Flask App
Let’s get started.
Step 1: Create an EC2 instance and install Kubernetes Cluster
a. Log in to AWS Management Console: Navigate to the EC2 Dashboard.
Launch Instance: Click on “Launch Instance” and follow these steps:
- Choose an Amazon Machine Image (AMI): Select an Ubuntu Server 20.04 LTS AMI.
- Choose an Instance Type: Select a t2.medium or larger instance type for better performance.
- Configure Instance Details: Set the number of instances to 2 (one master node, one worker node), and adjust other settings as needed. Create a key pair and download it; you will use it to SSH in later.
- Add Storage: 20GB is typically sufficient, but you can increase if needed.
- Add Tags: Optionally add tags for easier management.
- Configure Security Group: Create a new security group with the following rules:
- SSH (port 22) from your IP address.
- HTTP (port 80) and HTTPS (port 443), because we plan to expose services.
- Custom TCP: the Kubernetes API server port (6443) and the NodePort range (30000-32767), since the worker will join the master on port 6443 and we will expose the app on NodePort 30007 later on.
- Review and Launch: Review your settings and launch the instance. You will need to select or create a key pair to SSH into your instance.
Optional: If you are new to EC2 instances and creating VPCs, click here for a comprehensive guide
b. Install Kubernetes on the instance.
First, SSH into each instance using the key pair you created when launching it.
We have two running instances for our Kubernetes cluster: one will be the master node and the other the worker node.
After creating your EC2 instances, make sure you have the necessary ports open.
On the “k8s-Worker” node, create a folder “kubernetes-script-automation” with one script file named “kubernetes-script.sh”
mkdir kubernetes-script-automation
cd kubernetes-script-automation
vi kubernetes-script.sh #creates a file for edit
Copy and paste the script below into the “vim” editor
#!/bin/bash
## Setup K8-Cluster using kubeadm [K8 Version-->1.28.1]
set -e
### 1. Update System Packages [On Master & Worker Node]
sudo apt-get update
### 2. Install Docker[On Master & Worker Node]
sudo apt install docker.io -y
sudo chmod 666 /var/run/docker.sock  # NOTE: a world-writable socket is convenient for a demo, but insecure for production
### 3. Install Required Dependencies for Kubernetes[On Master & Worker Node]
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
sudo mkdir -p -m 755 /etc/apt/keyrings
### 4. Add Kubernetes Repository and GPG Key[On Master & Worker Node]
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
### 5. Update Package List[On Master & Worker Node]
sudo apt update
### 6. Install Kubernetes Components[On Master & Worker Node]
sudo apt install -y kubeadm=1.28.1-1.1 kubelet=1.28.1-1.1 kubectl=1.28.1-1.1
Save and run the script on the “worker node”
chmod +x kubernetes-script.sh
./kubernetes-script.sh
On the “k8s-Master” node, create the same folder and a new script file named “masternode-script.sh”.
mkdir kubernetes-script-automation
cd kubernetes-script-automation
vi masternode-script.sh
#!/bin/bash
# Setup K8-Cluster using kubeadm [K8 Version-->1.28.1]
set -e
### 1. Update System Packages [On Master & Worker Node]
sudo apt-get update
### 2. Install Docker [On Master & Worker Node]
sudo apt install docker.io -y
sudo chmod 666 /var/run/docker.sock  # NOTE: a world-writable socket is convenient for a demo, but insecure for production
### 3. Install Required Dependencies for Kubernetes [On Master & Worker Node]
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
### 4. Add Kubernetes Repository and GPG Key [On Master & Worker Node]
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
### 5. Update Package List [On Master & Worker Node]
sudo apt update
### 6. Install Kubernetes Components [On Master & Worker Node]
sudo apt install -y kubeadm=1.28.1-1.1 kubelet=1.28.1-1.1 kubectl=1.28.1-1.1
### 7. Initialize Kubernetes Master Node [On MasterNode]
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
### 8. Configure Kubernetes Cluster [On MasterNode]
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
### 9. Deploy Networking Solution (Calico) [On MasterNode]
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
### 10. Deploy Ingress Controller (NGINX) [On MasterNode]
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.49.0/deploy/static/provider/baremetal/deploy.yaml
Run the script on the Master Node ONLY:
chmod +x masternode-script.sh
./masternode-script.sh
When the script completes on the master node, it prints a unique “kubeadm join” command. Copy and paste that command on the worker node; this attaches the worker node to the master. It will look similar to this:
sudo kubeadm join 10.1.1.93:6443 --token eoer316.seib5in1f \
--discovery-token-ca-cert-hash sha256:4850a5refvbbc286ce9e4dd4erresdaa
When you run the “kubeadm join” command on the worker node, you should see output confirming that the node has joined the cluster.
Check that the worker node has joined the cluster; on the master node, run the following command:
kubectl get nodes
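A healthy cluster shows every node with STATUS `Ready`. The snippet below sketches how you might check this from the command output; the here-string is sample output for illustration only, and on a real cluster you would pipe `kubectl get nodes` instead:

```shell
# Sample `kubectl get nodes` output (shape only, not live cluster data).
# In practice, replace the printf with the real command:
#   kubectl get nodes | tail -n +2 | awk '$2 != "Ready" {count++} END {print count+0}'
sample="NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   10m   v1.28.1
k8s-worker   Ready    <none>          2m    v1.28.1"

# Skip the header row, then count any node whose STATUS column is not "Ready"
not_ready=$(printf '%s\n' "$sample" | tail -n +2 | awk '$2 != "Ready" {count++} END {print count+0}')
echo "nodes not Ready: $not_ready"
```

If the count is not zero, give Calico a minute or two to finish rolling out before troubleshooting further.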
With Kubernetes installed and both nodes joined, we can move on to creating the Python app.
Step 2: Create a Python App
Install Python and Flask
sudo apt-get update
sudo apt-get install -y python3 python3-venv python3-pip python3-flask
Create the App
vi app.py
Paste this Python script into the editor, then save and exit with :wq
# Import Flask module
from flask import Flask

# Create a new Flask application
app = Flask(__name__)

# Define a route for the home page
@app.route("/")
def home():
    return "Hello, world!"

# Start the web server
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
Step 3: Dockerize the Python App
a. Create a Dockerfile and paste the following content inside:
vi Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.9-slim-buster
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install Flask, the app's only dependency
RUN pip install flask
# Make port 5000 available to the world outside this container
EXPOSE 5000
# Define environment variable
ENV FLASK_APP=app.py
# Run app.py when the container launches
CMD [ "python", "-m" , "flask", "run", "--host=0.0.0.0", "--port=5000"]
b. Log in with your Docker Hub credentials
docker login
c. Build the image using the following command
docker build -t your-dockerhub-username/myapp:latest .
Here, your-dockerhub-username (in my case, ugogabriel) is your Docker Hub namespace, myapp is the image (repository) name, and latest is the tag. The . at the end tells Docker to look for the Dockerfile in the current directory.
Next, confirm the built image:
docker images
d. Push the image
docker push your-dockerhub-username/myapp:latest
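The same image reference is used in three places: the build, the push, and (shortly) the Kubernetes manifest. The sketch below just shows how that reference is composed; the username is a placeholder, so substitute your own Docker Hub account:

```shell
# Compose the full image reference used by docker build, docker push,
# and the image: field in the Kubernetes Deployment manifest.
DOCKER_USER="your-dockerhub-username"  # placeholder; use your Docker Hub username
IMAGE="myapp"                          # image (repository) name
TAG="latest"                           # tag

FULL_REF="${DOCKER_USER}/${IMAGE}:${TAG}"
echo "$FULL_REF"
```

Keeping the three usages identical avoids the common "ImagePullBackOff" error caused by the manifest pointing at a name that was never pushed.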
Step 4: Create Kubernetes Deployment and Service Manifests
Create a file named deployment.yaml with the following content:
vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
      - name: flask-app
        image: ugogabriel/myapp:latest  # replace with your-dockerhub-username/myapp:latest
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  selector:
    app: flask-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
    nodePort: 30007
  type: NodePort
Step 5: Deploy to Kubernetes
Apply the Kubernetes Deployment and Service:
kubectl apply -f deployment.yaml
Confirm the deployment succeeded and the pods are running:
kubectl get deployments
kubectl get pods
kubectl get nodes -o wide
kubectl get service
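In the `kubectl get service` output, the PORT(S) column shows the mapping `80:30007/TCP`, i.e. service port 80 mapped to NodePort 30007. The sketch below shows one way to pull the NodePort out of that column; the here-string is sample output (shape only), and on the cluster you would pipe the real command instead:

```shell
# Sample `kubectl get service` output (illustrative, not live data).
sample="NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
flask-app-service   NodePort   10.96.120.15   <none>        80:30007/TCP   1m"

# PORT(S) is field 5, formatted as <port>:<nodePort>/<protocol>;
# split on ':' and '/' and take the second piece.
node_port=$(printf '%s\n' "$sample" | awk '$1 == "flask-app-service" {split($5, p, "[:/]"); print p[2]}')
echo "NodePort: $node_port"
```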
Step 6: Access the Flask App
After the Deployment and Service are created, you can access your Flask app using the IP address of any node in your Kubernetes cluster on the specified NodePort (in this example, 30007).
If you don’t know the IP addresses of your nodes, you can get them using the following command:
kubectl get nodes -o wide
Then, you can access your Flask app by navigating to http://<node-ip>:30007. In my case, this is the worker node: curl http://10.1.1.244:30007
This is a simple Hello World Python app, so the output should read Hello, world!. The same method applies to any Python app.
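Putting the access step together: the URL is just a node IP plus the NodePort from the Service manifest. The IP below is the example worker-node address used in this walkthrough, so substitute one of your own node IPs:

```shell
# Assemble the service URL from a node IP and the NodePort in deployment.yaml.
NODE_IP="10.1.1.244"   # example worker-node IP from this walkthrough; use your own
NODE_PORT=30007        # from spec.ports[0].nodePort in the Service manifest

URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"
# then:  curl "$URL"   (expected response body: Hello, world!)
```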
Conclusion
Dockerizing a Python application and deploying it on a Kubernetes cluster offers significant advantages in terms of scalability, portability, and resource efficiency. By encapsulating the application and its dependencies into Docker containers, we ensure consistent behavior across different environments, simplifying deployment and management tasks.
Throughout this project, we’ve covered essential steps:
1. Setting up an AWS EC2 instance and installing Kubernetes to orchestrate our containerized applications.
2. Creating a basic Python Flask application and Dockerizing it using a Dockerfile.
3. Defining Kubernetes Deployment and Service manifests to automate deployment and manage networking aspects.
4. Deploying our Docker image onto the Kubernetes cluster and verifying its availability.
5. Accessing the Flask application through the exposed NodePort, demonstrating how Kubernetes simplifies service discovery and access.
By following these steps, developers and DevOps engineers can efficiently leverage Docker and Kubernetes to streamline application deployment processes, whether in development, testing, or production environments. This approach not only enhances agility but also ensures robustness and scalability as applications grow and evolve.
The END
#devops #containers #docker #kubernetes #kubernetesbaremetal #baremetal #cluster #kubernetescluster #devopengineer #cloud #aws #ec2 #GabrielOkom