Problem scenario
You want to deploy your own LAMP stack with the power of Kubernetes. You do not want to rely on official Docker Hub images for the underlying Docker containers. How do you do this?
Solution
1. Deploy Kubernetes to AWS. If you need help deploying Kubernetes to AWS, see this link. If you need assistance installing kubectl on any type of Linux (CentOS/RHEL/Fedora, Debian/Ubuntu, or SUSE), see this link.
These directions were designed for AWS because full functionality relies on an Elastic Load Balancer created there. If you deploy to Azure or Google Cloud Platform with these directions, the end product will not be fully functional; Kubernetes can be fully functional in Azure or GCP, just not with these ELB-dependent directions. That said, using these directions in Azure or GCP is acceptable for basic testing of a Docker image that is not from Docker Hub. If you still want to use these directions with a custom Docker image in Azure, see this link to deploy Kubernetes in Azure. See this link to deploy Kubernetes in Google Cloud Platform. To create worker nodes in AWS (to work with EKS), see this posting.
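Before step 2, you may also want to confirm that kubectl itself is installed and on your PATH. This optional check is not part of the original directions:
kubectl version --client   # prints the kubectl client version if the binary is installed correctly
which kubectl              # shows the path of the kubectl binary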
2.a. Run this command: kubectl get cluster-info
If you see 'error: the server doesn't have a resource type "cluster-info"', do not be alarmed; proceed with these directions and go directly to step 2.b. If you see "The connection to the server localhost:8080 was refused – did you specify the right host or port?" when using AWS and EKS, see this posting.
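As an optional alternative check (not part of the original directions), the cluster-info subcommand without "get", along with a node listing, confirms that kubectl can actually reach the cluster's API server:
kubectl cluster-info        # prints the API server address and core service endpoints
kubectl get nodes -o wide   # lists the nodes that have joined the cluster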
2.b. From your server with kubectl, in a directory you can write to (e.g., cd /home/ubuntu), download these four files: googsecrets.yaml, googphp.yaml, googmysql.yaml, and googdata-loader-job.yaml.
The above files are used for creating a Kubernetes cluster. They are configured to take a Docker image from a Google repository when creating the Kubernetes cluster. (The files are modified versions of files originally taken from https://github.com/heptio/example-lamp.)
One way to obtain the files, if your Linux server has access to the internet, is to create the following script. You can call it g.sh and run it (with bash g.sh):
curl -Lk https://raw.githubusercontent.com/ContinualIntegration/kubernetes/master/googsecrets.yaml > /tmp/googsecrets.yaml
curl -Lk https://raw.githubusercontent.com/ContinualIntegration/kubernetes/master/googphp.yaml > /tmp/googphp.yaml
curl -Lk https://raw.githubusercontent.com/ContinualIntegration/kubernetes/master/googmysql.yaml > /tmp/googmysql.yaml
curl -Lk https://raw.githubusercontent.com/ContinualIntegration/kubernetes/master/googdata-loader-job.yaml > /tmp/googdata-loader-job.yaml
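Note that the curl commands above save the files to /tmp. As a brief, optional sketch (assuming you named the script g.sh as suggested), you can run the script, move into that directory so the kubectl commands in step 3 find the files by name, and confirm which images the manifests pull:
bash g.sh                   # downloads the four YAML files to /tmp
cd /tmp                     # the kubectl create commands in step 3 reference the files by name, so run them from here
ls -l goog*.yaml            # confirm all four files were downloaded
grep -n 'image:' goog*.yaml # show which container images the manifests pull (per the notes above, a Google repository rather than Docker Hub)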
3. Run these commands if you deployed Kubernetes anywhere except Google Cloud Platform:
kubectl create -f googsecrets.yaml
kubectl create -f googmysql.yaml
kubectl create -f googphp.yaml
kubectl create -f googdata-loader-job.yaml
kubectl get service googweb -o wide # See if the load balancer gets created after 25 minutes.
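If you prefer to wait for the load balancer rather than re-running the command above, a hedged alternative (using the same googweb service name) is kubectl's --watch flag:
kubectl get service googweb -o wide --watch   # streams updates until the EXTERNAL-IP column is populated; press Ctrl+C to stop
kubectl get pods -o wide                      # optionally confirm that the MySQL and PHP pods reach the Running state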
If you deployed Kubernetes to GCP, run these commands:
kubectl create -f googsecrets.yaml --validate=false
kubectl create -f googmysql.yaml --validate=false
kubectl create -f googphp.yaml --validate=false
kubectl create -f googdata-loader-job.yaml --validate=false
If you deployed this to a Kubernetes cluster in AWS, the external address will appear (e.g., about five minutes after the fourth command). If you deployed this to a Kubernetes cluster in Azure or GCP, do not expect to see an external IP address, because one will not be created. If you used Google Cloud Platform's Kubernetes Engine, you can click on "Workloads", "Discovery & load balancing", "Configuration", and "Storage" to see the results of the above commands.
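On AWS, the load balancer address is exposed as an ELB DNS hostname rather than a bare IP. A hedged one-liner to extract it (assuming the googweb service created in step 3) is shown below; once it prints a hostname, you can browse to it over HTTP to test the PHP front end (assuming the service exposes port 80, which is typical for a LAMP setup):
kubectl get service googweb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'   # prints the ELB hostname once AWS has provisioned it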