Problem scenario
Sometimes you try to start a Docker container and encounter a problem. For example, you run:
docker start <containerID>
But you receive this: "Error response from daemon: Cannot start container <containerID>: failed to create endpoint <name> on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 80 -j DNAT --to-destination x.x.x.x:80 ! -i docker0: iptables: No chain/target/match by that name."
Solution
cd /var/lib/docker/network/files/
ls -lh > /tmp/forposterity   # record the directory listing in case you need to refer to it later
mkdir /tmp/backupdir
mv /var/lib/docker/network/files/ /tmp/backupdir   # back up Docker's network database; Docker rebuilds it on restart
systemctl restart docker   # change this command depending on your distribution and version of Linux to restart Docker services
How To Port Forward (redirect traffic destined for an IP address to a specific port)
Scenario: On a Linux server, it can be useful to redirect traffic destined for a certain IP address to a different port on the server. The listening service may be distinguished only by its designated port number; it could be a Docker container or a guest virtual machine.
Method: iptables -t nat -A PREROUTING -i eth3 -p tcp --dport 80 -j DNAT --to 91.91.91.91:81
Explanation: eth3 is the interface receiving the HTTP requests, and 80 is the port the server listens on (for packets destined for eth3). The rule redirects that traffic to 91.91.91.91 on port 81. Change the interface name (eth3), the d(estination) port value, the IP address, or the final port number as needed. Because this is an inbound rule, there are, in a sense, two destination ports (80 for listening and 81 for somewhere else on the server). For future reference, the --sport flag designates a source port in iptables commands. NAT (network address translation) can map two different IP addresses or map sockets (IP addresses bound to port numbers).
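The rule above can be sketched with its tunable parts pulled into shell variables, which makes it easier to adapt. The values below (eth3, port 80, 91.91.91.91, port 81) are just the placeholders from the example; applying the rule requires root privileges.

```shell
#!/bin/bash
# Build the DNAT rule from variables; substitute your own values.
IFACE="eth3"           # interface receiving the traffic
LISTEN_PORT="80"       # port the server listens on for this traffic
DEST_IP="91.91.91.91"  # address the traffic should be redirected to
DEST_PORT="81"         # port the redirected traffic should arrive on

RULE="iptables -t nat -A PREROUTING -i ${IFACE} -p tcp --dport ${LISTEN_PORT} -j DNAT --to ${DEST_IP}:${DEST_PORT}"
echo "${RULE}"  # print the rule for review; run it as root to apply it
```

Printing the rule first lets you review it before running it with root privileges.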
Saving a Docker Image and Using It On A Different Server
Problem Scenario
Sometimes the "docker save" command does not work as you would expect. You run it on a Docker host, and the response is a surprising error: "Cowardly refusing to save to a terminal. Use the -o flag or redirect." You may even have tried the -o flag or the --output=/tmp/destinationFileName.tar option without success. You want to copy a Docker image from one machine to another, but problems get in your way. How do you solve this so you can transfer a flat file to another server and use the Docker image there? In other words, how do you do one of the most basic tasks with Docker: bringing a copy of a Docker image to another machine (another virtual server, a different Docker host)?
Solution
Prerequisites
You must have Docker installed. If you need help installing Docker, see this posting.
Procedures
The redirect in the error is the right clue. This solution assumes you have Docker installed on the different server.
From the Docker host, use these three steps to copy a Docker container and place the container on another machine (server or host).
Step #1: This command should work:
docker save repositoryName:versionName > /tmp/destinationFileName.tar
Alternatively, you could try this command:
docker save ImageName > /tmp/imagename.tar
# You could find the ImageName by running "docker images" and looking at the REPOSITORY column (or the IMAGE column of "docker ps -a").
Step #2: The above command saves the image as a regular flat file named destinationFileName.tar. To use it on the destination Docker host, transfer the file to that server with scp or sftp.
Step #3: Then use this command (assuming the location of the file on the destination server is /tmp/) on the different host:
docker load < /tmp/destinationFileName.tar
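The three steps above can be summarized in one sketch. The image name "myrepo:1.0" and the destination "user@otherhost" are placeholders (assumptions, not values from this article), and the commands are echoed rather than executed so the sequence can be reviewed first.

```shell
#!/bin/bash
# Placeholder values; substitute your own image name, path, and remote host.
IMAGE="myrepo:1.0"
TARBALL="/tmp/destinationFileName.tar"
REMOTE="user@otherhost:/tmp/"

# Echoed rather than executed; remove the echo prefixes to run for real.
echo "docker save ${IMAGE} -o ${TARBALL}"  # step 1: save the image to a flat file
echo "scp ${TARBALL} ${REMOTE}"            # step 2: transfer the file
echo "docker load -i ${TARBALL}"           # step 3: run this on the destination host
```

The -o and -i flags are equivalent to the redirects shown above.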
How To Import A Copy of An Existing GitLab project
Problem scenario: You want to copy a GitLab project from one instance of GitLab to a new instance of GitLab. The Git repository you want to copy to a new GitLab server is not presented via the git://, http://, or https:// protocols.
Prerequisites: You have root access to the back ends of both the source and destination GitLab servers.
Method of Solution:
1) Go to /var/opt/gitlab/git-data/repositories/root/<nameOfProject>.git
2) Copy it to a staging area of the destination server.
3) Make sure every user has logged off the destination GitLab server. This avoids the confusion of shutting things down while someone is trying to check in code.
4) Log into the web UI of the GitLab instance that is the destination of this copy task. Create a new project with the name <nameOfProject> (with no .git extension).
5) Go to the back end of the GitLab server that is the destination of this copy task. As root run this command:
rm -rf /var/opt/gitlab/git-data/repositories/root/<nameOfProject>.git
6) Copy the directory from step 2 to /var/opt/gitlab/git-data/repositories/root/
7) Run this command: chown -R git:git /var/opt/gitlab/git-data/repositories/root/<nameOfProject>.git
8) Stop the GitLab service, then start it again (e.g., with gitlab-ctl restart).
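Steps 5 through 8 on the destination server can be sketched as follows. "nameOfProject" and /tmp/staging are placeholders, the repository root is the default omnibus location, and gitlab-ctl restart applies to omnibus installations; the commands are echoed so they can be reviewed before running them as root.

```shell
#!/bin/bash
# Placeholder values; substitute your own project name and staging path.
PROJECT="nameOfProject"
REPO_ROOT="/var/opt/gitlab/git-data/repositories/root"
STAGING="/tmp/staging"

# Echoed rather than executed; remove the echo prefixes to run as root.
echo "rm -rf ${REPO_ROOT}/${PROJECT}.git"              # step 5: remove the empty project
echo "cp -r ${STAGING}/${PROJECT}.git ${REPO_ROOT}/"   # step 6: copy in the source repository
echo "chown -R git:git ${REPO_ROOT}/${PROJECT}.git"    # step 7: fix ownership
echo "gitlab-ctl restart"                              # step 8: restart GitLab
```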
How To Enter the Web UI of Gitlab CE without Setting Up Its Backend Email
Problem scenario: When you bring up the web UI for GitLab CE (Community Edition) for the first time, you are prompted to enter a new password twice. This password is for the admin@example.com username. If someone else set it up and failed to provide you with the credentials, and the back-end email has not been configured, follow these directions.
Prerequisite: You must have root access.
Solution: As root, enter an interactive console (another set of prompts) and change the default user's password:
1) gitlab-rails console production
2) user = User.where(id: 1).first
3) user.password = 'ciNewPassword'
4) user.password_confirmation = 'ciNewPassword'
5) user.save!
#Remember to change ciNewPassword in steps 3 and 4 to the new password you want.
Mostly taken from:
http://doc.gitlab.com/ce/security/reset_root_password.html
Having Two Docker Containers Share A Directory on the Host
Goal: You want two Docker containers to use the same file share on the Docker server from time to time.
Problem/Error blocking goal: In a Docker container, when you try to change directories into a directory that resides on the Docker host (e.g., the container was created with the --volume flag), you get the error "bash: cd <directoryname> permission denied." A variation of the problem is that you can cd into the directory, but listing the files yields "cannot open directory: Permission denied." The same solution applies to both.
Solution: From the Docker server, run these commands:
su -c "setenforce 0"   # puts SELinux into permissive mode; this reduces security, so confirm it is acceptable in your environment
chcon -Rt svirt_sandbox_file_t </path/to/directoryname>
#where </path/to/directoryname> is the full path of the directory and its name
On newer versions of Docker, appending :z or :Z to the --volume argument when the container is created relabels the directory automatically and can make these commands unnecessary.
If you are running CentOS/RHEL/Fedora, you can avoid this problem after a reboot by doing the following on the Docker server itself:
1) Create /etc/profile.d/custom.sh
2) Provide these three lines as its contents:
#!/bin/bash
su -c "setenforce 0"
chcon -Rt svirt_sandbox_file_t </path/to/directoryname>
#where </path/to/directoryname> is the full path of the directory and its name
How To Fix a Docker Container That Is Giving An Error ‘exec “bash”: executable file not found in $PATH’
Normally you can enter a Docker container with this command:
docker exec -it <containerID> bash
It is possible you receive an error that says 'exec "bash": executable file not found in $PATH'
First, check whether the image simply lacks bash (many minimal images do); in that case, docker exec -it <containerID> sh should work. Otherwise, the root cause could be insufficient disk space on the Docker host. Delete files or otherwise make room (e.g., add disk space). Then stop the container with this command: docker stop <containerID>
Finally, restart the container with this command: docker start <containerID>
Now you should be able to enter the container.
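To check whether low disk space is the culprit, a small sketch like the following flags nearly full filesystems on the Docker host. The 90% threshold is an arbitrary assumption; adjust it to taste.

```shell
#!/bin/bash
# Print any filesystem whose usage meets or exceeds the threshold.
THRESHOLD=90
df -P | awk -v t="$THRESHOLD" 'NR > 1 { use = $5 + 0; if (use >= t) print $6 " is " $5 " full" }'
```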
How Do You Solve a Yum Install/Update Error That Says “Failed to connect to … connection refused”?
Problem: When trying to use yum commands with a repository on your network, you receive "Failed to connect to x.x.x.x:80; Connection refused" (where x.x.x.x is the IP address of the yum repository server). What are some possible solutions to this problem?
Potential solutions:
1) Verify the Linux server with the repository has a valid repository created. Use "createrepo <directoryToRPMfiles>"
2) Verify Apache's httpd.conf file has a DocumentRoot value for the directory that was used with the createrepo command above. This command can help you find it:
find / -name httpd.conf 2>/dev/null | xargs grep DocumentRoot
3) Verify that Apache is running: ps -ef | grep -E 'httpd|apache'
These commands may help you start apache depending on your distribution of Linux:
a) systemctl start httpd.service
b) service apache2 restart
c) apachectl start
4) Verify the firewall is not on. Use ps -ef | grep firewalld to see if it is running. If you are allowed to disable the firewall, turn it off with "systemctl stop firewalld" (or "service firewalld stop" on older systems). Only do this if you know you have enough other security measures in place; turning off a firewall can be a bad idea, so be sure other parties involved approve of this change.
5) Use ping to establish basic connectivity. Keep in mind that ping does not test port 80 (or any TCP port). You could use "nmap -p 80 x.x.x.x" (where x.x.x.x is the IP address of the server with the yum repository). If an intermediate firewall blocks port 80, that may explain the problem. traceroute is another Linux utility that may be of some value here.
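If nmap is not installed, bash's built-in /dev/tcp pseudo-device can stand in for a basic TCP port check. This is a sketch, not part of the original article; x.x.x.x remains the placeholder for your repository server's address.

```shell
#!/bin/bash
# Return success only if a TCP connection to host:port can be opened.
check_port() {
  local host="$1" port="$2"
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

if check_port "x.x.x.x" 80; then
  echo "port 80 reachable"
else
  echo "port 80 unreachable (refused, filtered, or host down)"
fi
```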
How To Have The Apache Service Automatically Start When Its Docker Container Is Started
Goal: To have Apache start when a Docker container starts running, do the following (for Docker containers based on RHEL/CentOS/Fedora).
Prerequisite: The Docker container was created with the "-p 80:80" flag. Other port numbers besides 80 will work, the host and container ports do not have to match, and any number between 1 and 65535 is valid. If the Docker host's TCP ports are not bound to the container's, the Apache service will not start automatically even if the steps below are followed.
Method:
1. cd /etc/profile.d/
2. vi custom.sh
3. Enter this text and save the changes:
#!/bin/bash
apachectl start
4. If you create an image from a container prepared with the steps above, Apache will not start the first time a new container based on that image runs. For Apache to start automatically thereafter, do one of two things:
i) log into the container (e.g., docker exec -it <containerID> bash)
ii) stop and then start the container as two separate commands (not chained with a semicolon): run docker stop <containerID>, wait for it to complete, then run docker start <containerID>.
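This is not the author's method, but if you control the image build, another common approach is to run Apache in the foreground as the container's main process, so it starts with every container and no profile script is needed. The base image and package name below are assumptions; adjust them for your distribution.

```dockerfile
# Hypothetical sketch; centos:7 and httpd are assumptions.
FROM centos:7
RUN yum -y install httpd
EXPOSE 80
# Run Apache in the foreground so it is the container's main process
# and starts whenever the container starts.
CMD ["httpd", "-D", "FOREGROUND"]
```

A container built from such an image would still be started with a port binding, e.g., docker run -d -p 80:80 <imageName>.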
How A Docker Container and Its Server Can Share a Directory
Create the Docker Container with this:
docker run -i -v /absolute/path/on/DockerHost:/absolute/path/in/Docker/Container -t repositoryName /bin/bash
where "/absolute/path/on/DockerHost" is the path and directory on the Docker server that will be presented to the Docker container in the location of "/absolute/path/in/Docker/Container."
The "repositoryName" in the example is found by running this command: docker images
In the results of that command, the left-hand (REPOSITORY) column lists the potential repositoryName values.
Be advised that while the volume is mounted, the contents of /absolute/path/in/Docker/Container inside the container are hidden by the contents of the host directory; anything at that path in the image is masked rather than destroyed.