Normally you can enter a Docker container with this command:
docker exec -it <containerID> bash
It is possible that you will receive an error that says 'exec "bash": executable file not found in $PATH'
One possible root cause is insufficient disk space on the Docker host. Delete files or otherwise make room (e.g., add disk space). Then stop the container with this command: docker stop <containerID>
Finally, restart the container with this command: docker start <containerID>
Now you should be able to enter the container.
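Put together, the recovery sequence might look like this (a minimal sketch; the container ID is a hypothetical placeholder, and /var/lib/docker is Docker's default data directory):
df -h /var/lib/docker        # check free space where Docker stores its data
# ...delete files or add disk space as needed...
docker stop 3b5c9a1f2d4e
docker start 3b5c9a1f2d4e
docker exec -it 3b5c9a1f2d4e bash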
How Do You Solve a Yum Install/Update Error That Says “Failed to connect to … connection refused”?
Problem: When trying to use yum commands with a repository on your network, you receive "Failed to connect to x.x.x.x:80; Connection refused" (where x.x.x.x is the IP address of the yum repository server). What are some possible solutions to this problem?
Potential solutions:
1) Verify the Linux server with the repository has a valid repository created. Use "createrepo <directoryToRPMfiles>"
2) Verify Apache's httpd.conf file has a DocumentRoot value for the directory that was used with the createrepo command above. This command can help you find it:
cd /; find . -name httpd.conf 2>/dev/null | xargs grep DocumentRoot
3) Verify that Apache is running: ps -ef | grep -E 'httpd|apache'
These commands may help you start apache depending on your distribution of Linux:
a) systemctl start httpd.service
b) service apache2 restart
c) apachectl start
4) Verify the firewall is not on. Use ps -ef | grep firewalld to see if it is running. If you are allowed to disable the firewall, turn it off with "service firewalld stop" (or "systemctl stop firewalld"). Only do this if you know you have enough security measures in place for this not to cause a problem. Turning off a firewall can be a bad idea; be sure other parties involved approve of this change.
5) Use ping to establish basic connectivity. Keep in mind that ping does not test port 80 (or any other port). You could use "nmap -p 80 x.x.x.x" (where x.x.x.x is the IP address of the server with the yum repository). If an intermediate firewall blocks port 80, that may explain the problem. traceroute is another Linux utility that may be of some value here.
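A quick way to work through these checks from a client (a minimal sketch; x.x.x.x stands for the repository server's IP address):
ping -c 3 x.x.x.x              # basic reachability only; does not test port 80
nmap -p 80 x.x.x.x             # shows whether port 80 is open, closed, or filtered
curl -I http://x.x.x.x/        # confirms Apache actually answers HTTP requests
traceroute x.x.x.x             # may help locate an intermediate firewall or routing problem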
How To Have The Apache Service Automatically Start When Its Docker Container Is Started
Goal: To have Apache start when a Docker container starts running, do the following (for Docker containers based on RHEL/CentOS/Fedora).
Prerequisite: The Docker container was created with the "-p 80:80" flag. Other port numbers besides 80 will work, the host and container ports do not have to match, and any port between 1 and 65535 can be used. If none of the Docker host's TCP ports is bound to the container, the Apache service will not automatically start even if the steps below are followed.
Method:
1. cd /etc/profile.d/
2. vi custom.sh
3. Enter this text and save the changes:
#!/bin/bash
apachectl start
4. If you create an image from a container configured with the three steps above, Apache will not start automatically the first time a new container based on that image runs. You have to do one of two things for Apache to start automatically thereafter:
i) log into the container (e.g., docker exec -it <containerID> bash)
ii) stop and start the container as two separate commands (not chained with a semicolon): run docker stop <containerID>, wait for it to finish, then run docker start <containerID>.
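A consolidated sketch of that workflow (the container ID, image name, container name, and host port are hypothetical examples):
docker commit 3b5c9a1f2d4e myrepo/centos-apache      # create an image from the configured container
docker run -dit -p 8080:80 --name web1 myrepo/centos-apache /bin/bash
docker stop web1
# wait for the container to stop completely, then:
docker start web1
curl -I http://localhost:8080/       # Apache should now answer on the Docker host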
How A Docker Container and Its Server Can Share a Directory
Create the Docker Container with this:
docker run -i -v /absolute/path/on/DockerHost:/absolute/path/in/Docker/Container -t repositoryName /bin/bash
where "/absolute/path/on/DockerHost" is the path and directory on the Docker server that will be presented to the Docker container in the location of "/absolute/path/in/Docker/Container."
The "repositoryName" in the example is found by running this command: docker images
In the results of this command, the left-hand column (labeled REPOSITORY) lists the potential repositoryName values.
Be advised that any existing contents of /absolute/path/in/Docker/Container will be masked by the Docker host's directory; while the volume is mounted, the container sees only what is in the host path.
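For example (a hypothetical sketch; the paths and the image name "centos7" are assumptions):
mkdir -p /opt/shared_data                                   # on the Docker host
docker run -i -v /opt/shared_data:/data -t centos7 /bin/bash
# inside the container:
ls /data                # shows the Docker host's /opt/shared_data contents
touch /data/test.txt    # the file also appears on the Docker host as /opt/shared_data/test.txt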
How To Add a Disk To a RHEL/CentOS/Fedora Server
Run these commands (with sudo in front of them or, less preferably, as the root user):
1) fdisk -l > /tmp/saveOfOutput
# This creates a baseline for step #3 below.
2) With a physical server: Turn off the server, connect the physical disk, and turn it back on.
With a virtual server: Add the disk in vSphere
3) As root, issue this command: fdisk -l
#Find the newly added device in the output of this command.
#If it is not seen, reboot. You can compare the output with the /tmp/saveOfOutput file created above.
# In this example, we'll assume that the newly added disk is called /dev/sdc
4) fdisk -l /dev/sdc
5) pvcreate /dev/sdc
6) vgcreate vgpoolci /dev/sdc
#Replace "vgpoolci" with an arbitrary name throughout these directions.
7) lvcreate -L 229.8G -n lvci vgpoolci
#229.8G should be replaced with the desired size of the logical volume.
#Replace "lvci" with an arbitrary name of a logical volume throughout these directions.
8) mkfs -t ext3 /dev/vgpoolci/lvci
9) mkdir /newdirname
# Replace newdirname with the desired name of a new directory throughout these directions.
10) mount -t ext3 /dev/vgpoolci/lvci /newdirname
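To confirm the volume group, logical volume, and mount came up as expected, these commands may help (a minimal sketch using the names from the steps above):
vgs vgpoolci              # shows the volume group and its free space
lvs vgpoolci              # shows the lvci logical volume and its size
df -h /newdirname         # shows the mounted file system and available space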
To have the file system mount every time the server boots, you can modify /etc/fstab (a sample entry appears after the steps below). However, mistakes in this file can prevent the system from booting again; to recover, you may need to log into maintenance mode, and if the file system there is read-only, it must be remounted read-write before /etc/fstab can be corrected. If you want to be conservative and avoid modifying /etc/fstab, do these steps instead (to have the new disk and file system mount automatically):
i. cd /etc/profile.d/
ii. vi custom.sh
iii. Enter these two lines of text and save the changes (change "/dev/vgpoolci/lvci" and "/newdirname" to whatever you used in steps #8 through #10):
#!/bin/bash
mount -t ext3 /dev/vgpoolci/lvci /newdirname
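If you do decide to use /etc/fstab instead, a sample entry might look like this (a sketch assuming the device, file system type, and mount point from the steps above; test it with "mount -a" before rebooting):
/dev/vgpoolci/lvci   /newdirname   ext3   defaults   0 2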
Concerns About Creating a Docker Container With Optional Flags
Some online Docker literature suggests creating a new Docker container (e.g., the "docker run" command) with these two options:
--net=host --privileged=true
There are some caveats with these flags. First, if you use them, you can make changes to the Docker server itself from within the container.* For some applications, this defeats Docker's purpose. Second, if the application you run in Docker becomes compromised, the entire host could be vulnerable to attack through the Docker container.* For theoretical testing, or for unimportant servers protected by a firewall and intrusion detection systems, these flags are probably acceptable to use.
Docker 1.10 resolved several networking bugs. It may be advisable to upgrade to that version or higher; doing so may obviate the desire to create new Docker containers with the two flags above.
* Taken from page 82 of Docker Cookbook written by Sebastien Goasguen.
Five Steps To Creating a Yum Repository Available on Your Network
Q. How do you create a yum repository server on a network?
Answer:
1. Put the .rpm files into a directory (e.g., /mnt/coolrepo).
2. Enter this command: createrepo /mnt/coolrepo
3.
a) Install Apache.
b) Configure the httpd.conf file so the /mnt/coolrepo directory is presented over the network.
cd /; find . -name httpd.conf
vi httpd.conf
Edit the DocumentRoot stanza to have the value be /mnt/coolrepo
Save the changes
c) apachectl start
d) Configure the firewall so it will allow clients to connect (see the firewalld sketch after step 5 for opening port 80 instead of disabling the firewall). If you are allowed to, you may simply turn off the firewall.
4. Go to a server that will be a client of this repo server. Create a new file called:
/etc/yum.repos.d/new.repo
Make sure it has these stanzas (the "# *" and "# **" markers are footnote references for this post, not part of the file):
[coolrepo]
baseurl=http://IPaddress/coolrepo # *
gpgcheck=0 # **
enabled=1
* Replace "IPaddress" with the IP address of the Yum repo server. The protocol could be a file:///, https://, or an ftp:// constructor.
** use this option with care. If gpgchecks are disabled on your system, this will allow the repo to work without signatures on client machines (e.g., in the file above). If the configuration is for a one-time download and installation or if the repository is for a proof of concept in a development or QA environment, it is probably acceptable. For security purposes, you may want to keep it GPGchecks enabled. If someone spoofed your Yum repository server, the client (or consumer) servers so configured could get malware installed. Benign rpm package names could furtively be spyware and installed during the course of normal system administration operations.
5. On the client, issue this command to test it: yum install nameOfPackage (where nameOfPackage is one of the packages in the repository).
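Regarding step 3d, if the repo server uses firewalld and you prefer not to disable it, opening the HTTP port may be enough (a minimal sketch; adjust the service or port if Apache listens somewhere other than port 80):
firewall-cmd --permanent --add-service=http
firewall-cmd --reload
firewall-cmd --list-services      # verify that http now appears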
How To Find the IP Address of a Docker Container
First, find the Docker container ID. Issue this:
docker ps -l
The results of the above command should provide the container ID. Next issue this:
docker inspect <containerID>
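To extract just the IP address, a --format filter may help (a minimal sketch; the template path assumes the container is on the default bridge network):
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <containerID>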
For greater detail, see this posting.
Linux (RedHat distribution) Administration Tips
#1 When updating firewalld with the firewall-cmd command, remember that a response of "success" does not mean the change has taken effect. You still have to make the running service pick it up. There are three ways of doing this: reboot the server; run systemctl stop firewalld followed by systemctl start firewalld; or run firewall-cmd --reload.
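For example, a permanent change followed by a reload and a verification might look like this (a sketch; port 8080 is a hypothetical example):
firewall-cmd --permanent --add-port=8080/tcp     # returns "success" but is not yet active
firewall-cmd --reload                            # now the change takes effect
firewall-cmd --list-ports                        # verify that 8080/tcp appears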
#2 When trying to install an rpm package (e.g., rpm -ivh nameOfNewPackage), you can get this error:
"...existingPackageName is obsoleted by nameOfNewPackage..."
One solution to this is to uninstall the existingPackageName.
This command can uninstall packages:
rpm -e nameOfPackage
But sometimes it won't work when yum will. For example:
yum remove existingPackageName
The --force option with "rpm -ivh" would likely not help in this situation until the existing package is removed. If yum remove takes a great deal of time (e.g., because the repositories it is configured for are now unreachable, which can happen for a reason related to the very work you were doing), you may want to do these steps:
yum clean all
cd /etc/yum.repos.d/
mkdir backup
mv *.repo ./backup
Now create a .repo file for the repository you need in /etc/yum.repos.d/ (a minimal example appears below). The subsequent yum commands should run relatively quickly because the unreachable mirrors and other repos won't be consulted. Remember, however, that the originally configured repos won't be accessible after you do these steps on this server. If you want the server to be the same after you remove the package, move the .repo files out of the /etc/yum.repos.d/backup directory and back into /etc/yum.repos.d/.
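A minimal temporary .repo file might look like this (a sketch; the repo ID, file name, and baseurl are hypothetical placeholders):
# /etc/yum.repos.d/temp.repo
[temprepo]
name=temprepo
baseurl=http://IPaddress/coolrepo
gpgcheck=0
enabled=1
With only this repo in place, "yum remove existingPackageName" should complete much faster because yum no longer waits on unreachable repositories.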
Ansible Can Push Down Files and/or Changes To New Servers With Little Initial Configuration
Some critics say that Ansible does not do enough to warrant its deployment in an enterprise. Still, the initial setup it requires on the managed nodes is less than what Puppet and Chef require; even minionless deployments of SaltStack require more configuration work than Ansible. In this post, we want to demonstrate an advantage of adopting Ansible related to that first deployment. When using passwordless SSH authentication, the great benefit is the lack of a prompt. But experienced I.T. professionals know that there is an initial prompt -- not for a password, but to confirm the fingerprint of the remote host. This prompt can be an ECDSA or RSA prompt. When a user enters yes, the hostname or IP address and the remote host's public key are inserted into a known_hosts file (in /root/.ssh/ when running as root on CentOS/RHEL). Thereafter, authentication to the remote server happens without prompts (for both passwords and for server "fingerprints").
This initial prompt may seem small, but for the types of tasks that Ansible completes (e.g., pushing configuration changes down to hundreds of servers), it can be burdensome. The way to leverage Ansible without being prompted when running a playbook against servers that have never had an SSH session from the Ansible server to the managed node is to use these steps on the Ansible server:
cd /
find . -name ansible.cfg #this way you will find the example template
cd /path/To/The/FileAbove/
cp -i ansible.cfg /etc/ansible/ansible.cfg
vi /etc/ansible/ansible.cfg
#find the [defaults] section, make sure this stanza appears and is not commented out:
host_key_checking = False
#save the file
Now playbooks can run against managed nodes without ever being prompted for an interactive "yes." This configuration is convenient because, for non-Ansible tasks, the /etc/ssh/ssh_config file does not need to be modified; StrictHostKeyChecking can still be enabled for other SSH sessions. Beware that the known_hosts file will still be updated with the servers that Ansible's playbooks affect (so this pure-Ansible setting does have broader implications for the server). That known_hosts file can be deleted for security reasons. For experienced bash and Python users, Ansible's learning curve is not steep.
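To verify the setting, an ad hoc ping against a brand-new managed node should complete without a fingerprint prompt (a sketch; the group name "webservers" and the default inventory path are assumptions):
ansible webservers -i /etc/ansible/hosts -m ping
# with host_key_checking = False, no interactive "yes/no" prompt should appear for hosts not yet in known_hosts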