How To Have The Apache Service Automatically Start When Its Docker Container Is Started

Goal:  To have Apache start automatically when a Docker container starts running (for containers based on RHEL/CentOS/Fedora), do the following.

Prerequisite:  The Docker container was created with the "-p 80:80" flag.  Port numbers other than 80 will also work, the host and container ports do not have to match, and each can be any number between 1 and 65535.  If none of the Docker host's TCP ports are bound to the container, the Apache service will not start automatically even if the steps below are followed.

Method:
1. cd /etc/profile.d/
2.  vi custom.sh
3.  Enter this text and save the changes:
#!/bin/bash
apachectl start

4.  If you create an image from a container with the three steps above, Apache will not start the first time a new container based on that image runs.  You have to do one of two things for Apache to start automatically thereafter:
     i) log into the container (e.g., docker exec -it <containerID> bash)
     ii) stop and start the container, but not as a single chained command with a semicolon.  Run docker stop <containerID>, wait for it to finish, and then run docker start <containerID>.
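The Method steps above amount to dropping a two-line script into /etc/profile.d/.  A minimal sketch, written to /tmp here so it can run without root (inside the real container the path would be /etc/profile.d/custom.sh):

```shell
# Sketch: create the login-time startup script.  Written to /tmp for
# demonstration; inside the container the path would be /etc/profile.d/custom.sh.
cat > /tmp/custom.sh <<'EOF'
#!/bin/bash
apachectl start
EOF
chmod 644 /tmp/custom.sh
cat /tmp/custom.sh
```

Because scripts in /etc/profile.d/ run at login, this is why step 4's option (i), logging in, triggers the Apache start.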

How A Docker Container and Its Server Can Share a Directory

Create the Docker Container with this:

docker run -i -v /absolute/path/on/DockerHost:/absolute/path/in/Docker/Container -t repositoryName /bin/bash

where "/absolute/path/on/DockerHost" is the path and directory on the Docker server that will be presented to the Docker container in the location of "/absolute/path/in/Docker/Container."

The "repositoryName" in the example is found by running this command: docker images
The left-hand (REPOSITORY) column of the output lists the potential repositoryName values.

Be advised that any existing contents of /absolute/path/in/Docker/Container will be hidden (shadowed by the mount) while the container runs; the container will see the host directory's contents instead.

How To Add a Disk To a RHEL/CentOS/Fedora Server

Run these commands (with sudo in front of them or, less preferably, as the root user):
1) fdisk -l > /tmp/saveOfOutput   # This creates a baseline for step #3 below.

2)  With a physical server:  turn off the server, connect the physical disk, and turn the server back on.
With a virtual server:  add the disk in vSphere (or your hypervisor's equivalent).

3)  As root, issue these commands:
fdisk -l #find added device in the output of this command
#If it is not seen, reboot.  You can compare the output with the /tmp/saveOfOutput file above.
# In this example, we'll assume that the newly added disk is called /dev/sdc

4)  fdisk -l /dev/sdc
5)  pvcreate /dev/sdc
6)  vgcreate vgpoolci /dev/sdc    #Replace "vgpoolci" with an arbitrary name throughout these directions.
7)  lvcreate -L 229.8G -n lvci vgpoolci   
#229.8G should be replaced with the desired size of the logical volume. 
#Replace "lvci" with an arbitrary name of a logical volume throughout these directions.  
8)  mkfs -t ext3 /dev/vgpoolci/lvci
9)  mkdir /newdirname     # Replace newdirname with the desired name of a new directory throughout these directions. 
10)  mount -t ext3 /dev/vgpoolci/lvci /newdirname

To have the file system mount every time the server boots, you can modify /etc/fstab.  However, mistakes in this file can prevent the system from booting again; to recover the system, you may need to log into maintenance mode.  If the file system is read-only, please see this link.  If you want to be conservative and avoid modifying /etc/fstab, do these steps instead (to have the new disk and file system mount automatically):
i. cd /etc/profile.d/
ii.  vi custom.sh
iii.  Enter these two lines of text and save the changes (change "/dev/vgpoolci/lvci" and "/newdirname" to what you used in steps #8 and #9):

#!/bin/bash
mount -t ext3 /dev/vgpoolci/lvci /newdirname
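If you do choose the /etc/fstab route instead, the line would look like this (using the example names from the steps above; the last two fields are the dump flag and the fsck pass order):

```
/dev/vgpoolci/lvci   /newdirname   ext3   defaults   0 0
```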

Concerns About Creating a Docker Container With Optional Flags

Some online Docker literature suggests creating a new Docker container (e.g., the "docker run" command) with these two options:

--net=host --privileged=true

There are some caveats with these flags.  First, if you use them, within your container you can make changes to the Docker server itself.*  For some applications, this defeats Docker's purpose.  Secondly, if the application you run in Docker becomes compromised, the entire host could be vulnerable to an attack through the Docker container.*  For theoretical testing or unimportant servers protected by a firewall behind intrusion detection systems, these flags are probably acceptable to use.

Docker 1.10 resolved several networking bugs.  It may be advisable to upgrade to this version or higher; doing so may obviate the desire to create new Docker containers with the two flags above.

* Taken from page 82 of Docker Cookbook written by Sebastien Goasguen.

Five Steps To Creating a Yum Repository Available on Your Network

Q.  How do you create a yum repository server on a network?
Answer:

1.  Put the .rpm files into a directory (e.g., /mnt/coolrepo).
2.  Enter this command: createrepo /mnt/coolrepo
3. 
     a) Install Apache. 
     b)  Configure the httpd.conf file to make the /mnt/coolrepo directory presentable over the network. 
           cd /; find . -name httpd.conf
           vi httpd.conf
           Edit the DocumentRoot stanza to have the value be /mnt/coolrepo
           Save the changes
      c)  apachectl start
      d)  Configure the firewall so it will allow clients to connect.  If you are allowed to, you may want to turn off the firewall.
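For step b), a minimal sketch of the relevant httpd.conf stanzas (assuming Apache 2.4; on Apache 2.2 the access lines would be "Order allow,deny" and "Allow from all" instead of "Require all granted"):

```
DocumentRoot "/mnt/coolrepo"
<Directory "/mnt/coolrepo">
    Options Indexes FollowSymLinks
    Require all granted
</Directory>
```

The Directory block is needed because yum clients must be able to fetch the repodata/ subdirectory that createrepo generated.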

4.  Go to a server that will be a client of this repo server.  Create a new file called:

/etc/yum.repos.d/new.repo

Make sure it has these stanzas (yum warns if a repo has no "name" value, so one is included):
[coolrepo]
name=coolrepo
baseurl=http://IPaddress/coolrepo   # *
gpgcheck=0  # **
enabled=1

* Replace "IPaddress" with the IP address of the Yum repo server.  Instead of http://, the protocol could also be a file:///, https://, or ftp:// scheme.
** Use this option with care.  Setting gpgcheck=0 in the file above allows the repo to work without package signatures on the client machines.  If the configuration is for a one-time download and installation, or if the repository is a proof of concept in a development or QA environment, it is probably acceptable.  For security purposes, you may want to keep GPG checks enabled: if someone spoofed your Yum repository server, the client (or consumer) servers configured this way could have malware installed.  A benign-looking rpm package name could furtively be spyware installed during the course of normal system administration operations.

5.  On the client, issue this command to test it: yum install nameOfRPMfile

Linux (RedHat distribution) Administration Tips

#1  When updating firewalld with the firewall-cmd command, remember that a response of "success" does not mean the change has taken effect; permanent changes, in particular, are not applied to the running firewall.  There are three ways to apply them: reboot the server; run systemctl stop firewalld followed by systemctl start firewalld; or run firewall-cmd --reload.

#2  When trying to install an rpm package (e.g., rpm -ivh nameOfNewPackage), you can get this error:
"...existingPackageName is obsoleted by nameOfNewPackage..."

One solution to this is to uninstall the existingPackageName. 
This command can uninstall packages:
rpm -e nameOfPackage
But sometimes it won't work when yum will.  For example:
yum remove existingPackageName
The --force option with "rpm -ivh" will likely not help in this situation until the existing package is removed.  If yum remove is taking a great deal of time (e.g., because the repositories it was configured for are now unreachable, which can happen for a reason related to the very work you were doing), you may want to do these steps:
yum clean all
cd /etc/yum.repos.d/
mkdir backup
mv *.repo ./backup
Now create a .repo file in /etc/yum.repos.d/ for what you need.  The subsequent yum commands should complete relatively quickly because the unreachable mirrors and repos won't be consulted.  Remember, however, that none of the previously configured repos will be accessible after you do these steps on this server.  If you want the server to be the same after you remove the package, move the .repo files out of the /etc/yum.repos.d/backup directory and back into /etc/yum.repos.d/.
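The park-and-restore dance can be sketched as follows, using a scratch directory so it runs without root (substitute /etc/yum.repos.d/ on a real server):

```shell
# Demonstration in a scratch directory; on a real server these commands
# would be run as root against /etc/yum.repos.d/.
DEMO=/tmp/yum.repos.d.demo
mkdir -p "$DEMO/backup"
touch "$DEMO/old1.repo" "$DEMO/old2.repo"
mv "$DEMO"/*.repo "$DEMO/backup/"   # park the existing repo files
ls "$DEMO/backup"
# ...later, to restore the original configuration:
mv "$DEMO/backup"/*.repo "$DEMO/"
ls "$DEMO"
```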

Ansible Can Push Down Files and/or Changes To New Servers With Little Initial Configuration

Some critics say that Ansible does not do enough to warrant its deployment in an enterprise.  Yet the initial deployment to the managed nodes requires less than what Puppet and Chef require; even minionless deployments of SaltStack require more configuration work than Ansible.  In this post, we want to demonstrate an advantage of adopting Ansible related to the first deployment.  When using passwordless SSH authentication, the great benefit is the lack of a prompt.  But experienced I.T. professionals know that there is an initial prompt -- not for a password, but to confirm the fingerprint of the remote host.  This prompt can be an ECDSA or RSA prompt.  When a user enters yes, the hostname or IP address and the remote host's public key are inserted into a known_hosts file (in /root/.ssh/ on CentOS/RHEL when connecting as root).  Thereafter, authentication to the remote server happens without prompts (for both passwords and for server "fingerprints").

This initial prompt may seem small, but for the types of tasks Ansible completes (e.g., pushing configuration changes down to hundreds of servers), it can be burdensome.  To leverage Ansible without being prompted when running a playbook against servers that have never had an SSH session from the Ansible server to the managed node, use these steps on the Ansible server:

cd /
find . -name ansible.cfg
   #this way you will find the example template

cd /path/To/The/FileAbove/
cp -i ansible.cfg /etc/ansible/ansible.cfg
vi /etc/ansible/ansible.cfg

#find the [defaults] section, make sure this stanza appears and is not commented out:
host_key_checking = False
#save the file

Now playbooks can run against managed nodes without ever being prompted for an interactive "yes."  This configuration file is convenient because, for non-Ansible tasks, the /etc/ssh/ssh_config file does not need to be modified: StrictHostKeyChecking can still be enabled for other SSH sessions.  Beware that the known_hosts file will still be updated with the servers that Ansible's playbooks affect (so this purely-Ansible file does have broader implications for the server).  This known_hosts file can be deleted for security reasons.  For experienced bash and Python users, Ansible's learning curve is not steep.
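As a side note, the same setting can be applied to a single shell session through an environment variable, which Ansible honors without any edit to ansible.cfg:

```shell
# Disable SSH host key checking for Ansible runs in this shell session only;
# equivalent to "host_key_checking = False" in ansible.cfg.
export ANSIBLE_HOST_KEY_CHECKING=False
echo "$ANSIBLE_HOST_KEY_CHECKING"
```

This can be handy for a one-off playbook run when you do not want to change the server-wide Ansible configuration.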

How Do You Copy Files into a Docker Container from the Server’s Command Line?

Docker is itself a dependency resolution tool.  It allows a DevOps engineer to prepare, one time, a containerized OS environment with nuanced dependencies and configurations for other packages to be installed.

Leveraging the efficiency of a configuration management tool (such as Ansible, CFEngine, Chef, Puppet, or SaltStack) can empower DevOps engineering.  It can also necessitate duplicative deployments in different environments (development, quality assurance, staging, and production), and having a backup plan for disaster recovery is also important.  The Docker container may not have everything that the host OS has, so some dependencies may need to be installed for the CM tool to work.  You may need to place two important files in /usr/bin/: ssh and sftp.  Ansible requires both of these binaries, and the host OS may be an acceptable source for them.

To copy these files into Docker, do the following:

1)  Use this command to find the container ID: docker ps
2)  If the container is not running, use two commands:

docker ps -a
#then use
docker start <containerID>

3) docker cp /usr/bin/ssh <containerID>:/usr/bin/ssh
4) docker cp /usr/bin/sftp <containerID>:/usr/bin/sftp
5) Enter the Docker container with this:
docker exec -it <containerID> bash
6) Issue these commands (while inside the Docker container):
chmod 755 /usr/bin/ssh
chmod 755 /usr/bin/sftp

Now these files will exist and be executable. 

Many CM-related tools require Ruby.  To install Ruby from source, you need to have make installed in the container.  Rather than try to install it, and assuming the Docker host is the same Linux distribution as the container, use this command:  docker cp /usr/bin/make <containerID>:/usr/bin/make
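One caveat when copying binaries from the host into a container: dynamically linked programs such as ssh, sftp, and make also need their shared libraries to be present inside the container.  ldd lists those dependencies; this sketch runs it against /bin/sh as a stand-in, since the exact binary paths vary by distribution:

```shell
# List the shared libraries a binary needs; each library shown must also
# exist inside the container for the copied binary to run there.
# (/bin/sh is used as a stand-in for /usr/bin/ssh or /usr/bin/make.)
ldd /bin/sh
```

If a library is missing inside the container, docker cp can be used to copy it in as well, the same way as the binaries above.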

(If you need directions for installing Docker on any type of Linux in any public cloud, view this posting; it should probably be enough to help you.)

OpenStack Wikipedia Article: Sahara Paragraph Updated

I edited Wikipedia's OpenStack Article found here.  This is the paragraph for Sahara as I found it on 4/5/16:
"Sahara aims to provide users with simple means to provision Hadoop clusters by specifying several parameters like Hadoop version, cluster topology, nodes hardware details and a few more. After a user fills all the parameters, Sahara deploys the cluster in a few minutes. Sahara also provides means to scale an already-provisioned cluster by adding and removing worker nodes on demand."
This is what I revised it to be:
"Sahara is a component to easily and rapidly provision Hadoop clusters. Users will specify several parameters like the Hadoop version number, the cluster topology type, node flavor details (defining disk space, CPU and RAM settings), and others. After a user provides all of the parameters, Sahara deploys the cluster in a few minutes. Sahara also provides means to scale a preexisting Hadoop cluster by adding and removing worker nodes on demand."