How To Add a Disk To a RHEL/CentOS/Fedora Server

Run these commands (with sudo in front of them or, less preferably, as the root user):
1) fdisk -l > /tmp/saveOfOutput   # This creates a baseline for step #3 below.

2)  With a physical server:  power off the server, connect the physical disk, and power it back on.
With a virtual server:  add the disk in vSphere (or your hypervisor's equivalent).

3)  As root, issue this command:
fdisk -l    # find the newly added device in the output of this command
# If it is not seen, reboot.  You can compare the output with the /tmp/saveOfOutput file from step #1.
# In this example, we'll assume that the newly added disk is called /dev/sdc
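To spot the new device automatically, the baseline from step #1 can be compared against fresh fdisk output. Below is a sketch; the function name and file paths are examples, not standard tools:

```shell
# Print "Disk /dev/..." lines that appear in the current output but not
# in the baseline.  $1 = baseline file, $2 = current file.
new_devices() {
  comm -13 <(grep '^Disk /dev' "$1" | sort) <(grep '^Disk /dev' "$2" | sort)
}
# Usage (as root):
#   fdisk -l > /tmp/currentOutput
#   new_devices /tmp/saveOfOutput /tmp/currentOutput
```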

4)  fdisk -l /dev/sdc    # verify the new disk is visible
5)  pvcreate /dev/sdc
6)  vgcreate vgpoolci /dev/sdc    #Replace "vgpoolci" with an arbitrary name throughout these directions.
7)  lvcreate -L 229.8G -n lvci vgpoolci
#229.8G should be replaced with the desired size of the logical volume (or use -l 100%FREE to allocate the entire volume group).
#Replace "lvci" with an arbitrary name of a logical volume throughout these directions.
8)  mkfs -t ext3 /dev/vgpoolci/lvci
9)  mkdir /newdirname     # Replace newdirname with the desired name of a new directory throughout these directions. 
10)  mount -t ext3 /dev/vgpoolci/lvci /newdirname

To have the file system mount automatically after each reboot, you can modify /etc/fstab.  However, mistakes in this file can prevent the system from booting again, and to recover you may need to log into maintenance mode.  (If the file system is read only, please see this link.)  If you want to be conservative and avoid modifying /etc/fstab, do these steps instead to have the new file system mount automatically:
i. cd /etc/profile.d/
ii.  vi custom.sh
iii.  Enter these two lines of text and save the changes (change "/dev/vgpoolci/lvci" to what you used in step #8).  Note that scripts in /etc/profile.d/ run at each interactive login, not at boot, so the file system will not be mounted until someone logs in, and mounting normally requires root privileges:

#!/bin/bash
mountpoint -q /newdirname || mount -t ext3 /dev/vgpoolci/lvci /newdirname
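For reference, if you do choose the /etc/fstab route instead, the entry would be a single line like the following (a sketch using the example names from the steps above; the last two fields are the dump and fsck-pass numbers, and it is wise to test with "mount -a" before rebooting):

```
/dev/vgpoolci/lvci    /newdirname    ext3    defaults    0 2
```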

Concerns About Creating a Docker Container With Optional Flags

Some online Docker literature suggests creating a new Docker container (e.g., the "docker run" command) with these two options:

--net=host --privileged=true

There are some caveats with these flags.  First, if you use them, you can make changes to the Docker host itself from within the container.*  For some applications, this defeats Docker's purpose.  Secondly, if the application you run in Docker becomes compromised, the entire host could be vulnerable to an attack through the Docker container.*  For experimental testing, or for unimportant servers protected by a firewall and intrusion detection systems, these flags are probably acceptable to use.

Docker 1.10 resolved several networking bugs.  It may be advisable to upgrade to that version or higher; doing so may obviate the desire to create new Docker containers with the two flags above.

* Taken from page 82 of the Docker Cookbook by Sébastien Goasguen.

Five Steps To Creating a Yum Repository Available on Your Network

Q.  How do you create a yum repository server on a network?
Answer:

1.  Put the .rpm files into a directory (e.g., /mnt/coolrepo).
2.  Enter this command: createrepo /mnt/coolrepo
3. 
     a) Install Apache.
     b)  Configure the httpd.conf file to make the /mnt/coolrepo directory available over the network:
           find / -name httpd.conf 2>/dev/null
           vi httpd.conf
           Edit the DocumentRoot stanza to have the value be /mnt/coolrepo (on Apache 2.4 you may also need a matching <Directory /mnt/coolrepo> block containing "Require all granted")
           Save the changes
      c)  apachectl start
      d)  Configure the firewall so it will allow clients to connect.  If you are allowed to, you may want to turn off the firewall.

4.  Go to a server that will be a client of this repo server.  Create a new file called:

/etc/yum.repos.d/new.repo

Make sure it has these stanzas:
[coolrepo]
name=coolrepo
baseurl=http://IPaddress/coolrepo   # *
gpgcheck=0  # **
enabled=1

* Replace "IPaddress" with the IP address of the Yum repo server.  The URL scheme could also be file:///, https://, or ftp://.
** Use this option with care.  Setting gpgcheck=0 lets client machines install packages from this repo without verifying GPG signatures.  That is probably acceptable for a one-time download and installation, or for a proof of concept in a development or QA environment.  For security purposes, you may want to keep GPG checks enabled: if someone spoofed your Yum repository server, the client (or consumer) servers so configured could get malware installed, since a package with a benign name could furtively contain spyware and be installed during the course of normal system administration operations.

5.  On the client, issue this command to test it: yum install nameOfPackage   (use the package name, not the .rpm file name)

Linux (RedHat distribution) Administration Tips

#1  When updating firewalld with the firewall-cmd command, remember that a response of "success" does not mean the changes took effect in the running firewall (rules added with --permanent are not applied until a reload).  There are three ways of applying them: reboot the server; run systemctl restart firewalld; or run firewall-cmd --reload.

#2  When trying to install an rpm package (e.g., rpm -ivh nameOfNewPackage), you can get this error:
"...existingPackageName is obsoleted by nameOfNewPackage..."

One solution to this is to uninstall the existingPackageName. 
This command can uninstall packages:
rpm -e nameOfPackage
But sometimes it won't work when yum will.  For example:
yum remove existingPackageName
The --force option with "rpm -ivh" would likely not help in this situation until the existing package is removed.  If yum remove is taking a great deal of time (e.g., because the repositories it was configured to use are now unreachable, which can happen for reasons related to the very work you were doing), you may want to do these steps:
yum clean all
cd /etc/yum.repos.d/
mkdir backup
mv *.repo ./backup
Now create a .repo file for what you need in /etc/yum.repos.d/.  The subsequent yum commands should run relatively quickly because the unreachable mirrors and repos won't be consulted.  Remember, however, that none of the previously configured repos will be accessible after you do these steps on this server.  If you want the server to be the same after you remove the package, move the .repo files out of the /etc/yum.repos.d/backup/ directory and back into /etc/yum.repos.d/.
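The park-and-restore steps above can be sketched as two small functions. This is a sketch: the directory is a parameter so the functions can be tried safely on a scratch directory, defaulting to the real /etc/yum.repos.d.

```shell
# Move all configured .repo files into a backup subdirectory...
park_repos() {
  local repodir="${1:-/etc/yum.repos.d}"
  mkdir -p "$repodir/backup"
  mv "$repodir"/*.repo "$repodir/backup/"
}
# ...and bring them back when the work is done.
restore_repos() {
  local repodir="${1:-/etc/yum.repos.d}"
  mv "$repodir/backup/"*.repo "$repodir/"
}
```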

Ansible Can Push Down Files and/or Changes To New Servers With Little Initial Configuration

Some critics say that Ansible does not do enough to warrant its deployment in an enterprise.  Yet its initial deployment to the managed nodes requires less setup than Puppet and Chef.  Even minionless deployments of SaltStack require more configuration work than Ansible.  In this post, we want to demonstrate an advantage of adopting Ansible related to the first deployment.  When using passwordless SSH authentication, the great benefit is the lack of a prompt.  But experienced I.T. professionals know that there is an initial prompt -- not for a password, but to confirm the fingerprint of the remote host.  This prompt can be for an ECDSA or RSA key.  When a user enters yes, the hostname or IP address and the remote host's public key are inserted into a known_hosts file (in /root/.ssh/ when running as root on CentOS/RHEL).  Thereafter, authentication to the remote server proceeds without prompts (for either passwords or server "fingerprints").

This initial prompt may seem small, but for the types of tasks Ansible completes (e.g., pushing configuration changes down to hundreds of servers), it can be burdensome.  To run a playbook, without being prompted, against servers that have never had an SSH session from the Ansible server, use these steps on the Ansible server:

find / -name ansible.cfg 2>/dev/null
   #this way you will find the example template

cd /path/To/The/FileAbove/
cp -i ansible.cfg /etc/ansible/ansible.cfg
vi /etc/ansible/ansible.cfg

#find the [defaults] section, make sure this stanza appears and is not commented out:
host_key_checking = False
#save the file

Now playbooks can run against managed nodes without ever being prompted for an interactive "yes."  This configuration file is convenient because, for non-Ansible tasks, the /etc/ssh/ssh_config file does not need to be modified; StrictHostKeyChecking can still be enabled for other SSH sessions.  Beware that the known_hosts file will still be updated with the servers that Ansible's playbooks affect (so this purely Ansible-side setting does have broader implications for the server).  That known_hosts file can be deleted if security policy requires it.  For experienced bash and Python users, Ansible's learning curve is not steep.
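An alternative to editing ansible.cfg is Ansible's equivalent environment variable, which disables host key checking for a single shell session or run. A sketch (the playbook and inventory names are examples):

```shell
# Disable SSH host key prompts for Ansible runs in this shell only;
# ansible.cfg is left untouched.
export ANSIBLE_HOST_KEY_CHECKING=False
# Example run (names are placeholders):
# ansible-playbook -i /etc/ansible/hosts site.yml
```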

How Do You Copy Files into a Docker Container from the Server’s Command Line?

Docker is itself a dependency-resolution tool.  A container allows a DevOps engineer to prepare, one time, an OS environment with nuanced dependencies and configurations for other packages to be installed.

Leveraging the efficiency of a configuration management (CM) tool (such as Ansible, CFEngine, Chef, Puppet, or SaltStack) can empower DevOps engineering, but it can also mean duplicating deployments across environments (development, quality assurance, staging, and production).  Having a backup plan for disaster recovery is also important.  The Docker container may not have everything that the host OS has, so some dependencies may need to be installed before the CM tool will work.  You may need to place two important files in /usr/bin/: ssh and sftp.  Ansible requires both of these binaries.  The host OS may be an acceptable source for such files.

To copy these files into Docker, do the following:

1)  Use this command to find the container ID: docker ps
2)  If the container is not running use two commands:

docker ps -a
#then use
docker start <containerID>

3) docker cp /usr/bin/ssh <containerID>:/usr/bin/ssh
4) docker cp /usr/bin/sftp <containerID>:/usr/bin/sftp
5) Enter the Docker container with this:
docker exec -it <containerID> bash
6) Issue these commands (while inside the Docker container):
chmod 755 /usr/bin/ssh
chmod 755 /usr/bin/sftp

Now these files will exist and be executable. 
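The copy steps above can be sketched as a small loop. In this sketch the echo only prints each docker cp command for review; remove the echo to actually perform the copies. Note that copied binaries may also need their shared libraries (check with ldd /usr/bin/ssh) present inside the container.

```shell
# Print (or, without echo, run) a docker cp for each file given.
# The container ID would come from "docker ps".
copy_into_container() {
  local cid="$1"; shift
  local f
  for f in "$@"; do
    echo docker cp "$f" "$cid:$f"
  done
}
# Usage: copy_into_container <containerID> /usr/bin/ssh /usr/bin/sftp
```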

Many CM-related tools require Ruby.  To install Ruby from source, you need to have make installed in the container.  Rather than try to install it, and assuming the Docker host runs the same Linux distribution as the container image, use this command:  docker cp /usr/bin/make <containerID>:/usr/bin/make

(If you need directions for installing Docker on any type of Linux in any public cloud, view this posting; it should probably be enough to help you.)

OpenStack Wikipedia Article: Sahara Paragraph Updated

I edited Wikipedia's OpenStack Article found here.  This is the paragraph for Sahara as I found it on 4/5/16:
"Sahara aims to provide users with simple means to provision Hadoop clusters by specifying several parameters like Hadoop version, cluster topology, nodes hardware details and a few more. After a user fills all the parameters, Sahara deploys the cluster in a few minutes. Sahara also provides means to scale an already-provisioned cluster by adding and removing worker nodes on demand."
This is what I revised it to be:
"Sahara is a component to easily and rapidly provision Hadoop clusters. Users will specify several parameters like the Hadoop version number, the cluster topology type, node flavor details (defining disk space, CPU and RAM settings), and others. After a user provides all of the parameters, Sahara deploys the cluster in a few minutes. Sahara also provides means to scale a preexisting Hadoop cluster by adding and removing worker nodes on demand."

How to Handle “Failed to connect to the Docker daemon” message in Linux

To see if Docker has started, do this command:
ps -ef | grep -i docker
If that returns only the grep process itself, then Docker is not running.  Occasionally the Docker service won't start through traditional methods, but some users have found that this command will work reliably:
docker daemon &
(On newer Docker versions, the daemon binary is dockerd, so the command would be dockerd &.)  The "&" allows the next prompt to return.  This method is also explicit for new users of Docker: it prints more verbose informational messages to the console than "systemctl start docker" does.

How do you install two or more RPM packages when they depend on each other?

Question:  How do you solve circular dependency problems when installing RPMs in RedHat Linux?
Problem Scenario:  You keep trying to install different RPMs, but each one requires another package to be installed first.  By exhaustively going through the dependencies, you find a circle of dependencies.  This is sometimes called a circular (or mutually recursive) dependency.

Root cause:  Human error.

Solution:  The way to resolve circular dependencies is with a yum localinstall command listing each of the RPM packages.  For example, if packageA.rpm depends on packageB.rpm being installed, packageB.rpm depends on packageC.rpm being installed, and finally packageC.rpm depends on packageA.rpm being installed, what do you do?  Put packageA.rpm, packageB.rpm, and packageC.rpm in a local directory.  Then do this:
yum localinstall packageA.rpm packageB.rpm packageC.rpm

For people new to patching RedHat derivatives, learning to apply the different packages simultaneously solves the circular-dependency problem.  If there is still an error message and it seems impossible to solve, look closely at it.  The Requires line may mention a sub-version (e.g., a subtle .5 after a number) that is slightly higher than one of the versions you are trying to install.  Certain combinations of versioned .rpm files can be finicky, but a solution is possible.  Once you have the correct versions of the (potentially long-named) .rpm files, do the following:

Step #1:  Go to the directory where your .rpm files are.

Step #2:  Issue one of the following:

sudo yum localinstall *.rpm
sudo rpm -ivh *.rpm

Any persistent error message may be telling you something.  There may be a version incompatibility.