Setup internal yum repositories for CentOS and RedHat Servers Part 3

Setup storage nodes

Setup RHEL7 storage node

Set hostname (example: el7repo)
    hostnamectl set-hostname el7repo

Start apache and set to start on boot
    systemctl start httpd.service
    systemctl enable httpd.service

Create base directory structure
    mkdir -p /var/www/html/repo/Package_Diff

Create repo config files (see Setup Note for link to contents)
    touch /var/www/html/repo/CentOS-Base.repo
    touch /var/www/html/repo/Epel.repo
    touch /var/www/html/repo/Redhat.repo
    chmod 644 /var/www/html/repo/*.repo

Setup Note: Remember to copy the content from the linked appendixes into these files.
Path: /var/www/html/repo/CentOS-Base.repo (CentOS-Base.repo)
Path: /var/www/html/repo/Epel.repo (Epel.repo)
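Since the actual .repo contents live in the linked appendixes, here is a minimal sketch of what an internal CentOS-Base.repo could look like. The el7repo hostname matches the example above; the repo ID and the directory layout under /repo are assumptions for illustration, not the appendix contents.

    # Hypothetical internal mirror config; the baseurl layout under /repo is assumed
    [base]
    name=CentOS-$releasever - Base - internal mirror
    baseurl=http://el7repo/repo/centos/$releasever/os/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7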

Setup internal yum repositories for CentOS and RedHat Servers Part 2

Setup storage nodes

Setup RHEL6 storage node

Set hostname (example: el6repo)
    vi /etc/sysconfig/network

Start apache and set to start on boot
    service httpd start
    chkconfig httpd on

Create base directory structure
    mkdir -p /var/www/html/repo/Package_Diff

Create repo config files (see Setup Note for link to contents)
    touch /var/www/html/repo/CentOS-Base.repo
    touch /var/www/html/repo/Epel.repo
    touch /var/www/html/repo/Redhat.repo
    chmod 644 /var/www/html/repo/*.repo

Setup Note: Remember to copy the content from the linked appendixes into these files.
Path: /var/www/html/repo/CentOS-Base.repo (CentOS-Base.repo)
Path: /var/www/html/repo/Epel.repo (Epel.repo)
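Unlike RHEL7, RHEL6 has no hostnamectl; the persistent hostname lives in /etc/sysconfig/network and a separate command applies it to the running system. A minimal sketch using the el6repo example name from above:

    # /etc/sysconfig/network (persistent hostname on RHEL6)
    NETWORKING=yes
    HOSTNAME=el6repo

    # Apply immediately without a reboot
    hostname el6repo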

Setup internal yum repositories for CentOS and RedHat Servers Part 1

Internal RHEL/CentOS repo for yum

Overview

The main goal of setting up internal yum repo (or mirror) servers is to gain more control and consistency over the software deployed within a RHEL/CentOS Linux environment. The process we used before internal repos was much more ad hoc, causing discrepancies between the software versions on test servers and those on production servers. While we had a practice of upgrading test servers before production servers, trying to manage versions at each endpoint was troublesome and tedious.

Setup SaltStack on CentOS 7

Setup Salt Components on CentOS 7

Setup Note: This guide is basically copied from the Salt docs at https://docs.saltstack.com/en/latest/topics/installation/rhel.html. The only reason for it to exist is to expand on the RHEL/CentOS 7 post-install specifics: adding firewall rules and enabling the service.

Import the SaltStack GPG key
    rpm --import https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub

Setup the SaltStack repo file
    vi /etc/yum.repos.d/saltstack.repo

Insert this text:
    [saltstack-repo]
    name=SaltStack repo for RHEL/CentOS $releasever
    baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
    enabled=1
    gpgcheck=1
    gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub

Install the salt-minion, salt-master, or other Salt components:
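The post-install specifics the note refers to look roughly like this. A sketch assuming a salt-master on CentOS 7 with firewalld running; 4505 and 4506 are the standard Salt publish/return TCP ports.

    # Open the standard Salt master ports (ZeroMQ publish/return)
    firewall-cmd --permanent --add-port=4505-4506/tcp
    firewall-cmd --reload

    # Enable the master service on boot and start it
    systemctl enable salt-master.service
    systemctl start salt-master.service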

Setting up a multi-tiered log infrastructure Part 11 -- Cluster Tuning

Tuning Graylog, Elasticsearch, and MongoDB for optimized cluster performance

This article has been a long time in the making. One problem with making changes to a complex clustered environment is that you may have to wait long periods of time to gather data that shows either an improvement or a negative impact. Other considerations simply make sense if you can afford them: running on SSDs will perform far better than spinning disks.

Setting up a multi-tiered log infrastructure Part 10 -- HA Cluster Setup

Setup HA Cluster Services on CentOS 7

Install HA Cluster components

Install pacemaker and the cluster control software on both nodes that will be part of the cluster (corosync is pulled in as a dependency)
    yum install pacemaker pcs

Enable and start the cluster management service
    systemctl enable pcsd.service
    systemctl start pcsd.service

Enable corosync and pacemaker to start on boot on all nodes
    systemctl enable corosync.service
    systemctl enable pacemaker.service

Set the hacluster user's password
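Picking up where the excerpt cuts off, a sketch of that last step, assuming two hypothetical node names (node1, node2) and the pcs tooling shipped with CentOS 7:

    # Set the same hacluster password on both nodes
    passwd hacluster

    # Authenticate the nodes to each other (run once, on one node)
    pcs cluster auth node1 node2 -u hacluster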

Setting up a multi-tiered log infrastructure Part 9 -- Rsyslog HA Setup

Setup for Logging

Setup rsyslog aggregator nodes (Optional)

Setup Note: As part of the overall design, an HA cluster allows aggregating logs to the Central Log Repository with as little loss of logs as possible due to downtime or maintenance. Below are steps for building an HA cluster and setting up rsyslog on CentOS 7.

Install/upgrade to the latest rsyslog
    yum update rsyslog

Create an rsyslog spool directory (this will be needed later)
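For context on why the spool directory matters: rsyslog can use it for a disk-assisted queue, so forwarding to the Central Log Repository survives a downstream outage. A minimal sketch in rsyslog's legacy syntax, assuming the spool lives in /var/lib/rsyslog and a hypothetical CLR address of clr.example.com:

    # Disk-assisted queue: buffer to /var/lib/rsyslog if the CLR is unreachable
    $WorkDirectory /var/lib/rsyslog
    $ActionQueueType LinkedList
    $ActionQueueFileName fwdclr
    $ActionQueueMaxDiskSpace 1g
    $ActionQueueSaveOnShutdown on
    $ActionResumeRetryCount -1

    # Forward everything to the CLR over TCP
    *.* @@clr.example.com:514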

Setting up a multi-tiered log infrastructure Part 8 -- Rsyslog Setup

Setup for Logging

Setup rsyslog node

Install/upgrade to the latest rsyslog
    yum update rsyslog

Create an rsyslog spool directory (this will be needed later)
    mkdir /var/lib/rsyslog

Setup Note: A custom rsyslog.conf is available for the CLR node that allows receiving logs on TCP port 514 by default. Copy the content from the appendixes into the appropriate files.
Path: /etc/rsyslog.conf (rsyslog.conf for CLR server)

Edit the rsyslog config
    vi /etc/rsyslog.conf

Uncomment the lines for the action and change the server.
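The full custom config is in the linked appendix; the portion that enables receiving on TCP port 514, in rsyslog's legacy syntax, comes down to two directives (a sketch, not the appendix file itself):

    # Load the TCP input module and listen on port 514
    $ModLoad imtcp
    $InputTCPServerRun 514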

Setting up a multi-tiered log infrastructure Part 7 -- Graylog WebUI Setup

Additional Setup for master node

Setup Graylog Web UI on master node

Setup Note: Newer versions of Graylog no longer require a separate install for the web interface, so a few firewall rule changes are all that is needed.

Configure Graylog WebUI firewalld rules

Let's make some firewall rule changes specifically to allow web traffic. If for some reason you aren't using a firewall, you can skip this.
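A sketch of those rules, assuming Graylog 2.x's default web listener on TCP port 9000 and firewalld's default zone:

    # Allow the Graylog web interface (default port 9000) through firewalld
    firewall-cmd --permanent --add-port=9000/tcp
    firewall-cmd --reload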

Setting up a multi-tiered log infrastructure Part 6 -- Graylog Setup

Additional setup for master node

Setup graylog-server on master node

Install instructions from http://docs.graylog.org/en/2.2/pages/installation.html

Setup Note: This deployment does not use a prebuilt rpm package, so many of the next steps involve moving files, creating directories, creating additional files, and setting the proper permissions on the Linux command line. An rpm package is available, but when this guide was first written, the RPM only had support for openjdk v1.
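As an illustration of what those manual steps look like for a tarball deployment, here is a hedged sketch. The version number, /opt layout, and service account are assumptions for illustration, not the guide's actual paths:

    # Hypothetical tarball deployment; version and paths are illustrative
    useradd -r -s /sbin/nologin graylog
    tar xzf graylog-2.2.3.tgz -C /opt
    ln -s /opt/graylog-2.2.3 /opt/graylog
    chown -R graylog:graylog /opt/graylog-2.2.3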