Resize an LVM Partition on VMware

Accommodate growth of a VM by expanding an LVM partition. At some point, an LVM “physical volume” may have to be enlarged to accommodate growth on a VM. This is how you grow the filesystem of an existing VMDK without adding an additional disk to your VM.

Enlarging the VMDK:
1. Log in to VMware.
2. Find the VM with the disk that needs to be made larger.
3. Right-click and select “Edit Settings”.
4. Find the specific hard disk and update the capacity to the desired size.
5. Click “OK”.

Expanding the volume size inside the VM: in most cases, the “physical volume” information will not be updated automatically.
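Once the VMDK is enlarged, the guest has to be told about it. A minimal command sketch of the in-guest steps, assuming the whole disk (not a partition) is the PV, the device is /dev/sdb, the volume group and logical volume are named vg_root/lv_root, and the filesystem is ext4 — all of these names are assumptions, so substitute the output of pvs/lvs on your own system:

```shell
# Hedged sketch -- /dev/sdb and vg_root/lv_root are assumed names; check pvs/lvs.
echo 1 > /sys/class/block/sdb/device/rescan   # make the kernel re-read the disk size
pvresize /dev/sdb                             # grow the physical volume to match
lvextend -l +100%FREE /dev/vg_root/lv_root    # hand the new free extents to the LV
resize2fs /dev/vg_root/lv_root                # grow ext4 online to fill the LV
```

If the PV sits on a partition rather than the whole disk, the partition has to be grown first before pvresize will see the new space.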

Configure a default zone with firewalld

Configure a Default Zone. This is not meant as a full primer on firewalld; it is just meant to document changing the default zone. If you are looking for a more in-depth introduction to firewalld, try https://www.hogarthuk.com/?q=node/9

Check available zones:
firewall-cmd --get-zones

Check active zones:
firewall-cmd --get-active-zones

Get the current zone of an interface (this assumes it is in the public zone):
firewall-cmd --get-zone-of-interface=<interface returned from the output above>

Check the internal zone for existing services
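The actual default-zone change the excerpt leads up to can be sketched as below; the zone name “internal” is only an example, not taken from the article:

```shell
# Hedged sketch -- "internal" is an example zone, not from the original.
firewall-cmd --get-default-zone                # show the current default zone
firewall-cmd --set-default-zone=internal      # change it (permanent, applied immediately)
firewall-cmd --zone=internal --list-services  # confirm which services the zone allows
```

Note that --set-default-zone writes the permanent configuration and takes effect at runtime in one step, unlike most firewall-cmd changes that need a separate --permanent pass.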

Send Security Onion logs to a centralized Graylog Server

Overview. For anyone who doesn’t know, Security Onion is a Linux distribution based on Ubuntu that can be used as a Network Intrusion Detection System (NIDS). Security Onion integrates several configurable applications such as Bro IDS, Snort, Suricata, and OSSEC, to name a few. By default, there is an integrated ELSA stack that can be configured, which makes Security Onion a pretty interesting one-stop shop for getting your feet wet with IDS technology.

Sorting /etc/passwd and /etc/shadow Files

Sorting /etc/passwd and /etc/shadow files

[root@server~]# cd /root/
[root@server~]# touch passwd.sorted shadow.sorted
[root@server~]# chmod 644 passwd.sorted
[root@server~]# chmod 600 shadow.sorted
[root@server~]# sort -t: -n -k3,3 /etc/passwd >passwd.sorted
[root@server~]# gawk -F: '{system("grep \"^" $1 ":\" /etc/shadow")}' passwd.sorted >shadow.sorted
[root@server~]# wc /etc/shadow shadow.sorted
  211   211 10985 /etc/shadow
  211   211 10985 shadow.sorted
  422   422 21970 total
[root@server~]# wc /etc/passwd passwd.sorted
  211   413 11881 /etc/passwd
  211   413 11881 passwd.sorted
  422   826 23762 total
[root@server~]# cp -a /etc/passwd /root/passwd.
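The sort-then-match technique from the transcript can be demonstrated on disposable sample data instead of the live /etc files; a minimal sketch (the sample file contents and temp paths below are made up for illustration):

```shell
# Sort a passwd-style file numerically on field 3 (the UID), then pull the
# matching shadow-style lines out in that same UID order -- the same trick
# the transcript applies to the real /etc/passwd and /etc/shadow.
tmp=$(mktemp -d)
printf 'root:x:0:0::/root:/bin/bash\nalice:x:1001:1001::/home/alice:/bin/bash\nbin:x:1:1::/bin:/sbin/nologin\n' > "$tmp/passwd"
printf 'alice:*:17000:0:99999:7:::\nroot:*:17000:0:99999:7:::\nbin:*:17000:0:99999:7:::\n' > "$tmp/shadow"
sort -t: -n -k3,3 "$tmp/passwd" > "$tmp/passwd.sorted"
awk -F: -v shadow="$tmp/shadow" '{system("grep \"^" $1 ":\" " shadow)}' "$tmp/passwd.sorted" > "$tmp/shadow.sorted"
order=$(cut -d: -f1 "$tmp/shadow.sorted" | xargs)
echo "$order"   # prints "root bin alice" -- UID order 0, 1, 1001
rm -rf "$tmp"
```

Because the shadow file is rebuilt by matching usernames rather than sorted directly, both output files end up in the same order even though shadow has no UID field of its own.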

Setup internal yum repositories for CentOS and RedHat Servers Part 4

Configure RHEL/CentOS client machines. Setup Note: Now that the storage nodes are configured, the repo files have to be updated on the client nodes to point them at the new internal mirrors. This can be accomplished in a few different ways.

Configure RHEL6/RHEL7 clients. On RHEL systems, the subscription manager has to be disabled:
subscription-manager config --rhsm.manage_repos=0

Get the Redhat.repo file from the internal repo server:
wget http://el${OS_VER}repo/repo/Redhat.repo -O /etc/yum.repos.d/Redhat.repo

Configure CentOS6/CentOS7 clients. Get the CentOS-Base.
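The wget URL above interpolates ${OS_VER}; a hedged sketch of one way a client could derive it, shown against a sample os-release string (the function name and parsing are illustrative, not from the original — on a real client you would feed it /etc/os-release):

```shell
# Hedged sketch: derive the OS_VER used in the internal repo hostname.
get_major_version() {
  # keep only the major number from an os-release VERSION_ID line
  grep '^VERSION_ID=' | tr -d '"' | cut -d= -f2 | cut -d. -f1
}
OS_VER=$(printf 'NAME="CentOS Linux"\nVERSION_ID="7"\n' | get_major_version)
echo "el${OS_VER}repo"   # prints "el7repo" -- the mirror hostname the wget would hit
```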

Setup internal yum repositories for CentOS and RedHat Servers Part 3

Setup storage nodes. Setup the RHEL7 storage node.

Set the hostname (example: el7repo):
hostnamectl set-hostname el7repo

Start apache and set it to start on boot:
systemctl start httpd.service
systemctl enable httpd.service

Create the base directory structure:
mkdir -p /var/www/html/repo/Package_Diff

Create the repo config files (see Setup Note for a link to the contents):
touch /var/www/html/repo/CentOS-Base.repo
touch /var/www/html/repo/Epel.repo
touch /var/www/html/repo/Redhat.repo
chmod 644 /var/www/html/repo/*.repo

Setup Note: Remember to copy the content from the appropriate files. Path: /var/www/html/repo/CentOS-Base.repo CentOS-Base.repo Path: /var/www/html/repo/Epel.
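Serving the directory over apache is only half the job; the mirror content also needs periodic syncing. A hedged sketch of one common approach — the repo ids, paths, and the use of reposync/createrepo here are assumptions, not taken from the article:

```shell
# Hedged sketch -- repo ids and paths are illustrative. Pull packages for the
# enabled repos into the web root, then (re)build the repodata that yum
# clients consume.
reposync --repoid=base --repoid=updates --download_path=/var/www/html/repo/
createrepo --update /var/www/html/repo/base/
createrepo --update /var/www/html/repo/updates/
```

Run from cron, this keeps the internal mirror current while still letting you control exactly when new packages become visible to clients.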

Setup internal yum repositories for CentOS and RedHat Servers Part 2

Setup storage nodes. Setup the RHEL6 storage node.

Set the hostname (example: el6repo):
vi /etc/sysconfig/network

Start apache and set it to start on boot:
service httpd start
chkconfig httpd on

Create the base directory structure:
mkdir -p /var/www/html/repo/Package_Diff

Create the repo config files (see Setup Note for a link to the contents):
touch /var/www/html/repo/CentOS-Base.repo
touch /var/www/html/repo/Epel.repo
touch /var/www/html/repo/Redhat.repo
chmod 644 /var/www/html/repo/*.repo

Setup Note: Remember to copy the content from the appropriate files. Path: /var/www/html/repo/CentOS-Base.repo CentOS-Base.repo Path: /var/www/html/repo/Epel.repo Epel.

Setup internal yum repositories for CentOS and RedHat Servers Part 1

Internal RHEL/CentOS repo for yum. Overview. The main goal of setting up internal yum repo (or mirror) servers is to gain more control and consistency over the software deployed within a RHEL/CentOS Linux environment. The process we used prior to internal repos was much more ad hoc, causing discrepancies between test-server and production-server software versions. While a practice of upgrading test servers prior to production servers was in place, trying to manage versions at the endpoint was troublesome and tedious.

Setup SaltStack on CentOS 7

Setup Salt Components on CentOS 7. Setup Note: This guide is largely copied from the Salt docs at https://docs.saltstack.com/en/latest/topics/installation/rhel.html. The only reason for it to exist is to expand on the RHEL/CentOS 7 post-install specifics for adding firewall rules and enabling the service.

Import the SaltStack GPG key:
rpm --import https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub

Set up the SaltStack repo file. Edit /etc/yum.repos.d/saltstack.repo:
vi /etc/yum.repos.d/saltstack.repo

Insert this text:
[saltstack-repo]
name=SaltStack repo for RHEL/CentOS $releasever
baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
enabled=1
gpgcheck=1
gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub

Install the salt-minion, salt-master, or other Salt components:
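The post-install firewall and service specifics the note mentions can be sketched as below. Salt’s ZeroMQ transport uses TCP 4505 (publish) and 4506 (returns) on the master; the exact commands here are a sketch, not lifted from the article:

```shell
# Hedged sketch of the CentOS 7 post-install steps.
firewall-cmd --permanent --add-port=4505-4506/tcp   # on the master only
firewall-cmd --reload
systemctl enable salt-master && systemctl start salt-master   # on the master
systemctl enable salt-minion && systemctl start salt-minion   # on each minion
```

Minions initiate all connections to the master, so no inbound ports need to be opened on the minion side.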

Setting up a multi-tiered log infrastructure Part 11 -- Cluster Tuning

Tuning Graylog, Elasticsearch, and MongoDB for optimized cluster performance. This article has been a long time in the making. One problem with making changes to a complex clustered environment is that you may have to wait long periods of time to gather data that shows either an improvement or a negative impact. Some other considerations simply make sense if you can afford them: running on SSDs will perform far better than spinning disks.