Creating a Virtual Splunk Deployment – Part 3: More Splunk

We only have a few more items to set up before we can start using the deployment.

Indexer Discovery

Before we can use any forwarders, we need an easy way for them to know where to send their data. There are two ways to do this, each with advantages and disadvantages. The Splunk-recommended way is Indexer Discovery. To enable Indexer Discovery, SSH to the master node.

Edit server.conf
# nano /opt/splunk/etc/system/local/server.conf

Paste the text below at the end of the file:

[indexer_discovery]
pass4SymmKey = discovery

Restart Splunk on the Master Node.
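
Assuming the same /opt/splunk install path used throughout this series, that's simply:
# /opt/splunk/bin/splunk restart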

Now the MN is ready to receive polling requests from forwarders that present that password.

Index paths

On the non-indexer instances, we still need to create the storage paths defined in indexes.conf. They won't actually be used, so they won't occupy disk space, but Splunk needs them to exist and be accessible. Run these commands on ALL SERVERS EXCEPT THE INDEXERS:
# sudo mkdir /hot && sudo chown splunk /hot
sudo mkdir /cold && sudo chown splunk /cold

This step must be performed or Splunk will fail when it restarts, because it will be looking for these folder locations. The indexers already have these paths, which is why they don't need this step.
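
For reference, this is roughly the shape of the volume stanzas in indexes.conf that point at these paths. The names and size limits below are placeholders, so compare against the indexes.conf you actually built on the master node:

[volume:hot]
path = /hot
maxVolumeDataSizeMB = 10000

[volume:cold]
path = /cold
maxVolumeDataSizeMB = 20000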

Deployment Client

A deployment client simply polls the deployment server (DS) for apps and takes whatever the DS says it should receive. So we need to tell each client device where the DS is located.

SSH to the following devices:

  • Deployment Server
  • Heavy Forwarder
  • Monitoring Server
  • Deployer
  • Master Node

And run the following command on each server:
# /opt/splunk/bin/splunk set deploy-poll 192.168.11.11:8089 -auth admin:'splunkpass'

No need to restart Splunk after this change.
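
If you want to confirm a client took the setting, the CLI can read it back (adjust the credentials to match your own admin account):
# /opt/splunk/bin/splunk show deploy-poll -auth admin:'splunkpass'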

NOTE: While the Master Node and Deployer are clients of the deployment server, this is not how we will push apps to the search head cluster and indexer cluster. Apps delivered this way are meant for the master node and deployer themselves, not for the clusters they manage. More on this later.

Deployment Server

To activate a Splunk instance as a Deployment Server, all you need to do is place an app (or folder) in its deployment-apps folder. We need to create an app anyway, so let's do that now.

Run these commands:
# mkdir -p /opt/splunk/etc/deployment-apps/global_forwarders_outputs/local/
nano /opt/splunk/etc/deployment-apps/global_forwarders_outputs/local/outputs.conf

Paste the text below into the new file and save.

[indexer_discovery:indexerCluster1]
pass4SymmKey = discovery
master_uri = https://192.168.11.21:8089

[tcpout:groupName]
indexerDiscovery = indexerCluster1
autoLBFrequency = 30
forceTimebasedAutoLB = true
useACK = true

[tcpout]
defaultGroup = groupName

Ensure the IP address used above is the correct internal IP for the master node.
Save the file and exit.

Login to the web GUI for the deployment server (DS). Go to SETTINGS > FORWARDER MANAGEMENT.

Click APPS and you should see the app we just created.

We also need to make an app that is a clone of the indexes.conf app on the master node. Use these commands:
# mkdir -p /opt/splunk/etc/deployment-apps/global_indexes_list/local/
nano /opt/splunk/etc/deployment-apps/global_indexes_list/local/indexes.conf

Now go back to the indexes.conf file in the Master Node configuration and paste the same data into this app. Don't worry about having two copies of the same file; we can address that in another session. For now, we just need the forwarders to know which indexes are available.
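
As a reminder of what those entries look like, a typical stanza has this shape. The index name and paths here are only placeholders, so copy your real entries rather than these:

[firewall]
homePath = volume:hot/firewall/db
coldPath = volume:cold/firewall/colddb
thawedPath = $SPLUNK_DB/firewall/thaweddb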

In the CLIENTS tab, you should see all of the clients we configured earlier checking in.

Now click the SERVER CLASSES tab. Click the link to create a new class.

Name the server class global_forwarders_outputs.

Click ADD APPS, then click the global_forwarders_outputs app on the left side to move it over to the right side. Do not add the global_indexes_list app.
Click SAVE.

You will see the app listed as part of the server class. Under ACTIONS, select EDIT.

By default, RESTART SPLUNKD will not be checked. In many cases, like this one, Splunk must be restarted for an app to take effect on the clients it is distributed to, so check this box and click SAVE.

Now click ADD CLIENTS. Using the list at the bottom of the page, copy the Heavy Forwarder's host name, DNS name, or internal IP address and paste it into the INCLUDE section at the top. Click SAVE.

To verify the heavy forwarder is outputting as expected, run this command:
# sudo tcpdump -nnei any port 9997
If you see data flowing, you are sending data to the indexers on port 9997.
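
Another way to check, from the HF itself, is to ask Splunk which indexers it is currently forwarding to (adjust the credentials to your HF's admin account):
# /opt/splunk/bin/splunk list forward-server -auth admin:'splunkpass'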

Now let's finish the global_indexes_list app.

At the FORWARDER MANAGEMENT page, click NEW SERVER CLASS to create a new server class and name it "global_indexes_list".

Add the global_indexes_list app, then edit the app so RESTART SPLUNKD is checked when it is installed. SAVE.

For clients, add the Heavy Forwarder and SAVE the class.

Forwarders Server Class

Since we just made the server class that adds indexer discovery to the heavy forwarder, we can now add some additional Splunk instances to that class.

Go to the deployment server and open the global_forwarders_outputs server class.
Under CLIENTS, click the EDIT button.
Add all of the available clients to the class. These are the clients that need to receive this app.

SAVE the class.

Do the above steps for the global_indexes_list server class.
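
To confirm a client actually received and applied the outputs app after its restart, btool on that client will show each setting and the app it came from:
# /opt/splunk/bin/splunk btool outputs list --debug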

Heavy Forwarder

Most of the Splunk setup on this box is already done, but we can also attach the forwarder to the indexer cluster as a search head. It's not critical, but you will likely want to search your data directly on the HF after onboarding it there.

SSH to the heavy forwarder
# ssh hf

Edit server.conf:
# nano /opt/splunk/etc/system/local/server.conf

[clustering]
master_uri = https://192.168.11.21:8089
mode = searchhead
pass4SymmKey = thispassword

Save the file and exit.
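
The clustering stanza only takes effect once splunkd restarts, so restart Splunk now (or let the reboot in the next step handle it):
# /opt/splunk/bin/splunk restart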

Since we gave all the servers the two extra disks that only the indexers need, we can use the third disk as the log storage disk and leave the fourth disk alone. So we need to partition that disk and edit fstab again.

Run these commands:
# sudo parted /dev/sdc
(parted) mklabel gpt
(parted) mkpart logs 0% 100%
(parted) quit
# sudo mkfs.xfs /dev/sdc1

Now create the mount point and add this volume to fstab:
# sudo mkdir /logs
sudo nano /etc/fstab

Add the text below to the end of the file

/dev/sdc1	/logs	xfs	defaults	0 0

Save the file and reboot. When it comes back up, set permissions with this command:
# sudo chown -R splunk:splunk /logs
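
A quick sanity check that the new disk mounted where we expect and that the splunk user owns it:
# df -h /logs
ls -ld /logs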

Syslog Service

On the Heavy Forwarder, you may want to install a syslog service to collect logs from other servers. This is critical for a Splunk deployment because MOST appliances will only have the option to send a syslog feed to a remote collection server. The Heavy Forwarder will act as our main receiver for all log files before it processes them for Splunk. One way or another you have to get the logs to this server, whether it's syslog, FTP, SFTP/SCP, HEC, etc. However they arrive, the HF needs to be able to receive them, so we will cover the most common method, which is a syslog service.

To enable syslog-ng on our HF, ssh to the HF and run these commands:

sudo -s
cd /etc/yum.repos.d/
wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng328/repo/epel-7/czanik-syslog-ng328-epel-7.repo
cd /tmp
wget https://download-ib01.fedoraproject.org/pub/epel/testing/7/x86_64/Packages/i/ivykis-0.36.3-1.el7.x86_64.rpm
rpm -ivh ivykis-0.36.3-1.el7.x86_64.rpm 
yum -y install syslog-ng
yum -y remove rsyslog
systemctl enable syslog-ng
systemctl start syslog-ng
exit

Now that syslog-ng is installed, you can make your own config file, or you can download mine. While on the HF, run these commands:
# sudo mv /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.orig
sudo nano /etc/syslog-ng/syslog-ng.conf

Paste in the contents of the syslog-ng.conf file from my GitHub repo, then save and exit.
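
If you would rather build your own config, the general shape of one per-port listener in syslog-ng looks something like this. The port, path, and names below are only illustrative and are not necessarily what my repo config uses:

source s_iis { network(transport("udp") port(5000)); };
destination d_iis { file("/opt/log/IIS/IIS.log" create_dirs(yes)); };
log { source(s_iis); destination(d_iis); };
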
Now restart syslog-ng
# sudo systemctl restart syslog-ng.service

Check that the ports are listening:
# netstat -an | grep ":500"

Your Heavy Forwarder is now ready to receive logs using those ports.

For a quick test before we move on, use this command from the CLIENT machine:
# logger -d -n 192.168.11.17 -P 5000 "this is a test message"

Now go back to the HF and check the log path in use for port 5000:
# cat /opt/log/IIS/IIS.log

Go ahead and clear that log:
# echo -n > /opt/log/IIS/IIS.log

Also, the eventgen add-on does not create the log folders, so you have to create them manually. Use this command:
# cd /logs && mkdir -p apache1 checkpoint cisco.ise dhcp dns IIS o365 proxySG
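
If you ran that as root, the new folders will be owned by root. Assuming splunkd runs as the splunk user as set up in the earlier parts, eventgen writes these files as splunk, so hand the folders back to it:
# sudo chown -R splunk:splunk /logs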

Monitoring Server

The monitoring server is essentially just a search head, but we can install additional apps on it that work well for monitoring. Keeping this role on its own instance also keeps that CPU and memory load off the search head cluster, so you can monitor your clusters without affecting the search head cluster's performance.

If you've completed all the steps to this point, then you've got a working server, so we will focus on the monitoring server in a separate session.

If you wish to do this now, you can add this server to the indexer cluster as a search head.

EventGen Addon

Use a browser to go to the Deployment Server web GUI.
Click the APPS menu and select FIND MORE APPS.
In the search bar, type "eventgen", then click INSTALL for that app.
It will prompt you for your Splunkbase credentials; sign up for a free account if you don't have one yet.
It will prompt for a restart when finished. No need; click RESTART LATER. We just want the app downloaded.

For this tutorial I've done A LOT of work preparing the eventgen add-on to ensure it works and to provide sample data that fills the indexers with usable events. I am using my own GitHub repo here, but you can use whatever data you like.

SSH to the deployment server.
# ssh ds
Now use this command to move the eventgen folder to the deployment-apps folder:
# mv /opt/splunk/etc/apps/SA-Eventgen /opt/splunk/etc/deployment-apps/

Now use these commands to download my customized files:
# cd /tmp
yum -y install git

git clone https://github.com/bramuno/SplunkEnv.git
cd SplunkEnv/SA-Eventgen/

mkdir -p /opt/splunk/etc/deployment-apps/SA-Eventgen/samples/
cp samples/* /opt/splunk/etc/deployment-apps/SA-Eventgen/samples/

mkdir -p /opt/splunk/etc/deployment-apps/SA-Eventgen/local/
cp local/* /opt/splunk/etc/deployment-apps/SA-Eventgen/local/

cp metadata/local.meta /opt/splunk/etc/deployment-apps/SA-Eventgen/metadata/

When the copies are done, restart Splunk on the deployment server.
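
If you would rather not bounce the whole DS, reloading just the deployment server component also picks up new or changed apps (adjust the credentials to your own admin account):
# /opt/splunk/bin/splunk reload deploy-server -auth admin:'splunkpass'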

Now go to the DS web GUI and go to FORWARDER MANAGEMENT.
Create a new server class called "heavy_forwarders".
Add the SA-Eventgen app to the class and SAVE.
Add the HF client to the clients list and SAVE.
In the APPS section, select EDIT under ACTIONS for the app, ensure the RESTART SPLUNKD box is checked, then SAVE.

Wait a few minutes for the HF to restart, then run this command to check that logs are flowing from the eventgen app:
# tail -f /logs/*/*.log

You should see multiple logs streaming. These will be our artificially generated source of logs that Splunk will monitor.

Logrotate

These new logs need to be rotated automatically so they don't fill the disk and break the OS or Splunk. To do this, we tell Linux what to do using logrotate.

SSH to the HF and run this command:
# sudo nano /etc/logrotate.d/splunk

Now paste the data below into this new file:

/logs/*/*.log {
	daily
	rotate 2
	compress
	missingok
	create 0660 splunk splunk
}

Save and exit.

This file tells logrotate to rotate all logs under /logs and keep only two days' worth per file. It also creates the new log files with the splunk user and group so Splunk has no issues reading and writing them. The old logs are compressed after rotation, so we need to make sure Splunk is not trying to read those compressed copies. More on that later.
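
You can dry-run the new rule to make sure logrotate parses it cleanly before the nightly cron job picks it up (the -d flag is debug mode and makes no changes):
# sudo logrotate -d /etc/logrotate.d/splunk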

Conclusion

That should be enough to get started on testing in your own environment. Feel free to leave a comment if I messed something up or forgot something completely and I will try to correct it. Thanks for coming to my TED talk.
