Threat Hunting

Network Monitoring for Threat Hunting

Mohamed Amine SAIDANI | 27 NOV 2019

What’s happening in our networks?

If we don’t feel we have a handle on that question, we should enhance our network monitoring capabilities; or, as we like to say, “No Vision, No Security”.

In this post, we will briefly discuss how we configure and use Zeek with the Elastic stack for network monitoring and threat detection.

Before getting started, it is worth noting that Zeek has some extra features compared to other open-source IDSes, which led us to choose it. Here is a taster of what you can expect from Zeek:

  • Flexible network security monitoring with event correlation
  • Traffic inspection
  • Attack detection
  • Log recording
  • Distributed analysis
  • Full programmability
  • Relatively easy to install

Keep in mind that Zeek is not about trying to tell you what is bad, but rather what is happening within your network.

The Elastic Stack is designed to let users take data from any source, in any format, and to search, analyze, and visualize that data in real time using an open-source framework that is quick and easy to both install and maintain.

How it works

Once Zeek is up and running, it needs to obtain a copy of all network packets without sitting inline the way a typical network sniffer would. As a result, we first need to mirror the live traffic to Zeek using a packet broker, after which we add the configuration needed to convert all of its logs into JSON format.

Unfortunately, by default Zeek logs are TSV (tab-separated values) files, which need to be converted into JSON for ingestion into Elasticsearch. Easy peasy! The script below also configures Zeek to use ISO8601 timestamps instead of UNIX epoch time.
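As a quick aside on what that timestamp change buys us: the epoch values Zeek writes by default are compact but unreadable. A minimal GNU date sketch (the epoch value here is arbitrary) shows the mapping:

```shell
# Convert a UNIX epoch timestamp (Zeek's default) to ISO8601
# (what we configure below), using GNU date.
epoch=1570522374
date -u -d "@${epoch}" +%Y-%m-%dT%H:%M:%SZ   # prints 2019-10-08T08:12:54Z
```

ISO8601 strings are directly usable as Elasticsearch date fields, which keeps the Logstash side simpler.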

Note: use your preferred directory; in this example it is /nsm/bro/...

## Configure bro to write JSON logs
sudo mkdir -p /nsm/bro/share/bro/site/scripts
sudo tee /nsm/bro/share/bro/site/scripts/json-logs.bro << EOF
@load tuning/json-logs
redef LogAscii::json_timestamps = JSON::TS_ISO8601;
redef LogAscii::use_json = T;
EOF

sudo tee -a /nsm/bro/share/bro/site/local.bro << EOF
# Load policy for JSON output
@load scripts/json-logs
EOF

Alternatively, you can change the default directly in the “ascii.bro” file (the script for the ASCII log writer):


From this

const use_json = F &redef;

To this

const use_json = T &redef;

Restart Zeek and here we go!

/nsm/bro/bin# ./broctl 
Welcome to BroControl 1.9-2
Type "help" for help.
[BroControl] > restart
stopping ...
stopping bro ...
starting ...
starting bro ...
[BroControl] > status
Name         Type       Host          Status    Pid    Started
bro          standalone localhost     running   851    09 Oct 06:39:32

And check that the logs are in JSON format:

/nsm/bro/logs/current# tail -f conn.log

The logs look like this:

{
    "ts": 1570522374.209008,
    "uid": "C75Nxy4MNOcV3UZoma",
    "id.orig_h": "",
    "id.orig_p": 44762,
    "id.resp_h": "",
    "id.resp_p": 443,
    "proto": "tcp",
    "conn_state": "OTH",
    "local_orig": true,
    "local_resp": false,
    "missed_bytes": 0,
    "history": "C",
    "orig_pkts": 0,
    "orig_ip_bytes": 0,
    "resp_pkts": 0,
    "resp_ip_bytes": 0
} {
    "ts": 1570522126.778665,
    "uid": "CFU3TG1mVtNU2EsMK2",
    "id.orig_h": "",
    "id.orig_p": 44734,
    "id.resp_h": "",
    "id.resp_p": 443,
    "proto": "tcp",
    "duration": 0.000343,
    "orig_bytes": 0,
    "resp_bytes": 0,
    "conn_state": "OTH",
    "local_orig": true,
    "local_resp": false,
    "missed_bytes": 0,
    "history": "Ca",
    "orig_pkts": 0,
    "orig_ip_bytes": 0,
    "resp_pkts": 2,
    "resp_ip_bytes": 80
}
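Even before the Elastic pipeline is in place, JSON logs are easy to triage with standard tools. A quick sketch using made-up sample lines (on a real sensor you would point grep at /nsm/bro/logs/current/conn.log instead):

```shell
# Count connections to port 443 in a JSON-formatted conn.log.
# The sample file below is hypothetical, for illustration only.
cat > /tmp/conn_sample.log <<'EOF'
{"id.resp_p": 443, "proto": "tcp", "conn_state": "OTH"}
{"id.resp_p": 53, "proto": "udp", "conn_state": "SF"}
{"id.resp_p": 443, "proto": "tcp", "conn_state": "S0"}
EOF
grep -c '"id.resp_p": 443' /tmp/conn_sample.log   # prints 2
```

This kind of one-liner is handy for sanity checks; the heavy lifting still belongs to the Elastic Stack.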

Add Filebeat conf

That’s it for Zeek; let’s move on to the Elastic Stack side. We start with Filebeat, which is responsible for reading the logs from Zeek and sending them to Logstash. We need to enable the Zeek module with this command:

/etc/filebeat/modules.d# filebeat modules enable zeek

Next, we add configuration options so the Zeek module reads from /nsm/bro/logs/current/…, the directory where Zeek writes its logs in real time.

:/etc/filebeat/modules.d# cat zeek.yml 
# Module: zeek
- module: zeek
  connection:
    enabled: true
    var.paths: ["/nsm/bro/logs/current/conn.log"]
  dns:
    enabled: true
    var.paths: ["/nsm/bro/logs/current/dns.log"]
  http:
    enabled: true
    var.paths: ["/nsm/bro/logs/current/http.log"]
  files:
    enabled: true
    var.paths: ["/nsm/bro/logs/current/files.log"]
  ssl:
    enabled: true
    var.paths: ["/nsm/bro/logs/current/ssl.log"]
  notice:
    enabled: true
    var.paths: ["/nsm/bro/logs/current/notice.log"]

Now that the module is configured, we also need to configure the Filebeat output to send all the logs to Logstash, as previously mentioned.

/etc/filebeat# cat filebeat.yml
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

Make sure the default output.elasticsearch section in filebeat.yml is commented out, so that Filebeat ships only to Logstash.
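Before restarting Filebeat, it is worth sanity-checking what we just wrote; Filebeat ships built-in test subcommands for both the configuration and the output connection (these need a live Logstash to fully succeed):

```shell
# Validate the Filebeat configuration and connectivity to Logstash
filebeat test config
filebeat test output
```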

In this part we will create Logstash Inputs, Filters, and Outputs.

Input section: Since the Zeek logs are forwarded to Logstash by Filebeat, the input section of the pipeline uses the Beats input plugin. Here we configure the port on which to listen for Filebeat data.

input {
    beats {
        port => "5044"
    }
}

Filter section: This is where the real work happens: the available filter plugins parse each message Logstash receives, and this is where fields are created and populated. Our configuration is based on the RockNSM project.
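For illustration only (the RockNSM pipeline does far more), a minimal filter that parses Zeek’s JSON and promotes its ts field to the event timestamp could look like this:

```
filter {
    # Parse the JSON document Zeek wrote into individual fields
    json {
        source => "message"
    }
    # Use Zeek's ISO8601 "ts" field as @timestamp
    date {
        match => [ "ts", "ISO8601" ]
    }
}
```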

Note: since the filter has more than 1300 lines, you can find all the Logstash configurations in our Marketplace.

Output section: The output section is pretty straightforward. Once Logstash is done parsing an event, it sends its output to Elasticsearch using the Elasticsearch output plugin. Here we configure the address of the Elasticsearch node and a few other settings.

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        # Elasticsearch index names must be lowercase
        index => "zeek"
    }
}

Once Zeek, Filebeat, and Logstash have started successfully, we can see the logs in Kibana:
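If the logs do not show up in Kibana, a quick way to confirm that documents are actually reaching Elasticsearch is to list the indices (assuming Elasticsearch is listening on localhost:9200, as configured above):

```shell
# A non-empty docs.count for the zeek index means data is flowing
curl -s 'http://localhost:9200/_cat/indices?v' | grep zeek
```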

Let’s look at a real use case. First, we are going to run a scan against our network, see how Zeek detects it, and then use a Kibana dashboard to visualize it!

Scanning our network using Nmap:

:/nsm/bro/share/bro/policy/misc# nmap -sP
Starting Nmap 7.80 ( ) at 2019-10-12 13:53 EDT

Open the current log directory and look at the notice file:

:/nsm/bro/logs/current# ls
capture_loss.log  dns.log    http.log    ssl.log    stderr.log  weird.log
conn.log          files.log  notice.log  stats.log  stdout.log  x509.log
root@kali:/nsm/bro/logs/current# tail -f notice.log 
{
  "ts": 1570902827.002012,
  "proto": "tcp",
  "note": "Scan::Address_Scan",
  "msg": " scanned at least 25 unique hosts on port 443/tcp in 0m20s",
  "sub": "local",
  "src": "",
  "p": 443,
  "actions": [
  "suppress_for": 3600,
  "dropped": false
}

As you can see in the log above, the message is quite clear, and we can tweak the detection script for more detail.
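For example, the scan detector’s thresholds are &redef-able. A sketch for local.bro (the values here are illustrative, and the variable name assumes the stock misc/scan policy, whose default of 25 hosts matches the notice message above):

```
@load misc/scan
# Raise an Address_Scan notice after 10 scanned hosts
# instead of the default 25
redef Scan::addr_scan_threshold = 10;
```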

Next, we will see the Notice’s log in Kibana:

This is just a drop in the huge ocean of network monitoring for threat hunting. We have shown the required configuration for both Zeek and the Elastic Stack, along with a simple use case to make it all clearer.