Network telemetry gives security analysts visibility into the actors present on a network, making it easier to track adversaries as they enter and move through it. To be effective, the telemetry needs to be collected from edge devices into data stores where it can be correlated at scale with other signals. This blog shows how to use Logstash running on Ubuntu Linux to stream NetFlow/IPFIX telemetry to Azure Sentinel for SIEM+SOAR and to Azure Data Explorer (Kusto) for long-term storage.
Start by installing Logstash:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update
sudo apt-get install logstash
sudo chmod -R 777 /var/log/logstash
sudo chmod -R 777 /var/lib/logstash
cd /usr/share/logstash
sudo bin/logstash-keystore --path.settings /etc/logstash create
Verify the installation by writing a few log lines to the console:
bin/logstash --path.settings /etc/logstash -e 'input { generator { count => 10 } } output { stdout {} }'
Install the Azure Log Analytics plugin:
sudo bin/logstash-plugin install microsoft-logstash-output-azure-loganalytics
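Optionally, confirm the plugin was installed by filtering the plugin list:
bin/logstash-plugin list | grep azure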
Store the Log Analytics workspace key in the Logstash key store. The workspace key can be found in the Azure Portal under Azure Sentinel > Settings > Workspace settings > Agents management > Primary key. While there, also write down the Workspace ID (workspace_id below).
sudo bin/logstash-keystore --path.settings /etc/logstash add LogAnalyticsKey
The command prompts for the key.
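To verify that the key was stored, list the keystore entries:
sudo bin/logstash-keystore --path.settings /etc/logstash list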
Create the configuration file /etc/logstash/generator-to-sentinel.conf:
input {
  stdin {}
  generator { count => 10 }
}
output {
  stdout {}
  microsoft-logstash-output-azure-loganalytics {
    workspace_id => "<workspace_id>"
    workspace_key => "${LogAnalyticsKey}"
    custom_log_table_name => "TestLogstash"
  }
}
This will create a table called TestLogstash_CL in Azure Sentinel (Log Analytics automatically appends the _CL suffix to custom log tables).
Run the pipeline:
bin/logstash --debug --path.settings /etc/logstash -f /etc/logstash/generator-to-sentinel.conf
The pipeline starts by generating 10 rows and then waits on stdin; each line typed is sent as an additional row.
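Once ingestion completes (the first upload to a new custom table can take several minutes to show up), the rows can be queried from Azure Sentinel > Logs:
TestLogstash_CL | take 10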
Install the Azure Data Explorer plugin:
sudo bin/logstash-plugin install logstash-output-kusto
Create an AAD application in the Azure Portal under Azure Active Directory > App registrations > New registration. Write down the Application ID (app_id below) and the Tenant ID (app_tenant below).
Create a new key for that app under Certificates & secrets > Client secrets > New client secret and store it in the Logstash key store:
sudo bin/logstash-keystore --path.settings /etc/logstash add AadAppKey
The command prompts for the key.
In Kusto, grant the AAD app ingestor access and initialize the table:
.add database <database> ingestors ('aadapp=<app_id>;<app_tenant>')
.create table TestLogstash (timestamp:datetime, message:string, sequence:long)
.create table TestLogstash ingestion json mapping 'v1' '[{"column":"timestamp","path":"$.@timestamp"},{"column":"message","path":"$.message"},{"column":"sequence","path":"$.sequence"}]'
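Before streaming data, the table and its mapping can be double-checked with:
.show tables
.show table TestLogstash ingestion json mappings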
Create the configuration file /etc/logstash/generator-to-kusto.conf:
input {
  stdin {}
  generator { count => 10 }
}
output {
  stdout {}
  kusto {
    ingest_url => "https://ingest-<cluster>.kusto.windows.net/"
    database => "<database>"
    table => "TestLogstash"
    json_mapping => "v1"
    app_tenant => "<app_tenant>"
    app_id => "<app_id>"
    app_key => "${AadAppKey}"
    path => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm-ss}.txt"
  }
}
Run the pipeline:
bin/logstash --debug --path.settings /etc/logstash -f /etc/logstash/generator-to-kusto.conf
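Once the run completes, verify that the rows landed in Kusto (ingestion batching can delay their appearance by a few minutes):
TestLogstash | take 10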
NetFlow logs are collected by Filebeat, which forwards them to Logstash. For simplicity, the Logstash output below is written to the console; it can be replaced by the Log Analytics and Kusto outputs as needed, as sketched next.
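For reference, such a combined output section could look like the sketch below, which reuses the placeholders and keystore entries from the test pipelines above. The table name Netflow and the mapping name <netflow_mapping> are illustrative: for real traffic, the Kusto table and JSON mapping need columns matching the NetFlow fields rather than the generator test schema.
output {
  microsoft-logstash-output-azure-loganalytics {
    workspace_id => "<workspace_id>"
    workspace_key => "${LogAnalyticsKey}"
    custom_log_table_name => "Netflow"
  }
  kusto {
    ingest_url => "https://ingest-<cluster>.kusto.windows.net/"
    database => "<database>"
    table => "Netflow"
    json_mapping => "<netflow_mapping>"
    app_tenant => "<app_tenant>"
    app_id => "<app_id>"
    app_key => "${AadAppKey}"
    path => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm-ss}.txt"
  }
}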
Install Filebeat:
sudo apt-get install filebeat
sudo chmod 644 /etc/filebeat/filebeat.yml
sudo mkdir /var/lib/filebeat
sudo mkdir /var/log/filebeat
sudo chmod -R 777 /var/log/filebeat
sudo chmod -R 777 /var/lib/filebeat
cd /usr/share/filebeat
Have Filebeat listen for NetFlow UDP traffic on localhost:2055:
sudo filebeat modules enable netflow
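The module's listener settings live in /etc/filebeat/modules.d/netflow.yml; the defaults typically already match localhost:2055 (adjust netflow_host and netflow_port there if a different listener address is needed):
- module: netflow
  log:
    enabled: true
    var:
      netflow_host: localhost
      netflow_port: 2055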
Redirect the output of Filebeat from Elasticsearch to Logstash. In /etc/filebeat/filebeat.yml, comment out the output.elasticsearch section and enable output.logstash instead:
#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["localhost:5044"]
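The edit can be validated immediately:
sudo filebeat test config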
Create the configuration file /etc/logstash/filebeat-to-stdout.conf:
input {
  beats {
    port => 5044
  }
}
output {
  stdout {}
}
Run Logstash:
bin/logstash --debug --path.settings /etc/logstash -f /etc/logstash/filebeat-to-stdout.conf
In another terminal, run Filebeat (the -e flag sends Filebeat's own log output to stderr, making startup errors easy to spot):
filebeat run -e
For quick testing, nflow-generator can be used to generate local NetFlow traffic.
In a third terminal, install and run nflow-generator:
wget https://github.com/nerdalert/nflow-generator/raw/master/binaries/nflow-generator-x86_64-linux
chmod 777 nflow-generator-x86_64-linux
./nflow-generator-x86_64-linux -t localhost -p 2055
Logstash and its companion Filebeat are not limited to forwarding NetFlow/IPFIX: they also support a wide variety of other inputs relevant to network security, including Zeek, Suricata, and Snort. See Filebeat modules for details on how to configure them.
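For example, assuming the corresponding modules ship with the installed Filebeat version, Zeek and Suricata are enabled the same way as the netflow module:
sudo filebeat modules enable zeek suricata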
Tested using: