Kibana
Docker
Apt-get
Note: on Ubuntu/Debian the Docker engine package is docker.io (the package named docker is unrelated):
apt-get install docker-compose docker.io
Elasticsearch needs a higher vm.max_map_count than the kernel default; adjust it in /etc/sysctl.conf:
sudo joe /etc/sysctl.conf
vm.max_map_count=262144
Apply the change:
sudo sysctl -p
If AppArmor profiles interfere with the containers, remove it:
apt-get remove apparmor
Docker-compose
Place the following in docker-compose.yml:

version: '2'
services:
  kibana:
    image: kibana:6.7.0
    container_name: kibana
    links:
      - 'elastic:elasticsearch'
    ports:
      - '5601:5601'
  logstash:
    image: logstash:6.7.0
    container_name: logstash
    volumes:
      - /root/docker/logstash/config/:/usr/share/logstash/config/
      - /root/docker/logstash/patterns/:/usr/share/logstash/patterns/
    ports:
      - '5000:5000/udp'
    command: 'logstash -f /usr/share/logstash/config/'
    links:
      - elastic
  elastic:
    image: elasticsearch:6.7.0
    container_name: elastic

Start the stack:
docker-compose up -d
Logstash
Create an empty logstash.yml, since the mounted config directory replaces the container's default one:
touch /root/docker/logstash/config/logstash.yml
/root/docker/logstash/config/syslog.conf
input {
  syslog {
    port => 5000
    type => "rsyslog"
  }
  udp {
    host => "0.0.0.0"
    port => "2055"
    codec => netflow {
      versions => [5, 9]
    }
    type => "netflow"
  }
}

filter {
  if [type] == "rsyslog" {
    grok {
      patterns_dir => ["/usr/share/logstash/patterns/"]
      match => { "message" => "%{MIKROTIK}" }
      match => { "message" => "%{GREEDYDATA:message}" }
    }
  }
}

output {
  elasticsearch {
    hosts => [ "elastic:9200" ]
  }
}
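Once the stack is up, the syslog input can be smoke-tested by sending a single UDP datagram to port 5000. This is a minimal sketch: the host, port, and sample log line are assumptions based on the compose file and patterns in this guide, not part of the original setup.

```python
import socket

# Hypothetical CAPsMAN-style test message; adjust host/port if the
# compose file's '5000:5000/udp' mapping was changed.
HOST, PORT = "127.0.0.1", 5000
message = "<134>Jan  1 12:00:00 mikrotik AA:BB:CC:DD:EE:FF@wlan1 connected, ok"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# UDP is connectionless, so sendto succeeds even if nothing listens yet;
# check Kibana afterwards to confirm the event arrived.
sent = sock.sendto(message.encode("utf-8"), (HOST, PORT))
sock.close()
print(f"sent {sent} bytes")
```

If the event does not show up in Kibana, check the Logstash container logs with docker logs logstash.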
Place the patterns in a file under /root/docker/logstash/patterns/ (the file name is arbitrary; grok reads every file in patterns_dir):
MIKROTIK_CAPS1 (?<mac_address>\S+)@(?<device>\S+) (?<action>connected|disconnected), %{GREEDYDATA:reason}
MIKROTIK_CAPS2 (?<mac_address>\S+)@(?<device>\S+) (?<action>connected|disconnected)

# Firewall Log (No NAT):
MIKROTIKFIREWALLNONAT %{DATA:LogChain}: in:%{DATA:src_zone} out:%{DATA:dst_zone}, src-mac %{MAC}, proto %{DATA:proto}, %{IP:src_ip}:%{INT:src_port}->%{IP:dst_ip}:%{INT:dst_port}, len %{INT:length}
## firewall,info TODO forward: in:vlan100-dmz out:vlan10-prod, src-mac 52:54:00:d4:f5:21, proto TCP (RST), 192.168.99.16:443->192.168.38.88:46494, len 40
MIKROTIKFIREWALL (?:%{MIKROTIKFIREWALLNONAT})

# DHCP
MIKROTIKDHCP %{DATA:DHCP_zone} %{WORD:DHCP_state} %{IP:src_ip} (?:from|to) %{MAC:src_mac}

# System Login
MIKROTIKLOGINOUT %{SYSLOGTIMESTAMP:date} MikroTik user %{WORD:user} logged (?:out|in) from %{IP:src_ip} via %{WORD:src_type}
MIKROTIKLOGINFAIL %{SYSLOGTIMESTAMP:date} MikroTik login failure for user %{WORD:user} from %{IP:src_ip} via %{WORD:src_type}
MIKROTIKLOGIN (?:%{MIKROTIKLOGINOUT}|%{MIKROTIKLOGINFAIL})

# Add all the above
MIKROTIK (?:%{MIKROTIKFIREWALL}|%{MIKROTIKDHCP}|%{MIKROTIKLOGIN}|%{MIKROTIK_CAPS1}|%{MIKROTIK_CAPS2})
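The firewall pattern can be dry-run outside Logstash. The sketch below is a Python approximation of MIKROTIKFIREWALLNONAT, tested against the sample log line from the comment in the patterns file; the grok DATA/IP/INT macros are replaced by simpler regexes, so it is illustrative rather than an exact equivalent.

```python
import re

# Approximation of the MIKROTIKFIREWALLNONAT grok pattern:
# DATA -> lazy .*?, IP -> dotted digits, INT -> \d+, MAC -> hex/colon run.
FIREWALL = re.compile(
    r"(?P<LogChain>.*?): in:(?P<src_zone>.*?) out:(?P<dst_zone>.*?), "
    r"src-mac (?P<src_mac>[0-9a-f:]{17}), proto (?P<proto>.*?), "
    r"(?P<src_ip>[\d.]+):(?P<src_port>\d+)->(?P<dst_ip>[\d.]+):(?P<dst_port>\d+), "
    r"len (?P<length>\d+)"
)

# Sample line taken from the pattern file's own comment.
line = ("forward: in:vlan100-dmz out:vlan10-prod, src-mac 52:54:00:d4:f5:21, "
        "proto TCP (RST), 192.168.99.16:443->192.168.38.88:46494, len 40")
m = FIREWALL.match(line)
print(m.groupdict())
```

If a real RouterOS line fails to match here, it will also land in %{GREEDYDATA:message} (or be tagged _grokparsefailure) in the pipeline above.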
Native (old)
Install ELK
This guide is based on Ubuntu Server 16.04
APT sources
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" > /etc/apt/sources.list.d/elastic-5.x.list
Install packages
apt update
apt install openjdk-8-jre kibana elasticsearch rsyslog
apt install logstash
Config RSyslog
/etc/rsyslog.d/49-remote.conf
if $fromhost-ip startswith '192.168.38.254' then {
  action(type="omfwd" template="json-template" target="localhost" port="10514")
}
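The rule above references a json-template, which rsyslog does not define by default; without it, forwarding fails. A commonly used definition along these lines (the file name 01-json-template.conf is an arbitrary choice) serializes each message as JSON for the Logstash json codec:

```
# /etc/rsyslog.d/01-json-template.conf
template(name="json-template" type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"@version\":\"1")
    constant(value="\",\"message\":\"")     property(name="msg" format="json")
    constant(value="\",\"sysloghost\":\"")  property(name="hostname")
    constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
    constant(value="\",\"programname\":\"") property(name="programname")
    constant(value="\"}\n")
}
```

The property names used here (timereported, msg, hostname, and so on) are standard rsyslog message properties; add or drop fields to taste.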
service rsyslog restart
LogStash
Create folders for patterns
mkdir -p /opt/logstash/patterns
chown -R logstash:logstash /opt/logstash
Patterns file
/opt/logstash/patterns/mikrotik
MIKROTIK_DHCP (?<action>assigned|deassigned) %{IP:ip} (?<verb>to|from) (?<mac_address>\S+)
MIKROTIK_CAPS1 (?<mac_address>\S+)@(?<device>\S+) (?<action>connected|disconnected), %{GREEDYDATA:reason}
MIKROTIK_CAPS2 (?<mac_address>\S+)@(?<device>\S+) (?<action>connected|disconnected)
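As a quick sanity check, the DHCP pattern can be approximated in Python; the sample line below is hypothetical, and %{IP:ip} is simplified to a plain IPv4 matcher.

```python
import re

# Approximation of the MIKROTIK_DHCP grok pattern above.
DHCP = re.compile(
    r"(?P<action>assigned|deassigned) (?P<ip>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<verb>to|from) (?P<mac_address>\S+)"
)

# Hypothetical RouterOS DHCP log line; search() mirrors grok's
# unanchored matching within the message.
line = "dhcp1 assigned 192.168.38.10 to AA:BB:CC:DD:EE:FF"
m = DHCP.search(line)
print(m.groupdict())
# {'action': 'assigned', 'ip': '192.168.38.10', 'verb': 'to', 'mac_address': 'AA:BB:CC:DD:EE:FF'}
```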
Config
/etc/logstash/conf.d/logstash.conf
input {
  udp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
  udp {
    host => "0.0.0.0"
    port => "2055"
    codec => netflow {
      versions => [5, 9]
    }
    type => "netflow"
  }
}

# Match rsyslog events against the MikroTik patterns; add further
# filters here to process log lines as needed.
filter {
  if [type] == "rsyslog" {
    grok {
      patterns_dir => ["/opt/logstash/patterns/"]
      match => { "message" => "%{MIKROTIK_DHCP}" }
      match => { "message" => "%{MIKROTIK_CAPS1}" }
      match => { "message" => "%{MIKROTIK_CAPS2}" }
      match => { "message" => "%{GREEDYDATA:message}" }
    }
  }
}

# Send all events to Elasticsearch; rsyslog lines that failed grok
# parsing are additionally written to a daily local file.
output {
  if [type] == "rsyslog" and "_grokparsefailure" in [tags] {
    file {
      path => "/var/log/logstash/failed_%{+YYYY-MM-dd}"
    }
  }
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
  if [type] == "netflow" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}
ElasticSearch
/etc/init.d/elasticsearch start
Kibana
To allow access from hosts other than localhost, edit /etc/kibana/kibana.yml:
server.host: "::"
service kibana start
The Kibana web interface should now be available at http://[ip]:5601
Troubleshooting
- Is the correct Java version installed?
- Is Kibana listening on loopback only?
  netstat -pant | grep 5601
- Is Elasticsearch running?
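The listening checks above can also be scripted; is_listening is a hypothetical helper (not part of this guide's setup) that tries a plain TCP connect to the default ports used here.

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# Default ports assumed from this guide: Kibana 5601, Elasticsearch 9200.
for name, port in [("Kibana", 5601), ("Elasticsearch", 9200)]:
    status = "up" if is_listening("127.0.0.1", port) else "not reachable"
    print(f"{name} on 127.0.0.1:{port}: {status}")
```

Run the same check against the machine's external IP to spot the loopback-only case: up on 127.0.0.1 but not reachable externally means server.host in kibana.yml still needs adjusting.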