Problem description:

We are using ELK for logging in our Spring application with a Docker setup. I have configured Logstash to read the log file from a given path (where the application writes its logs) and pass it to Elasticsearch. The initial setup works fine and all logs show up in Kibana instantly. However, as the size of the log file grows (or whenever heavy application logging happens), the application's response time increases exponentially and eventually the application, and everything else inside the Docker network, goes down.
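For context, the application side is wired to the same host directory that the logstash container mounts, so Logstash can tail the file as the application writes it. A minimal sketch of that wiring (the app service and image names here are hypothetical):

version: '2'
services:
  app:
    image: my-spring-app:latest   # hypothetical image name
    volumes:
      # Same host directory the logstash service mounts as /logs below,
      # so Logstash can tail /logs/application.log as the app writes it.
      - $HOME/Documents/logs:/logs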

Logstash conf file:

input {
  file {
    type => "java"
    path => ["/logs/application.log"]
  }
}

filter {
  multiline {
    pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
    negate => true
    what => "previous"
    periodic_flush => false
  }

  if [message] =~ "\tat" {
    grok {
      match => ["message", "^(\tat)"]
      add_tag => ["stacktrace"]
    }
  }

  grok {
    match => [
      "message",
      "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- \[(?<thread>[A-Za-z0-9-]+)\] [A-Za-z0-9.]*\.(?<class>[A-Za-z0-9#_]+)\s*:\s+(?<logmessage>.*)",
      "message",
      "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)"
    ]
  }

  # Parse the timestamp captured into the timestamp field by the grok section above
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}

output {
  # Send properly parsed log events to Elasticsearch;
  # "elasticsearch" is the name of the service in the ELK docker-compose file
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
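One detail worth flagging in the config above: in Logstash 2.x the multiline filter is not thread-safe, so it forces the pipeline down to a single filter worker and is a known bottleneck (it was later removed in favor of the codec). A sketch of the same input with the merge moved into the multiline codec instead; the sincedb_path value is an assumption for illustration:

input {
  file {
    type => "java"
    path => ["/logs/application.log"]
    # Merge stack-trace continuation lines at the input, instead of in
    # the single-threaded multiline filter.
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
      negate => true
      what => "previous"
    }
    # Assumed path; persists file read offsets across container restarts.
    sincedb_path => "/logs/.sincedb"
  }
}

With the codec in place, the multiline block in the filter section can be dropped.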

Logstash Dockerfile:

FROM logstash

ADD config/logstash.conf /tmp/config/logstash.conf

# VOLUME only declares a mount point inside the image; the host path
# is bound in the docker-compose file instead
VOLUME /logs

EXPOSE 5000

ENTRYPOINT ["logstash", "agent", "-v", "-f", "/tmp/config/logstash.conf"]
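Since an unbalanced or malformed config file can make the agent fail in unhelpful ways, it may be worth validating the pipeline before starting it; the Logstash 2.x agent supports a --configtest flag (the image name below is hypothetical):

# Check the config for syntax errors without starting the pipeline.
docker run --rm my-logstash-image \
  logstash agent -f /tmp/config/logstash.conf --configtest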

docker-compose file for ELK:

version: '2'

services:
  elasticsearch:
    image: elasticsearch:2.3.3
    command: elasticsearch -Des.network.host=0.0.0.0
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk

  logstash:
    build: image/logstash
    volumes:
      - $HOME/Documents/logs:/logs
    ports:
      - "5000:5000"
    networks:
      - elk

  kibana:
    image: kibana:4.5.1
    ports:
      - "5601:5601"
    networks:
      - elk

networks:
  elk:
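If the slowdown turns out to be resource starvation rather than file locking, capping the JVM heaps may help keep Elasticsearch and Logstash from crowding out the application; the official 2.x images read their heap sizes from environment variables, and compose v2 supports mem_limit. A sketch with assumed sizes, to be tuned for the host:

  # Possible additions to the services above (sizes are assumptions):
  elasticsearch:
    environment:
      - ES_HEAP_SIZE=1g     # heap env var read by elasticsearch:2.x images
    mem_limit: 2g
  logstash:
    environment:
      - LS_HEAP_SIZE=512m   # heap env var read by logstash:2.x images
    mem_limit: 1g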

Note: My Spring Boot application and the ELK stack are on different Docker networks. The performance issue remains the same even when they are on the same network.

Is this a performance issue caused by the continuous writing and polling of the log file, resulting in read/write lock contention?
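One way to narrow this down: if the containers are starving for CPU or memory (rather than blocking on the file), Docker's built-in stats will show it while the application is logging heavily:

# Live per-container CPU/memory/IO usage while the log file grows.
docker stats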
