ELK - Log Collection in Microservices


1. Basics

1.1 What is ELK

ELK is shorthand for three tools from Elastic: Elasticsearch (distributed storage and search), Logstash (log collection and processing), and Kibana (visualization).

1.2 Version Notes

These three tools are all Elastic products, and when using them together you need to make sure the versions correspond, or you will run into incompatibility errors. This guide sticks to the 5.x line throughout: Elasticsearch 5.6.13, Kibana 5.6.8, and Logstash 5.4.1.

2. Setting Up the Base Environment

1. Install Elasticsearch
docker pull docker.io/elasticsearch:5.6.13
docker run -d -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms64M -Xmx256M" --name elasticsearch_5.6.13 docker.io/elasticsearch:5.6.13
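Once the container is up, a quick sanity check confirms Elasticsearch is reachable:
curl http://localhost:9200
This should return a JSON document including the cluster name and a "version" object with "number": "5.6.13".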

2. Install Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.6.8-linux-x86_64.tar.gz
After extracting, edit the kibana.yml file in the config directory and add the following settings:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"  # the URL of our Elasticsearch instance
kibana.index: ".kibana"
Then start Kibana by running the kibana executable in the bin directory.
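For example (a minimal sketch; nohup keeps Kibana running after the shell exits):
cd kibana-5.6.8-linux-x86_64
nohup ./bin/kibana > kibana.log 2>&1 &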

At this point you can open localhost:5601 in a browser to reach Kibana; the UI is shown below (if the versions do not match, Kibana will fail to connect to ES).

(Kibana UI screenshot)

3. Install Logstash
Download Logstash:
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.4.1.tar.gz
After extracting, go into the config directory and create a logstash.conf configuration file:
vim logstash.conf
Add the configuration below. It tells Logstash to listen on port 5044 and ship the collected log events to the specified Elasticsearch instance.
input {
    tcp {
        port => 5044
        codec => json_lines
    }
}
output {
    elasticsearch {
        hosts => ["114.115.169.138:9200"]
    }
}
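Before starting, you can optionally check the configuration for syntax errors (the --config.test_and_exit flag is available in Logstash 5.x):
bin/logstash -f config/logstash.conf --config.test_and_exit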
Start Logstash (optionally in the background with nohup):
nohup sh ./logstash -f ../config/logstash.conf  &
If output like the following is printed, Logstash started successfully:

Sending Logstash's logs to /root/logstash-5.4.1/logs which is now configured via log4j2.properties
[2020-06-08T09:16:04,554][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/root/logstash-5.4.1/data/queue"}
[2020-06-08T09:16:04,620][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"a72bbb7c-b9e7-4571-99a5-73c5f5aac48e", :path=>"/root/logstash-5.4.1/data/uuid"}
[2020-06-08T09:16:05,630][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://<your-ip>:9200/]}}
[2020-06-08T09:16:05,632][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://<your-ip>:9200/, :path=>"/"}
[2020-06-08T09:16:05,774][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x44d3dcb0 URL:http://<your-ip>:9200/>}
[2020-06-08T09:16:05,796][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2020-06-08T09:16:05,843][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-06-08T09:16:05,861][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2020-06-08T09:16:06,350][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x48a1a86a URL://<your-ip>:9200>]}
[2020-06-08T09:16:06,368][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2020-06-08T09:16:06,401][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:5044"}
[2020-06-08T09:16:06,460][INFO ][logstash.pipeline        ] Pipeline main started
[2020-06-08T09:16:06,593][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
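With the pipeline up, you can push a test event through the TCP input; the json_lines codec expects newline-delimited JSON, which echo provides (host and port here assume the setup above):
echo '{"message":"hello elk"}' | nc 127.0.0.1 5044
curl 'http://localhost:9200/logstash-*/_search?q=message:hello&pretty'
The event should show up in Elasticsearch within a few seconds.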

4. Integrate Logstash log collection into the microservice project
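The LogstashTcpSocketAppender used below is provided by the logstash-logback-encoder library, so the project needs it on the classpath. A minimal Maven dependency sketch (the version shown is an assumption; pick one compatible with your Logback):
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version> <!-- assumed version from the ELK 5.x era -->
</dependency>
Then configure the appender in logback.xml (or logback-spring.xml):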
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<springProperty scope="context" name="springAppName" source="spring.application.name"/>
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>114.115.169.138:5044</destination>
    <queueSize>1048576</queueSize>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <timeZone>UTC</timeZone>
            </timestamp>
            <pattern>
                <pattern>
                    {
                    "severity": "%level",
                    "service": "${springAppName:-}",
                    "pid": "${PID:-}",
                    "thread": "%thread",
                    "class": "%logger{40}",
                    "rest": "%message->%ex{full}"
                    }
                </pattern>
            </pattern>
        </providers>
    </encoder>
</appender>
<logger name="cn.zuo" level="INFO" additivity="false">
    <appender-ref ref="fileInfoLog" />
    <appender-ref ref="logstash" />
</logger>
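The logger above also references a fileInfoLog appender that is not shown in this post; a minimal sketch of what such a file appender could look like (the file path and pattern are assumptions):
<appender name="fileInfoLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/info.log</file> <!-- assumed log path -->
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logs/info.%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{40} - %msg%n</pattern>
    </encoder>
</appender>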
A simple controller to generate log output at each level (the package matches the cn.zuo logger above; the class wrapper and Lombok @Slf4j are assumptions added to make the snippet self-contained):

package cn.zuo;

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.*;

@Slf4j
@RestController
public class LogController {

    @GetMapping("/log")
    public String haha() {
        log.debug("debug-level log");
        log.info("info-level log");
        log.warn("warn-level log");
        log.error("error-level log");
        return "log output finished";
    }
}
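After starting the service, hitting the endpoint generates the log lines (port 8080 is an assumption; use your configured server.port):
curl http://localhost:8080/log
Note that with the logger level set to INFO above, the debug line is filtered out before it ever reaches Logstash.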

Because we did not define a custom template in the Logstash configuration, Logstash installed its own default template, as the "Attempting to install template" and "Installing elasticsearch template to _template/logstash" lines in the startup log above show.

From that template you can see that the default index for storing the data is named logstash-*; the actual index is created as logstash-<date>. The index contains the default mapping type _default_ as well as a type named logs, and we configure the following in logback.xml:

                <pattern>
                    <pattern>
                        {
                        "severity":"%level",
                        "service": "${springAppName:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message->%ex{full}"
                        }
                    </pattern>
                </pattern>

Here service is the name of our microservice, i.e. the spring.application.name configured in Spring Boot. The collected logs are stored under the logs type, and the documents in logs carry exactly the fields configured above, so once the microservice is running, Kibana displays log entries with those fields.
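To check the raw documents outside Kibana, you can also query Elasticsearch directly (a quick sanity check; replace the service name with your spring.application.name):
curl 'http://localhost:9200/logstash-*/_search?q=service:your-app-name&pretty'
Each hit should contain the severity, service, pid, thread, class, and rest fields from the Logback pattern, plus the @timestamp field.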


PS: see the official Logback documentation for the pattern layout reference.