elk + filebeat 6.3.2: a simple setup for our own centralized log system

Source: https://www.cnblogs.com/youzhibing/archive/2018/09/04/9553758.html

Preface

  When I first started as a developer I wasn't in the habit of writing logs; I thought it was wasted effort that just bloated the code, and I always assumed my code was flawless. My boss's reminders went in one ear and out the other. The result was that I ended up working overtime to track down problems, and it took quite a while to reproduce and investigate them. Fortunately that system was only used by company staff, so taking longer wasn't a big deal; for a customer-facing system, though, time means a great deal, and the impact would have been severe. Since then I've made a habit of writing logs.

  But I later found that for troubleshooting, plain log files are still time-consuming, because filtering the information we need out of one huge log file takes a long time; that is when a log system built on top of the log files becomes necessary.

  Whether you need a log system at all, and what kind, depends on your actual business situation. For an unimportant internal system, log files may be enough; for a customer-facing system tied directly to company revenue, I think you should not only build a log system but also write more detailed log information into the log files, to improve operations efficiency.

  What each component of elk + filebeat does

    Elasticsearch: a distributed search and analytics engine with high scalability, high reliability, and easy management. Built on Apache Lucene, it can store, search, and analyze large volumes of data in near real time. It is often used as the underlying search engine for applications that need complex search features;
    Logstash: a data collection engine. It dynamically gathers data from various sources, filters, parses, enriches, and normalizes the data, and then stores it wherever the user specifies;
    Kibana: a data analysis and visualization platform. Usually paired with Elasticsearch to search and analyze its data and present the results as statistical charts;
    Filebeat: a lightweight open-source log file shipper, built from the Logstash-Forwarder source code as its replacement. Install Filebeat on the server whose logs you need to collect and point it at the log directories or files; Filebeat then reads the data and quickly ships it to Logstash for parsing, or directly to Elasticsearch for centralized storage and analysis;

  This article won't introduce or document each component in detail; if you want a deeper understanding, you'll need to study them yourself, and the official docs are quite good.

Environment preparation

  192.168.1.110:logstash + java

  192.168.1.111:filebeat + redis + mysql + jdk + tomcat8

  192.168.1.112:kibana

  192.168.1.113:elasticsearch + java

Building the log system

  Download the packages yourself from the official sites; I used 6.3.2 for all of elk + filebeat, JDK 1.8, MySQL 5.7, and Tomcat 8.5.30.

  Elasticsearch

    Depends on the JDK; for JDK setup see my other post: virtualBox安裝centos,並搭建tomcat

    [root@cent0s7-03 opt]# tar -zxvf elasticsearch-6.3.2.tar.gz

    [root@cent0s7-03 opt]# cd elasticsearch-6.3.2

    Modify the config to allow remote access:

      In the elasticsearch home directory, edit config/elasticsearch.yml: uncomment the network.host setting and set it to 0.0.0.0;
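      After the change, the relevant part of config/elasticsearch.yml looks like this (a minimal sketch; only network.host is needed for remote access, and the commented port line just shows the default):

network.host: 0.0.0.0
#http.port: 9200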

      This also requires raising some system limits:

        [root@cent0s7-03 bin]# vi /etc/security/limits.conf

        Add the following:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

        [root@cent0s7-03 bin]# vi /etc/sysctl.conf

        Add the following:

vm.max_map_count=262144

        [root@cent0s7-03 bin]# sysctl -p
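        After logging out and back in (the limits.conf changes only apply to new sessions), we can check that the limits are in place; this matters because with network.host bound to a non-loopback address, elasticsearch 6.x enforces its bootstrap checks at startup and refuses to start if a limit is still too low (a quick sanity check, not an official requirement list):

ulimit -Sn                 # soft nofile, expect 65536
ulimit -Su                 # soft nproc, expect 2048
sysctl vm.max_map_count    # expect vm.max_map_count = 262144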

    Start elasticsearch

      [root@cent0s7-03 bin]# ./elasticsearch

      It fails with the following error:

[2018-08-19T10:26:33,685][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:140) ~[elasticsearch-6.3.2.jar:6.3.2]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:127) ~[elasticsearch-6.3.2.jar:6.3.2]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.3.2.jar:6.3.2]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.3.2.jar:6.3.2]
    at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.3.2.jar:6.3.2]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.3.2.jar:6.3.2]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:86) ~[elasticsearch-6.3.2.jar:6.3.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
    at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:104) ~[elasticsearch-6.3.2.jar:6.3.2]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:171) ~[elasticsearch-6.3.2.jar:6.3.2]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:326) ~[elasticsearch-6.3.2.jar:6.3.2]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-6.3.2.jar:6.3.2]
    ... 6 more

      This restriction exists for security reasons. Because Elasticsearch can accept and execute user-supplied scripts, it is recommended to create a dedicated user to run Elasticsearch.

    [root@cent0s7-03 bin]# groupadd elk

    [root@cent0s7-03 bin]# useradd elsearch -g elk

    [root@cent0s7-03 bin]# cd /opt

    [root@cent0s7-03 opt]# chown -R elsearch:elk elasticsearch-6.3.2

    [root@cent0s7-03 opt]# su elsearch

    [elsearch@cent0s7-03 opt]$ cd elasticsearch-6.3.2/bin

    [elsearch@cent0s7-03 bin]$ ./elasticsearch (add -d to run in the background)

    Visit http://192.168.1.113:9200 and you should see:

{
  "name" : "8dBt-dz",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "gGH8gMvjTm62yyjob3aeZA",
  "version" : {
    "number" : "6.3.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "053779d",
    "build_date" : "2018-07-20T05:20:23.451332Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

    This means our single-node elasticsearch is up.
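    Besides the browser, the standard cluster health API is a quick way to confirm the node is serving requests (a single node with default settings usually reports green or yellow):

[root@cent0s7-03 ~]# curl http://192.168.1.113:9200/_cluster/health?pretty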

  Kibana  

    [root@centos7-02 opt]# tar -zxvf kibana-6.3.2-linux-x86_64.tar.gz

    [root@centos7-02 opt]# mv kibana-6.3.2-linux-x86_64 kibana6.3.2

    Edit the config file: kibana.yml

      [root@centos7-02 opt]# vi kibana6.3.2/config/kibana.yml

      Mainly change two settings:

server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.113:9200"

      These allow remote access and tell kibana to fetch data from elasticsearch.

    [root@centos7-02 opt]# ./kibana6.3.2/bin/kibana

    The startup log looks like this:

  log   [11:09:00.993] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.032] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.034] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.042] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.045] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.106] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.107] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.123] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.125] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.243] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.245] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.248] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.250] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.251] [warning][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
  log   [11:09:01.255] [warning][security] Session cookies will be transmitted over insecure connections. This is not recommended.
  log   [11:09:01.280] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.300] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.304] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.326] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.330] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.332] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.334] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to green - Ready
  log   [11:09:01.644] [warning][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml
  log   [11:09:01.651] [info][status][plugin:<plugin>@6.3.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:09:01.697] [info][listening] Server running at http://0.0.0.0:5601
  log   [11:09:01.819] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.852] [info][license][xpack] Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active
  log   [11:09:01.893] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.893] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.894] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.894] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.895] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.895] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.895] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.896] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.897] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.897] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.898] [info][status][plugin:<plugin>@6.3.2] Status changed from yellow to green - Ready
  log   [11:09:01.916] [info][kibana-monitoring][monitoring-ui] Starting all Kibana monitoring collectors
  log   [11:09:01.926] [info][license][xpack] Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active

      The few warnings don't affect functionality; as long as no error appears, the service runs normally.

    Visit http://192.168.1.112:5601 and the Kibana UI appears (screenshot omitted).

  Logstash

    Depends on the JDK; for JDK setup see my other post: virtualBox安裝centos,並搭建tomcat

    [root@centos7-01 opt]# tar -zxvf logstash-6.3.2.tar.gz

    Create a new config file: first-pipeline.conf

    [root@centos7-01 opt]# vi logstash-6.3.2/config/first-pipeline.conf 

input {
    stdin {}
    beats {
        port => 5044
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.113:9200"]
    }
    stdout {
        codec => rubydebug
    }
}

      Logstash listens on port 5044, and filebeat writes data to this port; after logstash processes the data (the filter stage, not shown in this example), it outputs to elasticsearch.
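      As an illustration of what a filter stage could look like, here is a sketch using the standard grok plugin; the pattern assumes log lines that begin with an ISO8601 timestamp and a log level, so adjust it to your actual log format:

filter {
    grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
    }
}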

    [root@centos7-01 opt]# ./logstash-6.3.2/bin/logstash -f /opt/logstash-6.3.2/config/first-pipeline.conf

      The startup log looks like this:

Sending Logstash's logs to /opt/logstash-6.3.2/logs which is now configured via log4j2.properties
[2018-09-03T20:59:05,050][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-09-03T20:59:06,072][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-09-03T20:59:11,487][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-03T20:59:12,222][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.1.113:9200/]}}
[2018-09-03T20:59:12,230][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.1.113:9200/, :path=>"/"}
[2018-09-03T20:59:12,574][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.1.113:9200/"}
[2018-09-03T20:59:12,669][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-09-03T20:59:12,672][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-09-03T20:59:12,775][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//192.168.1.113:9200"]}
[2018-09-03T20:59:12,810][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-09-03T20:59:12,862][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-09-03T20:59:13,758][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
The stdin plugin is now waiting for input:
[2018-09-03T20:59:13,852][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2ff95604 run>"}
[2018-09-03T20:59:13,958][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-03T20:59:14,066][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-09-03T20:59:14,562][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
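      Since the pipeline also declares a stdin input, we can sanity-test it without filebeat: type any line into the console and the rubydebug codec should print the event, roughly like this (field values will differ on your machine):

hello elk
{
      "@version" => "1",
    "@timestamp" => 2018-09-03T13:00:00.000Z,
       "message" => "hello elk",
          "host" => "centos7-01"
}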

  Filebeat  

    [root@centos7 opt]# tar -zxvf filebeat-6.3.2-linux-x86_64.tar.gz

    [root@centos7 opt]# mv filebeat-6.3.2-linux-x86_64 filebeat6.3.2

    Configure filebeat.yml

      [root@centos7 opt]# vi filebeat6.3.2/filebeat.yml 

      After editing, the content is as follows:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

# setup.template.settings:
  # index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
# setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.1.110:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: info

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

      The main changes: configure filebeat.inputs to say which logs to collect, disable output.elasticsearch, and enable output.logstash so the collected data is pushed to logstash.
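      Before starting, filebeat's built-in test subcommands can validate the config file and the connection to logstash:

[root@centos7 opt]# ./filebeat6.3.2/filebeat test config -c ./filebeat6.3.2/filebeat.yml
[root@centos7 opt]# ./filebeat6.3.2/filebeat test output -c ./filebeat6.3.2/filebeat.yml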

    [root@centos7 opt]# ./filebeat6.3.2/filebeat -e -c ./filebeat6.3.2/filebeat.yml

      The startup log looks like this:

2018-09-03T21:10:38.748+0800    INFO    instance/beat.go:492    Home path: [/opt/filebeat6.3.2] Config path: [/opt/filebeat6.3.2] Data path: [/opt/filebeat6.3.2/data] Logs path: [/opt/filebeat6.3.2/logs]
2018-09-03T21:10:38.780+0800    INFO    instance/beat.go:499    Beat UUID: 07d523d5-68ef-4470-a99d-5476bbc8535d
2018-09-03T21:10:38.780+0800    INFO    [beat]    instance/beat.go:716    Beat info    {"system_info": {"beat": {"path": {"config": "/opt/filebeat6.3.2", "data": "/opt/filebeat6.3.2/data", "home": "/opt/filebeat6.3.2", "logs": "/opt/filebeat6.3.2/logs"}, "type": "filebeat", "uuid": "07d523d5-68ef-4470-a99d-5476bbc8535d"}}}
2018-09-03T21:10:38.781+0800    INFO    [beat]    instance/beat.go:725    Build info    {"system_info": {"build": {"commit": "45a9a9e1561b6c540e94211ebe03d18abcacae55", "libbeat": "6.3.2", "time": "2018-07-20T04:18:19.000Z", "version": "6.3.2"}}}
2018-09-03T21:10:38.781+0800    INFO    [beat]    instance/beat.go:728    Go runtime info    {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":1,"version":"go1.9.4"}}}
2018-09-03T21:10:38.800+0800    INFO    [beat]    instance/beat.go:732    Host info    {"system_info": {"host": {"architecture":"x86_64","boot_time":"2018-09-03T15:40:54+08:00","containerized":true,"hostname":"centos7","ips":["127.0.0.1/8","::1/128","192.168.1.111/24","fe80::3928:4541:b030:bea4/64"],"kernel_version":"3.10.0-862.el7.x86_64","mac_addresses":["08:00:27:e9:d7:da"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":5,"patch":1804,"codename":"Core"},"timezone":"CST","timezone_offset_sec":28800,"id":"acc3d28b9c824b55b6cdd5c8c2a46705"}}}
2018-09-03T21:10:38.803+0800    INFO    [beat]    instance/beat.go:761    Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/opt", "exe": "/opt/filebeat6.3.2/filebeat", "name": "filebeat", "pid": 1579, "ppid": 1454, "seccomp": {"mode":"disabled"}, "start_time": "2018-09-03T21:10:37.710+0800"}}}
2018-09-03T21:10:38.803+0800    INFO    instance/beat.go:225    Setup Beat: filebeat; Version: 6.3.2
2018-09-03T21:10:38.804+0800    INFO    pipeline/module.go:81    Beat name: centos7
2018-09-03T21:10:38.816+0800    INFO    instance/beat.go:315    filebeat start running.
2018-09-03T21:10:38.816+0800    INFO    [monitoring]    log/log.go:97    Starting metrics logging every 30s
2018-09-03T21:10:38.817+0800    INFO    registrar/registrar.go:117    Loading registrar data from /opt/filebeat6.3.2/data/registry
2018-09-03T21:10:38.821+0800    INFO    registrar/registrar.go:124    States Loaded from registrar: 1
2018-09-03T21:10:38.821+0800    WARN    beater/filebeat.go:354    Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-09-03T21:10:38.821+0800    INFO    crawler/crawler.go:48    Loading Inputs: 1
2018-09-03T21:10:38.822+0800    INFO    log/input.go:118    Configured paths: [/log/*.log]
2018-09-03T21:10:38.822+0800    INFO    input/input.go:88    Starting input of type: log; ID: 8294414020995878211 
2018-09-03T21:10:38.866+0800    INFO    crawler/crawler.go:82    Loading and starting Inputs completed. Enabled inputs: 1
2018-09-03T21:10:38.867+0800    INFO    cfgfile/reload.go:122    Config reloader started
2018-09-03T21:10:38.867+0800    INFO    cfgfile/reload.go:214    Loading of config files completed.
2018-09-03T21:10:38.883+0800    INFO    log/harvester.go:228    Harvester started for file: /log/spring-boot-integrate.log.2018-08-21.log
2018-09-03T21:11:08.819+0800    INFO    [monitoring]    log/log.go:124    Non-zero metrics in the last 30s    {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":50,"time":{"ms":54}},"total":{"ticks":70,"time":{"ms":83},"value":70},"user":{"ticks":20,"time":{"ms":29}}},"info":{"ephemeral_id":"faaf6d3e-8fff-4670-9dca-c51b48b134c8","uptime":{"ms":30102}},"memstats":{"gc_next":5931008,"memory_alloc":3006968,"memory_total":4960192,"rss":15585280}},"filebeat":{"events":{"added":93,"done":93},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"acked":91,"batches":1,"total":91},"read":{"bytes":6},"type":"logstash","write":{"bytes":5990}},"pipeline":{"clients":1,"events":{"active":0,"filtered":2,"published":91,"retry":91,"total":93},"queue":{"acked":91}}},"registrar":{"states":{"current":1,"update":93},"writes":{"success":3,"total":3}},"system":{"cpu":{"cores":1},"load":{"1":0.05,"15":0.05,"5":0.03,"norm":{"1":0.05,"15":0.05,"5":0.03}}}}}}

    We collect every log file under /log; the project spring-boot-integrate produces the log files (standing in for the log files our own application would produce). spring-boot-integrate depends on redis and mysql at 127.0.0.1, so both need to be started (note that my redis has a password configured, and the mysql database is spring-boot; the SQL file is in the project).

    [root@centos7 redis-3.2.12]# cd /usr/local/redis-3.2.12/

    [root@centos7 redis-3.2.12]# ./src/redis-server redis.conf 

    [root@centos7 local]# service mysqld start
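    A quick check that both are up before deploying the app (fill in your own redis password; the spring-boot database name comes from the project's SQL file); redis should answer PONG, and spring-boot should appear in the database list:

    [root@centos7 redis-3.2.12]# ./src/redis-cli -a <your-redis-password> ping

    [root@centos7 local]# mysql -uroot -p -e "show databases;"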

    Start our spring-boot-integrate

      Build the war with maven, copy spring-boot-integrate.war into tomcat's webapps directory, and start tomcat; note that tomcat must be version 8 or above;

      [root@centos7 opt]# cd /usr/local/apache-tomcat-8.5.33/

      [root@centos7 apache-tomcat-8.5.33]# ./bin/startup.sh

    Visit http://192.168.1.111:8080/spring-boot-integrate (screenshot omitted).

    The application is up. Following spring-boot-2.0.3不一樣系列之shiro - 搭建篇, exercise the application to generate some log data.

Results

  The data finally reaches kibana for visualization; here is how the logs we just generated show up in kibana (screenshot omitted).
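  Before creating the index pattern in kibana, we can confirm that logstash has indeed written indices into elasticsearch; the elasticsearch output plugin names them logstash-YYYY.MM.dd by default:

[root@cent0s7-03 ~]# curl http://192.168.1.113:9200/_cat/indices?v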

Summary

  1. Architecture diagram

    In general, the architecture looks like this (diagram omitted):

    nginx exposes the external access endpoint and provides load balancing. This article doesn't integrate nginx; you can add it yourself, and it isn't hard.

    A message middleware layer is also not integrated.

    This architecture suits fairly large log volumes. Because the Logstash parsing nodes and Elasticsearch bear a heavy load, they can be deployed as clusters to spread it. Introducing a message queue smooths out network transmission, reducing network congestion and, above all, the risk of data loss; but Logstash still consumes a lot of system resources.
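    As a sketch of how a queue could slot in between filebeat and logstash: filebeat ships a redis output and logstash has a matching redis input, so reusing the redis already on 192.168.1.111 is the smallest possible change (the key and data_type values below are assumptions, not part of this article's setup):

# filebeat.yml: replace output.logstash with
output.redis:
  hosts: ["192.168.1.111:6379"]
  key: "filebeat"
  #password: "<your-redis-password>"

# first-pipeline.conf: replace the beats input with
input {
    redis {
        host => "192.168.1.111"
        key => "filebeat"
        data_type => "list"
    }
}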

  2. Docker-based setup

    ELK versions iterate very quickly; packaging it as docker images and deploying on docker would make both uniform setup and upgrades much easier. Interested readers can give it a try.
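    For instance, a single-node elasticsearch comes up in one line with the official image (a sketch; discovery.type=single-node is the documented way to run a one-node cluster and skip the production bootstrap checks):

docker run -d --name elasticsearch -p 9200:9200 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.3.2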

  3. Component combinations

    This article only integrates single-node instances of ELK + Filebeat, essentially a minimal baseline; with this baseline in place, turning individual components into clusters is not hard either.

    Also, the components can be combined flexibly, and some are optional; we can build a log system that fits our actual business volume.

  4. Component details

    This article only covers setting up elk + filebeat and doesn't go into each component in depth, so you'll need to explore them yourself; there is a lot to each component, and knowing them better helps a great deal in building a high-performance log system.


