I set up ELK before for log analysis, but the server was short on resources: after launching several Logstash instances, memory usage got far too high. So I've now switched to Filebeat for log collection. This is a record of the setup process and the fixes for the problems I ran into.
Step 1: install JDK 8.
tar -zxvf jdk-8u112-linux-x64.tar.gz
Set the environment variables:
vi /etc/profile
In the profile file, add:
#set java environment
JAVA_HOME=/usr/local/java/jdk1.8.0_112
JRE_HOME=$JAVA_HOME/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
After adding that, run
source /etc/profile
to make the configuration take effect. Then run
java -version
to check that it worked.
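If everything is wired up, the output should look roughly like the following (build numbers depend on your exact package):
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)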
Once that succeeds, on to Step 2: install Elasticsearch.
Download the Elasticsearch 5.1.1 package from https://www.elastic.co/downloads/past-releases/elasticsearch-5-1-1
Run
rpm -ivh elasticsearch-5.1.1.rpm
and you should see
[root@localhost elk]# rpm -ivh elasticsearch-5.1.1.rpm
warning: elasticsearch-5.1.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                ########################################### [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
   1:elasticsearch          ########################################### [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using chkconfig
 sudo chkconfig --add elasticsearch
### You can start elasticsearch service by executing
which means the installation succeeded. Now start it with service elasticsearch start.
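A quick way to confirm it is actually up (it can take a few seconds after the service starts before the port answers):
curl http://localhost:9200
# a healthy node answers with a small JSON document containing the node name, cluster_name and version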
Directory layout after installation:
#/usr/share/elasticsearch/ home directory
#/var/log/elasticsearch log files
#/etc/sysconfig/elasticsearch Elasticsearch environment variables
#/etc/elasticsearch/elasticsearch.yml Elasticsearch cluster configuration
#/etc/elasticsearch/jvm.options Elasticsearch JVM parameters
#/etc/elasticsearch/log4j2.properties Elasticsearch logging configuration
Various errors can come up at startup; for solutions see: http://blog.csdn.net/cardinalzbk/article/details/54924511
Note: Elasticsearch requires some system limits to be raised before it will start, otherwise it fails with errors.
a. Increase vm.max_map_count to at least 262144:
sudo vim /etc/sysctl.conf
Add
vm.max_map_count=262144
then apply it with
sudo sysctl -p
b. Increase the open file limit to at least 65536 (check the current values with ulimit -a):
sudo vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
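A minimal way to verify both changes (note that the nofile limit only applies to sessions opened after editing limits.conf):
sysctl vm.max_map_count    # expect: vm.max_map_count = 262144
ulimit -n                  # expect: 65536 in a fresh login session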
Next, let's configure the Elasticsearch cluster configuration file.
vi /etc/elasticsearch/elasticsearch.yml
Uncomment the following:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
Restart the service with service elasticsearch restart. Now let's see how much memory Elasticsearch is taking; after all, this whole exercise is about the resource shortage. Run top.
Hmm... only a few dozen MB of memory left. What's going on? Let's take a look at the JVM configuration:
/etc/elasticsearch/jvm.options
There it is. The defaults are
-Xms2g
-Xmx2g
Let's experiment and try 500m. Restart, and it starts up normally~
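For reference, this is the edited section of /etc/elasticsearch/jvm.options; 500m is just an opening guess for a small box, and Elastic recommends keeping Xms and Xmx equal so the heap never has to resize:
-Xms500m
-Xmx500m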
Elasticsearch is now set up.
Step 3: download logstash-5.1.1, also as an RPM, and install it.
As before, the configuration lives under /etc/logstash, while the executables are in /usr/share/logstash/bin. Let's go into that bin directory and run
./logstash -e 'input { stdin { } } output { stdout {} }'
Then type anything at the prompt, and whatever we type in, it prints right back out:
[root@localhost bin]# ./logstash -e 'input { stdin { } } output { stdout {} }'
112
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
The stdin plugin is now waiting for input:
00:00:19.669 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
00:00:19.688 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
00:00:19.802 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
2018-02-06T16:00:20.050Z localhost.localdomain 112
But notice there's a warning:
Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings.
It says we have no logstash.yml. That settings file was introduced in Logstash 5.0; see the official documentation for the full list of settings.
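As a minimal sketch of what the RPM layout expects (these two paths are the package defaults; everything else can stay commented out):
# /etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.logs: /var/log/logstash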
Started like this, memory usage is still too high, so let's see how to shrink it.
Again it's jvm.options, this time under /etc/logstash.
Let's set the heap size:
vi /etc/logstash/jvm.options
-Xms128m
-Xmx256m
Try that first, and increase it if it turns out to be too small~
Notice how awkward it is to run Logstash: you have to cd into the directory first, which isn't great. Run the following command instead:
ln -s /usr/share/logstash/bin/logstash /usr/bin/logstash
After that, logstash can be run from anywhere~
Step 4: install Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
Then install it. After installation, find the configuration file at /etc/kibana/kibana.yml and set:
server.port: 5601
server.host: 0.0.0.0
elasticsearch.url: "http://192.168.2.178:9200"
Now it can be started, but as before, let's create a symlink first:
ln -s /usr/share/kibana/bin/kibana /usr/bin/kibana
so we can start it with the kibana command~
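One caveat with starting it by hand like this: the process dies with the shell. A simple workaround, just a sketch with an arbitrary log path, is:
nohup kibana > /tmp/kibana.log 2>&1 &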
At this point our ELK stack is fully installed~
Step 5: install Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
Install it, then create the symlink:
ln -s /usr/share/filebeat/bin/filebeat /usr/bin/filebeat
Next, let's wire Filebeat up to Logstash~
First, create the patterns directory /usr/local/elk/app/logstash-5.1.1/patterns
Then create the Logstash config file:
vi /etc/logstash/conf.d/pro-log.conf
input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][logIndex] == "nginx" {
    grok {
      patterns_dir => "/usr/local/elk/app/logstash-5.1.1/patterns"
      match => { "message" => "%{NGINXACCESS}" }
    }
    urldecode {
      charset => "UTF-8"
      field => "url"
    }
    if [upstreamtime] == "" or [upstreamtime] == "null" {
      mutate {
        update => { "upstreamtime" => "0" }
      }
    }
    date {
      match => ["logtime", "dd/MMM/yyyy:HH:mm:ss Z"]
      target => "@timestamp"
    }
    mutate {
      convert => {
        "responsetime" => "float"
        "upstreamtime" => "float"
        "size" => "integer"
      }
      remove_field => ["port","logtime","message"]
    }
  }
}
output {
  elasticsearch {
    hosts => "192.168.2.178:9200"
    manage_template => false
    index => "%{[fields][logIndex]}-%{+YYYY.MM.dd}"
    document_type => "%{[fields][docType]}"
  }
}
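Before starting Logstash it's worth checking the file for syntax errors; in 5.x the config-test flag is --config.test_and_exit (it replaced the old --configtest):
logstash -f /etc/logstash/conf.d/pro-log.conf --config.test_and_exit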
We'll test this with nginx's access_log. First, a look at the nginx configuration:
log_format logstash '$http_host $server_addr $remote_addr [$time_local] "$visit_flag" "$jsession_id" "$login_name" "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" '
                    '$request_time $upstream_response_time $http_x_forwarded_for $upstream_addr';
Then create the custom pattern file:
vi /usr/local/elk/app/logstash-5.1.1/patterns/nginx
URIPARM1 [A-Za-z0-9$.+!*'|(){},~@#%&/=:;^\\_<>`?\-\[\]]*
URIPATH1 (?:/[\\A-Za-z0-9$.+!*'(){},~:;=@#% \[\]_<>^\-&?]*)+
HOSTNAME1 \b(?:[0-9A-Za-z_\-][0-9A-Za-z-_\-]{0,62})(?:\.(?:[0-9A-Za-z_\-][0-9A-Za-z-:\-_]{0,62}))*(\.?|\b)
STATUS ([0-9.]{0,3}[, ]{0,2})+
HOSTPORT1 (%{IPV4}:%{POSINT}[, ]{0,2})+
FORWORD (?:%{IPV4}[,]?[ ]?)+|%{WORD}
NGINXACCESS (%{HOSTNAME1:http_host}|-) %{IPORHOST:serveraddr} %{IPORHOST:remoteaddr} \[%{HTTPDATE:logtime}\] %{QS:visitflag} %{QS:sessionid} %{QS:loginname} %{QS:request} %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS:referrer} %{QS:agent} %{NUMBER:upstreamtime} %{NUMBER:responsetime} (%{FORWORD:x_forword_for}|-) (?:%{HOSTPORT1:upstream_addr}|-)
Start Logstash:
logstash -f /etc/logstash/conf.d/pro-log.conf &
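Once it's running, the beats input should be listening on port 5044; a quick check (ss works too if netstat isn't installed):
netstat -ntlp | grep 5044
# expect a LISTEN line owned by the Logstash java process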
OK, with Logstash up, it's time to start Filebeat.
Edit filebeat.yml:
vi /etc/filebeat/filebeat.yml
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/nginx/logs/app.access.log
  fields:
    logIndex: nginx
    docType: nginx-access
    project: app-nginx
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["{your-logstash-ip}:5044"]
Start it:
filebeat -path.config /etc/filebeat/ &
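Once some requests have hit nginx, we can confirm events are flowing all the way into Elasticsearch by listing the indices; an index named nginx-YYYY.MM.dd should show up:
curl 'http://192.168.2.178:9200/_cat/indices?v'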
With that, monitoring is running. Open http://192.168.2.178:5601/
Now we can see nginx's access_log in Kibana, but a lot of static-resource requests are mixed in with it. Let's go configure nginx to keep static assets out of the access_log.
In nginx, setting access_log off; for those locations is all it takes.
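For example, a location block like this hypothetical one (adjust the extension list to your own static assets) silences those entries:
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    access_log off;
}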
Broadly speaking, ELK + Filebeat is now sorted. The rest is all down to custom configuration for each use case, which I won't go into in detail here; I'll write a configuration-focused post when I have time~