ELK Learning Notes 1
Deployment: all of ELK runs on a single Linux server.
Access: everything goes through an Apache reverse proxy on port 80 (the intranet blocks most other ports, so there is no way around it).
What is monitored: the Apache log files on the same server / all issues of one JIRA project / ...
Not covered here: downloading and installing ELK (search results abound; the official documentation is the most reliable guide).
Topics covered:
1. Starting/stopping ELK
2. Reverse-proxy setup for ELK (Apache)
3. Logstash configuration (file/http) - TODO
   a. Monitoring the Apache logs with the default settings - TODO
   b. A JIRA project - TODO
   c. Market data in JSON format - TODO (the original plan was to scrape WooYun's security data, but then their announcement came out...)
Starting/stopping ELK
[elk@host ~]$ cat run.sh
#!/bin/sh
### stop all
ps -ef | grep logstash | grep -v grep | awk '{print $2}' | xargs kill -9
ps -ef | grep elasticsearch | grep -v grep | awk '{print $2}' | xargs kill -9
ps -ef | grep '.*node/bin/node.*src/cli' | grep -v grep | awk '{print $2}' | xargs kill -9
### start all
/usr/local/logstash-2.3.4/bin/logstash -f /usr/local/logstash-2.3.4/logstash.conf web &
su - elk -c /usr/local/elasticsearch-2.3.4/bin/elasticsearch &
su - elk -c /usr/local/kibana-4.5.3-linux-x64/bin/kibana &
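The stop half of run.sh repeats the same ps/grep/awk pipeline three times, so it can be factored into a small helper. A sketch of that refactor (the names `pids_of` and `stop_all` are made up here, not part of the original script):

```shell
#!/bin/sh
# pids_of PATTERN - print the PIDs of processes whose command line matches
# PATTERN, excluding the grep process itself (same idea as the pipeline in
# run.sh above).
pids_of() {
  ps -ef | grep "$1" | grep -v grep | awk '{print $2}'
}

# stop_all - kill the three ELK processes with one loop instead of three
# copy-pasted pipelines. xargs -r (GNU) skips kill when no PID matched.
stop_all() {
  for p in logstash elasticsearch '.*node/bin/node.*src/cli'; do
    pids_of "$p" | xargs -r kill -9
  done
}
```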
Pit 1
Well, it turns out Elasticsearch cannot be started as the root user.
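A tiny guard in a start script makes this restriction explicit instead of letting Elasticsearch die with its own error. A sketch; `can_start_es` is a made-up helper, not part of the original run.sh:

```shell
#!/bin/sh
# can_start_es UID - succeed only when UID is not 0, since Elasticsearch 2.x
# refuses to run as root.
can_start_es() {
  [ "$1" -ne 0 ]
}

# Usage in a start script:
#   can_start_es "$(id -u)" || { echo "refusing to start elasticsearch as root" >&2; exit 1; }
```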
Pit 2, a suspected Logstash bug: as a non-root user, reading the Apache access_log granted via +rx permissions fails with an error.
Apache runs as root. Following search results, I at first granted the elk user only read permission on the logs directory and the access_log file: no error message, Logstash simply would not start. I then added execute permission on the directory, started Logstash as the elk user, and the tragedy arrived:
[elk@host logstash-2.3.4]$ Settings: Default pipeline workers: 2
NotImplementedError: stat.st_dev unsupported or native support failed to load
dev_major at org/jruby/RubyFileStat.java:205
nix_inode at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:28
inode at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:32
inode at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:106
watch at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:96
_discover_file at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:313
each at org/jruby/RubyArray.java:1613
each at org/jruby/RubyEnumerator.java:274
_discover_file at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:304
watch at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:95
call at org/jruby/RubyProc.java:281
synchronized at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:357
synchronize at org/jruby/ext/thread/Mutex.java:149
synchronized at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:357
watch at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/watch.rb:92
tail at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/filewatch-0.8.1/lib/filewatch/tail_base.rb:73
tail at /usr/local/logstash-2.3.4/vendor/jruby/lib/ruby/1.9/forwardable.rb:201
begin_tailing at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-input-file-2.2.5/lib/logstash/inputs/file.rb:288
each at org/jruby/RubyArray.java:1613
begin_tailing at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-input-file-2.2.5/lib/logstash/inputs/file.rb:288
run at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-input-file-2.2.5/lib/logstash/inputs/file.rb:292
inputworker at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:342
start_input at /usr/local/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:336
I seem to have found a similar bug report in Elastic's issue tracker. Starting Logstash as root instead made everything work.
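The permission pit above boils down to a Unix rule: to read a file, a non-owner needs execute (traverse) permission on every parent directory plus read permission on the file itself; read on the directory alone only lets you list names. A throwaway demonstration of the two permission states tried above (on a temp directory, not Apache's real logs directory):

```shell
#!/bin/sh
# Simulate Apache's root-owned logs directory with a temp directory.
logdir=$(mktemp -d)
touch "$logdir/access_log"

# What was tried first: read on the directory for others, but no execute.
# Others can list the directory but cannot traverse it to open the file.
chmod 704 "$logdir"

# What finally worked: execute on the directory so it can be traversed,
# plus read on the log file itself.
chmod 705 "$logdir"
chmod 604 "$logdir/access_log"
```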
ELK Learning Notes 2 - Reverse-proxy setup for ELK (Apache)
1. In Apache's conf.d directory, add a new conf file, e.g. kibana.conf.
2. Its main content:
/etc/httpd/conf.d# cat kibana.conf
ProxyRequests On
ProxyPass /kibana http://127.0.0.1:5601/app/kibana
ProxyPassReverse /kibana http://127.0.0.1:5601/app/kibana
ProxyPass /bundles http://127.0.0.1:5601/bundles
ProxyPassReverse /bundles http://127.0.0.1:5601/bundles
ProxyPass /elasticsearch http://127.0.0.1:9200/
ProxyPassReverse /elasticsearch http://127.0.0.1:9200/
Alias /bundles/ /usr/local/kibana-4.5.3-linux-x64/optimize/bundles/
Require all granted
3. Pit:
Following search results, I initially configured the reverse proxy only for kibana and elasticsearch. After starting everything, access kept failing; digging through the Apache access log turned up entries like
"GET /bundles/kibana.style.css?v=9910 HTTP/1.1" 200
that pointed at the problem.
A search on Stack Overflow led to this post:
http://stackoverflow.com/questions/32862053/apache-reverse-proxy-for-kibana
and the problem was solved: Kibana's page assets under /bundles have to be handled as well, which is what the extra /bundles rules and the Alias in kibana.conf above do.