This article covers how to get logs from a Java application into ELK without having to parse log files; the question and recommended answer below may serve as a useful reference.

Problem Description

I want to send logs from a Java app to ElasticSearch, and the conventional approach seems to be to set up Logstash on the server running the app, and have Logstash parse the log files (with regex...!) and load them into ElasticSearch.

Is there a reason it's done this way, rather than just setting up Log4j (or Logback) to log things in the desired format directly into a log collector that can then ship them to ElasticSearch asynchronously? It seems crazy to me to have to fiddle with grok filters to deal with multiline stack traces (and burn CPU cycles on log parsing) when the app itself could just log in the desired format in the first place.

On a tangentially related note, for apps running in a Docker container, is it best practice to log directly to ElasticSearch, given the need to run only one process?

Recommended Answer

I think it's usually ill-advised to log directly to Elasticsearch from a Log4j/Logback/whatever appender, but I agree that writing Logstash filters to parse a "normal" human-readable Java log is a bad idea too. I use https://github.com/logstash/log4j-jsonevent-layout everywhere I can to have Log4j's regular file appenders produce JSON logs that don't require any further parsing by Logstash.
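For reference, a minimal log4j.properties sketch along those lines, assuming the JSONEventLayoutV1 layout class from that project and a hypothetical appender name and log file path, could look like this:

# Hypothetical appender name and file path; adjust for your application.
# The JSON layout writes one Logstash-ready JSON event per line,
# so no grok parsing is needed downstream.
log4j.rootLogger=INFO, jsonfile
log4j.appender.jsonfile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.jsonfile.File=/var/log/myapp/app.json.log
log4j.appender.jsonfile.DatePattern=.yyyy-MM-dd
log4j.appender.jsonfile.layout=net.logstash.log4j.JSONEventLayoutV1

Logstash (or a shipper such as Filebeat) can then tail that file and pass each line straight through with a JSON codec rather than a grok filter; the exact input configuration depends on your pipeline.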
