This article describes how to configure fluentd to correctly parse Java stack traces (as formatted by the docker json-file logging driver) and ship them to Elastic as a single message. It should be a useful reference for anyone facing the same problem; follow along below.

Problem description

Our service runs as a Docker instance. A given limitation is that the Docker logging driver cannot be changed to anything other than the default json-file driver. The (Scala micro)service outputs logs that look like this:

{"log":"10:30:12.375 [application-akka.actor.default-dispatcher-13] [WARN] [rulekeepr-615239361-v5mtn-7]- c.v.r.s.logic.RulekeeprLogicProvider(91) - decision making have failed unexpectedly\n","stream":"stdout","time":"2017-05-08T10:30:12.376485994Z"}
{"log":"java.lang.RuntimeException: Error extracting fields to make a lookup for a rule at P2: [failed calculating amount/amountEUR/directive: [failed getting accountInfo of companyId:3303 from deadcart: unexpected status returned: 500]]\n","stream":"stdout","time":"2017-05-08T10:30:12.376528449Z"}
{"log":"\u0009at org.assbox.rulekeepr.services.BasicRuleService$$anonfun$lookupRule$2.apply(BasicRuleService.scala:53)\n","stream":"stdout","time":"2017-05-08T10:30:12.376537277Z"}
{"log":"\u0009at org.assbox.rulekeepr.services.BasicRuleService$$anonfun$lookupRule$2.apply(BasicRuleService.scala:53)\n","stream":"stdout","time":"2017-05-08T10:30:12.376542826Z"}
{"log":"\u0009at scala.concurrent.Future$$anonfun$transform$1$$anonfun$apply$2.apply(Future.scala:224)\n","stream":"stdout","time":"2017-05-08T10:30:12.376548224Z"}
{"log":"Caused by: java.lang.RuntimeException: failed calculating amount/amountEUR/directive: [failed getting accountInfo of companyId:3303 from deadcart: unexpected status returned: 500]\n","stream":"stdout","time":"2017-05-08T10:30:12.376674554Z"}
{"log":"\u0009at org.assbox.rulekeepr.services.logic.TlrComputedFields$$anonfun$calculatedFields$1.applyOrElse(AbstractComputedFields.scala:39)\n","stream":"stdout","time":"2017-05-08T10:30:12.376680922Z"}
{"log":"\u0009at org.assbox.rulekeepr.services.logic.TlrComputedFields$$anonfun$calculatedFields$1.applyOrElse(AbstractComputedFields.scala:36)\n","stream":"stdout","time":"2017-05-08T10:30:12.376686377Z"}
{"log":"\u0009at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)\n","stream":"stdout","time":"2017-05-08T10:30:12.376691228Z"}
{"log":"\u0009... 19 common frames omitted\n","stream":"stdout","time":"2017-05-08T10:30:12.376720255Z"}
{"log":"Caused by: java.lang.RuntimeException: failed getting accountInfo of companyId:3303 from deadcart: unexpected status returned: 500\n","stream":"stdout","time":"2017-05-08T10:30:12.376724303Z"}
{"log":"\u0009at org.assbox.rulekeepr.services.mixins.DCartHelper$$anonfun$accountInfo$1.apply(DCartHelper.scala:31)\n","stream":"stdout","time":"2017-05-08T10:30:12.376729945Z"}
{"log":"\u0009at org.assbox.rulekeepr.services.mixins.DCartHelper$$anonfun$accountInfo$1.apply(DCartHelper.scala:24)\n","stream":"stdout","time":"2017-05-08T10:30:12.376734254Z"}
{"log":"\u0009... 19 common frames omitted\n","stream":"stdout","time":"2017-05-08T10:30:12.37676087Z"}

How can I harness fluentd directives to properly combine the log events above, which contain a stack trace, so that everything is shipped to Elastic as a single message?

I have full control of the logback appender pattern used, so I can change the order in which log values occur and even change the appender class.
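For reference, a logback encoder pattern that would produce lines shaped like the sample above might look roughly like the sketch below; the MDC key and the exact conversion words are guesses reconstructed from the log output, not the service's actual configuration.

<encoder>
  <!-- Sketch only, reverse-engineered from the sample line; %X{instance} and
       the separator characters are assumptions, not the real pattern. -->
  <pattern>%d{HH:mm:ss.SSS} [%thread] [%level] [%X{instance}]- %logger{40}(%line) - %msg%n</pattern>
</encoder>

With a pattern like this, logback appends the exception's stack trace after the message line by default, which is exactly the multi-line output that the concat filter in the answer below stitches back together.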

We're working with k8s, and it turns out it's not straightforward to change the docker logging driver, so we're looking for a solution that can handle the given example.

I don't care so much about extracting the log level, thread, and logger into specific keys so that I could later easily filter by them in Kibana. It would be nice to have, but it is less important. What is important is to accurately parse the timestamp, down to the millisecond, and use it as the actual log event timestamp when shipping to Elastic.

Recommended answer

You can use fluent-plugin-concat (together with fluent-plugin-grok-parser for the grok parsing step).

For example, with Fluentd v0.14.x:

# Tail the container log files written by the docker json-file logging driver
# and parse each JSON line into its log / stream / time fields.
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
  @label @INPUT
</source>

# Stitch multi-line stack traces back into a single record: a new event starts
# with an HH:MM:SS.mmm timestamp; continuation lines start with whitespace,
# "java.lang" or "Caused by:". Records flushed by timeout are routed to the
# @PARSE label so they still get parsed.
<label @INPUT>
  <filter kubernetes.**>
    @type concat
    key log
    multiline_start_regexp ^\d{2}:\d{2}:\d{2}\.\d+
    continuous_line_regexp ^(\s+|java.lang|Caused by:)
    separator ""
    flush_interval 3s
    timeout_label @PARSE
  </filter>
  <match kubernetes.**>
    @type relabel
    @label @PARSE
  </match>
</label>

# Parse the combined "log" field with grok; parsed keys are prefixed with
# "log." and records that fail to match are marked with a "grokfailure" key.
<label @PARSE>
  <filter kubernetes.**>
    @type parser
    key_name log
    inject_key_prefix log.
    <parse>
      @type multiline_grok
      grok_failure_key grokfailure
      <grok>
        pattern YOUR_GROK_PATTERN
      </grok>
    </parse>
  </filter>
  <match kubernetes.**>
    @type relabel
    @label @OUTPUT
  </match>
</label>

# Output stage: stdout here for demonstration; in practice this is where an
# Elasticsearch output would go.
<label @OUTPUT>
  <match kubernetes.**>
    @type stdout
  </match>
</label>
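The answer leaves YOUR_GROK_PATTERN as a placeholder. As a rough sketch only, and not part of the original answer, the field names and regex below are assumptions derived from the sample log line in the question; a grok section for lines of the form "10:30:12.375 [thread] [WARN] [instance]- logger(91) - message" could look like this:

      <grok>
        # Sketch only: field names are illustrative. The leading (?m) lets the
        # final capture span the concatenated stack-trace lines, which still
        # contain the "\n" characters embedded by the json-file driver.
        pattern (?m)%{TIME:log_time} \[%{DATA:thread}\] \[%{LOGLEVEL:level}\] \[%{DATA:instance}\]- %{DATA:logger}\(%{NUMBER:line}\) - %{GREEDYDATA:message}
      </grok>

As for the event timestamp: the in-message HH:mm:ss.SSS value carries no date, so the full time value from the json-file record (e.g. 2017-05-08T10:30:12.376485994Z) is the more complete source; one option is to promote it to the event time via the time_key and time_format settings of the <parse> section in the source block.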

Similar issues:

  • https://github.com/fluent/fluent-plugin-grok-parser/issues/36
  • https://github.com/fluent/fluent-plugin-grok-parser/issues/37

This concludes the article on configuring fluentd to correctly parse Java stack traces (as formatted by the docker json-file logging driver) and ship them to Elastic as a single message. We hope the recommended answer is helpful, and thank you for your support!
