Use journald app logs in opentelemetry collector #896
Conversation
Great improvement!
Oooooh, this is nice! Is there a way to get the severity out of the journald entries? So that the log entries are coloured in Grafana in orange/red for warns/errors?
-cfg.LogSettings.ConsoleLevel = model.NewPointer("ERROR")
+cfg.LogSettings.ConsoleLevel = model.NewPointer("WARN")
 cfg.LogSettings.EnableFile = model.NewPointer(true)
 cfg.LogSettings.FileLevel = model.NewPointer("WARN")
 cfg.LogSettings.EnableSentry = model.NewPointer(false)

 cfg.NotificationLogSettings.EnableConsole = model.NewPointer(true)
-cfg.NotificationLogSettings.ConsoleLevel = model.NewPointer("ERROR")
+cfg.NotificationLogSettings.ConsoleLevel = model.NewPointer("WARN")
Yeah, good point 👍
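For context, here is a minimal sketch of those logging settings applied together; the package, function name, and import path are assumptions for illustration, not code taken from this PR:

package main

import "github.com/mattermost/mattermost/server/public/model" // assumed import path

// enableJournaldFriendlyLogging (hypothetical name) lowers the console threshold
// to WARN so that warnings, errors, and panics all reach stdout, where journald
// (and therefore the collector) can pick them up with a useful severity.
func enableJournaldFriendlyLogging(cfg *model.Config) {
	cfg.LogSettings.ConsoleLevel = model.NewPointer("WARN")
	cfg.LogSettings.EnableFile = model.NewPointer(true)
	cfg.LogSettings.FileLevel = model.NewPointer("WARN")
	cfg.LogSettings.EnableSentry = model.NewPointer(false)

	cfg.NotificationLogSettings.EnableConsole = model.NewPointer(true)
	cfg.NotificationLogSettings.ConsoleLevel = model.NewPointer("WARN")
}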
-const otelcolOperatorAppAgent = `
+const otelcolOperatorApp = `
   - type: move
     from: body.MESSAGE
     to: body`

 const otelcolOperatorAgent = `
I'm kinda regretting the decision to write this modularly instead of just writing a long string of text for each 😂
Judging by the screenshot above, I think this is handled out of the box for the usual MM logs. I don't think a stack trace would have any specific severity but we can check. There's a lot we can do in terms of parsing and processing.

That's what this setting does (although truth be told, I didn't test it without it, so maybe it can autodetect?):

severity:
  parse_from: attributes.level`

But yeah, we could leverage a lot of features from the collector that we're ignoring right now. In any case, this is already a really good step forward, thanks for this!
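To make the thread concrete, here is a rough sketch of where that severity block could sit in the journald operator chain, written as a Go raw string like the constants above; the json_parser step is an assumption about how the JSON console output gets parsed, not code taken from this PR:

// otelcolOperatorSeverityExample is a hypothetical operator chain: journald puts
// the raw log line in the MESSAGE field, so we parse it as JSON, map the
// Mattermost "level" attribute onto the entry severity, and then keep only the
// original line as the body.
const otelcolOperatorSeverityExample = `
  - type: json_parser
    parse_from: body.MESSAGE
    severity:
      parse_from: attributes.level
  - type: move
    from: body.MESSAGE
    to: body`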
Summary
As the last comparison failure showed, it's rather annoying to figure out whether the app panicked. This PR adds support for including these messages by using the full output of the journalctl unit rather than the MM log file.

With this in place, we should be able to add a Grafana alert in case a panic actually happens, potentially creating a webhook in MM or something (a future improvement).
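For illustration, a minimal sketch of what wiring the collector's journald receiver to the systemd unit could look like, again as a Go raw string; the unit name, endpoint, and the choice of the Loki exporter are assumptions, not values taken from this PR:

// otelcolJournaldExample is a hypothetical end-to-end wiring: the journald
// receiver tails the Mattermost unit's journal (so panics written to stderr are
// captured alongside regular log lines) and ships the entries to Loki, where
// Grafana can display and alert on them.
const otelcolJournaldExample = `
receivers:
  journald:
    units:
      - mattermost.service
exporters:
  loki:
    endpoint: http://localhost:3100/loki/api/v1/push
service:
  pipelines:
    logs:
      receivers: [journald]
      exporters: [loki]`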
Screenshot