Introduction
Structured logging is an approach where log information is written in a structured form that can later be read and parsed by a machine with minimal effort. In many cases this reduces the immediate readability of the log, but with a little effort it can have the opposite effect (we'll see this a bit later).
The Many Forms of Structured Logging
This structure can be anything: from a comma-separated list of values to complex JSON/XML documents.
Another important trait of structured logging is its focus on logging larger amounts of information in batches.
For example, say you have an application that reads data and passes it through a chain of rules. Each rule can vote to execute some specific action over that data later, or abstain.
There are a few approaches to logging data here. Let's review them.
Unstructured log line. Each class writes a log line with whatever information it currently holds. There is no common structure for the lines printed by each rule, so it is easy to spot different ways of saying the same thing. In the logs below, one rule says 'Voting for action' while another says 'action is picked'. Some rules print important information and others don't. Such logs can be parsed by a human, but not always by a computer.
Structured log line. This is better: at least the same approach is used for logging all information. It is not always easy for a human to read, but it is in a state where both human and computer can read and interpret it.
Structured aggregated log line. Simple here: you gather all the "execution" data and output it at the end in a single structured log line. In this case it is preferable to use JSON or XML to structure the information; CSV is probably not the best option, especially because it doesn't support nested structures. Pros: you have a single line with all the data. Cons: you need to keep that data around until the final log line is generated, and it is hard to read without practice or a formatting tool.
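To make the three approaches concrete, here is a minimal sketch in Python. The rule names echo the TrafficLightIsGreenRule/TrafficLightIsRedRule examples used later in this article; everything else (the request id, the vote values) is invented for illustration:

```python
import json
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("rules")

# 1. Unstructured: each rule phrases the same fact its own way.
log.info("Voting for action: STOP")
log.info("action is picked: GO")

# 2. Structured: every rule logs the same key=value shape.
log.info("rule=TrafficLightIsRedRule vote=STOP light=red")

# 3. Structured aggregated: collect everything during execution,
#    then emit one JSON line at the end.
execution = {
    "requestId": "req-1",
    "votes": [{"rule": "TrafficLightIsRedRule", "vote": "STOP"}],
}
log.info(json.dumps(execution))
```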
Which one is my favorite? – The last one!
Why? – Because it holds all the important information in a single line and saves me from grepping and parsing multiple log lines.
Structured Aggregated Log Line
Most logging frameworks offer little help with structured aggregated logs, so this functionality needs to be built manually.
The main components of a structured aggregated log are an aggregated log collector, a data formatter, and a logging framework to produce the output.
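The three components can be sketched in one small class. This is an assumed design, not a library API: `AggregatedLog` is a hypothetical name, with the collector role played by `put`/`append`, the formatter by `format`, and the output delegated to a standard logger:

```python
import json
import logging

class AggregatedLog:
    """Collector: accumulates data while a request is being processed."""

    def __init__(self, request_id):
        self.data = {"requestId": request_id}

    def put(self, key, value):
        self.data[key] = value

    def append(self, key, value):
        self.data.setdefault(key, []).append(value)

    def format(self):
        """Formatter: turns the collected data into a single JSON line."""
        return json.dumps(self.data, sort_keys=True)

    def flush(self, logger):
        """Output: hand the formatted line to the logging framework."""
        logger.info(self.format())

agg = AggregatedLog("req-42")
agg.put("trafficLight", "green")
agg.append("votes", {"rule": "TrafficLightIsGreenRule", "vote": "GO"})
agg.flush(logging.getLogger("aggregated"))
```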
Let's dig deeper into the pros and cons of the structured aggregated log line.
Let's start with the good things:
- You have all the information aggregated into a single log line
- When using JSON or XML, you can benefit from auto-formatting tools
- It is even easier for a program to parse and read data from such a log line
- Better control over the output data:
  - Customized output strategies
  - De-duplicated data (printing the same thing once)
  - Data structured in a usable way that simplifies understanding
  - De-duplicated log metadata: no need to print a date/time, level, and source for every line
There are, of course, bad things:
- Probably not supported by "your favorite" logging library, so it needs to be implemented manually
- You need to store the aggregated data and build a JSON/XML document to print
- The average log line becomes huge and hard for a human to parse (though auto-formatting tools help a lot)
- The logger should support printing partially populated aggregated data (more on this below)
I'm going to dive deeper into some of the items mentioned above to explain them in detail and, where possible, with examples.
Auto-formatting
This is simple. You find a log line, copy it into an auto-formatting tool, press the button, and you have a well-formatted, easier-to-understand log:
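Any JSON pretty-printer will do; as a sketch, Python's standard `json` module is enough (the log line below is a made-up example in the shape used throughout this article):

```python
import json

# A compact aggregated log line as it might appear in the log file.
line = ('{"requestId":"req-42","trafficLight":"green",'
        '"votes":[{"rule":"TrafficLightIsGreenRule","vote":"GO"}]}')

# Round-trip through the parser to get an indented, readable view.
pretty = json.dumps(json.loads(line), indent=2)
print(pretty)
```

The same result is available from the command line via `python -m json.tool`.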
Customized output strategies
This one is quite important. Since you have aggregated data from which you generate the log line, it becomes similar to generating a "view" from a "model" in the MVC pattern.
You can do many useful things now:
- generate a JSON log line, or an XML one
- generate a short log line with only the most important information for the successful case
- or generate a long log line with all the aggregated data in case an error or exception happened
- filter out unimportant data and don't print it into the output log
- etc.
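The short-vs-long strategy from the list above can be sketched as a single "view" function. The field names (`requestId`, `action`, `votes`) are assumptions carried over from the earlier examples:

```python
import json

def render(execution, error=False):
    """Pick an output strategy, like rendering a view from a model in MVC."""
    if error:
        # Long form: everything we aggregated, for investigation.
        return json.dumps(execution)
    # Short form: only the fields that matter on the happy path.
    short = {k: execution[k] for k in ("requestId", "action") if k in execution}
    return json.dumps(short)

execution = {"requestId": "req-42", "action": "GO",
             "votes": [{"rule": "TrafficLightIsGreenRule", "vote": "GO"}]}
print(render(execution))              # short line for the success case
print(render(execution, error=True))  # full dump when something went wrong
```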
De-duplicating data
Let's compare examples of unstructured and structured log lines:
Both print a date/time, a log level, and a request id for every log line. We get rid of this duplicated data if we print everything in a single line.
But that's not all. In both cases, TrafficLightIsGreenRule and TrafficLightIsRedRule print the traffic light status. They are independent rules, and the traffic light matters to both of them, so they both print it. In the structured aggregated log line, however, the traffic light status is printed only once.
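One way to sketch this de-duplication is a `put_once` helper on the collector (a hypothetical method, not part of any logging library): both rules report the fact, the collector stores it a single time and only objects if they disagree:

```python
import json

class AggregatedLog:
    def __init__(self):
        self.data = {}

    def put_once(self, key, value):
        # Several rules may report the same fact; store it a single time
        # and complain only if two of them disagree.
        if self.data.setdefault(key, value) != value:
            raise ValueError(f"conflicting values for {key!r}")

agg = AggregatedLog()
# TrafficLightIsGreenRule and TrafficLightIsRedRule both observe the light:
agg.put_once("trafficLight", "green")
agg.put_once("trafficLight", "green")
print(json.dumps(agg.data))  # "trafficLight" appears exactly once
```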
Handling errors
Errors happen, and one might happen before you've aggregated all the data and printed it to the log. How do you deal with this? The most important thing is not to lose the data already aggregated for logging: it must be printed in any case, even though it is incomplete. It will be an invaluable source of information during error investigation.
Often it is possible, and recommended, to print the exhaustive aggregated data anyway.
An alternative is to collect the error message and stack trace into the aggregated log data:
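A minimal sketch of that alternative: capture the stack trace into the aggregated data and flush in a `finally` block, so the partial data reaches the log no matter what (the failure here is simulated with a deliberate `RuntimeError`):

```python
import json
import logging
import traceback

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("aggregated")

execution = {"requestId": "req-42"}
try:
    execution["trafficLight"] = "green"
    raise RuntimeError("rule engine failed")  # something breaks mid-way
except Exception:
    # Capture the error inside the aggregated data...
    execution["error"] = traceback.format_exc()
finally:
    # ...and always flush whatever was collected, even if incomplete.
    log.info(json.dumps(execution))
```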
Presentation
As mentioned before, one of the strongest benefits of a structured aggregated log line is that all the important data is present in a single line, in a format easily understood by a machine.
This lets you read data out of the log and manipulate it in various ways, such as generating a better way to present and work with the data.
I tried to imagine how I could take a JSON log line, parse the data out of it, and convert it into a readable HTML document that gives a better view of the data. Admittedly, this is not very helpful with my trivial example, but for larger logs it is much more beneficial.
Creating a simple tool where you can paste a log line and get such a presentation of the data wouldn't be a big task, but it would help decrease the operational load.
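The core of such a tool fits in a few lines. This is only a sketch of the idea, not the tool the article imagines; the conversion to an HTML definition list is an arbitrary presentation choice:

```python
import json

def to_html(line):
    """Turn one aggregated JSON log line into a simple HTML definition list."""
    data = json.loads(line)
    rows = "".join(f"<dt>{key}</dt><dd>{json.dumps(value)}</dd>"
                   for key, value in sorted(data.items()))
    return f"<dl>{rows}</dl>"

html = to_html('{"requestId": "req-42", "trafficLight": "green"}')
print(html)
```

A real tool would also need HTML-escaping of values and recursive rendering of nested objects, which are left out here for brevity.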