Structured Aggregated Log

Introduction

Structured logging is an approach to logging where information is written in a structured form that can later be read and parsed by a machine with minimal effort. In many cases this means the immediate readability of the log is reduced, but with a bit of effort it can have the opposite effect (we'll see that a bit later).

Structured Logging Can Be So Different

This structure can be anything: from a comma-separated list of values to complex JSON/XML documents.

Another important aspect of structured logging is the focus on logging a larger amount of information in batches.

For example, say you have an application that reads data and passes it through a chain of rules. Each rule can either vote to execute some specific action on that data later, or not.

There are a few approaches to logging data here. Let's review them.

Unstructured log line. Each class writes a log line with the information it currently contains. There is no common structure for the log printed by each rule, so it is easy to notice different ways of saying the same thing. In the logs below, one rule says 'Voting for action' while another says 'action is picked'. Some rules print important information and others don't. Such logs can be parsed by a human, but not always by a computer.

Structured log line. This is better. At least the same approach is used for logging all information. It is not always easy for a human to read, but it is in a state where both a human and a computer can read and interpret these logs.

Structured aggregated log line. Simple here: you gather all the "execution" data and then output it at the end in a single structured log line. In this case it is preferable to use JSON or XML to structure the information. CSV is probably not the best option here, especially because it doesn't support nested structures. Pros: you have a single line with all the data. Cons: you need to keep that data around until the final log line is generated, and it is also hard to read without practice or a formatting tool.
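
To make the difference concrete, here is an illustration I put together for this post (the rule names, request id and timestamps are invented, loosely based on the traffic light example used later):

Unstructured:
2016-05-12 10:02:11 INFO  [req-42] TrafficLightIsGreenRule: Voting for action GO, traffic light is green
2016-05-12 10:02:11 INFO  [req-42] SpeedLimitRule: action is picked - SLOW_DOWN

Structured:
2016-05-12 10:02:11 INFO  [req-42] rule=TrafficLightIsGreenRule vote=GO trafficLight=green
2016-05-12 10:02:11 INFO  [req-42] rule=TrafficLightIsRedRule vote=none trafficLight=green
2016-05-12 10:02:11 INFO  [req-42] rule=SpeedLimitRule vote=SLOW_DOWN

Structured aggregated:
2016-05-12 10:02:11 INFO  {"requestId":"42","trafficLight":"green","votes":{"TrafficLightIsGreenRule":"GO","SpeedLimitRule":"SLOW_DOWN"}}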

Which one is my favorite? – The last one!

Why? – Because it can hold important information in a single line and save me from grepping and parsing multiple log lines.

Structured Aggregated Log Line

For a structured aggregated log, there is little help from most logging frameworks, so this functionality needs to be built manually.

The main components of a structured aggregated log are an aggregated log collector, a data formatter, and a logging framework for the output.

[Image: AggregatedLogDataComponents.png]
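
Below is a minimal sketch in Go of what these components could look like; the struct fields and method names are my assumptions for the traffic light example, not code from a specific library:

package logging

import (
    "encoding/json"
    "log"
)

// AggregatedLog is the collector: it accumulates data while a request
// is processed and is printed as a single log line at the end.
type AggregatedLog struct {
    RequestID    string            `json:"requestId"`
    TrafficLight string            `json:"trafficLight,omitempty"`
    Votes        map[string]string `json:"votes,omitempty"`
    Error        string            `json:"error,omitempty"`
    StackTrace   string            `json:"stackTrace,omitempty"`
}

func New(requestID string) *AggregatedLog {
    return &AggregatedLog{RequestID: requestID, Votes: map[string]string{}}
}

// AddVote records a rule's vote for an action.
func (l *AggregatedLog) AddVote(rule, action string) {
    l.Votes[rule] = action
}

// Print combines the formatter and the output: it serializes the collected
// data to JSON and hands the result to the logging framework.
func (l *AggregatedLog) Print(logger *log.Logger) {
    line, err := json.Marshal(l)
    if err != nil {
        logger.Printf("failed to format aggregated log: %v", err)
        return
    }
    logger.Println(string(line))
}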

Let's dig deeper into the pros and cons of the structured aggregated log line.

First, let's start with the good things:

  1. You have all information aggregated into a single log line
  2. When using JSON or XML, you can benefit from auto-formatting tools
  3. It is even easier for a program to parse and read data from such a log line
  4. Better control over the output data:
    1. Customized output strategies
    2. De-duplicated data (printing the same thing only once)
    3. Data structured in a usable way that simplifies understanding
  5. De-duplicated log metadata: no need to print duplicate date/time, level, log source, etc.

There are, of course, bad things:

  1. It is probably not supported by "your favorite" logging library, so it needs to be implemented manually
  2. You need to store the aggregated data and also build a JSON/XML document to print
  3. The average log line becomes huge and hard for a human to parse (however, auto-formatting tools help a lot)
  4. The logger should support printing partially populated aggregated data (more on this below)

I'm going to dive deeper into some of the items mentioned above to explain them in detail and, where possible, with examples.

Auto-formatting

This one is simple. You find a log line, copy it into an auto-formatting tool, press the button, and you have a well-formatted and easier-to-understand log:

[Image: formatted_structured_log.png]

Customized output strategies

This one is quite important. Since you have aggregated data from which you need to generate your log line, it becomes similar to generating a "view" from a "model" in the MVC pattern.

You can do many useful things now:

  • generate a JSON log line, or an XML one
  • generate a short log line with only the most important information for the successful case
  • or generate a long log line with all the aggregated data in case an error or exception happened
  • filter out unimportant data and don't print it into the output log
  • etc.
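
Continuing the AggregatedLog sketch from above, a strategy could be as simple as choosing between a short and a long form at print time (the field selection here is just an assumption):

// Format renders the aggregated data differently depending on the outcome,
// much like a view is rendered from a model.
func (l *AggregatedLog) Format(failed bool) (string, error) {
    if failed {
        // Long form: dump everything that has been collected.
        line, err := json.Marshal(l)
        return string(line), err
    }
    // Short form: only the most important fields for the successful case.
    short := struct {
        RequestID    string `json:"requestId"`
        TrafficLight string `json:"trafficLight,omitempty"`
    }{l.RequestID, l.TrafficLight}
    line, err := json.Marshal(short)
    return string(line), err
}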

De-duplicating data

Let's compare the unstructured and structured log line examples:

Both have a date/time, a log level, and a request id on every log line. We get rid of this duplicated data when we print everything in a single line.

But that's not all. In both cases, TrafficLightIsGreenRule and TrafficLightIsRedRule print the traffic light status. They are independent rules, and the traffic light is important for both of them, so they both print it. In the structured aggregated log line, however, we print the traffic light status only once.

Handling errors

Errors happen. One might happen before you've aggregated all the data and printed it to the log. How do you deal with this? The most important thing here is not to lose the data already aggregated for logging: it needs to be printed in any case, even if it is incomplete. It will be an invaluable source of information during error investigation.

Often it is possible, and advisable, to print whatever data has been aggregated so far into the log at the moment the error is handled.

An alternative would be to collect the error message and stack trace into the aggregated log data:
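
Sticking with the same AggregatedLog sketch (and assuming the runtime package is imported), that could look roughly like this:

// FailWith records the error message and the current stack trace into the
// aggregated data and prints whatever has been collected so far.
func (l *AggregatedLog) FailWith(err error, logger *log.Logger) {
    buf := make([]byte, 64*1024)
    n := runtime.Stack(buf, false) // false = current goroutine only
    l.Error = err.Error()
    l.StackTrace = string(buf[:n])
    l.Print(logger)
}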

Presentation

As mentioned before, one of the strong benefits of the structured aggregated log line is the presence of all important data in a single line. And this line is in a format that is easily understood by a machine.

This makes it possible to read data out of the log and do various manipulations with it, like generating a better way to present and work with the data.

I tried to imagine how I could take a JSON log line, parse the data out of it, and convert it into a readable HTML document that gives me a better view of the data. Sure, this is not very helpful with my trivial example, but for larger logs it is more beneficial.


[Image: HtmlPresentation.png]

Creating a simple tool where you can paste a log line and get such a presentation of the data wouldn't be a big task, but it would help decrease the operational load.

Jenkins Pipeline

Introduction


I believe in simplicity. It means I think things should be simple to work with.
It doesn't mean they always have to be simple inside. They can be complex, sometimes very complex.

At the same time, I distinguish two types of complexity:
  1. complexity caused by indirect and non-obvious relationships between components,
  2. complexity caused by a mess within the components themselves and messy relationships between them.

The first one means the product may be implemented in a very smart way, built on a large number of implicit assumptions, often non-obvious ones.

The second one means the product's complexity is exaggerated by very confusing, illogical, and messed-up relationships between components. Unlike the first type, the relationships are there, but they make no sense not because they are over-smartly designed, but because they are spaghetti-like.

Often these two types can be found together in the same product. I hope you never have to deal with such a product.

The way to fix complexity type #1 is to remove implicit assumptions by adding smaller components with visible relationships.

However, to fix complexity type #2, you need to understand it deeply. What if the thing is complicated because you can't understand it yet? To be able to answer this question, I always start with research. And my research always produces some kind of diagram. I believe that visualization is the best way to tackle complexity. At least it always works well for me.

Simply put, visualization is one of the best ways to simplify hard and complex things. Even a very basic diagram is still better than nothing. Sometimes it takes time to find the right form of visualization at the right level of abstraction, but once you do, you are halfway done.

Continuous Deployment


And this post is about visualization, and how it can help combine parts into a single, simple picture. More specifically, it is about how visualization can help simplify continuous delivery.

Continuous delivery is the process of getting your product from source code into production. It usually happens through a pipeline of jobs: the first job compiles the source code and builds artifacts, then come integration testing, deployment to a staging environment, and eventually production. In Jenkins, a job is usually created for each of these steps, where each job triggers the next one once it finishes successfully. For example, there would be jobs like 'Build XYZ', 'Test XYZ', 'Deploy XYZ to Staging', 'Deploy XYZ to 1-box' and 'Deploy XYZ to Production'.

The default Jenkins view would present those jobs as a list, with no visible relationships between them. But there are relationships between them. In fact, all those jobs exist for the single most important goal: to get a new version into production for the customers! So the relationships play an important role, but it is hidden from us as users.

You might not even feel it immediately, but this presentation of a job list brings complexity. There is a logic behind those jobs, but it is normally hidden from the viewer, unless one is ready to spend time understanding how things work.

Pipelines in Jenkins


The good thing is that Jenkins already allows you to remove this complexity, via visualization of the pipeline of jobs you've created.

This support is brought by the "Build Pipeline Plugin", which adds a new view type called "Build Pipeline View".



In the next step you need to pick the first job in the pipeline.



Then the pipeline is created based on job dependencies: it will contain the jobs triggered by the first job, the jobs triggered by those jobs, and so on.

Now, once someone makes a change to the XYZ source, a new 'Build XYZ' job is triggered. Once this job finishes successfully, it triggers 'Test XYZ', and so on. As you can see, no functionality has changed here, but with pipelines it is possible to visualize both the dependencies between those jobs and the current state of the CD process.



That makes things so simple to work with. You can understand the build structure and its current state at a single glance.

More than that, Jenkins 2.0 comes with built-in support for pipelines.
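
For illustration, a minimal declarative Jenkinsfile for the XYZ pipeline could look roughly like this (the stage names and shell commands are made up for the example):

pipeline {
    agent any
    stages {
        stage('Build XYZ') {
            steps { sh 'make build' }
        }
        stage('Test XYZ') {
            steps { sh 'make test' }
        }
        stage('Deploy XYZ to Staging') {
            steps { sh 'make deploy-staging' }
        }
        stage('Deploy XYZ to Production') {
            steps { sh 'make deploy-production' }
        }
    }
}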

Blue Ocean


I'd also like to mention the initiative called "Blue Ocean", which aims to build a better visualization of pipelines in Jenkins. I'd be happy to see it live one day.

Other Products


There are a bunch of other products that can help you create a build/deployment pipeline:
  1. AWS CodePipeline - https://aws.amazon.com/codepipeline/
  2. Concourse CI - https://concourse.ci/
  3. Bitbucket Pipelines - https://bitbucket.org/product/features/pipelines

Stack trace dump in Go

I'm from the Java world, which means I spend most of my time coding in Java and running it on the Java Virtual Machine. This has its own benefits and disadvantages. For example, I really wish the JVM used less memory and was faster from the start. It is awesome that you eventually benefit from the JIT, but until then you have to wait for code to run thousands of times before it gets translated into machine code.

At the same time, Java has a bunch of benefits. I believe the most important one is the set of available tools: from small utilities to powerful IDEs and application servers.

One such utility tool is jstack. It is used to print a stack dump of the application threads, including the main one. What a useful thing! It has saved me so much time when investigating why my application got stuck or is hammering the CPU.

And it is also one of the tools I really needed a few days ago: I had to understand why a service written in Go was so hungry for CPU resources. I spent some time searching for a tool that would work like jstack but for a Go application. I couldn't find one, but I did find how to get a stack trace with a small change in my Go application.

The Go standard library already has everything you need to print a stack dump: signal support to notify the application, and the runtime.Stack function to get the stack dump and print it.

NOTE: signals are primarily supported on UNIX-based systems, which means this approach might not work as expected on Windows.
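
Here is a minimal sketch of the change (the handler function and the idle main loop are illustrative, not the actual code of my service):

package main

import (
    "fmt"
    "os"
    "os/signal"
    "runtime"
    "syscall"
    "time"
)

// dumpStacksOnSignal installs a SIGQUIT handler that prints the stack
// traces of all goroutines to stderr.
func dumpStacksOnSignal() {
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGQUIT)
    go func() {
        for range sigs {
            buf := make([]byte, 1<<20)    // 1 MB buffer for the dump
            n := runtime.Stack(buf, true) // true = include all goroutines
            fmt.Fprintf(os.Stderr, "=== stack dump ===\n%s\n", buf[:n])
        }
    }()
}

func main() {
    dumpStacksOnSignal()
    // The rest of the application; here we just keep the process alive.
    for {
        time.Sleep(time.Second)
    }
}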


Now you can run your program and ask for a stack dump by sending a signal with the kill command:
$ killall -SIGQUIT accountservice
and you'd see something like this:


The output consists of a stack trace for each goroutine. Each stack trace starts with 'goroutine', its number, and its current state. Like a Java stack trace, it shows the most recent operation on top and contains the source file name (not a class name), the line number, and also the function arguments. The stack trace also contains information about the goroutine's creator.

Now it should be easy to find out why the application got stuck, or what it is currently doing that eats up almost all of the CPU. However, this solution might not always work: if all resources are used up, it might take a while until the SIGQUIT notification is processed by the application.

Simple RESTful API in Go (with Gin)

It is very popular to use Go to create microservice applications. And there really are many benefits to using Go compared to other technologies: it is easy to get started, easy to build with, and very cheap to run on your servers. The last one is one of my favorites. A Go application has a much smaller memory and CPU footprint than a Java application, and at the same time it runs fast.

In this post, I want to show how to create a simple RESTful web service in Go using the awesome Gin library.

First, let's create a main function that sets up logging and a Gin router with registered paths and handlers. Gin provides a simple way to create template-like paths and register a handler for each path.
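
A sketch of such a main function could look like this (the /accounts paths and handler names are my example, not a prescribed layout):

package main

import (
    "log"

    "github.com/gin-gonic/gin"
)

func main() {
    // Basic logging setup.
    log.SetFlags(log.LstdFlags | log.Lshortfile)

    // gin.Default() comes with logging and recovery middleware attached.
    router := gin.Default()

    // Template-like paths: ":id" is a path parameter.
    router.GET("/accounts", listAccounts)
    router.GET("/accounts/:id", getAccount)

    // Start the HTTP server.
    if err := router.Run(":8080"); err != nil {
        log.Fatal(err)
    }
}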

Now let's create DTOs for our business models, and methods to convert from a model to a DTO:
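
For example (same package as above; the Account type and its fields are invented for the illustration):

// Account is the internal business model.
type Account struct {
    ID      string
    Owner   string
    Balance int64 // stored in cents internally
}

// AccountDTO is what we expose over the REST API.
type AccountDTO struct {
    ID      string  `json:"id"`
    Owner   string  `json:"owner"`
    Balance float64 `json:"balance"`
}

// toDTO converts the internal model into its API representation.
func toDTO(a Account) AccountDTO {
    return AccountDTO{
        ID:      a.ID,
        Owner:   a.Owner,
        Balance: float64(a.Balance) / 100,
    }
}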

Finally, we need to create functions to handle the requests. Again, super easy with Gin:
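
A sketch of the handlers (still the same package; it also needs the net/http import, and findAccount/findAllAccounts are hypothetical storage helpers):

// getAccount handles GET /accounts/:id.
func getAccount(c *gin.Context) {
    id := c.Param("id")
    account, found := findAccount(id) // storage lookup, not shown here
    if !found {
        c.JSON(http.StatusNotFound, gin.H{"error": "account not found"})
        return
    }
    c.JSON(http.StatusOK, toDTO(account))
}

// listAccounts handles GET /accounts.
func listAccounts(c *gin.Context) {
    dtos := []AccountDTO{}
    for _, a := range findAllAccounts() { // storage lookup, not shown here
        dtos = append(dtos, toDTO(a))
    }
    c.JSON(http.StatusOK, dtos)
}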

And we are done. Well, almost, since these examples don't include the actual models and functions to work with storage.

Simple configuration in Go

Very often your new tool/service needs configuration, defined either via parameters or via a configuration file.
For example, you want different settings in development and production environments, or you just have separate fleets for different clients.

It is pretty easy to add configuration support to your Go application using YAML for the configuration and gopkg.in/yaml.v2 for deserializing YAML files into a struct value.

Here is an example YAML configuration file. It contains the service name and its listening endpoint, and also the database connection URL. Pretty simple!
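
Something like this (the field names and values are just an illustration):

name: accountservice
endpoint: 0.0.0.0:8080
databaseUrl: postgres://user:password@localhost:5432/accounts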

To read the configuration, we first open and read the whole configuration file, and then use the yaml package to unmarshal it into a value of the Configuration type.
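
A sketch of that code, with a Configuration struct matching the example YAML above:

package main

import (
    "io/ioutil"

    "gopkg.in/yaml.v2"
)

// Configuration mirrors the YAML configuration file.
type Configuration struct {
    Name        string `yaml:"name"`
    Endpoint    string `yaml:"endpoint"`
    DatabaseURL string `yaml:"databaseUrl"`
}

// loadConfiguration reads the whole file and unmarshals it into a
// Configuration value.
func loadConfiguration(path string) (Configuration, error) {
    var config Configuration
    data, err := ioutil.ReadFile(path)
    if err != nil {
        return config, err
    }
    err = yaml.Unmarshal(data, &config)
    return config, err
}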

And finally, here is the part of our application's main function where we use a parameter to pass the path to the configuration file and load it so the configuration can be used later:
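
For example (same package as above, additionally importing flag and log; the flag name and default are assumptions):

func main() {
    configPath := flag.String("config", "config.yaml", "path to the configuration file")
    flag.Parse()

    config, err := loadConfiguration(*configPath)
    if err != nil {
        log.Fatalf("cannot load configuration from %s: %v", *configPath, err)
    }

    log.Printf("starting %s on %s", config.Name, config.Endpoint)
    // ... use config.DatabaseURL to connect to the database, etc.
}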

That's it. Simple and fast!