May 25, 2022
Node.js is an established platform for developing server-side applications in JavaScript. One of the most fundamental concerns that arise during the development of Node.js apps is how to carry out proper logging that will be safe, secure and performant.
While there are several options for configuring Node.js logging, a few specific engineering best practices still apply, no matter which option you choose. This article will explain how to employ those best practices to perform effective logging in Node.js, using a straightforward question and answer approach.
Let’s get started.
Logging is the process of capturing important events during the running life of an application. Logging in Node.js is no different from logging in Go or Python in that the application-related logs should be streamed into a log management service and formatted in a searchable and human-readable way.
Production Node.js logging generally avoids console statements, which are only suitable for quickly debugging specific pieces of code. The following statements simply send messages to the standard output or standard error streams:
console.log({
  event: "Lifecycle",
  message: "Application Started",
});

console.error({
  event: "Lifecycle",
  message: "Application Running",
});
When running this program, you can isolate the messages written to the error stream by discarding standard output:
$ node "/Users/theo.despoudis/Workspace/wpgraphql-client-experiment/index.js" 2>&1 >/dev/null
{ event: 'Lifecycle', message: 'Application Running' }
Console methods can also hurt performance: in Node.js, they write synchronously when the destination is a file or terminal, which can block the event loop under heavy logging. Filtering all of those logs from the command line is also cumbersome. This is why you should use a dedicated logging library for Node.js apps in production. We will explain how to enable and configure some of those libraries next.
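To see why hand-rolled console logging quickly becomes unwieldy, here is a minimal sketch of what a leveled, JSON-emitting wrapper would already have to do by hand (all names here are hypothetical, not part of any library):

```javascript
// Severity table: lower number = higher priority.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

// Build a logger that drops messages below the given threshold and
// serializes the rest as JSON, routing warn/error to stderr.
function makeLogger(threshold = "info") {
  return function log(level, message) {
    if (LEVELS[level] > LEVELS[threshold]) return null; // filtered out
    const entry = JSON.stringify({ level, message, time: Date.now() });
    (LEVELS[level] <= LEVELS.warn ? console.error : console.log)(entry);
    return entry;
  };
}

const log = makeLogger("info");
log("info", "Application Started"); // printed to stdout
log("debug", "cache miss");         // suppressed at the "info" threshold
```

A dedicated library gives you all of this (plus transports, formats and rotation) without maintaining the plumbing yourself.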
Node.js does not ship with a dedicated logging library. Thankfully, the community rose to the challenge and provided us with some good options.
First of all, there is a package called Winston. You can add Winston to your project by installing it with:
$ npm i winston
Then, you need to create an instance of the logger to use for logging purposes. For now, let’s take a look at a very simple configuration. We’ll explore some of the available options in the next section.
const winston = require("winston");

const logger = winston.createLogger({
  level: "info",
  format: winston.format.json(),
  defaultMeta: { service: "application-server" },
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: "combined.log" }),
  ],
});

logger.log({
  level: "info",
  message: "Application Started",
});

logger.log({
  level: "warn",
  message: "Application Ended",
});
This creates an instance of the logger and specifies where the log output will be streamed. In this case, it will stream to the console and to a file named combined.log. When you run this program, you will see the results on the screen and in the filesystem.
Another good option is Pino, a modern Node.js logger with very low overhead. You can install it as follows:
$ npm install pino
Then, you need to follow the same steps that you would use with Winston and create an instance of the logger:
const logger = require("pino")();
logger.info("Application Started");
logger.warn("Application Ended");
When we do so, it outputs:
❯ node "/Users/theo.despoudis/Workspace/wpgraphql-client-experiment/index.js"
{"level":30,"time":1650537835052,"pid":10654,"hostname":"theo-despoudis","msg":"Application Started"}
{"level":40,"time":1650537835052,"pid":10654,"hostname":"theo-despoudis","msg":"Application Ended"}
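Note that the level field in that output is numeric: by default, Pino maps info to 30, warn to 40 and error to 50, so severity can be compared numerically when post-processing logs. A small sketch with a hypothetical log line:

```javascript
// Pino's default numeric levels: trace 10, debug 20, info 30,
// warn 40, error 50, fatal 60.
const line = '{"level":40,"time":1650537835052,"msg":"Application Ended"}';
const entry = JSON.parse(line);

// Numeric levels make threshold checks a simple comparison.
console.log(entry.level >= 40 ? "warn or worse" : "info or below");
```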
There are other logging libraries for Node.js; the differences between them mainly come down to community adoption, performance and pace of development. You might also want to check out node-bunyan or, for HTTP request logging in Express, morgan.
In the previous section, we introduced two popular Node.js logging libraries. Now, we will explain how to configure them for an application. Configuration is probably the most crucial part of the whole process. We recommend the following practices:
Use JSON formatting for messages: JSON is a versatile, human- and machine-friendly way to describe information. Using it as a log format provides several advantages. With Winston, you can use:
format: winston.format.json()
This sends out the messages as JSON strings.
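This is what the machine-friendly part buys you: any JSON log line can be parsed back into an object and queried by field. A sketch with a hypothetical line (not actual Winston output):

```javascript
// A JSON-formatted log line, as a log management service would receive it.
const line = '{"level":"info","service":"application-server","message":"Application Started"}';

// Parsing it back recovers structured fields you can filter and aggregate on.
const entry = JSON.parse(line);
console.log(entry.service); // "application-server"
```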
Use timestamps and unique identifiers in each log message: Timestamps and UUIDs are critical for debugging, analytics and transaction management. You need to configure the logger to always include them one way or another in the output format:
format: winston.format.combine(
  winston.format.timestamp({
    format: "YYYY-MM-DD HH:mm:ss",
  }),
  winston.format.json(),
)
This configuration combines a formatted timestamp with JSON output in every log line.
Use logging levels, metadata and tags to scope the information logged: For better management of logging information, it is crucial to use appropriate log levels (such as info, warn or error) and tagging labels (like “cache-service,” “signups-service,” and so on). Later on, this will help when filtering the associated logs with the code that initiated those events. You can do this using child loggers, for example:
const cacheServiceLogger = logger.child({ service: "cache-service" });
This configuration will include the { service: "cache-service" } property in every message the child logger emits.
You also want to spend some time configuring relevant logging transports. If you need to add a file logger, for example, don’t forget to use rotation to prevent logs from becoming too large:
const DailyRotateFile = require("winston-daily-rotate-file");

logger.configure({
  level: "verbose",
  transports: [
    new DailyRotateFile(opts),
  ],
});
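The opts object above is left unspecified; here is one plausible shape, based on the options the winston-daily-rotate-file package documents (treat the exact values as assumptions to tune for your environment):

```javascript
// Rotation options for winston-daily-rotate-file; values are illustrative.
const opts = {
  filename: "application-%DATE%.log", // %DATE% is substituted on each rotation
  datePattern: "YYYY-MM-DD",          // rotate once per day
  zippedArchive: true,                // gzip rotated files to save disk space
  maxSize: "20m",                     // also rotate when a file reaches 20 MB
  maxFiles: "14d",                    // keep two weeks of logs, then delete
};
```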
To meet the challenge of storing logs, it's a best practice to follow a redundancy policy so that you do not risk losing the logs due to failure. The easiest way to do this is to configure a file rotation transport that sends events to a redundant disk array.
However, it is also advisable to send log events to a dedicated log management service like Sumo Logic, which provides all of the features necessary to analyze logs at scale. For example, winston-sumologic-transport lets you send events to Sumo Logic; add it to the list of transports and point it at your HTTP Source Collector URL. If you prefer an open-source tool, the ELK Stack is a good choice.
Logging levels grade application events by severity. There are conventional levels for errors, warnings and debug statements. Let’s explain their differences:
0 – Error: A serious problem/failure that needs immediate attention.
1 – Warn: A non-blocking warning about an unusual exception within the system. These messages give context to a potential error.
2 – Info: Normal behavioral messages about the current state of the application.
3 – HTTP: Slightly more verbose messages related to HTTP requests.
4 – Debug: Diagnostic messages for developers and internal teams only. They should be disabled in production environments.
5 – Silly: The most verbose level, typically used to trace minute details such as the current stack trace of the calling function. This information assists developers and internal teams with debugging incidents.
Each level is associated with an integer from 0 to 5, where 0 is the highest priority and 5 is the lowest. This convention makes it easy to filter out low-priority (more verbose) messages while keeping high-priority (less verbose) ones.
It’s best to filter out the lower levels of logging in production. You can do that by specifying the level property when you create the Winston logger instance:
const logger = winston.createLogger({
  levels: winston.config.npm.levels,
  level: "error",
  // …
});
Now, between these two log messages, only the error message will be recorded:
logger.log({
  level: "warn",
  message: "Application Warning",
});

logger.log({
  level: "error",
  message: "Application Error",
});
In the context of logging, system logs (or syslogs) record events that relate to the operating system, such as services starting up, system changes, catastrophic service errors and so on. If you are running application services on Node.js, you typically don’t need syslog logging unless your infrastructure already centralizes logs through a syslog server.
Syslog messages are traditionally dispatched over the User Datagram Protocol (UDP). UDP is connectionless, so messages aren’t acknowledged or guaranteed to arrive; in exchange, the system is easier to manage and can absorb a higher volume of log data. On a Linux system, you can read the syslog contents by inspecting the /var/log/syslog or /var/log/messages files.
With file logging, on the other hand, there is no daemon listening in the background, and it does not use UDP for communication. The application server writes to a file, rotating the contents at a certain cadence.
If you want to get started with syslog logging, you can either use the winston-syslog transport or the phuesler/ain package available on GitHub.
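Under the hood, each syslog message carries a numeric priority (PRI) value, defined by RFC 5424 as facility × 8 + severity. A small sketch of that arithmetic, with abbreviated facility and severity tables:

```javascript
// Abbreviated RFC 5424 tables: facility codes and severity codes.
const FACILITY = { user: 1, daemon: 3, local0: 16 };
const SEVERITY = { emerg: 0, alert: 1, crit: 2, err: 3, warning: 4, notice: 5, info: 6, debug: 7 };

// PRI = facility * 8 + severity, the number inside the <...> prefix
// of a raw syslog message.
function pri(facility, severity) {
  return FACILITY[facility] * 8 + SEVERITY[severity];
}

console.log(pri("user", "info")); // 14 — the PRI of a routine user-level message
```

Knowing this mapping makes the numeric prefixes emitted by syslog transports much easier to read.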
One last piece of advice regarding log management: storing logs is only useful if you receive alerts when things break. For that reason, it’s important to use a SaaS platform for centralized logging, like Sumo Logic. Sumo Logic offers the compelling advantage of unique features (like advanced threat intelligence) that work out of the box.
To get started with Node.js logging in Sumo Logic, simply sign up for a plan, install a Hosted Collector with an HTTP source, and point your logging transport to that URL. The whole setup process takes less than ten minutes from start to finish.