Node.js Logging Best Practices - A Complete Guide

Logging is essential in Node.js for tracking errors, monitoring performance, and debugging issues. Traditional Node.js logging methods, such as console.log(), are often insufficient: they produce unstructured, cluttered logs that are hard to read, and they lack features like log levels, proper formatting, and efficient storage management.

Following logging best practices ensures that logs are useful, structured, and manageable. Well-structured logs help developers quickly pinpoint and resolve issues, reducing downtime and improving reliability.

This blog details a set of best practices for writing logs in a Node.js application:

  1. Use Node.js Logging Libraries
  2. Define Log Levels
  3. Always Provide Timestamps
  4. Implement Structured Logging
  5. Write Descriptive Messages
  6. Centralize Logs
  7. Secure Sensitive Data
  8. Use Logging beyond Troubleshooting
  9. Log HTTP Requests
  10. Use Log Monitoring Tools

1. Use Node.js Logging Libraries

The basic Console API in Node.js serves as a fundamental tool for logging messages to the console. While it offers simple logging functionality, it comes with several drawbacks: it lacks structured logging, customization options, and support for logging to external destinations.

Several third-party Node.js logging libraries exist to overcome these limitations. A logging framework enhances logging capabilities beyond what the basic Console API provides.

Using a third-party logging framework saves you from writing logging code yourself. Easily installed via npm or Yarn, these libraries offer support for common log levels like info, debug, warn, and error.

Here are some of the best Node.js logging libraries:

  • Pino - A high-speed and low-overhead logging library. It claims to be up to five times faster than many alternatives.
  • Winston - A versatile logging library that supports multiple transports, custom formatting, and different log levels.
  • Bunyan - A fast and simple JSON logging library that produces structured logs and supports log rotation.
  • Morgan - A middleware for HTTP request logging in Node.js, often used with Express.js.
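
For a quick illustration, here is a minimal sketch of a Winston setup with a console transport, JSON formatting, and timestamps (the messages are placeholders):

const winston = require('winston');

// Create a logger that writes JSON entries with timestamps to the console
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()],
});

logger.info('Server started on port 3000');
logger.error('Failed to connect to the database');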

2. Define Log Levels

Log levels in Node.js are essential for categorizing the severity and importance of log entries. These levels enable developers to quickly identify and respond to issues based on their urgency and impact. Here’s an example to illustrate the importance of log levels:

Imagine you are monitoring an e-commerce application. A log entry like:

{"message":"New product added to inventory"}

indicates a routine and expected operation. In contrast, another log entry might read:

{"message":"Payment gateway timeout during checkout"}

This entry signals a critical problem that could affect sales and customer satisfaction.

Without log levels, distinguishing between such routine actions and critical failures would be challenging. Properly defined log levels also allow you to set up automated alerts for high-severity issues.

In Node.js, common log levels include:

  • DEBUG: Detailed information for diagnosing problems.
{"level":"debug","message":"Starting purchase process for user ${userId} and product ${productId}"}
  • INFO: Confirmation that things are working as expected.
{"level":"info","message":"Purchase completed successfully"}
  • WARN: An indication that something unexpected happened.
{"level":"warn","message":"Product is out of stock"}
  • ERROR: A more serious problem has prevented the software from performing a function.
{"level":"error","message":"Payment failed for user"}
  • FATAL: A critical error that requires immediate attention.
{"level":"fatal","message":"Connection failure"}

Node.js logging frameworks differ in how they express event severity: some use strings, while others rely on integers.

Severity Level   String (Winston, Log4js)   Integer (Pino)
Debug            "debug"                    20
Informational    "info"                     30
Warning          "warn"                     40
Error            "error"                    50
Fatal            "fatal"                    60

3. Always Provide Timestamps

Including timestamps in log entries is a fundamental practice in logging. It allows easier navigation and understanding of log data. Timestamps denote the exact time when an event occurred, enabling chronological ordering of log entries and aiding in troubleshooting and debugging processes.

Additionally, timestamps facilitate comparative analysis across different time periods, helping to identify patterns and anomalies.

Node.js Logging Framework Timestamp Formats and Platform Support:

Logging Framework   Timestamp Format                               Platform Support
Pino                Elapsed milliseconds since Jan 1, 1970 (UTC)   Node.js
Winston             Customizable (ISO 8601, Unix timestamps)       Node.js, Browser
Log4j               Customizable (date and time patterns)          Java
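
As a sketch of timestamp customization, here is how you might configure Winston to use a human-readable pattern (the pattern shown is one of many accepted by winston.format.timestamp):

const winston = require('winston');

const logger = winston.createLogger({
  format: winston.format.combine(
    // Attach a human-readable timestamp to every log entry
    winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss' }),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()],
});

logger.info('User logged in');
// Example output: {"level":"info","message":"User logged in","timestamp":"2023-10-25 07:12:46"}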

4. Implement Structured Logging

Structured logging organizes log data into a consistent, machine-readable format, making it easier for automated systems to process and understand.

Imagine you have a log entry that says, "User 'Alice' logged in from IP 192.168.0.1." It's straightforward for us to understand, but if you have thousands of similar logs, it becomes hard for computers to process them efficiently because the format varies and contains lots of embedded variables.

To make it easier for machines to read and analyse, you can use a structured format. This means organizing the log information into a consistent, machine-readable form.

For example, instead of the above unstructured log, you can structure it like this:

{
  "level": "info",
  "timestamp": "2023-10-25T07:12:46.743Z",
  "username": "Alice",
  "action": "login",
  "ip_address": "192.168.0.1",
  "message": "User 'Alice' logged in from IP 192.168.0.1"
}

This structure makes it much easier for automated tools to search, filter, and analyse the logs. JSON is commonly used for this purpose because it is widely supported and easy to use.
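
With Pino, for instance, producing a structured entry like the one above amounts to passing the contextual fields as an object (a minimal sketch; the field names are illustrative):

const pino = require('pino');
const logger = pino();

// The object's fields are merged into the JSON output alongside the message
logger.info(
  { username: 'Alice', action: 'login', ip_address: '192.168.0.1' },
  "User 'Alice' logged in from IP 192.168.0.1"
);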

5. Write Descriptive Messages

When creating log messages, it is crucial to include detailed and specific information about the event. This approach ensures that each log entry provides meaningful insights, making it easier to troubleshoot and understand system behaviour.

  • Detailed logs help identify the exact point of failure or the specifics of an issue, reducing the time needed to diagnose problems.
  • Clear and informative logs provide valuable data for monitoring system health and performance.
  • Comprehensive log entries offer context, which is crucial for understanding events, especially when reviewing logs later.

Here is a simple example to illustrate the concept:

Suboptimal Log Message

logging.error("Database connection failed")

Descriptive Log Message:

logging.error("Database connection failed for user 'db_admin' on host '192.168.1.10' at '2024-05-24 14:32:00'")

The descriptive log message provides specific details that are crucial for understanding the context of the failure. By specifying the user, the host, and the timestamp, it becomes much easier to identify the issue.
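
In practice these details usually come from variables, so you can attach the same context as structured fields while keeping the message descriptive (a sketch using Pino; the values are placeholders):

const pino = require('pino');
const logger = pino();

const user = 'db_admin';
const host = '192.168.1.10';

// Attach the context as structured fields and include it in the message text
logger.error(
  { user, host },
  `Database connection failed for user '${user}' on host '${host}'`
);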

6. Centralize Logs

Centralizing log data involves collecting logs from various sources, such as application servers, network devices, and databases, and storing them in a single location for easier management, analysis, and visualization. This process can be accomplished using log management tools.

Centralizing logs not only simplifies management but also provides several benefits:

  • By aggregating logs, it's easier to set up alerts for specific events like errors or warnings, allowing teams to address issues before they escalate.
  • Instead of checking logs on each server individually, centralization puts all logs in one place for easier access.
  • Centralized platforms often offer visualizations like graphs and charts, making it simpler to understand trends and patterns.
  • Even if a server goes down, logs remain accessible, ensuring that monitoring and troubleshooting can continue uninterrupted.
  • Centralization helps meet regulatory requirements by enabling better control over log retention and ensuring data integrity over the long term.
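
As a simple sketch of the aggregation step, a logger can write to a local file and ship entries to a central collector at the same time; here Winston uses its File and Http transports (the endpoint is a hypothetical assumption):

const winston = require('winston');

const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [
    // Keep a local copy of the logs
    new winston.transports.File({ filename: 'app.log' }),
    // Ship each entry to a central collector over HTTP (hypothetical endpoint)
    new winston.transports.Http({
      host: 'logs.example.com',
      port: 443,
      path: '/collect',
      ssl: true,
    }),
  ],
});

logger.info('Order service started');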

7. Secure Sensitive Data

Logging sensitive data can expose it to unauthorized individuals who have access to the logs. For example, API URLs, session cookies, and JWT keys might contain information that can be exploited. These details should be handled securely during runtime and never stored in plaintext or logged to files. Instead, implement filters or custom formatting to exclude or mask sensitive information from logs.

Equifax Data Breach: During the Equifax data breach, attackers exploited a vulnerability to access sensitive information of 147 million consumers. The breach was worsened by Equifax's practice of storing sensitive data, including Social Security numbers, in unencrypted log files, highlighting the importance of secure logging practices in preventing data breaches and protecting user privacy.

If logging sensitive data is necessary for debugging or operational purposes, it can be done safely by redacting and hashing. This way, you can log the data while protecting its sensitive parts.

Redacting sensitive data involves removing a specific portion of data to prevent exposure while still keeping the rest of the log or data intact. For example, in logs, you might redact parts of an API URL or replace a session cookie with a placeholder.

Hashing involves converting sensitive data into a fixed-length string of nondescript text that cannot be reversed or decoded. This is useful for securely storing data like passwords or keys. Unlike encryption, hashing is a one-way process, ensuring that even if the hashed value is exposed, the original data cannot be easily retrieved.
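
As a sketch of both techniques, Pino's built-in redact option can mask named fields, and Node's crypto module can hash a value you still need to correlate (the field paths and values are illustrative):

const crypto = require('crypto');
const pino = require('pino');

// Replace the listed fields with '[Redacted]' in the emitted JSON
const logger = pino({
  redact: { paths: ['password', 'headers.cookie'], censor: '[Redacted]' },
});

// Hashing is one-way: the hash can be logged and correlated, but not reversed
const userHash = crypto.createHash('sha256').update('user@example.com').digest('hex');

logger.info(
  { password: 'hunter2', headers: { cookie: 'session=abc123' }, user_hash: userHash },
  'User authenticated'
);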

8. Use Logging beyond Troubleshooting

Logging goes beyond just fixing problems. It helps in tracking user behaviour, monitoring performance, and ensuring security. Additionally, logging can serve as a profiling tool, tracking the duration of operations and function execution counts.

In addition to logging messages, Winston offers a basic profiling mechanism that you can use with any logger.

//
// Start profile of 'test'
//
logger.profile('test');

setTimeout(function () {
  //
  // Stop profile of 'test'. Logging will now take place:
  //   '17 Jan 21:00:00 - info: test duration=1000ms'
  //
  logger.profile('test');
}, 1000);

With Winston, you can start a timer for a specific task. Later, when the task is completed, you can call the .done() method to stop the timer and log the duration.

// Returns an object corresponding to a specific timing. When done
// is called, the timer will finish and log the duration. e.g.:
//
const profiler = logger.startTimer();
setTimeout(function () {
  profiler.done({ message: 'Logging message' });
}, 1000);

9. Log HTTP Requests

Logging HTTP requests in your Node.js application is an important best practice for monitoring and debugging. Morgan is a widely used tool for this purpose. It simplifies the logging process by providing readable and organized log output.

Morgan offers several predefined log formats, such as combined, common, dev, short, and tiny. These capture essential request details like the HTTP method, URL, status code, and response time, and format them into structured, easy-to-read logs.

Additionally, Morgan allows for custom log formats and can be configured to write logs to files for persistent storage. Its ease of integration, customization options, and performance efficiency make it an essential tool for effective HTTP request logging in Node.js applications.
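
Here is a minimal sketch wiring Morgan into an Express application with the predefined combined format and, optionally, a file stream for persistent storage:

const express = require('express');
const fs = require('fs');
const morgan = require('morgan');

const app = express();

// Log every request to the console in the predefined 'combined' format
app.use(morgan('combined'));

// Optionally, also append request logs to a file for persistent storage
const accessLogStream = fs.createWriteStream('access.log', { flags: 'a' });
app.use(morgan('combined', { stream: accessLogStream }));

app.get('/', (req, res) => res.send('Hello, world!'));

app.listen(3000);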

This helps you quickly understand what's happening in your application, identify issues, and ensure smooth operation.

10. Use Log Monitoring Tools

Log monitoring tools are critical for server health management. They offer visual representations of error flows, helping developers identify root causes efficiently.

Graphing log data with a monitoring tool reveals trends and patterns over time, making it easier to spot unusual events and track how well the system is performing.

These tools also integrate with alerting systems, providing automatic notifications for critical events or breached performance thresholds.

This practice enhances application stability, reduces downtime, and improves overall system performance.

Conclusion

Effective logging is crucial for enhancing the observability and reliability of Node.js applications. By following best practices, developers can establish a robust logging system that aids in debugging, monitoring, and securing applications.

Effective logging is about balance. It is not just about the quantity of logs but the quality and usability of the information they contain.

By ensuring logs are structured and informative, you can enhance system observability, streamline troubleshooting, and improve overall system reliability and security.

Start incorporating these best practices into your Node.js applications today, and experiment with various Node.js logging libraries and tools to discover what best fits your needs.


Atatus Logs Monitoring and Management

Atatus offers a Logs Monitoring solution delivered as a fully managed cloud service that requires minimal setup and no maintenance at any scale. It collects logs from all of your systems and applications into a centralized, easy-to-navigate user interface, allowing you to troubleshoot faster.

We provide a cost-effective, scalable approach to centralized Node.js logging, so you can obtain total visibility across your complex architecture. To cut through the noise and focus on the key events that matter, you can search the logs by hostname, service, source, message, and more. When you can correlate log events with APM slow traces and errors, troubleshooting becomes easy.

Try your 14-day free trial of Atatus.


Pavithra Parthiban
Technical Content Writer, Chennai