
APM Setup for Laravel, Node.js, and Python: Complete Tutorial

Step-by-step guide to setting up APM monitoring for Laravel, Node.js, and Python applications with Atatus. Includes code examples, dashboard setup, and alerting configuration.

20 min read
Atatus Team
Updated March 15, 2026
12 sections
01

Why APM Matters for Application Developers

Understanding what APM gives you that logs and error tracking alone cannot

Application Performance Monitoring changes how you understand your application behavior in production. Without APM, you work from user complaints, log files that show what happened but not why, and error messages that tell you an exception occurred but not which path through the code led to it. APM gives you continuous, automatic visibility into every transaction — how long it took, which database queries it executed, which external API calls it made, and where time was spent at the function level.

The business case for APM is straightforward: slow applications lose users. Research consistently shows that a 100ms increase in response time reduces conversion rates by 1 to 2% for e-commerce applications, and that 40% of users abandon a page if it takes more than 3 seconds to load. APM gives you the data to find and fix the specific code paths, database queries, and external calls responsible for slow responses.

APM shifts you from reactive firefighting to proactive performance management. Instead of waiting for a customer to file a support ticket because their report generation is timing out, APM alerts you the moment response time exceeds your defined threshold — before the first complaint arrives.

Distributed tracing, one of the core APM capabilities, is essential for microservices and any application that makes external API calls. When a user experiences a slow response, the cause might be in your application code, a database query, a Redis cache miss, or a third-party payment API call. Without distributed tracing, isolating the root cause requires manual log correlation across multiple systems that can take hours. With distributed tracing, you see the complete request path with timing at every step in a single view.

APM data is valuable not just during incidents but for ongoing performance optimization work. Transaction traces show you exactly which database queries are running on every page load, revealing N+1 query problems, missing indexes, and opportunities to add caching. Over time, tracking p95 and p99 response times reveals performance trends — gradual degradations that would be invisible in user feedback but clearly visible as upward trends in APM time-series charts.

Modern APM tools like Atatus integrate with your existing development workflow. APM data is available during code review to see the immediate performance impact of a change, during incident investigation where trace data is available within seconds of a transaction completing, and during sprint planning to prioritize performance improvements that will have the highest user impact.

02

Prerequisites and Account Setup

Everything you need before writing your first line of agent configuration

Create your Atatus account at atatus.com and complete the initial project setup. During account creation, you will be prompted to select your primary application type. Select APM. A project is created for you automatically with an API key that you will use for agent configuration. The API key is displayed in your project settings and can be rotated at any time without reinstalling the agent.

Identify the language and framework combination for the application you are instrumenting first. This guide covers Laravel with PHP; Node.js with Express, Fastify, or NestJS; and Python with Django, Flask, or FastAPI. If you have applications in multiple languages, start with the one that has the highest user-facing impact — the service that most directly affects your users' experience. You can add additional language agents later without any changes to your Atatus account configuration.

Confirm that your application server has outbound internet access to the Atatus data ingest endpoint at ingest.atatus.com on port 443. The Atatus agent sends data over HTTPS, so no special firewall rules are required beyond standard HTTPS outbound access. If your environment requires proxy configuration for outbound HTTPS, the agent supports HTTP proxy configuration via environment variable.

Review the language-specific minimum version requirements. The Atatus PHP agent supports PHP 7.1 and later. The Node.js agent supports Node.js 14.x and later with LTS versions recommended. The Python agent supports Python 3.7 and later. If your application is running an older version of any of these runtimes, check the Atatus compatibility matrix for legacy version support options before proceeding.

Set up environment-based configuration using environment variables, which is the recommended approach for all production deployments. The Atatus agent reads its API key, application name, and environment name from environment variables — this allows you to use the same agent installation across development, staging, and production environments by simply changing the environment variables.

Create separate Atatus projects or use environment tags for development, staging, and production. Mixing monitoring data from different environments in a single project creates confusion — a staging deployment error should not appear alongside production errors. The recommended approach is one Atatus project per environment with the same application name but different API keys, so that each environment's data is isolated.

03

Laravel APM Setup: From Zero to Traces in 5 Minutes

Complete installation and configuration guide for Laravel applications

Install the Atatus PHP agent using Composer: composer require atatus/atatus-php. This adds the Atatus package as a dependency and makes the agent class available throughout your application. The package supports Laravel 8.x, 9.x, 10.x, and 11.x and includes automatic service provider registration for Laravel — no manual service provider registration is required.

Configure the Atatus agent by adding the following environment variables to your .env file: ATATUS_API_KEY set to your API key, ATATUS_APP_NAME set to your application name, and ATATUS_APP_ENV set to production. These three variables are the minimum required for the agent to start collecting data. For production deployments, add these variables to your server environment configuration rather than the .env file.
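
As a minimal sketch, the .env addition looks like this; the variable names come from the text above, and the values are placeholders to replace with your own:

```ini
# Minimum Atatus configuration (placeholder values)
ATATUS_API_KEY=your-project-api-key
ATATUS_APP_NAME=my-laravel-app
ATATUS_APP_ENV=production
```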

Publish the Atatus configuration file to customize agent behavior beyond the defaults: php artisan vendor:publish --provider="Atatus\Laravel\AtatusServiceProvider". This creates a config/atatus.php file where you can configure trace sampling rate, ignored routes, custom middleware configuration, and database query capture settings.

Verify the installation by checking the Laravel log file at storage/logs/laravel.log for the Atatus agent startup message confirming the agent has initialized. If you see an error, the most common cause is an API key mismatch — verify that the ATATUS_API_KEY environment variable matches the project API key shown in your Atatus dashboard.

Database query monitoring is enabled by default and works with Laravel Eloquent ORM and Query Builder without any additional configuration. The agent automatically captures all SQL queries executed during a request, including query text, execution time, and the model or table being queried. Slow queries taking longer than 100ms are flagged in the Atatus interface for easy identification. Sensitive query parameters are sanitized by default before being sent to Atatus.

Queue job monitoring requires a one-line configuration change for Laravel Horizon or standard Laravel queues. Add the Atatus middleware to your queue worker configuration in config/horizon.php or config/queue.php. Queue job traces will appear as separate transaction types in the Atatus interface, allowing you to monitor job execution time, error rates, and retry counts independently from HTTP request transactions.

Cache operation monitoring is automatic for applications using the Laravel Cache facade with Redis, Memcached, or file-based cache drivers. Cache hit and miss ratios, cache key lookup times, and cache write operations are all captured as spans within the request trace, giving you visibility into the performance impact of your caching layer.

04

Node.js APM Setup: Express, Fastify, and NestJS

Complete installation and configuration guide for Node.js applications

Install the Atatus Node.js agent using npm: npm install atatus-apm. The agent must be required before any other module in your application entry point — this is the most common Node.js APM setup mistake. The require statement must be the first line of your application file, before any framework imports, before any database client imports, and before any other business logic code.

Initialize the agent immediately after requiring it by calling atatus.start with your apiKey, appName, and env configuration values read from environment variables. The start method must be called synchronously before your application server begins accepting requests — not inside an async function or Promise chain.

For Express.js applications, the agent instruments the framework automatically once initialized. Every incoming HTTP request is captured as a transaction with timing for the entire request lifecycle, individual middleware execution times, route handler execution time, and database query timing. No additional Express middleware needs to be added — the instrumentation is applied automatically at the framework level.

For Fastify applications, add the Atatus Fastify plugin after initializing the agent. This plugin integrates the agent with Fastify request lifecycle hooks, capturing timing for each route and plugin execution within a request. The plugin is compatible with Fastify 3.x, 4.x, and later versions.

Database monitoring works automatically for the most common Node.js database clients: mongoose for MongoDB, pg for PostgreSQL, mysql2 for MySQL and MariaDB, ioredis and redis for Redis, and the AWS SDK v3 DynamoDB client. The agent patches each client at startup and captures query text, execution time, and connection pool status as spans within each transaction trace. Sensitive data in query parameters is sanitized before transmission.

Verify the Node.js agent installation by starting your application and making a test request. Within 30 seconds, a transaction should appear in the Atatus dashboard under your project's APM section. If no transactions appear after 60 seconds, check the application startup logs for Atatus agent initialization messages and verify that the ATATUS_API_KEY environment variable is set correctly.

05

Python APM Setup: Django, Flask, and FastAPI

Complete installation and configuration guide for Python applications

Install the Atatus Python agent using pip: pip install atatus. For production deployments, add atatus to your requirements.txt or pyproject.toml to ensure the agent is installed consistently across deployments. The Python agent has minimal dependencies and adds less than 5MB to your application's installed package footprint.

For Django applications, add the agent initialization at the top of your settings.py file before the INSTALLED_APPS list. Import atatus and call atatus.init with your api_key, app_name, and env configuration values. The agent automatically instruments Django ORM, cache framework, and request handling pipeline when initialized before the Django app registry is populated.

For Flask applications, initialize the agent before creating the Flask app instance, then wrap your Flask app with the Atatus WSGI middleware. The middleware captures request timing, response status codes, and exception information for every HTTP request.

For FastAPI applications, the agent integrates through ASGI middleware. After initializing the agent, add the AtatusMiddleware to your FastAPI application. FastAPI's async request handling is fully supported — the agent correctly captures async function execution time without blocking the event loop or introducing concurrency issues.

Database monitoring is automatic for SQLAlchemy (both core and ORM), psycopg2 for PostgreSQL, pymysql for MySQL, pymongo for MongoDB, and redis-py. The agent patches the database client libraries at initialization time and captures query timing, query text, and connection pool metrics as spans within each request trace.

Celery task monitoring works automatically when using Celery with a Django or Flask application. The agent captures task execution time, task retry counts, and task error rates as separate transaction types, distinct from HTTP request transactions. Task-to-request distributed tracing links Celery tasks to the HTTP requests that triggered them.

06

Configuring Custom Transactions and Instrumentation

Adding business-meaningful context to your APM data beyond automatic framework instrumentation

Automatic instrumentation captures the technical performance data — HTTP request timing, database query timing, cache operations — but does not inherently understand the operations specific to your business. Custom transactions allow you to define the boundaries of operations that matter to your business: process checkout, generate invoice PDF, sync customer data from CRM, or send marketing email batch. These custom transactions appear as distinct transaction types in Atatus.

In Node.js, create a custom transaction using the Atatus transaction API by calling atatus.startTransaction with the transaction name and type. Wrap your business logic between the start and end calls. The transaction type argument groups transactions in the Atatus interface — use consistent, meaningful type names like business, background, or worker to organize your custom transactions logically.

In Python, use the context manager API for clean custom transaction syntax. The set_custom_data method allows you to attach business-specific metadata to the transaction, making it searchable and filterable in the Atatus interface. Including fields like invoice ID, customer tier, or order ID makes the transaction data significantly more useful for debugging user-specific issues.

Custom spans allow you to add timing instrumentation within an existing transaction for specific operations you want to measure. For example, within a Process Order HTTP request transaction, you might add custom spans around the payment gateway call, the inventory check, and the email notification send. This gives you sub-transaction timing visibility without creating separate transactions for each step.
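
As an illustration of the span concept, here is a plain-Python stand-in built on a context manager. This is not the Atatus span API; it only shows the start/stop timing shape that custom spans give you inside a transaction:

```python
import time
from contextlib import contextmanager

spans = []  # collected (name, type, duration_seconds) tuples

@contextmanager
def span(name, span_type):
    """Time a sub-operation and record it, mimicking a custom span."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, span_type, time.perf_counter() - start))

# Inside a hypothetical "Process Order" transaction:
with span("payment-gateway", "external"):
    time.sleep(0.01)  # stand-in for the payment gateway call
with span("inventory-check", "db"):
    time.sleep(0.005)  # stand-in for the inventory query

print([s[0] for s in spans])  # → ['payment-gateway', 'inventory-check']
```

Each recorded tuple corresponds to one sub-operation's timing, which is exactly the sub-transaction visibility custom spans provide.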

Tagging transactions with user and business context makes the data significantly more useful for debugging user-specific issues. Add user context to transactions by calling setUserContext with the user ID, email, and plan. When a specific user reports a problem, you can search Atatus for that user's recent transactions and find the exact traces that correspond to their experience.

Instrument external API calls that are not automatically captured. Most HTTP clients are automatically instrumented by the Atatus agent, but some custom HTTP implementations may not be. For these cases, wrap the external call with a custom span indicating the span name and type as external. This ensures external API call timing is captured in your transaction traces.

07

Setting Up Error Tracking and Exception Monitoring

Capturing, grouping, and alerting on application errors

The Atatus APM agent automatically captures unhandled exceptions in all three frameworks. For Node.js, uncaught exceptions and unhandled Promise rejections are captured automatically. For Django, unhandled exceptions that result in 500 responses are captured. For Laravel, exceptions handled by the Laravel exception handler that result in 5xx responses are captured. All automatically captured errors include the full stack trace, request context, and user context if set.

Capture handled exceptions that you recover from but want to track. Not all errors should result in a 500 response — some are caught and handled gracefully, but you still want visibility into their frequency. Use the Atatus captureException API to record the error with additional context about the operation being performed when the error occurred.

Error grouping in Atatus automatically groups similar exceptions together based on error type, message pattern, and stack trace fingerprint. A flood of the same exception type from the same code location appears as a single error issue with a count, not as thousands of individual error events. This grouping is essential for distinguishing between a widespread issue and isolated failures.
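
The grouping idea can be sketched in a few lines. Atatus' exact fingerprinting is internal, so this is only an illustration of type-plus-location grouping with message normalization:

```python
import hashlib
from collections import Counter

def fingerprint(exc_type, message, top_frame):
    # Group on exception type plus code location; strip digits from the
    # message so "timeout after 31s" and "timeout after 7s" group together.
    normalized = "".join(c for c in message if not c.isdigit())
    key = f"{exc_type}|{normalized}|{top_frame}"
    return hashlib.sha1(key.encode()).hexdigest()[:12]

events = [
    ("TimeoutError", "timeout after 31s", "orders.py:88"),
    ("TimeoutError", "timeout after 7s",  "orders.py:88"),
    ("ValueError",   "bad sku",           "catalog.py:12"),
]
groups = Counter(fingerprint(*e) for e in events)
print(sorted(groups.values()))  # → [1, 2]  (three events, two error issues)
```

Three raw events collapse into two issues with counts, which is the difference between a widespread problem and an isolated one.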

Set up error rate alerts that notify you when exception rates exceed acceptable thresholds. A baseline error rate of 0.1 to 0.5% is typical for production web applications. Alert with a Slack notification when the error rate exceeds 1% for asynchronous investigation, and page via PagerDuty when it exceeds 5% — a significant regression that requires immediate response.

Integrate error tracking with your issue tracker for structured error lifecycle management. Atatus provides integrations with GitHub Issues, Jira, and Linear that allow you to create tracked issues directly from the Atatus error interface. When an error is captured, the integration creates an issue in your tracker with the error details, stack trace, and affected user count.

Use error sampling judiciously for high-volume error types that would otherwise flood your error tracking inbox. If a known, low-priority error type occurs thousands of times per day, configure error sampling rules in Atatus to capture 10% of occurrences for that error type while capturing 100% of unknown or high-priority error types. Error count statistics remain accurate even when sampling is applied.

08

Configuring Distributed Tracing Across Services

Connecting traces across microservices, APIs, and async processes

Distributed tracing works automatically between services that all use the Atatus agent, as long as the services communicate via HTTP. The Atatus agent injects trace context headers compatible with the W3C Trace Context standard into outgoing HTTP requests and extracts them from incoming requests. When Service A calls Service B and both use Atatus, the trace from Service A's transaction automatically connects to Service B's transaction, creating a complete cross-service trace view.
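
A minimal sketch of what the injected header looks like under the W3C Trace Context format — version, 128-bit trace ID, 64-bit parent span ID, and flags. The agent does this for you; the helper names here are hypothetical:

```python
import os
import re

def make_traceparent(trace_id=None, span_id=None):
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = trace_id or os.urandom(16).hex()  # 32 hex chars
    span_id = span_id or os.urandom(8).hex()     # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"         # 01 = sampled flag set

def parse_traceparent(header):
    """Extract trace context the way a downstream service would."""
    m = re.fullmatch(
        r"([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    return m and {"trace_id": m.group(2), "parent_span_id": m.group(3)}

outgoing = make_traceparent()          # injected into the outgoing request
ctx = parse_traceparent(outgoing)      # extracted by the receiving service
```

Because the trace ID survives the hop, the downstream service's transaction attaches to the same trace.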

For cross-language scenarios such as Node.js calling a Python service, the W3C Trace Context compatibility ensures that traces connect correctly even across different Atatus agent implementations. The Atatus agents for all supported languages use the same trace propagation header format, so distributed tracing works transparently in polyglot microservice architectures.

Async communication via message queues including RabbitMQ, Kafka, and AWS SQS requires explicit trace context propagation because the automatic HTTP header injection does not apply to queue message publishing. The Atatus agent provides publisher and consumer instrumentation helpers that attach trace context to queue message metadata, creating complete end-to-end traces that span synchronous HTTP calls and asynchronous queue processing.

Service maps are generated automatically by Atatus from distributed tracing data. Once multiple services are instrumented and communicating, the service map in the Atatus APM interface shows the relationships between services, the request flow between them, and the health status including error rate and response time of each service and each service-to-service connection.

Trace sampling strategy for distributed traces requires coordination across all services. If Service A samples traces at 10% and Service B samples independently at 10%, only 1% of A-to-B distributed traces will be complete. Configure head-based sampling at the entry point service and use a consistent sampling decision that propagates to all downstream services via the trace context headers.
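
The arithmetic behind this is easy to verify with a quick simulation (framework-agnostic Python, not Atatus configuration):

```python
import random

random.seed(7)
TRACES = 100_000

# Independent 10% sampling in each service: both must keep the trace.
independent = sum(1 for _ in range(TRACES)
                  if random.random() < 0.10 and random.random() < 0.10)

# Head-based: the entry service decides once and the decision propagates.
head_based = sum(1 for _ in range(TRACES) if random.random() < 0.10)

print(round(independent / TRACES, 3))  # roughly 0.01 — only ~1% complete
print(round(head_based / TRACES, 3))   # roughly 0.10 — all kept traces complete
```

Independent decisions multiply (0.10 × 0.10 = 0.01), which is why the sampling decision must be made once at the entry point and carried in the trace context headers.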

External API call tracing provides visibility into third-party service dependencies. When your application calls a payment gateway, a shipping API, or a cloud service API, the Atatus agent captures the HTTP call as a span including the target URL, HTTP method, response status code, and response time. This data makes it immediately clear whether a performance problem originates in your own code or in an external dependency.

09

Creating Your First APM Dashboard

Building a useful service health dashboard from your APM data

Your first APM dashboard should focus on the four metrics that matter most for service health: request throughput in requests per minute, error rate as a percentage of requests that result in errors, response time percentiles at p50, p95, and p99, and Apdex score as a single satisfaction metric derived from response time thresholds. These four metrics give you immediate awareness of whether your service is healthy without requiring you to correlate multiple charts manually.
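
Apdex itself is a simple formula: requests at or under a threshold T are satisfied, those between T and 4T are tolerating (counted at half weight), and the rest are frustrated. A sketch, assuming a 500ms threshold:

```python
def apdex(response_times_ms, threshold_ms=500):
    """Apdex = (satisfied + tolerating / 2) / total requests."""
    satisfied = sum(1 for t in response_times_ms if t <= threshold_ms)
    tolerating = sum(1 for t in response_times_ms
                     if threshold_ms < t <= 4 * threshold_ms)
    return (satisfied + tolerating / 2) / len(response_times_ms)

# 3 satisfied, 1 tolerating (900ms <= 2000ms), 1 frustrated (2500ms)
print(apdex([120, 300, 450, 900, 2500]))  # → 0.7
```

A single score between 0 and 1 summarizes user satisfaction without reading three percentile charts.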

Use the Atatus dashboard builder to create a new dashboard. Add a Throughput time-series chart using a 5-minute rollup interval. This shows request volume over time, making deployment spikes, traffic drops, and anomalous patterns immediately visible. Add a threshold line at your expected minimum throughput to make anomalous drops obvious at a glance.

Add an Error Rate time-series chart with the Y-axis as a percentage from 0 to 100%. Configure a horizontal threshold line at your alert threshold, typically 1 to 2%. Color the area above the threshold to make violations immediately obvious. This chart should be the most prominent on the dashboard.

Add response time percentile charts showing p50, p95, and p99 as separate lines on a single chart. The gap between p50 and p99 is particularly revealing — a small gap means most users get similar performance, while a large gap indicates that a minority of users experience significantly worse performance than the typical user.

Add a Top Transactions table that shows the 10 most time-consuming transaction types ranked by total impact, calculated as average duration multiplied by request count. This table identifies which operations are consuming the most total server time and prioritizes optimization work.
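
The ranking column is straightforward to reproduce. A sketch with made-up numbers shows why a moderately fast, high-volume endpoint can outrank a slow but rare one:

```python
transactions = {
    # name: (average duration in ms, request count)
    "GET /search":    (180, 50_000),   # total impact: 9,000,000 ms
    "POST /checkout": (900, 2_000),    # total impact: 1,800,000 ms
    "GET /health":    (2, 400_000),    # total impact:   800,000 ms
}
by_impact = sorted(transactions.items(),
                   key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
print([name for name, _ in by_impact])
# → ['GET /search', 'POST /checkout', 'GET /health']
```

Ranking by total impact rather than average duration directs optimization work toward the code consuming the most aggregate server time.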

Add infrastructure context panels to your service dashboard: CPU utilization, memory usage, and for Node.js the event loop lag. Correlating application performance metrics with infrastructure metrics on the same timeline makes it much easier to identify whether performance issues are code-related or resource-constrained.

10

Setting Up Alerts for Your Application

Configuring meaningful alerts that catch real problems without crying wolf

Start with three foundational alerts for any new service: an availability alert for when the service is not receiving any traffic when it should be, an error rate alert for when error rate exceeds an acceptable threshold, and a response time alert for when p95 response time exceeds an acceptable threshold. These three alerts cover the most common classes of user-impacting production issues and should be the minimum alerting configuration for any production service.

Configure the availability alert to trigger when request throughput drops below your expected minimum for more than 5 consecutive minutes. Set the expected minimum to 50% of your typical off-peak throughput — this catches complete service outages and significant partial outages while avoiding alerts on normal low-traffic periods overnight or on weekends.
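
The evaluation logic reduces to a consecutive-minutes check. This is illustrative Python, not the Atatus alert engine:

```python
def availability_breach(throughput_per_min, expected_min, consecutive=5):
    """True if throughput stayed below the floor for `consecutive` minutes."""
    streak = 0
    for rpm in throughput_per_min:
        streak = streak + 1 if rpm < expected_min else 0
        if streak >= consecutive:
            return True
    return False

# Floor at 50% of typical off-peak throughput, e.g. 200 of 400 rpm.
print(availability_breach([380, 150, 120, 90, 60, 40], 200))   # → True
print(availability_breach([380, 150, 400, 90, 60, 40], 200))   # → False (streak resets)
```

Requiring a consecutive run below the floor, rather than a single low sample, is what keeps brief dips from paging anyone.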

Configure the error rate alert with a two-tier approach: a warning threshold at 1% error rate with Slack notification, and a critical threshold at 5% error rate with PagerDuty page. Evaluate the condition over a 5-minute window to smooth out transient spikes from deployments or brief network glitches.

Configure the response time alert based on your application's established performance baseline. Use Atatus anomaly detection mode for response time alerting rather than fixed thresholds — anomaly detection compares the current value to the historical baseline for the same time of day and day of week, alerting only when the deviation exceeds a configured number of standard deviations.

Add deployment markers to your alerting context. Configure your CI/CD pipeline to send a deployment event to Atatus whenever a new version is deployed. Deployment markers appear on all time-series charts as vertical lines, making it immediately obvious whether a performance change correlates with a deployment.

Test your alerts before relying on them in production. After configuring alert conditions, use Atatus alert test functionality to verify that notification delivery is working correctly — the right channels receive the notification, the notification format includes the information needed for triage, and the escalation policy routes correctly for both warning and critical severity levels.

11

Advanced Configuration Options

Fine-tuning the Atatus agent for production performance and security requirements

Trace sampling rate configuration balances observability completeness against data volume and cost. For low-traffic services under 100 RPM, 100% trace sampling is appropriate and cost-effective. For high-traffic services at 1,000 or more RPM, a 10 to 25% sampling rate captures statistically representative performance data without excessive storage overhead. Regardless of the overall sampling rate, configure the agent to capture 100% of error traces and 100% of traces that exceed the p99 threshold.
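
The resulting keep/drop decision can be sketched as follows; the threshold and rate values are illustrative, not Atatus defaults:

```python
import random

def keep_trace(duration_ms, is_error, p99_ms=1200, base_rate=0.10,
               rng=random.random):
    """Always keep errors and p99-exceeding traces; sample everything else."""
    if is_error or duration_ms > p99_ms:
        return True
    return rng() < base_rate

assert keep_trace(2000, is_error=False)  # slow trace → always kept
assert keep_trace(50, is_error=True)     # error trace → always kept
```

Fast, successful requests are sampled at the base rate, while the traces you most need during an incident are never dropped.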

Sensitive data filtering is critical for production deployments in applications that handle personal data, payment information, or credentials. The Atatus agent's sanitization configuration allows you to define patterns for request parameters, header names, and query parameter names that should be redacted before being sent to Atatus. Always redact passwords, credit card numbers, social security numbers, authentication tokens, and API keys.
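
A sketch of the kind of redaction rule involved, using hypothetical patterns. In production, configure the agent's own sanitization settings rather than rolling your own:

```python
import re

# Hypothetical patterns: sensitive key names, and card-like digit runs.
REDACT_KEYS = re.compile(r"(password|token|api[_-]?key|ssn|card)", re.I)
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def sanitize(params):
    """Redact sensitive request parameters before they leave the process."""
    clean = {}
    for key, value in params.items():
        if REDACT_KEYS.search(key) or CARD_PATTERN.search(str(value)):
            clean[key] = "[REDACTED]"
        else:
            clean[key] = value
    return clean

print(sanitize({"email": "a@b.co", "password": "hunter2",
                "note": "paid with 4111 1111 1111 1111"}))
# → {'email': 'a@b.co', 'password': '[REDACTED]', 'note': '[REDACTED]'}
```

Matching on both key names and value patterns catches sensitive data that appears under an innocuous field name.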

Ignore rules allow you to exclude specific URLs, transaction names, or user agents from APM data collection. Health check endpoints that generate high-frequency traffic add noise to throughput metrics without providing useful performance data. Add ignore rules for these endpoints, and similarly exclude monitoring and synthetic check user agents to prevent synthetic traffic from inflating real-user performance metrics.

Custom metric emission allows your application to send business metrics to Atatus alongside the automatic APM metrics. Use the Atatus metrics API to emit counters, gauges, and histograms with tags for dimensional filtering. These custom metrics appear in the Atatus metrics explorer and can be used in dashboards and alerts alongside the standard APM metrics.

Multi-environment configuration management becomes important when the same codebase is deployed to development, staging, and production. Use environment variables exclusively for Atatus configuration and manage environment-specific values in your deployment configuration such as Kubernetes ConfigMaps, AWS Parameter Store, or Heroku config vars. Never hard-code environment-specific values in application code.

Agent overhead monitoring is a good operational practice for high-throughput applications where every millisecond of request processing time matters. The Atatus agent is designed for minimal overhead — typically adding less than 1ms to average request processing time and consuming less than 50MB of additional memory per agent instance. Verify these numbers for your specific application by comparing instrumented versus uninstrumented response time percentiles during a controlled load test.

12

Troubleshooting Common Setup Issues

Diagnosing and resolving the most frequent APM installation problems

No data appearing in the Atatus dashboard is the most common first-run problem. The diagnostic checklist: confirm the API key is correct by copying it directly from the Atatus dashboard, confirm the agent is initialized before any framework code loads, confirm that your application server has outbound HTTPS access to ingest.atatus.com, and check the application startup logs for Atatus agent error messages that indicate initialization failure.

Missing traces for some but not all transactions often indicates that the agent is initialized correctly for some process workers but not others. In multi-process server configurations with Gunicorn or PM2, ensure that the agent initialization code runs in each worker process, not only in the master process. Node.js cluster mode is a common source of this issue — the agent must be initialized in the worker process, not the cluster master.

Database queries not appearing in traces is typically a library version mismatch between the database client version and the agent's supported version range. Check the Atatus compatibility matrix for your specific database client and version. For Node.js, the agent patches database clients at require time — if the database client is required before the Atatus agent, the patch is not applied.

High agent memory usage beyond the expected 50MB overhead usually indicates that the agent is accumulating unsent data due to connectivity issues. Check whether your network configuration is blocking outbound connections to ingest.atatus.com, or whether a firewall rule is dropping the connections silently without returning an error. The agent retries failed sends with exponential backoff, but prolonged connectivity issues will cause the send buffer to grow.

Incorrect transaction naming causes APM data to be grouped incorrectly, making it difficult to analyze performance by transaction type. In Express.js, transactions are named by route pattern such as GET /users/:id rather than by request URL such as GET /users/12345 — this is the correct behavior. If you see many unique transaction names that look like URLs with IDs in them, the agent may be using a non-parameterized naming strategy. Review your framework route registration to ensure routes are registered with parameter placeholders.
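
The parameterized-naming idea reduces to collapsing variable path segments into placeholders. The agent derives this from your route registrations; the sketch below only demonstrates the grouping effect:

```python
import re

def normalize(url_path):
    """Collapse numeric and UUID path segments into placeholders so
    transactions group by route pattern, not by concrete URL."""
    uuid = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
    path = re.sub(uuid, ":uuid", url_path)
    return re.sub(r"/\d+(?=/|$)", "/:id", path)

print(normalize("/users/12345/orders/678"))  # → '/users/:id/orders/:id'
```

With this normalization, a million distinct URLs roll up into a handful of transaction names you can actually analyze.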

Distributed trace context not propagating between services is usually caused by a custom HTTP client that is not automatically instrumented by the agent. Check whether you are using a standard HTTP library or a custom client built on lower-level socket APIs. Standard libraries are instrumented automatically; custom clients require manual trace context injection using the agent propagation API.

Key Takeaways

  • The Atatus APM agent installs via standard package managers — Composer, npm, or pip — and requires only three configuration values to start collecting data: API key, application name, and environment.
  • The most critical installation requirement for all three languages is that the agent must be initialized before any other application code loads — failure to do this is the most common cause of missing traces.
  • Database query monitoring, cache operation monitoring, and external HTTP call tracking are all automatic with no additional configuration required for standard database clients and HTTP libraries.
  • Custom transactions, custom spans, and user context enrichment allow you to add business-meaningful context to APM data beyond what automatic framework instrumentation provides.
  • Distributed tracing works automatically between services that all use Atatus agents and communicate via HTTP — the W3C Trace Context standard ensures cross-language compatibility.
  • A minimum viable alert configuration covers three conditions: service availability when throughput drops unexpectedly, error rate when it exceeds an acceptable threshold, and response time when p95 exceeds an acceptable threshold.
  • Sensitive data should always be configured for redaction before deploying to production — use the agent sanitization configuration to define patterns for any fields that could contain personal or financial data.
Get started today

Monitor your applications with Atatus

Put the concepts from this guide into practice. Set up full-stack observability in minutes with no credit card required.

No credit card required · 14-day free trial · Setup in minutes
