
Post 8 of 15 | Phase 4: Fault Tolerance


Fault Tolerance — Keeping Your App Alive When Things Break

In every post so far we have assumed that services are always available and always respond correctly. In reality, services crash, slow down, run out of memory, and fail in unexpected ways. This is not a rare edge case. In a distributed system with many services, something is always failing somewhere.

Fault tolerance is the set of techniques that keep your overall application working even when individual parts of it are broken. Moleculer has four built-in fault tolerance mechanisms: Timeout, Retry, Circuit Breaker, and Bulkhead. This post covers all four.


The Restaurant Analogy Revisited

Before writing any code, understand the problem through a real-world scenario.

You own a restaurant. One day your kitchen equipment breaks down and every order takes 45 minutes instead of 15. Customers are waiting, getting frustrated, and new customers are still walking in and placing orders. Soon the entire restaurant is backed up. Nobody is getting served. The restaurant collapses under the load of waiting orders.

What should you have done?

  • After waiting 20 minutes with no food, tell the customer we cannot serve you right now. That is a Timeout.
  • If the kitchen fails on the first attempt, try again once or twice before giving up. That is a Retry.
  • If the kitchen has failed 10 times in a row, stop sending orders there and tell customers immediately instead of making them wait. That is a Circuit Breaker.
  • Only allow 5 orders in the kitchen at once. If more come in, queue them or reject them. That is a Bulkhead.

These four concepts apply directly to your microservices.


Fault Tolerance Mechanism 1: Timeout

A timeout says: if this action does not respond within a certain time, stop waiting and throw an error.

Without a timeout, a slow service can block your entire application. Your calling service waits forever, occupying resources, and eventually your whole system grinds to a halt.

Global timeout in moleculer.config.js:

module.exports = {
    requestTimeout: 10 * 1000  // 10 seconds for every action call
};

Per-call timeout override:

actions: {
    async createOrder(ctx) {
        // This specific call has a 3 second timeout
        // Overrides the global 10 second timeout
        const user = await ctx.call("user.getById", { id: ctx.params.userId }, {
            timeout: 3000
        });
        return user;
    }
}

Per-action timeout on the action definition itself:

module.exports = {
    name: "report",

    actions: {
        generate: {
            // This action allows up to 30 seconds
            // because generating reports is slow
            timeout: 30000,
            handler(ctx) {
                // slow report generation
            }
        }
    }
};

What happens when a timeout occurs:

Moleculer throws a RequestTimeoutError. The calling service receives this error and can handle it:

actions: {
    async getDashboard(ctx) {
        try {
            const data = await ctx.call("slow.service", {}, { timeout: 3000 });
            return data;
        } catch (err) {
            if (err.name === "RequestTimeoutError") {
                // Return a fallback response instead of crashing
                return { message: "Service is slow right now. Please try again." };
            }
            throw err;
        }
    }
}

Setting timeout to zero disables it for that call:

// No timeout for this call — wait forever
const result = await ctx.call("long.running.job", {}, { timeout: 0 });

Fault Tolerance Mechanism 2: Retry

A retry says: if this action call fails, automatically try again a few times before giving up.

Some failures are temporary. A service might be restarting, a database connection might be briefly lost, a network blip might have occurred. Retrying after a short delay often resolves these temporary failures without any user-visible error.

Global retry policy in moleculer.config.js:

module.exports = {
    retryPolicy: {
        enabled: true,
        retries: 3,          // Try up to 3 times after the first failure
        delay: 100,          // Wait 100ms before first retry
        maxDelay: 2000,      // Never wait more than 2 seconds between retries
        factor: 2,           // Double the delay each time (exponential backoff)
        check: err => err && !!err.retryable  // Only retry if error is marked retryable
    }
};

With factor: 2 and delay: 100, the retry timing looks like this:

First attempt  → fails
Wait 100ms
Second attempt → fails
Wait 200ms
Third attempt  → fails
Wait 400ms
Fourth attempt → fails or succeeds
Give up if still failing

This is called exponential backoff. Waiting longer between each retry gives the failing service more time to recover.
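To make the schedule concrete, here is a small helper — a sketch only, not a Moleculer API — that computes the wait before each retry using the same option names as retryPolicy:

```javascript
// Sketch only — not part of Moleculer. Computes the wait before each
// retry, mirroring the retryPolicy options: delay, factor, maxDelay.
function retryDelays({ retries, delay, factor, maxDelay }) {
    const delays = [];
    let wait = delay;
    for (let i = 0; i < retries; i++) {
        delays.push(Math.min(wait, maxDelay)); // Never wait longer than maxDelay
        wait *= factor;                        // Exponential backoff
    }
    return delays;
}

console.log(retryDelays({ retries: 3, delay: 100, factor: 2, maxDelay: 2000 }));
// → [ 100, 200, 400 ]
```

With more retries the schedule would continue 800, 1600, and then stay capped at 2000 because of maxDelay.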

Per-call retry override:

actions: {
    async processPayment(ctx) {
        // Payment is critical — retry up to 5 times
        const result = await ctx.call("payment.charge", {
            amount: ctx.params.amount
        }, {
            retries: 5
        });
        return result;
    }
}

Making your errors retryable:

By default, Moleculer only retries errors marked as retryable. You control this when throwing errors:

const { MoleculerRetryableError } = require("moleculer").Errors;

actions: {
    async getFromExternalAPI(ctx) {
        try {
            // Await the external API call here
        } catch (err) {
            if (err.message.includes("ECONNRESET")) {
                // Network error — worth retrying
                throw new MoleculerRetryableError("External API unavailable", 503);
            }
            // Logic error — do not retry
            throw err;
        }
    }
}

Important: Do not retry non-idempotent operations blindly

An idempotent operation is one that produces the same result no matter how many times you run it. Reading data is idempotent. Charging a credit card is not — you do not want to charge three times just because the response was slow.

Be careful enabling global retries. It is safer to enable retries per-call for operations you know are safe to retry.
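One common safeguard is an idempotency key: the caller sends a unique key with each logical request, and the service remembers which keys it has already processed. Here is a minimal sketch — the in-memory Map stands in for a persistent store, and all the names are invented for illustration:

```javascript
// Sketch: deduplicating a non-idempotent operation with an idempotency key.
// The Map is a stand-in for a persistent store; names are hypothetical.
const processedCharges = new Map();

async function chargeWithKey(idempotencyKey, amount) {
    if (processedCharges.has(idempotencyKey)) {
        // A retry of a request we already handled — return the earlier
        // result instead of charging again
        return processedCharges.get(idempotencyKey);
    }
    const result = { chargeId: `ch_${idempotencyKey}`, amount };
    processedCharges.set(idempotencyKey, result);
    return result;
}
```

A retried call with the same key returns the original result instead of charging twice, which makes the operation safe to retry.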


Fault Tolerance Mechanism 3: Circuit Breaker

This is the most important fault tolerance pattern. Understand it well.

The Problem Without Circuit Breaker

Imagine your user service is down. Every time the order service calls user.getById, it waits 10 seconds for the timeout, then fails. At 100 requests per second, up to 1,000 calls can be stuck waiting at any moment, occupying memory and connections. Your order service slows to a crawl because of one broken downstream service.

What Circuit Breaker Does

The Circuit Breaker monitors calls to each service. If too many calls fail within a time window, it opens the circuit. An open circuit means it stops trying to call the service immediately — it throws an error right away without waiting for a timeout. This protects the calling service from being dragged down by a broken dependency.

There are three states:

Closed — Normal operation. Calls go through. Failures are counted.

Open — Too many failures detected. Calls are blocked immediately. No actual call is made. Error is thrown instantly.

Half-Open — After a cooldown period, one test call is allowed through. If it succeeds, circuit closes again. If it fails, circuit stays open.

CLOSED → (too many failures) → OPEN → (cooldown passes) → HALF-OPEN → (test succeeds) → CLOSED
                                                                      → (test fails)    → OPEN
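The state transitions above can be sketched as a tiny class. This is a toy model, not Moleculer's implementation — the real breaker also tracks a rolling time window and a minimum request count:

```javascript
// Toy model of the three circuit breaker states.
// Not Moleculer's implementation; ToyCircuitBreaker is an invented name.
class ToyCircuitBreaker {
    constructor({ failureThreshold = 3, cooldownMs = 5000 } = {}) {
        this.state = "closed";
        this.failures = 0;
        this.failureThreshold = failureThreshold;
        this.cooldownMs = cooldownMs;
        this.openedAt = 0;
    }

    async exec(fn) {
        if (this.state === "open") {
            if (Date.now() - this.openedAt >= this.cooldownMs) {
                this.state = "half-open";   // Cooldown over — allow one test call
            } else {
                throw new Error("CircuitBreakerOpen"); // Fail fast, skip the call
            }
        }
        try {
            const result = await fn();
            this.state = "closed";          // Success — close the circuit
            this.failures = 0;
            return result;
        } catch (err) {
            this.failures++;
            if (this.state === "half-open" || this.failures >= this.failureThreshold) {
                this.state = "open";        // Block further calls
                this.openedAt = Date.now();
            }
            throw err;
        }
    }
}
```

Three consecutive failures flip it to open; subsequent calls fail instantly until the cooldown passes, and a single successful test call in half-open closes it again.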

Enabling Circuit Breaker in moleculer.config.js:

module.exports = {
    circuitBreaker: {
        enabled: true,
        threshold: 0.5,        // Open if 50% of calls fail
        minRequestCount: 20,   // Need at least 20 requests before evaluating
        windowTime: 60,        // Look at failures in the last 60 seconds
        halfOpenTime: 10000,   // Wait 10 seconds before allowing a test call
        check: err => err && err.code >= 500  // Only count 5xx errors as failures
    }
};

Let us go through each option:

  • threshold: 0.5 means if 50 percent or more of calls fail, open the circuit
  • minRequestCount: 20 means do not open the circuit until at least 20 calls have been made. Prevents opening on just one or two failures during startup.
  • windowTime: 60 means count failures that happened in the last 60 seconds only
  • halfOpenTime: 10000 means after the circuit opens, wait 10 seconds before trying one test call
  • check defines what counts as a failure. Here only server errors (500+) count. A 404 or validation error does not trip the circuit breaker.

What the caller sees:

actions: {
    async createOrder(ctx) {
        try {
            const user = await ctx.call("user.getById", { id: ctx.params.userId });
            return user;
        } catch (err) {
            if (err.name === "CircuitBreakerOpenError") {
                // Circuit is open. User service is known to be broken.
                // Return a graceful response instead of making the user wait.
                return { error: "User service is temporarily unavailable" };
            }
            throw err;
        }
    }
}

Without the circuit breaker, every call waits 10 seconds before failing. With the circuit breaker, once it opens, every call fails in milliseconds. Your order service stays responsive even though user service is broken.

Testing Circuit Breaker

Create a file named circuit-test.js to see circuit breaker behavior:

"use strict";

const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker({
    logLevel: "info",
    circuitBreaker: {
        enabled: true,
        threshold: 0.5,
        minRequestCount: 3,   // Low number for testing purposes
        windowTime: 60,
        halfOpenTime: 5000
    }
});

// A service that always fails
broker.createService({
    name: "broken",
    actions: {
        doSomething(ctx) {
            throw new Error("I am always broken");
        }
    }
});

// A service that calls the broken service
broker.createService({
    name: "caller",
    actions: {
        async test(ctx) {
            try {
                await ctx.call("broken.doSomething", {});
            } catch (err) {
                return `Error type: ${err.name} — ${err.message}`;
            }
        }
    }
});

broker.start()
    .then(async () => {
        // Make several calls — watch the error type change
        for (let i = 1; i <= 8; i++) {
            const result = await broker.call("caller.test", {});
            console.log(`Call ${i}: ${result}`);
            await new Promise(r => setTimeout(r, 200));
        }
        await broker.stop();
    });

Run this file:

node circuit-test.js

Output:

Call 1: Error type: Error — I am always broken
Call 2: Error type: Error — I am always broken
Call 3: Error type: Error — I am always broken
Call 4: Error type: CircuitBreakerOpenError — Circuit breaker is open
Call 5: Error type: CircuitBreakerOpenError — Circuit breaker is open
Call 6: Error type: CircuitBreakerOpenError — Circuit breaker is open
Call 7: Error type: CircuitBreakerOpenError — Circuit breaker is open
Call 8: Error type: CircuitBreakerOpenError — Circuit breaker is open

After 3 failures the circuit opens. Subsequent calls fail instantly without actually calling the broken service.


Fault Tolerance Mechanism 4: Bulkhead

A bulkhead limits how many concurrent calls can be active at the same time for a service. If the limit is reached, additional calls are queued or rejected.

The name comes from ship design. A bulkhead is a wall that divides a ship into sections. If one section floods, the bulkhead prevents the entire ship from sinking. In software, if one service is overwhelmed, the bulkhead prevents it from taking down everything else.

Enabling Bulkhead in moleculer.config.js:

module.exports = {
    bulkhead: {
        enabled: true,
        concurrency: 10,      // Only 10 calls active at the same time
        maxQueueSize: 100     // Queue up to 100 additional calls
    }
};

With these settings:

  • First 10 calls execute immediately
  • Calls 11 to 110 wait in a queue
  • Call 111 and beyond are rejected with a QueueIsFullError
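The queue-or-reject behavior can be sketched as a small concurrency limiter. This is again a toy model, not Moleculer's code — ToyBulkhead and its error message are invented for illustration:

```javascript
// Toy concurrency limiter illustrating the bulkhead behavior.
// Not Moleculer's implementation; all names here are hypothetical.
class ToyBulkhead {
    constructor({ concurrency, maxQueueSize }) {
        this.concurrency = concurrency;
        this.maxQueueSize = maxQueueSize;
        this.active = 0;
        this.queue = [];
    }

    run(fn) {
        return new Promise((resolve, reject) => {
            const task = { fn, resolve, reject };
            if (this.active < this.concurrency) {
                this._exec(task);                 // A slot is free — run now
            } else if (this.queue.length < this.maxQueueSize) {
                this.queue.push(task);            // Wait in the queue
            } else {
                reject(new Error("QueueIsFull")); // Queue full — reject
            }
        });
    }

    _exec(task) {
        this.active++;
        Promise.resolve()
            .then(task.fn)
            .then(task.resolve, task.reject)
            .finally(() => {
                this.active--;
                const next = this.queue.shift();
                if (next) this._exec(next);       // Start the next queued task
            });
    }
}
```

With concurrency: 2 and maxQueueSize: 1, a fourth simultaneous call is rejected immediately while the third waits its turn.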

Per-action bulkhead:

You can also set bulkhead limits on individual actions:

module.exports = {
    name: "report",

    actions: {
        generate: {
            // Report generation is heavy — only 3 at a time
            bulkhead: {
                enabled: true,
                concurrency: 3,
                maxQueueSize: 10
            },
            async handler(ctx) {
                // Heavy report generation
                await generateHeavyReport();
                return { status: "done" };
            }
        },

        // Other actions in this service are not limited
        list: {
            handler(ctx) {
                return [];
            }
        }
    }
};

This is useful when one action is resource-heavy and you do not want it to consume all available resources and starve other actions.


Fallback — The Safety Net

A fallback is a function that runs when an action call fails for any reason — timeout, circuit open, service not found, any error. Instead of propagating the error to the user, you return a default response.

Fallback can be defined at the call level:

actions: {
    async getDashboard(ctx) {
        const result = await ctx.call("recommendations.get", {
            userId: ctx.params.userId
        }, {
            // If recommendations service fails for any reason,
            // return this instead of throwing an error
            fallbackResponse: {
                recommendations: [],
                message: "Recommendations unavailable right now"
            }
        });
        return result;
    }
}

Fallback can also be a function:

const result = await ctx.call("recommendations.get", {
    userId: ctx.params.userId
}, {
    fallbackResponse(ctx, err) {
        ctx.broker.logger.warn(`Recommendations failed: ${err.message}`);
        return {
            recommendations: [],
            message: "Showing default recommendations"
        };
    }
});

Use fallback for non-critical features. Recommendations, personalization, analytics — these are nice to have but your app should work without them.


Putting It All Together — Production-Ready Config

Here is a realistic moleculer.config.js for a production application with all fault tolerance features enabled:

"use strict";

module.exports = {
    namespace: "ecommerce",
    nodeID: null,  // null = auto-generated from hostname + process ID

    logLevel: "warn",

    transporter: "nats://localhost:4222",

    // Global timeout — 10 seconds
    requestTimeout: 10 * 1000,

    // Retry policy — retry up to 3 times with exponential backoff
    retryPolicy: {
        enabled: true,
        retries: 3,
        delay: 100,
        maxDelay: 2000,
        factor: 2,
        check: err => err && !!err.retryable
    },

    // Circuit breaker
    circuitBreaker: {
        enabled: true,
        threshold: 0.5,
        minRequestCount: 20,
        windowTime: 60,
        halfOpenTime: 10 * 1000,
        check: err => err && err.code >= 500
    },

    // Bulkhead — limit concurrent calls per service
    bulkhead: {
        enabled: true,
        concurrency: 10,
        maxQueueSize: 100
    },

    // Load balancing
    registry: {
        strategy: "RoundRobin",
        preferLocal: true
    }
};

When to Use Each Mechanism

Timeout         — Always. Set a global timeout. Every call should have a limit.

Retry           — For network calls, external APIs, temporary failures.
                  Not for payment processing or any non-idempotent operation.

Circuit Breaker — Always in production. Protects healthy services from
                  being dragged down by broken ones.

Bulkhead        — For resource-heavy operations like report generation,
                  file processing, or calls to slow external services.

Fallback        — For non-critical features. Recommendations, analytics,
                  personalization. Your app should work without them.

Summary

  • Fault tolerance is not optional in production microservices. Things will break.
  • Timeout prevents your app from waiting forever. Set a global timeout always.
  • Retry automatically retries failed calls. Use exponential backoff. Be careful with non-idempotent operations.
  • Circuit Breaker monitors failure rates. When too many fail, it opens and rejects calls instantly. This protects healthy services from broken ones.
  • Circuit has three states: Closed (normal), Open (blocking), Half-Open (testing recovery).
  • Bulkhead limits concurrent calls to prevent resource exhaustion.
  • Fallback provides a default response when everything else fails.
  • Configure all four in moleculer.config.js for global behavior.
  • Override per-call or per-action when specific operations need different limits.

Up Next

Post 9 covers Caching — one of the easiest wins for performance in Moleculer. Built-in caching with zero extra code on most actions. We will cover memory caching, Redis caching, cache keys, TTL, and how to invalidate cache when data changes.


Course Progress: 8 of 15 posts complete.


Post 7 of 15 | Phase 3: Communication


Transporters — Making Services Talk Across Different Machines

In every post so far, all your services have been running inside one Node.js process, on one machine. The broker routes calls between them directly in memory. This is great for development but not how real microservices work in production.

In production, each service runs as a separate process, possibly on a completely different machine or container. How does the order service on Machine A call the user service on Machine B? That is exactly what Transporters solve.


What is a Transporter?

A transporter is a communication channel between brokers running on different nodes. When you add a transporter, every broker connects to it. They use it to send messages to each other.

Think of it like a walkie-talkie network. Without a transporter, each person can only talk to people in the same room. With a transporter, everyone connects to the same radio frequency and can talk to anyone regardless of where they are.

Without Transporter:
[Node A: user-service] ←→ [Node A: order-service]
Only works because they are in the same process.

With Transporter (NATS):
[Node A: user-service] ←→ [NATS Server] ←→ [Node B: order-service]
Works across machines, containers, data centers.

The best part: from your code's perspective, nothing changes. You still write ctx.call("user.getById", { id: "1" }). Moleculer figures out where the user service is and routes the call through the transporter automatically.


Available Transporters

Moleculer supports several transporters out of the box:

  • TCP — built-in, no external server needed, good for simple setups
  • NATS — lightweight, extremely fast, recommended for most projects
  • Redis — you probably already have Redis, easy to set up
  • MQTT — good for IoT applications
  • AMQP — RabbitMQ, good for enterprise setups
  • Kafka — good for high throughput event streaming

In this post we will cover TCP, NATS, and Redis. These three cover 90 percent of real-world use cases.


Transporter 1: TCP — No External Server Needed

The TCP transporter is built into Moleculer. You do not need to install or run any external server. Brokers discover each other automatically using UDP broadcasting on the local network.

This is perfect for:

  • Local development with multiple processes
  • Simple production setups where all nodes are on the same network
  • Getting started without setting up NATS or Redis

// moleculer.config.js
module.exports = {
    nodeID: "node-1",
    transporter: "TCP"
};

That is it. Just set transporter to the string "TCP". No other configuration needed for basic usage.


Seeing TCP Transporter in Action — Two Process Setup

Let us actually run two separate Node.js processes and watch them communicate. This is the most important exercise in this post.

First, create a clean folder for this exercise:

mkdir transporter-demo
cd transporter-demo
npm init -y
npm install moleculer

Create two service files.

user-node.js — This process runs the user service

"use strict";

const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker({
    nodeID: "node-user",
    transporter: "TCP",
    logLevel: "info"
});

broker.createService({
    name: "user",
    actions: {
        getById(ctx) {
            this.logger.info(`getById called for id: ${ctx.params.id}`);
            // Simulate a database lookup
            const users = {
                "1": { id: "1", name: "Rahul Sharma", email: "rahul@example.com" },
                "2": { id: "2", name: "Priya Singh", email: "priya@example.com" }
            };
            return users[ctx.params.id] || null;
        }
    }
});

broker.start()
    .then(() => {
        console.log("User node is running. Waiting for calls...");
    });

order-node.js — This process runs the order service and calls the user service

"use strict";

const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker({
    nodeID: "node-order",
    transporter: "TCP",
    logLevel: "info"
});

broker.createService({
    name: "order",
    actions: {
        async create(ctx) {
            this.logger.info(`Creating order for user: ${ctx.params.userId}`);

            // This call goes to the user service on a DIFFERENT process
            // Moleculer routes it automatically through TCP
            const user = await ctx.call("user.getById", {
                id: ctx.params.userId
            });

            if (!user) {
                throw new Error("User not found");
            }

            return {
                orderId: String(Date.now()),
                product: ctx.params.product,
                user: user.name,
                status: "created"
            };
        }
    }
});

broker.start()
    .then(async () => {
        console.log("Order node is running. Waiting 2 seconds for discovery...");

        // Wait for service discovery to complete
        await new Promise(resolve => setTimeout(resolve, 2000));

        console.log("Calling order.create...");

        // Call our own action which will internally call user service
        const result = await broker.call("order.create", {
            userId: "1",
            product: "Mechanical Keyboard"
        });

        console.log("Result:", result);
    });

Open two terminal windows. In terminal 1:

node user-node.js

In terminal 2:

node order-node.js

Watch what happens. In terminal 2 you will see:

Order node is running. Waiting 2 seconds for discovery...
Calling order.create...
Result: { orderId: '1715234567890', product: 'Mechanical Keyboard', user: 'Rahul Sharma', status: 'created' }

In terminal 1 you will see:

getById called for id: 1

The order process called the user service running in a completely separate process. The code inside order-node.js looks exactly the same as if user service was in the same process. Moleculer handled everything.


Transporter 2: NATS

NATS is a lightweight, extremely fast messaging server. It is the recommended transporter for most Moleculer production setups. It is faster than Redis for messaging and simpler than Kafka.

Install NATS Server

On Windows, download the NATS server executable from nats.io/download. Extract it and run:

nats-server

You should see:

[1] Starting nats-server
[1] Listening for client connections on 0.0.0.0:4222
[1] Server is ready

Install the NATS client package in your project:

npm install nats

Configure the transporter:

// moleculer.config.js
module.exports = {
    nodeID: "node-1",
    transporter: "nats://localhost:4222"
};

Or with full options:

module.exports = {
    nodeID: "node-1",
    transporter: {
        type: "NATS",
        options: {
            url: "nats://localhost:4222",
            // If NATS requires authentication
            user: "admin",
            pass: "password"
        }
    }
};

Everything else in your code stays exactly the same. Just changing the transporter string is enough.


Transporter 3: Redis

If you already have Redis running for caching, you can use it as a transporter too. Install the Redis client:

npm install ioredis

Configure:

// moleculer.config.js
module.exports = {
    nodeID: "node-1",
    transporter: "redis://localhost:6379"
};

Or with options:

module.exports = {
    nodeID: "node-1",
    transporter: {
        type: "Redis",
        options: {
            host: "localhost",
            port: 6379,
            password: "your-redis-password",
            db: 0
        }
    }
};

Redis transporter is convenient if you already have Redis in your infrastructure. For dedicated messaging, NATS is faster.


Service Discovery — How Brokers Find Each Other

When a new broker starts and connects to the transporter, it announces itself to all other brokers. This is called service discovery. Every broker maintains a registry of all known services across all nodes.

Node A starts → announces "I have user-service" to NATS
Node B starts → announces "I have order-service" to NATS
Node A hears → "Oh, Node B has order-service. I will remember that."
Node B hears → "Oh, Node A has user-service. I will remember that."

Now:
Node B calls ctx.call("user.getById") →
Moleculer checks registry → user-service is on Node A →
Routes call through NATS to Node A →
Node A executes the action →
Result comes back to Node B

This happens automatically. You never write any of this yourself.

You can see the registry in action using the REPL:

npm run repl

Type:

nodes

This shows all connected nodes.

services

This shows all registered services across all nodes.


Namespace with Transporter

When you use a transporter, the namespace option in moleculer.config.js becomes very important. It isolates your application from other Moleculer apps using the same transporter server.

// App 1 — e-commerce
module.exports = {
    namespace: "ecommerce",
    transporter: "nats://localhost:4222"
};

// App 2 — blog platform (using same NATS server)
module.exports = {
    namespace: "blog",
    transporter: "nats://localhost:4222"
};

These two apps share the same NATS server but never interfere with each other. Their messages are isolated by namespace. Always set a namespace in production.


Multiple Instances of the Same Service

One of the biggest benefits of using a transporter is horizontal scaling. You can run multiple instances of the same service on different nodes. Moleculer automatically load balances requests across them.

Node A: user-service (instance 1)
Node B: user-service (instance 2)
Node C: user-service (instance 3)

When order-service calls "user.getById":
→ Request 1 goes to Node A
→ Request 2 goes to Node B
→ Request 3 goes to Node C
→ Request 4 goes to Node A (round robin repeats)

No configuration needed. The moment multiple instances of the same service exist, Moleculer distributes the load automatically using the strategy defined in moleculer.config.js. Default is RoundRobin.
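The RoundRobin strategy simply cycles through the available nodes in order. A minimal sketch of the idea — not Moleculer's internal code:

```javascript
// Toy round-robin selector — cycles through node IDs in order.
// Illustrative only; not Moleculer's internal strategy code.
function makeRoundRobin(nodes) {
    let index = 0;
    return function pick() {
        const node = nodes[index % nodes.length];
        index++;                 // Next call moves to the next node
        return node;
    };
}

const pick = makeRoundRobin(["node-A", "node-B", "node-C"]);
console.log(pick(), pick(), pick(), pick());
// → node-A node-B node-C node-A
```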


What Happens When a Node Goes Down

When a node disconnects, the transporter notifies all other brokers. They remove that node's services from their registry. Calls to those services are routed to other available instances. If no instances are available, Moleculer throws a ServiceNotFoundError.

Node A: user-service  ← goes down
Node B: user-service  ← still running

order-service calls "user.getById"
→ Moleculer sees Node A is gone
→ Routes to Node B automatically
→ No error, no downtime

This is the foundation of fault tolerance in Moleculer. We build on top of this with Circuit Breakers and Retries in Post 8.


Heartbeats

Brokers send heartbeat signals to each other through the transporter at regular intervals. The default is every 10 seconds. If a broker misses heartbeats for 30 seconds, it is considered dead and removed from the registry.

You can configure this in moleculer.config.js:

module.exports = {
    heartbeatInterval: 10,  // Send heartbeat every 10 seconds
    heartbeatTimeout: 30    // Consider node dead after 30 seconds of silence
};

In production you might want to reduce these values for faster failure detection. But lower values mean more network traffic.


Using a Transporter in Your Existing Project

To enable transporter in the project you already have, open moleculer.config.js and change:

// Before
transporter: null,

// After (TCP — no extra setup needed)
transporter: "TCP",

Or if you have NATS running:

transporter: "nats://localhost:4222",

Then run your project normally with npm run dev. Everything works exactly the same. The only difference is that if you open another terminal and start another Node.js process with the same transporter and a different service, they will automatically discover each other and communicate.


Development Tip — When to Use a Transporter

During the early stages of development, transporter: null is fine. Keeping all services in one process is simpler and faster to work with. Use a transporter when:

  • You are ready to split services into separate processes
  • You want to test horizontal scaling
  • You are preparing for production deployment

For this course, keep transporter: null until Post 14 where we set up Docker and run each service as a separate container. At that point we will switch to Redis transporter because Redis will already be in our Docker setup.


Summary

  • A transporter connects brokers running on different machines or processes.
  • Without a transporter, all services must be in the same Node.js process.
  • With a transporter, services can run anywhere and still call each other transparently.
  • TCP transporter is built-in and needs no external server. Good for local multi-process setups.
  • NATS is the recommended production transporter. Lightweight and extremely fast.
  • Redis transporter is convenient if you already have Redis in your infrastructure.
  • Service discovery is automatic. Brokers announce themselves when they connect.
  • Multiple instances of the same service are load balanced automatically using RoundRobin by default.
  • When a node goes down, other nodes detect it via missed heartbeats and stop routing to it.
  • Namespace isolates multiple Moleculer apps using the same transporter server.
  • Your action code never changes regardless of whether you use a transporter or not.

Up Next

Post 8 covers Fault Tolerance — Circuit Breaker, Retry, Timeout, and Bulkhead. These are the mechanisms that keep your application running even when individual services fail or slow down. This is what separates a production-ready microservices system from a fragile one.


Course Progress: 7 of 15 posts complete.


Post 6 of 15 | Phase 3: Communication


The Context Object — The Backbone of Every Request

In every post so far you have seen ctx appearing everywhere. ctx.params, ctx.meta, ctx.call, ctx.emit. You have been using it without fully understanding what it is. This post fixes that completely.

The Context object is one of the most important things in Moleculer. Every action call and every event creates a Context object. It travels through the entire call chain carrying information about the request. Understanding it deeply will make debugging easier, your code cleaner, and advanced features like tracing and auth much simpler to implement.


What is the Context Object?

When you call an action, Moleculer does not just call your handler function directly. It first creates a Context object that wraps everything about that request — the input data, who called it, which node it came from, timing information, metadata, and more. Then it passes this Context to your handler as the ctx argument.

Think of ctx as a package that travels with a request. Like a courier package that has the item inside, but also has a label with the sender address, receiver address, tracking number, and delivery instructions. The item is ctx.params. Everything else on the label is the rest of ctx.


Complete Map of the Context Object

Here is every property on ctx that you will use:

actions: {
    example(ctx) {
        // INPUT DATA
        ctx.params          // The data passed to this action call
        ctx.meta            // Shared metadata across the call chain

        // REQUEST IDENTITY
        ctx.id              // Unique ID for this specific request
        ctx.requestID       // The root request ID — same across the whole chain
        ctx.parentID        // The ID of the context that called this one

        // NODE AND SERVICE INFO
        ctx.nodeID          // Which node sent this request
        ctx.caller          // Which service.action called this action

        // COMMUNICATION
        ctx.call()          // Call another action (carries context forward)
        ctx.emit()          // Emit a balanced event
        ctx.broadcast()     // Broadcast to all instances

        // EVENT SPECIFIC (only available in event handlers)
        ctx.eventName       // The name of the event that triggered this handler
        ctx.eventType       // "emit" or "broadcast"
        ctx.eventGroups     // Which groups this event was sent to

        // TIMEOUT AND LEVEL
        ctx.options         // The call options passed to this request
        ctx.level           // How deep in the call chain this request is
    }
}

Let us go through the important ones in detail.


ctx.params

This is the data passed when the action was called. You have used this in every post. Quick recap:

// Caller
broker.call("math.add", { a: 5, b: 3 });

// Handler
actions: {
    add(ctx) {
        console.log(ctx.params); // { a: 5, b: 3 }
        return ctx.params.a + ctx.params.b;
    }
}

ctx.params is always the direct input to your action. It is validated against your params schema before reaching the handler.


ctx.meta

You saw this briefly in Post 4. Let us go deep now because this is one of the most powerful and most used features in real applications.

ctx.meta is a shared object that travels across the entire call chain. Any service can read from it and write to it. Changes made in one service are visible in subsequent calls.

The most common use case is authentication. Your API Gateway verifies the JWT token and puts the authenticated user into ctx.meta. Every downstream service can then read ctx.meta.user without needing to verify the token again.

// api.service.js — the API Gateway
// This runs before every request (we cover this in Post 10)
async onBeforeCall(ctx, route, req, res) {
    const token = req.headers["authorization"];
    const user = verifyJWT(token);
    // Put user into meta — now all downstream services can see this
    ctx.meta.user = user;
    ctx.meta.requestID = req.headers["x-request-id"];
}

// user.service.js
actions: {
    getProfile(ctx) {
        // Read from meta — no need to pass user ID in params
        const currentUser = ctx.meta.user;
        this.logger.info(`Profile requested by ${currentUser.email}`);
        return { id: currentUser.id, name: currentUser.name };
    }
}

// order.service.js
actions: {
    myOrders(ctx) {
        // Same meta is available here too
        const currentUser = ctx.meta.user;
        return orders.filter(o => o.userId === currentUser.id);
    }
}

You can also write back to ctx.meta from a service and the caller will see the updated value:

// user.service.js
actions: {
    login(ctx) {
        const user = authenticateUser(ctx.params.email, ctx.params.password);
        // Write the token back to meta
        // The caller (API Gateway) will receive this in the response meta
        ctx.meta.token = generateJWT(user);
        return { message: "Login successful" };
    }
}

// In the API Gateway after the call completes
const result = await ctx.call("user.login", { email, password });
// ctx.meta.token is now available here
// You can set it as a cookie or response header
res.setHeader("Authorization", ctx.meta.token);

This is a very clean pattern. The service sets the token in meta, the gateway picks it up and sends it to the client.


ctx.id and ctx.requestID

Every Context has a unique ID. These two fields help you trace requests through the system.

actions: {
    create(ctx) {
        // ctx.id — unique ID of THIS specific context
        // Changes at every hop in the call chain
        this.logger.info(`Context ID: ${ctx.id}`);

        // ctx.requestID — the ROOT request ID
        // Stays the same across the ENTIRE call chain
        // If API Gateway called order which called user,
        // all three have the same requestID
        this.logger.info(`Request ID: ${ctx.requestID}`);
    }
}

ctx.requestID is extremely useful for debugging. When something goes wrong in production, you can search your logs for a specific requestID and see every single step of that request across all services.

You can also set your own requestID from outside:

await broker.call("order.create", { userId: "1" }, {
    requestID: "my-custom-id-abc123"
});

Now every service in the chain will have requestID as my-custom-id-abc123. This is useful when you want to correlate a Moleculer request with an external request ID from your frontend or mobile app.


ctx.caller

This tells you which service and action triggered the current call.

// order.service.js
actions: {
    create(ctx) {
        // Who called this action?
        this.logger.info(`Called by: ${ctx.caller}`);
        // Output: "api.rest" if called from API Gateway
        // Output: "payment.process" if called from payment service
    }
}

This is useful for authorization. For example, a sensitive internal action should only be callable by specific services, not from the API Gateway directly.

actions: {
    internalRecalculate(ctx) {
        // Only allow calls from the admin service
        if (ctx.caller !== "admin.trigger") {
            throw new Error("Unauthorized internal call");
        }
        // proceed
    }
}

ctx.level

This is the depth of the current call in the call chain. The first call from outside has level 1. If that action calls another action, that inner call has level 2. And so on.

actions: {
    process(ctx) {
        this.logger.info(`Call depth: ${ctx.level}`);
        // Level 1 if called directly
        // Level 2 if called from another action
    }
}

Moleculer has a maxCallLevel option in moleculer.config.js that defaults to 100. If your call chain goes deeper than 100 levels, Moleculer throws an error. This prevents infinite loops where service A calls service B which calls service A again.
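Changing that limit is a single broker option. A minimal moleculer.config.js fragment (the option name is the one described above; the value shown is just illustrative):

```javascript
// moleculer.config.js — illustrative fragment
module.exports = {
    // Abort with an error when a call chain exceeds this depth.
    // 100 is the default; a lower value catches accidental
    // recursion between services earlier.
    maxCallLevel: 100
};
```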


ctx.options

This contains the options that were passed when this action was called — timeout, retries, etc.

actions: {
    process(ctx) {
        console.log(ctx.options);
        // { timeout: 5000, retries: 3, ... }
    }
}

You rarely read this directly but it is good to know it exists.


Practical Example — Tracing a Full Request Chain

Let us build a small example that demonstrates how ctx travels through multiple services. Create or update these files:

services/gateway-demo.service.js

"use strict";

module.exports = {
    name: "gateway-demo",

    actions: {
        async placeOrder(ctx) {
            this.logger.info(`[gateway-demo] requestID: ${ctx.requestID}`);
            this.logger.info(`[gateway-demo] level: ${ctx.level}`);

            // Set something in meta
            ctx.meta.initiatedBy = "gateway-demo";

            // Call order service
            const order = await ctx.call("order-demo.create", {
                product: ctx.params.product,
                userId: ctx.params.userId
            });

            return order;
        }
    }
};

services/order-demo.service.js

"use strict";

module.exports = {
    name: "order-demo",

    actions: {
        async create(ctx) {
            this.logger.info(`[order-demo] requestID: ${ctx.requestID}`);
            this.logger.info(`[order-demo] level: ${ctx.level}`);
            this.logger.info(`[order-demo] caller: ${ctx.caller}`);
            this.logger.info(`[order-demo] meta.initiatedBy: ${ctx.meta.initiatedBy}`);

            // Write something back to meta
            ctx.meta.orderProcessedBy = "order-demo";

            // Call user service
            const user = await ctx.call("user-demo.getById", {
                id: ctx.params.userId
            });

            return {
                product: ctx.params.product,
                user: user.name,
                status: "created"
            };
        }
    }
};

services/user-demo.service.js

"use strict";

module.exports = {
    name: "user-demo",

    actions: {
        getById(ctx) {
            this.logger.info(`[user-demo] requestID: ${ctx.requestID}`);
            this.logger.info(`[user-demo] level: ${ctx.level}`);
            this.logger.info(`[user-demo] caller: ${ctx.caller}`);
            this.logger.info(`[user-demo] meta:`, ctx.meta);

            return { id: ctx.params.id, name: "Rahul Sharma" };
        }
    }
};

Now test via REPL:

npm run repl
call gateway-demo.placeOrder {"product": "Keyboard", "userId": "user-1"}

Your terminal output will look similar to this (the actual IDs will differ):

[gateway-demo] requestID: abc-123-xyz
[gateway-demo] level: 1

[order-demo] requestID: abc-123-xyz       <-- same requestID
[order-demo] level: 2                     <-- deeper level
[order-demo] caller: gateway-demo.placeOrder
[order-demo] meta.initiatedBy: gateway-demo

[user-demo] requestID: abc-123-xyz        <-- still same requestID
[user-demo] level: 3                      <-- even deeper
[user-demo] caller: order-demo.create
[user-demo] meta: { initiatedBy: "gateway-demo", orderProcessedBy: "order-demo" }

Look at what happened:

  • requestID stayed the same across all three services. One request ID to trace the whole chain.
  • level increased at each hop. gateway called order (level 2), order called user (level 3).
  • caller shows exactly who called each service.
  • meta accumulated data as it traveled through the chain. Both values set by different services are visible at the end.

This is how you debug and trace requests in a real microservices system.


Copying ctx — When You Need a Fresh Context

Sometimes you want to call an action but start a fresh context instead of continuing the current chain. For example, a background job that should not be tied to the original request timeout.

actions: {
    async processOrder(ctx) {
        // This call uses the current context (shares timeout, requestID etc.)
        const result = await ctx.call("payment.charge", { amount: 100 });

        // This call creates a brand new independent context
        // Useful for background tasks that should not be limited
        // by the original request timeout
        await this.broker.call("email.sendReceipt", { orderId: result.id });

        return result;
    }
}

Using this.broker.call() instead of ctx.call() creates a new root context. The new call gets its own requestID and starts at level 1. It is completely independent of the original request.
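The inheritance rule can be sketched in plain JavaScript. This is a simplified model of the behavior described above, not Moleculer's actual implementation — the function names here are made up for illustration:

```javascript
// Simplified model: ctx.call() inherits from the parent context,
// broker.call() starts a fresh root context.
let counter = 0;
const newId = () => `id-${++counter}`;

function createRootContext() {
    const id = newId();
    return { id, requestID: id, level: 1, meta: {} };
}

function createChildContext(parent) {
    return {
        id: newId(),                  // every hop gets its own ctx.id
        requestID: parent.requestID,  // root request ID is carried forward
        parentID: parent.id,
        level: parent.level + 1,      // depth increases at each hop
        meta: parent.meta             // meta travels along the chain
    };
}

const root = createRootContext();        // like a call arriving from outside
const child = createChildContext(root);  // like ctx.call() inside an action
const fresh = createRootContext();       // like this.broker.call() inside an action

console.log(child.requestID === root.requestID); // true — same chain
console.log(child.level);                        // 2
console.log(fresh.requestID === root.requestID); // false — independent chain
console.log(fresh.level);                        // 1
```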


Common Mistakes with Context

Mistake 1: Mutating ctx.params directly

// Wrong
actions: {
    create(ctx) {
        ctx.params.id = generateId(); // Do not mutate params
        return ctx.params;
    }
}

// Correct
actions: {
    create(ctx) {
        const newRecord = {
            ...ctx.params,
            id: generateId()
        };
        return newRecord;
    }
}

Mistake 2: Using this.broker.call() inside services when you should use ctx.call()

// Wrong — starts a new root context, so requestID, meta, and tracing are lost
actions: {
    async createOrder(ctx) {
        const user = await this.broker.call("user.getById", { id: ctx.params.userId });
    }
}

// Correct
actions: {
    async createOrder(ctx) {
        const user = await ctx.call("user.getById", { id: ctx.params.userId });
    }
}

Mistake 3: Storing ctx and using it after the action completes

// Wrong — ctx is only valid during the action execution
let savedCtx;
actions: {
    process(ctx) {
        savedCtx = ctx; // Do not do this
        return "ok";
    }
}
// Using savedCtx later is undefined behavior

Quick Reference Card

ctx.params          — input data for this action
ctx.meta            — shared data across the call chain, readable and writable
ctx.id              — unique ID of this specific context
ctx.requestID       — root request ID, same across the whole chain
ctx.caller          — which service.action called this
ctx.nodeID          — which node sent this request
ctx.level           — depth in the call chain
ctx.eventName       — event name (only in event handlers)

ctx.call()          — call another action, continues the chain
ctx.emit()          — emit a balanced event
ctx.broadcast()     — emit to all instances

Summary

  • Context is created automatically for every action call and event. You never create it manually.
  • ctx.params holds the input data validated against your params schema.
  • ctx.meta is a shared object that travels across the entire call chain. Use it for auth, request IDs, and cross-cutting data.
  • ctx.requestID stays the same across all services in one request chain. Essential for debugging.
  • ctx.caller tells you which service called the current action. Useful for internal authorization.
  • ctx.level shows how deep in the call chain you are. Moleculer stops at maxCallLevel to prevent infinite loops.
  • Always use ctx.call() inside services, not broker.call(). It carries the context forward correctly.
  • Never mutate ctx.params. Spread it into a new object instead.
  • Never store ctx and use it after the action finishes. It is only valid during execution.

Up Next

Post 7 covers Transporters — the communication layer that allows services running on completely different machines to talk to each other. We will set up NATS and Redis transporters, run services as separate processes, and see how the broker routes calls transparently across nodes.


Course Progress: 6 of 15 posts complete.

Phase 2 - Core Concepts | Post 5 | Events — Fire and Forget Communication Between Services

Post 5 of 15 | Phase 2: Core Concepts


Events — Fire and Forget Communication Between Services

In the previous post you learned about Actions, which follow a request-reply pattern. You call an action, you wait, you get a result back. This is perfect for most situations.

But sometimes you do not want to wait for a response. Sometimes you just want to say "this thing happened" and let whoever cares react to it. That is exactly what Events are for.


The Real World Analogy

Think about what happens when you place an order on an e-commerce website.

The moment you click "Place Order", the website confirms your order immediately. But behind the scenes, many things need to happen:

  • The inventory system needs to reduce stock
  • The email system needs to send a confirmation email
  • The notification system needs to send a push notification
  • The analytics system needs to record the purchase

Should the order service wait for all of these to finish before telling you your order is placed? No. That would be slow and wrong. The order is placed. Everything else is a reaction to that fact.

This is the event pattern. The order service emits one event called order.created. Every other service that cares listens to that event and reacts independently. The order service does not know or care who is listening.


Actions vs Events — When to Use Which

Before writing any code, understand this distinction clearly.

Use an Action when:

  • You need a result back from the other service
  • The operation must complete before you continue
  • Example: Get user details, create a record, check stock availability

Use an Event when:

  • You do not need a response back
  • Multiple services might react to the same thing
  • The operation can happen asynchronously in the background
  • Example: Send email after registration, update analytics after purchase, notify after payment

Emitting Events

Inside any action or method, you emit an event using broker.emit() or ctx.emit().

// From outside a service (using broker directly)
broker.emit("order.created", { orderId: "123", userId: "456", total: 999 });

// From inside a service action (preferred)
actions: {
    async create(ctx) {
        const order = {
            id: "123",
            userId: ctx.params.userId,
            total: ctx.params.total
        };

        // Save order to database here

        // Emit the event — fire and forget
        // We do not await this. We do not care about the response.
        ctx.emit("order.created", order);

        // Return immediately without waiting for listeners
        return order;
    }
}

Notice there is no await before ctx.emit(). Events are fire and forget. Your code continues immediately after emitting.


Listening to Events

Any service can listen to any event using the events property in its service schema.

// email.service.js
module.exports = {
    name: "email",

    events: {
        // The key is the event name you are listening to
        "order.created"(ctx) {
            // ctx.params contains the data that was emitted
            const order = ctx.params;
            this.logger.info(`Sending confirmation email for order ${order.id}`);
            // Send email logic here
        }
    }
};
// inventory.service.js
module.exports = {
    name: "inventory",

    events: {
        "order.created"(ctx) {
            const order = ctx.params;
            this.logger.info(`Reducing stock for order ${order.id}`);
            // Reduce stock logic here
        }
    }
};
// analytics.service.js
module.exports = {
    name: "analytics",

    events: {
        "order.created"(ctx) {
            const order = ctx.params;
            this.logger.info(`Recording purchase of ${order.total} for analytics`);
            // Record analytics logic here
        }
    }
};

All three services listen to the same event. When order service emits order.created, all three handlers run. The order service knows nothing about any of them.


Full Event Handler Syntax

Just like actions, events have a shorthand and a full syntax.

Shorthand:

events: {
    "order.created"(ctx) {
        // handle
    }
}

Full syntax:

events: {
    "order.created": {
        handler(ctx) {
            // handle
        }
    }
}

Always use the full syntax in real projects because it supports additional options we will cover next.


Two Types of Events

Moleculer has two different ways to emit events. Understanding the difference is important.

Type 1: Balanced Event using ctx.emit()

When you use ctx.emit() or broker.emit(), the event is balanced. This means if you have multiple instances of the same service running, only one instance receives the event. The broker distributes events across instances in a round-robin fashion.

This is what you want in most cases. If you have three instances of email.service running, you only want one of them to send the confirmation email, not all three.

// Only ONE instance of email service receives this
ctx.emit("order.created", { orderId: "123" });

Type 2: Broadcast Event using ctx.broadcast()

When you use ctx.broadcast() or broker.broadcast(), the event is sent to ALL instances of ALL services that are listening. Every single listener receives it regardless of how many instances exist.

// ALL instances of ALL listening services receive this
ctx.broadcast("config.updated", { newConfig: {} });

Use broadcast when you need every instance to know about something. A common use case is a configuration change that every instance needs to reload, or a cache clear that every instance needs to perform.
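The delivery rules can be sketched as a toy model. This is not Moleculer's internal code — just the two rules stated above, with made-up instance names:

```javascript
// Toy model of balanced emit vs. broadcast delivery.
const listeners = { email: ["email-1", "email-2", "email-3"] };
const rrIndex = {};

// Balanced: pick ONE instance per listening service (round-robin)
function emitBalanced(service) {
    const instances = listeners[service];
    rrIndex[service] = ((rrIndex[service] ?? -1) + 1) % instances.length;
    return [instances[rrIndex[service]]];
}

// Broadcast: deliver to EVERY instance
function broadcast(service) {
    return [...listeners[service]];
}

console.log(emitBalanced("email")); // ["email-1"] — only one instance
console.log(emitBalanced("email")); // ["email-2"] — next instance in rotation
console.log(broadcast("email"));    // ["email-1", "email-2", "email-3"]
```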


Practical Example — Complete Flow

Let us build a realistic example. Create these three files in your services folder.

services/order.service.js

"use strict";

const orders = [];

module.exports = {
    name: "order",

    actions: {
        create: {
            rest: {
                method: "POST",
                path: "/"
            },
            params: {
                userId: "string",
                product: "string",
                total: "number"
            },
            async handler(ctx) {
                // Create the order
                const order = {
                    id: String(Date.now()),
                    userId: ctx.params.userId,
                    product: ctx.params.product,
                    total: ctx.params.total,
                    status: "confirmed",
                    createdAt: new Date()
                };

                orders.push(order);

                // Emit event — do not await
                // order service does not care who handles this or when
                ctx.emit("order.created", order);

                this.logger.info(`Order ${order.id} created and event emitted`);

                // Return immediately
                return order;
            }
        },

        list: {
            rest: {
                method: "GET",
                path: "/"
            },
            handler(ctx) {
                return orders;
            }
        }
    }
};

services/email.service.js

"use strict";

module.exports = {
    name: "email",

    events: {
        "order.created": {
            handler(ctx) {
                const order = ctx.params;

                // In a real project you would use nodemailer or sendgrid here
                this.logger.info("-------------------------------");
                this.logger.info("EMAIL SERVICE: New email triggered");
                this.logger.info(`To: User ${order.userId}`);
                this.logger.info(`Subject: Order Confirmation - ${order.id}`);
                this.logger.info(`Your order for ${order.product} worth ${order.total} is confirmed`);
                this.logger.info("-------------------------------");
            }
        }
    }
};

services/notification.service.js

"use strict";

module.exports = {
    name: "notification",

    events: {
        "order.created": {
            handler(ctx) {
                const order = ctx.params;

                this.logger.info("-------------------------------");
                this.logger.info("NOTIFICATION SERVICE: Push notification triggered");
                this.logger.info(`Sending push to user ${order.userId}`);
                this.logger.info(`Your order ${order.id} has been placed successfully`);
                this.logger.info("-------------------------------");
            }
        }
    }
};

Now run npm run dev and make a POST request to create an order:

POST http://localhost:3000/api/order

Body:

{
    "userId": "user-1",
    "product": "Mechanical Keyboard",
    "total": 2999
}

In your terminal you will see all three services reacting:

[INFO]  order: Order 1715234567890 created and event emitted
[INFO]  email: EMAIL SERVICE: New email triggered
[INFO]  email: To: User user-1
[INFO]  email: Subject: Order Confirmation - 1715234567890
[INFO]  notification: NOTIFICATION SERVICE: Push notification triggered
[INFO]  notification: Sending push to user user-1

The order action returned instantly. The email and notification services reacted in the background. This is the power of event-driven architecture.


Event Naming Conventions

Use dot notation for event names. The convention is:

entity.action

Good event names:

  • order.created
  • order.cancelled
  • user.registered
  • user.passwordChanged
  • payment.completed
  • payment.failed

Bad event names:

  • orderCreated (no dot notation)
  • ORDER_CREATED (wrong convention in Moleculer)
  • order (too vague)

Wildcard Event Listeners

You can listen to multiple events using wildcards.

events: {
    // Listen to ALL order events
    "order.*"(ctx) {
        this.logger.info(`An order event occurred: ${ctx.eventName}`);
        this.logger.info("Data:", ctx.params);
    },

    // Listen to ALL events from any service
    "**"(ctx) {
        this.logger.info(`Any event: ${ctx.eventName}`);
    }
}

ctx.eventName inside the handler tells you the exact event name that triggered this handler. This is useful for a logging or audit service that wants to record every event that happens in the system.


Getting the Event Name Inside the Handler

events: {
    "order.*": {
        handler(ctx) {
            // ctx.eventName is the actual event name
            // For example "order.created" or "order.cancelled"
            this.logger.info(`Received event: ${ctx.eventName}`);
            this.logger.info(`From node: ${ctx.nodeID}`);
            this.logger.info(`Data:`, ctx.params);
        }
    }
}

Local Events

Sometimes you want to emit an event that only services in the same process can hear. Remote nodes connected via a transporter should not receive it. Use broker.emitLocal():

// Only services in THIS process receive this event
broker.emitLocal("internal.cache.clear", { key: "user-list" });

This is useful for internal coordination within a single node without broadcasting to the entire network.


Async Event Handlers

Event handlers can be async. This is fine and common when the handler needs to do database operations.

events: {
    "order.created": {
        async handler(ctx) {
            const order = ctx.params;

            // Async operation — database write, API call, etc.
            await this.saveToAnalyticsDB(order);

            this.logger.info(`Analytics saved for order ${order.id}`);
        }
    }
},

methods: {
    async saveToAnalyticsDB(order) {
        // Database logic here
    }
}

One thing to know: if your async event handler throws an error, it does not affect the emitter. The order service already returned its response before this runs. The error will be logged but it will not propagate back to the caller. This is by design — events are decoupled.
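Why the emitter is unaffected can be sketched in a few lines. This is a simplified model of the fire-and-forget dispatch described above, not Moleculer's actual dispatcher:

```javascript
// Sketch: handlers are invoked without awaiting, and failures
// are caught and logged instead of propagating to the emitter.
const handlers = [
    async () => "email sent",
    async () => { throw new Error("analytics DB down"); }
];

function emitFireAndForget() {
    for (const handler of handlers) {
        // Not awaited — a rejection is logged, never re-thrown
        handler().catch(err => console.error("handler failed:", err.message));
    }
    // The emitter's result is decided before any handler finishes
    return { status: "order confirmed" };
}

console.log(emitFireAndForget()); // { status: "order confirmed" }
```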


Summary of emit vs broadcast

broker.emit()        — balanced, one instance receives it
broker.broadcast()   — all instances receive it
broker.emitLocal()   — only local process receives it

ctx.emit()           — same as broker.emit(), use inside actions
ctx.broadcast()      — same as broker.broadcast(), use inside actions

Complete Picture — Actions vs Events Side by Side

// ACTION — I need an answer back
const user = await ctx.call("user.getById", { id: "123" });
// I wait here until user service responds
console.log(user.name);

// EVENT — I am announcing something happened
ctx.emit("user.registered", { id: "123", email: "rahul@example.com" });
// I do not wait. I continue immediately.
// Whoever cares will handle it in their own time.

Summary

  • Events are fire-and-forget. You emit and move on. No waiting for a response.
  • Use actions when you need a result. Use events when you are announcing something happened.
  • ctx.emit() sends a balanced event — one instance per service receives it.
  • ctx.broadcast() sends to all instances of all services.
  • broker.emitLocal() sends only within the current process.
  • Multiple services can listen to the same event independently.
  • Event names follow dot notation convention — entity.action.
  • Wildcard listeners use asterisk — order.* or ** for everything.
  • ctx.eventName inside the handler gives you the exact event name.
  • Async event handlers are fine. Errors in them do not affect the emitter.
  • The order service does not know or care who is listening. That is the point.

Up Next

Post 6 covers the Context object in depth. You have been using ctx everywhere — ctx.params, ctx.meta, ctx.call, ctx.emit — but we have not looked at the full picture. The Context object carries far more information than you have seen so far, and understanding it completely will make you a much stronger Moleculer developer.


Course Progress: 5 of 15 posts complete.

Phase 2 - Core Concepts | Post 4 | Services and Actions — The Building Blocks You Write Every Day

Post 4 of 15 | Phase 2: Core Concepts


Services and Actions — The Building Blocks You Write Every Day

In the previous post you learned about the ServiceBroker and its configuration. Now we go deep into Services and Actions. These are the things you will write every single day in a Moleculer project. Understanding them thoroughly will make you productive immediately.


What is a Service — The Full Picture

A service is a plain JavaScript object that you export from a file. The broker reads this object and registers it. That is it. No classes to extend, no framework-specific decorators, no magic.

A service object can have the following top-level properties:

module.exports = {
    // Required
    name: "user",

    // Optional
    version: 1,
    settings: {},
    metadata: {},
    dependencies: [],
    mixins: [],

    // Lifecycle hooks
    created() {},
    async started() {},
    async stopped() {},

    // The main work
    actions: {},
    events: {},
    methods: {}
};

We already covered name, created, started, and stopped in Post 3. Let us now go through the rest.


version

module.exports = {
    name: "user",
    version: 1
};

When you have multiple versions of the same service running simultaneously, version lets you differentiate them. The full service name becomes v1.user. Actions become v1.user.create.

This is useful when you are deploying a new version of a service but cannot take down the old one immediately. Old clients call v1.user, new clients call v2.user, both run at the same time until migration is complete.
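The naming rule can be sketched as a small helper. This is an illustration of the prefixing behavior described above, not Moleculer's registry code:

```javascript
// Sketch: how a service's version prefixes its action names.
function fullActionName(schema, action) {
    const prefix = schema.version != null
        ? `v${schema.version}.${schema.name}`
        : schema.name;
    return `${prefix}.${action}`;
}

console.log(fullActionName({ name: "user" }, "create"));             // "user.create"
console.log(fullActionName({ name: "user", version: 1 }, "create")); // "v1.user.create"
console.log(fullActionName({ name: "user", version: 2 }, "create")); // "v2.user.create"
```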

For now you will not use this. Just know it exists.


settings

Settings is an object for service-level configuration values. Think of it as constants or default values specific to this service.

module.exports = {
    name: "user",

    settings: {
        defaultPageSize: 10,
        maxPageSize: 100,
        jwtSecret: process.env.JWT_SECRET
    },

    actions: {
        list(ctx) {
            // Access settings via this.settings
            const pageSize = ctx.params.pageSize || this.settings.defaultPageSize;
            return { pageSize };
        }
    }
};

You access settings inside actions and methods using this.settings. Settings are also visible to other nodes when using a transporter, which can be useful for service discovery.


dependencies

module.exports = {
    name: "order",

    dependencies: ["user", "product"],

    async started() {
        this.logger.info("Order service started. User and Product are ready.");
    }
};

Dependencies tell the broker to wait until the listed services are available before starting this service. In the example above, the order service will not start until both user and product services are registered and running.

This is important in production where services start up at different times. Without dependencies, your order service might try to call user.getById before the user service is even ready.


methods

Methods are private functions that belong to the service. They cannot be called from outside the service. They are helper functions used internally by actions and lifecycle hooks.

module.exports = {
    name: "user",

    actions: {
        create(ctx) {
            // Call an internal method
            const hashedPassword = this.hashPassword(ctx.params.password);
            return { user: ctx.params.name, password: hashedPassword };
        }
    },

    methods: {
        // This cannot be called from outside
        // Only actions, events, and other methods in this service can use it
        hashPassword(password) {
            // In real code you would use bcrypt here
            return Buffer.from(password).toString("base64");
        },

        validateEmail(email) {
            return email.includes("@");
        }
    }
};

You call methods inside the service using this.methodName(). They are bound to the service instance so they have access to this.settings, this.logger, and everything else.

Think of methods as private class methods in object-oriented programming. They keep your action handlers clean and your logic reusable within the service.


Actions — The Full Syntax

You have seen two syntaxes for actions already. Let us make this completely clear.

Shorthand syntax

Use this when your action is simple and does not need params validation or HTTP exposure.

actions: {
    hello(ctx) {
        return `Hello ${ctx.params.name}`;
    }
}

Full syntax

Use this in real projects. It gives you full control.

actions: {
    hello: {
        rest: {
            method: "GET",
            path: "/hello"
        },
        params: {
            name: "string"
        },
        handler(ctx) {
            return `Hello ${ctx.params.name}`;
        }
    }
}

In real projects always use the full syntax. The shorthand is fine for quick tests and learning but not for production code.


Action Parameters and Validation

The params property defines what data an action expects. Moleculer uses the fastest-validator library under the hood. When validator: true is set in moleculer.config.js, every action call is automatically validated against this schema before the handler runs.

Here are the most common validation rules:

actions: {
    createUser: {
        params: {
            // Required string
            name: "string",

            // Required number
            age: "number",

            // Required email
            email: "email",

            // Optional string with default value
            role: { type: "string", default: "user" },

            // String with min and max length
            username: { type: "string", min: 3, max: 20 },

            // Number with min and max value
            score: { type: "number", min: 0, max: 100 },

            // Boolean
            isActive: "boolean",

            // Optional field
            bio: { type: "string", optional: true },

            // Array of strings
            tags: { type: "array", items: "string" },

            // Enum — only these values allowed
            status: { type: "enum", values: ["active", "inactive", "pending"] }
        },
        handler(ctx) {
            return ctx.params;
        }
    }
}

If validation fails, Moleculer automatically throws a ValidationError and your handler never runs. The error message tells the caller exactly which field failed and why.

Test this yourself. Add this action to your greeter.service.js and call it without passing name:

greet: {
    params: {
        name: "string"
    },
    handler(ctx) {
        return `Hello ${ctx.params.name}`;
    }
}

Call it via REPL:

call greeter.greet {}

You will see a validation error like this:

ValidationError: Parameters validation error!
  - name: The 'name' field is required.

No code needed on your end. Moleculer handles it.


Calling Actions — All the Ways

You have seen broker.call() and ctx.call(). Let us go through all the options available when calling an action.

Basic call

const result = await broker.call("user.create", {
    name: "Rahul",
    email: "rahul@example.com"
});

Call with options

The third argument to broker.call is an options object:

const result = await broker.call("user.create", {
    name: "Rahul",
    email: "rahul@example.com"
}, {
    // Override the global timeout for this specific call
    timeout: 5000,

    // Retry this call up to 3 times if it fails
    retries: 3,

    // Pass metadata — visible in ctx.meta inside the handler
    meta: {
        userAgent: "Mozilla/5.0",
        requestID: "abc-123"
    }
});

Calling from inside a service action

Inside a service action always use ctx.call() instead of broker.call():

actions: {
    async createOrder(ctx) {
        // Call user service to verify user exists
        const user = await ctx.call("user.getById", {
            id: ctx.params.userId
        });

        if (!user) {
            throw new Error("User not found");
        }

        // Call product service to check stock
        const product = await ctx.call("product.checkStock", {
            id: ctx.params.productId
        });

        return {
            order: "created",
            user: user.name,
            product: product.name
        };
    }
}

The reason you use ctx.call() inside services is that it carries the request context forward. This means the tracing system can see the full chain of calls — order called user, order called product — as one connected request. If you used broker.call() instead, this chain would break and tracing would not work correctly.

Calling multiple actions in parallel

When two calls do not depend on each other, run them at the same time using Promise.all:

actions: {
    async getDashboard(ctx) {
        // These two calls run at the same time, not one after another
        const [user, orders] = await Promise.all([
            ctx.call("user.getById", { id: ctx.params.userId }),
            ctx.call("order.listByUser", { userId: ctx.params.userId })
        ]);

        return { user, orders };
    }
}

This is significantly faster than awaiting them one by one when the calls are independent.
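You can see the difference with a self-contained sketch — no Moleculer required. Each fake call takes about 100 ms, standing in for a real service call:

```javascript
// Two fake service calls, each taking ~100 ms
const fakeCall = name =>
    new Promise(resolve => setTimeout(() => resolve(name), 100));

async function main() {
    // Sequential: the second call waits for the first — ~200 ms total
    let start = Date.now();
    await fakeCall("user");
    await fakeCall("orders");
    console.log(`Sequential: ~${Date.now() - start} ms`);

    // Parallel: both timers run at once — ~100 ms total
    start = Date.now();
    await Promise.all([fakeCall("user"), fakeCall("orders")]);
    console.log(`Parallel: ~${Date.now() - start} ms`);
}

main();
```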


ctx.meta — Passing Data Across the Call Chain

ctx.meta is a special object that travels with the request through the entire call chain. Unlike ctx.params which contains the action input, ctx.meta is for cross-cutting data — things like the authenticated user, request ID, language preference, etc.

// In your API Gateway or first action in the chain
actions: {
    async getProfile(ctx) {
        // Set something in meta
        ctx.meta.authUser = { id: 1, role: "admin" };

        // Call another service
        const result = await ctx.call("user.getById", { id: 1 });
        return result;
    }
}

// In user.service.js
actions: {
    getById(ctx) {
        // meta is automatically available here
        // even though this is a different service
        console.log(ctx.meta.authUser); // { id: 1, role: "admin" }
        return { id: ctx.params.id, name: "Rahul" };
    }
}

ctx.meta is how you pass the authenticated user's information across all services without having to explicitly include it in every action's params. We will use this heavily in the API Gateway post.


Throwing Errors from Actions

When something goes wrong in an action, throw an error. Moleculer has built-in error classes you should use:

const { MoleculerClientError, MoleculerServerError } = require("moleculer").Errors;

module.exports = {
    name: "user",

    actions: {
        getById: {
            params: {
                id: "string"
            },
            async handler(ctx) {
                const user = await findUserById(ctx.params.id);

                if (!user) {
                    // 404 — client made a bad request
                    throw new MoleculerClientError(
                        "User not found",
                        404,
                        "USER_NOT_FOUND",
                        { id: ctx.params.id }
                    );
                }

                return user;
            }
        }
    }
};

MoleculerClientError is for errors caused by bad input from the caller — wrong ID, missing field, unauthorized. MoleculerServerError is for internal failures — database down, unexpected exception.

The API Gateway automatically converts these errors into the correct HTTP status codes. A MoleculerClientError with code 404 becomes an HTTP 404 response.


A Complete Realistic Service Example

Let us put everything together. This is what a real service looks like:

"use strict";

const { MoleculerClientError } = require("moleculer").Errors;

// In-memory store for this example
// In real projects this would be a database
const users = [
    { id: "1", name: "Rahul Sharma", email: "rahul@example.com", role: "admin" },
    { id: "2", name: "Priya Singh", email: "priya@example.com", role: "user" }
];

module.exports = {
    name: "user",

    settings: {
        defaultPageSize: 10
    },

    actions: {
        // List all users
        list: {
            rest: {
                method: "GET",
                path: "/"
            },
            handler(ctx) {
                return users;
            }
        },

        // Get a single user by ID
        getById: {
            rest: {
                method: "GET",
                path: "/:id"
            },
            params: {
                id: "string"
            },
            handler(ctx) {
                const user = this.findUser(ctx.params.id);
                if (!user) {
                    throw new MoleculerClientError(
                        "User not found",
                        404,
                        "USER_NOT_FOUND"
                    );
                }
                return user;
            }
        },

        // Create a new user
        create: {
            rest: {
                method: "POST",
                path: "/"
            },
            params: {
                name: { type: "string", min: 2 },
                email: "email",
                role: { type: "enum", values: ["admin", "user"], default: "user" }
            },
            handler(ctx) {
                const newUser = {
                    // Naive ID generation — fine for this in-memory demo, but it
                    // can collide after a delete. Real code uses DB-generated IDs.
                    id: String(users.length + 1),
                    ...ctx.params
                };
                users.push(newUser);
                return newUser;
            }
        },

        // Delete a user
        remove: {
            rest: {
                method: "DELETE",
                path: "/:id"
            },
            params: {
                id: "string"
            },
            handler(ctx) {
                const index = users.findIndex(u => u.id === ctx.params.id);
                if (index === -1) {
                    throw new MoleculerClientError(
                        "User not found",
                        404,
                        "USER_NOT_FOUND"
                    );
                }
                users.splice(index, 1);
                return { message: "User deleted successfully" };
            }
        }
    },

    methods: {
        // Private helper method
        findUser(id) {
            return users.find(u => u.id === id);
        }
    },

    started() {
        this.logger.info(`User service started with ${users.length} users`);
    }
};

Save this as services/user.service.js in your project. Run npm run dev and test these endpoints:

GET    http://localhost:3000/api/user
GET    http://localhost:3000/api/user/1
POST   http://localhost:3000/api/user
DELETE http://localhost:3000/api/user/1

For the POST request, send this JSON body in Postman:

{
    "name": "Amit Kumar",
    "email": "amit@example.com",
    "role": "user"
}

You have a fully working user service with proper validation, error handling, and REST endpoints.


Summary

  • A service is a plain exported JavaScript object. No classes, no decorators.
  • settings stores service-level config values, accessed via this.settings.
  • dependencies makes a service wait for other services before starting.
  • methods are private helper functions inside a service, called via this.methodName().
  • Always use full action syntax in real projects — with rest, params, and handler.
  • params schema handles validation automatically, including defaults and optional fields, before your handler runs.
  • Use ctx.call() inside services, not broker.call(), to preserve the call chain for tracing.
  • Run independent calls in parallel using Promise.all for better performance.
  • ctx.meta carries cross-cutting data like auth info across the entire call chain.
  • Use MoleculerClientError for bad input errors and MoleculerServerError for internal failures.

Up Next

Post 5 covers Events — the second way services communicate in Moleculer. Instead of request-reply like actions, events are fire-and-forget broadcasts. We will cover emitting events, listening to events, balanced vs broadcast events, and when to use events over actions.


Course Progress: 4 of 15 posts complete.

Phase 2 - Core Concepts | Post 3 | The ServiceBroker — The Heart of Moleculer

Post 3 of 15 | Phase 2: Core Concepts


The ServiceBroker — The Heart of Moleculer

In the previous post you created a project, ran it, and wrote your first service. You saw broker.start() and moleculer.config.js but we did not go deep into what the broker actually is and how it works. This post covers that completely.

Understanding the broker well is the most important thing in this entire course. Every other concept — services, actions, events, transporters — all sit on top of the broker. If you understand the broker, everything else becomes easy.


What is the ServiceBroker?

Think of a large company. The company has multiple departments — HR, Finance, Engineering, Sales. Each department does its own job. But there is a central management office that:

  • Knows which departments exist
  • Routes requests to the right department
  • Manages communication between departments
  • Handles failures when a department is unavailable
  • Keeps logs of everything happening

The ServiceBroker is exactly that central management office for your Moleculer application. Every service registers itself with the broker. Every action call goes through the broker. Every event passes through the broker. Nothing happens in Moleculer without the broker knowing about it.


Creating a Broker — Three Ways

Way 1: Default broker with no configuration


    const { ServiceBroker } = require("moleculer");

    const broker = new ServiceBroker();

This creates a broker with all default settings. Logger is enabled, no transporter, no caching. Fine for quick experiments.

Way 2: Inline configuration


    const { ServiceBroker } = require("moleculer");

    const broker = new ServiceBroker({
        nodeID: "my-node-1",
        logLevel: "info",
        transporter: null
    });

You pass a configuration object directly when creating the broker.

Way 3: Using moleculer.config.js (recommended for real projects)

This is what the CLI generates. The configuration lives in a separate file and moleculer-runner reads it automatically when you run npm start or npm run dev.


    // moleculer.config.js
    module.exports = {
        nodeID: "my-node-1",
        logLevel: "info",
        transporter: null
    };

In real projects you always use Way 3. Ways 1 and 2 are for quick scripts and learning purposes.


Understanding moleculer.config.js

Open the moleculer.config.js file in your my-project folder. It looks like this:


    "use strict";

    module.exports = {
        namespace: "",
        nodeID: null,

        metadata: {},

        logger: {
            type: "Console",
            options: {
                colors: true,
                moduleColors: false,
                formatter: "full",
                objectPrinter: null,
                autoPadding: false
            }
        },

        logLevel: "info",

        transporter: null,

        cacher: null,

        serializer: null,

        requestTimeout: 10 * 1000,

        retryPolicy: {
            enabled: false,
            retries: 5,
            delay: 100,
            maxDelay: 1000,
            factor: 2,
            check: err => err && !!err.retryable
        },

        maxCallLevel: 100,

        heartbeatInterval: 10,
        heartbeatTimeout: 30,

        contextParamsCloning: false,

        tracking: {
            enabled: false,
            shutdownTimeout: 5000,
        },

        disableBalancer: false,

        registry: {
            strategy: "RoundRobin",
            preferLocal: true
        },

        circuitBreaker: {
            enabled: false,
            threshold: 0.5,
            minRequestCount: 20,
            windowTime: 60,
            halfOpenTime: 10 * 1000,
            check: err => err && err.code >= 500
        },

        bulkhead: {
            enabled: false,
            concurrency: 10,
            maxQueueSize: 100,
        },

        validator: true,

        errorHandler: null,

        metrics: {
            enabled: false
        },

        tracing: {
            enabled: false
        },

        middlewares: [],

        replCommands: null,

        created(broker) { },
        started(broker) { },
        stopped(broker) { },
    };

This looks like a lot but you do not need to understand everything right now. Let us go through the important ones one by one.


Key Configuration Options Explained

nodeID


    nodeID: null

Every broker instance has a unique ID called nodeID. When set to null, Moleculer auto-generates one from your hostname and the process ID, for example my-pc-3721.

The nodeID matters when you run multiple instances of your application on different machines. Each machine needs a unique nodeID so the broker knows which node is which. For local development, auto-generate is fine.
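A common pattern — an assumption on my part, not something the generated config does for you — is to read the nodeID from an environment variable so each deployed instance gets its own while local development keeps auto-generation:

```javascript
// moleculer.config.js
module.exports = {
    // e.g. NODEID=api-1 npm start in production;
    // falls back to auto-generation when the variable is unset
    nodeID: process.env.NODEID || null
};
```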

namespace


    namespace: ""

Think of namespace as a room name. If you have multiple Moleculer applications running on the same network, they can accidentally talk to each other. Setting a namespace isolates them.

For example:

// App 1
namespace: "ecommerce"

// App 2
namespace: "blog"

These two apps will never interfere with each other even if they share the same transporter.

logLevel


    logLevel: "info"

Controls how much the broker prints to the console. Available levels from most to least verbose:

  • trace — prints everything, very noisy
  • debug — prints debugging details
  • info — prints normal operational messages (recommended for development)
  • warn — prints only warnings and errors
  • error — prints only errors
  • fatal — prints only fatal errors
  • silent — prints nothing

During development use info. In production use warn or error.

requestTimeout


    requestTimeout: 10 * 1000

This is 10000 milliseconds, meaning 10 seconds. If you call an action and it does not respond within 10 seconds, Moleculer automatically throws a RequestTimeoutError. This prevents your application from hanging forever waiting for a response.

You can override this per call as well, which we will see later.

transporter


    transporter: null

When null, all services run in the same process and communicate directly in memory. No network involved. This is perfect for development.

When you want services on different machines to communicate, you set a transporter here, for example NATS or Redis. We cover this in Post 7.

validator


    validator: true

When true, Moleculer automatically validates action params using the fastest-validator library. This is what makes the params schema work in your service actions. Keep this true always.

retryPolicy


    retryPolicy: {
        // Enable feature
        enabled: false,
        // Count of retries
        retries: 5,
        // First delay in milliseconds.
        delay: 100,
        // Maximum delay in milliseconds.
        maxDelay: 1000,
        // Backoff factor for delay. 2 means exponential backoff.
        factor: 2,
        // A function to check failed requests.
        check: err => err && !!err.retryable
    }

When enabled, if an action call fails, Moleculer automatically retries it. retries is how many times to retry. delay is the initial wait in milliseconds between retries. factor means each retry waits longer — first retry waits 100ms, second waits 200ms, third waits 400ms, and so on. We cover this in depth in Post 8.
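The delay arithmetic is easy to verify by hand. This little sketch — plain JavaScript, not Moleculer internals — computes the schedule implied by the default options above:

```javascript
// Delay before each retry attempt: start at `delay`,
// multiply by `factor` each time, never exceed `maxDelay`.
function backoffDelays({ retries, delay, maxDelay, factor }) {
    const delays = [];
    let current = delay;
    for (let i = 0; i < retries; i++) {
        delays.push(Math.min(current, maxDelay));
        current *= factor;
    }
    return delays;
}

console.log(backoffDelays({ retries: 5, delay: 100, maxDelay: 1000, factor: 2 }));
// → [ 100, 200, 400, 800, 1000 ]
```

Note how maxDelay caps the fifth wait at 1000 ms instead of letting it grow to 1600 ms.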

circuitBreaker


    circuitBreaker: {
        // Enable feature
        enabled: false,
        // Threshold value. 0.5 means that 50% should be failed for tripping.
        threshold: 0.5,
        // Minimum request count. Below it, CB does not trip.
        minRequestCount: 20,
        // Number of seconds for time window.
        windowTime: 60,
        // Number of milliseconds to switch from open to half-open state
        halfOpenTime: 10 * 1000,
        // A function to check failed requests.
        check: err => err && err.code >= 500
    }

The circuit breaker is a fault tolerance mechanism. If a service keeps failing, the circuit breaker stops sending requests to it temporarily and returns an error immediately instead of waiting for a timeout. We cover this in Post 8.
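To make the idea concrete before Post 8, here is a toy version in plain JavaScript. It is deliberately simpler than Moleculer's — no time window, no half-open state — it just trips open after three consecutive failures and then fails fast without attempting the call:

```javascript
// Toy circuit breaker: trips open after `threshold` consecutive failures
class ToyCircuitBreaker {
    constructor(threshold = 3) {
        this.threshold = threshold;
        this.failures = 0;
        this.open = false;
    }

    async call(fn) {
        if (this.open) {
            // Fail fast: do not even attempt the call
            throw new Error("Circuit open — failing fast");
        }
        try {
            const result = await fn();
            this.failures = 0; // a success resets the counter
            return result;
        } catch (err) {
            this.failures += 1;
            if (this.failures >= this.threshold) this.open = true;
            throw err;
        }
    }
}
```

A real breaker also moves to half-open after halfOpenTime and lets a probe request through to test recovery. Moleculer handles all of that for you once enabled: true is set.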

registry.strategy


    registry: {
        // Define balancing strategy. More info: https://moleculer.services/docs/0.15/balancing.html
        // Available values: "RoundRobin", "Random", "CpuUsage", "Latency", "Shard"
        strategy: "RoundRobin",
        // Enable local action call preferring. Always call the local action instance if available.
        preferLocal: true
    },

When you have multiple instances of the same service running, the broker uses a load-balancing strategy to decide which instance handles each request. RoundRobin distributes requests evenly, one by one to each instance in turn. preferLocal means that if an instance is available on the same node (the same process), the broker prefers it over a remote one.
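Round-robin is simple enough to sketch in a few lines. This illustrates the idea, not Moleculer's actual registry code:

```javascript
// Cycle through the available nodes, one request per node in turn
function createRoundRobin(nodes) {
    let index = 0;
    return function next() {
        const node = nodes[index];
        index = (index + 1) % nodes.length;
        return node;
    };
}

const next = createRoundRobin(["node-1", "node-2", "node-3"]);
console.log(next(), next(), next(), next());
// → node-1 node-2 node-3 node-1
```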


The Broker Lifecycle

The broker has a clear lifecycle with three stages. Understanding this is important when you need to do things like connect to a database when the app starts, or clean up resources when it stops.

Stage 1: created

This runs immediately when the broker object is created, before any services start. Use this for very early initialization.

Stage 2: started

This runs after all services have started and the broker is fully ready to handle requests. This is where you put code that needs the broker to be ready — like seeding initial data or connecting to external services.

Stage 3: stopped

This runs when the broker is shutting down. Use this for cleanup — closing database connections, flushing logs, etc.

In moleculer.config.js these are defined as:


    created(broker) {
        console.log("Broker created");
    },

    started(broker) {
        console.log("Broker started, ready to handle requests");
    },

    stopped(broker) {
        console.log("Broker stopped, cleaning up");
    }

Each service also has its own lifecycle hooks which we cover in Post 4.


The Broker Lifecycle in a Service

Services also participate in the broker lifecycle through their own hooks:


    module.exports = {
        name: "greeter",

        created() {
            // Runs when the service is created
            // broker is not yet started here
            this.logger.info("Greeter service created");
        },

        async started() {
            // Runs when the broker starts
            // Safe to call other services from here
            // Good place to connect to database
            this.logger.info("Greeter service started");
        },

        async stopped() {
            // Runs when the broker stops
            // Good place to close connections
            this.logger.info("Greeter service stopped");
        },

        actions: {
            hello(ctx) {
                return `Hello ${ctx.params.name}!`;
            }
        }
    };

Notice this.logger inside the service. Every service automatically gets a logger instance from the broker. You do not need to import or configure it. Just use this.logger.info(), this.logger.warn(), this.logger.error() anywhere inside a service.


Customizing the Logger

The default logger prints to the console. You can customize the format in moleculer.config.js:


    // Enable/disable logging or use custom logger. More info: https://moleculer.services/docs/0.15/logging.html
    // Available logger types: "Console", "File", "Pino", "Winston", "Bunyan", "debug", "Log4js", "Datadog"
    logger: {
        type: "Console",
        options: {
            // Using colors on the output
            colors: true,
            // Print module names with different colors (like docker compose for containers)
            moduleColors: false,
            // Line formatter. It can be "json", "short", "simple", "full", a `Function` or a template string like "{timestamp} {level} {nodeID}/{mod}: {msg}"
            formatter: "full",
            // Custom object printer. If not defined, it uses the `util.inspect` method.
            objectPrinter: null,
            // Auto-padding the module name so that messages begin at the same column.
            autoPadding: false
        }
    }

The formatter option controls the log line format. Options are:

  • full — shows timestamp, level, nodeID, service name, and message. Best for development.
  • short — shorter format, less information.
  • simple — minimal format, just level and message.
  • json — outputs logs as JSON objects. Best for production log aggregation tools.

For development, full with colors is the most readable.
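If none of the presets fit, the comments in the generated config note that formatter also accepts a template string. A sketch — the placeholder names come straight from that comment:

```javascript
// moleculer.config.js (fragment)
logger: {
    type: "Console",
    options: {
        // Custom line format built from template placeholders
        formatter: "{timestamp} {level} {nodeID}/{mod}: {msg}"
    }
}
```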


Running Multiple Brokers Locally (Preview)

Right now everything runs in one process with one broker. In real microservices you run each service as a separate process, possibly on separate machines. Each process has its own broker. They communicate via a transporter.

Here is a preview of what that looks like. Do not run this yet, just read it to understand the concept:


    // Process 1: runs user-service
    const broker1 = new ServiceBroker({
        nodeID: "node-user",
        transporter: "NATS"
    });
    broker1.loadService("./services/user.service.js");
    broker1.start();


    // Process 2: runs order-service
    // This is a completely separate Node.js process
    const broker2 = new ServiceBroker({
        nodeID: "node-order",
        transporter: "NATS"
    });
    broker2.loadService("./services/order.service.js");
    broker2.start();

Both brokers connect through NATS. The order service can call user.getUser and the broker automatically routes it to Process 1 over the network. From the developer's perspective the call looks exactly the same as a local call.

This is the power of Moleculer. The location of a service is transparent to the caller.


Practical Exercise

Open your my-project folder and make these changes to moleculer.config.js to understand how config changes affect behavior.

Change 1: Change logLevel to debug and restart the server. Notice how much more information appears in the console. Change it back to info after.

Change 2: Change requestTimeout to 3 * 1000. This sets a 3 second timeout. Restart and call any action. It still works because actions respond in milliseconds.

Change 3: Add your own started hook:


    async started(broker) {
        broker.logger.info("==== Application is ready ====");
    }

Restart and look for your message in the console output.

After each change, restore the file to its original state before moving on.


Summary

  • The ServiceBroker is the central manager of your entire Moleculer application.
  • Every service registers with the broker. Every call and event passes through it.
  • moleculer.config.js is the recommended place to configure the broker in real projects.
  • nodeID identifies each broker instance. Namespace isolates multiple apps on the same network.
  • logLevel controls console output verbosity. Use info for development.
  • requestTimeout prevents your app from hanging when a service does not respond.
  • transporter: null means all services run in one process. Setting a transporter enables multi-node communication.
  • The broker has three lifecycle stages: created, started, stopped.
  • Services also have their own lifecycle hooks: created, started, stopped.
  • Every service gets this.logger automatically from the broker.

Up Next

Post 4 goes deep into Services and Actions — the building blocks you write every single day in Moleculer. We will cover service schema in full detail, action parameters, validation, shorthand vs full action syntax, calling actions with options, and service mixins.


Course Progress: 3 of 15 posts complete.
