
There’s a specific kind of dread that sets in when you check your production logs and see the infamous error: Fatal error: JavaScript heap out of memory.

For developers using modern Node.js stacks like Next.js (frontend/SSR) and NestJS (backend), this error usually means the Garbage Collector (GC) has failed to reclaim enough memory to keep the application running.

In most cases, it isn’t a problem with the frameworks themselves, but rather how we manage data and state within them.

In this post, we’ll explore the most common programming mistakes that cause Out of Memory (OOM) errors within a Next.js/NestJS architecture, complete with examples and solutions.


What is a Memory Leak?

Before we dive in, a quick primer.

JavaScript is a garbage-collected language.

The runtime automatically allocates memory when objects are created and frees it once they are no longer “reachable” from the garbage collector’s roots (the global object, the current call stack, and so on).

A memory leak occurs when your application inadvertently retains a reference to an object that is no longer needed.

Because the reference still exists, the Garbage Collector cannot free that memory, causing the heap size to grow steadily until the application crashes.


Mistakes in NestJS (Backend API)

NestJS applications are long-running processes. A tiny leak in a single API endpoint can accumulate over thousands of requests, eventually bringing down the entire server.

1. The Accidental Global Cache

A common requirement is to cache data from a slow API or database call.

However, storing this cache in a simple global object or a Singleton class variable without a removal strategy is a recipe for disaster.

The Mistake:

You decide to cache user permissions in a plain JavaScript object attached to a service.

As more unique users log in, the object grows indefinitely.

// user.service.ts (Bad Implementation)
import { Injectable } from '@nestjs/common';

@Injectable()
export class UserService {
  // ❌ LEAK: This object grows with every unique user and is never cleared.
  private permissionsCache: Record<string, any[]> = {};

  async getUserPermissions(userId: string) {
    if (this.permissionsCache[userId]) {
      return this.permissionsCache[userId];
    }

    const permissions = await this.db.fetchPermissions(userId); // Slow call
    this.permissionsCache[userId] = permissions;
    return permissions;
  }
}

The Solution:

Use a dedicated caching library with built-in Time-To-Live (TTL) or Least Recently Used (LRU) eviction policies.

NestJS makes this easy with the CacheModule provided by the @nestjs/cache-manager package.

// user.service.ts (Good Implementation)
import { Injectable, Inject } from '@nestjs/common';
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Cache } from 'cache-manager';

@Injectable()
export class UserService {
  constructor(@Inject(CACHE_MANAGER) private cacheManager: Cache) {}

  async getUserPermissions(userId: string) {
    const cacheKey = `perms:${userId}`;
    const cached = await this.cacheManager.get<any[]>(cacheKey);

    if (cached) return cached;

    const permissions = await this.db.fetchPermissions(userId);
    // ✅ FIX: Set a TTL so stale entries are evicted (cache-manager v5 expects milliseconds).
    await this.cacheManager.set(cacheKey, permissions, 300_000); // 5 minutes
    return permissions;
  }
}
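If pulling in a cache library feels heavy, the eviction idea itself can be sketched with a plain Map and expiry timestamps. This is a minimal illustration of time-based eviction, not a replacement for CacheModule (it has no size cap or LRU policy).

```typescript
// Minimal TTL cache sketch: entries carry an expiry time and are dropped lazily on read.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // Lazy eviction keeps the map from growing forever
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get size(): number {
    return this.store.size;
  }
}
```

A production-grade version would also bound the number of entries (LRU) so a burst of unique keys cannot exhaust the heap before the TTLs expire.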

2. Slicing Large Arrays to “Free” Memory

In Node.js, Buffer.slice() (and its modern replacement, Buffer.subarray()) returns a view onto the original memory rather than a new copy. Note that Array.prototype.slice() does return a new array; this trap is specific to Buffers and TypedArrays.

If you slice a small piece from a massive 100MB buffer and retain only the small slice, you are still holding the entire 100MB allocation in memory.
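You can observe the sharing directly (a Node-only sketch; subarray() is the non-deprecated spelling of slice()):

```typescript
const big = Buffer.alloc(8, 1); // Stand-in for a large allocation, filled with 1s

const view = big.subarray(0, 4); // Shares memory with `big`
const copy = Buffer.from(view);  // Independent allocation made before any mutation

view[0] = 99;
console.log(big[0]);  // 99 — mutating the view mutated the original
console.log(copy[0]); // 1  — the copy is detached
```

As long as `view` is reachable, the whole `big` allocation stays alive; `copy` does not pin it.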

The Mistake:

You process a huge CSV import, but only need the first few headers.

// import.service.ts (Bad Implementation)
async processHugeCsv(csvBuffer: Buffer) {
  // csvBuffer might be 50MB
  // ❌ LEAK: This slice is only a view, so it retains the entire underlying buffer.
  const headerSlice = csvBuffer.slice(0, 100); // (the newer subarray() behaves the same)
  
  // You save the slice to a service variable to use later
  this.currentHeaders = headerSlice; 
  
  // ... process the rest ...
}

The Solution:

If you need a smaller part of a large buffer or array, create a fresh copy of that smaller part.

The Garbage Collector can then free the original large object.

// import.service.ts (Good Implementation)
async processHugeCsv(csvBuffer: Buffer) {
  const start = 0;
  const end = 100;
  // ✅ FIX: Create a brand new, detached Buffer instance containing only the data needed.
  const headerCopy = Buffer.allocUnsafe(end - start);
  csvBuffer.copy(headerCopy, 0, start, end);

  this.currentHeaders = headerCopy; // Original csvBuffer can now be GCed.
  
  // ... process the rest ...
}

Mistakes in Next.js (Frontend & SSR)

Next.js operates in two distinct worlds: the client-side (browser) and the server-side (Server-Side Rendering, API Routes).

You have to watch out for leaks in both.

3. Forgotten Listeners and Intervals in useEffect

This is the classic React memory leak, but it is magnified in Next.js when users spend a long time on your site navigating between dynamic routes.

If you set a listener but don’t remove it when the component unmounts, that listener stays alive, along with every variable it closes over.

The Mistake:

A dashboard component tracks window resize events to adjust its layout, but forgets to clean up.

// components/Dashboard.tsx (Bad Implementation)
'use client';
import { useEffect, useState } from 'react';

export default function Dashboard() {
  const [isMobile, setIsMobile] = useState(false);

  useEffect(() => {
    const handleResize = () => {
      console.log('Resizing...'); // This keeps running even after navigation
      setIsMobile(window.innerWidth < 768);
    };

    // ❌ LEAK: Every time this component mounts, a new listener is added.
    window.addEventListener('resize', handleResize);
  }, []); // Empty dependency array means it runs on mount only.

  return <div>{isMobile ? 'Mobile View' : 'Desktop View'}</div>;
}

The Solution:

Always return a cleanup function from your useEffect hook to remove listeners, clear intervals, or cancel API subscriptions.

// components/Dashboard.tsx (Good Implementation)
'use client';
import { useEffect, useState } from 'react';

export default function Dashboard() {
  const [isMobile, setIsMobile] = useState(false);

  useEffect(() => {
    const handleResize = () => {
      setIsMobile(window.innerWidth < 768);
    };

    window.addEventListener('resize', handleResize);

    // ✅ FIX: The returned function is called when the component unmounts.
    return () => {
      window.removeEventListener('resize', handleResize);
    };
  }, []);

  return <div>{isMobile ? 'Mobile View' : 'Desktop View'}</div>;
}
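The same subscribe/dispose discipline applies outside React. With Node’s EventEmitter (a hypothetical server-side example) you can watch listeners accumulate and then release them, mirroring useEffect’s cleanup return value:

```typescript
import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();

function subscribe(): () => void {
  const handler = () => { /* react to the event */ };
  emitter.on('data', handler);
  // Return the cleanup, just like useEffect returns its cleanup function.
  return () => emitter.off('data', handler);
}

// Without cleanup, each subscribe() call leaks another handler:
const cleanups = [subscribe(), subscribe(), subscribe()];
console.log(emitter.listenerCount('data')); // 3

// Running the cleanups releases the references:
cleanups.forEach((dispose) => dispose());
console.log(emitter.listenerCount('data')); // 0
```

Each leaked handler also pins everything in its closure, which is how a “small” listener leak turns into megabytes of retained state.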

4. Holding Massive State in Route Handlers / API Routes

In Next.js App Router, route.ts handlers are similar to NestJS controllers.

They run on the server. If you receive a massive payload (like a file upload) and try to hold the entire thing in memory as a variable, you will instantly spike the server’s RAM.

The Mistake:

An API route receives a file upload and stores it in an in-memory buffer before sending it to S3. If the file is 1GB, your server needs 1GB of free RAM to handle just that one request.

// app/api/upload/route.ts (Bad Implementation)
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  // ❌ LEAK/SPIKE: This loads the entire body into memory at once.
  const entirePayload = await request.arrayBuffer();
  const buffer = Buffer.from(entirePayload);

  await uploadToS3(buffer); // A slow process
  return NextResponse.json({ success: true });
}

The Solution:

Use Node.js Streams. Streaming allows you to process data chunk by chunk without ever loading the entire file into the server’s memory.

// app/api/upload/route.ts (Good Implementation)
import { NextResponse } from 'next/server';
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage"; // S3 multipart streaming helper

const s3Client = new S3Client({}); // Picks up region/credentials from the environment

export async function POST(request: Request) {
  if (!request.body) return NextResponse.error();

  // ✅ FIX: request.body is already a ReadableStream!
  const stream = request.body; 

  const upload = new Upload({
    client: s3Client,
    params: { Bucket: 'my-bucket', Key: 'large-file', Body: stream },
  });

  await upload.done();
  return NextResponse.json({ success: true });
}
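The chunk-by-chunk idea is independent of S3. Here is a self-contained sketch using Node streams that counts the bytes of a payload without ever holding the whole thing in memory (the fake three-chunk upload stands in for a request body):

```typescript
import { Readable, Writable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

async function countBytes(source: Readable): Promise<number> {
  let total = 0;
  const counter = new Writable({
    write(chunk: Buffer, _enc, callback) {
      total += chunk.length; // Only one chunk is in memory at a time
      callback();
    },
  });
  await pipeline(source, counter);
  return total;
}

// Usage: a fake 3-chunk upload ('abc' + 'defg' + 'hi' = 9 bytes)
const fakeUpload = Readable.from(['abc', 'defg', 'hi']);
countBytes(fakeUpload).then((n) => console.log(n)); // 9
```

Whether the sink is S3, disk, or another service, the shape is the same: pipe the source into the destination and let backpressure keep memory usage flat.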

Conclusion: How to Protect Your Heap

Out Of Memory (OOM) errors can be some of the hardest to debug because the crash rarely happens at the exact point of the leak.

However, by adopting these three principles, you can significantly reduce your risk:

  1. Stop using Globals: In NestJS services or global utilities, avoid plain objects or arrays for persistent data storage. Use CacheModule or a database.

  2. Clean up your Hooks: In Next.js client components, every useEffect that creates an interval, timeout, or listener must have a cleanup function.

  3. Stream Large Data: If you are handling files, massive CSV exports, or large database queries, never load the entire dataset into a variable. Stream it from source to destination.

Happy coding, and keep your memory tidy!

Useful links below:

Let me & my team build you a money making website/blog for your business https://bit.ly/tnrwebsite_service

Get Bluehost hosting for as little as $1.99/month (save 75%)…https://bit.ly/3C1fZd2

Best email marketing automation solution on the market! http://www.aweber.com/?373860

Build high converting sales funnels with a few simple clicks of your mouse! https://bit.ly/484YV29

Join my Patreon for one-on-one coaching and help with your coding…https://www.patreon.com/c/TyronneRatcliff

Buy me a coffee ☕️https://buymeacoffee.com/tyronneratcliff


If you’ve ever wondered how a database can find one specific row out of millions in milliseconds, the answer is likely a B-Tree.

Unlike a standard Binary Search Tree (BST), which can become unbalanced and slow, a B-Tree is a self-balancing search tree designed to handle large amounts of data efficiently.


Why B-Trees for Databases?

In a database, data is often stored on a disk rather than in RAM. Accessing a disk is significantly slower than accessing memory.

  • Minimizing Disk I/O: B-Trees are “fat” and “short.” While a BST might have a height of 20 to find an element, a B-Tree with a high branching factor (order) might only have a height of 3. This means fewer “hops” to find your data.

  • Sorted Storage: Because the keys are kept in order, range queries (e.g., “find all users aged 20 to 30”) are incredibly fast.

  • Predictable Performance: The self-balancing nature ensures that the time complexity for search, insertion, and deletion remains O(log n).
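To make the “short and fat” claim concrete, compare tree heights for one billion keys. This is a back-of-the-envelope sketch; real engines complicate the picture with page sizes and fill factors, and the branching factor of 500 is an assumed round number.

```python
import math

N = 10**9  # one billion keys

# A balanced binary tree needs about log2(N) levels:
bst_height = math.ceil(math.log(N, 2))

# A B-Tree whose nodes each hold ~500 keys branches ~500 ways:
btree_height = math.ceil(math.log(N, 500))

print(bst_height)    # 30
print(btree_height)  # 4
```

Since every level is potentially a disk read, dropping from 30 hops to 4 is the entire point of the structure.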


Understanding the B-Tree Logic

Before coding, keep these rules in mind for a B-Tree of minimum degree t:

  1. Every node (except the root) must have at least t-1 keys.

  2. Every node can have at most 2t-1 keys.

  3. An internal node with n keys has exactly n+1 children.

  4. All leaves must be at the same depth.


Python Implementation

Here is a simplified version of a B-Tree insertion algorithm.

We focus on the split mechanism, which is the “magic” that keeps the tree balanced.

class BTreeNode:
    def __init__(self, leaf=False):
        self.leaf = leaf
        self.keys = []
        self.child = []

class BTree:
    def __init__(self, t):
        self.root = BTreeNode(True)
        self.t = t  # Minimum degree

    def insert(self, k):
        root = self.root
        if len(root.keys) == (2 * self.t) - 1:
            # If root is full, the tree grows in height
            temp = BTreeNode()
            self.root = temp
            temp.child.insert(0, root)
            self.split_child(temp, 0)
            self.insert_non_full(temp, k)
        else:
            self.insert_non_full(root, k)

    def split_child(self, x, i):
        # Split the full child x.child[i] into two nodes, promoting its median key into x.
        t = self.t
        y = x.child[i]
        z = BTreeNode(y.leaf)

        x.child.insert(i + 1, z)
        x.keys.insert(i, y.keys[t - 1])  # Promote the median key

        # Move the second half of y's keys (and children) to z
        z.keys = y.keys[t: (2 * t) - 1]
        y.keys = y.keys[0: t - 1]

        if not y.leaf:
            z.child = y.child[t: 2 * t]
            y.child = y.child[0: t]

    def insert_non_full(self, x, k):
        i = len(x.keys) - 1
        if x.leaf:
            # Shift larger keys right and drop k into its sorted position
            x.keys.append(None)
            while i >= 0 and k < x.keys[i]:
                x.keys[i + 1] = x.keys[i]
                i -= 1
            x.keys[i + 1] = k
        else:
            # Find the child to descend into; split it first if it is full
            while i >= 0 and k < x.keys[i]:
                i -= 1
            i += 1
            if len(x.child[i].keys) == (2 * self.t) - 1:
                self.split_child(x, i)
                if k > x.keys[i]:
                    i += 1
            self.insert_non_full(x.child[i], k)

    def print_tree(self, x, l=0):
        print("Level", l, " ", len(x.keys), end=":")
        for i in x.keys:
            print(i, end=" ")
        print()
        l += 1
        if len(x.child) > 0:
            for i in x.child:
                self.print_tree(i, l)

# Usage
db_index = BTree(3)
for key in [10, 20, 5, 6, 12, 30, 7, 17]:
    db_index.insert(key)

db_index.print_tree(db_index.root)

How this speeds up your DB

When you create an index on a column (like user_id), the database builds this structure in the background.

Instead of a Full Table Scan (checking every single row), the database engine:

  1. Loads the Root node.

  2. Compares your ID to the keys in the node to decide which child pointer to follow.

  3. Repeats until it finds the exact pointer to the data on the disk.

This reduces the search space from O(N) to O(log N) almost instantly.
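The three steps above map directly onto a search routine. Here is a sketch using the same BTreeNode shape as the code earlier, run against a small hand-built two-level tree (the search function itself is a hypothetical addition, not part of the original class):

```python
class BTreeNode:
    def __init__(self, leaf=False):
        self.leaf = leaf
        self.keys = []
        self.child = []

def search(node, k):
    """Return True if key k is in the subtree rooted at node."""
    i = 0
    # Step 2: compare k against the node's sorted keys to pick a child pointer.
    while i < len(node.keys) and k > node.keys[i]:
        i += 1
    if i < len(node.keys) and node.keys[i] == k:
        return True
    if node.leaf:
        return False  # Step 1 loaded the root; we've run out of levels.
    # Step 3: follow the chosen child and repeat.
    return search(node.child[i], k)

# Hand-built two-level tree:    [10, 20]
#                              /    |    \
#                        [5, 7] [12, 17] [30]
root = BTreeNode()
root.keys = [10, 20]
for keys in ([5, 7], [12, 17], [30]):
    leaf = BTreeNode(leaf=True)
    leaf.keys = keys
    root.child.append(leaf)

print(search(root, 17))  # True
print(search(root, 8))   # False
```

In a real engine the leaf entry would hold a pointer to the row on disk rather than just the key, but the descent logic is the same.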
