DebaterX

The Moment I Killed Contentlayer

Contentlayer was broken in Next 15. I replaced it with 40 lines of code.

4 min read

For a while, every Next.js MDX blog tutorial recommended contentlayer. It was the default. It had types, it had plugins, it had the feature matrix you wanted. I used it on two previous projects and liked it.

Then Next 15 shipped, and contentlayer stopped working. The maintainer was responsive but slow. The project was being rewritten. For weeks I had a half-working blog pipeline that broke on every deploy.

I finally gave up and replaced contentlayer with forty lines of code. Here's what I learned.

What contentlayer was doing

Contentlayer was doing three things:

Parsing frontmatter. Reading MDX files, extracting YAML headers, returning structured data.

Generating TypeScript types. Every frontmatter field got a generated type so your app could access them safely.

Creating a content "layer" abstraction. A unified API for querying all your content regardless of where it was stored.

Each of those features is nice. Not one of them is load-bearing for a simple blog.

What I replaced it with

Forty lines of code, roughly:
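A minimal sketch of what such a loader can look like, using only Node built-ins. The naive frontmatter parser here stands in for a library like gray-matter, and all names (content/posts, the Post fields) are illustrative, not the exact code:

```typescript
import fs from "node:fs";
import path from "node:path";

// Assumed layout: content/posts/*.mdx, each with a YAML-ish frontmatter block.
export interface Post {
  slug: string;
  title: string;
  date: string;
  tags: string[];
  body: string; // raw MDX, compiled later by the MDX renderer
}

// Naive frontmatter parser — a stand-in for gray-matter; handles only
// flat `key: value` pairs and comma-separated tag lists.
function parseFrontmatter(raw: string): { data: Record<string, string>; body: string } {
  const match = /^---\n([\s\S]*?)\n---\n?/.exec(raw);
  if (!match) return { data: {}, body: raw };
  const data: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { data, body: raw.slice(match[0].length) };
}

export function getAllPosts(dir = "content/posts"): Post[] {
  return fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(".mdx"))
    .map((file) => {
      const raw = fs.readFileSync(path.join(dir, file), "utf8");
      const { data, body } = parseFrontmatter(raw);
      return {
        slug: file.replace(/\.mdx$/, ""),
        title: data.title ?? "",
        date: data.date ?? "",
        tags: data.tags ? data.tags.split(",").map((t) => t.trim()) : [],
        body,
      };
    })
    .sort((a, b) => (a.date < b.date ? 1 : -1)); // newest first
}
```

Swap the hand-rolled parser for gray-matter and the whole thing lands right around forty lines.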

That's it. No generated types. No layer abstraction. No content schema.

I lost the automatic type generation. I gained a pipeline that takes zero seconds to start and never breaks on a deploy.

What I didn't miss

Here's the thing: I thought I'd miss the generated types. I didn't. The manually-written interface is ten lines. I edit it once a year. The automatic generation was saving me approximately zero minutes of work.

I thought I'd miss the query abstraction. I didn't. My "query" is getAllPosts().filter(p => p.tags.includes('foo')). Plain JavaScript. No DSL needed.
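As a standalone illustration of that point — the inline posts array below stands in for whatever the loader returns, and the field names are illustrative:

```typescript
interface Post {
  slug: string;
  tags: string[];
  date: string;
}

// Stand-in for getAllPosts(): in the real pipeline this comes from the loader.
const posts: Post[] = [
  { slug: "a", tags: ["foo"], date: "2024-02-01" },
  { slug: "b", tags: ["bar"], date: "2024-01-01" },
];

// "Queries" are plain array methods — no DSL, no schema.
const tagged = posts.filter((p) => p.tags.includes("foo"));
const newestFirst = [...posts].sort((a, b) => (a.date < b.date ? 1 : -1));
```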

I thought I'd miss the plugin ecosystem. I didn't. The plugins I was using in contentlayer all have direct MDX equivalents — rehype-pretty-code, remark-gfm — that I just add to my MDX compiler options.
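Wiring those plugins in is a few lines of config. A hypothetical next.config.mjs, assuming @next/mdx, remark-gfm, and rehype-pretty-code are installed (the theme choice is illustrative):

```js
// next.config.mjs
import createMDX from "@next/mdx";
import remarkGfm from "remark-gfm";
import rehypePrettyCode from "rehype-pretty-code";

const withMDX = createMDX({
  options: {
    remarkPlugins: [remarkGfm],
    rehypePlugins: [[rehypePrettyCode, { theme: "github-dark" }]],
  },
});

export default withMDX({
  pageExtensions: ["ts", "tsx", "md", "mdx"],
});
```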

What I did miss, briefly

The content-hash cache invalidation was nice. Contentlayer would re-read only the files that had changed since the last build. My naive implementation re-reads everything on every build.

But my blog has 101 posts. Re-reading everything takes about 30 milliseconds. The cache invalidation wasn't saving me meaningful time.

For a blog with 10,000 posts, I'd need to add caching. For 100, I don't.
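If it ever did become necessary, even an mtime check — simpler than contentlayer's content hashing — would cover most of it. A hypothetical sketch, where parse() stands in for the real frontmatter step:

```typescript
import fs from "node:fs";

// Re-parse a file only if its mtime changed since the last call.
const cache = new Map<string, { mtimeMs: number; value: string }>();

export function readCached(file: string, parse: (raw: string) => string): string {
  const { mtimeMs } = fs.statSync(file);
  const hit = cache.get(file);
  if (hit && hit.mtimeMs === mtimeMs) return hit.value;
  const value = parse(fs.readFileSync(file, "utf8"));
  cache.set(file, { mtimeMs, value });
  return value;
}
```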

The general principle

Tools like contentlayer are valuable when:

  1. You have a lot of content (thousands of files).
  2. You need cross-content queries at build time.
  3. You're okay with the tool's framework assumptions.

For everything else, they're over-engineering. A function that reads the filesystem and returns parsed markdown is ~40 lines. It's straightforward to write, easy to debug, and never has a version conflict.

The pattern I now follow: before adopting a content management tool, ask if I could write the functionality I need in under 100 lines of plain code. If yes, do that. If no, adopt the tool.

Most of the time, it's yes. Most content management tools are solving problems you don't actually have.

The blog pipeline today

My blog pipeline is: read the files, parse the frontmatter, compile the MDX.

No contentlayer. No schema. No generated types. Forty lines of glue code.

It works. It's fast. It's never broken on a deploy. It's the kind of infrastructure I don't have to think about, which is the goal.

The takeaway

When a dependency you rely on stalls, breaks, or slows down, the question isn't "can I wait for it to be fixed." The question is "can I replace this with code I control."

Often the answer is yes, and cheaper than you think. Fewer dependencies means more control. More control means less anxiety about deploy-day surprises.

Fewer, simpler, yours. That's the direction.
