Alexander Holbreich

How I migrated my Hugo blog with AI

What happened

This site is in the middle of a longer transition from my old Hugo-based blog to the current Astro setup.
The old repository still contained a lot of historical posts and assets, but I wanted the content to live in the current project, and I wanted the old URLs to survive.

This time I did not migrate those posts manually.
Instead I worked together with Codex inside the repository and first asked it a very basic question:

Can you work with tasks in this repository?

That was important because this repo uses bd for issue tracking, and I did not want the work to start with random TODO notes or vague promises. I wanted it to happen through the repo’s actual workflow.

First step: use bd properly

After Codex confirmed that it could work with bd, I asked it to create a proper story first.
The concrete goal was simple:

Codex created one parent story and several subtasks around it. That was already a good sign, because this migration was not really one task.

The work was split roughly like this:

  1. inventory the old Hugo content and decide what belongs to the migration pass
  2. migrate markdown into src/data/blog
  3. copy page bundle assets and fix local references
  4. validate routes against the old Hugo URLs

I liked that decomposition because it mirrors the real risk areas.
The danger was not “copy files”. The danger was breaking URLs, missing images, malformed frontmatter and weird legacy content.

How I asked Codex to execute it

The next instruction was basically:

ok go ahed execute mentioned tasks

This was enough because the story and subtasks already existed in bd.
So Codex claimed the task, inspected the repository rules, checked the current Astro routing and content schema, and then started working through the migration in the same terminal session.

That is actually one of the most interesting aspects for me.
I did not ask it for a plan document or for pseudocode. I asked it to operate inside the repo as if it were another engineer on the project, but constrained by the rules of the repo itself.

What Codex actually did

The implementation ended up being quite pragmatic.

First it inspected both repositories:

Then it checked the current routing behavior and discovered that this project already had a Hugo compatibility mechanism in place: when a slug is present in the frontmatter, the route becomes flat, like /<slug>.
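To make that concrete, the slug-to-route mapping can be sketched as a tiny function. The fallback path scheme and field names here are my assumptions, not the project’s actual routing code:

```javascript
// Sketch of the Hugo compatibility mapping described above.
// Assumption: non-legacy posts live under a nested /posts/<stem>
// scheme; the real project's default route may differ.
function routeForPost(frontmatter, fileStem) {
  // When a Hugo-era `slug` is present, serve the post flat at the
  // site root so old URLs like /my-old-post keep working.
  if (frontmatter.slug) {
    return `/${frontmatter.slug}`;
  }
  // Otherwise fall back to the current blog's nested scheme.
  return `/posts/${fileStem}`;
}

console.log(routeForPost({ slug: "my-old-post" }, "2019-03-my-old-post"));
// → "/my-old-post"
```

With that mapping, a post that carried `slug: my-old-post` in Hugo keeps resolving at `/my-old-post` after the migration.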

That was exactly the missing bridge for the migration.

After that Codex created a rerunnable migration script:

scripts/migrate-hugo-posts.mjs

and a validation script:

scripts/validate-hugo-routes.mjs

The migration script did the following:
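Roughly, and with the frontmatter shape assumed rather than taken from the real script, the core transform could be sketched like this:

```javascript
// Hypothetical sketch of the core transform in a migrate-hugo-posts
// style script: pin the Hugo slug so old URLs survive. The frontmatter
// format and field names are assumptions, not the real script's code.
function migratePost(hugoMarkdown, hugoSlug) {
  // Split off the leading `---` frontmatter block.
  const match = hugoMarkdown.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) {
    throw new Error("post has no frontmatter block");
  }
  let frontmatter = match[1];
  // Preserve the old URL by making the Hugo slug explicit.
  if (!/^slug:/m.test(frontmatter)) {
    frontmatter += `\nslug: ${hugoSlug}`;
  }
  const body = hugoMarkdown.slice(match[0].length);
  return `---\n${frontmatter}\n---\n${body}`;
}
```

A rerunnable script built around a pure transform like this can be applied to all 125 posts and re-applied safely after fixes.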

The route validation script checked whether duplicate published slugs were introduced.
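The duplicate check itself is simple. A sketch of the idea, with the post list shape assumed:

```javascript
// Sketch of the duplicate-slug check a validate-hugo-routes style
// script needs: two published posts must never claim the same flat
// URL. The input shape here is an assumption.
function findDuplicateSlugs(posts) {
  const seen = new Map();
  const duplicates = [];
  for (const post of posts) {
    if (post.draft) continue; // drafts never publish a route
    const prev = seen.get(post.slug);
    if (prev) {
      duplicates.push({ slug: post.slug, files: [prev, post.file] });
    } else {
      seen.set(post.slug, post.file);
    }
  }
  return duplicates;
}
```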

The interesting part: legacy content is messy

Of course the first run was not perfect.
That would have been suspicious.

The old content contained some very typical historical problems:

Codex iterated over those problems one by one.
For example:

Instead of pretending those files were fine, the migration left explicit notes where assets were missing and recorded those anomalies in the generated report.
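That “record the gap” approach can be sketched as an audit pass over the markdown. The anomaly format and the `assetExists` helper are hypothetical stand-ins, not the actual report format:

```javascript
// Sketch of recording anomalies instead of shipping broken references:
// collect every local image reference and note the ones that do not
// resolve. `assetExists` stands in for a real filesystem check.
function auditImageRefs(markdown, assetExists) {
  const anomalies = [];
  // Match markdown image syntax ![alt](path).
  for (const m of markdown.matchAll(/!\[[^\]]*\]\(([^)]+)\)/g)) {
    const path = m[1];
    if (/^https?:\/\//.test(path)) continue; // external URL, skip
    if (!assetExists(path)) {
      anomalies.push({ path, note: "asset missing after migration" });
    }
  }
  return anomalies;
}
```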

That was exactly the right tradeoff in my opinion.
Better to preserve the article and expose the gap than to silently ship broken references.

How long it took

The whole session took around 15 minutes from creating the migration story to a successful build.

That does not mean “the migration took 15 minutes” in some absolute sense.
It means:

Still, for migrating 125 published posts plus additional drafts and assets, with route validation and a full successful build, I think this is a very respectable result.

Why bd mattered here

The quiet hero in this small story is actually bd.

Without bd, this kind of AI-assisted task can become hand-wavy very quickly.
You ask for a big change, the agent starts exploring, and after a while nobody is sure what exactly was agreed, what is done, and what is still open.

With bd, the flow was much cleaner:

  1. create the story
  2. split the work into subtasks
  3. claim the task
  4. execute the work
  5. validate the result
  6. close the issues

That structure gave the session a shape.
And importantly, it made the AI operate under the same project discipline that I expect from human contributors.

What I learned

After this run I would summarize the collaboration like this:

I also think that asking first whether the agent can work with the repository task system was the right move.
That small question established the contract for everything that followed.

Final thought

There is a lot of noise around AI coding tools and most of it is either too optimistic or too dismissive.
This migration was a more grounded example.

I had a concrete problem, a repository with clear conventions, a local task tracker, and an agent able to operate in that environment.
The result was not magic. It was just productive engineering work with a much faster feedback loop.

And yes, I still reviewed what happened.

That part is not going away any time soon.

