Creating pages from data

Am I missing an obvious way to loop through data to create pages?

Context: I’m mostly familiar with how 11ty lets you create files from data (Pagination — Eleventy), and I’m looking for something similar with hyperctl.

What I’ve tried (very hacky!!)

I’m working on reading and creating pages from data returned from a database (a very simple CMS setup using PocketBase).

sample data:

{
  "items": [
    {
      "collectionId": "pbc_38771238272",
      "collectionName": "blog",
      "content": "<h1>This is a test</h1>\r\n<p>Hello there!</p>",
      "created": "2025-07-09 08:25:15.844Z",
      "description": "Testing testing",
      "draft": false,
      "id": "test1",
      "json": null,
      "slug": "blog-test-1",
      "title": "Test 1",
      "updated": "2025-07-09 08:25:38.817Z"
    }
  ],
  "page": 1,
  "perPage": 1000,
  "totalItems": 1,
  "totalPages": 1,
  "namespace": "blog"
}

When it comes to creating content from this, I found a VERY hacky way…

  1. Set up a “Blog Post” content type
  2. Write the fetched JSON data to a file at data/blog.json (using curl and jq)
  3. Read the JSON with jq, feeding a hyperctl new page --content-type "Blog Post" command to bash to write files to content/blog/**/index.md (see the scary solution below; I haven’t even tested whether it works with more than one object, so it probably needs reworking once I set up more posts)
  4. From there, the hyper build or server commands handle creating all the pages

Scary bash code:

Copy at your own risk …

bash <(jq -r '.items[] | "hyperctl new page --title \(.title|@sh) --content-type \"Blog Post\" -d \(.description|@sh) --content \(.content|@sh)"' ./data/blog.json)

My first thought is, of course, Feeds & Feed Pages

But from what I can see there, that just creates a ‘feed’ section in a page based on content files, not from data / data files… correct?

Some context here would be helpful… are you looking for a repeatable process to convert some external data source (e.g. a data export from a bespoke CMS), or are you trying to perform a one-time migration? If you’re performing a one-time migration, the following information may be helpful…

In terms of the resulting data structure, pages in the HyperTemplates CMS follow a single rule:

A page is a directory containing an index file in Markdown (index.md), YAML (index.yaml), or JSON (index.json) format.

I’m not too familiar with PocketBase beyond knowing that it is a wrapper around SQLite, so my first thought would be to figure out a better way of exporting the data from the SQLite database using the built-in JSON functions.
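To sketch what that could look like: SQLite’s built-in json_object and json_group_array functions can emit the export directly. The table and column names below are assumptions for illustration (PocketBase’s real schema will differ), and the demo builds a throwaway database so the query has something to read — in practice you’d point sqlite3 at PocketBase’s own database file instead.

```shell
# Demo setup: a throwaway database standing in for PocketBase's SQLite db.
# Table and column names here are assumptions.
sqlite3 demo.db <<'SQL'
CREATE TABLE blog (slug TEXT, title TEXT, description TEXT, content TEXT);
INSERT INTO blog VALUES ('blog-test-1', 'Test 1', 'Testing testing', '<h1>This is a test</h1>');
SQL

# Export every row as a single JSON array using SQLite's JSON functions.
sqlite3 demo.db "SELECT json_group_array(json_object(
  'slug', slug, 'title', title, 'description', description, 'content', content
)) FROM blog;" > export.json
```

Note that this produces a plain JSON array (no wrapping object), so you’d use .[] rather than .items[] as the jq path to the page objects.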

But if you already have an export and just want a quick and dirty solution for parsing the files into a format that will work with HyperTemplates, something like the following could work:

content_dir=content; json_pages='.items[]'; json_slug=.slug; cat export.json | jq -c "$json_pages" | while read -r page; do slug=$(echo "$page" | jq -r "$json_slug"); mkdir -p "$content_dir/$slug"; echo "$page" > "$content_dir/$slug/index.json"; done

Here’s the same thing in a format that will be easier to read but maybe harder to copy & paste:

content_dir=content # configure the content directory
json_pages='.items[]' # configure the jq path to your page objects array
json_slug=.slug # configure the page object property to use for the page path
cat export.json | jq -c "$json_pages" | while read -r page; do
    # loop over the page objects array
    slug=$(echo "$page" | jq -r "$json_slug") # set the $slug variable
    mkdir -p "$content_dir/$slug" # create the page directory and any parent directories
    echo "$page" > "$content_dir/$slug/index.json" # write the page object to the page index file
done

I tested this command on macOS (using bash as my shell with jq installed) with an export.json file containing a modified copy of your sample data:

{
  "items": [
    {
      "collectionId": "pbc_38771238272",
      "collectionName": "blog",
      "content": "<h1>This is a test</h1>\r\n<p>Hello there!</p>",
      "created": "2025-07-09 08:25:15.844Z",
      "description": "Testing testing",
      "draft": false,
      "id": "test1",
      "json": null,
      "slug": "blog-test-1",
      "title": "Test 1",
      "updated": "2025-07-09 08:25:38.817Z"
    },
    {
      "collectionId": "pbc_38771238273",
      "collectionName": "blog",
      "content": "<h1>This is another test</h1>\r\n<p>Hello there!</p>",
      "created": "2025-07-09 08:25:15.844Z",
      "description": "Testing testing",
      "draft": false,
      "id": "test2",
      "json": null,
      "slug": "blog-test-2",
      "title": "Test 2",
      "updated": "2025-07-09 08:35:38.817Z"
    }
  ],
  "page": 1,
  "perPage": 1000,
  "totalItems": 2,
  "totalPages": 1,
  "namespace": "blog"
}

And I get the following file structure as an output:

content/
    blog-test-1/
        index.json
    blog-test-2/
        index.json
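And since a page index can also be Markdown, the same loop can emit index.md files with front matter instead. A rough sketch only — I haven’t checked which front matter keys HyperTemplates expects, and it assumes titles and descriptions contain no YAML-unfriendly characters (the inline export.json here is a cut-down stand-in for the sample data):

```shell
# Stand-in input with the same shape as the sample export above.
cat > export.json <<'JSON'
{"items":[{"slug":"blog-test-1","title":"Test 1","description":"Testing testing","content":"<h1>This is a test</h1>"}]}
JSON

jq -c '.items[]' export.json | while read -r page; do
    slug=$(echo "$page" | jq -r '.slug')
    mkdir -p "content/$slug"
    {
        echo '---'
        echo "$page" | jq -r '"title: \(.title)\ndescription: \(.description)"'
        echo '---'
        echo "$page" | jq -r '.content' # page body: the raw HTML content
    } > "content/$slug/index.md"
done
```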

I hope this helps! :blush:


Sorry I missed this!

are you looking for a repeatable process to convert some external data source (e.g. a data export from a bespoke CMS)…?

Yes, this is what I mean. I understand each CMS will present data differently, but let’s say I have a list of blog posts I can query from a CMS, and that CMS also has linked authors connected to each post. I want to dynamically create (1) the blog articles as pages and (2) a landing page for each author, with some subset of their related posts linked from their author landing page. Right now I’m struggling to see how this could be done without some kind of scripting, but it would be super helpful!

So this is really more of a feature request, to have some ability like this: Pagination — Eleventy, or like this discussion related to Hugo: ( Create pages based on data files - #3 by toledox82 - support - HUGO )
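For what it’s worth, until something like that exists, the author case can be scripted with the same jq-loop pattern. A rough sketch, assuming each post object carries an author field (which the sample data above doesn’t have — the inline export.json is invented for illustration):

```shell
# Stand-in export with an assumed "author" field on each post.
cat > export.json <<'JSON'
{"items":[
  {"slug":"blog-test-1","title":"Test 1","author":"alice"},
  {"slug":"blog-test-2","title":"Test 2","author":"alice"},
  {"slug":"blog-test-3","title":"Test 3","author":"bob"}
]}
JSON

# (1) one page per post, as before
jq -c '.items[]' export.json | while read -r page; do
    slug=$(echo "$page" | jq -r '.slug')
    mkdir -p "content/blog/$slug"
    echo "$page" > "content/blog/$slug/index.json"
done

# (2) one landing page per author, listing that author's posts
jq -c '.items | group_by(.author)[] | {author: .[0].author, posts: map({slug, title})}' export.json \
| while read -r author_page; do
    name=$(echo "$author_page" | jq -r '.author')
    mkdir -p "content/authors/$name"
    echo "$author_page" > "content/authors/$name/index.json"
done
```

How those author index files actually get rendered would still depend on a content type / template on the HyperTemplates side, so this only covers generating the page files — proper support for this in the tool itself would definitely be nicer.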