Using Notion and Next.js ISR to sync content across platforms

  • Reading Time
    15 minutes
  • Tags
    • react
    • dev rel

    Publishing content across multiple platforms offers a number of advantages for content creators. Third-party platforms provide authors a wider reach and access to a dedicated user base. Publishing content on a personal website gives the author complete creative control over their content and its presentation.

    Keeping content synchronized across platforms, however, can be laborious, time-consuming, and prone to human error. It requires careful attention to detail, an efficient workflow, and a commitment to maintaining consistency and accuracy across all published platforms.

    This article will demonstrate an optimized workflow for creating and publishing synchronized blog content across the web. For additional context, the final solution is available on GitHub and deployed on Vercel.

    The approach primarily uses Notion and the Notion API to create a single source of truth for the content. It also integrates the Incremental Static Regeneration feature into a Next.js website to ensure effortless synchronization of each article across the platforms.

    Creating a blog database with Notion

    Notion is a powerful productivity tool that caters to both individual users and large enterprise teams. This article uses Notion to create a small database of technical blog posts, along with a custom integration that will allow our Next.js website to retrieve the content from a personal workspace.

    Setting up the database

    To get started, create a new page in your Notion workspace and select the Table template. With the table created, selecting the “+ New Database” option from the Data Source menu creates a blank table on the page. Each row on the table will contain the following information:

    Column       Description
    Published    Controls whether or not the page displays on the personal website
    Page         A link to a subpage in Notion containing the article’s content
    Canonical    The preferred URL of a web page for search engine rankings
    Date         The date the article was published
    Tags         An array of tags that can be used to filter the articles on the personal site
    Abstract     A short description of the article. Useful for previews and SEO
    Slug         A unique identifier for each post. Used for the URL routing on the website

    The “Published” column allows the Next.js website to filter out any unfinished articles. It also allows a grace period for the article to be published on a third-party platform before it goes live on the personal website.

    The “Page” column links to a subpage in Notion with the respective article’s content. We’ll use this subpage later to retrieve the content blocks for each article in the Next.js website.

    The “Canonical” column helps specify the preferred URL of the content when multiple versions exist across various platforms. Where the Notion article can be considered the source of truth for the content, the canonical URL is considered the source of truth for search engine rankings.

    Published    Page                                 Canonical
    [ ]          Using Notion and ISR to synchro…
    [x]          Building an Accessible Menubar…      https://blog.logrocket.com/...

    The remaining columns contain frontmatter for each article:

    • Date — Allows the articles to be sorted into chronological order
    • Tags — Can be used to filter the articles by a specific category
    • Abstract — Provides a short description of the article to pique the user’s interest
    • Slug — A unique identifier for retrieving the post’s content
    Date          Tags             Abstract                                      Slug
    04/01/2021    react, a11y      How to create an accessible menubar…         accessible-menubar
    09/27/2021    react, dev rel   Synchronize content across multiple plat…    synchronize-content

    Setting up the custom integration

    Let’s now create the integration that will allow the Next.js website to make authenticated requests to the database in the workspace.

    In Notion, open the My Integrations page and select “Create New Integration.” Since this integration will be private, choose “Internal Integration.”

    Give the integration a name and associate it with your workspace. Then, select “Read Content” from the Content Capabilities and check the “No User Information” radio button in User Capabilities.

    You can leave all Comment Capabilities unchecked. This will permit the website to retrieve the content without assigning any unnecessary permissions.

    You should end up with something like the following:

    The settings screen for a custom Notion integration

    Integrating Next.js with Notion

    Previously, publishing articles on my personal website was a manual, error-prone task that involved copying, pasting, and refining content into separate markdown files. This process was tedious and made it difficult to maintain consistency across the platforms.

    Fortunately, with the help of the Notion API, this task can now be automated!

    Let’s configure a Next.js project that retrieves all of the content from the database we created earlier. To accomplish this, we’ll create a custom client to fetch the posts and use our custom integration from Notion to authenticate the requests.

    Configuring environment variables

    There are two environment variables required to authenticate and retrieve the database content.

    The first is the Internal Integration Token, which can be found in the “Secrets” menu after creating the integration:

    An example Internal Integration Token for a custom Notion integration

    The second is the Database ID. You can find it in the Notion URL for the database, between the workspace name and the View ID.
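
    The database URL follows a predictable structure. As a rough illustration (the segments below are placeholders), the Database ID is the 32-character segment that sits just before the ?v= query parameter:

```
https://www.notion.so/{workspace}/{database_id}?v={view_id}
```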

    Let’s add these variables to the Next.js application’s local environment and to the Vercel project dashboard. Paste them into the project’s local environment file (typically .env.local in a Next.js project), using your own values in place of the dummy values:
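
    A sketch of that file. The variable names NOTION_TOKEN and NOTION_DATABASE_ID are assumptions, and the values below are dummies:

```
# .env.local: replace the dummy values with your own secrets
NOTION_TOKEN=secret_xxxxxxxxxxxxxxxxxxxxxxxxxxxx
NOTION_DATABASE_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```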

    The Environment Variables section in the Settings menu on the Vercel project dashboard should look something like the image below:

    The Environment Variables section in the Settings menu on the Vercel project dashboard

    Note that the token and database ID values presented here are invalid and for demonstration purposes only. You should never share these values outside of your Vercel project dashboard and environment files.

    Fetching data from Notion

    With the Notion integration successfully configured, the next step is to create a client that fetches the posts from the Notion database and stores the result in Markdown files.

    To achieve this, we’ll start by creating a new client instance using the official Notion JavaScript SDK. We’ll also use the integration token environment variable to authenticate the requests. Add the following code to the client file:
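
    Here is a minimal sketch of that client, assuming the official @notionhq/client package and a NOTION_TOKEN environment variable (the lib/notion.js path is hypothetical):

```js
// lib/notion.js (hypothetical path)
import { Client } from "@notionhq/client";

// Authenticate every request with the internal integration token
export const notion = new Client({ auth: process.env.NOTION_TOKEN });
```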

    With the authenticated client, we can now make requests to retrieve the content from the Notion database, using the database ID environment variable in the same file:
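
    A sketch of that request, assuming a NOTION_DATABASE_ID variable; getBlogPosts is a hypothetical name:

```js
// lib/notion.js (continued)
export async function getBlogPosts(databaseId = process.env.NOTION_DATABASE_ID) {
  // Query every row (page) in the blog database
  const response = await notion.databases.query({ database_id: databaseId });
  return response.results;
}
```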

    In the code above, we declared an asynchronous function that accepts the database ID as its single argument. We used the await keyword to pause the function’s execution until the query promise resolves and returns the collection of blog posts from the requested database.

    Creating Markdown files

    If we inspect the query results, each chunk of content in Notion is parsed as a block, which is an object containing the raw content and metadata for each chunk. See the following example content block from Notion:
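
    A trimmed, illustrative example of what such a block can look like (the ID is made up, and the exact shape depends on the Notion API version):

```json
{
  "object": "block",
  "id": "59833787-2cf9-4fdf-8782-e53db20768a5",
  "type": "paragraph",
  "has_children": false,
  "paragraph": {
    "rich_text": [
      {
        "type": "text",
        "text": { "content": "Publishing content across multiple platforms..." },
        "plain_text": "Publishing content across multiple platforms..."
      }
    ]
  }
}
```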

    Although it’s possible to serialize each block manually, we’ll use a third-party wrapper — notion-to-md — for brevity. The package name is somewhat literal, in that it will take a collection of Notion blocks and convert them into a single Markdown file.

    To convert the Notion content blocks to a Markdown file, we’ll create a NotionToMarkdown client by passing the Notion client we created earlier to the NotionToMarkdown constructor from the notion-to-md package.

    Add the following code to the same file:
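
    A minimal sketch, assuming the notion-to-md package (notionClient is its documented constructor option):

```js
// lib/notion.js (continued)
import { NotionToMarkdown } from "notion-to-md";

// Reuse the authenticated Notion client for the block requests
export const n2m = new NotionToMarkdown({ notionClient: notion });
```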

    We can then define a function that takes the results of the database query and generates a Markdown file for each result.

    Initially, the NotionToMarkdown client will convert the Notion content blocks into Markdown blocks. We’ll then reuse the client to format those blocks into a Markdown string.

    Finally, we can create a Markdown file for each post, using the “Slug” column property as the filename and populating it with the newly created Markdown string:

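    A sketch of that step; the createMarkdownFiles name, the posts/ output directory, and the Slug property shape are assumptions:

```js
// lib/notion.js (continued)
import fs from "fs";
import path from "path";

export async function createMarkdownFiles(posts) {
  const outputDir = path.join(process.cwd(), "posts");
  if (!fs.existsSync(outputDir)) fs.mkdirSync(outputDir);

  for (const post of posts) {
    // Convert the page's Notion blocks into Markdown blocks...
    const mdBlocks = await n2m.pageToMarkdown(post.id);
    // ...then flatten those blocks into a single Markdown string
    // (newer notion-to-md releases return an object whose `parent` field holds the string)
    const markdown = n2m.toMarkdownString(mdBlocks);
    // Use the "Slug" column property as the filename
    const slug = post.properties.Slug.rich_text[0].plain_text;
    fs.writeFileSync(path.join(outputDir, `${slug}.md`), markdown);
  }
}
```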

    Filtering unpublished articles

    To prevent any unpublished articles from appearing on the site, and to avoid unnecessary work when generating the Markdown files, we’ll also create a utility function that uses the “Published” column property from earlier to remove any unwanted results.

    We can consider an article published if its “Published” checkbox is checked, which we can determine by checking whether the post’s checkbox property is true:
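
    A sketch, assuming the checkbox lives on a property named Published; filterPosts is a hypothetical name:

```js
// lib/notion.js (continued)
export function filterPosts(posts) {
  // Keep only the rows whose "Published" checkbox is ticked
  return posts.filter((post) => post.properties.Published.checkbox === true);
}
```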

    Creating dynamic routes

    With the published blog posts retrieved from Notion, formatted into Markdown, and serialized in their respective files, let’s create a dynamic route for each blog post to be pre-rendered at build time.

    We’ll start by exporting a utility function that returns a collection of unique identifiers for each post, derived from the filenames in the directory of generated Markdown files.
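
    A sketch of that helper, assuming the Markdown files live in a posts/ directory; getPostIds is a hypothetical name:

```js
// lib/posts.js (hypothetical path)
import fs from "fs";
import path from "path";

export function getPostIds() {
  const postsDirectory = path.join(process.cwd(), "posts");
  // Strip the .md extension so each ID matches the post's slug
  return fs.readdirSync(postsDirectory).map((filename) => filename.replace(/\.md$/, ""));
}
```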

    Finally, we’ll use the getStaticPaths function in the dynamic route’s page file to execute our fetching, filtering, Markdown-generation, and ID functions. This generates the Markdown files and returns a collection of paths that will enable Next.js to statically pre-render the respective routes.

    It’s important that we call these methods inside the getStaticPaths function: getStaticPaths will run once during the production build, whereas getStaticProps will run once for each dynamic route:
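
    A sketch of that route, assuming a pages/posts/[slug].js file and the hypothetical helpers from the previous sections:

```js
// pages/posts/[slug].js (hypothetical path)
import { getBlogPosts, filterPosts, createMarkdownFiles } from "../../lib/notion";
import { getPostIds } from "../../lib/posts";

export async function getStaticPaths() {
  // Fetch, filter, and serialize the published posts once per production build
  const posts = filterPosts(await getBlogPosts());
  await createMarkdownFiles(posts);

  return {
    paths: getPostIds().map((slug) => ({ params: { slug } })),
    fallback: false,
  };
}
```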

    Rendering Markdown content

    To parse the Markdown content from each file into HTML, we’ll first create a function in the same file that uses the IDs generated in the previous step to load the respective Markdown content as a string:
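
    A sketch of that loader, reading the file from disk with Node’s fs module; getPostContent is a hypothetical name, and the original may have used a small npm helper for this step:

```js
// lib/posts.js (continued)
export function getPostContent(slug) {
  // Load the generated Markdown file for the requested slug as a string
  return fs.readFileSync(path.join(process.cwd(), "posts", `${slug}.md`), "utf8");
}
```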

    Finally, in the dynamic route’s page file, we’ll use the ReactMarkdown component from the react-markdown library and the remark-gfm plugin to render the Markdown string as HTML on the post’s page 💥:
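
    A sketch of the page itself, assuming react-markdown and remark-gfm; the markdown prop name is an assumption:

```js
// pages/posts/[slug].js (continued)
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import { getPostContent } from "../../lib/posts";

export async function getStaticProps({ params }) {
  // Forward the post's Markdown string to the page as a prop
  return { props: { markdown: getPostContent(params.slug) } };
}

export default function Post({ markdown }) {
  return <ReactMarkdown remarkPlugins={[remarkGfm]}>{markdown}</ReactMarkdown>;
}
```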

    Creating a blog page index

    Now that each article has its own individual page in the Next.js project, we’ll present a list of them on the homepage. Each list item will display the article’s title, frontmatter, and a link to the corresponding page on the website.

    We’ll start by creating a utility function that sorts the posts chronologically:
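
    A sketch, assuming the “Date” column is a Notion date property; sortPosts is a hypothetical name:

```js
// lib/notion.js (continued)
export function sortPosts(posts) {
  // Newest first, based on the "Date" column property
  return [...posts].sort(
    (a, b) => new Date(b.properties.Date.date.start) - new Date(a.properties.Date.date.start)
  );
}
```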

    We then use the getStaticProps function in the homepage file to fetch the data, filter any unpublished articles, sort the published posts chronologically, and return the result as a prop for the page component.

    Add the following to your homepage file:
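
    A sketch of that getStaticProps, reusing the hypothetical helpers from above:

```js
// pages/index.js (hypothetical path)
import { getBlogPosts, filterPosts, sortPosts } from "../lib/notion";

export async function getStaticProps() {
  // Fetch, filter, and sort the published posts for the homepage list
  const posts = sortPosts(filterPosts(await getBlogPosts()));
  return { props: { posts } };
}
```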

    In the same file, we map over the collection of posts to render them on the homepage. For each post, we display the title, the abstract as a description, the publish date, relevant tags, and a link to the dynamic route that we created earlier:
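
    A simplified sketch of that list; the property shapes (title on the Page column, rich text on Abstract and Slug, multi-select on Tags) are assumptions based on the table set up earlier:

```js
// pages/index.js (continued)
import Link from "next/link";

export default function Home({ posts }) {
  return (
    <ul>
      {posts.map((post) => {
        const { Page, Abstract, Date: date, Tags, Slug } = post.properties;
        const slug = Slug.rich_text[0].plain_text;
        return (
          <li key={post.id}>
            {/* In newer Next.js versions the nested <a> is no longer needed */}
            <Link href={`/posts/${slug}`}>
              <a>{Page.title[0].plain_text}</a>
            </Link>
            <p>{Abstract.rich_text[0].plain_text}</p>
            <time>{date.date.start}</time>
            <p>{Tags.multi_select.map((tag) => tag.name).join(", ")}</p>
          </li>
        );
      })}
    </ul>
  );
}
```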

    Enabling ISR to synchronize content over time

    Currently, the Next.js project is statically generated at build time, allowing the resulting content to be cached. This approach, known as static site generation (SSG), offers excellent performance and SEO.

    However, updating the fetched content requires a new build and site deployment. This can be problematic for syncing our content with Notion, as the site will continue to serve cached versions of the articles until a new build is deployed, regardless of whether or not the content has been updated.

    Server-side rendering (SSR) potentially solves this problem, but it can impact performance as each page is now rendered on every request. This leads to an increase in server load and longer page load times, diminishing the overall user experience.

    A relatively newer method in Next.js called Incremental Static Regeneration (ISR) is a perfect compromise between the two, as it allows static content to be updated over time without requiring a complete rebuild of the website.

    When users access the page within the revalidation window, they are served a cached version, regardless of whether or not the content has since been updated. The first user to access the page after the revalidation window has expired will also be served the cached version.

    At this point, Next.js will re-fetch and cache the latest data on the fly without rebuilding the entire site! If the regeneration succeeds, the next user will be served the updated content.

    We’ll start by exporting a constant in a shared file to represent a revalidation interval of one minute:
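
    A sketch; REVALIDATE_INTERVAL and the file path are hypothetical, and Next.js expects the value in seconds:

```js
// lib/constants.js (hypothetical path)
// Revalidate at most once every 60 seconds
export const REVALIDATE_INTERVAL = 60;
```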

    To enable ISR for the homepage and the dynamic routes, we need to return the desired revalidation period as the revalidate key on the object returned from getStaticProps.

    Add the following code to the dynamic route’s page file:
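
    A sketch of the updated getStaticProps for the dynamic route:

```js
// pages/posts/[slug].js (continued)
import { REVALIDATE_INTERVAL } from "../../lib/constants";

export async function getStaticProps({ params }) {
  return {
    props: { markdown: getPostContent(params.slug) },
    // Re-generate this page at most once every 60 seconds
    revalidate: REVALIDATE_INTERVAL,
  };
}
```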

    And finally, add the following to the homepage file:
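
    A sketch of the updated homepage getStaticProps:

```js
// pages/index.js (continued)
import { REVALIDATE_INTERVAL } from "../lib/constants";

export async function getStaticProps() {
  const posts = sortPosts(filterPosts(await getBlogPosts()));
  return {
    props: { posts },
    // Re-generate the homepage at most once every 60 seconds
    revalidate: REVALIDATE_INTERVAL,
  };
}
```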

    With that, we’re done. The application will now periodically check to see if any content in Notion has been updated, and update any pages with changes without having to rebuild the entire site!

    Conclusion

    By using Incremental Static Regeneration with the Notion API, content creators can now confidently define a single source of truth that synchronizes content across multiple online platforms.

    Whenever new articles are created or existing articles are updated, the latest version is immediately available to third-party platforms. The author’s personal website will also update the content on a per-page basis once the revalidation period elapses.

    This approach will continue to benefit the author as their collection of articles scales without significantly compromising the overall build times for their personal website.

    Ultimately, this solution provides an optimized workflow for synchronizing content while also supporting long-term content creation goals.
