
There is a particular kind of frustration that every developer who has worked on a content site knows well. Months have been invested in crafting the architecture; everything is functioning as expected—routing is working properly, code is clean, and builds are fast. One day someone does an SEO analysis and reports back that you have eighty pages without meta descriptions, thirty pages with duplicate title tags, and your sitemap stopped being updated two months ago because of changes in the content pipeline that went unnoticed. None of it is anybody’s fault exactly; it is just what happens when SEO lives outside the codebase. I came across the Seozilla GitHub project while looking for a concrete example of how to do this properly in Next.js, and it is one of the cleaner implementations I have seen for connecting a Next.js blog to automated metadata generation through Seozilla.
Once you walk through it, the whole concept is very straightforward. Your content already contains a title, a description, an author, a publish date, a featured image, and a URL slug. Each of these maps directly to an SEO element. If you write that mapping logic once and connect it to your data pipeline, the SEO generates itself every time a page builds. You stop thinking about it as a separate task because it stops being one.
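To make that mapping concrete, here is a minimal sketch of it in one function. The field names and the mapToSeo helper are generic illustrations, not tied to any particular CMS or to Seozilla's actual API.

```typescript
// The mapping in one place: content fields on the left, SEO surface
// on the right. All names here are illustrative.
type Content = {
  title: string;
  description: string;
  author: string;
  publishedAt: string;
  image: string;
  slug: string;
};

function mapToSeo(c: Content, baseUrl: string) {
  return {
    title: c.title,                            // becomes the <title> tag
    description: c.description,                // becomes the meta description
    canonical: `${baseUrl}/blog/${c.slug}`,    // canonical URL from the slug
    openGraph: { image: c.image },             // og:image from the featured image
    jsonLd: {                                  // Article structured data
      "@type": "Article",
      author: { "@type": "Person", name: c.author },
      datePublished: c.publishedAt,
    },
  };
}
```

Everything downstream is just consumers of this one function, which is what keeps the metadata from drifting.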
The Problem With Doing SEO Manually at Scale
I want to be specific about where manual SEO breaks down, because people often underestimate how quickly it becomes a problem. When you have five pages, manually writing meta tags takes twenty minutes and feels totally reasonable. With fifty pages, the monotony sets in, but everything is manageable. At two hundred pages, it is not. Quality slips, but the slippage only surfaces during an audit, when you discover that roughly thirty percent of your pages lack metadata entirely, had metadata copy-pasted from other pages, or were accurate when written but are outdated now.
That last one is the sneaky problem. Content gets updated; headings get rewritten; the focus of an article shifts slightly during an edit. The meta description, which was accurate when the post launched, now describes a slightly different article than the one that exists. Google sees the mismatch; users who click through from search results see a page that does not quite match what the snippet promised. Neither of these things helps your rankings or your click-through rate.
When the meta description is generated dynamically from the current post excerpt, this problem cannot happen. The metadata always reflects what is actually on the page because it is derived from the same source. That tight coupling between content and metadata is worth more than it might sound.
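A tiny sketch of what "derived from the same source" means in practice: a description built from the post body itself can never describe a different article. The 160-character budget is a common convention for meta descriptions, not a hard rule.

```typescript
// Derive the meta description from the content itself, so it always
// reflects what is actually on the page. Illustrative helper, not a
// Seozilla API.
function deriveDescription(body: string, maxLen = 160): string {
  const text = body.replace(/\s+/g, " ").trim(); // collapse whitespace
  return text.length <= maxLen
    ? text
    : text.slice(0, maxLen - 1).trimEnd() + "…"; // truncate with ellipsis
}
```

Because this runs at build time from the current excerpt, editing the post automatically updates the snippet.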
How the Next.js App Router Makes This Elegant
Next.js added the generateMetadata function to the App Router, and it is genuinely well designed for this use case. You export it from a page file alongside your default component export; it receives the same params and search params your component gets; it can do async data fetching; and whatever object it returns gets transformed into the full head metadata for that page at render time.
What this means practically is that your metadata generation is colocated with your page logic. The same fetch that gets the blog post data for the page component can feed into the metadata generation. You are not maintaining two separate systems; you are writing one data fetch and two consumers of that data. The component renders the page; the metadata function produces the head. Both stay in sync automatically because they are reading from the same source.
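Here is a sketch of that colocation pattern. The getPost helper and in-memory store are stand-ins for a real data fetch, the Metadata type is defined locally so the snippet stands alone, and the generateMetadata signature is simplified from the ({ params }) shape Next.js actually passes.

```typescript
// One data source, two consumers: the page component and the
// metadata function both read from getPost.
type Post = { slug: string; title: string; excerpt: string };
type Metadata = { title: string; description: string };

// Hypothetical in-memory store standing in for a CMS or filesystem.
const posts: Post[] = [
  { slug: "hello-world", title: "Hello World", excerpt: "A first post." },
];

async function getPost(slug: string): Promise<Post | undefined> {
  return posts.find((p) => p.slug === slug);
}

// In a real app this lives in app/blog/[slug]/page.tsx next to the
// page component, which calls the same getPost.
export async function generateMetadata(params: { slug: string }): Promise<Metadata> {
  const post = await getPost(params.slug);
  return {
    title: post?.title ?? "Not found",
    description: post?.excerpt ?? "",
  };
}
```

Because both consumers read from the same fetch, there is no second system to keep in sync.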
This architecture is exactly what makes the Seozilla integration sensible. Seozilla takes your content data and applies your SEO configuration rules to produce a complete metadata object. Plug that into generateMetadata and you have automated SEO that is structurally impossible to get out of sync with your content.
Walking Through the Project Structure
The DKTK-Tech example is set up as a blog, which is the most instructive content type for understanding how this all fits together. Each post is a dynamic route; the slug comes from the URL; the data gets fetched server-side; the metadata is generated from that data. Follow any single post through the codebase and you can see every layer of the pipeline clearly.
The Seozilla configuration file is worth reading carefully before looking at anything else in the project. That’s where all the global SEO rules live: the base URL, which forms the basis for canonical URLs; the title template, which dictates how post titles and the blog name are combined; the default Open Graph image, which is used when a post has no featured image; and the Twitter card type. Getting these settings right is roughly 80 percent of the process; the rest is applying them page by page.
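As a rough sketch of what such a config captures, here is a plain object with those four settings and a helper that applies the title template. Every key name here is an assumption for illustration; the real Seozilla config may use different names.

```typescript
// Hypothetical global SEO config: the four settings described above.
const seoConfig = {
  baseUrl: "https://example.com",       // basis for canonical URLs
  titleTemplate: "%s | My Blog",        // how post title + site name combine
  defaultOgImage: "/og-default.png",    // fallback when a post has no image
  twitterCard: "summary_large_image",   // Twitter card type
};

// Applying the title template to a page title.
function applyTitleTemplate(template: string, pageTitle: string): string {
  return template.replace("%s", pageTitle);
}
```

Everything page-specific downstream is just this config plus per-page data.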
Once the configuration is in place, the pattern for each page type is consistent. The GitHub Next.js repo shows how the generateMetadata function in the post page file calls a helper that passes the post data through Seozilla and returns the full metadata object. If you are adding a new page type to your own project, say a category page or an author profile page, you follow the same pattern: fetch the data for that page, pass it through Seozilla with the appropriate schema type, and return the result.
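A hedged sketch of that repeated pattern, with an illustrative buildSeo helper standing in for the project's actual Seozilla wrapper, applied to a hypothetical author page:

```typescript
// Illustrative per-page pattern: one helper turns page data into a
// metadata object. buildSeo is not Seozilla's real API.
type Author = { slug: string; name: string; bio: string };

function buildSeo(input: { title: string; description: string; path: string }) {
  const baseUrl = "https://example.com"; // assumed base URL
  return {
    title: input.title,
    description: input.description,
    alternates: { canonical: `${baseUrl}${input.path}` },
  };
}

// The same pattern extended to a new page type, e.g. an author page:
// fetch the author, map their fields, return the result.
async function generateAuthorMetadata(author: Author) {
  return buildSeo({
    title: author.name,
    description: author.bio,
    path: `/authors/${author.slug}`,
  });
}
```

Adding a category page would look identical apart from the fields being mapped.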
Sitemap and Robots: the Unglamorous but Important Parts
Two things that get much less attention than metadata but matter quite a bit for how well Google can crawl and understand your site: the sitemap and the robots.txt file. Both of these can be generated automatically in Next.js using files in the app directory, and both benefit from being connected to your content pipeline rather than maintained by hand.
The sitemap.ts file in the app directory exports a function that returns an array of sitemap entries. When that function fetches its data from the same source as your pages, the sitemap is always current. New posts appear automatically; deleted posts disappear. The priority and changeFrequency values can be set based on page type; blog posts might get a different priority than the homepage; category pages might update more frequently than individual posts. All of this logic lives in one file and applies consistently across the whole site.
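A minimal sketch of such a sitemap.ts, assuming a hypothetical getAllPosts helper as the shared content source. The entry shape mirrors what Next.js expects, but the type is declared locally so the snippet stands alone.

```typescript
// Sketch of app/sitemap.ts: entries derived from the same content
// source the pages use, so the sitemap is always current.
type SitemapEntry = {
  url: string;
  lastModified?: Date;
  changeFrequency?: "daily" | "weekly" | "monthly";
  priority?: number;
};

// Hypothetical stand-in for the real content fetch.
async function getAllPosts() {
  return [{ slug: "hello-world", updatedAt: new Date("2024-01-01") }];
}

export default async function sitemap(): Promise<SitemapEntry[]> {
  const baseUrl = "https://example.com"; // assumed base URL
  const posts = await getAllPosts();
  return [
    // Homepage gets a higher priority than individual posts.
    { url: baseUrl, changeFrequency: "weekly", priority: 1 },
    ...posts.map((p) => ({
      url: `${baseUrl}/blog/${p.slug}`,
      lastModified: p.updatedAt,
      changeFrequency: "monthly" as const,
      priority: 0.7,
    })),
  ];
}
```

Publishing or deleting a post changes the output of getAllPosts, and the sitemap follows automatically on the next build.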
The robots.txt equivalent in the App Router is a robots.ts file that works the same way. Define your rules once; they apply everywhere. This is less exciting than metadata automation, but it is the kind of thing that causes real problems when it goes wrong; a misconfigured robots.txt that accidentally blocks Googlebot from crawling your content section is not a mistake you want to discover after the fact.
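For completeness, a sketch of that robots.ts. The /admin/ path is an illustrative example of something you might disallow, and the type is declared locally rather than imported from next.

```typescript
// Sketch of app/robots.ts: rules defined once, applied everywhere.
type Robots = {
  rules: { userAgent: string; allow?: string; disallow?: string }[];
  sitemap?: string;
};

export default function robots(): Robots {
  return {
    rules: [
      // Allow all crawlers everywhere except a hypothetical admin area.
      { userAgent: "*", allow: "/", disallow: "/admin/" },
    ],
    sitemap: "https://example.com/sitemap.xml", // assumed base URL
  };
}
```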
Open Graph and Social Sharing: Worth Getting Right
A great deal of organic content distribution happens through social shares rather than search. Someone finds your content through a Google search, gets value from it, and shares it on LinkedIn, X, and so forth. Their connections see the post, and some of them click through. The quality of those click-throughs depends largely on whether you have set up the appropriate Open Graph tags: does the share show the proper title, description, and image?
Getting Open Graph wrong is surprisingly easy. The image dimensions matter; LinkedIn and Twitter have different requirements for optimal display. The description should be different from the meta description because the context is different; a search result description and a social share description serve different purposes. The URL in the og:url tag should be the canonical URL, not whatever URL the user happened to be on when they shared the page.
Seozilla handles all of this through configuration. You define your Open Graph image dimensions once. You specify whether to use the post-specific description or fall back to a generated one; the og:url is built from the canonical URL logic you have already defined. Every post gets correct, complete Open Graph tags without you having to think about it per post.
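The fallback logic described above can be sketched as follows. The config keys and buildOpenGraph helper are illustrative assumptions, not Seozilla's real API; 1200 by 630 is a commonly recommended Open Graph image size, not a requirement.

```typescript
// Illustrative Open Graph fallback logic: post-specific values when
// present, configured defaults otherwise; og:url always comes from
// the canonical URL, never from whatever URL was shared.
const ogConfig = {
  baseUrl: "https://example.com",  // assumed base URL
  defaultImage: "/og-default.png", // fallback image
  imageWidth: 1200,                // commonly recommended OG dimensions
  imageHeight: 630,
};

function buildOpenGraph(post: { slug: string; title: string; excerpt?: string; image?: string }) {
  return {
    title: post.title,
    description: post.excerpt ?? `Read "${post.title}" on the blog.`, // generated fallback
    url: `${ogConfig.baseUrl}/blog/${post.slug}`,                     // canonical, not shared URL
    images: [
      {
        url: post.image ?? ogConfig.defaultImage,
        width: ogConfig.imageWidth,
        height: ogConfig.imageHeight,
      },
    ],
  };
}
```

Every post passes through the same function, so a missing image or excerpt degrades to a sensible default instead of a broken share card.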
What to Do When You Start Your Own Implementation
The most common mistake people make when implementing automated SEO for the first time is trying to automate everything at once before understanding what each piece does. Start with the basics: get title tags and meta descriptions generating correctly for your main content type. Verify in browser dev tools that the output looks right. Then add canonical URLs. Then Open Graph. Then schema markup. Build the pipeline incrementally rather than trying to configure everything in one session.
The other thing I would say is to test your schema markup early and often using Google’s Rich Results Test tool. It’s free, it gives you instant feedback on whether your JSON-LD is valid, and it tells you which rich result types your pages are eligible for. Run it once after first connecting Seozilla’s schema generation to your site, and you’ll see right away whether any required fields are missing from your configuration.
Automation doesn’t take care of all of your SEO worries; you will still have to think about keyword usage, content quality, and links. It does, however, eliminate an entire class of technical SEO issues, and it guarantees that every page on your site meets at least a baseline of technical correctness without someone needing to audit it. If you want organic success, that baseline is non-negotiable.