How To Add SEO to A Next.js Project

Web frameworks like React and Next.js are incredible tools in modern web development. Being able to produce a complex web application in just a few days, complete with impressive features such as 3D capabilities, is a superpower that can't be ignored. While these frameworks are full of benefits, there are unsurprisingly some downsides to using them. One of those downsides is that SEO isn't as off-the-shelf as it is in other approaches. Luckily, it doesn't take much to add a dynamic SEO solution to any Next.js project.

Adding a Dynamic Sitemap

The dynamic nature of modern web applications built with Next.js requires a thoughtful solution. A sitemap is a file that lets search engines know which areas of your website matter and how they are connected. In a small application with only a few pages, it's possible to write or update this file manually. However, in a larger application where new content is added regularly, such as this blog, it would become quite cumbersome to keep the sitemap updated by hand. This is why we need to generate the sitemap programmatically.
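For reference, this is roughly the shape of the XML a sitemap serves; the URL and date below are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/blog/my-first-post</loc>
    <lastmod>2024-01-01T00:00:00.000Z</lastmod>
  </url>
</urlset>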

First, install next-sitemap:

npm i next-sitemap
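next-sitemap can also generate a static sitemap for your fixed routes at build time. That part is optional here, but if you want it, the package reads a next-sitemap.config.js at the project root. A minimal sketch, with a placeholder siteUrl:

/** @type {import('next-sitemap').IConfig} */
module.exports = {
  siteUrl: 'https://www.example.com', // placeholder, use your own domain
  generateRobotsTxt: false, // we'll handle robots ourselves in app/robots.ts below
  exclude: ['/server-sitemap.xml'], // keep the dynamic route out of the static sitemap
}

Add "postbuild": "next-sitemap" to the scripts in your package.json so the file is regenerated after every build.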

Next, in your Next.js app, create a new file at app/server-sitemap.xml/route.ts:

import { getAllPosts } from '@/actions/actions'
import { getServerSideSitemap } from 'next-sitemap'

export async function GET(request: Request) {
  // Source URLs from the CMS through a server action
  const urls = await getAllPosts()

  // Map each post to a sitemap entry
  const posts = urls?.map((post: any) => ({
    loc: `https://www.example.com/blog/${post?.slug}`,
    lastmod: post?.updatedAt.toISOString(),
  })) ?? []

  return getServerSideSitemap([
    {
      loc: 'https://www.example.com',
      lastmod: new Date().toISOString(),
      // changefreq
      // priority
    },
    ...posts,
  ])
}

Here, we have an async GET handler that grabs all of the posts available on the blog through a server action. Next, we map over the results and return an object with the URL of each blog post as well as the last time the page was modified, since search engines use freshness as a ranking signal. Finally, we return the base URL of the website and when it was last modified, followed by a spread of the mapped posts. You can swap blog posts for products or whatever content your site serves.
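The sitemap doesn't care where the posts come from. As a hypothetical sketch, getAllPosts could be a server action like the following, assuming a Prisma client exported from @/lib/prisma and a Post model with slug and updatedAt fields; substitute your own CMS or database query:

// actions/actions.ts
'use server'

import { prisma } from '@/lib/prisma' // assumed Prisma client instance

// Fetch only the fields the sitemap needs: each post's slug and last-modified date
export async function getAllPosts() {
  return prisma.post.findMany({
    select: { slug: true, updatedAt: true },
    orderBy: { updatedAt: 'desc' },
  })
}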

Robots

Search engines 'crawl' your website by sending robots that emulate a variety of devices to visit the important areas of your web application. The robots file is another important file; it tells these robots which pages on your website are not relevant to them and should be ignored. This usually includes things like admin dashboards or authentication areas. In our case, we can tell search engines they really only need to care about our blog posts by utilizing this file.

Let's make another file in the app directory called robots.ts:

import { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      disallow: '/admin',
    },
    sitemap: 'https://www.jessethedev.com/server-sitemap.xml',
  }
}

In this file we dictate what rules search engines should follow when crawling our website. For instance, the user agent is set to '*' because we want our website to be indexable by any search engine. Second, we allow the homepage to be crawled, which will lead the robots to each of our posts. With the disallow option we omit the admin page as well as its subpages, because the content there is not meant for search results. Lastly, we direct the robots to the sitemap for further investigation into the important areas of our website.
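With this file in place, Next.js serves a plain-text robots.txt at the root of the site that looks roughly like this:

User-Agent: *
Allow: /
Disallow: /admin

Sitemap: https://www.jessethedev.com/server-sitemap.xml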

Metadata

We're in the home stretch now; all we have left to do is fill in our metadata. In the root layout.tsx file, import the Metadata type from 'next' and add the following just before the RootLayout export:

export const metadata: Metadata = {
  metadataBase: new URL("https://www.jessethedev.com"),
  keywords: [
    "react", "reactjs", "react.js", "next", "next.js", "nextjs",
    "unity", "game development", "unity engine", "javascript",
    "python", "c#", "html", "html5", "css", "css3", "tailwind",
    "react-three", "react three", "react three fiber"
  ],
  title: {
    default: "JesseTheDev",
    template: "%s | JesseTheDev"
  },
  openGraph: {
    description: "Learn Software Development In A Fun and Realistic Way"
  }
};

For our metadata, we set the base URL in the metadataBase option, then add an array of keywords. These can be any topics you write about, so that relevant searches can surface your content. Next we set the title of the website: the default is used on pages that don't set their own title, while the template slots each page's own title into the site title shown in the browser tab. Finally, we add an openGraph entry with a general description of the website.
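To see the title template in action, an individual page only has to export its own title. Here's a sketch for a blog post page, assuming a route at app/blog/[slug]/page.tsx; in a real app the title would come from your data layer rather than the raw slug:

// app/blog/[slug]/page.tsx
import type { Metadata } from 'next'

export async function generateMetadata({
  params,
}: {
  params: { slug: string }
}): Promise<Metadata> {
  // The slug stands in for the post's real title here
  return { title: params.slug }
}

The browser tab for that page then reads something like "my-first-post | JesseTheDev".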

Conclusion

Adding dynamic SEO to your web application is an important step for many personal and professional projects. The ability to do this alone could land you freelance work, which is welcome in the current job market. Understanding how SEO works under the hood is a great way to propel your content to the front page of the internet. This is a solid start, but remember that SEO is a massive field in its own right and is constantly evolving. Keeping up with current developments and expanding on this foundation to suit your own needs is a surefire way to ensure your projects reach as many people as possible.