How I Rebuilt My Blog in 2025 Using Next.js and Vercel

March 28, 2025

Six years ago, I started learning programming. The first thing I built was a personal blog, because I'd heard from folks that a good programmer should have their own blog. Even though I only knew a tiny bit of HTML, CSS, and JavaScript, I still hit up Google to figure out how to whip up a site using some ready-made tools.

I messed around with a bunch of options and ended up picking Hexo + GitHub as my go-to combo. Hexo's got a big community, and people have cooked up tons of awesome themes for it. Plus, GitHub Pages is GitHub's free hosting service, so I could serve static pages without spending a dime.

Everything worked out pretty smoothly. Hexo and the theme came with solid documentation that walked me through tweaking stuff like the title, intro, footer, SEO, colors, and background images. I even wrote some Stylus code to fiddle with the UI details. It took me almost five days to get it all running, though, full disclosure, that includes the time I spent goofing off on video games :D. The domain name was later snapped up by a shady website because I forgot to renew it, but it was still an interesting experience.

Fast forward to today, and the tech stack we use in the frontend world has totally flipped -- classic, right? There are way more killer tools now compared to a few years back. Even though I've rebuilt the website a couple of times since then, let's just pretend those versions never happened.

So, yeah, the blog you're reading right now is my latest stable setup. I threw together a mix of frameworks and libraries to build it, and it might be cool to share what I used. I'm not sure if this site will be kicking around six years from now, but it'd be fun to look back on this moment if it is.

Server & Basic Frameworks

For my personal blog website, I've got a few must-haves:

  • Basic SEO and Markdown support
  • Fast & easy-to-use pipeline (who wants to deploy a site manually in 2025?)
  • FREE!

After poking around and comparing options, I went with Next.js and Vercel. It's probably the most yawn-worthy tech stack right now. Since I'm lazy, I just grabbed the Portfolio Starter Kit template straight from Vercel.

Vercel's this super popular cloud platform that plays nice with GitHub. I can whip up a site and push it live with one click. For a pure frontend project, it's pretty much the sweetest spot to park it. I mean, nobody wants to mess with AWS and then cry over a fat bill at the end of the month after all those fancy setups.

[Image: AWS bill meme]

Next.js is that full-stack framework everyone in the frontend crowd keeps yapping about -- mostly to complain, though. The template built on it comes with some solid features that save me a ton of setup time:

  • MDX support
  • SEO optimization
  • Built-in RSS feed support
  • Tailwind v4

MDX runs on next-mdx-remote, and it's like magic. It can pull MDX files from anywhere -- another repo, a database, whatever. The template uses React Server Components by default. You can dig into the library's source code to see how it works, but basically, it treats MDX like data: compiles it on the server, sends the result to the frontend, and slaps on custom render components during client hydration.

With this setup, my articles aren't just a wall of snooze-worthy text. I can use it to build some slick interactive components to break down tricky concepts. Throw in cool tech like Canvas, WebGL, or WebAssembly, and the sky's the limit.

  • Mermaid. Lets me whip up flowcharts, diagrams, and visualizations with simple text-based syntax.
  • KaTeX. Handles rendering math expressions and equations on the web with LaTeX syntax.
  • Giscus. A comment system based on GitHub Discussions.
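To make "slaps on custom render components" a bit more concrete, here's a toy sketch in plain TypeScript -- not the template's actual code. A real MDX setup would pass React components to next-mdx-remote, but the idea is the same: intercept an element type (here, headings) and render it your own way. The function names are made up for illustration.

```typescript
// Toy version of a custom MDX heading renderer. The real one returns JSX;
// this one returns an HTML string so the idea is easy to see.

// Turn heading text into a URL-friendly slug, e.g. "Server & Basic" -> "server-basic".
function slugify(text: string): string {
  return text
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, "") // drop punctuation
    .replace(/\s+/g, "-");        // collapse spaces into hyphens
}

// Render a heading with a self-referencing anchor, the classic
// "click the title to copy a deep link" pattern.
function renderHeading(level: number, text: string): string {
  const slug = slugify(text);
  return `<h${level} id="${slug}"><a href="#${slug}">${text}</a></h${level}>`;
}
```

In a real setup the equivalent React component would go into the `components` map you hand to MDXRemote, so every `##` in your article automatically becomes a linkable anchor.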

Design

I think the coolest thing about personal blog websites is how wild the designs can get. Everyone's got their own take on what good design looks like.

To me, for a blog, the reading experience is king when it comes to design. A solid combo of colors and fonts can make or break how comfy it feels to read. Especially for a blog site, aiming for a paper-like vibe seems like the way to go.

After some digging and fiddling around, here's the final setup I landed on. I didn't tackle dark mode, though -- it's way trickier than I thought. Maybe I (or my AI squad) will get to it down the road.

| Role | Description |
| --- | --- |
| Background | Main background of the site |
| Text Color | Primary font color for body text |
| Primary Accent | Decorative color for first letters or highlights |
| Secondary Accent | Used for quotes, icons, subtle highlights |

And the typography.

| Font | Usage |
| --- | --- |
| Nunito Sans | UI elements: buttons, navigation |
| Montserrat | Section & chapter titles, for structure |
| Spectral | Body text, for comfortable reading |
| Maple Mono | Programming code |

So, the whole site runs on a warm color scheme with a mix of two sans-serif fonts (Nunito Sans and Montserrat) and a serif one (Spectral). I made sure the main color combo meets the WCAG contrast guidelines. You might ask: why only the main colors? Well, some third-party components are a pain to tweak color-wise. 😣 Maybe I'll find time to mess with those later.
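Checking a color pair against WCAG isn't hand-wavy, by the way -- the contrast ratio has an exact formula in WCAG 2.x. Here's a small sketch of it; the hex colors in the test are placeholders, not my actual palette:

```typescript
// sRGB gamma expansion for one 8-bit channel, per the WCAG 2.x definition.
function channel(c8: number): number {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a "#rrggbb" color.
function luminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const r = (n >> 16) & 0xff, g = (n >> 8) & 0xff, b = n & 0xff;
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio ranges from 1 (identical colors) to 21 (black on white).
// WCAG AA wants >= 4.5 for normal body text.
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Run your background and text colors through `contrastRatio` and you instantly know whether the "comfy paper vibe" is also a readable one.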

On top of fonts and colors, transition animations are a big deal for a website too. Especially since I'm a frontend dev -- nowadays, the React community's got some dope libraries and tools to make that stuff a breeze:

  • Motion. Animation library, built on native browser APIs.
  • reactbits. A collection of animated, interactive, and fully customizable React components.
  • animate.style. Ready-to-go, cross-browser animations.
  • Easing Wizard. Great tool to whip up and tweak CSS easing functions.
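Easing Wizard spits out `cubic-bezier(...)` strings, and if you've ever wondered what the browser actually does with those four numbers, here's a sketch. It uses bisection instead of the Newton-style solve real engines use, but the result is the same; the control points are the spec's fixed P0=(0,0) and P3=(1,1) plus your two:

```typescript
// Evaluate a CSS cubic-bezier(x1, y1, x2, y2) easing curve.
function cubicBezier(x1: number, y1: number, x2: number, y2: number) {
  // One coordinate of the Bezier curve at parameter t (endpoints 0 and 1 are implicit).
  const at = (t: number, a: number, b: number): number =>
    3 * (1 - t) ** 2 * t * a + 3 * (1 - t) * t ** 2 * b + t ** 3;

  return (x: number): number => {
    // For valid CSS easings, x(t) is monotonic, so bisection on t works.
    let lo = 0, hi = 1;
    for (let i = 0; i < 50; i++) {
      const mid = (lo + hi) / 2;
      if (at(mid, x1, x2) < x) lo = mid; else hi = mid;
    }
    return at((lo + hi) / 2, y1, y2);
  };
}
```

`cubicBezier(0, 0, 0.58, 1)` gives you CSS's built-in `ease-out`; feed it animation progress (0 to 1) and it returns the eased value.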

I didn't exactly study UX design in school (or programming either, for that matter 😅). But with a little help from AI and a lot of coffee, I managed to cobble together some design.

AI Audio

I study Materials Science and Technology in school. In that world, some researchers are obsessed with slapping graphene into all kinds of materials and then claiming they've invented the next big super material. If you haven't messed with graphene, you're basically too embarrassed to even say hi to anyone. Now, in the programming scene, AI is the graphene equivalent -- so, yeah, I added AI to my website.

Google's got this killer AI tool called NotebookLM. It's an AI-powered research assistant that helps people summarize, analyze, and pull insights from docs, notes, websites, even YouTube videos. My favorite thing about it? The Audio Overview feature. It spits out a podcast-style audio with a guy and a gal chatting about whatever resource you feed it. And it's not some robotic recap -- the voices are lively, full of emotion, taking turns steering the convo like a real talk show.

Problem is, Google doesn't give you an official API to generate the audio on the fly. So I've been doing it the old-school way for my site: dump the article into NotebookLM, generate the audio, download it, then upload it to a hosting service.

There are tons of hosting options out there, but after some digging and a quick budget check, I went with Cloudflare R2. It's an AWS S3-compatible object storage service, but the kicker? Zero egress fees. That means no charges for data leaving the R2 bucket to hit the internet or anywhere else.
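My upload step is manual, but for the record, R2's S3 compatibility means the standard AWS SDK works against it -- you just point the client at your R2 endpoint. A configuration sketch, assuming `@aws-sdk/client-s3` is installed; the account ID, bucket, and key names are placeholders, not my real setup:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

// R2 speaks the S3 API; only the endpoint and region differ from AWS.
// Credentials and account ID come from the Cloudflare dashboard.
const r2 = new S3Client({
  region: "auto", // R2 uses "auto" instead of a real region
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Upload one NotebookLM audio file into a (hypothetical) bucket.
await r2.send(
  new PutObjectCommand({
    Bucket: "blog-audio",
    Key: "posts/rebuilding-my-blog-2025.mp3",
    Body: await readFile("audio-overview.mp3"),
    ContentType: "audio/mpeg",
  })
);
```

With zero egress fees, serving that file to every reader costs nothing beyond storage.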

For the audio player, I picked Plyr. It's a sleek little HTML5 media player that handles video and audio like a champ. There's even a React binding, plyr-react, so I could plug it right in. Buttery smooth user experience.

Interactive component

When I was writing my article on video encoding and decoding, I had to tackle a beast of a concept: Discrete Cosine Transform (DCT) and quantization. Honestly, just pronouncing those words is a struggle for me, let alone explaining them. So, I called in my AI crew for help. ChatGPT suggested building an interactive component to break it down, Gemini scoured the web for resources, Grok fine-tuned the plan, DeepSeek gave it a once-over, and Claude handled the code. In the end, I typed pnpm run dev to fire up the site. 😉


Thanks to some flawless teamwork, we built an interactive component using the pure Canvas API to explain DCT and quantization. You can peek at the source code here.

Since it's loaded with mathy calculations, it's prime territory for vibe coding. Still, I brushed up on the basics of the Canvas API before diving in. What we did was pretty straightforward: grabbed a canvas context with const ctx = canvas.getContext('2d'), then used the API to draw paths, circles, and rectangles; fill areas; style each element; handle mouse events; and tie it all into React with useEffect. All the docs are out there online if you're curious.
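The math behind the component is compact enough to show. Here's a sketch of the 1-D DCT-II (the 2-D version used in image codecs is just this applied to rows, then columns) plus the quantize-and-round step that actually throws information away. It's illustrative, not the component's exact source:

```typescript
// Orthonormal 1-D DCT-II: X[k] = a(k) * sum_n x[n] * cos(pi * (2n+1) * k / (2N))
function dct1d(x: number[]): number[] {
  const N = x.length;
  return Array.from({ length: N }, (_, k) => {
    const scale = Math.sqrt((k === 0 ? 1 : 2) / N);
    let sum = 0;
    for (let n = 0; n < N; n++) {
      sum += x[n] * Math.cos((Math.PI * (2 * n + 1) * k) / (2 * N));
    }
    return scale * sum;
  });
}

// Quantization: divide each coefficient by a step size and round.
// The rounding is the lossy part -- small high-frequency terms become 0.
function quantize(coeffs: number[], steps: number[]): number[] {
  return coeffs.map((c, i) => Math.round(c / steps[i]));
}
```

For a flat signal like `[1, 1, 1, 1]`, everything collapses into the DC coefficient and the other three come out zero, which is exactly why the DCT compresses smooth image blocks so well.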

But, bro, building something like this with the raw API is a grind. Claude didn't whine about it, but I could tell from how often it prompted me to start a new conversation to keep things from dragging on too long.

Konva would've been a way better pick for React and TypeScript users. Claude and I aren't planning to redo the DCT visualization with Konva right now, but I might give it a shot sometime down the line.

What's next

Honestly, it's tough to predict what's next for my personal blog site. Steve Jobs dropped that line at Stanford's commencement: "Stay hungry, stay foolish." I'm typing this up on Arch Linux right now 😋, and I just hope I can keep at it and log my journey here.

Oh, and it'd probably be smart to remember to cough up the $20 yearly domain fee.