Supporting Tools and Closing Thoughts

NotebookLM, Repomix, Gemini Gems — the supporting tools that complete the AI-assisted development workflow. Plus the honest take on what this all means for software engineers.

March 30, 2026 · 5 min read · Part 8 of 8

The Supporting Toolkit

Claude Code and Cursor are the core tools. But three others keep coming up in professional AI-assisted development workflows.

NotebookLM

Google's NotebookLM is a research tool, not a coding tool. Its core behavior: you upload sources — YouTube videos, PDFs, docs, text — and it only answers questions based on those sources. Not its general training data. Your sources.

Why this matters for engineers:

Drop in:

  • The docs for a library you're learning
  • Architecture decision records from a project
  • Multiple YouTube videos on a technology
  • A set of blog posts or papers on a topic

Then ask questions, get summaries, generate study guides — all grounded in exactly the material you provided.

The podcast feature is the most famous: it generates a two-person audio discussion of your uploaded content, complete with an interactive mode where you can ask questions during the "conversation." More importantly: you can generate briefing documents, study guides, and outlines from your uploads. Useful for rapidly getting up to speed on an unfamiliar codebase's documentation or a technology you're about to use.

Repomix

Repomix takes a code repository and flattens it into a single file that's parseable by an LLM.

```bash
npx repomix                                        # pack the current directory
npx repomix --remote https://github.com/user/repo  # pack a remote repo
```

It outputs a single file with:

  • The file tree structure
  • All file contents concatenated with clear delimiters
  • Configurable exclusions (test files, docs, specific directories)
  • Token count estimation
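The exclusions can be kept in a config file at the repo root so every run uses the same settings. A minimal sketch, assuming Repomix's `repomix.config.json` format (the `output.filePath` and `ignore.customPatterns` keys are how the config is documented as of this writing; verify against the current Repomix docs):

```json
{
  "output": {
    "filePath": "repo-packed.txt"
  },
  "ignore": {
    "customPatterns": ["**/*.test.ts", "docs/**", "**/__snapshots__/**"]
  }
}
```

With this in place, a plain `npx repomix` picks up the config automatically, which keeps the packed output reproducible across runs and teammates.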

Use cases:

Drop the whole output into Gemini (2M token context window) and ask:

  • "Explain the architecture of this project"
  • "What's the state management approach?"
  • "If I wanted to add feature X, what are the moving pieces?"
  • "What patterns does this codebase use consistently?"

With Gemini's massive context window, a whole small-to-medium repo fits in a single prompt. You get a genuine high-level understanding of a codebase without cloning it, running it locally, or spending hours reading files.

Also useful for your own codebase: "given the whole repo, if I wanted to migrate from Prisma to Drizzle, what would I need to touch?" — the AI has the full picture.

Gemini Gems

A Gem is Google's version of a persistent AI persona you can configure and chat with. You give it a system prompt (like a persona), and every conversation with it starts from that configuration.

Relevant use case: creating a dedicated planning persona.

```
Gem prompt:

You are a senior software architect. When given a feature request or
problem, you produce structured implementation plans in this format:
[format specification]

Ask clarifying questions before planning. Challenge assumptions.
Point out dependencies and risks. Never write code — only plans.
```

Now you have a persistent planning assistant available in Gemini's interface with a large context window, optimized for the brainstorming phase of the plan → execute → verify workflow.

The Honest Reckoning

Some things are worth saying directly after all of this.

The code generation itself was never the hard part. It was always the decisions — what to build, which tradeoffs to make, how to structure it for the team that has to maintain it. AI tools don't change that. They just make the execution faster, which raises the premium on getting the decisions right.

More code is usually more problems. The vibe coding failure mode — flowing state, lots of generation, surface and realize you don't understand what was built — is real. The engineers getting the most leverage aren't generating more code. They're generating better code, with more discipline around what gets kept.

The discipline you always aspired to follow compounds now. Git discipline, small focused commits, test-driven development, architectural decision records — these weren't just good ideas before. Now they're load-bearing parts of how you work with AI tools effectively. The CLAUDE.md, the ESLint rules, the hooks, the commit-before-asking-for-more habit — this is all just good engineering practice made more consequential.
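One concrete way those quality gates get enforced is through hook configuration rather than trust. A minimal sketch, assuming Claude Code's `.claude/settings.json` hooks format (the `PostToolUse` event and `matcher`/`command` fields follow the hooks documentation as of this writing; check the current docs for exact field names):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx eslint --max-warnings 0 . || echo 'Lint failed: fix before continuing' >&2"
          }
        ]
      }
    ]
  }
}
```

The point of wiring lint into a hook rather than asking the agent to run it is exactly the "enforcement doesn't require trust" idea: the check fires after every file edit whether or not anyone remembers to request it.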

Bad days will happen. A productive two days followed by a bad day where you throw it all away and rewrite by hand over the weekend is a normal experience, not a failure. The branch where the AI-generated code lives is still useful — you understand the shape of the solution better for having gone through it. Sometimes the best use of AI-generated code is as a draft you learn from and then throw away.

It doesn't replace understanding. The engineers who'll get into trouble are the ones who trust the output without understanding it. An AI-generated codebase you don't understand is a liability. The code runs until the moment it doesn't, and then you can't fix it without understanding it first.

It does accelerate learning. You can now generate a messy legacy codebase on purpose and practice the navigation skills that used to only come with years in production systems. You can get exposure to architectural challenges that weren't accessible early in a career. That's a genuine acceleration of the learning curve, for those who use it that way.

The Practical Summary

What to do well                  | Why
---------------------------------+------------------------------------------
Write a strong CLAUDE.md         | Every session benefits
Set up ESLint with tight rules   | Catches AI slop automatically
Use hooks for quality gates      | Enforcement doesn't require trust
Commit often                     | Makes fresh starts cheap
Plan before executing            | Clearer spec → better output
Watch, don't queue               | Catch mistakes early
Use the right tool               | Cursor for inline, Claude Code for scale
Start fresh when stuck           | New context, new clarity

The meta-skill isn't prompting. It's building the system around the prompting: the rules, the hooks, the habits, the workflows that make AI output consistently trustworthy.

That system compounds. A team with strong CLAUDE.md files, ESLint guardrails, and established hooks will consistently outperform a team with better prompting skills but no supporting infrastructure.

The tools will keep changing — model capabilities, context windows, pricing, new entrants. The underlying discipline won't. Ship working, understandable code. Keep the codebase maintainable. Review what you ship. Know when to stop the agent and do it yourself.

That was always the job. AI tools just changed how much you can get done in a day.
