
Cory LaNou
AI: Back Where I Started
Overview
I spent over a decade building muscle memory in vim and tmux. Then AI-powered IDEs like Cursor showed productivity gains I couldn't ignore, so I made the switch despite the pain. A year later, I'm back in the terminal. Not because I gave up on AI, but because AI evolved. CLI-based agents like Claude Code changed everything. Now I work from anywhere via Tailscale and Termux, running multiple AI agents simultaneously without my machine crashing. The future of development wasn't in the IDE. It was in making the terminal powerful enough to compete.
The Impossible Choice
When GitHub Copilot and Cursor launched, I faced a choice no developer with a decade of terminal muscle memory wants to make: stick with my perfectly tuned workflow or chase the productivity gains everyone was talking about.
The productivity boost was undeniable. Type `if` and watch Copilot write four perfect lines of idiomatic Go. As someone who's spent years writing Go, a language where idioms matter, having AI that actually understood my style and rarely needed correction was genuinely amazing. It wasn't just completing code; it was writing code better than my first pass often was. Extra guard clauses I'd normally add later during debugging. Edge case handling I'd typically skip until testing revealed the need.
I was seeing 2x, sometimes 3x productivity improvements. You can't ignore numbers like that.
But the cost was real.
The Price of Progress
Switching to Cursor meant abandoning a decade of muscle memory. Every time my fingers moved to execute a vim command, I'd hit the wrong key. I had to consciously re-learn shortcuts I'd been executing unconsciously for years. Unlearning is harder than learning. Your brain fights you every step of the way.
Then there was the mouse problem. I'm someone who views taking your hands off home row as a workflow failure. Yet modern IDEs, for all their power, still require mouse interactions for certain tasks. Every time I reached for that mouse, I felt slower.
But the IDE wasn't all bad. I'd be lying if I said I didn't appreciate some things. The fonts were beautiful and easy to read. Dark mode was thoughtfully designed, not a config file battle. Clicking URLs directly in the terminal output instead of copying and pasting them. Yes, terminals can do these things. But only after you've spent hours in config files getting everything just right.
I spent the last year primarily in Cursor, and honestly, it was worth the pain. The productivity gains were real.
Then Q4 2025 happened.
The Shift Most Are Still Denying
I'd been using CLI-based AI coding since summer 2025. Claude Code, Codex CLI, Gemini CLI. They were good, and I was productive. But I was still splitting time between the terminal and Cursor, using whichever tool fit the task.
Then the model updates hit. Opus 4.5 dropped in late November. GPT-5.2-Codex followed in mid-December. The difference wasn't incremental. It was a step change. The CLI agents went from "useful for certain tasks" to "faster than me at almost everything."
My workflow shifted fundamentally. I went from spending most of my time writing code to spending most of my time doing research: understanding issues, planning features, investigating bugs. Then I let AI take the first pass at implementation.
Here's what that looks like in practice: I use AI to research an issue, gathering context and understanding the problem space. Then I let Claude Code take the first implementation pass. After that, I have Codex review Claude's work. They feed off each other, iterating and improving. By the time they're done with their feedback cycle, the code is ready for my manual review.
I'm no longer writing most code. I'm researching, planning, and reviewing. The AI agents handle the implementation, and they're reviewing each other's work before it ever gets to me.
The Real Win: Parallel Everything
The biggest shift isn't just that AI writes code faster. It's that I'm no longer limited to one thing at a time.
I built Gopher AI plugins that automate the work I used to do manually. `/start-issue 456` doesn't just set up a worktree. It fetches the full issue context, checks if this was already fixed elsewhere, looks for duplicates, reads through all the comments for additional context, creates an isolated git worktree, sets up the branch, and routes to the right workflow based on whether it's a bug or feature. All the non-coding work that still eats your day, done in seconds.
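For context, here's roughly what that plugin automates, sketched by hand with the GitHub CLI. The issue number, repo path, and exact queries are illustrative, not the plugin's actual implementation:

```bash
# Hand-rolled sketch of what /start-issue 456 roughly automates (illustrative only).
issue=456

# Fetch the full issue, including every comment, for context.
gh issue view "$issue" --comments > "/tmp/issue-$issue.md"

# Check whether a merged PR or another open issue already covers it.
gh pr list --state merged --search "$issue"
gh issue list --search "in:title $(gh issue view "$issue" --json title -q .title)"

# Create an isolated worktree and branch to work in.
git worktree add "../myrepo-issue-$issue" -b "issue-$issue"
```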
That means I can have 5 issues in progress simultaneously. Each in its own worktree. Each with its own AI agent working on it. I kick off an issue, switch to another terminal, start the next one. Come back later to review what got done.
This workflow is impossible when you're writing code yourself. You can't context-switch between 5 features and make real progress on all of them. But when AI handles implementation and you're just researching, reviewing, and course-correcting? Parallel work becomes the default.
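In practice, kicking those parallel sessions off is just a loop over tmux windows. The issue numbers, worktree paths, and prompt below are placeholders, not my exact plugin commands:

```bash
# One tmux window per issue, each agent working in its own worktree (sketch).
for issue in 456 457 458; do
  tmux new-window -n "issue-$issue" -c "$HOME/code/myrepo-issue-$issue" \
    "claude 'Start on GitHub issue $issue; context is in /tmp/issue-$issue.md'"
done
```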
I wrote more about this workflow in From Commands to Plugins.
This changed what I needed from my development environment.
The Breaking Point
Running this many parallel sessions exposed a fundamental problem.
When you're mostly writing code, an advanced IDE makes sense. Autocomplete, inline documentation, sophisticated refactoring tools. These matter when your hands are on the keyboard typing.
But when you're running 15 to 20 IDE instances simultaneously, each one executing AI agents, each agent consuming memory and CPU, your machine starts to buckle.
I have an M1 Max with 64GB of RAM. At half my target workload, I was already at 81% memory usage. Cursor would crash multiple times a day. Daily reboots became necessary just to keep working.
I started investigating, assuming the AI CLI tools were the problem. They weren't. The IDEs were the bloat.
Here's what I measured with just a few Cursor instances open:
| Process | Memory |
|---|---|
| gopls (single instance) | 4.93GB |
| Cursor Helper (GPU) | 3.11GB |
| Cursor Helper (Renderer) x5 | ~3GB |
| Node processes | ~1GB |
Each Cursor instance spawns its own gopls process. At 20-30 projects, that's 100-150GB of memory just for the IDE layer. On a 64GB machine. The workload was literally impossible.
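If you want to check this on your own machine, here's a rough way to total resident memory across every gopls process (a sketch; ps reports RSS in kilobytes):

```bash
# Sum RSS (in KB) across all gopls processes and print the total in GB.
ps -axo rss,comm | awk '/gopls/ { sum += $1 } END { printf "%.2f GB\n", sum / 1024 / 1024 }'
```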
I didn't want to go back to the terminal. I'd paid the cost of switching once; I didn't want to pay it again in reverse.
Then I found Ghostty.
The Terminal That Changed Everything
Ghostty is what modern terminal emulators should have been all along. Beautiful fonts out of the box. Thoughtful design. Shift-click a URL and it opens in your browser. All the polish of modern IDEs, but in a terminal that actually respects your system resources.
Pair that with tmux, and suddenly I had everything I needed. Multiple panes, session management, context switching. All the power I'd built muscle memory for over a decade. And because I already knew vim and tmux inside and out, there was no re-learning curve. I was immediately comfortable.
The numbers tell the story:
| Setup | Memory for 20 sessions |
|---|---|
| Cursor + AI agents | 100-150GB |
| Terminal + AI agents | 10-15GB |
Neovim runs at 50-100MB per instance. Cursor runs at 3-5GB. More importantly, gopls can be shared across all terminal sessions via a single LSP server. Each Cursor instance spawns its own.
Same workload. 10x less memory. No crashes.
I could finally run all my AI agents in parallel without my computer giving up.
Working From Anywhere
Getting back to terminal efficiency was great. Then I added Tailscale to the mix.
Tailscale creates a secure network between all your devices. Install Termux on my phone and tablet, connect via Tailscale, and suddenly I can SSH into my home development machine from anywhere.
Sitting in the car while someone else drives? I can attach to my tmux sessions, check on running AI agents, review their output, kick off new tasks.
On my tablet at a coffee shop? Same thing. Full access to my development environment, all my sessions, all my running agents.
This is productivity I never had with IDEs. Cursor doesn't work on my phone. VS Code barely functions on a tablet. But with tmux over SSH via Tailscale, I have my entire development environment available anywhere I have internet.
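The phone side is minimal. Assuming the Tailscale app is already connected on the phone and the dev machine's Tailscale IP is 100.x.y.z (both the IP and the username below are placeholders), it's just:

```bash
# In Termux (Tailscale itself runs as a separate Android app)
pkg install openssh

# SSH over the tailnet and attach to (or create) the main tmux session
ssh cory@100.x.y.z -t 'tmux attach -t main || tmux new -s main'
```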
I'm legitimately productive in situations where before I would have just been scrolling Twitter.
SSH Port Forwarding with Auto-Reconnect
When running dev servers on your remote machine (localhost:8000, etc.), you need to forward those ports to your local machine. The problem: SSH tunnels die when the remote server restarts or the network hiccups.
Solution: autossh.

```bash
# Install on your local machine
brew install autossh
```
I use a shell function that forwards a range of dev ports with auto-reconnect:
```bash
proxy-dev() {
  local -a ports
  # Build a -L forward flag for every port in the range.
  for p in {8000..8020}; do
    ports+=(-L "${p}:localhost:${p}")
  done
  # The forwarding flags must come before the destination host,
  # otherwise ssh treats them as a remote command.
  autossh -M 0 -N \
    -o "ServerAliveInterval=30" \
    -o "ServerAliveCountMax=3" \
    -o "ExitOnForwardFailure=yes" \
    "${ports[@]}" \
    <your-tailscale-ip>
}
```
Why a range of ports? Each of my projects has a dedicated port locally so I don't have to change scripts. This project runs on 8008, another on 8009, another on 8010. With multiple AI agents working on different projects simultaneously, I might have several dev servers running at once. Forwarding the whole range means I never have to think about which ports are active.
The key options:
- `-M 0`: Disable autossh's monitoring port, use SSH's built-in keepalives instead
- `ServerAliveInterval=30`: Send a keepalive packet every 30 seconds
- `ServerAliveCountMax=3`: After 3 missed responses (~90 seconds), reconnect automatically
- `ExitOnForwardFailure=yes`: Fail immediately if a port is already in use locally
Run `proxy-dev` in one terminal, and your remote dev servers appear at `localhost:8000-8020`. When the remote machine restarts for a deploy, it reconnects automatically within 90 seconds.
Screenshot Sync for AI Context
One subtle but powerful piece: screenshots. Both machines save screenshots to Dropbox:
```bash
# On both local and remote machines
mkdir -p ~/Dropbox/screenshots
defaults write com.apple.screencapture location ~/Dropbox/screenshots
killall SystemUIServer
```
When I take a screenshot on the remote desktop to capture a UI bug or error message, it syncs to Dropbox instantly. My local Claude Code session can then evaluate that screenshot without any manual file transfer. Visual debugging works the same whether the screenshot was taken locally or remotely. The AI just sees a path to ~/Dropbox/screenshots/... and can analyze it.
This closes the loop on the "AI can't see what I see" problem that plagues remote development.
My Setup
For those who want the specifics, here's exactly what I'm running:
Terminal: Ghostty
Ghostty is a GPU-accelerated terminal built by Mitchell Hashimoto (HashiCorp co-founder). Key config (a sketch of the config file follows the list):
- Font: CaskaydiaCove Nerd Font Mono at 14pt with thickening enabled
- Theme: Catppuccin Mocha - the same theme across all my tools
- Vim-style navigation: `Ctrl+h/j/k/l` for pane switching
- Quick splits: `Cmd+d` (right), `Cmd+Shift+d` (down)
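Here's roughly what those settings look like in Ghostty's config file. Treat it as a starting point; key and action names can vary slightly between Ghostty releases:

```bash
# Append to (or merge into) ~/.config/ghostty/config
cat >> ~/.config/ghostty/config <<'EOF'
font-family = CaskaydiaCove Nerd Font Mono
font-size = 14
font-thicken = true
theme = catppuccin-mocha

# Vim-style pane switching and quick splits
keybind = ctrl+h=goto_split:left
keybind = ctrl+j=goto_split:bottom
keybind = ctrl+k=goto_split:top
keybind = ctrl+l=goto_split:right
keybind = cmd+d=new_split:right
keybind = cmd+shift+d=new_split:down
EOF
```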
Multiplexer: tmux
tmux for session management. My config includes (a minimal sketch follows the list):
- Catppuccin-styled status bar matching Ghostty
- Vim-style navigation: `prefix + h/j/k/l` for panes
- Smart splits: `|` and `-` that preserve working directory
- Plugins:
  - tmux-resurrect - persist sessions across restarts
  - tmux-continuum - auto-save/restore
  - tmux-which-key - discoverable keybindings
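And the tmux side, as a minimal sketch. It assumes TPM is installed at ~/.tmux/plugins/tpm, and the plugin repository names are the commonly used ones rather than my exact forks:

```bash
# Append to (or merge into) ~/.tmux.conf, then reload: tmux source-file ~/.tmux.conf
cat >> ~/.tmux.conf <<'EOF'
# Vim-style pane navigation (after the prefix key)
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R

# Splits that keep the current working directory
bind | split-window -h -c "#{pane_current_path}"
bind - split-window -v -c "#{pane_current_path}"

# Plugins managed by TPM
set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @plugin 'alexwforsythe/tmux-which-key'
run '~/.tmux/plugins/tpm/tpm'
EOF
```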
Editor: Neovim with LazyVim
LazyVim as the base config, customized for Go:
- Go support: go.nvim with gopls configured for gofumpt, staticcheck, and all code lenses
- Claude integration: claude-code.nvim - toggle Claude with `<leader>cc`, send selections with `<leader>cs`
- Theme: Catppuccin (consistent across the stack)
- Key plugins: Telescope, Neo-tree, Treesitter, LSP, Conform, Gitsigns
Shell: Zsh with Oh My Zsh
- Theme: Powerlevel10k
- Plugins: git, golang, heroku, docker
- Editor: Neovim for local, vim for SSH sessions
- Environment: direnv for per-project settings (example `.envrc` below)
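The direnv piece is small, but it ties into the dedicated-port-per-project idea from earlier. A hypothetical `.envrc` (the variable name is purely illustrative):

```bash
# .envrc (sketch) - run `direnv allow` after creating or editing it
export DEV_PORT=8008   # this project's dedicated dev-server port (illustrative name)
```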
AI CLI Tools
The core of the workflow:
- Claude Code (v2.0.75) - primary implementation agent
- OpenAI Codex CLI (v0.77.0) - code review and second opinions
- Google Gemini CLI (v0.21.1) - research and alternative perspectives
I run multiple instances simultaneously in separate tmux panes.
Claude Code Plugins
Custom plugins I've built for my Go workflow:
- go-workflow - GitHub issue management with worktrees
- go-dev - Go best practices, test generation, lint fixes
- productivity - Weekly summaries, changelogs, standup notes
- llm-tools - Cross-LLM comparison and delegation
Remote Access
- Tailscale - zero-config VPN connecting all devices
- Termux - Android terminal with SSH access
- Full tmux session attachment from phone/tablet
The Configuration Paradox
If you've made it this far, you might be thinking: "That's a lot of configuration. Fonts, themes, tmux plugins, Neovim setup, shell customization… this is exactly why I use VS Code."
You're not wrong. This is the terminal's historical Achilles heel.
IDEs like Cursor and VS Code just work. Open the app, install an extension with one click, and you're done. Updates happen automatically. Themes apply instantly. Everything is designed to minimize friction. You spend your time coding, not configuring.
Terminal setups are the opposite. Every tool needs configuration. You're constantly researching the latest widget or plugin. "Should I try this new terminal emulator? What about that tmux plugin everyone's talking about? Is there a better Neovim colorscheme?" And when something breaks (a plugin conflict, a theme that doesn't render correctly, a keybinding that stopped working), you're debugging config files instead of writing code.
I know this pain intimately. I've spent countless hours over the years grooming my terminal setup. Reading blog posts about dotfile management. Watching YouTube videos about "the ultimate Neovim config." It's a time sink that never ends.
AI does all of that now.
When I decided to switch to Ghostty, I didn't spend hours reading documentation and experimenting with settings. I asked Claude Code to research the best configuration for my workflow, and it gave me a working config in minutes. When I wanted to add tmux-resurrect for session persistence, I didn't hunt through GitHub issues to figure out why it wasn't working. Claude debugged it for me.
The setup I showed you earlier? Most of it was configured by AI. I described what I wanted, and Claude Code researched the options, wrote the configs, and fixed the issues when things didn't work. The same AI agents I use for coding also maintain my development environment.
The reason terminal setups are now viable is the same reason I'm in the terminal in the first place. AI makes the terminal powerful enough to compete with IDEs, and AI makes the terminal maintainable enough to be practical.
The configuration burden that drove me to Cursor in the first place? It's gone. Not because terminals got simpler, but because I have an AI assistant that handles the complexity for me.
Why This Matters
If you told me two years ago that the future of AI-assisted development would bring me back to the terminal, I wouldn't have believed you. The trend seemed clear: IDEs were getting more powerful, more integrated, more essential.
What actually happened: AI got good enough that the development workflow fundamentally changed. We're not writing as much code anymore. We're researching, planning, orchestrating AI agents, and reviewing their work.
For that workflow, the terminal is actually better. It's lighter weight, which means you can run more agents simultaneously. It's more scriptable, which means you can automate the orchestration. And with tools like Ghostty, it's finally as pleasant to use as modern IDEs.
The mobile access via Tailscale and Termux is just the cherry on top. But it's a big cherry. Being able to manage your development workflow from anywhere, on any device, fundamentally changes what's possible.
The Irony
I spent a year forcing myself to adapt to modern IDEs because that's where AI productivity was. I paid the cost of retraining muscle memory, accepting mouse-driven workflows, dealing with the friction.
Now, a year later, I'm back where I started. Terminal, tmux, vim. The tools I spent a decade mastering.
Except I'm not really back where I started. I'm doing something completely different. I'm not writing code in vim like I used to. I'm orchestrating AI agents, reviewing their work, and managing that process across devices and locations in ways that were never possible before.
The terminal won. Not because IDEs failed, but because the terminal evolved just enough, and AI evolved dramatically, and together they made something better.
It turns out the future of development isn't about fancy IDEs or powerful AI in isolation. It's about the right tools for the workflow AI enables. And for me, that's tmux sessions running AI agents, accessible from anywhere, using tools I already mastered a decade ago.
I'm back where I started. And I've never been more productive.
Your Setup Might Look Different
This is what works for me. Your setup will depend on your workflow, your tools, your constraints.
Maybe you don't need remote access from mobile devices. Maybe you're not running a dozen AI agents in parallel. Maybe you still spend most of your time writing code, not reviewing it.
That's fine. Use what works.
But if you're finding your IDE is becoming the bottleneck, crashing under the weight of AI agents, consuming resources, limiting how many sessions you can run, consider that the terminal might not be a step backward. It might be a step forward.
And if you already know tmux and vim, you might find you haven't lost those skills. You've just been waiting for the right moment to use them again.
Further Reading
If you're interested in exploring the tools and concepts mentioned:
- Ghostty - Modern GPU-accelerated terminal by Mitchell Hashimoto
- tmux - Terminal multiplexer for session and pane management
- Tailscale - Zero-config VPN for secure access across devices
- Termux - Android terminal emulator and Linux environment
- Claude Code - AI coding assistant with CLI access
- LazyVim - Neovim setup with sensible defaults
- Catppuccin - Soothing pastel theme for everything
- Nerd Fonts - Developer fonts with icons