I use Conductor to manage git worktrees. It's great: you get isolated branches,
each with its own working directory, and Conductor handles creating and tearing them down. But every time
it spun up a new workspace for a Laravel project, I'd hit the same annoying wall: no .env, no
node_modules, site not linked in Herd, wrong PHP version. Five minutes of mechanical setup before I could
even look at the code.
Turns out Conductor has a conductor.json config with a scripts feature that solved this in a pretty clean
way. You define setup, run, and archive scripts, and Conductor runs them at each stage of the worktree
lifecycle. One command, fully working Laravel app, every time.
Here's how I set it up with Laravel Herd, and the tricks I've picked up along the way.
What Conductor Does
Conductor is a desktop app that sits on top of git worktrees. You point it at a repo, it creates worktrees for you, and it runs your scripts at each stage of the worktree lifecycle:
- Setup runs once when the workspace is created — install dependencies, link the site, configure the environment.
- Run boots your dev environment — starts the dev server, queue workers, whatever you need.
- Archive tears everything down when you're done with the branch — unlinks the site, removes
node_modules, frees disk space.
You define these scripts in a .conductor/ folder in your project root, and point to them from a
conductor.json file. Commit both to your repo and every developer on your team gets the same setup
experience.
How Worktrees Are Organized
Conductor keeps everything under ~/conductor/workspaces/. Each project gets a folder, and each worktree
inside it gets a city name (Conductor picks these automatically):
~/conductor/workspaces/
├── my-project/
│   ├── nagoya/
│   ├── montreal/
│   └── salvador/
├── another-app/
│   └── khartoum/
├── client-site/
│   ├── bordeaux/
│   ├── london/
│   ├── minsk/
│   ├── quito-v1/
│   └── vilnius-v1/
├── sema-lisp/
├── sql-splitter/
└── token-editor/
    └── abu-dhabi/
Each of these is a full git worktree. nagoya might be a feature branch, montreal a bugfix, salvador
a spike — all running simultaneously without stepping on each other.
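If worktrees are new to you, here is what Conductor is automating under the hood. This throwaway demo uses plain git in a temp directory; the repo, branch, and worktree names are made up:

```shell
# Throwaway demo of plain git worktrees, the primitive Conductor manages.
# One repo, several independent checkouts, each on its own branch.
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.test \
    commit -q --allow-empty -m "init"
git worktree add -q ../nagoya -b feature-nagoya
git worktree add -q ../montreal -b bugfix-montreal
git worktree list   # shows the main checkout plus both worktrees
```

Conductor wraps exactly this mechanism, plus naming, lifecycle scripts, and cleanup.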
The Config
This goes in your project root as conductor.json:
{
  "scripts": {
    "setup": ".conductor/setup.sh",
    "run": ".conductor/run.sh",
    "archive": ".conductor/archive.sh"
  },
  "runScriptMode": "concurrent"
}
Three scripts, three lifecycle hooks. runScriptMode: "concurrent" means Conductor runs the run script
in a way that supports concurrent processes (like a Vite dev server and a queue worker running side by
side).
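At its core, "concurrent" just means multiple long-lived processes running side by side until you stop them. A plain-shell sketch of the same idea, where the echo lines stand in for real servers:

```shell
# Two stand-in "servers" running side by side; the script waits on both.
# (The sleeps and marker files are only here to make the demo observable.)
(sleep 0.2; echo "dev server ready"; touch /tmp/demo-vite.up) &
(sleep 0.2; echo "queue worker ready"; touch /tmp/demo-queue.up) &
wait   # blocks until both background jobs exit
```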
One thing I wish conductor.json supported: arrays for the script values, so you could inline multiple
commands without cramming everything into one unreadable string (the way composer.json scripts do it).
It doesn't, so just bypass the whole problem by pointing each hook at its own .sh file. You get proper
syntax highlighting, comments, multi-line commands — all the things you lose when you try to stuff shell
logic into a JSON string. Later in this article there's a zsh function you can paste into your ~/.zshrc
to scaffold the whole thing out in any project.
Environment Variables
Conductor injects these into every script it runs. You'll use them throughout your setup and teardown logic:
# Available in every .conductor/ script:
CONDUCTOR_WORKSPACE_NAME # e.g. "nagoya"
CONDUCTOR_WORKSPACE_PATH # e.g. "~/conductor/workspaces/my-project/nagoya"
CONDUCTOR_ROOT_PATH # e.g. "~/code/my-project"
CONDUCTOR_DEFAULT_BRANCH # e.g. "main"
CONDUCTOR_PORT # e.g. "55100" (first of 10 ports: PORT+0 through PORT+9)
CONDUCTOR_ROOT_PATH is the important one. It points to your actual repo directory — not the worktree.
This is how you share files like .env without copying them.
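CONDUCTOR_PORT is worth a mention too: each workspace gets a contiguous block of ten ports you can carve up between services. A sketch, where the service assignments are my own convention rather than anything Conductor prescribes:

```shell
# Derive per-service ports from the ten-port block (PORT+0 .. PORT+9).
# The fallback value and the service names are illustrative; in a real
# script CONDUCTOR_PORT is injected by Conductor.
CONDUCTOR_PORT="${CONDUCTOR_PORT:-55100}"
VITE_PORT=$((CONDUCTOR_PORT + 0))       # dev server
WEBSOCKET_PORT=$((CONDUCTOR_PORT + 1))  # e.g. a websocket server
echo "vite=${VITE_PORT} websocket=${WEBSOCKET_PORT}"
```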
The Scripts
These are the actual scripts I use for a Laravel + Herd project, shown verbatim, exactly as they run on my machine.
setup.sh
#!/bin/zsh
# Conductor Environment Variables:
# CONDUCTOR_WORKSPACE_NAME - Workspace name (e.g. "nagoya")
# CONDUCTOR_WORKSPACE_PATH - Workspace path
# CONDUCTOR_ROOT_PATH - Path to the main repo root
# CONDUCTOR_DEFAULT_BRANCH - Default branch (e.g. "main")
# CONDUCTOR_PORT - First of 10 ports, PORT+0 through PORT+9
# Link folder
herd link "${CONDUCTOR_WORKSPACE_NAME}"
# Set php version
herd isolate 8.3 --site="${CONDUCTOR_WORKSPACE_NAME}"
# Symlink .env from project root into worktree
ln -sf "${CONDUCTOR_ROOT_PATH}/.env" .env
# Install deps
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
nvm use
herd composer i
pnpm install
Let me walk through what each piece does.
herd link registers this worktree directory as a Herd site. After this,
http://nagoya.test resolves to this worktree. Each worktree gets its own .test domain automatically
based on the workspace name.
herd isolate pins PHP 8.3 for this specific site. Without it, the worktree uses whatever PHP version
Herd is globally set to — which might be wrong if you've been switching between projects. Isolating per-site
means it doesn't matter.
ln -sf creates a symlink from the worktree's .env to the main repo's .env. This is the single
most important line. Every worktree shares the same database credentials, API keys, and service config.
Change your .env once and every worktree picks it up immediately.
ln -sf won't fail if the target file doesn't exist yet — it creates a dangling symlink, which resolves
the moment the file appears. So the order doesn't matter.
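You can verify the dangling-symlink behavior yourself with throwaway files:

```shell
# A symlink made before its target exists dangles, then resolves the
# moment the target appears. (Throwaway paths, just for demonstration.)
cd "$(mktemp -d)"
ln -sf source.txt link.txt            # source.txt doesn't exist yet
cat link.txt 2>/dev/null || echo "dangling"
echo "APP_KEY=secret" > source.txt    # target appears
cat link.txt                          # → APP_KEY=secret
```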
The rest is standard: switch to the right Node version with nvm, then install Composer and pnpm dependencies.
run.sh
#!/bin/zsh
# Conductor Environment Variables:
# CONDUCTOR_WORKSPACE_NAME - Workspace name (e.g. "nagoya")
# CONDUCTOR_WORKSPACE_PATH - Workspace path
# CONDUCTOR_ROOT_PATH - Path to the main repo root
# CONDUCTOR_DEFAULT_BRANCH - Default branch (e.g. "main")
# CONDUCTOR_PORT - First of 10 ports, PORT+0 through PORT+9
herd open
npx concurrently "pnpm run start" "herd php artisan queue:work"
herd open launches http://nagoya.test in your default browser. Then concurrently runs the Vite dev
server and the Laravel queue worker side by side. When you hit Ctrl+C, both stop.
archive.sh
#!/bin/zsh
# Conductor Environment Variables:
# CONDUCTOR_WORKSPACE_NAME - Workspace name (e.g. "nagoya")
# CONDUCTOR_WORKSPACE_PATH - Workspace path
# CONDUCTOR_ROOT_PATH - Path to the main repo root
# CONDUCTOR_DEFAULT_BRANCH - Default branch (e.g. "main")
# CONDUCTOR_PORT - First of 10 ports, PORT+0 through PORT+9
herd unlink
rm -rf node_modules
Unlink the Herd site and delete node_modules to reclaim disk space. Conductor handles deleting the
worktree directory itself — archive is just for your cleanup logic.
Pain Points This Solves
The .env problem
Without this setup, every worktree needs its own .env. You either copy it manually (and forget, every
time), or you write a wrapper script that does it for you (and then maintain that script forever).
The symlink approach sidesteps all of this. There is exactly one .env file, in your main repo directory.
Every worktree reads from it. Update your database password once and you're done.
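The property is easy to demonstrate with throwaway paths: one real file, many symlinks, one edit visible everywhere.

```shell
# One shared .env, symlinked into two fake "worktrees"; an update to
# the original shows up through every link. (Throwaway paths.)
DEMO="$(mktemp -d)"
mkdir -p "$DEMO/root" "$DEMO/nagoya" "$DEMO/montreal"
echo "DB_PASSWORD=old" > "$DEMO/root/.env"
ln -sf "$DEMO/root/.env" "$DEMO/nagoya/.env"
ln -sf "$DEMO/root/.env" "$DEMO/montreal/.env"
echo "DB_PASSWORD=new" > "$DEMO/root/.env"     # update once...
cat "$DEMO/nagoya/.env" "$DEMO/montreal/.env"  # ...seen everywhere
```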
One caveat: if you need per-worktree database isolation (different DB per worktree), you'll want to copy
the .env instead of symlinking it. I cover this in the advanced section below.
SQLite database cloning
If your project uses SQLite, you might want each worktree to start with a copy of your current dev
database. Add this to setup.sh after the symlink line:
# .conductor/setup.sh — add after the ln -sf line:
# Clone the SQLite database so this worktree starts with real data.
# Use cp, not ln — each worktree needs its own copy because
# they'll diverge as you make changes.
cp "${CONDUCTOR_ROOT_PATH}/database/database.sqlite" \
database/database.sqlite
This gives you the full schema and all your seed data instantly without running migrations from scratch. It's a copy, not a symlink, because each worktree will make its own changes and you don't want them stomping on each other.
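A slightly more defensive version of that snippet, assuming the standard Laravel SQLite path, so setup still succeeds when the main repo has no database yet:

```shell
# Guarded variant: copy the main repo's SQLite DB when it exists,
# otherwise create an empty file for migrations to fill.
# CONDUCTOR_ROOT_PATH is normally injected by Conductor; the fallback
# below is only so this snippet runs standalone.
CONDUCTOR_ROOT_PATH="${CONDUCTOR_ROOT_PATH:-$(mktemp -d)}"
mkdir -p database
SRC="${CONDUCTOR_ROOT_PATH}/database/database.sqlite"
if [ -f "$SRC" ]; then
  cp "$SRC" database/database.sqlite    # start from real data
else
  touch database/database.sqlite        # empty DB; migrations fill it
fi
```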
Worktree subdirectories and .gitignore
Some tools create worktrees inside your project directory instead of in ~/conductor/. Claude Code puts
its worktrees in .claude/worktrees/. If you're using any tool that does this, add the directory to
.gitignore so you don't accidentally commit a worktree:
# AI tool worktrees
.claude/worktrees/
Commit the conductor config
Do commit conductor.json and .conductor/ to your repo. That's the whole point — every developer on
your team gets the same setup, run, and teardown scripts. The scripts use Conductor's environment variables,
so they're portable. It doesn't matter where Conductor puts the worktree or what the workspace is called.
Quick Setup: ZSH Functions
If you set up Conductor config in multiple projects, add one of these to your ~/.zshrc so you can run
setup-conductor from any project root.
Template version
Keep your default scripts in a template folder and copy them in:
setup-conductor() {
  local tpl="$HOME/.templates/conductor-workflow"
  if [ ! -d "$tpl" ]; then
    echo "Template not found: $tpl"
    echo "Create it with conductor.json and .conductor/*.sh"
    return 1
  fi
  cp "$tpl/conductor.json" ./conductor.json
  cp -r "$tpl/.conductor" ./.conductor
  chmod +x .conductor/*.sh
  echo "Conductor config copied. Edit .conductor/*.sh for this project."
}
Inline version
No template directory needed — this creates everything directly. Copy the whole thing and paste it into
your ~/.zshrc:
setup-conductor() {
  mkdir -p .conductor

  cat > conductor.json << 'EOF'
{
  "scripts": {
    "setup": ".conductor/setup.sh",
    "run": ".conductor/run.sh",
    "archive": ".conductor/archive.sh"
  },
  "runScriptMode": "concurrent"
}
EOF

  cat > .conductor/setup.sh << 'EOF'
#!/bin/zsh
# Conductor Environment Variables:
# CONDUCTOR_WORKSPACE_NAME - Workspace name
# CONDUCTOR_WORKSPACE_PATH - Workspace path
# CONDUCTOR_ROOT_PATH - Path to the main repo root
# CONDUCTOR_DEFAULT_BRANCH - Default branch name
# CONDUCTOR_PORT - First of 10 ports (PORT+0 through PORT+9)

# --- Customize below for your project ---

# Symlink .env from the main repo
ln -sf "${CONDUCTOR_ROOT_PATH}/.env" .env

# Install dependencies (change to your package manager)
npm install
EOF

  cat > .conductor/run.sh << 'EOF'
#!/bin/zsh
# Start the dev server (change to your start command)
npm run dev
EOF

  cat > .conductor/archive.sh << 'EOF'
#!/bin/zsh
# Clean up
rm -rf node_modules
EOF

  chmod +x .conductor/*.sh
  echo "Created conductor.json and .conductor/ scripts."
  echo "Edit the scripts in .conductor/ for your project."
}
Advanced: Per-Worktree Isolation
The setup above shares a single .env across all worktrees. That's the right default — it means zero
config drift between worktrees and zero maintenance burden.
But sometimes you need actual isolation: a separate database per worktree, different cache prefixes, worktree-specific mail routing. Here are the patterns I've found useful.
Sharing your site via Herd
Herd has built-in tunnel support via Expose. If you need to share a running worktree
with someone (demo for a client, testing a webhook, pair debugging), add this to your .conductor/run.sh:
# .conductor/run.sh
# Share this worktree publicly via Herd's Expose tunnel
herd share "${CONDUCTOR_WORKSPACE_NAME}"
# Grab the public URL (useful for logging or passing to other tools)
SHARE_URL=$(herd fetch-share-url)
echo "Public URL: ${SHARE_URL}"
Each worktree gets its own tunnel URL. This is particularly useful when you're running multiple feature branches and need a client to test a specific one.
Per-worktree MySQL databases
Instead of sharing one database, create a fresh one per worktree. This is essential if you're working on migrations — you don't want one branch's migration to mess up another branch's schema.
Several of the patterns below need to override specific .env values per worktree.
dotenvx is a CLI tool by the original author of dotenv that lets you properly
get and set values in .env files. It's basically a better dotenv CLI — you give it a key, a value, and
a file, and it does the right thing. Much cleaner than writing sed substitutions that nobody can read
and everyone gets wrong. Install it with brew install dotenvx/brew/dotenvx.
Add to .conductor/setup.sh:
# .conductor/setup.sh
# Create a database named after this worktree
DB_NAME="${CONDUCTOR_WORKSPACE_NAME}"
mysql -u root -e "CREATE DATABASE IF NOT EXISTS \`${DB_NAME}\`"
# Optionally import a dump from the main repo
if [ -f "${CONDUCTOR_ROOT_PATH}/database/dump.sql" ]; then
mysql -u root "${DB_NAME}" \
< "${CONDUCTOR_ROOT_PATH}/database/dump.sql"
fi
# Copy .env (not symlink) because we need a different DB_DATABASE
cp "${CONDUCTOR_ROOT_PATH}/.env" .env
# Point this worktree at its own database
dotenvx set DB_DATABASE "${DB_NAME}" \
-f .env --plain
And clean up in .conductor/archive.sh:
# .conductor/archive.sh
# Drop the worktree-specific database
mysql -u root \
-e "DROP DATABASE IF EXISTS \`${CONDUCTOR_WORKSPACE_NAME}\`"
Important: when you need per-worktree .env values, you copy the .env instead of symlinking it.
The symlink approach is for shared config; the copy approach is for isolated config.
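If you want a single setup.sh that serves both modes, a marker file committed to the repo can flip the switch. ISOLATE_ENV here is a convention I made up for illustration, not a Conductor feature:

```shell
# Choose isolation per project: a marker file in the repo flips setup
# from the shared symlink to an independent copy. (ISOLATE_ENV is a
# made-up convention; the CONDUCTOR_ROOT_PATH fallback and the touch
# are only so this snippet runs standalone.)
CONDUCTOR_ROOT_PATH="${CONDUCTOR_ROOT_PATH:-$(mktemp -d)}"
touch "${CONDUCTOR_ROOT_PATH}/.env"
if [ -f "${CONDUCTOR_ROOT_PATH}/.conductor/ISOLATE_ENV" ]; then
  cp "${CONDUCTOR_ROOT_PATH}/.env" .env      # isolated: own copy
else
  ln -sf "${CONDUCTOR_ROOT_PATH}/.env" .env  # shared: symlink
fi
```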
Docker containers per worktree
If your project uses Docker, COMPOSE_PROJECT_NAME is your friend. It prefixes all container and network
names, so each worktree gets a completely isolated Docker stack:
# .conductor/setup.sh
export COMPOSE_PROJECT_NAME="${CONDUCTOR_WORKSPACE_NAME}"
# Copy .env (Docker needs a real file)
cp "${CONDUCTOR_ROOT_PATH}/.env" .env
# Override the app name for this worktree
dotenvx set APP_NAME "${CONDUCTOR_WORKSPACE_NAME}" \
-f .env --plain
# Build images with worktree-specific args
docker compose build \
--build-arg APP_NAME="${CONDUCTOR_WORKSPACE_NAME}"
docker compose up -d
docker compose exec app php artisan migrate --seed
In your Dockerfile, use the build arg:
ARG APP_NAME=app
ENV APP_NAME=${APP_NAME}
# Label for easy identification and cleanup
LABEL conductor.workspace="${APP_NAME}"
And in .conductor/archive.sh:
# .conductor/archive.sh
export COMPOSE_PROJECT_NAME="${CONDUCTOR_WORKSPACE_NAME}"
# Tear down everything — containers, volumes, networks
docker compose down -v --remove-orphans
rm -f .env
With this setup, nagoya and montreal run completely independent Docker stacks. Different containers,
different volumes, different networks. Run docker compose ls and you can see exactly what's running:
$ docker compose ls
NAME        STATUS        CONFIG FILES
nagoya      running(3)    /Users/you/conductor/workspaces/my-project/nagoya/docker-compose.yml
montreal    running(3)    /Users/you/conductor/workspaces/my-project/montreal/docker-compose.yml
salvador    exited(3)     /Users/you/conductor/workspaces/my-project/salvador/docker-compose.yml
Each worktree is its own compose project. No name collisions, no port conflicts, no accidentally nuking the wrong stack.
Redis cache prefix isolation
If all your worktrees hit the same Redis server, their cache keys will collide. nagoya flushes its cache
and montreal loses its cached data too. Fix this by prefixing cache keys per worktree.
In .conductor/setup.sh (with a copied .env, not symlinked):
# .conductor/setup.sh
dotenvx set REDIS_PREFIX "${CONDUCTOR_WORKSPACE_NAME}_" \
-f .env --plain
Now nagoya writes to nagoya_cache:users:1 and montreal writes to montreal_cache:users:1. No
collisions, no accidental flushes, no mysterious cache misses.
Per-worktree mail routing
Route outbound mail to worktree-specific addresses so you can trace which worktree sent what. This is useful if you're using a mail trap like Mailpit or Mailtrap and need to debug email issues across branches.
In .conductor/setup.sh:
# .conductor/setup.sh
# Extract the project name from the root path
PROJECT=$(basename "${CONDUCTOR_ROOT_PATH}")
# Route mail so each worktree has a unique sender address
dotenvx set MAIL_FROM_ADDRESS \
"noreply+${PROJECT}+${CONDUCTOR_WORKSPACE_NAME}@herdsite.test" \
-f .env --plain
Emails from nagoya show up as noreply+my-project+nagoya@herdsite.test. When you're staring at a list of
test emails in Mailpit, you can immediately see which worktree and which project generated each one.
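The address construction is just string assembly, so you can sanity-check it in a terminal. The two Conductor variables are normally injected; the fallback values here mirror the article's example:

```shell
# Assemble the per-worktree sender address the same way setup.sh does.
# Fallbacks only exist so the snippet runs standalone.
CONDUCTOR_ROOT_PATH="${CONDUCTOR_ROOT_PATH:-$HOME/code/my-project}"
CONDUCTOR_WORKSPACE_NAME="${CONDUCTOR_WORKSPACE_NAME:-nagoya}"
PROJECT=$(basename "${CONDUCTOR_ROOT_PATH}")
echo "noreply+${PROJECT}+${CONDUCTOR_WORKSPACE_NAME}@herdsite.test"
```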
Links:
- Conductor.dev
- Conductor Docs
- Laravel Herd
- Herd CLI Reference
- dotenvx — CLI for editing .env files properly
