A comprehensive toolkit for setting up new development environments across Linux, macOS, and Windows. This repository contains configuration templates, setup scripts, and environment management tools to get you up and running quickly with a consistent, production-ready development setup.
- Initial Setup: GitHub SSH Connection
- Quick Start
- Repository Structure
- Components
- Platform-Specific Guides
- Prerequisites
- Usage Examples
- Customization
- Roadmap
Before cloning this repository, you'll need to set up a secure SSH connection to GitHub. This is a one-time setup that enables you to clone the repo and use the automated scripts it contains.
On your new machine, create the .ssh directory and generate a new SSH key:
# Create .ssh directory with proper permissions
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Generate a new SSH key (use your GitHub email)
ssh-keygen -t ed25519 -C "your_email@example.com"
# When prompted, save to: /home/yourusername/.ssh/id_ed25519
# Add a passphrase for extra security (recommended)
Alternative: Copy existing SSH keys
If you have SSH keys from a previous machine, you can copy them instead:
# Create .ssh directory
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Copy your existing keys to the new machine
# (use a USB drive, secure file transfer, or your preferred method)
cp /path/to/backup/id_ed25519 ~/.ssh/
cp /path/to/backup/id_ed25519.pub ~/.ssh/
# Set proper permissions
chmod 600 ~/.ssh/id_ed25519
chmod 644 ~/.ssh/id_ed25519.pub
- Copy your public key to clipboard:
cat ~/.ssh/id_ed25519.pub
- Go to GitHub: Settings → SSH and GPG keys → New SSH key
- Give it a descriptive title (e.g., "MacBook Pro 2025")
- Paste your public key into the "Key" field
- Click Add SSH key
Manually start the SSH agent and add your key for this session:
# Start the SSH agent
eval "$(ssh-agent -s)"
# Add your SSH key to the agent
ssh-add ~/.ssh/id_ed25519
# Test the connection
ssh -T git@github.com
# You should see: "Hi username! You've successfully authenticated..."
# Clone this repository
git clone git@github.com:NicholasGrundl/.fresh.git
cd .fresh
Note: Once cloned, this repository contains scripts to manage SSH connections automatically. You won't need to manually activate the SSH agent for future sessions.
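If you prefer not to rely on those scripts, an entry in ~/.ssh/config can also keep the key available without running ssh-add each session. A minimal sketch, assuming a standard OpenSSH client and the key path used above:
# Add a GitHub host entry (AddKeysToAgent and IdentityFile are standard OpenSSH options)
cat >> ~/.ssh/config <<'EOF'
Host github.com
    AddKeysToAgent yes
    IdentityFile ~/.ssh/id_ed25519
EOF
chmod 600 ~/.ssh/config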
# Clone the repository (after SSH setup)
git clone git@github.com:NicholasGrundl/.fresh.git
cd .fresh
# Run automated setup (installs packages, Docker, kubectl, GCloud SDK)
cd scripts
./setup_environment.sh
# Clone the repository
git clone git@github.com:NicholasGrundl/.fresh.git
cd .fresh
# Copy configurations you need
cp conda/base_template.yml ~/
cp -r python/* ~/your-project/
cp starship/starship.njg.toml ~/.config/starship.toml
# Clone the repository
git clone git@github.com:NicholasGrundl/.fresh.git
cd .fresh\windows
# Follow the Windows-specific setup guide
Get-Content .\README.md
.fresh/
├── conda/ # Conda environment templates
│ └── base_template.yml # Base environment with essential tools
├── nvm-uv/ # NVM + UV development environment
│ ├── .envrc # direnv auto-activation config
│ ├── .nvmrc # Node.js version specification
│ ├── .env.example # Environment variables template
│ ├── .gitignore # Python + Node.js ignore patterns
│ └── DEVELOPMENT.md # Cross-platform setup guide
├── editors/ # Editor configurations
│ ├── jupyter_lab_config.py # JupyterLab settings
│ └── settings.json # VS Code configuration
├── ai/ # AI development setup
│ └── ollama-opencode/ # Ollama + OpenCode configuration
│ ├── opencode.json # OpenCode config for local models
│ └── README.md # Complete setup guide
├── python/ # Python project templates
│ ├── Makefile # Common development tasks
│ ├── pyproject.toml # Modern Python packaging config
│ ├── requirements.txt # Production dependencies
│ ├── requirements-dev.txt # Development dependencies
│ ├── requirements-build.txt # Build dependencies
│ ├── setup.cfg # Package metadata
│ └── setup.cfg.sh # Setup configuration helper
├── scripts/ # Automated setup scripts (Linux)
│ ├── check_environment.sh # Verify installation
│ ├── required_packages.txt # System packages list
│ └── setup_environment.sh # Main setup script
├── starship/ # Terminal prompt configurations
│ ├── starship.fredericrous.toml # Custom prompt style
│ └── starship.njg.toml # Alternative prompt style
└── windows/ # Windows-specific setup
├── DiagnosticFunctions.ps1 # System diagnostics
├── EnvFunctions.ps1 # Environment management
├── SSHFunctions.ps1 # SSH automation
├── install_dependencies.ps1 # Package installation
├── uninstall_dependencies.ps1 # Package removal
├── profile.ps1 # PowerShell profile
└── README.md # Windows setup guide
Location: conda/base_template.yml
A comprehensive base environment with modern development tools pre-configured:
- Python: 3.12.11 with pip, setuptools, and wheel
- CLI Tools: bat, fd-find, ripgrep, tree, jq
- Version Control: git 2.49.0
- Modern Tools: direnv, starship prompt, uv (fast Python package installer)
- Development: Node.js 24.4.1 for tooling
Usage:
# Create environment from template
conda env create -f conda/base_template.yml
# Activate environment
conda activate base_template
# Update template for your needs
conda env export > my_environment.yml
Key Features:
- Optimized for Linux but portable across platforms
- Uses conda-forge as primary channel for latest packages
- Includes bioconda for bioinformatics tools
- Pre-configured with modern alternatives (bat > cat, fd > find, ripgrep > grep)
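Once the environment is active, the modern alternatives work as drop-in upgrades. A few illustrative invocations (a sketch; the file and directory names are placeholders):
conda activate base_template
bat README.md              # cat with syntax highlighting
fd -e yml                  # find files by extension, respecting .gitignore
rg "TODO" src/             # fast, recursive grep
tree -L 2                  # directory overview, two levels deep
jq '.name' package.json    # query and pretty-print JSON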
Location: nvm-uv/
A modern alternative to conda-based environments using nvm (Node Version Manager) and uv (ultra-fast Python package installer). Perfect for Python + Node.js projects that need automatic environment switching.
Key Features:
- Automatic activation: direnv handles environment switching when you cd into projects
- Node.js version management: nvm with .nvmrc for consistent Node versions across the team
- Fast Python packaging: uv for lightning-fast dependency resolution and installation
- Environment isolation: Python virtual environments + Node version pinning
- Cross-platform: Works on Linux, macOS, and Windows (via WSL2)
Template Files:
- .envrc - direnv configuration for auto-activation
- .nvmrc - Node.js version specification
- .env.example - Environment variables template
- .gitignore - Comprehensive ignore patterns for Python + Node projects
- DEVELOPMENT.md - Platform-specific setup guide
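The repository ships a ready-made .envrc; purely as an illustration of the pattern (a sketch, not the exact file contents), a direnv config for this stack typically looks like:
# .envrc (illustrative sketch; see nvm-uv/.envrc for the real template)
dotenv_if_exists .env                                         # load variables from .env if present
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" && nvm use    # pin Node from .nvmrc
source .venv/bin/activate                                     # activate the uv-created virtualenv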
Usage:
# Copy templates to your project
cp ~/.fresh/nvm-uv/.envrc ./
cp ~/.fresh/nvm-uv/.nvmrc ./
cp ~/.fresh/nvm-uv/.env.example ./
cp ~/.fresh/nvm-uv/.gitignore ./
# Set up environments
uv venv # Create Python virtual environment
nvm install # Install Node version from .nvmrc
cp .env.example .env # Configure environment variables
# Allow direnv (first time only)
direnv allow
# Now cd into the directory and everything auto-activates!
When to Use:
- Python + Node.js full-stack projects
- Projects requiring specific Node.js versions
- Teams wanting fast Python dependency management
- Developers preferring lightweight, tool-specific solutions over conda
Setup Guide: See nvm-uv/DEVELOPMENT.md for complete installation instructions across platforms.
Location: python/
Production-ready Python project configuration files:
Modern Python packaging configuration with:
- Build System: setuptools backend
- Testing: pytest with coverage reporting and structured logging
- Linting/Formatting: Ruff configuration (120 char line length, Python 3.12 target)
- Import Sorting: isort rules for clean imports
- Style: PEP 257 docstring conventions
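With the development requirements installed, the same configuration can be exercised directly, outside the Makefile. A sketch, assuming ruff, pytest, and pytest-cov come from requirements-dev.txt:
ruff check . --fix                        # lint with the settings in pyproject.toml
ruff format .                             # format to the configured 120-char line length
pytest --cov --cov-report=term-missing    # run tests with coverage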
Common development tasks automated:
make install # Install all dependencies
make uninstall # Remove all packages
make test # Run tests with coverage
make lint # Check code style
make format # Auto-format code
make pre-commit # Format, lint, and test
make jupyter     # Start JupyterLab
- requirements.txt - Production dependencies
- requirements-dev.txt - Development tools (pytest, ruff, etc.)
- requirements-build.txt - Build tools
Package metadata and tool configurations
Usage:
# Copy to your project
cp -r .fresh/python/* ~/my-project/
# Install dependencies
cd ~/my-project
make install
# Run development workflow
make pre-commit
Location: editors/
- Custom terminal color scheme for improved readability
- Commented sections for Jupyter notebook customization (light/dark themes)
- Clean ANSI color palette for terminal output
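The copy commands below overwrite any existing settings.json. To merge the template into settings you already have, one option is a recursive merge with jq (a sketch; assumes both files are plain JSON without comments, so back up first):
cp ~/.config/Code/User/settings.json ~/.config/Code/User/settings.json.bak
jq -s '.[0] * .[1]' ~/.config/Code/User/settings.json.bak editors/settings.json \
  > ~/.config/Code/User/settings.json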
Usage:
# Copy to VS Code settings
cp editors/settings.json ~/.config/Code/User/settings.json
# Or on macOS:
cp editors/settings.json ~/Library/Application\ Support/Code/User/settings.json
Custom JupyterLab configuration for optimal notebook experience.
Usage:
# Copy to Jupyter config directory
mkdir -p ~/.jupyter
cp editors/jupyter_lab_config.py ~/.jupyter/
Location: ai/ollama-opencode/
Complete setup for local AI development using Ollama (local model server) and OpenCode (AI coding assistant). This enables you to run powerful coding models locally without relying on cloud services.
- Local AI Models: Run coding models like Qwen2.5-Coder on your own hardware
- Privacy-First: No code sent to external services
- Cost-Effective: No API fees after initial setup
- GPU Acceleration: Automatic CUDA GPU detection and usage
- OpenCode Integration: Seamless AI coding assistant experience
- GPU: NVIDIA GPU with CUDA support (recommended) or CPU-only mode
- RAM: 16GB+ recommended for 7B models, 32GB+ for 14B models
- Storage: 10GB+ per model downloaded
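Before pulling models, it is worth confirming your hardware meets those numbers. A quick check (a sketch; nvidia-smi applies only to NVIDIA GPU machines):
nvidia-smi     # GPU model and available VRAM (skip on CPU-only machines)
free -h        # total and available RAM
df -h ~        # free disk space (models land in ~/.ollama/models/)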
Step 1: Install Ollama
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Verify installation
ollama --version
Step 2: Start Ollama Server
# Start the Ollama server (keep this running)
ollama serve
Step 3: Download a Coding Model
In a new terminal:
# Download a coding model (start with 7B for testing)
ollama pull qwen2.5-coder:7b
# Or download the larger 14B model (more capable)
ollama pull qwen2.5-coder:14b
# List available models
ollama list
Step 4: Install OpenCode
# Install OpenCode
curl -fsSL https://opencode.ai/install | bash
# Or using npm
npm install -g opencode-ai
Step 5: Configure OpenCode for Ollama
Create an opencode.json file in your project directory:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"ollama": {
"npm": "@ai-sdk/openai-compatible",
"name": "Ollama (local)",
"options": {
"baseURL": "http://localhost:11434/v1"
},
"models": {
"qwen2.5-coder:7b": {
"name": "Qwen2.5 Coder 7B (local)"
},
"qwen2.5-coder:14b": {
"name": "Qwen2.5 Coder 14B (local)"
}
}
}
}
}
Step 6: Test and Use
# Test Ollama is working
curl http://localhost:11434/api/tags
# Start OpenCode in your project directory
cd /path/to/your/project
opencode
# In OpenCode, run:
/models
# Select your Ollama model from the list
Basic AI Coding Assistance:
# Start OpenCode
opencode
# Ask questions about your codebase
"How does authentication work in @src/auth.py?"
# Request new features
"Add a user deletion feature with soft delete and recovery screen"
# Get code explanations
"Explain what this function does in @utils/helpers.py"
Model Management:
# Download additional models
ollama pull codellama:7b
ollama pull deepseek-coder:6.7b
# Remove unused models
ollama rm qwen2.5-coder:7b
# Check model status
ollama ps
GPU Optimization:
- Ollama automatically detects and uses NVIDIA GPUs
- For larger models, ensure you have sufficient VRAM
- Monitor GPU usage with nvidia-smi
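For example, to watch VRAM usage while a model is generating (a sketch):
watch -n 1 nvidia-smi    # refresh GPU memory and utilization every second
ollama ps                # confirm which model is loaded and whether it runs on GPU or CPU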
Memory Management:
- Use smaller models (7B) on systems with <32GB RAM
- Close other applications when running large models
- Restart Ollama if you encounter memory issues
Model Selection:
- 7B Models: Fast, good for simple tasks, ~8GB RAM
- 14B Models: More capable, better for complex code, ~16GB RAM
- 33B+ Models: Most capable, require 32GB+ RAM and powerful GPU
Common Issues:
- "Unable to connect" error:
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Restart Ollama if needed
pkill ollama && ollama serve
- Model download stuck:
# Press Ctrl+C to cancel, then retry
ollama pull qwen2.5-coder:7b
- Out of memory errors:
# Use a smaller model
ollama pull qwen2.5-coder:1.5b
# Or check system resources
free -h
nvidia-smi
- OpenCode can't find models:
  - Verify opencode.json is in your project directory
  - Check that the Ollama server is running
  - Ensure model names match exactly
Configuration Files:
- Ollama Config: ~/.ollama/config
- OpenCode Config: ./opencode.json (project-specific)
- Models Location: ~/.ollama/models/
Using OpenCode Connect Command:
opencode
# Run: /connect
# Select "Other"
# Enter provider ID: "ollama"
# Enter API key: "sk-dummy" (any key works for local)
# Then create opencode.json as shown above
Docker Ollama (Alternative):
# Run Ollama in Docker
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# Pull models
docker exec -it ollama ollama pull qwen2.5-coder:7b
Location: scripts/
Automated setup for Linux systems with common development tools.
Main setup script that installs:
- Google Cloud SDK: For GCP development
- Docker: Container runtime with user permissions
- kubectl: Kubernetes command-line tool
- System Packages: From required_packages.txt
Usage:
cd scripts
./setup_environment.sh
# Log out and back in for Docker permissions to take effect
check_environment.sh - Verify installed components and diagnose issues.
required_packages.txt - List of system packages to install. Customize for your needs.
Features:
- Idempotent: Safe to run multiple times
- Skips already-installed packages
- Adds user to docker group automatically
- Sets up Google Cloud SDK repository
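Because the script is idempotent, customizing the package list is just edit-and-rerun. A sketch, starting from the repository root (htop is an arbitrary example package):
echo "htop" >> scripts/required_packages.txt
cd scripts
./setup_environment.sh     # re-run; already-installed packages are skipped
./check_environment.sh     # verify the result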
Location: starship/
Custom terminal prompt configurations for the Starship cross-shell prompt.
- Environment Info: Shows conda environment, Python version, Node.js version
- User Context: Username and hostname always visible
- Directory: Smart truncation with repo root highlighting
- Git Status: Branch, ahead/behind, changes, and status indicators
- Minimal: Clean, information-dense without clutter
(conda_env)(py3.12)(npm24.4):username@hostname:~/path/to/dir:(main)$
Usage:
# Install starship (if not in conda environment)
curl -sS https://starship.rs/install.sh | sh
# Copy configuration
cp starship/starship.njg.toml ~/.config/starship.toml
# Or try the alternative style
cp starship/starship.fredericrous.toml ~/.config/starship.toml
# Add to your shell config (~/.bashrc or ~/.zshrc)
eval "$(starship init bash)"  # or 'zsh'
Location: windows/
Complete Windows development environment setup with PowerShell automation.
profile.ps1 - PowerShell profile with custom functions and SSH automation
EnvFunctions.ps1 - Environment management utilities:
- Conda environment activation helpers
- Path management
- Environment variable utilities
SSHFunctions.ps1 - SSH persistence management:
- Automatic SSH agent startup
- Key management
- Git authentication automation
DiagnosticFunctions.ps1 - System diagnostics:
- Environment checks
- Dependency verification
- Troubleshooting tools
install_dependencies.ps1 - Python package installation:
- Reads from requirements.txt
- Creates requirements-lock.txt with pinned versions
- Conda environment aware
uninstall_dependencies.ps1 - Clean package removal:
- Removes all pip packages
- Cleans lock files
- Install Miniconda:
# Download from https://docs.conda.io/en/latest/miniconda.html
# Add to PATH during installation
conda --version  # Verify
- Enable PowerShell Scripts:
# Check current policy
Get-ExecutionPolicy
# Enable scripts (as Administrator)
Set-ExecutionPolicy RemoteSigned
- Initialize Conda:
conda init powershell
- Create Environment:
conda create --name myenv python=3.12 pip
conda activate myenv
- Install Dependencies:
.\install_dependencies.ps1
For complete Windows setup instructions, see windows/README.md.
Best For: Servers, cloud instances, automated deployments
- Clone repository
- Run scripts/setup_environment.sh
- Create conda environment from conda/base_template.yml
- Copy Python templates to projects
- Configure starship prompt
Best For: Local development, data science, general use
- Install Homebrew
- Install Miniconda: brew install miniconda
- Clone repository
- Create conda environment from template
- Copy configurations as needed
- Set up starship prompt
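Put together, the macOS flow above looks roughly like this (a sketch; the exact Homebrew formula or cask name for Miniconda may differ on your system):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install --cask miniconda    # or: brew install miniconda
git clone git@github.com:NicholasGrundl/.fresh.git ~/.fresh
conda env create -f ~/.fresh/conda/base_template.yml
mkdir -p ~/.config && cp ~/.fresh/starship/starship.njg.toml ~/.config/starship.toml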
Best For: Windows-specific development, COM automation, Excel integration
- Install Miniconda and Git
- Configure PowerShell execution policy
- Follow windows/README.md for complete setup
- Use provided PowerShell functions for automation
- Git
- SSH access to GitHub
- Internet connection for package downloads
- Ubuntu 20.04+ or compatible
- sudo access
- apt package manager
- macOS 10.15+ (Catalina or later)
- Homebrew (recommended)
- Command Line Tools for Xcode
- Windows 10/11
- PowerShell 5.1+
- Administrator access (for initial setup)
- Miniconda or Anaconda
# Create project directory
mkdir ~/my-awesome-project
cd ~/my-awesome-project
# Copy Python templates
cp ~/.fresh/python/* .
# Edit requirements files for your dependencies
nano requirements.txt
# Install dependencies
make install
# Start development
make jupyter  # Or start coding!
# Start from base template
cp ~/.fresh/conda/base_template.yml my-project-env.yml
# Edit to add your packages
nano my-project-env.yml
# Create environment
conda env create -f my-project-env.yml
conda activate my-project-env
# Install starship
conda install -c conda-forge starship
# Or: curl -sS https://starship.rs/install.sh | sh
# Copy configuration
cp ~/.fresh/starship/starship.njg.toml ~/.config/starship.toml
# Customize colors, format, modules
nano ~/.config/starship.toml
# Add to shell (bash/zsh)
echo 'eval "$(starship init bash)"' >> ~/.bashrc
# Activate environment
conda activate myenv
# Install packages from requirements.txt
.\install_dependencies.ps1
# Check installed versions
pip list
# Uninstall all packages if needed
.\uninstall_dependencies.ps1
This repository is designed as a starting point. Customize it for your needs:
# Edit the template
nano conda/base_template.yml
# Add packages:
dependencies:
- your-package-name=version
# Or add pip packages:
- pip:
  - your-pip-package
- Adjust line length in pyproject.toml (Ruff section)
- Modify test coverage requirements in Makefile
- Add custom make targets for your workflow
- Update Python version in pyproject.toml (target-version)
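For example, changing the Ruff line length and re-checking (a sketch; assumes the template uses the standard line-length key under [tool.ruff] and GNU sed):
sed -i 's/^line-length = 120$/line-length = 100/' pyproject.toml
make lint      # re-lint with the new limit
make format    # re-format the codebase to match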
- Change colors: modify style values in the .toml file
- Add/remove modules: comment out sections you don't need
- Adjust truncation: change truncation_length in the directory section
- Add custom symbols: update symbol fields
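For example, to show only the last three path components, adjust truncation_length in the [directory] section (a sketch; assumes the copied config already defines that key and GNU sed):
sed -i 's/^truncation_length = .*/truncation_length = 3/' ~/.config/starship.toml
starship prompt    # render the prompt once to sanity-check the change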
- Add functions to EnvFunctions.ps1
- Customize SSH behavior in SSHFunctions.ps1
- Add diagnostic checks in DiagnosticFunctions.ps1
- Modify package management in the install/uninstall scripts
- Reorganize repository structure
- Add automated setup scripts
- Document all tools and configurations
- Create modular installation options
- Add Docker-based development environments
- Include pre-commit hook templates
- Add CI/CD pipeline examples
- Create automated testing for setup scripts
- Add support for additional shells (fish, nushell)
Look at micro commits only
git log --grep="MICRO" --format="%h %ad %s" --date=short -30
Look at micro commits with file changes
git log --grep="MICRO" --stat --oneline --format="%h %ad %s" --date=short -10
Look at micro commits with sequential file changes only
git -c color.ui=always log --grep="MICRO" --stat --format="%h %ad" --date=short -10 | awk '/^[a-f0-9]/ {if(NR>1) print ""; print $0} /^\s+\S+.*\|/ {print " " $0} /^$/ {next} /files? changed/ {next}'
Look at macro commits subject only
git -c color.ui=always log --grep="MICRO" --invert-grep --format="%h %ad %s" --date=short -30
Look at macro commits full message
git -c color.ui=always log --grep="MICRO" --invert-grep --format="%h %ad %s%n%n%b%n---" --date=short -10
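To avoid retyping these, they can be wrapped as git aliases (a sketch; the alias names micro and macro are arbitrary):
git config --global alias.micro '!git log --grep="MICRO" --format="%h %ad %s" --date=short -30'
git config --global alias.macro '!git log --grep="MICRO" --invert-grep --format="%h %ad %s" --date=short -30'
# Usage: git micro   or   git macro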