
# Development

Welcome to the Corgi Recommender Service development documentation! We're excited that you're interested in contributing to this open-source project. Whether you're fixing bugs, adding features, or improving documentation, your contributions help make privacy-first recommendations better for everyone in the Fediverse.

## Getting Started

- **Contributing Guide** --- Learn how to contribute code, documentation, and ideas to the Corgi project. Covers our development workflow, coding standards, and pull request process. [:octicons-arrow-right-24: Read the Contributing Guide](./01_contributing/)
- **Testing Framework** --- Understand our comprehensive testing strategy, from unit tests to end-to-end validation. Learn how to write and run tests for your contributions. [:octicons-arrow-right-24: Explore the Testing Framework](./02_testing-framework/)

## Development Workflow

Our standard development workflow follows these steps:

```mermaid
graph LR
    A[Fork Repository] --> B[Create Feature Branch]
    B --> C[Write Code]
    C --> D[Run Tests]
    D --> E[Submit Pull Request]
    E --> F[Code Review]
    F --> G[Merge]

    style A fill:#f9f,stroke:#333,stroke-width:2px
    style G fill:#9f9,stroke:#333,stroke-width:2px
```

### Quick Start Commands

```bash
# Clone your fork
git clone https://github.com/YOUR_USERNAME/corgi-recommender-service.git
cd corgi-recommender-service

# Install dependencies
make install
make dev-install  # Additional development tools

# Start development environment
make dev  # Automated workflow with monitoring

# Run tests before committing
make validate
make check
```

## Development Tools

### Automated Development Workflow

The project includes an automated development system that monitors your code and provides real-time feedback:

| Command | Purpose |
| --- | --- |
| `make dev` | Start the full development environment with monitoring |
| `make dev-status` | Check the status of all development services |
| `make dev-health` | Run automated health checks |
| `make dev-browser` | Test browser integration |
| `make dev-stop` | Stop all development services |

### Testing Commands

| Command | Purpose |
| --- | --- |
| `make validate` | Run the full validation suite |
| `make dry-validate` | Safe validation in dry-run mode |
| `make check` | Quick health check |
| `make final-test` | Comprehensive sanity test |
| `make proxy-test` | Test proxy endpoints |

### Agent Testing Framework

The project includes an agent testing system for simulating user behavior:

```bash
# Run a single agent test
make run-agent profile=tech_fan

# Run multiple agents in parallel
make multi-agent profiles="tech_fan news_skeptic meme_lover"

# Generate an agent behavior report
make agent-report
```
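To make the idea concrete, here is a sketch of what a simulated-user profile might capture. The class and field names below are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AgentProfile:
    """Illustrative stand-in for a simulated-user profile."""
    name: str
    interests: List[str] = field(default_factory=list)
    favorite_rate: float = 0.5  # chance the agent favorites a matching post

    def matches(self, tags: List[str]) -> bool:
        """Return True if a post's tags overlap this agent's interests."""
        return bool(set(tags) & set(self.interests))


# A profile like `tech_fan` from the make targets above might look like:
tech_fan = AgentProfile("tech_fan", interests=["python", "opensource"],
                        favorite_rate=0.8)
print(tech_fan.matches(["python", "cats"]))  # True
```

Running agents against the recommender with varied profiles like this lets you check that different interest sets actually produce different feeds.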

## Development Environment

### Prerequisites

- Python 3.8 or higher
- PostgreSQL 12+ with the pgvector extension
- Redis 6+
- Docker and Docker Compose (optional but recommended)
- Node.js 16+ (for frontend development)

### Environment Configuration

Create a `.env` file in the project root:

```bash
# Database
POSTGRES_DB=corgi_recommender
POSTGRES_USER=your_username
POSTGRES_PASSWORD=your_secure_password

# API Server
PORT=5001
HOST=0.0.0.0
FLASK_ENV=development
DEBUG=True

# Security
SECRET_KEY=your-secret-key-here
```
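In application code these settings are read from the environment. A minimal loader sketch follows; the variable names match the `.env` above, but the function itself is illustrative, not the project's actual config module:

```python
import os


def load_config() -> dict:
    """Read service settings from the environment, with development defaults."""
    return {
        "db_name": os.environ.get("POSTGRES_DB", "corgi_recommender"),
        "db_user": os.environ.get("POSTGRES_USER", ""),
        "db_password": os.environ.get("POSTGRES_PASSWORD", ""),
        "port": int(os.environ.get("PORT", "5001")),  # API server port
        "host": os.environ.get("HOST", "0.0.0.0"),
        "debug": os.environ.get("DEBUG", "False").lower() == "true",
    }


config = load_config()
```

Note that everything in the environment is a string, so values like `PORT` and `DEBUG` need explicit conversion, as shown.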

## Code Quality Standards

### Pre-commit Checks

Before committing code, ensure that:

1. **All tests pass**: run `make validate`.
2. **Code follows style guidelines**: Python code follows PEP 8.
3. **Documentation is updated**: update relevant docs and docstrings.
4. **No security vulnerabilities**: run the security checks.

### Continuous Integration

All pull requests automatically run:

- Unit and integration tests
- Code quality checks
- Security vulnerability scanning
- Documentation building

## Getting Help

### Resources

- **GitHub Issues**: report bugs or request features
- **Discussions**: ask questions and share ideas
- **RAG System**: query the knowledge base for technical details

```bash
# Query the RAG system for help
python3 scripts/cursor_rag_query.py "How do I implement a new recommendation algorithm?"
```

### Community Guidelines

- Be respectful and inclusive
- Follow the code of conduct
- Help others when you can
- Document your contributions

## Advanced Development

### Performance Profiling

```bash
# Run performance benchmarks
python3 tests/test_performance_benchmarks.py

# Monitor system metrics during development
make dev-status
```
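For quick, one-off measurements outside the benchmark suite, a simple timing helper is often enough. The workload below is a placeholder; in practice you would time a recommendation call:

```python
import time


def time_call(fn, *args, repeat=5):
    """Call fn repeatedly and return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(repeat):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best


# Placeholder workload; substitute the code path you want to profile
elapsed = time_call(sorted, list(range(10_000)))
print(f"best of 5: {elapsed * 1000:.3f} ms")
```

Taking the best of several runs, rather than an average, reduces noise from caches warming up and other processes on the machine.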

### Database Management

```bash
# Reset the database to a clean state
make reset-db

# Run migrations
python3 db/setup.py
```

### Docker Development

```bash
# Build and run in Docker
docker-compose up --build

# Run tests in Docker
docker-compose run api pytest
```
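The compose file used above wires the API to its backing services. As a rough sketch of what such a file contains (service names, image tags, and ports here are assumptions drawn from the prerequisites and the `.env` example, not necessarily the project's actual file):

```yaml
services:
  api:
    build: .
    ports:
      - "5001:5001"        # matches PORT in .env
    env_file: .env
    depends_on:
      - db
      - redis
  db:
    image: pgvector/pgvector:pg16   # PostgreSQL with the pgvector extension
    environment:
      POSTGRES_DB: corgi_recommender
  redis:
    image: redis:6
```

Check the repository's own `docker-compose.yml` for the authoritative service definitions before relying on this sketch.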

## Next Steps

Ready to contribute? Here's how to get started:

1. **Set up your environment**: follow the Contributing Guide.
2. **Understand the codebase**: read the Architecture Overview.
3. **Pick an issue**: check GitHub issues labeled "good first issue".
4. **Write tests**: learn our Testing Framework.
5. **Submit your PR**: we'll review it and provide feedback.

Thank you for contributing to Corgi! Your efforts help build a more private and personalized Fediverse experience.