Common Workflows
Task-oriented guides for everyday development with Lovelace CLI
This guide provides practical workflows for common development tasks. Each workflow includes clear instructions, example commands, and best practices to help you get the most from Lovelace CLI.
Daily Development Workflows
Morning Standup Routine
Start your development day with AI-assisted planning and context gathering:
# 1. Sync workspace with latest changes
lovelace workspace sync
# 2. Start chat to review today's priorities
lovelace chat "What tasks should I focus on today based on my recent work?"
# 3. Check agent tasks from yesterday
lovelace agents list --status completed --since yesterday
# 4. Review session insights
lovelace sessions list --date today
Expected outcome: Clear understanding of priorities, completed background tasks, and recent context.
End-of-Day Summary
Capture progress and prepare for tomorrow:
# 1. Export today's chat sessions
lovelace sessions export today --format markdown --output daily-log.md
# 2. Generate work summary
lovelace chat "Summarize my development work today and suggest tomorrow's priorities"
# 3. Check running agents
lovelace agents list --status running
# 4. Commit and sync workspace
git add . && git commit -m "End-of-day checkpoint"
lovelace workspace sync
Expected outcome: Documented progress, identified blockers, and tomorrow's action items.
Context-Aware Code Review
Get AI assistance reviewing code changes:
# 1. Stage changes for review
git add .
# 2. Start chat with git context
lovelace chat --context git-diff
# In chat session:
You: "Review these changes for potential bugs and edge cases"
You: "Check if these changes follow our team's coding standards"
You: "Suggest improvements for error handling"
Alternative - Agent-based review:
# Run comprehensive code review in background
lovelace agents run code-reviewer "Review current changes for security, performance, and best practices" --context git-diff
# Check results
lovelace agents logs code-reviewer --follow
Expected outcome: Detailed code review with specific suggestions and identified issues.
Code Quality Workflows
Automated Code Analysis
Analyze codebase structure and identify improvement opportunities:
# 1. Run full codebase analysis
lovelace analyze
# 2. Focus on specific concerns
lovelace analyze --symbols # Analyze code symbols and structure
lovelace analyze --dependencies # Dependency graph analysis
lovelace analyze --security # Security vulnerability scan
# 3. Get AI recommendations
lovelace chat "Based on the analysis, what are the top 3 improvements I should make?"
For large codebases:
# Analyze specific directories
lovelace analyze --input ./src/services
lovelace analyze --input ./api --security
# Generate analysis report
lovelace analyze --format json --output analysis.json
Expected outcome: Comprehensive code quality insights and prioritized improvement suggestions.
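The JSON report can feed into scripts. A minimal sketch of post-processing it in shell, with two caveats: the `issues`/`severity` schema here is an assumption for illustration (check your actual report before relying on field names), and a stub report stands in for real analyzer output so the commands run anywhere:

```shell
# Write a stub report so the sketch is self-contained; in practice this
# file comes from `lovelace analyze --format json --output analysis.json`.
printf '%s\n' '{"issues":[{"severity":"high","file":"a.ts"},{"severity":"low","file":"b.ts"}]}' > analysis.json

# Count high-severity findings (crude string match; use jq if available).
# NOTE: the "severity" field name is assumed, not documented.
high=$(grep -o '"severity":"high"' analysis.json | wc -l | tr -d ' ')
echo "high-severity issues: $high"   # → high-severity issues: 1
```

A count like this can gate a CI step or feed a dashboard without re-running the analysis.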
Test Generation Workflow
Automatically generate tests for your code:
# 1. Generate tests for a single file
lovelace agents run test-generator "Generate comprehensive unit tests" --input ./src/user-service.ts --output ./tests/user-service.test.ts
# 2. Batch test generation for directory
lovelace agents run test-generator "Generate missing tests for all services" --input ./src/services --output ./tests/services
# 3. Review generated tests
lovelace chat "Review the generated tests and suggest edge cases I should add"
Integration test generation:
# Generate integration tests
lovelace agents run integration-test-gen "Create integration tests for API endpoints" --input ./src/api --output ./tests/integration
Expected outcome: Comprehensive test coverage with both unit and integration tests.
Documentation Generation
Create and maintain project documentation:
# 1. Generate API documentation
lovelace agents run docs-generator "Create API documentation from JSDoc comments" --input ./src/api --output ./docs/api
# 2. Update README
lovelace chat "Update the README with recent changes to the authentication system"
# 3. Generate architecture diagrams
lovelace agents run architecture-doc "Create architecture diagram showing service dependencies" --input ./src --output ./docs/architecture.md
Expected outcome: Up-to-date documentation matching your current codebase.
Security Audit
Perform security analysis and vulnerability scanning:
# 1. Run security scan
lovelace agents run security-scanner "Audit codebase for security vulnerabilities" --input ./src
# 2. Check dependencies
lovelace chat "Analyze package.json for known security vulnerabilities"
# 3. Review authentication
lovelace chat --context ./src/auth "Review authentication implementation for security issues"
# 4. Generate security report
lovelace agents results security-scanner --format markdown --output security-audit.md
Expected outcome: Identified security vulnerabilities with remediation suggestions.
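A quick local sweep with plain grep can complement the scanner for the most obvious hardcoded credentials. This is a rough heuristic with false positives, not a substitute for the security scan above:

```shell
# Rough heuristic: flag likely hardcoded credentials in TypeScript sources.
# Expect false positives; tune the pattern and file globs for your codebase.
grep -rniE '(api[_-]?key|secret|password)[[:space:]]*[=:]' ./src --include='*.ts' \
  || echo "no obvious matches"
```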
Team Collaboration Workflows
Shared Workspace Collaboration
Work effectively in team workspaces:
# 1. Switch to team workspace
lovelace workspace switch team-project
# 2. Sync latest team changes
lovelace workspace sync
# 3. Review team activity
lovelace sessions list --workspace team-project --all-users
# 4. Collaborate on problem-solving
lovelace chat --context team "How should we approach the new authentication requirement?"
Sharing insights:
# Export session for team review
lovelace sessions export <session-id> --format team-summary --output team-insights.md
# Share agent results
lovelace agents results code-analyzer --share team-project
Expected outcome: Aligned team understanding and shared problem-solving insights.
Pull Request Review
Review pull requests with AI assistance:
# 1. Checkout PR branch
git fetch origin pull/123/head:pr-123
git checkout pr-123
# 2. Start PR review chat
lovelace chat --context git-diff "Analyze this pull request"
# In chat:
You: "What are the main changes in this PR?"
You: "Are there any potential breaking changes?"
You: "Check for test coverage of new code"
You: "Review error handling patterns"
# 3. Generate PR review comment
You: "Create a comprehensive review comment for this PR"
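Plain git can also help scope the review before starting the chat; `main` is assumed as the base branch here, so substitute your repository's default branch:

```shell
# Summarize what the PR changes relative to the base branch
git diff --stat main...pr-123    # files touched and churn per file
git log --oneline main..pr-123   # commits unique to the PR
```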
Automated PR review:
# Run automated review agent
lovelace agents run pr-reviewer "Review PR #123 for code quality, security, and best practices" --context pr-123 --output pr-review.md
Expected outcome: Thorough PR review with specific feedback and suggestions.
Issue Triage and Management
Manage development tasks and issues:
# 1. Review open issues via MCP (Linear integration)
lovelace mcp exec list_issues assignee="me" status="active"
# 2. Get AI assistance prioritizing
lovelace chat "Based on these issues, what should I work on first?"
# 3. Create detailed issue from discussion
lovelace chat "Create a detailed issue for implementing OAuth 2.0 authentication"
# 4. Update issue status
lovelace mcp exec update_issue issue_id="ENG-123" status="in_progress"
Expected outcome: Prioritized task list and clear issue descriptions.
DevOps & Automation Workflows
CI/CD Integration
Integrate Lovelace CLI into continuous integration pipelines:
GitHub Actions Example:
# .github/workflows/ai-review.yml
name: AI Code Review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Lovelace CLI
        run: npm install -g @lovelace-ai/cli
      - name: Authenticate
        run: lovelace auth signin --device-flow
        env:
          LOVELACE_API_TOKEN: ${{ secrets.LOVELACE_TOKEN }}
      - name: Run AI Code Review
        run: |
          lovelace agents run code-reviewer \
            "Review this PR for issues" \
            --input ./src \
            --output review-results.md
      - name: Post Review Comment
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const review = fs.readFileSync('review-results.md', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: review
            });
Pre-commit Hook Example:
#!/bin/sh
# .git/hooks/pre-commit

# Run AI linting; block the commit if issues are found
if ! lovelace analyze --diff --fail-on-issues; then
  echo "❌ AI review found issues. Please fix before committing."
  exit 1
fi

# Run security check on staged changes; block on failure
if ! lovelace agents run security-scanner "Check staged changes for security issues" --context git-diff; then
  echo "❌ Security scan found issues. Please fix before committing."
  exit 1
fi
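Rather than editing `.git/hooks` in place (which isn't version-controlled), you can keep hooks in the repository and point git at them with `core.hooksPath`; the placeholder script below stands in for the pre-commit hook above:

```shell
# Keep hooks in ./hooks under version control and point git at that directory
mkdir -p hooks
# Placeholder stands in for the pre-commit script above; replace with the real hook.
printf '#!/bin/sh\nexit 0\n' > hooks/pre-commit
chmod +x hooks/pre-commit
git config core.hooksPath hooks
```

Teammates then get the same hooks after cloning, once they run the `git config` line.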
Expected outcome: Automated code quality gates in your development pipeline.
Infrastructure Analysis
Analyze and optimize infrastructure code:
# 1. Analyze Terraform/CloudFormation
lovelace chat --context ./infrastructure "Review this infrastructure code for best practices"
# 2. Check for security issues
lovelace agents run infrastructure-audit "Audit infrastructure for security misconfigurations" --input ./infrastructure
# 3. Optimize resource usage
lovelace chat "Suggest optimizations to reduce cloud costs based on our infrastructure"
Expected outcome: Infrastructure improvements and cost optimization suggestions.
Deployment Validation
Validate deployments before going live:
# 1. Pre-deployment check
lovelace agents run deployment-checker "Validate production deployment readiness" --env production
# 2. Review deployment script
lovelace chat --context ./scripts/deploy.sh "Check this deployment script for potential issues"
# 3. Generate deployment checklist
lovelace chat "Create a deployment checklist based on our current configuration"
Expected outcome: Validated deployment process with identified risks.
AI Agent Workflows
Creating Specialized Agents
Build custom agents for specific tasks:
# 1. Create agent with template
lovelace agents create api-tester --template api-testing
# 2. Configure agent
lovelace agents config api-tester --set model=claude-3-opus
lovelace agents config api-tester --set timeout=600
# 3. Test agent
lovelace agents run api-tester "Test all API endpoints and generate report" --input ./src/api
# 4. Save agent configuration
lovelace agents export api-tester --output ./config/agents/api-tester.json
Available templates:
- code-reviewer - Code review and quality analysis
- test-generator - Test generation
- docs-generator - Documentation creation
- security-scanner - Security auditing
- api-testing - API endpoint testing
- refactor-assistant - Code refactoring suggestions
Expected outcome: Reusable specialized agents for your workflow.
Long-Running Background Tasks
Execute tasks that take significant time:
# 1. Start long-running analysis
lovelace agents run deep-analyzer "Perform comprehensive codebase analysis including complexity metrics, dependency graphs, and architecture review" --input ./src
# 2. Continue working while agent runs
# Agent executes in background daemon
# 3. Check progress
lovelace agents status deep-analyzer
# 4. Stream logs
lovelace agents logs deep-analyzer --follow
# 5. Retrieve results when complete
lovelace agents results deep-analyzer
Managing multiple agents:
# List all running agents
lovelace agents list --status running
# Cancel if needed
lovelace agents cancel deep-analyzer
# View agent history
lovelace agents history
Expected outcome: Complex analysis completed while you focus on other tasks.
Agent Orchestration
Coordinate multiple agents for complex workflows:
# 1. Create workflow script
cat > review-workflow.sh << 'EOF'
#!/bin/bash
# Comprehensive code review workflow
# Start all agents in parallel
lovelace agents run security-scanner "Security audit" --input ./src &
lovelace agents run code-reviewer "Code quality review" --input ./src &
lovelace agents run test-analyzer "Test coverage analysis" --input ./tests &
# Wait for completion
wait
# Aggregate results
lovelace chat "Summarize results from recent agent executions and create action items"
EOF
# 2. Execute workflow
chmod +x review-workflow.sh
./review-workflow.sh
Expected outcome: Coordinated multi-agent analysis with aggregated insights.
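One caveat with the workflow script above: a bare `wait` discards the background jobs' exit codes, so the workflow cannot fail when an agent fails. Waiting on each PID individually propagates failures; in this sketch `sleep` and `false` stand in for the lovelace agent commands:

```shell
#!/bin/sh
# Track each background job's PID so individual failures propagate
# to an overall workflow status (sleep/false simulate agent runs).
sleep 1 & pid1=$!
false   & pid2=$!   # simulates a failed agent
status=0
wait "$pid1" || status=1
wait "$pid2" || status=1
echo "workflow status: $status"   # → workflow status: 1
```

In the real script, `exit "$status"` at the end would let CI treat any agent failure as a failed workflow.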
Best Practices
Effective Prompting
Be specific:
# ❌ Vague
lovelace chat "Fix this code"
# ✅ Specific
lovelace chat "Review user-service.ts authentication logic for SQL injection vulnerabilities and suggest secure alternatives"
Provide context:
# ❌ No context
lovelace chat "How should I structure this?"
# ✅ With context
lovelace chat --context ./src/architecture.md "How should I structure the new payment service to align with our existing architecture?"
Iterate and refine:
# Start broad
lovelace chat "Analyze the API design"
# Then get specific based on results
You: "Focus on the authentication endpoints - are there rate limiting issues?"
You: "Show me an example implementation with proper rate limiting"
Session Management
Save important sessions:
# Export sessions regularly
lovelace sessions export important-discussion --format markdown
# Tag sessions for easy retrieval
lovelace sessions tag <session-id> "authentication" "security"
# Search by tag
lovelace sessions search --tag security
Resume productive sessions:
# List recent sessions
lovelace sessions list --limit 10
# Resume where you left off
lovelace sessions resume <session-id>
Workspace Organization
Separate concerns:
# Personal experimentation
lovelace workspace switch personal-experiments
# Team collaboration
lovelace workspace switch team-project
# Client work
lovelace workspace switch client-project
Regular maintenance:
# Sync workspaces weekly
lovelace workspace sync
# Clean old data
lovelace workspace cleanup --older-than 30days
# Backup important workspaces
lovelace workspace export team-project --output backup/
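Date-stamping export directories makes backups easy to rotate. The `lovelace workspace export` call is the one from above, left commented so the snippet runs standalone:

```shell
# Create a dated backup directory, then export into it
backup_dir="backup/$(date +%Y-%m-%d)"
mkdir -p "$backup_dir"
echo "backing up to $backup_dir"
# lovelace workspace export team-project --output "$backup_dir"
```

Old directories can then be pruned by name, e.g. with a scheduled `find backup -maxdepth 1 -mtime +30` job.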
Next Steps
- CLI Reference - Complete command documentation
- Settings & Configuration - Customize your CLI experience
- Editor Integration - Use Lovelace from your editor
- Troubleshooting - Common issues and solutions