CI/CD pipelines are central to modern software development. Whether you're applying for backend, full-stack, or DevOps roles, interviewers expect you to understand how code goes from commit to production.
This guide covers CI/CD fundamentals and practical GitHub Actions knowledge that comes up in interviews, with the questions and patterns you need to know.
Table of Contents
- CI/CD Fundamentals Questions
- GitHub Actions Core Questions
- Workflow Triggers Questions
- Job Dependencies and Parallelism Questions
- Matrix Builds Questions
- Secrets and Environment Variables Questions
- Caching and Performance Questions
- Artifacts and Job Communication Questions
- Deployment Strategies Questions
- Production Pipeline Questions
- Reusable Workflows Questions
- Troubleshooting and Best Practices Questions
CI/CD Fundamentals Questions
Understanding the distinction between CI and CD is often the opening question in DevOps interviews.
What is the difference between CI and CD?
Many candidates give vague answers like "CI/CD is automated deployment," which misses the key distinctions. CI and CD address different problems in the software delivery process and can be implemented independently.
Continuous Integration (CI) focuses on code quality and integration:
- Automatically builds and tests code when changes are pushed
- Catches integration issues early (merge conflicts, test failures)
- Every developer integrates frequently (at least daily)
- The build is the single source of truth for code health
Continuous Delivery (CD) focuses on release readiness:
- Code is always in a deployable state
- Automated pipeline to staging/pre-production
- Manual approval for production deployment
- "Could deploy at any time"
Continuous Deployment takes CD further:
- Fully automated deployment to production
- Every passing commit goes live automatically
- Requires high test coverage and confidence
- "Do deploy every time"
flowchart LR
subgraph ci["CI"]
Push --> Build --> Test
end
subgraph cd["CD"]
Test --> Staging --> Approval{"Approval?"} --> Production
end
Why is Continuous Integration important?
CI catches problems early when they're cheapest to fix. Without CI, developers work in isolation for days or weeks, then face painful "integration hell" when merging. The longer code diverges from main, the more conflicts and bugs accumulate.
With CI, every push triggers automated builds and tests. A broken build is immediately visible to the entire team. This creates social pressure to keep the build green and encourages small, frequent commits rather than large, risky changes.
Key benefits:
- Fast feedback on code changes (minutes, not days)
- Reduced integration risk through frequent merging
- Automated quality gates (tests, linting, security scans)
- Single source of truth for what works
What is the difference between Continuous Delivery and Continuous Deployment?
This distinction confuses many candidates. Both start with "CD" and both automate the path to production, but they differ in the final step.
Continuous Delivery automates everything up to production but requires manual approval for the final deployment. This suits organizations with compliance requirements, release windows, or lower test confidence. The key principle is that code is always deployable—you could release at any time.
Continuous Deployment removes the manual gate entirely. Every commit that passes the pipeline automatically deploys to production. This requires excellent test coverage, feature flags for incomplete work, and strong monitoring for quick rollback. Companies like Netflix and Amazon deploy thousands of times daily using this approach.
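In GitHub Actions terms, the difference is often just whether the production job points at an environment configured with required reviewers. A minimal sketch (job and script names are illustrative):
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    # Continuous Delivery: the "production" environment has required
    # reviewers configured, so this job pauses for manual approval.
    # Continuous Deployment: remove the reviewers and every passing
    # commit ships automatically.
    environment: production
    steps:
      - run: ./deploy.sh production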
GitHub Actions Core Questions
GitHub Actions is the most common CI/CD platform for GitHub repositories and appears frequently in interviews.
What are workflows, jobs, and steps in GitHub Actions?
Understanding the hierarchy is essential for writing and debugging pipelines. Each level has different characteristics and constraints that affect how you structure your automation.
A workflow is a YAML file in .github/workflows/ that defines an automated process. Workflows are triggered by events (push, pull request, schedule) and contain one or more jobs.
Jobs are independent units of work that run on separate virtual machines (runners). By default, jobs run in parallel. Each job starts with a fresh environment—no files or state carry over from other jobs.
Steps are sequential tasks within a job. They share the same runner and filesystem. Steps can be shell commands (run:) or reusable actions (uses:).
# .github/workflows/ci.yml
name: CI Pipeline # Workflow name
on: # Triggers
push:
branches: [main]
pull_request:
branches: [main]
jobs: # Jobs run in parallel by default
test:
runs-on: ubuntu-latest # Runner environment
steps: # Steps run sequentially
- uses: actions/checkout@v4 # Action (reusable)
- uses: actions/setup-node@v4
with:
node-version: '20'
- run: npm ci # Shell command
- run: npm test
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- run: npm ci
- run: npm run lint
What is the difference between run and uses in a step?
Steps execute work in two ways, and choosing correctly affects reusability and maintenance.
run: executes shell commands directly on the runner. Use this for simple commands, scripts, or when you need full control over execution. Commands run in the default shell (bash on Linux/macOS, PowerShell on Windows).
uses: invokes a reusable action—a packaged unit of automation. Actions can come from the marketplace, other repositories, or your own repo. They handle complex tasks like checking out code, setting up languages, or deploying to cloud providers.
steps:
# Using an action - packaged, versioned, reusable
- uses: actions/checkout@v4
# Running a command - direct shell execution
- run: npm test
# Multi-line command
- run: |
echo "Building..."
npm run build
echo "Done!"Best practice: Use actions for common tasks (checkout, setup, deploy) and run for project-specific commands.
What runners are available in GitHub Actions?
Runners are the machines that execute your jobs. GitHub provides hosted runners, or you can use self-hosted runners for more control.
GitHub-hosted runners are managed virtual machines that GitHub maintains:
- ubuntu-latest, ubuntu-22.04, ubuntu-20.04
- windows-latest, windows-2022, windows-2019
- macos-latest, macos-14, macos-13
Hosted runners are free for public repositories and have usage limits for private repos. They start fresh for each job with common tools pre-installed.
Self-hosted runners are machines you manage:
- Run on your own infrastructure (on-premise, cloud)
- Access to internal networks and resources
- Persistent environment (can cache more aggressively)
- No usage limits but you pay for infrastructure
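Selecting a runner happens via runs-on, and a self-hosted runner is targeted by its labels. A small sketch (the custom labels and script are examples you would register and provide yourself):
jobs:
  integration-test:
    # "self-hosted" plus any labels attached when registering the runner
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: ./run-internal-tests.sh # Can reach internal-network resources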
Workflow Triggers Questions
Triggers determine when workflows run. Configuring them correctly prevents wasted CI minutes and ensures appropriate automation.
What events can trigger a GitHub Actions workflow?
GitHub Actions supports dozens of trigger events. Knowing the common ones and their options is essential for efficient pipelines.
on:
# Push/PR triggers
push:
branches: [main, develop]
paths:
- 'src/**' # Only trigger for src changes
- '!src/**/*.md' # Exclude markdown files
pull_request:
types: [opened, synchronize, reopened]
# Scheduled (cron)
schedule:
- cron: '0 0 * * *' # Daily at midnight UTC
# Manual trigger
workflow_dispatch:
inputs:
environment:
description: 'Deploy environment'
required: true
default: 'staging'
type: choice
options:
- staging
- production
# From other workflows
workflow_call: # Reusable workflow
# External events
repository_dispatch: # API trigger
Common triggers:
- push / pull_request - Code changes
- schedule - Cron jobs (nightly builds, cleanup)
- workflow_dispatch - Manual runs with inputs
- workflow_call - Called by other workflows
- release - When releases are published
How do you prevent running CI on documentation changes?
Running full CI on README updates wastes time and resources. Path filters let you skip workflows when only certain files change.
Use paths-ignore to skip workflows for specific patterns:
on:
push:
paths-ignore:
- '**.md'
- 'docs/**'
- '.github/ISSUE_TEMPLATE/**'Alternatively, use paths to only run on specific changes:
on:
push:
paths:
- 'src/**'
- 'tests/**'
- 'package.json'
Important: Path filters only work with push and pull_request events. Scheduled workflows always run regardless of path filters.
How do you trigger a workflow manually with parameters?
The workflow_dispatch event enables manual triggers with custom inputs. This is useful for deployments, data migrations, or any operation requiring human judgment.
on:
workflow_dispatch:
inputs:
environment:
description: 'Target environment'
required: true
type: choice
options:
- staging
- production
debug_enabled:
description: 'Enable debug logging'
required: false
type: boolean
default: false
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- run: echo "Deploying to ${{ inputs.environment }}"
- if: inputs.debug_enabled
run: echo "Debug mode enabled"Manual workflows appear in the Actions tab with a "Run workflow" button that shows the input form.
Job Dependencies and Parallelism Questions
Understanding job execution order is crucial for efficient pipelines that don't waste time or miss dependencies.
How do you control the order jobs run in?
By default, jobs run in parallel for maximum speed. Use the needs keyword to create dependencies when jobs must run sequentially.
jobs:
build:
runs-on: ubuntu-latest
steps:
- run: echo "Building..."
test:
needs: build # Waits for build to complete
runs-on: ubuntu-latest
steps:
- run: echo "Testing..."
deploy-staging:
needs: test
runs-on: ubuntu-latest
steps:
- run: echo "Deploying to staging..."
deploy-production:
needs: deploy-staging
runs-on: ubuntu-latest
environment: production # Requires approval
steps:
- run: echo "Deploying to production..."This creates a linear pipeline:
flowchart LR
build --> test --> deploy-staging --> deploy-production
How do you run jobs in parallel with a shared dependency?
Some jobs can run in parallel but must all complete before a later job starts. Use an array in needs to wait for multiple jobs.
jobs:
lint:
runs-on: ubuntu-latest
steps:
- run: npm run lint
test:
runs-on: ubuntu-latest
steps:
- run: npm test
security-scan:
runs-on: ubuntu-latest
steps:
- run: npm audit
deploy:
needs: [lint, test, security-scan] # Waits for ALL three
runs-on: ubuntu-latest
steps:
- run: ./deploy.sh
flowchart LR
lint --> deploy
test --> deploy
security-scan --> deploy
Lint, test, and security-scan run simultaneously. Deploy only starts after all three succeed.
What happens if a job in the dependency chain fails?
When a job fails, all jobs that depend on it (directly or indirectly) are skipped by default. This prevents deploying broken code or wasting resources on doomed jobs.
You can override this behavior with conditionals:
jobs:
test:
runs-on: ubuntu-latest
steps:
- run: npm test
report:
needs: test
if: always() # Run even if test fails
runs-on: ubuntu-latest
steps:
- run: echo "Test completed with status: ${{ needs.test.result }}"
deploy:
needs: test
if: success() # Only if test succeeded (default)
runs-on: ubuntu-latest
steps:
- run: ./deploy.sh
Conditional options:
- success() - Previous jobs succeeded (default)
- failure() - At least one previous job failed
- always() - Run regardless of previous job status
- cancelled() - Workflow was cancelled
Matrix Builds Questions
Matrix builds test across multiple configurations efficiently, a common requirement for libraries and cross-platform applications.
What is a build matrix and when would you use it?
A matrix runs the same job multiple times with different configurations. GitHub Actions automatically creates a job for each combination of matrix values.
This is essential for libraries that must work across Node versions, Python versions, or operating systems. Instead of duplicating job definitions, you define the variations once.
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
node-version: [18, 20, 22]
fail-fast: false # Don't cancel others if one fails
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
- run: npm ci
- run: npm test
This creates 9 parallel jobs (3 operating systems × 3 Node versions).
How do you exclude or include specific matrix combinations?
Sometimes certain combinations don't make sense or need special handling. Use exclude and include to customize the matrix.
Excluding combinations:
strategy:
matrix:
os: [ubuntu-latest, windows-latest]
node-version: [18, 20]
exclude:
- os: windows-latest
node-version: 18 # Skip Node 18 on Windows
Including additional combinations with extra variables:
strategy:
matrix:
os: [ubuntu-latest]
node-version: [18, 20]
include:
- os: ubuntu-latest
node-version: 22
experimental: true # Add extra variable for this combo
You can then use matrix.experimental in conditionals or step configuration.
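For example, a sketch (assuming the include above) that lets experimental combinations fail without failing the workflow, via job-level continue-on-error:
jobs:
  test:
    runs-on: ${{ matrix.os }}
    # Failures in experimental combos are reported but non-blocking;
    # the comparison evaluates to false when matrix.experimental is unset
    continue-on-error: ${{ matrix.experimental == true }}
    strategy:
      matrix:
        os: [ubuntu-latest]
        node-version: [18, 20]
        include:
          - os: ubuntu-latest
            node-version: 22
            experimental: true
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm test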
What does fail-fast do in a matrix build?
By default, fail-fast is true, meaning GitHub cancels all remaining matrix jobs when any job fails. This saves resources when you know the entire matrix is broken.
Set fail-fast: false when you want all combinations to complete regardless of individual failures. This is useful when:
- Debugging which specific combinations fail
- Each combination's results are independently valuable
- You're testing optional/experimental configurations
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
Secrets and Environment Variables Questions
Handling sensitive data correctly is critical for pipeline security. This topic appears in almost every DevOps interview.
How do you handle secrets in CI/CD pipelines?
Never hardcode secrets in code or workflow files. Secrets committed to git are compromised forever—even if deleted, they exist in history. Use your platform's secret management system.
GitHub Secrets are encrypted and only exposed to workflow runs. Access them through the secrets context:
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Deploy to production
env:
API_KEY: ${{ secrets.API_KEY }}
DATABASE_URL: ${{ secrets.DATABASE_URL }}
run: ./deploy.sh
Best practices:
- Use repository secrets for repo-wide values
- Use environment secrets for environment-specific values
- Rotate secrets regularly
- Use OIDC for cloud authentication instead of long-lived credentials
- Never echo or log secrets (GitHub masks exact values, but encoded or derived values can leak; see the sketch below)
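GitHub automatically masks the exact secret string in logs, but not values derived from it. A minimal sketch of masking a derived value with the ::add-mask:: workflow command (the secret name is illustrative):
steps:
  - name: Mask a derived value
    run: |
      # The base64-encoded form of a secret is NOT auto-masked
      ENCODED=$(echo -n "${{ secrets.API_KEY }}" | base64)
      # Tell the runner to mask it in all subsequent log output
      echo "::add-mask::$ENCODED"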
What is the difference between repository secrets and environment secrets?
GitHub supports secrets at multiple scopes, providing flexibility for different security requirements.
Repository secrets are available to all workflows in the repository. Use these for values needed across environments (API tokens for external services, registry credentials).
Environment secrets are scoped to specific environments (staging, production) and can require approval before use. Use these for environment-specific credentials:
jobs:
deploy:
runs-on: ubuntu-latest
environment: production # Uses production-specific secrets
steps:
- run: ./deploy.sh
env:
DEPLOY_KEY: ${{ secrets.PROD_DEPLOY_KEY }}
Environments can also require manual approval, adding a human gate before production deployments.
How do you use OIDC for cloud authentication instead of storing credentials?
OpenID Connect (OIDC) lets workflows authenticate to cloud providers without storing long-lived credentials. The workflow requests a short-lived token that the cloud provider validates.
This is more secure than storing access keys because:
- No secrets to rotate or leak
- Tokens are short-lived (minutes)
- Fine-grained permissions per workflow
permissions:
id-token: write
contents: read
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::123456789:role/github-actions
aws-region: us-east-1
- run: aws s3 sync ./dist s3://my-bucket
AWS, GCP, and Azure all support OIDC authentication with GitHub Actions.
Caching and Performance Questions
Slow pipelines waste developer time and delay feedback. Caching is the primary technique for speeding up CI.
How do you speed up CI pipelines with caching?
Caching stores files between workflow runs, avoiding repeated downloads. The most common use is caching package dependencies that rarely change.
steps:
- uses: actions/checkout@v4
- name: Cache node modules
uses: actions/cache@v4
with:
path: ~/.npm
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- run: npm ci
- run: npm test
The cache key includes a hash of the lockfile, so the cache invalidates when dependencies change. The restore-keys provide fallback patterns for partial matches.
Many setup actions have built-in caching:
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm' # Automatic caching!
What should you cache in CI pipelines?
Cache anything that's expensive to recreate and changes infrequently relative to your code.
Common cache targets:
- ~/.npm or node_modules (Node.js)
- ~/.cache/pip (Python)
- ~/.m2/repository (Maven)
- ~/.gradle/caches (Gradle)
- Docker layers (using buildx cache)
- Compiled dependencies (Rust target directory)
Cache key strategy:
key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
This key changes only when the lockfile changes, ensuring fresh installs when dependencies update but cache hits otherwise.
What other techniques speed up CI pipelines?
Beyond caching, several strategies reduce pipeline duration.
Run jobs in parallel when they don't depend on each other:
jobs:
lint:
runs-on: ubuntu-latest
# ...
test:
runs-on: ubuntu-latest
# ... runs simultaneously with lint
Use path filters to skip unnecessary runs:
on:
push:
paths:
- 'src/**' # Only run for source changes
Use shallow clones when you don't need full history:
- uses: actions/checkout@v4
with:
fetch-depth: 1 # Only latest commit
Run affected tests only using tools like Jest's --changedSince or Nx affected commands.
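A sketch with Jest (note that --changedSince needs history to diff against, so it conflicts with the shallow clone above):
- uses: actions/checkout@v4
  with:
    fetch-depth: 0 # Full history so Jest can diff against main
- run: npx jest --changedSince=origin/main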
Artifacts and Job Communication Questions
Jobs run on separate machines and can't share files directly. Artifacts bridge this gap.
How do you pass files between jobs?
Since jobs run on different runners, they don't share filesystems. Use artifacts to upload files from one job and download them in another.
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: npm ci
- run: npm run build
- uses: actions/upload-artifact@v4
with:
name: build-output
path: dist/
retention-days: 7
deploy:
needs: build
runs-on: ubuntu-latest
steps:
- uses: actions/download-artifact@v4
with:
name: build-output
path: dist/
- run: ./deploy.sh dist/
Artifacts are stored for 90 days by default (configurable with retention-days).
When would you use artifacts versus caching?
Artifacts and caching serve different purposes despite both storing files.
Artifacts pass data between jobs in the same workflow run or preserve outputs for later use:
- Build outputs needed by deploy jobs
- Test reports and coverage data
- Logs for debugging failed runs
Caching speeds up workflows by reusing data across runs:
- Package dependencies (node_modules)
- Compiled binaries that don't change often
- Downloaded tools
Key difference: Artifacts belong to a single workflow run and are meant to be consumed or downloaded. Caches are shared across runs (matched by key and scoped by branch) and are purely a speed optimization; losing a cache should never break the build.
Deployment Strategies Questions
Deployment strategies minimize risk when releasing new code. This is a common conceptual topic in DevOps interviews.
What is blue-green deployment and when would you use it?
Blue-green deployment maintains two identical production environments. One (blue) serves live traffic while the other (green) sits idle. You deploy to the idle environment, test it, then switch traffic instantly.
flowchart TB
subgraph environments["Environments"]
direction LR
B["Blue<br/>(current)"]
G["Green<br/>(new)"]
end
LB["Load Balancer"]
LB --> B
LB -.->|"switch"| G
The process:
- Blue is live, Green is idle
- Deploy new version to Green
- Test Green thoroughly
- Switch load balancer to Green
- Blue becomes idle (instant rollback ready)
Advantages:
- Instant rollback by switching back to Blue
- Full testing of production environment before traffic
- Zero-downtime deployments
Disadvantages:
- Double infrastructure cost
- Database schema changes are complex
- State synchronization between environments
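On Kubernetes, for example, the switch can be as simple as repointing a Service selector (a sketch; resource and label names are illustrative):
# Point the Service at pods labeled version=green instead of version=blue
kubectl patch service app -p '{"spec":{"selector":{"app":"app","version":"green"}}}'
# Instant rollback: the same command with "blue"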
What is canary deployment and how does it differ from blue-green?
Canary deployment gradually routes traffic to the new version, starting with a small percentage and increasing if metrics look healthy.
flowchart TB
LB["Load Balancer"]
subgraph versions["Traffic Split"]
direction LR
C["Current<br/>Version"]
K["Canary<br/>(new)"]
end
LB -->|"95%"| C
LB -->|"5%"| KThe process:
- Deploy new version alongside current
- Route 5% of traffic to canary
- Monitor metrics (errors, latency, business KPIs)
- Gradually increase (10%, 25%, 50%, 100%)
- Rollback immediately if metrics degrade
Advantages:
- Catches issues with minimal user impact
- Real production traffic testing
- Gradual confidence building
Disadvantages:
- Complex traffic routing infrastructure
- Longer rollout time
- Need robust monitoring and alerting
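Without a service mesh or weighted load balancer, a rough split can be approximated with replica counts behind a single Kubernetes Service (a sketch; deployment names are illustrative, and the split is approximate, not exact):
# 19 stable replicas + 1 canary replica ≈ 5% of traffic to the canary
kubectl scale deployment app-stable --replicas=19
kubectl scale deployment app-canary --replicas=1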
What is rolling deployment?
Rolling deployment updates instances one at a time (or in small batches) until all run the new version. It's simpler than blue-green or canary but has slower rollback.
The process:
- Take one instance out of the load balancer
- Update it to new version
- Health check, return to load balancer
- Repeat for remaining instances
Advantages:
- No extra infrastructure needed
- Gradual rollout
- Simple to implement
Disadvantages:
- Slower rollback (you must roll through all instances again, forward or back)
- Mixed versions during deployment
- Potential issues if old and new versions are incompatible
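Kubernetes Deployments implement rolling updates by default; a minimal sketch of the settings that control batch size (names and image are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1 # At most one pod out of service at a time
      maxSurge: 1 # At most one extra pod during the rollout
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:v2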
Production Pipeline Questions
Interviewers often ask you to walk through a complete CI/CD pipeline to assess your understanding of the full picture.
How would you structure a production CI/CD pipeline?
A production pipeline balances speed with safety. Fast feedback in CI, controlled deployment in CD.
name: CI/CD Pipeline
on:
push:
branches: [main]
pull_request:
branches: [main]
env:
NODE_VERSION: '20'
jobs:
# ========== CI ==========
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
- run: npm ci
- run: npm run lint
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
- run: npm ci
- run: npm test -- --coverage
- uses: actions/upload-artifact@v4
with:
name: coverage-report
path: coverage/
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
- run: npm ci
- run: npm run build
- uses: actions/upload-artifact@v4
with:
name: build
path: dist/
# ========== CD ==========
deploy-staging:
needs: [lint, test, build]
if: github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
environment: staging
steps:
- uses: actions/download-artifact@v4
with:
name: build
path: dist/
- name: Deploy to Staging
run: ./scripts/deploy.sh staging
env:
DEPLOY_TOKEN: ${{ secrets.STAGING_DEPLOY_TOKEN }}
deploy-production:
needs: deploy-staging
if: github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
environment: production # Manual approval required
steps:
- uses: actions/download-artifact@v4
with:
name: build
path: dist/
- name: Deploy to Production
run: ./scripts/deploy.sh production
env:
DEPLOY_TOKEN: ${{ secrets.PROD_DEPLOY_TOKEN }}
Key patterns:
- Lint, test, and build run in parallel (fast CI feedback)
- Deploy jobs wait for all CI jobs to pass
- Staging deploys automatically on main
- Production requires manual approval via environment settings
Why separate build from deploy jobs?
Separating build and deploy enables several important patterns and provides better visibility.
Reuse build artifacts across environments. Build once, deploy the same artifact to staging, then production. This ensures you're deploying exactly what was tested.
Isolate failures. If deployment fails, you know the build was successful. Rerunning just the deploy job is faster than rebuilding.
Different permissions. Build jobs don't need deployment credentials. Deploy jobs don't need repository write access. Principle of least privilege.
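In GitHub Actions, least privilege can be encoded with job-level permissions (a sketch; the deploy script is illustrative):
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read # Build only needs to read the repo
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
  deploy:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write # e.g. OIDC cloud auth; no repo write access
    steps:
      - run: ./deploy.sh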
Manual gates. You can require approval between build and production deploy without rebuilding.
Reusable Workflows Questions
Reusable workflows reduce duplication across repositories and teams.
How do you avoid duplicating workflow code?
Reusable workflows let you define common patterns once and call them from multiple workflows. They accept inputs and secrets, returning outputs.
Define the reusable workflow:
# .github/workflows/reusable-deploy.yml
name: Reusable Deploy
on:
workflow_call:
inputs:
environment:
required: true
type: string
secrets:
deploy_token:
required: true
jobs:
deploy:
runs-on: ubuntu-latest
environment: ${{ inputs.environment }}
steps:
- uses: actions/checkout@v4
- run: ./deploy.sh
env:
DEPLOY_TOKEN: ${{ secrets.deploy_token }}
Call it from another workflow:
# .github/workflows/main.yml
jobs:
deploy-staging:
uses: ./.github/workflows/reusable-deploy.yml
with:
environment: staging
secrets:
deploy_token: ${{ secrets.STAGING_TOKEN }}
deploy-production:
needs: deploy-staging
uses: ./.github/workflows/reusable-deploy.yml
with:
environment: production
secrets:
deploy_token: ${{ secrets.PROD_TOKEN }}
What is the difference between reusable workflows and composite actions?
Both reduce duplication but operate at different levels.
Reusable workflows are complete workflow files called with uses: at the job level. They can contain multiple jobs, use secrets, and have their own triggers. Best for complex, multi-step processes.
Composite actions are custom actions that combine multiple steps. They're simpler but limited to steps—no jobs, no secrets directly. Best for packaging a sequence of steps you use frequently.
# Composite action (action.yml)
runs:
using: composite
steps:
- run: npm ci
shell: bash
- run: npm test
shell: bash
Troubleshooting and Best Practices Questions
Interviewers often ask scenario-based questions about handling failures and maintaining pipelines.
A deployment failed. How do you roll back?
The rollback strategy depends on your deployment approach and infrastructure. Have a plan before you need it.
Option 1: Revert and redeploy
git revert HEAD
git push origin main
# CI/CD automatically deploys the revert
Option 2: Blue-green switch back
Switch the load balancer back to the previous (blue) environment. Instant, no rebuild needed.
Option 3: Kubernetes rollback
kubectl rollout undo deployment/app
Option 4: Redeploy previous artifact
Keep previous build artifacts available and trigger a deploy of the known-good version (see the sketch after the list below).
Best practices:
- Test rollback procedures before you need them
- Keep at least one previous version deployable
- Monitor closely after deployments
- Have runbooks for common failure scenarios
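One way to implement option 4 is a manually triggered rollback workflow (a sketch; the version input and deploy script are illustrative):
on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Known-good version to redeploy'
        required: true
jobs:
  rollback:
    runs-on: ubuntu-latest
    environment: production # Still goes through the production gate
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh production ${{ inputs.version }}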
How do you handle database migrations in CI/CD?
Database migrations are tricky because they can't easily be rolled back and may conflict with code that is still running. Handle them carefully.
Key principles:
- Run migrations before deploying new code - The old code must work with the new schema during rollout
- Make migrations backward-compatible - Add columns, don't remove. Rename in steps (add new → migrate data → remove old)
- Separate migration from deployment - Run migration as its own job with explicit approval:
jobs:
migrate:
runs-on: ubuntu-latest
environment: production-db # Separate approval
steps:
- run: npm run migrate
deploy:
needs: migrate
# ...
- Consider blue-green for breaking changes - Run both schema versions simultaneously during transition
What are common CI/CD pipeline anti-patterns?
Avoiding these patterns keeps pipelines maintainable and reliable.
Long-running pipelines - If CI takes 30+ minutes, developers won't wait for feedback. Split into stages, cache aggressively, run tests in parallel.
No failing tests - A pipeline that always passes isn't testing anything. Green means confidence, not just completion.
Hardcoded values - Environment-specific values like URLs or credentials should be secrets or environment variables, not in code.
No artifact promotion - Build once and deploy the same artifact everywhere. Don't rebuild for each environment.
Manual steps in "automated" pipeline - If someone must SSH in to complete deployment, it's not truly automated. Automate or document why not.
Quick Reference
| Concept | Purpose |
|---|---|
| Workflow | YAML file defining automated process |
| Job | Independent unit on separate runner |
| Step | Sequential task within a job |
| Action | Reusable unit (uses: owner/repo@version) |
| Secret | Encrypted variable for sensitive data |
| Artifact | Files passed between jobs |
| Matrix | Run same job with different configs |
| Environment | Deployment target with secrets & approvals |
| Cache | Speed up workflows by reusing files across runs |
Related Articles
- Complete DevOps Engineer Interview Guide - Comprehensive DevOps interview preparation
- Docker Interview Guide - Building images in CI pipelines
- Kubernetes Interview Guide - Deploying to K8s from CI/CD
- Linux Commands Interview Guide - Shell commands used in pipeline steps
- Git Rebase vs Merge Interview Guide - Version control workflows that trigger CI
