
GitLab CI/CD: A Complete Practical Guide

GitLab is a complete DevOps platform whose built-in CI/CD provides end-to-end automation from source code management to deployment. It integrates code, build, test, and deployment into a seamless workflow, which makes it a particularly good fit for modern cloud-native application development.

🎯 GitLab CI/CD Core Concepts

Architecture Components Overview

```yaml
gitlab_cicd_architecture:
  core_components:
    gitlab_server:
      description: "The GitLab server instance"
      responsibilities:
        - "Git repository management"
        - "CI/CD pipeline orchestration"
        - "User authentication and authorization"
        - "Project management and collaboration"
        - "Container image registry"
      
      editions:
        community_edition: "Free, open-source edition"
        enterprise_edition: "Commercial edition with advanced features"
        gitlab_saas: "Hosted service at GitLab.com"
    
    gitlab_runner:
      description: "CI/CD job executor"
      types:
        shared_runners:
          description: "Runners shared by all projects"
          use_cases: ["Common build environments", "Standardized build processes"]
          
        group_runners:
          description: "Runners dedicated to a group"
          use_cases: ["Team-specific environments", "Special configuration requirements"]
          
        project_runners:
          description: "Runners dedicated to a single project"
          use_cases: ["Sensitive projects", "Special hardware requirements"]
      
      executors:
        docker_executor:
          advantages: ["Environment isolation", "Fast startup", "Easy to manage"]
          use_cases: ["Containerized applications", "Multi-language support"]
          
        kubernetes_executor:
          advantages: ["Elastic scaling", "Resource optimization", "Cloud native"]
          use_cases: ["Large-scale builds", "Resource-constrained environments"]
          
        shell_executor:
          advantages: ["Direct system access", "Flexible configuration"]
          use_cases: ["Special tooling requirements", "System-level operations"]
    
    gitlab_ci_yml:
      description: "CI/CD configuration file"
      location: ".gitlab-ci.yml in the project root"
      features:
        - "Declarative pipeline definitions"
        - "Stage and job configuration"
        - "Conditional execution control"
        - "Variable and secret management"
        - "Cache and artifact configuration"
  
  workflow_concepts:
    pipelines:
      definition: "A collection of stages that execute in order"
      trigger_conditions:
        - "Code push"
        - "Merge request"
        - "Scheduled"
        - "Manual"
        - "API call"
      
      pipeline_types:
        basic_pipeline: "Basic pipeline; stages run sequentially"
        dag_pipeline: "Directed acyclic graph pipeline; supports parallelism and dependencies"
        parent_child_pipeline: "Parent-child pipelines for complex workflows"
    
    stages:
      definition: "The execution phases within a pipeline"
      default_stages: [".pre", "build", "test", "deploy", ".post"]
      custom_stages: "Stage names and ordering are customizable"
      
      stage_execution:
        sequential: "Stages run one after another"
        parallel: "Jobs within the same stage run in parallel"
        conditional: "Stages can be skipped or run based on conditions"
    
    jobs:
      definition: "The smallest unit of work executed on a runner"
      job_attributes:
        - "script: the commands to run"
        - "stage: the stage the job belongs to"
        - "image: container image"
        - "services: dependent services"
        - "variables: environment variables"
        - "rules: execution conditions"
        - "artifacts: artifact configuration"
        - "cache: cache configuration"
```
```yaml
gitlab_ci_features:
  integrated_platform:
    all_in_one_solution:
      components:
        - "Source code management (Git)"
        - "Issue tracking (Issues)"
        - "Code review (Merge Requests)"
        - "CI/CD pipelines"
        - "Container registry"
        - "Package registry"
        - "Security scanning"
        - "Monitoring and analytics"
      
      benefits:
        - "Unified user experience"
        - "Integrated, correlated data"
        - "Simplified permission management"
        - "Reduced toolchain complexity"
    
    built_in_features:
      security_scanning:
        - "Static Application Security Testing (SAST)"
        - "Dynamic Application Security Testing (DAST)"
        - "Dependency Scanning"
        - "Container Scanning"
        - "License compliance scanning"
      
      devops_insights:
        - "Deployment frequency analysis"
        - "Change failure rate statistics"
        - "Time-to-restore monitoring"
        - "Lead time for changes"
        - "Value stream mapping"
  
  cloud_native_support:
    kubernetes_integration:
      features:
        - "Kubernetes cluster connection management"
        - "Helm chart deployment support"
        - "GitOps workflows"
        - "Environment management"
        - "Monitoring integration"
      
      auto_devops:
        description: "Automated DevOps workflow"
        capabilities:
          - "Automatic build detection"
          - "Automated testing"
          - "Security scanning"
          - "Automatic deployment"
          - "Monitoring configuration"
    
    container_registry:
      features:
        - "Built-in Docker image registry"
        - "Vulnerability scanning"
        - "Image cleanup policies"
        - "Access control"
        - "Geo replication"
  
  collaboration_features:
    merge_request_integration:
      ci_cd_integration:
        - "Automatic pipeline triggering"
        - "Status check feedback"
        - "Deployment preview environments (review apps)"
        - "Security scan results shown in the MR"
      
      review_workflow:
        - "Code quality checks"
        - "Automated test verification"
        - "Approval workflows"
        - "Compliance checks"
    
    environment_management:
      features:
        - "Environment definition and tracking"
        - "Deployment history"
        - "Rollback support"
        - "Environment URL links"
        - "Environment variable management"
```

Runner Management and Configuration

```yaml
runner_management:
  installation_methods:
    docker_installation:
      quick_start: |
        # Install GitLab Runner via Docker
        docker run -d --name gitlab-runner --restart always \
          -v /srv/gitlab-runner/config:/etc/gitlab-runner \
          -v /var/run/docker.sock:/var/run/docker.sock \
          gitlab/gitlab-runner:latest
      
      registration_example: |
        # Register the runner
        # Note: registration tokens are deprecated in newer GitLab versions in favor of
        # runner authentication tokens created in the GitLab UI.
        docker exec -it gitlab-runner gitlab-runner register \
          --non-interactive \
          --url "https://gitlab.example.com/" \
          --registration-token "GR1348941..." \
          --executor "docker" \
          --docker-image alpine:latest \
          --description "docker-runner" \
          --tag-list "docker,linux" \
          --run-untagged="true" \
          --locked="false" \
          --access-level="not_protected"
    
    kubernetes_installation:
      helm_chart: |
        # Install the runner with Helm in a Kubernetes cluster
        helm repo add gitlab https://charts.gitlab.io
        helm repo update
        
        # Create the values.yaml configuration file
        cat > values.yaml << EOF
        runnerRegistrationToken: "GR1348941..."
        gitlabUrl: https://gitlab.example.com/
        
        runners:
          config: |
            [[runners]]
              [runners.kubernetes]
                namespace = "gitlab-runner"
                image = "ubuntu:20.04"
                privileged = true
                
                # Resource limits
                cpu_limit = "1"
                memory_limit = "2Gi"
                service_cpu_limit = "0.5"
                service_memory_limit = "512Mi"
                
                # Storage configuration
                [[runners.kubernetes.volumes.host_path]]
                  name = "docker-sock"
                  mount_path = "/var/run/docker.sock"
                  host_path = "/var/run/docker.sock"
        EOF
        
        # Install the runner
        helm install gitlab-runner gitlab/gitlab-runner \
          --namespace gitlab-runner \
          --create-namespace \
          -f values.yaml
      
      manual_kubernetes: |
        # Manual Kubernetes Deployment manifest
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: gitlab-runner
          namespace: gitlab-runner
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: gitlab-runner
          template:
            metadata:
              labels:
                app: gitlab-runner
            spec:
              containers:
              - name: gitlab-runner
                image: gitlab/gitlab-runner:latest
                command:
                - /usr/bin/dumb-init
                - /entrypoint
                - run
                - --user=gitlab-runner
                - --working-directory=/home/gitlab-runner
                env:
                - name: CI_SERVER_URL
                  value: "https://gitlab.example.com/"
                - name: REGISTRATION_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: gitlab-runner-secret
                      key: registration-token
                - name: RUNNER_EXECUTOR
                  value: "kubernetes"
                volumeMounts:
                - name: config
                  mountPath: /etc/gitlab-runner
              volumes:
              - name: config
                configMap:
                  name: gitlab-runner-config
  
  runner_configuration:
    docker_executor_config: |
      # Detailed Docker executor configuration (config.toml)
      concurrent = 10  # maximum number of concurrent jobs across all runner entries
      check_interval = 0  # job check interval in seconds (0 = use the default)
      
      [[runners]]
        name = "docker-runner"
        url = "https://gitlab.example.com/"
        token = "runner-token"
        executor = "docker"
        
        [runners.docker]
          tls_verify = false
          image = "alpine:latest"
          privileged = true
          disable_entrypoint_overwrite = false
          oom_kill_disable = false
          disable_cache = false
          
          # Container configuration
          volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock:rw"]
          shm_size = 0
          network_mode = "host"
          
          # Resource limits
          memory = "2g"
          memory_swap = "4g"
          memory_reservation = "1g"
          cpus = "2.0"
          
          # Image pull policy
          pull_policy = "if-not-present"
          
          # Allowed job and service images
          allowed_images = ["alpine:*", "ubuntu:*", "node:*", "python:*"]
          allowed_services = ["postgres:*", "redis:*", "mysql:*"]
    
    kubernetes_executor_config: |
      # Kubernetes executor configuration
      [[runners]]
        name = "k8s-runner"
        url = "https://gitlab.example.com/"
        token = "runner-token"
        executor = "kubernetes"
        
        [runners.kubernetes]
          namespace = "gitlab-runner"
          image = "ubuntu:20.04"
          privileged = true
          
          # Pod resource configuration
          cpu_limit = "2"
          memory_limit = "4Gi"
          service_cpu_limit = "1"
          service_memory_limit = "1Gi"
          
          # Node selection
          node_selector = { "gitlab-runner" = "true" }
          node_tolerations = { "gitlab-runner/dedicated" = "NoSchedule" }
          
          # Volume configuration
          [[runners.kubernetes.volumes.empty_dir]]
            name = "cache"
            mount_path = "/cache"
            medium = "Memory"
          
          [[runners.kubernetes.volumes.secret]]
            name = "docker-registry"
            mount_path = "/etc/docker/certs.d"
            read_only = true
            [runners.kubernetes.volumes.secret.items]
              "ca.crt" = "ca.crt"
    
    auto_scaling: |
      # Autoscaling configuration (docker+machine executor)
      [[runners]]
        name = "docker-autoscaling"
        url = "https://gitlab.example.com/"
        token = "runner-token"
        executor = "docker+machine"
        limit = 20  # maximum number of jobs (and thus machines) handled concurrently
        
        [runners.machine]
          # Machine provisioning configuration
          IdleCount = 2     # number of idle machines to keep warm
          IdleTime = 1800   # how long a machine may stay idle, in seconds
          MaxBuilds = 100   # maximum builds per machine before it is replaced
          MachineDriver = "amazonec2"
          MachineName = "gitlab-runner-machine-%s"
          
          # AWS EC2 configuration (the keys below are the AWS documentation placeholders;
          # prefer IAM roles or CI/CD variables over hard-coded credentials in practice)
          MachineOptions = [
            "amazonec2-access-key=AKIAIOSFODNN7EXAMPLE",
            "amazonec2-secret-key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
            "amazonec2-region=us-west-2",
            "amazonec2-vpc-id=vpc-12345678",
            "amazonec2-subnet-id=subnet-12345678",
            "amazonec2-instance-type=t3.medium",
            "amazonec2-ami=ami-0abcdef1234567890"
          ]
```
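
After registering a runner with any of the methods above, it is worth confirming that registration actually succeeded. A quick check from the runner host (assuming the Docker installation example above, where the container is named gitlab-runner):

```bash
# List locally configured runners and check that they can authenticate with GitLab
docker exec -it gitlab-runner gitlab-runner list
docker exec -it gitlab-runner gitlab-runner verify

# The runner should also show up as "online" under Settings > CI/CD > Runners in the GitLab UI.
```
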
```yaml
runner_monitoring:
  health_monitoring:
    runner_status_check: |
      #!/bin/bash
      # Runner status check script
      
      # Check runner status via the GitLab API
      # (requires a personal access token with read_api scope, not a runner registration token)
      check_runner_status() {
          local access_token="$1"
          local gitlab_url="$2"
          
          response=$(curl -s -H "PRIVATE-TOKEN: ${access_token}" \
                          "${gitlab_url}/api/v4/runners")
          
          echo "$response" | jq '.[] | {id: .id, description: .description, active: .active, online: .online}'
      }
      
      # Restart gitlab-runner containers that have exited
      restart_offline_runners() {
          offline_runners=$(docker ps -a --filter "name=gitlab-runner" --filter "status=exited" -q)
          
          if [ ! -z "$offline_runners" ]; then
              echo "Restarting offline runners: $offline_runners"
              docker restart $offline_runners
          fi
      }
    
    metrics_collection: |
      # Prometheus monitoring configuration
      # Enable the metrics endpoint on the runner
      # /etc/gitlab-runner/config.toml
      listen_address = "0.0.0.0:9252"
      
      # Prometheus scrape configuration (prometheus.yml)
      - job_name: 'gitlab-runner'
        static_configs:
          - targets: ['runner-1:9252', 'runner-2:9252']
        metrics_path: /metrics
        scrape_interval: 15s
        
        metric_relabel_configs:
          - source_labels: [__name__]
            regex: 'gitlab_runner_.*'
            action: keep   # keep only runner metrics
    
    alerting_rules: |
      # Prometheus alerting rules
      groups:
      - name: gitlab-runner-alerts
        rules:
        - alert: GitLabRunnerDown
          expr: up{job="gitlab-runner"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "GitLab Runner is down"
            description: "Runner {{ $labels.instance }} has been down for more than 5 minutes"
        
        - alert: GitLabRunnerHighJobQueue
          expr: gitlab_runner_jobs{state="queued"} > 10
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: "GitLab Runner job queue is high"
            description: "Runner has {{ $value }} queued jobs"
  
  performance_optimization:
    concurrent_optimization: |
      # Concurrency optimization strategy
      optimization_guidelines:
        cpu_based_concurrency:
          calculation: "concurrent = CPU cores * 1.5"
          example: "8 CPU cores → concurrent = 12"
          
        memory_based_concurrency:
          calculation: "concurrent = total memory (GB) / average memory per job (GB)"
          example: "32 GB total memory, 2 GB per job → concurrent = 16"
          
        mixed_workload_optimization:
          light_jobs: "Concurrency can be set higher"
          heavy_jobs: "Limit concurrency to avoid resource contention"
          
        configuration_example: |
          # Example concurrency settings
          concurrent = 15
          check_interval = 3
          
          [[runners]]
            limit = 10  # maximum concurrent jobs for this runner entry
            request_concurrency = 5  # concurrent job requests to GitLab
    
    cache_optimization: |
      # Cache optimization configuration
      cache_strategies:
        distributed_cache:
          type: "Distributed cache"
          storage: "S3, GCS, Azure Blob"
          benefits: ["Shared across runners", "Persistent storage", "High availability"]
          
          s3_cache_config: |
            [runners.cache]
              Type = "s3"
              Shared = true
              [runners.cache.s3]
                ServerAddress = "s3.amazonaws.com"
                BucketName = "gitlab-runner-cache"
                BucketLocation = "us-west-2"
                Insecure = false
        
        local_cache:
          type: "本地缓存"
          storage: "主机文件系统"
          benefits: ["快速访问", "零网络延迟"]
          limitations: ["不共享", "单点故障"]
          
          local_cache_config: |
            [runners.cache]
              Type = "local"
              Path = "/cache"
              Shared = false
              
        hybrid_cache:
          strategy: "本地缓存 + 分布式缓存"
          implementation: "优先使用本地,fallback到分布式"
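
On the job side, whichever cache backend is configured above is consumed through the cache keyword in .gitlab-ci.yml. A minimal sketch (the lock-file name assumes a Node.js project):

```yaml
test:
  stage: test
  image: node:18-alpine
  cache:
    key:
      files:
        - package-lock.json   # cache key changes only when dependencies change
    paths:
      - node_modules/
    policy: pull-push         # download the cache before the job, upload it afterwards
  script:
    - npm ci
    - npm test
```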

🚀 GitLab CI Best Practices

Pipeline Design Patterns

```yaml
pipeline_patterns:
  simple_pipeline:
    description: "简单的线性流水线"
    use_case: "小型项目,简单构建流程"
    
    example: |
      # A simple .gitlab-ci.yml
      stages:
        - build
        - test
        - deploy
      
      variables:
        DOCKER_DRIVER: overlay2
        DOCKER_TLS_CERTDIR: "/certs"
      
      before_script:
        - echo "Starting CI/CD pipeline"
        - date
      
      build_job:
        stage: build
        image: node:16-alpine
        script:
          - npm install
          - npm run build
        artifacts:
          paths:
            - dist/
          expire_in: 1 hour
        cache:
          key: "$CI_COMMIT_REF_NAME"
          paths:
            - node_modules/
      
      test_job:
        stage: test
        image: node:16-alpine
        script:
          - npm install
          - npm run test
        coverage: '/Lines\s*:\s*(\d+\.\d+)%/'
        artifacts:
          reports:
            junit: junit.xml
            coverage_report:
              coverage_format: cobertura
              path: coverage/cobertura-coverage.xml
      
      deploy_job:
        stage: deploy
        image: alpine:latest
        script:
          - echo "Deploying application"
          - echo "Deployment completed"
        environment:
          name: production
          url: https://app.example.com
        only:
          - main
  
  multi_environment_pipeline:
    description: "多环境部署流水线"
    environments: ["development", "staging", "production"]
    
    example: |
      # Multi-environment pipeline configuration
      stages:
        - build
        - test
        - deploy_dev
        - deploy_staging
        - deploy_prod
      
      variables:
        DOCKER_REGISTRY: registry.gitlab.com
        IMAGE_NAME: $CI_PROJECT_PATH
        
      .deploy_template: &deploy_template
        image: alpine/helm:latest  # note: the deploy image must also provide kubectl for the context switch below
        before_script:
          - kubectl config use-context $KUBE_CONTEXT
          - helm repo update
        script:
          - |
            helm upgrade --install $APP_NAME ./helm-chart \
              --namespace $NAMESPACE \
              --set image.repository=$DOCKER_REGISTRY/$IMAGE_NAME \
              --set image.tag=$CI_COMMIT_SHA \
              --set environment=$ENVIRONMENT \
              --wait --timeout 10m
      
      build:
        stage: build
        image: docker:20.10.16
        services:
          - docker:20.10.16-dind
        script:
          - docker build -t $DOCKER_REGISTRY/$IMAGE_NAME:$CI_COMMIT_SHA .
          - docker push $DOCKER_REGISTRY/$IMAGE_NAME:$CI_COMMIT_SHA
      
      deploy_development:
        <<: *deploy_template
        stage: deploy_dev
        variables:
          ENVIRONMENT: development
          NAMESPACE: app-dev
          APP_NAME: app-dev
          KUBE_CONTEXT: dev-cluster
        environment:
          name: development
          url: https://app-dev.example.com
        rules:
          - if: $CI_COMMIT_BRANCH == "develop"
      
      deploy_staging:
        <<: *deploy_template
        stage: deploy_staging
        variables:
          ENVIRONMENT: staging
          NAMESPACE: app-staging
          APP_NAME: app-staging
          KUBE_CONTEXT: staging-cluster
        environment:
          name: staging
          url: https://app-staging.example.com
        rules:
          - if: $CI_COMMIT_BRANCH == "main"
        when: manual
      
      deploy_production:
        <<: *deploy_template
        stage: deploy_prod
        variables:
          ENVIRONMENT: production
          NAMESPACE: app-prod
          APP_NAME: app-prod
          KUBE_CONTEXT: prod-cluster
        environment:
          name: production
          url: https://app.example.com
        rules:
          - if: $CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/
        when: manual
  
  microservices_pipeline:
    description: "微服务单体仓库流水线"
    features: ["变更检测", "并行构建", "依赖管理"]
    
    example: |
      # Microservices pipeline configuration
      stages:
        - changes
        - build
        - test
        - deploy
      
      variables:
        DOCKER_REGISTRY: harbor.company.com
        
      # Detect which services changed
      detect_changes:
        stage: changes
        image: alpine/git:latest
        script:
          - |
            # Compare each service directory against the previous commit
            CHANGED_SERVICES=""
            for service in services/*/; do
              if git diff --quiet HEAD~1 HEAD -- "$service"; then
                echo "No changes in $service"
              else
                echo "Changes detected in $service"
                CHANGED_SERVICES="$CHANGED_SERVICES $(basename $service)"
              fi
            done
            echo "CHANGED_SERVICES=$CHANGED_SERVICES" > changes.env
        artifacts:
          reports:
            dotenv: changes.env
      
      # Reusable build template
      .build_service: &build_service
        stage: build
        image: docker:20.10.16
        services:
          - docker:20.10.16-dind
        before_script:
          - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $DOCKER_REGISTRY
        script:
          - |
            cd services/$SERVICE_NAME
            docker build -t $DOCKER_REGISTRY/$CI_PROJECT_PATH/$SERVICE_NAME:$CI_COMMIT_SHA .
            docker push $DOCKER_REGISTRY/$CI_PROJECT_PATH/$SERVICE_NAME:$CI_COMMIT_SHA
        parallel:
          matrix:
            - SERVICE_NAME: [user-service, order-service, payment-service, notification-service]
        rules:
          # Note: rules are evaluated when the pipeline is created, before detect_changes
          # runs, so variables exported via dotenv artifacts are not visible here.
          # For monorepo change detection, rules:changes with per-service paths is the
          # more reliable approach.
          - if: '$CHANGED_SERVICES =~ /$SERVICE_NAME/'
      
      build_services:
        <<: *build_service
      
      # Integration tests
      integration_tests:
        stage: test
        image: docker/compose:latest
        services:
          - docker:20.10.16-dind
        script:
          - docker-compose -f docker-compose.test.yml up --build --abort-on-container-exit
          - docker-compose -f docker-compose.test.yml down
        coverage: '/Total coverage: (\d+\.\d+)%/'
        artifacts:
          reports:
            junit: test-results.xml
```
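
A pattern that complements all of the pipeline designs above is a top-level workflow:rules block, which decides whether a pipeline is created at all and avoids duplicate branch and merge request pipelines. A minimal sketch:

```yaml
workflow:
  rules:
    # Run merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # Skip branch pipelines when an open MR already covers the branch
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    # Otherwise run branch pipelines
    - if: $CI_COMMIT_BRANCH
```
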
```yaml
advanced_features:
  dag_pipelines:
    description: "有向无环图流水线"
    benefits: ["优化执行时间", "复杂依赖关系", "并行执行"]
    
    example: |
      # DAG pipeline configuration
      stages:
        - build
        - test
        - security
        - deploy
      
      # Build jobs
      build_backend:
        stage: build
        script: echo "Building backend"
        artifacts:
          paths:
            - backend/dist/
      
      build_frontend:
        stage: build
        script: echo "Building frontend"
        artifacts:
          paths:
            - frontend/dist/
      
      # Test jobs (depend on the builds)
      test_backend:
        stage: test
        needs: ["build_backend"]
        script: echo "Testing backend"
      
      test_frontend:
        stage: test
        needs: ["build_frontend"]
        script: echo "Testing frontend"
      
      # Integration test (depends on all builds)
      integration_test:
        stage: test
        needs: ["build_backend", "build_frontend"]
        script: echo "Integration testing"
      
      # Security scan (runs in parallel with the tests)
      security_scan:
        stage: security
        needs: ["build_backend"]
        script: echo "Security scanning"
      
      # Deploy (depends on all tests and the scan)
      deploy:
        stage: deploy
        needs: ["test_backend", "test_frontend", "integration_test", "security_scan"]
        script: echo "Deploying"
  
  parent_child_pipelines:
    description: "父子流水线"
    use_cases: ["复杂工作流", "权限隔离", "模块化管理"]
    
    parent_pipeline: |
      # Parent pipeline configuration
      stages:
        - triggers
      
      # Trigger child pipelines
      trigger_backend:
        stage: triggers
        trigger:
          include: backend/.gitlab-ci.yml
          strategy: depend
        variables:
          SERVICE_NAME: backend
          DEPLOY_ENV: $DEPLOY_ENV
      
      trigger_frontend:
        stage: triggers
        trigger:
          include: frontend/.gitlab-ci.yml
          strategy: depend
        variables:
          SERVICE_NAME: frontend
          DEPLOY_ENV: $DEPLOY_ENV
      
      trigger_infrastructure:
        stage: triggers
        trigger:
          include: infrastructure/.gitlab-ci.yml
        variables:
          ACTION: deploy
        rules:
          - if: $CI_COMMIT_BRANCH == "main"
    
    child_pipeline: |
      # Child pipeline configuration (backend/.gitlab-ci.yml)
      stages:
        - build
        - test
        - deploy
      
      variables:
        SERVICE_PATH: backend
        
      build:
        stage: build
        script:
          - cd $SERVICE_PATH
          - echo "Building $SERVICE_NAME"
          - docker build -t $SERVICE_NAME:$CI_COMMIT_SHA .
      
      test:
        stage: test
        script:
          - cd $SERVICE_PATH
          - echo "Testing $SERVICE_NAME"
          - npm test
      
      deploy:
        stage: deploy
        script:
          - echo "Deploying $SERVICE_NAME to $DEPLOY_ENV"
        environment:
          name: $DEPLOY_ENV
          url: https://$SERVICE_NAME-$DEPLOY_ENV.example.com
        rules:
          - if: $DEPLOY_ENV
  
  dynamic_pipelines:
    description: "动态生成的流水线"
    use_cases: ["配置驱动", "多项目管理", "复杂规则"]
    
    example: |
      # Dynamic pipeline generation
      stages:
        - generate
        - child_pipelines
      
      generate_pipelines:
        stage: generate
        image: python:3.9-alpine
        script:
          - |
            # Generate child pipeline configs with a short Python script.
            # PyYAML is not preinstalled in python:3.9-alpine, so install it first.
            pip install pyyaml
            python3 << 'EOF'
            import yaml
            import json
            
            # Read the service configuration file
            with open('pipeline-config.json') as f:
                config = json.load(f)
            
            for service in config['services']:
                pipeline = {
                    'stages': ['build', 'test', 'deploy'],
                    'variables': {
                        'SERVICE_NAME': service['name'],
                        'SERVICE_PORT': service['port']
                    },
                    'build': {
                        'stage': 'build',
                        'script': [f"echo 'Building {service['name']}'"]
                    },
                    'test': {
                        'stage': 'test',
                        'script': [f"echo 'Testing {service['name']}'"]
                    },
                    'deploy': {
                        'stage': 'deploy',
                        'script': [f"echo 'Deploying {service['name']}'"],
                        'environment': {
                            'name': f"{service['name']}-prod",
                            'url': f"https://{service['name']}.example.com"
                        }
                    }
                }
                
                # Write out each generated pipeline
                with open(f"generated-{service['name']}.yml", 'w') as out:
                    yaml.dump(pipeline, out)
            EOF
        artifacts:
          paths:
            - generated-*.yml
      
      # Trigger the generated child pipelines
      trigger_dynamic_pipelines:
        stage: child_pipelines
        trigger:
          include:
            - artifact: generated-user-service.yml
              job: generate_pipelines
            - artifact: generated-order-service.yml  
              job: generate_pipelines
            - artifact: generated-payment-service.yml
              job: generate_pipelines
          strategy: depend
```
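
Besides DAG, parent-child, and dynamic pipelines, configuration can also be kept modular inside a single pipeline with include and extends, reusing hidden job templates. A minimal sketch (the file name ci/templates.yml and the job names are hypothetical):

```yaml
# ci/templates.yml — shared, hidden job definitions
.node-defaults:
  image: node:18-alpine
  cache:
    paths:
      - node_modules/

# .gitlab-ci.yml
include:
  - local: ci/templates.yml

unit_tests:
  extends: .node-defaults   # inherit image and cache from the hidden template
  stage: test
  script:
    - npm ci
    - npm test
```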

📋 GitLab CI Interview Focus Areas

Core concept questions

  1. What are the core components of GitLab CI/CD?

    • The role and functions of the GitLab server
    • GitLab Runner types and executors
    • The structure of the .gitlab-ci.yml configuration file
    • The relationship between pipelines, stages, and jobs
  2. What are the main differences between GitLab CI and Jenkins?

    • Degree of platform integration
    • Differences in configuration approach
    • Runner vs. agent architecture
    • Community ecosystem and enterprise support
  3. How do you choose the right runner executor?

    • Pros and cons of the Docker executor
    • Use cases for the Kubernetes executor
    • Limitations of the shell executor
    • Performance and resource considerations

Configuration practice questions

  1. How do you design an efficient CI/CD pipeline?

    • Stage layout and job design
    • Parallel execution and dependency management
    • Cache strategy optimization
    • Artifact management best practices
  2. How are variables and secrets managed in GitLab CI? (see the sketch after this list)

    • Using predefined variables
    • Scope of custom variables
    • Protecting sensitive information
    • Variable inheritance and precedence rules
  3. How do you implement complex deployment strategies?

    • Multi-environment deployment configuration
    • Conditional deployment and manual approval
    • Implementing blue-green deployments
    • Designing rollback mechanisms
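
For question 2, a minimal sketch of how variables typically appear in a pipeline; the names are illustrative, and secrets such as DEPLOY_TOKEN should be defined as masked (and, where appropriate, protected) CI/CD variables in the project settings rather than in the file:

```yaml
variables:                  # global scope: available to every job
  APP_NAME: demo-app

deploy:
  stage: deploy
  variables:                # job scope: adds to or overrides global variables
    ENVIRONMENT: production
  script:
    # $CI_COMMIT_SHA is a predefined variable; $DEPLOY_TOKEN comes from
    # Settings > CI/CD > Variables (masked/protected), not from this file
    - ./deploy.sh --env "$ENVIRONMENT" --version "$CI_COMMIT_SHA" --token "$DEPLOY_TOKEN"
```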

Advanced feature questions

  1. When are DAG pipelines a good fit?

    • Using the needs keyword
    • Handling complex dependency relationships
    • Performance gains in practice
    • Pipeline visualization advantages
  2. How are parent-child pipelines designed?

    • Trigger mechanisms and variable passing
    • Implementing permission isolation
    • Managing complex workflows
    • Monitoring and debugging approaches
  3. How do you monitor and optimize GitLab CI?

    • Runner performance monitoring
    • Pipeline execution analysis
    • Cost optimization strategies
    • Troubleshooting methods

As an integrated DevOps platform, GitLab, with its built-in CI/CD capabilities and cloud-native support, provides a complete automation solution for modern application development. Mastering its core concepts, configuration techniques, and best practices is a key skill for building efficient DevOps workflows.
