
Swagger API docs -- scheduled checks for changes to core endpoints

一朵云
2024-08-17


Preface:

After writing automated API test cases, maintaining them matters just as much.

Here, "maintenance" means watching each new release to see whether it affects the API tests we have already written.

Monitoring API changes is best done automatically, by technical means, rather than by relying on developers to mention them in passing; that is both more rigorous and more efficient.

We can run this inspection by comparing Swagger's JSON documents, e.g. xxxx.com/wallet/v2/api-docs.


Swagger structure:

{
  "swagger": "2.0",
  "info": { ... },
  "host": "xxxxxx.com",
  "basePath": "/wallet",
  "paths": {
    "/coupon/record/list": {
      "post": {
        "tags": ["代金券"],
        "summary": "用户代金券列表",
        "parameters": [ ... ],
        "responses": { ... }
      }
    }
  },
  "definitions": { ... }
}
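
As a quick illustration, the whole document can be pulled with httpx and its endpoints listed. This is a minimal sketch that assumes the placeholder endpoint above is reachable:

# Minimal sketch: fetch the api-docs JSON and list every endpoint it declares.
# The URL is the placeholder from above -- substitute a real api-docs endpoint.
import httpx

URL = "https://xxxx.com/wallet/v2/api-docs"
doc = httpx.get(URL, timeout=10.0).json()

for path, methods in doc.get("paths", {}).items():
    for method, api in methods.items():
        if method.upper() not in ("GET", "POST", "PUT", "DELETE", "PATCH"):
            continue  # skip path-level keys such as "parameters"
        print(method.upper(), path, "-", api.get("summary", ""))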

Approach:

Use Python + Jenkins to poll the documentation on a schedule.

Check whether the Swagger documents contain newly added endpoints, and check whether the endpoints covered by our existing test cases have been modified. The checks are summarized in the table below, followed by a minimal sketch of the added/removed comparison.

Check item | Scope | Description
New API | All endpoints | A new path appears in paths
Removed API | Core endpoints | An existing path disappears from paths
Request-header parameter change | Core endpoints | e.g. whether Authorization is required, or its type changes
Query/Path parameter change | Core endpoints | Parameters added, removed, or modified
Request body (Body) structure change | Core endpoints | Deep comparison of the schema
Response body structure change | Core endpoints | Especially the DTO structure of the 200 response
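
At its core, the added/removed check is just a set difference over the paths keys of the baseline and current documents. The sketch below shows that idea in isolation (the full script later in this post also compares parameters, headers, and response fields):

# Minimal sketch of the added/removed check: compare the path sets
# of a baseline document and the current document.
import json
from pathlib import Path

baseline = json.loads(Path("baseline_swagger.json").read_text(encoding="utf-8"))
current = json.loads(Path("current_swagger.json").read_text(encoding="utf-8"))

old_paths = set(baseline.get("paths", {}))
new_paths = set(current.get("paths", {}))

print("Added:  ", sorted(new_paths - old_paths))
print("Removed:", sorted(old_paths - new_paths))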

Preparation:

Environment:

Python 3.9

Dependency file (requirements.txt):

httpx==0.28.1
PyYAML==6.0.2

Steps:

1. Project structure

check_swagger_api/                          # Project root (also the Git repo root)
├── Jenkinsfile                             # Jenkins pipeline script
├── requirements.txt                        # Python dependencies
├── apis/                                   # API definitions and baselines per business module
│   ├── activity/                           # Activity module
│   │   ├── baseline_swagger.json           # Swagger baseline for the activity module
│   │   ├── current_swagger.json            # Latest Swagger fetched this run
│   │   └── critical.yml                    # Critical-endpoint definitions for the activity module
│   ├── charge/                             # Payment module
│   │   └── critical.yml
│   ├── forum/                              # Community module
│   │   └── critical.yml
│   ├── login/                              # Login module
│   │   └── critical.yml
│   ├── odsLog/                             # Logging module
│   │   └── critical.yml
│   └── wallet/                             # Wallet module
│       └── critical.yml
├── config/                                 # Configuration
│   ├── __init__.py                         # Makes config a Python package
│   └── settings.py                         # Config variables such as URLs and paths
├── logs/                                   # Log output directory (created during Jenkins builds)
├── utils/                                  # Utility scripts
│   ├── __init__.py
│   ├── fetch_swagger.py                    # Fetch the Swagger JSON
│   ├── logger_config.py                    # Logging configuration
│   └── swagger_diff.py                     # Diff baseline against current
└── README.md                               # Project documentation

2. Fetching the API documentation

fetch_swagger.py

# -*- coding: utf-8 -*-
"""
Fetch the Swagger JSON document from the configured URLs.
  - First run: save as current and create the baseline
  - Later runs: only update current
"""
import json
import sys
from config import CONFIG, MODULES
from pathlib import Path
from utils.logger_config import setup_logger

import httpx


# --- Logging ---
logger = setup_logger("fetch_swagger")

all_paths_count = 0


def ensure_output_dir(path: Path):
    """Make sure the output directory exists."""
    try:
        path.mkdir(parents=True, exist_ok=True)
        logger.debug(f"Ensured directory exists: {path}")
    except Exception as e:
        logger.error(f"Failed to create directory: {path} - {e}")
        sys.exit(1)


def fetch_swagger_json(url: str, timeout: float = 10.0):
    """Fetch the Swagger JSON data."""
    try:
        with httpx.Client(timeout=timeout) as client:
            logger.info(f"Requesting: {url}")
            response = client.get(url)

            if response.status_code != 200:
                logger.error(f"Request failed, status code: {response.status_code}")
                logger.error(f"Response preview: {response.text[:200]}...")
                return None

            try:
                data = response.json()
                paths_count = len(data.get("paths", {}))
                global all_paths_count
                all_paths_count += paths_count
                logger.info(f"Fetched Swagger document with {paths_count} endpoints")
                return data
            except json.JSONDecodeError as e:
                logger.error(f"Response is not valid JSON: {e}")
                logger.error(f"Raw response: {response.text[:300]}...")
                return None

    except httpx.TimeoutException:
        logger.error("Request timed out; check the network or whether the URL is reachable")
    except httpx.RequestError as e:
        logger.error(f"Network request error: {e}")
    except Exception as e:
        logger.error(f"Unexpected error: {type(e).__name__}: {e}")
    return None


def save_json(data, filepath: Path):
    """Save data as a JSON file."""
    try:
        filepath.parent.mkdir(parents=True, exist_ok=True)  # make sure the parent directory exists
        with open(filepath, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=2)
        logger.info(f"Saved: {filepath}")
    except Exception as e:
        logger.error(f"Failed to save file: {filepath} - {e}")
        sys.exit(1)


def main():
    # Resolve paths
    output_dir = CONFIG["OUTPUT_DIR"]

    # Make sure the output directory exists
    ensure_output_dir(output_dir)

    # Iterate over the configured modules
    for module in MODULES:
        name = module["name"]
        url = module["url"]
        current_path = output_dir / name / CONFIG["CURRENT_FILE"]
        baseline_path = output_dir / name / CONFIG["BASELINE_FILE"]

        # 1. Fetch the Swagger document
        swagger_data = fetch_swagger_json(url, CONFIG["TIMEOUT"])
        if not swagger_data:
            logger.error("Failed to fetch Swagger, exiting.")
            sys.exit(1)

        # 2. Save as current (every run)
        save_json(swagger_data, current_path)

        # 3. Create a baseline if one does not exist yet
        if not baseline_path.exists():
            save_json(swagger_data, baseline_path)
            logger.info(f"Baseline created: {baseline_path}")
        else:
            logger.info(f"Baseline already exists, skipping: {baseline_path}")

    logger.info(f"🎉 Fetch finished, {all_paths_count} endpoints in total!")


if __name__ == "__main__":
    main()
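
Because the script only creates baseline_swagger.json when it does not exist yet, resetting the baseline for a single module simply means deleting that file before the next run. A small sketch using the paths from config/settings.py (shown later in this post), with "wallet" as an example module:

# Sketch: reset one module's baseline so the next fetch re-creates it
# from the latest Swagger document.
from config import CONFIG

baseline = CONFIG["OUTPUT_DIR"] / "wallet" / CONFIG["BASELINE_FILE"]
if baseline.exists():
    baseline.unlink()
    print(f"Removed baseline: {baseline}")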

3. Comparing the documents for differences

swagger_diff.py

# -*- coding: utf-8 -*-
"""
Compare baseline_swagger.json with current_swagger.json.
Detects: added/removed endpoints, and changes to request parameters,
request headers, and response fields.
"""
import json
import sys
import yaml
from pathlib import Path
from config import CONFIG, MODULES
from utils.logger_config import setup_logger

# --- Configuration ---
BASELINE_FILE = CONFIG.get("BASELINE_FILE")
CURRENT_FILE = CONFIG.get("CURRENT_FILE")
OUTPUT_DIR = CONFIG.get("OUTPUT_DIR")  # same as the fetch script

# --- Logging ---
logger = setup_logger("swagger_diff")


def load_json(filepath: Path):
    """Load a JSON file."""
    try:
        with open(filepath, 'r', encoding='utf-8') as f:
            data = json.load(f)
            logger.info(f"Loaded: {filepath.name}")
            return data
    except FileNotFoundError:
        logger.error(f"File not found: {filepath}")
        sys.exit(1)
    except json.JSONDecodeError as e:
        logger.error(f"JSON parse error {filepath}: {e}")
        sys.exit(1)
    except Exception as e:
        logger.error(f"Error reading file {filepath}: {e}")
        sys.exit(1)


# Defined first: extract_schema_structure is used by the extractors below
def extract_schema_structure(schema, definitions, prefix="", in_type=None):
    """
    Recursively extract the field-path structure of a schema.
    Used for both request parameters (body/query/...) and response bodies.
    """
    result = []
    if not schema:
        return result

    # Handle $ref
    if schema.get("$ref"):
        ref_name = schema["$ref"].split("/")[-1]
        if ref_name in definitions:
            nested_schema = definitions[ref_name]
            props = nested_schema.get("properties", {})
            required_list = nested_schema.get("required", [])
            for name, prop_schema in props.items():
                field_prefix = f"{prefix}.{name}" if prefix else name
                is_required = name in required_list
                result.extend(extract_schema_structure(prop_schema, definitions, field_prefix, in_type))
        return result

    # Objects
    if schema.get("type") == "object":
        props = schema.get("properties", {})
        required_list = schema.get("required", [])
        for name, prop_schema in props.items():
            field_prefix = f"{prefix}.{name}" if prefix else name
            is_required = name in required_list
            field_type = prop_schema.get("type", "unknown")
            result.append((field_prefix, field_type, is_required, in_type))
            result.extend(extract_schema_structure(prop_schema, definitions, field_prefix, in_type))

    # Arrays
    elif schema.get("type") == "array":
        items_schema = schema.get("items", {})
        array_prefix = f"{prefix}[]" if prefix else "[]"
        result.append((array_prefix, "array", False, in_type))
        result.extend(extract_schema_structure(items_schema, definitions, array_prefix, in_type))

    # Primitive types
    else:
        field_type = schema.get("type", "unknown")
        if prefix:
            result.append((prefix, field_type, False, in_type))

    return result


def extract_parameters(parameters, definitions):
    """
    Extract structured information about all request parameters,
    including query/path/header/body.
    """
    result = []
    if not parameters:
        return result

    for p in parameters:
        name = p.get("name")
        _in = p.get("in")  # query, path, header, formData, body
        required = p.get("required", False)

        if not name or not _in:
            continue

        if _in == "body" and p.get("schema"):
            schema = p["schema"]
            fields = extract_schema_structure(schema, definitions, prefix=name, in_type="body")
            result.extend(fields)
        else:
            field_type = p.get("type", "string")
            result.append((name, field_type, required, _in))

    return sorted(result)


def extract_headers(params):
    """Extract all header parameters."""
    headers = [p for p in params if p.get("in") == "header"]
    return [(h["name"], h.get("required", False)) for h in headers]


def extract_response_fields(responses, definitions):
    """
    Extract all field paths of the 200 response (recursively).
    """
    resp = responses.get("200") or responses.get("201") or responses.get("default")
    if not resp or not resp.get("schema"):
        return []

    schema = resp["schema"]
    fields = extract_schema_structure(schema, definitions, prefix="", in_type="response")
    return sorted(fields)


def extract_apis(data):
    """
    Extract a fingerprint for every endpoint.
    Returns: {(method, path): {summary, parameters, headers, response_fields}}
    """
    definitions = data.get("definitions") or data.get("components", {}).get("schemas", {})
    base_path = (data.get("basePath", "") or "").rstrip("/")
    apis = {}

    paths = data.get("paths", {})
    for path, methods in paths.items():
        full_path = (base_path + path) if base_path else path
        for method, api in methods.items():
            method = method.upper()
            if method not in ["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"]:
                continue

            key = (method, full_path)
            parameters = api.get("parameters", [])
            apis[key] = {
                "path": full_path,
                "method": method,
                "summary": api.get("summary", "").strip(),
                "parameters": extract_parameters(parameters, definitions),  # pass definitions through
                "headers": extract_headers(parameters),
                "response_fields": extract_response_fields(api.get("responses", {}), definitions),
            }
    return apis


def compare_apis_add(old_apis, new_apis):
    """Return the endpoints that exist in new_apis but not in old_apis."""
    old_keys = set(old_apis.keys())
    new_keys = set(new_apis.keys())

    added = new_keys - old_keys
    return list(added)


def compare_apis(old_apis, new_apis):
    """Compare two API dicts and return (removed, changed)."""
    old_keys = set(old_apis.keys())
    new_keys = set(new_apis.keys())

    removed = old_keys - new_keys
    common = old_keys & new_keys

    changed = []

    for key in common:
        old_api = old_apis[key]
        new_api = new_apis[key]
        changes = []

        # 1. Request parameter changes
        old_params = set(old_api["parameters"])
        new_params = set(new_api["parameters"])
        if old_params != new_params:
            added_p = new_params - old_params
            removed_p = old_params - new_params
            if added_p:
                # Note: entries are (name, type, required, in_type)
                items = [f"{n}({t})" + ("*" if r else "") for n, t, r, i in added_p]
                changes.append(f"Added parameters: {', '.join(items)}")
            if removed_p:
                items = [f"{n}({t})" + ("*" if r else "") for n, t, r, i in removed_p]
                changes.append(f"Removed parameters: {', '.join(items)}")

        # 2. Header changes
        old_headers = set(old_api["headers"])
        new_headers = set(new_api["headers"])
        if old_headers != new_headers:
            added_h = new_headers - old_headers
            removed_h = old_headers - new_headers
            if added_h:
                items = [f"{n}{'*' if r else ''}" for n, r in added_h]
                changes.append(f"Added headers: {', '.join(items)}")
            if removed_h:
                items = [f"{n}{'*' if r else ''}" for n, r in removed_h]
                changes.append(f"Removed headers: {', '.join(items)}")

        # 3. Response field changes
        old_fields = set(old_api["response_fields"])  # (path, type, required, in_type)
        new_fields = set(new_api["response_fields"])
        if old_fields != new_fields:
            added_f = new_fields - old_fields
            removed_f = old_fields - new_fields
            if added_f:
                # Show only the field paths
                paths = [f"{n}" for n, t, r, i in added_f]
                changes.append(f"Added response fields: {', '.join(paths)}")
            if removed_f:
                paths = [f"{n}" for n, t, r, i in removed_f]
                changes.append(f"Removed response fields: {', '.join(paths)}")

        # 4. Summary changes (optional)
        if old_api["summary"] != new_api["summary"]:
            changes.append(f"Summary changed: '{old_api['summary']}' → '{new_api['summary']}'")

        if changes:
            changed.append({
                "method": key[0],
                "path": key[1],
                "changes": changes
            })

    return list(removed), changed


def print_report(added, removed, changed, old_apis, new_apis, module_name):
    """Print the diff report."""
    has_change = False

    if added:
        has_change = True
        logger.warning("🆕 **Added endpoints**")
        for method, path in sorted(added):
            summary = new_apis.get((method, path), {}).get("summary", "")
            logger.info(f"   ➕ {method} {path} - {summary}")

    if removed:
        has_change = True
        logger.warning("🗑️ **Removed endpoints**")
        for method, path in sorted(removed):
            summary = old_apis.get((method, path), {}).get("summary", "")
            logger.info(f"   🔻 {method} {path} - {summary}")

    if changed:
        has_change = True
        logger.warning("🔄 **Changed endpoints**")
        for item in changed:
            logger.info(f"   🔄 {item['method']} {item['path']}")
            for change in item["changes"]:
                logger.info(f"     - {change}")

    if not has_change:
        logger.info(f"{module_name}: all endpoints unchanged.")
    else:
        logger.warning(f"{module_name}: API changes detected, please assess the impact!")

    return has_change


def load_critical_apis(module_name: str) -> set:
    """
    Load the critical endpoints of a module.
    Returns a set of (method, path) tuples.
    """
    critical_path = CONFIG["OUTPUT_DIR"] / module_name / "critical.yml"
    if not critical_path.exists():
        logger.warning(f"⚠️ Critical-endpoint config not found: {critical_path}")
        return set()

    try:
        with open(critical_path, 'r', encoding='utf-8') as f:
            data = yaml.safe_load(f)
            if not data:
                return set()
            # Build a set of (METHOD, /module/path) tuples for fast lookup
            return {
                (item["method"].strip().upper(), f"/{module_name}{item['path'].strip()}")
                for item in data
                if "path" in item and "method" in item
            }
    except Exception as e:
        logger.error(f"❌ Failed to parse {critical_path}: {e}")
        return set()


# ======================
# Main program
# ======================
if __name__ == "__main__":

    has_change_list = []

    for module in MODULES:
        module_name = module.get("name")

        baseline_path = OUTPUT_DIR / module_name / BASELINE_FILE
        current_path = OUTPUT_DIR / module_name / CURRENT_FILE

        logger.info(f"🔍 Comparing baseline and current Swagger documents for {module_name}...")

        # 1. Load the critical-endpoint list
        critical_apis = load_critical_apis(module_name)  # {(method, path), ...}
        if not critical_apis:
            logger.warning(f"{module_name} has no critical endpoints, skipping")
            has_change_list.append(False)
            continue

        # 2. Load the files
        baseline_data = load_json(baseline_path)
        current_data = load_json(current_path)

        # 3. Extract all endpoints
        old_apis = extract_apis(baseline_data)
        new_apis = extract_apis(current_data)

        # 4. Keep only the critical endpoints defined in critical.yml
        old_critical = {k: v for k, v in old_apis.items() if k in critical_apis}
        new_critical = {k: v for k, v in new_apis.items() if k in critical_apis}

        # 5. Compare
        removed, changed = compare_apis(old_critical, new_critical)
        added = compare_apis_add(old_apis, new_apis)

        # 6. Print the report (removed/changed use the critical subsets, added covers all endpoints)
        has_change = print_report(added, removed, changed, old_critical, new_critical, module_name)
        has_change_list.append(has_change)

    # Exit code: any module with changes → 1 (Jenkins can treat this as UNSTABLE)
    if any(has_change_list):
        logger.warning("API changes detected, marking the build UNSTABLE")
        sys.exit(1)
    else:
        logger.info("No API changes, build is healthy")
        sys.exit(0)
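
To make the tuple format concrete, here is a small hypothetical example of what extract_schema_structure returns for a $ref'd response schema: a flat list of (field_path, type, required, in_type) tuples. Run it from the project root so that utils is importable.

# Hypothetical example: flattening a small response schema into
# (field_path, type, required, in_type) tuples via extract_schema_structure.
from utils.swagger_diff import extract_schema_structure

definitions = {
    "CouponRecord": {
        "type": "object",
        "required": ["id"],
        "properties": {
            "id": {"type": "integer"},
            "amount": {"type": "number"},
            "tags": {"type": "array", "items": {"type": "string"}},
        },
    }
}
schema = {"$ref": "#/definitions/CouponRecord"}

for field in extract_schema_structure(schema, definitions, prefix="", in_type="response"):
    print(field)
# Expected output, roughly:
#   ('id', 'integer', False, 'response')
#   ('amount', 'number', False, 'response')
#   ('tags[]', 'array', False, 'response')
#   ('tags[]', 'string', False, 'response')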

4. Logging configuration

logger_config.py

import logging
import os
from datetime import datetime
from pathlib import Path


def setup_logger(
        name: str = __name__,
        log_dir: str = "logs",
        log_level: int = logging.INFO,
        console_level: int = logging.INFO,
        file_level: int = logging.DEBUG,
        format_string: str = "%(asctime)s - %(levelname)s - %(message)s"
) -> logging.Logger:
    """
    Build and return a configured logger.
    :param name: logger name (usually __name__)
    :param log_dir: directory for log files
    :param log_level: overall logger level
    :param console_level: console output level
    :param file_level: file output level
    :param format_string: log format
    :return: the configured logger instance
    """
    # Create the log directory
    log_dir = Path(__file__).parent.parent / log_dir
    os.makedirs(log_dir, exist_ok=True)

    # Build a timestamped log file name
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    script_name = name.split(".")[-1]  # module name, e.g. fetch_swagger
    log_file = os.path.join(log_dir, f"{script_name}_{timestamp}.log")

    # Get or create the logger
    logger = logging.getLogger(name)
    logger.setLevel(log_level)

    # Avoid adding duplicate handlers
    if logger.handlers:
        logger.handlers.clear()

    # File handler
    file_handler = logging.FileHandler(log_file, encoding='utf-8')
    file_handler.setLevel(file_level)
    file_formatter = logging.Formatter(format_string)
    file_handler.setFormatter(file_formatter)
    logger.addHandler(file_handler)

    # Console handler
    console_handler = logging.StreamHandler()
    console_handler.setLevel(console_level)
    console_formatter = logging.Formatter("%(levelname)s: %(message)s")
    console_handler.setFormatter(console_formatter)
    logger.addHandler(console_handler)

    # Log the initialization
    logger.info(f"Logging initialized -> {log_file}")

    return logger
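
A quick usage sketch (the module name here is hypothetical):

from utils.logger_config import setup_logger

logger = setup_logger("my_check")   # writes logs/my_check_<timestamp>.log
logger.info("checker started")      # goes to both the console and the log file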

5. Shared configuration

settings.py

from pathlib import Path

# --- Output directory and other path settings ---
CONFIG = {
    "OUTPUT_DIR": Path(__file__).parent.parent / "apis",
    "CURRENT_FILE": "current_swagger.json",
    "BASELINE_FILE": "baseline_swagger.json",
    "TIMEOUT": 10.0,
}

# --- Metadata for each module ---
MODULES = [
    {
        "name": "activity",
        "url": "https://xxxx.com/activity/v2/api-docs"
    },
    {
        "name": "charge",
        "url": "https://xxxx/charge/v2/api-docs"
    },
    {
        "name": "forum",
        "url": "https://xxxx/forum/v2/api-docs"
    },
    {
        "name": "login",
        "url": "https://xxxx/login/v2/api-docs"
    },
    {
        "name": "odsLog",
        "url": "https://xxxx/odsLog/v2/api-docs"
    },
    {
        "name": "wallet",
        "url": "https://xxxx/wallet/v2/api-docs"
    }
]

__init__.py — lets scripts in other directories import CONFIG and MODULES from the config package:

from .settings import CONFIG, MODULES

6. Critical-endpoint file

critical.yml

- path: /activityCenter/info1
  method: post
  description: Query the activity-center configuration

- path: /activityGroup/info
  method: post
  description: Query the activity group

- path: /activityCenter/module/list
  method: post
  description: Query the activity-center module list
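
For matching, load_critical_apis prefixes each path with the module name, so an entry in apis/activity/critical.yml is looked up against the (method, basePath + path) key extracted from the Swagger document; this relies on each service's basePath being "/<module name>", as in the /wallet example earlier. A small sketch of that mapping:

# Sketch: how a critical.yml entry becomes the lookup key used by swagger_diff.
module_name = "activity"
entry = {"path": "/activityCenter/info1", "method": "post"}

key = (entry["method"].strip().upper(), f"/{module_name}{entry['path'].strip()}")
print(key)  # ('POST', '/activity/activityCenter/info1')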

7. Jenkinsfile pipeline

pipeline {
    agent any

    environment {
        VENV = 'venv'
        PERSISTENT_LOGS = "/var/jenkins_data/api_check/logs"
    }

    stages {

        stage('Create virtualenv') {
            steps {
                sh '''
                    echo "Creating Python virtual environment..."
                    python3 -m venv ${VENV}
                    python3 --version
                    echo "Virtual environment created: ${VENV}"
                '''
            }
        }

        stage('Setup') {
            steps {
                sh '''
                    mkdir -p ${PERSISTENT_LOGS}
                    # Symlink: the code still writes to ./logs/, but files land outside the workspace
                    ln -sf ${PERSISTENT_LOGS} ${WORKSPACE}/logs
                '''
            }
        }

        stage('Install dependencies') {
            steps {
                sh '''
                    echo "Activating the virtualenv and upgrading pip..."
                    # Use '.' (POSIX) instead of 'source' so this also works when sh is not bash
                    . ${VENV}/bin/activate
                    pip install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn
                    pip cache purge
                    pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn
                    echo "Dependencies installed"
                '''
            }
        }

        stage('Run fetch_swagger to pull the docs') {
            steps {
                script {
                    // cd to the repo root first, then run the script as a module (-m)
                    def exitCode = sh(
                        script: """
                            cd "${WORKSPACE}"
                            . venv/bin/activate
                            python -m utils.fetch_swagger
                        """,
                        returnStatus: true  // key point: return the exit code instead of throwing
                    )

                    if (exitCode == 0) {
                        echo "Docs fetched successfully"
                        // build continues
                    } else if (exitCode == 1) {
                        echo "Fetching the docs failed"
                        currentBuild.result = 'UNSTABLE'
                        // build continues, marked yellow
                    } else {
                        echo "Fetch error (exit code: ${exitCode})"
                        error("Fetching the docs failed; the command may be missing or the environment broken, aborting the build")
                    }
                }
            }
        }

        stage('Run swagger_diff to compare the docs') {
            steps {
                script {
                    // cd to the repo root first, then run the script as a module (-m)
                    def exitCode = sh(
                        script: """
                            cd "${WORKSPACE}"
                            . venv/bin/activate
                            python -m utils.swagger_diff
                        """,
                        returnStatus: true  // key point: return the exit code instead of throwing
                    )

                    if (exitCode == 0) {
                        echo "No differences in the API docs"
                        // build continues
                    } else if (exitCode == 1) {
                        echo "Differences found in the API docs"
                        currentBuild.result = 'UNSTABLE'
                        // build continues, marked yellow
                    } else {
                        echo "Diff error (exit code: ${exitCode})"
                        error("Comparing the docs failed; the command may be missing or the environment broken, aborting the build")
                    }
                }
            }
        }

    }

    post {
        success {
            echo '✅ Build succeeded, no differences found!'

        }
        unstable {
            echo '⚠️ Build finished, but differences were detected'

        }
        failure {
            echo '❌ Build failed! Possibly a script error, dependency problem, or Python exception'

        }

        always {
            echo "Build status: ${currentBuild.result}"

            // Archive all log files
            archiveArtifacts artifacts: 'logs/*.log', allowEmptyArchive: true


            script {
                // Message title and status icon
                def statusTitle = "Build result: ${currentBuild.currentResult}"
                def statusEmoji = currentBuild.currentResult == 'SUCCESS' ? '✅' : '❌'

                // Build the Markdown message as a list of lines
                def markdownText = [
                    "# ${statusTitle} ${statusEmoji}", // level-1 title with result and icon
                    "> ## Project info", // level-2 title
                    "- **Job name**: `${env.JOB_NAME}`",
                    "- **Build number**: `${currentBuild.displayName}`", // e.g. #15
                    "- **Trigger**: ${currentBuild.getBuildCauses().collect{it.shortDescription}.join(', ')}",
                    "> ", // blank line
                    "## Build details", // level-2 title
                    "- **Start time**: ${new Date(currentBuild.startTimeInMillis).format('yyyy-MM-dd HH:mm:ss')}",
                    "- **Duration**: ${currentBuild.durationString}", // e.g. "1 min 20 sec"
                    "- **Console log**: [view full log](${currentBuild.absoluteUrl}console)",
                    "> ", // blank line
                    "**Please keep an eye on the build status!**"
                ]

                // Send the Markdown message
                wxwork(
                    robot: 'Api_Swagger_Bot', // replace with the robot ID configured in Jenkins
                    type: 'markdown', // message type: markdown
                    text: markdownText // the Markdown lines built above
                )
            }
        }
    }
}

8. Scheduled execution

Cron expression for 9:00 every morning: 0 9 * * *


9. Jenkins usage note

Once a reported API change has been reviewed, you can enable workspace cleaning in the job configuration. That removes the previous baseline_swagger.json, so the next run will take the latest API documentation as the new baseline.

10. Log examples:

Fetch run: [screenshot]

Diff run: [screenshot]

WeCom notification: [screenshot]
