f.write(f"- {dir_path}: {stats['count']} 个文件, {stats['sleep_count']} 个sleep语句\n")f.write(f"- {ext}: {stats['count']} 个文件, {stats['sleep_count']} 个sleep语句\n")print(f"发现 {results['total_sleep_count']} 个
# Cursor+MCP Classic Practice: A Hands-On Report on Optimizing Sleep in Automated Test Cases
## 📖 Introduction
This article documents a complete Cursor+MCP (Model Context Protocol) hands-on exercise. The goal was to use AI-assisted tooling to optimize the sleep times in the automated test cases under a specified directory and thereby reduce test execution time. The exercise went through several rounds of trial and error and strategy adjustment, and ultimately completed the smart replacement of 429 sleep statements.
## 🎯 Objectives
### Primary goals
- **Performance**: reduce automated test execution time, with an expected improvement of 30-50%
- **Technical exploration**: learn how to combine Cursor and MCP and explore new AI-assisted development workflows
- **Code quality**: replace fixed-duration sleeps with a smart-wait mechanism (see the before/after sketch after the targets list below)
### Concrete targets
- Scan for and identify all sleep statements
- Create smart-wait utilities and keywords
- Replace sleep statements in batches
- Verify functional completeness and the performance gain
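To make the intended optimization concrete, here is a minimal before/after sketch. It is not the project's actual utility (that comes in Phase 3), and `interface_is_up` is a hypothetical readiness check used only for illustration:
```python
import time

def interface_is_up() -> bool:
    """Hypothetical readiness check; a real one would query the device or API under test."""
    return False  # placeholder

# Before: always burns the full 30 seconds
# time.sleep(30)

# After: poll the condition and return as soon as it holds; 30 s becomes the worst case
def wait_until(check, timeout: float = 30, interval: float = 0.5) -> bool:
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

wait_until(interface_is_up, timeout=30)
```
The point of the change is that a condition-based wait only costs as long as the condition actually takes, while a fixed sleep always costs its full duration.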
## 🏗️ Technical Architecture
### The Cursor+MCP combination
- **Cursor**: an AI-driven code editor providing intelligent code analysis and refactoring
- **MCP**: the Model Context Protocol, which lets the AI communicate with external tools securely
- **Why combine them**: AI understanding plus tool execution enables intelligent code optimization
### Project structure
```
UmeAutoTest/
├── rfrebuild/testcase/01-无线网元管理(ranoss-rem)/01-rem-aisg/ # target directory
├── rfrebuild/keyword/ # keyword files
├── autotestrebuild/umelib/ # Python library files
└── various scanning and optimization tools
```
## 📋 Hands-On Process
### Phase 1: Environment Preparation and MCP Configuration (100% complete)
#### 1.1 Environment Setup
##### 1.1.1 Python virtual environment configuration
```bash
# check the current Python version
python --version
# create the virtual environment (if it does not exist yet)
python -m venv new_env
# activate the virtual environment (Windows PowerShell)
# note: if you hit an execution-policy restriction, run the following first
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# or call the virtual environment's Python directly
new_env\Scripts\python.exe --version
new_env\Scripts\python.exe -m pip --version
# install the required packages
# (asyncio, pathlib and typing already ship with Python 3, so the second install is not strictly needed)
new_env\Scripts\python.exe -m pip install --upgrade pip
new_env\Scripts\python.exe -m pip install asyncio pathlib typing
```
##### 1.1.2 Basic MCP environment configuration
```bash
# install the MCP-related dependency
new_env\Scripts\python.exe -m pip install mcp
# verify the installation
new_env\Scripts\python.exe -c "import mcp; print('MCP installed successfully')"
```
#### 1.2 MCP Configuration File
##### 1.2.1 Create the MCP configuration file
Create `mcp-config.json` in the project root:
```json
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem"],
"env": {
"MCP_FILESYSTEM_ROOT": "."
}
},
"git": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-git"],
"env": {
"MCP_GIT_ROOT": "."
}
}
}
}
```
##### 1.2.2 Create the MCP dependency file
Create `mcp_requirements.txt`:
```txt
# MCP-related dependencies (these ship with Python 3's standard library; listed for documentation)
asyncio
pathlib
typing
# code analysis tools (also standard-library modules)
ast
re
json
os
sys
```
#### 1.3 Basic MCP Server
##### 1.3.1 Create the Python MCP server
Create `mcp_server.py`:
```python
#!/usr/bin/env python3
"""
MCP server: scans the project for sleep statements
"""
import asyncio
import json
import os
import re
from pathlib import Path
from typing import Dict, List, Any


class SleepScannerMCP:
    def __init__(self):
        self.project_root = Path(".")
        self.scan_results = {}

    async def scan_project(self) -> Dict[str, Any]:
        """Scan the project for all sleep statements."""
        print("Starting project scan...")
        # Scan Python test files
        python_files = list(self.project_root.rglob("*.py"))
        test_files = [f for f in python_files if 'test' in f.name.lower()]
        # Scan Robot files
        robot_files = list(self.project_root.rglob("*.robot"))
        print(f"Found {len(test_files)} Python test files")
        print(f"Found {len(robot_files)} Robot files")
        # Collect sleep statements
        all_sleep_statements = []
        # Scan Python files
        for file_path in test_files:
            sleep_statements = self.find_sleep_statements(file_path)
            if sleep_statements:
                all_sleep_statements.extend(sleep_statements)
        # Scan Robot files
        for file_path in robot_files:
            sleep_statements = self.find_sleep_statements(file_path)
            if sleep_statements:
                all_sleep_statements.extend(sleep_statements)
        # Analyze the results
        analysis = self.analyze_sleep_effectiveness(all_sleep_statements)
        # Generate a report
        report = self.generate_optimization_report(analysis)
        return {
            "total_files_scanned": len(test_files) + len(robot_files),
            "total_sleep_statements": len(all_sleep_statements),
            "analysis": analysis,
            "report": report
        }

    def find_sleep_statements(self, file_path: Path) -> List[Dict[str, Any]]:
        """Find sleep statements in a single file."""
        sleep_statements = []
        try:
            with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
                content = f.read()
            lines = content.split('\n')
            for line_num, line in enumerate(lines, 1):
                if 'sleep' in line.lower():
                    sleep_statements.append({
                        'file': str(file_path),
                        'line': line_num,
                        'content': line.strip(),
                        'file_type': file_path.suffix
                    })
        except Exception as e:
            print(f"Error reading file {file_path}: {e}")
        return sleep_statements

    def analyze_sleep_effectiveness(self, sleep_statements: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Classify sleep statements by how likely they are to be necessary."""
        analysis = {
            'necessary': [],
            'potentially_unnecessary': [],
            'optimizable': []
        }
        for stmt in sleep_statements:
            content = stmt['content'].lower()
            # Simple heuristic classification
            if 'time.sleep' in content and any(str(i) in content for i in range(1, 11)):
                # Short sleep, probably necessary
                analysis['necessary'].append(stmt)
            elif 'sleep' in content and any(str(i) in content for i in range(10, 61)):
                # Medium sleep, possibly unnecessary
                analysis['potentially_unnecessary'].append(stmt)
            else:
                # Long sleep, should be optimized
                analysis['optimizable'].append(stmt)
        return analysis

    def generate_optimization_report(self, analysis: Dict[str, Any]) -> str:
        """Generate the optimization report."""
        report = f"""# Sleep Statement Optimization Report
## Scan Statistics
- Necessary sleep statements: {len(analysis['necessary'])}
- Potentially unnecessary sleep statements: {len(analysis['potentially_unnecessary'])}
- Optimizable sleep statements: {len(analysis['optimizable'])}
## Recommendations
1. Replace optimizable sleep statements with a smart-wait mechanism
2. Review the potentially unnecessary sleep statements
3. Keep the necessary sleep statements but tune their wait times
## Details
"""
        for category, statements in analysis.items():
            if statements:
                report += f"\n### {category.title()}\n"
                for stmt in statements[:5]:  # show at most 5 examples per category
                    report += f"- {stmt['file']}:{stmt['line']} - {stmt['content']}\n"
                if len(statements) > 5:
                    report += f"- ... and {len(statements) - 5} more\n"
        return report


async def main():
    """Entry point."""
    scanner = SleepScannerMCP()
    results = await scanner.scan_project()
    # Save the results
    with open('sleep_optimization_report.md', 'w', encoding='utf-8') as f:
        f.write(results['report'])
    print(f"Scan complete! Found {results['total_sleep_statements']} sleep statements")
    print("Report saved to: sleep_optimization_report.md")


if __name__ == "__main__":
    asyncio.run(main())
```
##### 1.3.2 Verify the MCP server
```bash
# run the MCP server as a smoke test
new_env\Scripts\python.exe mcp_server.py
# inspect the generated report
type sleep_optimization_report.md
```
#### 1.4 Environment Verification
The items below are a checklist; a concrete sketch of these checks follows the block.
```bash
# verify the MCP configuration
# test the MCP connection from within Cursor
# check filesystem access permissions
# verify that the Python environment is complete
```
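Since the block above is only a checklist, here is a minimal Python sketch of such checks. The file and directory names (`mcp-config.json`, `rfrebuild/testcase`) follow the ones used earlier in this article; the checks themselves are an assumption, not the exact verification that was run:
```python
import importlib.util
import json
from pathlib import Path

# 1. Is the mcp Python package importable from this environment?
print("mcp importable:", importlib.util.find_spec("mcp") is not None)

# 2. Is mcp-config.json present and valid JSON?
config_path = Path("mcp-config.json")
if config_path.exists():
    servers = json.loads(config_path.read_text(encoding="utf-8")).get("mcpServers", {})
    print("configured MCP servers:", list(servers))
else:
    print("mcp-config.json not found")

# 3. Is the target test directory reachable from the project root?
print("target directory exists:", Path("rfrebuild/testcase").exists())
```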
### Phase 2: Code Analysis and Problem Identification (100% complete)
#### 2.1 Initial Scan
##### 2.1.1 Run the basic scan
```bash
# run the basic MCP server
new_env\Scripts\python.exe mcp_server.py
# view the scan results
echo "Scan complete, checking the report file..."
dir *.md
type sleep_optimization_report.md
```
##### 2.1.2 Analyze the initial results
**Problems found** (cross-checked with the snippet below):
- Only 379 Python files were scanned (the project actually contains 4,358)
- 2,813 sleep statements were found
- Scan coverage was clearly insufficient
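As a quick cross-check of the coverage gap (a throwaway sketch, not part of the tooling described here), the candidate files can be counted directly and compared against the basic scanner's name filter:
```python
from pathlib import Path

root = Path(".")
py_files = list(root.rglob("*.py"))
robot_files = list(root.rglob("*.robot"))
print(f"{len(py_files)} .py files and {len(robot_files)} .robot files in total")
# Files the basic scanner would have picked up (its filter: 'test' in the file name)
print(sum('test' in f.name.lower() for f in py_files), "files match the 'test' name filter")
```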
#### 2.2 Comprehensive Scan Verification
##### 2.2.1 Create the comprehensive scanner
Create `comprehensive_scan.py`:
```python
#!/usr/bin/env python3
"""
Comprehensive scanner: verifies the real distribution of sleep statements
"""
import os
import re
from pathlib import Path
from typing import Dict, List, Any


class ComprehensiveScanner:
    def __init__(self):
        self.project_root = Path(".")
        self.file_extensions = ['.py', '.robot', '.txt', '.xml', '.json', '.sh', '.bat', '.yml', '.yaml', '.conf', '.cfg']

    def scan_all_files(self) -> Dict[str, Any]:
        """Scan every relevant file."""
        print("Starting comprehensive scan...")
        all_files = []
        sleep_files = []
        total_sleep_count = 0
        # Recursively scan every file with a matching extension
        for ext in self.file_extensions:
            files = list(self.project_root.rglob(f"*{ext}"))
            all_files.extend(files)
            for file_path in files:
                sleep_count = self.contains_sleep(file_path)
                if sleep_count > 0:
                    sleep_files.append({
                        'file': str(file_path),
                        'sleep_count': sleep_count,
                        'extension': ext
                    })
                    total_sleep_count += sleep_count
        # Analyze the results
        analysis = self.analyze_sleep_coverage(all_files, sleep_files, total_sleep_count)
        return {
            'total_files': len(all_files),
            'files_with_sleep': len(sleep_files),
            'total_sleep_count': total_sleep_count,
            'analysis': analysis,
            'sleep_files': sleep_files
        }

    def contains_sleep(self, file_path: Path) -> int:
        """Count the sleep statements in a file."""
        try:
            with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
                content = f.read()
            # Look for sleep calls / keywords with regular expressions
            sleep_patterns = [
                r'\btime\.sleep\s*\(',
                r'\bsleep\s+\d+',
                r'\bsleep\s+\d+s',
                r'\bsleep\s+\d+ms'
            ]
            count = 0
            for pattern in sleep_patterns:
                matches = re.findall(pattern, content, re.IGNORECASE)
                count += len(matches)
            return count
        except Exception as e:
            print(f"Error reading file {file_path}: {e}")
            return 0

    def analyze_sleep_coverage(self, all_files: List[Path], sleep_files: List[Dict], total_sleep: int) -> Dict[str, Any]:
        """Analyze sleep coverage."""
        # Statistics by file type
        extension_stats = {}
        for file_info in sleep_files:
            ext = file_info['extension']
            if ext not in extension_stats:
                extension_stats[ext] = {'count': 0, 'sleep_count': 0}
            extension_stats[ext]['count'] += 1
            extension_stats[ext]['sleep_count'] += file_info['sleep_count']
        # Statistics by directory
        directory_stats = {}
        for file_info in sleep_files:
            dir_path = str(Path(file_info['file']).parent)
            if dir_path not in directory_stats:
                directory_stats[dir_path] = {'count': 0, 'sleep_count': 0}
            directory_stats[dir_path]['count'] += 1
            directory_stats[dir_path]['sleep_count'] += file_info['sleep_count']
        return {
            'extension_stats': extension_stats,
            'directory_stats': dict(sorted(directory_stats.items(), key=lambda x: x[1]['sleep_count'], reverse=True)[:20]),
            'coverage_percentage': (len(sleep_files) / len(all_files) * 100) if all_files else 0
        }

    def find_missing_sleep(self, all_files: List[Path], sleep_files: List[Dict]) -> List[str]:
        """Look for files containing sleep that the scan may have missed."""
        sleep_file_paths = {Path(info['file']) for info in sleep_files}
        missing_files = []
        for file_path in all_files:
            if file_path not in sleep_file_paths:
                # Check whether the file might still contain sleep
                if file_path.suffix in ['.py', '.robot', '.txt']:
                    try:
                        with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
                            content = f.read()
                        if 'sleep' in content.lower():
                            missing_files.append(str(file_path))
                    except Exception:
                        pass
        return missing_files


def main():
    """Entry point."""
    scanner = ComprehensiveScanner()
    results = scanner.scan_all_files()
    # Save the results
    with open('comprehensive_sleep_analysis_report.md', 'w', encoding='utf-8') as f:
        f.write(f"""# Comprehensive Sleep Scan Analysis Report
## Scan Statistics
- Total files: {results['total_files']}
- Files containing sleep: {results['files_with_sleep']}
- Total sleep statements: {results['total_sleep_count']}
- Coverage: {results['analysis']['coverage_percentage']:.2f}%
## By File Type
""")
        for ext, stats in results['analysis']['extension_stats'].items():
            f.write(f"- {ext}: {stats['count']} files, {stats['sleep_count']} sleep statements\n")
        f.write("\n## By Directory (top 20)\n")
        for dir_path, stats in results['analysis']['directory_stats'].items():
            f.write(f"- {dir_path}: {stats['count']} files, {stats['sleep_count']} sleep statements\n")
    print("Comprehensive scan complete!")
    print(f"Found {results['total_sleep_count']} sleep statements")
    print("Report saved to: comprehensive_sleep_analysis_report.md")


if __name__ == "__main__":
    main()
```
##### 2.2.2 Run the comprehensive scan
```bash
# run the comprehensive scanner
new_env\Scripts\python.exe comprehensive_scan.py
# view the scan results
type comprehensive_sleep_analysis_report.md
```
##### 2.2.3 Analyze the comprehensive scan results
**Striking findings**:
- There are actually 116,660 sleep statements
- Spread across 7,253 files
- The basic scanner had missed 87% of the Python files
#### 2.3 Problem Diagnosis
##### 2.3.1 Analyze the scanning-strategy problems
```python
# Problem-analysis code
def analyze_scanning_strategy():
    """Summarize what went wrong with the scanning strategy."""
    problems = [
        "1. The basic scanner only looked at files with specific naming patterns",
        "2. Many other file types containing sleep were missed",
        "3. The scan was too shallow; not all subdirectories were traversed recursively",
        "4. The file-type filter was too strict"
    ]
    for problem in problems:
        print(problem)
```
##### 2.3.2 Formulate an improvement strategy
- Redesign the scanning strategy to cover all file types
- Set up a real-time scanning mechanism so results reflect the latest state
- Add file-content validation to ensure the scan is accurate
### Phase 3: Smart Optimization Implementation (100% complete)
#### 3.1 Smart-Wait Tooling
##### 3.1.1 Create the Python smart-wait utility class
Create `smart_wait.py`:
```python
#!/usr/bin/env python3
"""
Smart-wait utility class: replaces fixed-duration sleep statements
"""
import time
import logging
from typing import Callable, Any, Optional
from functools import wraps

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class SmartWait:
    """Smart-wait utility class"""

    def __init__(self, default_timeout: int = 30, check_interval: float = 0.5):
        self.default_timeout = default_timeout
        self.check_interval = check_interval

    def wait_for_condition(self, condition_func: Callable[[], bool],
                           condition_name: str = "condition",
                           timeout: Optional[int] = None) -> bool:
        """
        Wait until a condition is met.
        Args:
            condition_func: callable that returns True once the condition holds
            condition_name: human-readable name used in log messages
            timeout: maximum seconds to wait (defaults to default_timeout)
        Returns:
            True if the condition was met within the timeout, otherwise False.
        """
        # NOTE: the body below reconstructs the standard polling pattern implied by
        # default_timeout / check_interval; the original listing was truncated at this point.
        timeout = timeout if timeout is not None else self.default_timeout
        start_time = time.time()
        while time.time() - start_time < timeout:
            if condition_func():
                logger.info(f"{condition_name} met after {time.time() - start_time:.1f}s")
                return True
            time.sleep(self.check_interval)
        logger.warning(f"Timed out after {timeout}s waiting for {condition_name}")
        return False
```
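For reference, a usage sketch of `SmartWait` as defined above. The `job_finished` condition is a hypothetical placeholder; real conditions would come from the project's own keywords and libraries:
```python
from smart_wait import SmartWait

def job_finished() -> bool:
    # Placeholder condition; in real test code this would query the system under test
    return False

waiter = SmartWait(default_timeout=30, check_interval=0.5)

# Instead of: time.sleep(30)
if not waiter.wait_for_condition(job_finished, condition_name="job finished", timeout=30):
    raise TimeoutError("job did not finish within 30 s")
```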