diff --git a/.github/skills/fix-pyright/SKILL.md b/.github/skills/fix-pyright/SKILL.md
new file mode 100644
index 000000000000..4f51830cd3ba
--- /dev/null
+++ b/.github/skills/fix-pyright/SKILL.md
@@ -0,0 +1,203 @@
+---
+name: fix-pyright
+description: Automatically fix pyright type checking issues in any Azure SDK for Python package following Azure SDK Python patterns.
+---
+
+# Fix Pyright Issues Skill
+
+This skill automatically fixes pyright type checking errors in any Azure SDK for Python package by analyzing existing code patterns and applying fixes with 100% confidence.
+
+## Overview
+
+Intelligently fixes pyright issues by:
+1. Getting the package path or GitHub issue URL from the user
+2. Reading and analyzing the issue details (if an issue URL is provided)
+3. Setting up or using an existing virtual environment
+4. Installing required dependencies
+5. Running pyright on the package
+6. Analyzing the pyright output to identify type errors
+7. Searching the codebase for existing type annotation patterns
+8. Applying fixes only with 100% confidence
+9. Re-running pyright to verify fixes
+10. Creating a pull request
+11. Providing a summary of what was fixed
+
+## Running Pyright
+
+**Command:**
+```powershell
+cd <path-to-package>
+azpysdk --isolate pyright .
+```
+
+> **Note:** `azpysdk pyright` runs with a pinned version of pyright at the package level only. To focus on specific files, run the full check and filter the output by file path.
+
+**Using Latest Pyright:**
+```powershell
+azpysdk --isolate next-pyright .
+```
+
+> Use `azpysdk next-pyright` to run with the latest version of pyright. This is useful for catching issues that may be flagged by newer pyright versions.
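+
+A worked example helps make the skill's "type narrowing" guidance concrete. The snippet below is an illustrative sketch only; the `build_url` helper is hypothetical and not taken from any Azure SDK package.
+
```python
from typing import Optional

# Hypothetical helper, for illustration only. Pyright reports
# `reportOptionalMemberAccess` when an `Optional[str]` parameter is used
# without a None check:
#
#     def build_url(endpoint: Optional[str], path: str) -> str:
#         return endpoint.rstrip("/") + "/" + path  # error: "rstrip" is not a known attribute of "None"
#
# The narrowed version below satisfies pyright: after the `is None`
# branch returns/raises, pyright knows `endpoint` is `str`.
def build_url(endpoint: Optional[str], path: str) -> str:
    if endpoint is None:
        raise ValueError("endpoint must not be None")
    return endpoint.rstrip("/") + "/" + path
```
+
+Narrowing like this is preferable to `# pyright: ignore` because it documents the None-handling behavior instead of suppressing the diagnostic.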
+
+## Reference Documentation
+
+- [Official Pyright Documentation](https://microsoft.github.io/pyright/)
+- [Pyright Configuration](https://microsoft.github.io/pyright/#/configuration)
+- [Pyright Error Codes](https://microsoft.github.io/pyright/#/configuration?id=type-check-diagnostics-settings)
+- [Azure SDK Python Type Checking Guide](https://github.com/Azure/azure-sdk-for-python/blob/main/doc/dev/static_type_checking_cheat_sheet.md)
+
+## Fixing Strategy
+
+### Step 0: Get Package and Issue Details
+
+**Check whether the user provided any of the following in their request:**
+- GitHub issue URL (look for `https://github.com/Azure/azure-sdk-for-python/issues/...` in the user's message)
+- Package path or name (e.g. `sdk/storage/azure-storage-blob` or `azure-storage-blob`)
+- Virtual environment path (look for phrases like "using venv", "use env", "virtual environment at", or just the venv name)
+
+**If both the GitHub issue URL and the package path are missing:**
+Ask: "Please provide either the GitHub issue URL or the package path (e.g. sdk/storage/azure-storage-blob) for the pyright type checking problems you want to fix."
+
+**If a GitHub issue URL is provided:**
+Read the issue to understand which package and files/modules are affected, and the specific error codes to fix.
+
+**If only a package path is provided:**
+Run pyright checks directly on the package.
+
+**If the virtual environment is missing:**
+Ask: "Do you have an existing virtual environment path, or should I create 'env'?"
+
+### Step 1: CRITICAL - Activate Virtual Environment FIRST
+
+**IMMEDIATELY activate the virtual environment before ANY other command:**
+
+```powershell
+# Activate the provided virtual environment (e.g., env, venv)
+.\<venv-path>\Scripts\Activate.ps1
+
+# If creating a new virtual environment
+python -m venv env
+.\env\Scripts\Activate.ps1
+```
+
+**⚠️ IMPORTANT: ALL subsequent commands MUST run within the activated virtual environment.
+Never run commands outside the venv.**
+
+### Step 2: Install Dependencies (within activated venv)
+
+```powershell
+# Navigate to the package directory (within activated venv)
+cd <path-to-package>
+
+# Install dev dependencies from dev_requirements.txt (within activated venv)
+pip install -r dev_requirements.txt
+
+# Install the package in editable mode (within activated venv)
+pip install -e .
+```
+
+### Step 3: Identify Target Files (within activated venv)
+
+Based on the GitHub issue details, determine which files to check:
+
+**Option A - Run pyright on the package and filter output:**
+```powershell
+# Ensure you're in the package directory (within activated venv)
+cd <path-to-package>
+
+# Run pyright on the full package, then filter output for files from the issue
+azpysdk --isolate next-pyright .
+# Review output for errors in the specific files/modules mentioned in the issue
+```
+
+**Option B - Check modified files (if no specific target):**
+```powershell
+git diff --name-only HEAD | Select-String "<package-name>"
+git diff --cached --name-only | Select-String "<package-name>"
+```
+
+### Step 4: Run Pyright (within activated venv)
+
+**⚠️ Ensure the virtual environment is still activated before running:**
+
+```powershell
+# Navigate to the package directory
+cd <path-to-package>
+
+# Run pyright on the package (within activated venv)
+azpysdk --isolate pyright .
+# Filter output for the specific files/modules from the issue
+```
+
+### Step 5: Analyze Type Errors
+
+Parse the pyright output to identify:
+- Error type and rule (e.g., reportGeneralClassIssues, reportMissingTypeArgument, reportAttributeAccessIssue)
+- File path and line number
+- Specific error description
+- Expected vs actual types
+- **Cross-reference with the GitHub issue** (if provided) to ensure you're fixing the right problems
+
+### Step 6: Search for Existing Type Annotation Patterns
+
+Before fixing, search the codebase for how similar types are annotated:
+```powershell
+# Example: Search for similar function signatures
+grep -r "def similar_function" <path-to-package>/ -A 5
+
+# Search for type imports
+grep -r "from typing import" <path-to-package>/
+```
+
+Use the existing type annotation patterns to ensure consistency.
+
+### Step 7: Apply Fixes (ONLY if 100% confident)
+
+**ALLOWED ACTIONS:**
+✅ Fix type errors with 100% confidence
+✅ Use existing type annotation patterns as reference
+✅ Follow Azure SDK Python type checking guidelines
+✅ Add missing type hints
+✅ Fix incorrect type annotations
+✅ Add proper type narrowing (isinstance checks, assertions)
+✅ Make minimal, targeted changes
+
+**FORBIDDEN ACTIONS:**
+❌ Fix errors without complete confidence
+❌ Create new files for solutions
+❌ Import non-existent types or modules
+❌ Add new dependencies or imports outside the typing module
+❌ Use `# type: ignore` or `# pyright: ignore` without clear justification
+❌ Change code logic to avoid type errors
+❌ Delete code without clear justification
+
+### Step 8: Verify Fixes
+
+Re-run pyright to ensure:
+- The type error is resolved
+- No new errors were introduced
+- The code still functions correctly
+
+### Step 9: Summary
+
+Provide a summary:
+- GitHub issue being addressed
+- Number of type errors fixed
+- Number of errors remaining
+- Types of fixes applied (e.g., added type hints, fixed return types, added type narrowing)
+- Any errors that need manual review
+
+### Step 10: Create Pull Request
+
+> **⚠️ REQUIRED when a GitHub issue URL was provided:** You MUST create a pull request after validating fixes. This is not optional.
+
+Create a pull request with a descriptive title and body referencing the issue. Include what was fixed and confirm all pyright checks pass. The PR title should follow the format: "fix(<package-name>): Resolve pyright type errors (#<issue-number>)".
+
+## Notes
+
+- Always read the existing code to understand type annotation patterns before making changes
+- Prefer following existing patterns over adding new complex types
+- Use type hints compatible with the package's minimum supported Python version (e.g. `Optional[X]` instead of `X | None`, since the `X | None` syntax requires Python 3.10+)
+- If unsure about a fix, mark it for manual review
+- Some errors may require architectural changes - don't force fixes
+- Test the code after fixing to ensure functionality is preserved
+- Avoid using `# pyright: ignore` unless absolutely necessary and document why
diff --git a/doc/analyze_check_versions.md b/doc/analyze_check_versions.md
index 9ce213687481..62b9c5e54571 100644
--- a/doc/analyze_check_versions.md
+++ b/doc/analyze_check_versions.md
@@ -11,4 +11,3 @@ MyPy | 1.19.1 | 1.19.1 | 2026-07-13 |
 Pyright | 1.1.407 | 1.1.407 | 2026-07-13 |
 Sphinx | 8.2.0 | N/A | N/A |
 Black | 24.4.0 | N/A | N/A |
-Ruff | 0.15.11 | N/A | N/A |
diff --git a/doc/eng_sys_checks.md b/doc/eng_sys_checks.md
index 3515f24c3f8c..447ff1c81eda 100644
--- a/doc/eng_sys_checks.md
+++ b/doc/eng_sys_checks.md
@@ -597,6 +597,16 @@ The weekly pipeline also runs "next" variants of mypy, pylint, pyright, and sphi
 Results are posted as GitHub issues in the repository.
 These checks run with `continueOnError: true` and do not block PRs.
 
+#### Copilot auto-fix
+
+For `pylint`, `mypy`, `sphinx`, and `pyright` failures, the weekly pipeline automatically assigns the Copilot coding agent to open a fix PR.
+
+- **Review the PR**: review and merge it like any other PR.
+- **To opt out**, add the `copilot-auto-fix-disabled` label to the issue.
+- **If Copilot fails** to be assigned, the pipeline logs a warning and retries automatically on the next run.
+- **Version bumps**: when the checker version changes, Copilot is unassigned and reassigned to trigger a fresh fix attempt with the updated errors.
+- **Duplicate detection**: if an open PR already references the issue or mentions the package and check type, Copilot is not reassigned.
+
 To test a "next" check locally, use `--next`:
 
 ```bash
diff --git a/eng/tools/azure-sdk-tools/gh_tools/vnext_issue_creator.py b/eng/tools/azure-sdk-tools/gh_tools/vnext_issue_creator.py
index 66f0f1e08779..2418036bf514 100644
--- a/eng/tools/azure-sdk-tools/gh_tools/vnext_issue_creator.py
+++ b/eng/tools/azure-sdk-tools/gh_tools/vnext_issue_creator.py
@@ -25,6 +25,286 @@
 CHECK_TYPE = Literal["mypy", "pylint", "pyright", "sphinx"]
 
+# ---------------------------------------------------------------------------
+# Auto-fix automation constants
+# ---------------------------------------------------------------------------
+
+#: Label constants for auto-fix state management.
+LABEL_AUTO_FIX = "copilot-auto-fix"
+LABEL_AUTO_FIX_DISABLED = "copilot-auto-fix-disabled"
+
+#: Managed block markers for Copilot instructions in issue bodies.
+COPILOT_AUTOFIX_START = "<!-- copilot-auto-fix:start -->"
+COPILOT_AUTOFIX_END = "<!-- copilot-auto-fix:end -->"
+
+#: Copilot coding-agent bot login.
+DEFAULT_COPILOT_LOGIN = "copilot-swe-agent"
+
+
+# ---------------------------------------------------------------------------
+# Auto-fix helpers
+# ---------------------------------------------------------------------------
+
+
+def _resolve_copilot_node_id(issue, github_instance) -> Optional[str]:
+    """Return the Copilot bot node ID from GitHub's assignable actor data."""
+    login = DEFAULT_COPILOT_LOGIN
+    issue_node_id = issue.raw_data["node_id"]
+
+    # Dynamically query GitHub for the assignable actor node ID for the Copilot bot.
+    try:
+        _, data = github_instance._Github__requester.graphql_query(
+            """
+            query($assignableId: ID!, $login: String!)
+            {
+              node(id: $assignableId) {
+                ... on Issue {
+                  suggestedActors(first: 10, query: $login) {
+                    nodes {
+                      __typename
+                      ... on Bot {
+                        id
+                        login
+                      }
+                    }
+                  }
+                }
+              }
+            }
+            """,
+            {"assignableId": issue_node_id, "login": login},
+        )
+        actors = data.get("data", {}).get("node", {}).get("suggestedActors", {}).get("nodes", [])
+        for actor in actors:
+            if actor.get("login", "").lower() == login.lower() and actor.get("id"):
+                return actor["id"]
+    except Exception as e:
+        logging.warning(f"Failed to resolve Copilot node ID dynamically: {e}")
+
+    logging.warning(f"Could not find {DEFAULT_COPILOT_LOGIN} in suggested actors for issue #{issue.number}")
+    return None
+
+
+def is_auto_fix_eligible(
+    issue_labels: list[str],
+) -> bool:
+    """Return True when the package/check combination qualifies for auto-fix."""
+    if LABEL_AUTO_FIX_DISABLED in issue_labels:
+        return False
+    return True
+
+
+def find_existing_fix_prs(
+    repo,
+    issue_number: int,
+    package_name: str,
+    check_type: str,
+) -> list:
+    """Search for open PRs that likely address the same vnext failure.
+
+    Returns a (possibly empty) list of matching PR objects.
+    """
+    matches = []
+    try:
+        open_prs = repo.get_pulls(state="open", sort="created", direction="desc")
+        for pr in open_prs:
+            body = pr.body or ""
+            title = pr.title or ""
+            search_text = f"{title} {body}".lower()
+
+            # 1. PR explicitly references the issue number
+            issue_ref = f"#{issue_number}"
+            has_issue_ref = issue_ref in title or issue_ref in body
+
+            # 2. PR mentions package + check type
+            has_pkg_and_check = package_name.lower() in search_text and check_type.lower() in search_text
+
+            if has_issue_ref or has_pkg_and_check:
+                matches.append(pr)
+    except GithubException as e:
+        logging.warning(f"Failed to search PRs for duplicate detection: {e}")
+
+    return matches
+
+
+def build_copilot_instructions(package_path: str, check_type: str) -> str:
+    """Build the Copilot auto-fix instruction block for the issue body."""
+    skill_name = f"fix-{check_type}"
+
+    inner = (
+        f"## Copilot instructions:\n\n"
+        f"Use the `{skill_name}` skill to resolve `{check_type}` failures "
+        f"in `{package_path}`.\n"
+        f"Do not make unrelated formatting, changelog, version, or "
+        f"generated-code changes.\n\n"
+        f"Open a PR that links this issue and includes:\n\n"
+        f"> Automated Fix: This PR was automatically generated by Copilot "
+        f"in response to a vnext compatibility issue.\n\n"
+        f"If a safe fix is not possible, describe the "
+        f"attempted commands, the failure category, and the recommended "
+        f"manual next step."
+    )
+    return f"\n\n{COPILOT_AUTOFIX_START}\n{inner}\n{COPILOT_AUTOFIX_END}"
+
+
+def _upsert_copilot_instructions(body: str, instructions: str) -> str:
+    """Replace existing Copilot instruction block or append if absent.
+
+    Uses managed HTML-comment markers to locate the block, preserving any
+    human-authored content outside the markers.
+    """
+    start_idx = body.find(COPILOT_AUTOFIX_START)
+    end_idx = body.find(COPILOT_AUTOFIX_END)
+    if start_idx != -1 and end_idx != -1:
+        # Keep content after the end marker so text outside the block survives.
+        tail = body[end_idx + len(COPILOT_AUTOFIX_END):]
+        return body[:start_idx].rstrip() + instructions + tail
+    return body + instructions
+
+
+def reconcile_auto_fix_labels(issue, eligible: bool) -> None:
+    """Add or verify automation labels on the issue.
+
+    Preserves all existing labels; only adds auto-fix labels when eligible.
+ """ + current_labels = [lbl.name if hasattr(lbl, "name") else str(lbl) for lbl in issue.labels] + + if eligible: + labels_to_add = [] + if LABEL_AUTO_FIX not in current_labels: + labels_to_add.append(LABEL_AUTO_FIX) + for label in labels_to_add: + try: + issue.add_to_labels(label) + logging.info(f"Added label '{label}' to issue #{issue.number}") + except GithubException as e: + logging.warning(f"Failed to add label '{label}' to issue #{issue.number}: {e}") + + +def _is_copilot_already_assigned(issue) -> bool: + """Check whether the Copilot login is already among the issue assignees.""" + for assignee in issue.assignees: + name = assignee.login if hasattr(assignee, "login") else str(assignee) + if name.lower() == DEFAULT_COPILOT_LOGIN.lower(): + return True + return False + + +def _unassign_copilot(issue, github_instance, copilot_node_id: str) -> bool: + """Remove the Copilot coding agent from the issue assignees. + + Uses the GraphQL ``removeAssigneesFromAssignable`` mutation. + Treats "not currently assigned" as success (idempotent). + """ + issue_node_id = issue.raw_data["node_id"] + try: + github_instance._Github__requester.graphql_named_mutation( + "removeAssigneesFromAssignable", + { + "assignableId": issue_node_id, + "assigneeIds": [copilot_node_id], + }, + output_schema="assignable { ... on Issue { id } }", + ) + logging.info(f"Unassigned {DEFAULT_COPILOT_LOGIN} from issue #{issue.number}") + return True + except Exception as e: + logging.warning(f"Failed to unassign {DEFAULT_COPILOT_LOGIN} from issue #{issue.number}: {e}") + return False + + +def assign_copilot( + issue, + github_instance, + copilot_node_id: str, + package_name: str, + check_type: str, + force_reassign: bool = False, +) -> bool: + """Attempt to assign the Copilot coding agent to the issue. + + Uses the GraphQL ``addAssigneesToAssignable`` mutation because the + Copilot bot (``copilot-swe-agent``) is not assignable via the REST + assignees endpoint. 
+ + When *force_reassign* is True and Copilot is already assigned, the + agent is first unassigned then reassigned so that a new Copilot + session is triggered (e.g. after a checker version bump). + + Returns True on success, False on failure after logging a warning. + """ + if _is_copilot_already_assigned(issue): + if not force_reassign: + logging.info(f"Copilot already assigned to issue #{issue.number}, skipping") + return True + logging.info(f"Copilot already assigned to issue #{issue.number}, " f"re-assigning to trigger new session") + if not _unassign_copilot(issue, github_instance, copilot_node_id): + return False + + issue_node_id = issue.raw_data["node_id"] + try: + github_instance._Github__requester.graphql_named_mutation( + "addAssigneesToAssignable", + { + "assignableId": issue_node_id, + "assigneeIds": [copilot_node_id], + }, + output_schema="assignable { ... on Issue { id } }", + ) + logging.info(f"Assigned {DEFAULT_COPILOT_LOGIN} to issue #{issue.number} for {package_name}/{check_type}") + return True + except Exception as e: + logging.warning(f"Failed to assign {DEFAULT_COPILOT_LOGIN} to issue #{issue.number}: {e}") + return False + + +def _try_auto_fix( + repo, + issue, + github_instance, + package_name: str, + package_path: str, + check_type: str, + issue_labels: list[str], + version_changed: bool = False, +) -> None: + """Run the auto-fix eligibility → duplicate check → assign flow. + + When *version_changed* is True (e.g. checker version bump), Copilot is + unassigned and reassigned so a fresh session picks up the new errors. 
+ """ + eligible = is_auto_fix_eligible(issue_labels) + + if not eligible: + return + + # Duplicate PR detection + matching_prs = find_existing_fix_prs(repo, issue.number, package_name, check_type) + if matching_prs: + pr_urls = ", ".join(pr.html_url for pr in matching_prs) + logging.info(f"Skipping Copilot assignment for issue #{issue.number}: " f"matching PR(s) found: {pr_urls}") + return + + # Upsert Copilot instructions (replace existing block or append) + body = issue.body or "" + instructions = build_copilot_instructions(package_path, check_type) + updated_body = _upsert_copilot_instructions(body, instructions) + try: + issue.edit(body=updated_body) + except GithubException as exc: + logging.warning( + "Failed to update Copilot instructions for issue #%s: %s", + issue.number, + exc, + ) + + copilot_node_id = _resolve_copilot_node_id(issue, github_instance) + if not copilot_node_id: + return + + # Assign Copilot (force reassignment on version bumps) + if assign_copilot( + issue, github_instance, copilot_node_id, package_name, check_type, force_reassign=version_changed + ): + reconcile_auto_fix_labels(issue, eligible=True) + def get_version_running(check_type: CHECK_TYPE) -> str: commands = [sys.executable, "-m", check_type, "--version"] @@ -58,7 +338,7 @@ def get_build_link(check_type: CHECK_TYPE) -> str: ) -def get_merge_dates(year: str) -> typing.List[datetime.datetime]: +def get_merge_dates(year: int) -> typing.List[datetime.date]: """We'll merge the latest version of the type checker/linter quarterly on the Monday after release week. This function returns those 4 Mondays for the given year. 
@@ -81,7 +361,7 @@ def get_merge_dates(year: str) -> typing.List[datetime.datetime]: return merge_dates -def get_date_for_version_bump(today: datetime.datetime) -> str: +def get_date_for_version_bump(today: datetime.date) -> str: merge_dates = get_merge_dates(today.year) try: merge_date = min(date for date in merge_dates if date >= today) @@ -145,16 +425,16 @@ def create_vnext_issue(package_dir: str, check_type: CHECK_TYPE, check_version: """This is called when a client library fails a vnext check. An issue is created with the details or an existing issue is updated with the latest information.""" - package_path = pathlib.Path(package_dir) - package_name = package_path.name - service_directory = package_path.parent.name + package_dir_path = pathlib.Path(package_dir) + package_name = package_dir_path.name + service_directory = package_dir_path.parent.name auth = Auth.Token(os.environ["GH_TOKEN"]) g = Github(auth=auth) today = datetime.date.today() repo = g.get_repo("Azure/azure-sdk-for-python") - issues = repo.get_issues(state="open", labels=[check_type], creator="azure-sdk") + issues = repo.get_issues(state="open", labels=[check_type], creator="azure-sdk") # type: ignore[arg-type] vnext_issue = [issue for issue in issues if issue.title.split("needs")[0].strip() == package_name] version = check_version or get_version_running(check_type) @@ -186,6 +466,8 @@ def create_vnext_issue(package_dir: str, check_type: CHECK_TYPE, check_version: f"See the {guide_link} for more information." 
) + package_path = f"sdk/{service_directory}/{package_name}" + # create an issue for the library failing the vnext check if not vnext_issue: try: @@ -194,7 +476,7 @@ def create_vnext_issue(package_dir: str, check_type: CHECK_TYPE, check_version: logging.warning(f"Failed to get labels and assignees from CODEOWNERS for {package_name}: {e}") labels = [] assignees = [] - if "mgmt" in package_name: + if package_name.startswith("azure-mgmt-"): labels.append("Mgmt") labels.extend([check_type]) @@ -208,6 +490,10 @@ def create_vnext_issue(package_dir: str, check_type: CHECK_TYPE, check_version: logging.info(f"Assigned {assignee} to issue for {package_name}") except GithubException as e: logging.warning(f"Failed to assign {assignee} to issue for {package_name}: {e}") + + # Auto-fix: check eligibility and assign Copilot + issue_label_names = [lbl if isinstance(lbl, str) else lbl.name for lbl in issue.labels] + _try_auto_fix(repo, issue, g, package_name, package_path, check_type, issue_label_names) return # an issue exists, let's update it so it reflects the latest typing/linting errors @@ -219,10 +505,13 @@ def create_vnext_issue(package_dir: str, check_type: CHECK_TYPE, check_version: labels = [] assignees = [] + # Detect version change so Copilot can be re-triggered + old_title = vnext_issue[0].title vnext_issue[0].edit( title=title, body=template, ) + version_changed = old_title != title # Assign codeowners individually with error handling for assignee in assignees: @@ -232,6 +521,19 @@ def create_vnext_issue(package_dir: str, check_type: CHECK_TYPE, check_version: except GithubException as e: logging.warning(f"Failed to assign {assignee} to issue for {package_name}: {e}") + # Auto-fix: reconcile labels and retry assignment if no matching PR + issue_label_names = [lbl.name if hasattr(lbl, "name") else str(lbl) for lbl in vnext_issue[0].labels] + _try_auto_fix( + repo, + vnext_issue[0], + g, + package_name, + package_path, + check_type, + issue_label_names, + 
version_changed=version_changed, + ) + def close_vnext_issue(package_name: str, check_type: CHECK_TYPE) -> None: """This is called when a client library passes a vnext check. If an issue exists for the library, it is closed.""" @@ -241,7 +543,7 @@ def close_vnext_issue(package_name: str, check_type: CHECK_TYPE) -> None: repo = g.get_repo("Azure/azure-sdk-for-python") - issues = repo.get_issues(state="open", labels=[check_type], creator="azure-sdk") + issues = repo.get_issues(state="open", labels=[check_type], creator="azure-sdk") # type: ignore[arg-type] vnext_issue = [issue for issue in issues if issue.title.split("needs")[0].strip() == package_name] if vnext_issue: logging.info(f"{package_name} passes {check_type}. Closing existing GH issue #{vnext_issue[0].number}...") diff --git a/eng/tools/azure-sdk-tools/tests/test_vnext_auto_fix.py b/eng/tools/azure-sdk-tools/tests/test_vnext_auto_fix.py new file mode 100644 index 000000000000..9579b68b94db --- /dev/null +++ b/eng/tools/azure-sdk-tools/tests/test_vnext_auto_fix.py @@ -0,0 +1,338 @@ +# -------------------------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. 
+# -------------------------------------------------------------------------------------------- + +"""Tests for vnext issue auto-fix automation helpers.""" + +from __future__ import annotations + +from types import SimpleNamespace +from unittest.mock import MagicMock + +import pytest +from github import GithubException + +from gh_tools.vnext_issue_creator import ( + LABEL_AUTO_FIX, + LABEL_AUTO_FIX_DISABLED, + _is_copilot_already_assigned, + _try_auto_fix, + assign_copilot, + build_copilot_instructions, + find_existing_fix_prs, + is_auto_fix_eligible, + reconcile_auto_fix_labels, +) + + +# --------------------------------------------------------------------------- +# Helpers to build lightweight fakes +# --------------------------------------------------------------------------- + + +def _make_label(name: str) -> SimpleNamespace: + return SimpleNamespace(name=name) + + +def _make_assignee(login: str) -> SimpleNamespace: + return SimpleNamespace(login=login) + + +def _make_issue( + number: int = 1, + body: str = "", + labels: list | None = None, + assignees: list | None = None, + node_id: str = "I_abc123", +) -> MagicMock: + issue = MagicMock() + issue.number = number + issue.body = body + issue.labels = [_make_label(l) for l in (labels or [])] + issue.assignees = [_make_assignee(a) for a in (assignees or [])] + issue.html_url = f"https://github.com/test/repo/issues/{number}" + issue.raw_data = {"node_id": node_id} + return issue + + +def _make_github_instance() -> MagicMock: + """Create a mock Github instance with a requester that supports graphql_named_mutation.""" + g = MagicMock() + g._Github__requester = MagicMock() + g._Github__requester.graphql_query.return_value = ( + {}, + { + "data": { + "node": { + "suggestedActors": { + "nodes": [{"login": "copilot-swe-agent", "id": "BOT_dynamic"}], + } + } + } + }, + ) + return g + + +def _make_pr( + title: str = "", + body: str = "", + html_url: str = "https://github.com/test/repo/pull/99", +) -> SimpleNamespace: + 
return SimpleNamespace(title=title, body=body, html_url=html_url) + + +# --------------------------------------------------------------------------- +# Eligibility tests +# --------------------------------------------------------------------------- + + +class TestIsAutoFixEligible: + """Tests for is_auto_fix_eligible.""" + + def test_eligible_by_default(self): + assert is_auto_fix_eligible([]) is True + assert is_auto_fix_eligible(["pylint"]) is True + assert is_auto_fix_eligible(["mypy", "some-service-label"]) is True + + def test_opt_out_label(self): + assert is_auto_fix_eligible([LABEL_AUTO_FIX_DISABLED]) is False + + +# --------------------------------------------------------------------------- +# Duplicate PR detection tests +# --------------------------------------------------------------------------- + + +class TestFindExistingFixPrs: + + def test_match_by_issue_ref_in_title(self): + repo = MagicMock() + repo.get_pulls.return_value = [ + _make_pr(title="Fix pylint for azure-ai-test #42"), + ] + result = find_existing_fix_prs(repo, 42, "azure-ai-test", "pylint") + assert len(result) == 1 + + def test_match_by_issue_ref_in_body(self): + repo = MagicMock() + repo.get_pulls.return_value = [ + _make_pr(body="Fixes #42"), + ] + result = find_existing_fix_prs(repo, 42, "azure-ai-test", "pylint") + assert len(result) == 1 + + def test_match_by_package_and_check(self): + repo = MagicMock() + repo.get_pulls.return_value = [ + _make_pr(title="Fix azure-ai-test pylint errors"), + ] + result = find_existing_fix_prs(repo, 99, "azure-ai-test", "pylint") + assert len(result) == 1 + + def test_no_match(self): + repo = MagicMock() + repo.get_pulls.return_value = [ + _make_pr(title="Unrelated PR", body="Nothing here"), + ] + result = find_existing_fix_prs(repo, 42, "azure-ai-test", "pylint") + assert len(result) == 0 + + def test_github_exception_returns_empty(self): + repo = MagicMock() + repo.get_pulls.side_effect = GithubException(500, "error", None) + result = 
find_existing_fix_prs(repo, 42, "azure-ai-test", "pylint") + assert result == [] + + +# --------------------------------------------------------------------------- +# Copilot instruction builder tests +# --------------------------------------------------------------------------- + + +class TestBuildCopilotInstructions: + + @pytest.mark.parametrize("check_type", ["pylint", "mypy", "sphinx", "pyright"]) + def test_contains_required_elements(self, check_type): + result = build_copilot_instructions("sdk/ai/azure-ai-test", check_type) + + assert f"fix-{check_type}" in result + assert "sdk/ai/azure-ai-test" in result + assert "Automated Fix" in result + assert "Do not make unrelated" in result + + +# --------------------------------------------------------------------------- +# Label reconciliation tests +# --------------------------------------------------------------------------- + + +class TestReconcileAutoFixLabels: + + def test_adds_auto_fix_label(self): + issue = _make_issue(labels=["pylint"]) + reconcile_auto_fix_labels(issue, eligible=True) + issue.add_to_labels.assert_called_once_with(LABEL_AUTO_FIX) + + def test_skips_if_already_labeled(self): + issue = _make_issue(labels=["pylint", LABEL_AUTO_FIX]) + reconcile_auto_fix_labels(issue, eligible=True) + issue.add_to_labels.assert_not_called() + + def test_not_eligible_no_op(self): + issue = _make_issue(labels=["pylint"]) + reconcile_auto_fix_labels(issue, eligible=False) + issue.add_to_labels.assert_not_called() + issue.remove_from_labels.assert_not_called() + + +# --------------------------------------------------------------------------- +# Copilot assignment tests +# --------------------------------------------------------------------------- + + +class TestAssignCopilot: + + def test_success(self): + issue = _make_issue() + g = _make_github_instance() + assert assign_copilot(issue, g, "BOT_dynamic", "azure-ai-test", "pylint") is True + g._Github__requester.graphql_named_mutation.assert_called_once() + call_args 
= g._Github__requester.graphql_named_mutation.call_args + assert call_args[0][0] == "addAssigneesToAssignable" + assert call_args[0][1]["assigneeIds"] == ["BOT_dynamic"] + + def test_already_assigned_skips(self): + issue = _make_issue(assignees=["copilot-swe-agent"]) + g = _make_github_instance() + assert assign_copilot(issue, g, "BOT_dynamic", "azure-ai-test", "pylint") is True + g._Github__requester.graphql_named_mutation.assert_not_called() + + def test_failure_returns_false(self): + issue = _make_issue() + g = _make_github_instance() + g._Github__requester.graphql_named_mutation.side_effect = Exception("mutation failed") + assert assign_copilot(issue, g, "BOT_dynamic", "azure-ai-test", "pylint") is False + + def test_force_reassign_returns_false_when_unassign_fails(self): + issue = _make_issue(assignees=["copilot-swe-agent"]) + g = _make_github_instance() + g._Github__requester.graphql_named_mutation.side_effect = Exception("remove failed") + assert assign_copilot(issue, g, "BOT_dynamic", "azure-ai-test", "pylint", force_reassign=True) is False + g._Github__requester.graphql_named_mutation.assert_called_once() + + +# --------------------------------------------------------------------------- +# _is_copilot_already_assigned tests +# --------------------------------------------------------------------------- + + +class TestIsCopilotAlreadyAssigned: + + def test_assigned(self): + issue = _make_issue(assignees=["copilot-swe-agent"]) + assert _is_copilot_already_assigned(issue) is True + + def test_not_assigned(self): + issue = _make_issue(assignees=["human-user"]) + assert _is_copilot_already_assigned(issue) is False + + def test_case_insensitive(self): + issue = _make_issue(assignees=["Copilot-SWE-Agent"]) + assert _is_copilot_already_assigned(issue) is True + + +# --------------------------------------------------------------------------- +# Integration: _try_auto_fix tests +# --------------------------------------------------------------------------- + + +class 
TestTryAutoFix: + + def test_eligible_no_duplicate_assigns(self): + repo = MagicMock() + repo.get_pulls.return_value = [] + issue = _make_issue(labels=["pylint"]) + g = _make_github_instance() + + _try_auto_fix(repo, issue, g, "azure-ai-test", "sdk/ai/azure-ai-test", "pylint", ["pylint"]) + + # Labels reconciled + issue.add_to_labels.assert_any_call(LABEL_AUTO_FIX) + # Instructions appended + issue.edit.assert_called_once() + body_arg = issue.edit.call_args[1]["body"] + assert "Copilot instructions" in body_arg + # Copilot assigned via GraphQL + g._Github__requester.graphql_named_mutation.assert_called_once() + g._Github__requester.graphql_query.assert_called_once() + + def test_eligible_with_duplicate_pr_skips(self): + repo = MagicMock() + repo.get_pulls.return_value = [ + _make_pr(body="Fixes #1"), + ] + issue = _make_issue(number=1, labels=["pylint"]) + g = _make_github_instance() + + _try_auto_fix(repo, issue, g, "azure-ai-test", "sdk/ai/azure-ai-test", "pylint", ["pylint"]) + + # Should NOT assign Copilot + g._Github__requester.graphql_named_mutation.assert_not_called() + + def test_opt_out_label_prevents_assignment(self): + repo = MagicMock() + issue = _make_issue(labels=["pylint", LABEL_AUTO_FIX_DISABLED]) + g = _make_github_instance() + + _try_auto_fix( + repo, + issue, + g, + "azure-ai-test", + "sdk/ai/azure-ai-test", + "pylint", + ["pylint", LABEL_AUTO_FIX_DISABLED], + ) + + g._Github__requester.graphql_named_mutation.assert_not_called() + + def test_weekly_retry_reassigns_when_no_pr(self): + """Simulates a weekly re-run: issue already has copilot-auto-fix label + but no matching PR exists, so Copilot should be reassigned.""" + repo = MagicMock() + repo.get_pulls.return_value = [] + issue = _make_issue(labels=["pylint", LABEL_AUTO_FIX]) + g = _make_github_instance() + + _try_auto_fix(repo, issue, g, "azure-ai-test", "sdk/ai/azure-ai-test", "pylint", ["pylint", LABEL_AUTO_FIX]) + + g._Github__requester.graphql_named_mutation.assert_called_once() + + def 
test_assignment_failure_does_not_crash(self): + repo = MagicMock() + repo.get_pulls.return_value = [] + issue = _make_issue(labels=["pylint"]) + g = _make_github_instance() + g._Github__requester.graphql_named_mutation.side_effect = Exception("mutation failed") + + _try_auto_fix(repo, issue, g, "azure-ai-test", "sdk/ai/azure-ai-test", "pylint", ["pylint"]) + + issue.add_to_labels.assert_not_called() + + def test_missing_copilot_node_id_skips_assignment(self): + repo = MagicMock() + repo.get_pulls.return_value = [] + issue = _make_issue(labels=["pylint"]) + g = _make_github_instance() + g._Github__requester.graphql_query.return_value = ( + {}, + {"data": {"node": {"suggestedActors": {"nodes": []}}}}, + ) + + _try_auto_fix(repo, issue, g, "azure-ai-test", "sdk/ai/azure-ai-test", "pylint", ["pylint"]) + + g._Github__requester.graphql_query.assert_called_once() + g._Github__requester.graphql_named_mutation.assert_not_called() + issue.add_to_labels.assert_not_called()