diff --git a/AGENTS.md b/AGENTS.md index 27472ebec..2b076dc38 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -17,7 +17,7 @@ Each AI agent is a self-contained **integration subpackage** under `src/specify_ ``` src/specify_cli/integrations/ ├── __init__.py # INTEGRATION_REGISTRY + _register_builtins() -├── base.py # IntegrationBase, MarkdownIntegration, TomlIntegration, SkillsIntegration +├── base.py # IntegrationBase, MarkdownIntegration, TomlIntegration, YamlIntegration, SkillsIntegration ├── manifest.py # IntegrationManifest (file tracking) ├── claude/ # Example: SkillsIntegration subclass │ ├── __init__.py # ClaudeIntegration class @@ -48,6 +48,7 @@ The registry is the **single source of truth for Python integration metadata**. |---|---| | Standard markdown commands (`.md`) | `MarkdownIntegration` | | TOML-format commands (`.toml`) | `TomlIntegration` | +| YAML recipe files (`.yaml`) | `YamlIntegration` | | Skill directories (`speckit-/SKILL.md`) | `SkillsIntegration` | | Fully custom output (companion files, settings merge, etc.) | `IntegrationBase` directly | @@ -343,16 +344,82 @@ Command content with {SCRIPT} and {{args}} placeholders. """ ``` +### YAML Format + +Used by: Goose + +```yaml +version: 1.0.0 +title: "Command Title" +description: "Command description" +author: + contact: spec-kit +extensions: + - type: builtin + name: developer +activities: + - Spec-Driven Development +prompt: | + Command content with {SCRIPT} and {{args}} placeholders. +``` + ## Argument Patterns Different agents use different argument placeholders. 
The placeholder used in command files is always taken from `registrar_config["args"]` for each integration — check there first when in doubt: - **Markdown/prompt-based**: `$ARGUMENTS` (default for most markdown agents) - **TOML-based**: `{{args}}` (e.g., Gemini) +- **YAML-based**: `{{args}}` (e.g., Goose) - **Custom**: some agents override the default (e.g., Forge uses `{{parameters}}`) - **Script placeholders**: `{SCRIPT}` (replaced with actual script path) - **Agent placeholders**: `__AGENT__` (replaced with agent name) +## Special Processing Requirements + +Some agents require custom processing beyond the standard template transformations: + +### Copilot Integration + +GitHub Copilot has unique requirements: +- Commands use `.agent.md` extension (not `.md`) +- Each command gets a companion `.prompt.md` file in `.github/prompts/` +- Installs `.vscode/settings.json` with prompt file recommendations +- Context file lives at `.github/copilot-instructions.md` + +Implementation: Extends `IntegrationBase` with custom `setup()` method that: +1. Processes templates with `process_template()` +2. Generates companion `.prompt.md` files +3. Merges VS Code settings + +### Forge Integration + +Forge has special frontmatter and argument requirements: +- Uses `{{parameters}}` instead of `$ARGUMENTS` +- Strips `handoffs` frontmatter key (Forge-specific collaboration feature) +- Injects `name` field into frontmatter when missing + +Implementation: Extends `MarkdownIntegration` with custom `setup()` method that: +1. Inherits standard template processing from `MarkdownIntegration` +2. Adds extra `$ARGUMENTS` → `{{parameters}}` replacement after template processing +3. Applies Forge-specific transformations via `_apply_forge_transformations()` +4. Strips `handoffs` frontmatter key +5. Injects missing `name` fields +6. 
Ensures the shared `update-agent-context.*` scripts include a `forge` case that maps context updates to `AGENTS.md` and lists `forge` in their usage/help text + +### Goose Integration + +Goose is a YAML-format agent using Block's recipe system: +- Uses `.goose/recipes/` directory for YAML recipe files +- Uses `{{args}}` argument placeholder +- Produces YAML with `prompt: |` block scalar for command content + +Implementation: Extends `YamlIntegration` (parallel to `TomlIntegration`): +1. Processes templates through the standard placeholder pipeline +2. Extracts title and description from frontmatter +3. Renders output as Goose recipe YAML (version, title, description, author, extensions, activities, prompt) +4. Uses `yaml.safe_dump()` for header fields to ensure proper escaping +5. Context updates map to `AGENTS.md` (shared with opencode/codex/pi/forge) + ## Common Pitfalls 1. **Using shorthand keys for CLI-based integrations**: For CLI-based integrations (`requires_cli: True`), the `key` must match the executable name (e.g., `"cursor-agent"` not `"cursor"`). `shutil.which(key)` is used for CLI tool checks — mismatches require special-case mappings. IDE-based integrations (`requires_cli: False`) are not subject to this constraint. diff --git a/README.md b/README.md index 9301fbaf8..df16e07d1 100644 --- a/README.md +++ b/README.md @@ -302,6 +302,7 @@ Community projects that extend, visualize, or build on Spec Kit: | [Cursor](https://cursor.sh/) | ✅ | | | [Forge](https://forgecode.dev/) | ✅ | CLI tool: `forge` | | [Gemini CLI](https://github.com/google-gemini/gemini-cli) | ✅ | | +| [Goose](https://block.github.io/goose/) | ✅ | Uses YAML recipe format in `.goose/recipes/` with slash command support | | [GitHub Copilot](https://code.visualstudio.com/) | ✅ | | | [IBM Bob](https://www.ibm.com/products/bob) | ✅ | IDE-based agent with slash command support | | [Jules](https://jules.google.com/) | ✅ | | @@ -640,7 +641,7 @@ specify init . 
--force --ai claude specify init --here --force --ai claude ``` -The CLI will check if you have Claude Code, Gemini CLI, Cursor CLI, Qwen CLI, opencode, Codex CLI, Qoder CLI, Tabnine CLI, Kiro CLI, Pi, Forge, or Mistral Vibe installed. If you do not, or you prefer to get the templates without checking for the right tools, use `--ignore-agent-tools` with your command: +The CLI will check if you have Claude Code, Gemini CLI, Cursor CLI, Qwen CLI, opencode, Codex CLI, Qoder CLI, Tabnine CLI, Kiro CLI, Pi, Forge, Goose, or Mistral Vibe installed. If you do not, or you prefer to get the templates without checking for the right tools, use `--ignore-agent-tools` with your command: ```bash specify init --ai claude --ignore-agent-tools diff --git a/scripts/bash/update-agent-context.sh b/scripts/bash/update-agent-context.sh index fce379b34..2f71bb893 100644 --- a/scripts/bash/update-agent-context.sh +++ b/scripts/bash/update-agent-context.sh @@ -30,12 +30,12 @@ # # 5. Multi-Agent Support # - Handles agent-specific file paths and naming conventions -# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Junie, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, Tabnine CLI, Kiro CLI, Mistral Vibe, Kimi Code, Pi Coding Agent, iFlow CLI, Forge, Antigravity or Generic +# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Junie, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, Tabnine CLI, Kiro CLI, Mistral Vibe, Kimi Code, Pi Coding Agent, iFlow CLI, Forge, Goose, Antigravity or Generic # - Can update single agents or all existing agent files # - Creates default Claude file if no agent files exist # # Usage: ./update-agent-context.sh [agent_type] -# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|generic +# Agent types: 
claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|goose|generic # Leave empty to update all existing agent files set -e @@ -74,7 +74,7 @@ AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md" ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md" CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md" QODER_FILE="$REPO_ROOT/QODER.md" -# Amp, Kiro CLI, IBM Bob, Pi, and Forge all share AGENTS.md — use AGENTS_FILE to avoid +# Amp, Kiro CLI, IBM Bob, Pi, Forge, and Goose all share AGENTS.md — use AGENTS_FILE to avoid # updating the same file multiple times. AMP_FILE="$AGENTS_FILE" SHAI_FILE="$REPO_ROOT/SHAI.md" @@ -710,12 +710,15 @@ update_specific_agent() { forge) update_agent_file "$AGENTS_FILE" "Forge" || return 1 ;; + goose) + update_agent_file "$AGENTS_FILE" "Goose" || return 1 + ;; generic) log_info "Generic agent: no predefined context file. Use the agent-specific update script for your agent." 
;; *) log_error "Unknown agent type '$agent_type'" - log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|generic" + log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|goose|generic" exit 1 ;; esac @@ -759,7 +762,7 @@ update_all_existing_agents() { _update_if_new "$COPILOT_FILE" "GitHub Copilot" || _all_ok=false _update_if_new "$CURSOR_FILE" "Cursor IDE" || _all_ok=false _update_if_new "$QWEN_FILE" "Qwen Code" || _all_ok=false - _update_if_new "$AGENTS_FILE" "Codex/opencode/Amp/Kiro/Bob/Pi/Forge" || _all_ok=false + _update_if_new "$AGENTS_FILE" "Codex/opencode/Amp/Kiro/Bob/Pi/Forge/Goose" || _all_ok=false _update_if_new "$WINDSURF_FILE" "Windsurf" || _all_ok=false _update_if_new "$JUNIE_FILE" "Junie" || _all_ok=false _update_if_new "$KILOCODE_FILE" "Kilo Code" || _all_ok=false @@ -800,7 +803,7 @@ print_summary() { fi echo - log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|generic]" + log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|goose|generic]" } #============================================================================== diff --git a/scripts/powershell/update-agent-context.ps1 b/scripts/powershell/update-agent-context.ps1 index 12caa306d..3ee45d383 100644 --- a/scripts/powershell/update-agent-context.ps1 +++ b/scripts/powershell/update-agent-context.ps1 @@ -9,7 +9,7 @@ Mirrors the behavior of scripts/bash/update-agent-context.sh: 2. Plan Data Extraction 3. 
Agent File Management (create from template or update existing) 4. Content Generation (technology stack, recent changes, timestamp) - 5. Multi-Agent Support (claude, gemini, copilot, cursor-agent, qwen, opencode, codex, windsurf, junie, kilocode, auggie, roo, codebuddy, amp, shai, tabnine, kiro-cli, agy, bob, vibe, qodercli, kimi, trae, pi, iflow, forge, generic) + 5. Multi-Agent Support (claude, gemini, copilot, cursor-agent, qwen, opencode, codex, windsurf, junie, kilocode, auggie, roo, codebuddy, amp, shai, tabnine, kiro-cli, agy, bob, vibe, qodercli, kimi, trae, pi, iflow, forge, goose, generic) .PARAMETER AgentType Optional agent key to update a single agent. If omitted, updates all existing agent files (creating a default Claude file if none exist). @@ -25,7 +25,7 @@ Relies on common helper functions in common.ps1 #> param( [Parameter(Position=0)] - [ValidateSet('claude','gemini','copilot','cursor-agent','qwen','opencode','codex','windsurf','junie','kilocode','auggie','roo','codebuddy','amp','shai','tabnine','kiro-cli','agy','bob','vibe','qodercli','kimi','trae','pi','iflow','forge','generic')] + [ValidateSet('claude','gemini','copilot','cursor-agent','qwen','opencode','codex','windsurf','junie','kilocode','auggie','roo','codebuddy','amp','shai','tabnine','kiro-cli','agy','bob','vibe','qodercli','kimi','trae','pi','iflow','forge','goose','generic')] [string]$AgentType ) @@ -68,6 +68,7 @@ $KIMI_FILE = Join-Path $REPO_ROOT 'KIMI.md' $TRAE_FILE = Join-Path $REPO_ROOT '.trae/rules/project_rules.md' $IFLOW_FILE = Join-Path $REPO_ROOT 'IFLOW.md' $FORGE_FILE = Join-Path $REPO_ROOT 'AGENTS.md' +$GOOSE_FILE = Join-Path $REPO_ROOT 'AGENTS.md' $TEMPLATE_FILE = Join-Path $REPO_ROOT '.specify/templates/agent-file-template.md' @@ -417,8 +418,9 @@ function Update-SpecificAgent { 'pi' { Update-AgentFile -TargetFile $AGENTS_FILE -AgentName 'Pi Coding Agent' } 'iflow' { Update-AgentFile -TargetFile $IFLOW_FILE -AgentName 'iFlow CLI' } 'forge' { Update-AgentFile -TargetFile 
$FORGE_FILE -AgentName 'Forge' } + 'goose' { Update-AgentFile -TargetFile $GOOSE_FILE -AgentName 'Goose' } 'generic' { Write-Info 'Generic agent: no predefined context file. Use the agent-specific update script for your agent.' } - default { Write-Err "Unknown agent type '$Type'"; Write-Err 'Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|generic'; return $false } + default { Write-Err "Unknown agent type '$Type'"; Write-Err 'Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|goose|generic'; return $false } } } @@ -460,7 +462,7 @@ function Update-AllExistingAgents { if (-not (Update-IfNew -FilePath $COPILOT_FILE -AgentName 'GitHub Copilot')) { $ok = $false } if (-not (Update-IfNew -FilePath $CURSOR_FILE -AgentName 'Cursor IDE')) { $ok = $false } if (-not (Update-IfNew -FilePath $QWEN_FILE -AgentName 'Qwen Code')) { $ok = $false } - if (-not (Update-IfNew -FilePath $AGENTS_FILE -AgentName 'Codex/opencode/Amp/Kiro/Bob/Pi/Forge')) { $ok = $false } + if (-not (Update-IfNew -FilePath $AGENTS_FILE -AgentName 'Codex/opencode/Amp/Kiro/Bob/Pi/Forge/Goose')) { $ok = $false } if (-not (Update-IfNew -FilePath $WINDSURF_FILE -AgentName 'Windsurf')) { $ok = $false } if (-not (Update-IfNew -FilePath $JUNIE_FILE -AgentName 'Junie')) { $ok = $false } if (-not (Update-IfNew -FilePath $KILOCODE_FILE -AgentName 'Kilo Code')) { $ok = $false } @@ -490,7 +492,7 @@ function Print-Summary { if ($NEW_FRAMEWORK) { Write-Host " - Added framework: $NEW_FRAMEWORK" } if ($NEW_DB -and $NEW_DB -ne 'N/A') { Write-Host " - Added database: $NEW_DB" } Write-Host '' - Write-Info 'Usage: ./update-agent-context.ps1 [-AgentType 
claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|generic]' + Write-Info 'Usage: ./update-agent-context.ps1 [-AgentType claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|junie|kilocode|auggie|roo|codebuddy|amp|shai|tabnine|kiro-cli|agy|bob|vibe|qodercli|kimi|trae|pi|iflow|forge|goose|generic]' } function Main { diff --git a/src/specify_cli/agents.py b/src/specify_cli/agents.py index ec7af8876..e978d0136 100644 --- a/src/specify_cli/agents.py +++ b/src/specify_cli/agents.py @@ -18,6 +18,7 @@ def _build_agent_configs() -> dict[str, Any]: """Derive CommandRegistrar.AGENT_CONFIGS from INTEGRATION_REGISTRY.""" from specify_cli.integrations import INTEGRATION_REGISTRY + configs: dict[str, dict[str, Any]] = {} for key, integration in INTEGRATION_REGISTRY.items(): if key == "generic": @@ -75,7 +76,7 @@ def parse_frontmatter(content: str) -> tuple[dict, str]: return {}, content frontmatter_str = content[3:end_marker].strip() - body = content[end_marker + 3:].strip() + body = content[end_marker + 3 :].strip() try: frontmatter = yaml.safe_load(frontmatter_str) or {} @@ -100,7 +101,9 @@ def render_frontmatter(fm: dict) -> str: if not fm: return "" - yaml_str = yaml.dump(fm, default_flow_style=False, sort_keys=False, allow_unicode=True) + yaml_str = yaml.dump( + fm, default_flow_style=False, sort_keys=False, allow_unicode=True + ) return f"---\n{yaml_str}---\n" def _adjust_script_paths(self, frontmatter: dict) -> dict: @@ -146,16 +149,16 @@ def rewrite_project_relative_paths(text: str) -> str: # ".specify/extensions//scripts/..." remain intact. 
text = re.sub(r'(^|[\s`"\'(])(?:\.?/)?memory/', r"\1.specify/memory/", text) text = re.sub(r'(^|[\s`"\'(])(?:\.?/)?scripts/', r"\1.specify/scripts/", text) - text = re.sub(r'(^|[\s`"\'(])(?:\.?/)?templates/', r"\1.specify/templates/", text) + text = re.sub( + r'(^|[\s`"\'(])(?:\.?/)?templates/', r"\1.specify/templates/", text + ) - return text.replace(".specify/.specify/", ".specify/").replace(".specify.specify/", ".specify/") + return text.replace(".specify/.specify/", ".specify/").replace( + ".specify.specify/", ".specify/" + ) def render_markdown_command( - self, - frontmatter: dict, - body: str, - source_id: str, - context_note: str = None + self, frontmatter: dict, body: str, source_id: str, context_note: str = None ) -> str: """Render command in Markdown format. @@ -172,12 +175,7 @@ def render_markdown_command( context_note = f"\n\n" return self.render_frontmatter(frontmatter) + "\n" + context_note + body - def render_toml_command( - self, - frontmatter: dict, - body: str, - source_id: str - ) -> str: + def render_toml_command(self, frontmatter: dict, body: str, source_id: str) -> str: """Render command in TOML format. Args: @@ -192,7 +190,7 @@ def render_toml_command( if "description" in frontmatter: toml_lines.append( - f'description = {self._render_basic_toml_string(frontmatter["description"])}' + f"description = {self._render_basic_toml_string(frontmatter['description'])}" ) toml_lines.append("") @@ -226,6 +224,41 @@ def _render_basic_toml_string(value: str) -> str: ) return f'"{escaped}"' + def render_yaml_command( + self, + frontmatter: dict, + body: str, + source_id: str, + cmd_name: str = "", + ) -> str: + """Render command in YAML recipe format for Goose. 
+ + Args: + frontmatter: Command frontmatter + body: Command body content + source_id: Source identifier (extension or preset ID) + cmd_name: Command name used as title fallback + + Returns: + Formatted YAML recipe file content + """ + from specify_cli.integrations.base import YamlIntegration + + title = frontmatter.get("title", "") or frontmatter.get("name", "") + if not isinstance(title, str): + title = str(title) if title is not None else "" + if not title and cmd_name: + title = YamlIntegration._human_title(cmd_name) + if not title and source_id: + title = YamlIntegration._human_title(Path(str(source_id)).stem) + if not title: + title = "Command" + + description = frontmatter.get("description", "") + if not isinstance(description, str): + description = str(description) if description is not None else "" + return YamlIntegration._render_yaml(title, description, body, source_id) + def render_skill_command( self, agent_name: str, @@ -252,9 +285,13 @@ def render_skill_command( frontmatter = {} if agent_name in {"codex", "kimi"}: - body = self.resolve_skill_placeholders(agent_name, frontmatter, body, project_root) + body = self.resolve_skill_placeholders( + agent_name, frontmatter, body, project_root + ) - description = frontmatter.get("description", f"Spec-kit workflow command: {skill_name}") + description = frontmatter.get( + "description", f"Spec-kit workflow command: {skill_name}" + ) skill_frontmatter = self.build_skill_frontmatter( agent_name, skill_name, @@ -288,7 +325,9 @@ def build_skill_frontmatter( return skill_frontmatter @staticmethod - def resolve_skill_placeholders(agent_name: str, frontmatter: dict, body: str, project_root: Path) -> str: + def resolve_skill_placeholders( + agent_name: str, frontmatter: dict, body: str, project_root: Path + ) -> str: """Resolve script placeholders for skills-backed agents.""" try: from . 
import load_init_options @@ -312,7 +351,9 @@ def resolve_skill_placeholders(agent_name: str, frontmatter: dict, body: str, pr script_variant = init_opts.get("script") if script_variant not in {"sh", "ps"}: fallback_order = [] - default_variant = "ps" if platform.system().lower().startswith("win") else "sh" + default_variant = ( + "ps" if platform.system().lower().startswith("win") else "sh" + ) secondary_variant = "sh" if default_variant == "ps" else "ps" if default_variant in scripts or default_variant in agent_scripts: @@ -334,7 +375,9 @@ def resolve_skill_placeholders(agent_name: str, frontmatter: dict, body: str, pr script_command = script_command.replace("{ARGS}", "$ARGUMENTS") body = body.replace("{SCRIPT}", script_command) - agent_script_command = agent_scripts.get(script_variant) if script_variant else None + agent_script_command = ( + agent_scripts.get(script_variant) if script_variant else None + ) if agent_script_command: agent_script_command = agent_script_command.replace("{ARGS}", "$ARGUMENTS") body = body.replace("{AGENT_SCRIPT}", agent_script_command) @@ -342,7 +385,9 @@ def resolve_skill_placeholders(agent_name: str, frontmatter: dict, body: str, pr body = body.replace("{ARGS}", "$ARGUMENTS").replace("__AGENT__", agent_name) return CommandRegistrar.rewrite_project_relative_paths(body) - def _convert_argument_placeholder(self, content: str, from_placeholder: str, to_placeholder: str) -> str: + def _convert_argument_placeholder( + self, content: str, from_placeholder: str, to_placeholder: str + ) -> str: """Convert argument placeholder format. 
Args: @@ -356,14 +401,16 @@ def _convert_argument_placeholder(self, content: str, from_placeholder: str, to_ return content.replace(from_placeholder, to_placeholder) @staticmethod - def _compute_output_name(agent_name: str, cmd_name: str, agent_config: Dict[str, Any]) -> str: + def _compute_output_name( + agent_name: str, cmd_name: str, agent_config: Dict[str, Any] + ) -> str: """Compute the on-disk command or skill name for an agent.""" if agent_config["extension"] != "/SKILL.md": return cmd_name short_name = cmd_name if short_name.startswith("speckit."): - short_name = short_name[len("speckit."):] + short_name = short_name[len("speckit.") :] short_name = short_name.replace(".", "-") return f"speckit-{short_name}" @@ -375,7 +422,7 @@ def register_commands( source_id: str, source_dir: Path, project_root: Path, - context_note: str = None + context_note: str = None, ) -> List[str]: """Register commands for a specific agent. @@ -432,12 +479,24 @@ def register_commands( if agent_config["extension"] == "/SKILL.md": output = self.render_skill_command( - agent_name, output_name, frontmatter, body, source_id, cmd_file, project_root + agent_name, + output_name, + frontmatter, + body, + source_id, + cmd_file, + project_root, ) elif agent_config["format"] == "markdown": - output = self.render_markdown_command(frontmatter, body, source_id, context_note) + output = self.render_markdown_command( + frontmatter, body, source_id, context_note + ) elif agent_config["format"] == "toml": output = self.render_toml_command(frontmatter, body, source_id) + elif agent_config["format"] == "yaml": + output = self.render_yaml_command( + frontmatter, body, source_id, cmd_name + ) else: raise ValueError(f"Unsupported format: {agent_config['format']}") @@ -451,34 +510,68 @@ def register_commands( registered.append(cmd_name) for alias in cmd_info.get("aliases", []): - alias_output_name = self._compute_output_name(agent_name, alias, agent_config) + alias_output_name = self._compute_output_name( + 
agent_name, alias, agent_config + ) # For agents with inject_name, render with alias-specific frontmatter if agent_config.get("inject_name"): alias_frontmatter = deepcopy(frontmatter) # Use custom name formatter if provided (e.g., Forge's hyphenated format) format_name = agent_config.get("format_name") - alias_frontmatter["name"] = format_name(alias) if format_name else alias + alias_frontmatter["name"] = ( + format_name(alias) if format_name else alias + ) if agent_config["extension"] == "/SKILL.md": alias_output = self.render_skill_command( - agent_name, alias_output_name, alias_frontmatter, body, source_id, cmd_file, project_root + agent_name, + alias_output_name, + alias_frontmatter, + body, + source_id, + cmd_file, + project_root, ) elif agent_config["format"] == "markdown": - alias_output = self.render_markdown_command(alias_frontmatter, body, source_id, context_note) + alias_output = self.render_markdown_command( + alias_frontmatter, body, source_id, context_note + ) elif agent_config["format"] == "toml": - alias_output = self.render_toml_command(alias_frontmatter, body, source_id) + alias_output = self.render_toml_command( + alias_frontmatter, body, source_id + ) + elif agent_config["format"] == "yaml": + alias_output = self.render_yaml_command( + alias_frontmatter, body, source_id, alias + ) else: - raise ValueError(f"Unsupported format: {agent_config['format']}") + raise ValueError( + f"Unsupported format: {agent_config['format']}" + ) else: # For other agents, reuse the primary output alias_output = output if agent_config["extension"] == "/SKILL.md": alias_output = self.render_skill_command( - agent_name, alias_output_name, frontmatter, body, source_id, cmd_file, project_root + agent_name, + alias_output_name, + frontmatter, + body, + source_id, + cmd_file, + project_root, ) - alias_file = commands_dir / f"{alias_output_name}{agent_config['extension']}" + alias_file = ( + commands_dir / f"{alias_output_name}{agent_config['extension']}" + ) + try: + 
alias_file.resolve().relative_to(commands_dir.resolve()) + except ValueError: + raise ValueError( + f"Alias output path escapes commands directory: {alias_file!r}" + ) alias_file.parent.mkdir(parents=True, exist_ok=True) alias_file.write_text(alias_output, encoding="utf-8") if agent_name == "copilot": @@ -506,7 +599,7 @@ def register_commands_for_all_agents( source_id: str, source_dir: Path, project_root: Path, - context_note: str = None + context_note: str = None, ) -> Dict[str, List[str]]: """Register commands for all detected agents in the project. @@ -529,8 +622,12 @@ def register_commands_for_all_agents( if agent_dir.exists(): try: registered = self.register_commands( - agent_name, commands, source_id, source_dir, project_root, - context_note=context_note + agent_name, + commands, + source_id, + source_dir, + project_root, + context_note=context_note, ) if registered: results[agent_name] = registered @@ -540,9 +637,7 @@ def register_commands_for_all_agents( return results def unregister_commands( - self, - registered_commands: Dict[str, List[str]], - project_root: Path + self, registered_commands: Dict[str, List[str]], project_root: Path ) -> None: """Remove previously registered command files from agent directories. 
@@ -559,13 +654,17 @@ def unregister_commands( commands_dir = project_root / agent_config["dir"] for cmd_name in cmd_names: - output_name = self._compute_output_name(agent_name, cmd_name, agent_config) + output_name = self._compute_output_name( + agent_name, cmd_name, agent_config + ) cmd_file = commands_dir / f"{output_name}{agent_config['extension']}" if cmd_file.exists(): cmd_file.unlink() if agent_name == "copilot": - prompt_file = project_root / ".github" / "prompts" / f"{cmd_name}.prompt.md" + prompt_file = ( + project_root / ".github" / "prompts" / f"{cmd_name}.prompt.md" + ) if prompt_file.exists(): prompt_file.unlink() diff --git a/src/specify_cli/integrations/__init__.py b/src/specify_cli/integrations/__init__.py index 3eb58622e..a5fb3833d 100644 --- a/src/specify_cli/integrations/__init__.py +++ b/src/specify_cli/integrations/__init__.py @@ -36,6 +36,7 @@ def get_integration(key: str) -> IntegrationBase | None: # -- Register built-in integrations -------------------------------------- + def _register_builtins() -> None: """Register all built-in integrations. 
@@ -58,6 +59,7 @@ def _register_builtins() -> None: from .forge import ForgeIntegration from .gemini import GeminiIntegration from .generic import GenericIntegration + from .goose import GooseIntegration from .iflow import IflowIntegration from .junie import JunieIntegration from .kilocode import KilocodeIntegration @@ -87,6 +89,7 @@ def _register_builtins() -> None: _register(ForgeIntegration()) _register(GeminiIntegration()) _register(GenericIntegration()) + _register(GooseIntegration()) _register(IflowIntegration()) _register(JunieIntegration()) _register(KilocodeIntegration()) diff --git a/src/specify_cli/integrations/base.py b/src/specify_cli/integrations/base.py index 1b09347dc..87eca9d3b 100644 --- a/src/specify_cli/integrations/base.py +++ b/src/specify_cli/integrations/base.py @@ -28,6 +28,7 @@ # IntegrationOption # --------------------------------------------------------------------------- + @dataclass(frozen=True) class IntegrationOption: """Declares an option that an integration accepts via ``--integration-options``. @@ -51,6 +52,7 @@ class IntegrationOption: # IntegrationBase — abstract base class # --------------------------------------------------------------------------- + class IntegrationBase(ABC): """Abstract base class every integration must implement. @@ -275,7 +277,7 @@ def process_template( 2. Replace ``{SCRIPT}`` with the extracted script command 3. Extract ``agent_scripts.`` and replace ``{AGENT_SCRIPT}`` 4. Strip ``scripts:`` and ``agent_scripts:`` sections from frontmatter - 5. Replace ``{ARGS}`` with *arg_placeholder* + 5. Replace ``{ARGS}`` and ``$ARGUMENTS`` with *arg_placeholder* 6. Replace ``__AGENT__`` with *agent_name* 7. Rewrite paths: ``scripts/`` → ``.specify/scripts/`` etc. """ @@ -348,8 +350,9 @@ def process_template( output_lines.append(line) content = "".join(output_lines) - # 5. Replace {ARGS} + # 5. 
Replace {ARGS} and $ARGUMENTS content = content.replace("{ARGS}", arg_placeholder) + content = content.replace("$ARGUMENTS", arg_placeholder) # 6. Replace __AGENT__ content = content.replace("__AGENT__", agent_name) @@ -358,6 +361,7 @@ def process_template( # CommandRegistrar so extension-local paths are preserved and # boundary rules stay consistent across the codebase. from specify_cli.agents import CommandRegistrar + content = CommandRegistrar.rewrite_project_relative_paths(content) return content @@ -433,9 +437,7 @@ def install( **opts: Any, ) -> list[Path]: """High-level install — calls ``setup()`` and returns created files.""" - return self.setup( - project_root, manifest, parsed_options=parsed_options, **opts - ) + return self.setup(project_root, manifest, parsed_options=parsed_options, **opts) def uninstall( self, @@ -452,6 +454,7 @@ def uninstall( # MarkdownIntegration — covers ~20 standard agents # --------------------------------------------------------------------------- + class MarkdownIntegration(IntegrationBase): """Concrete base for integrations that use standard Markdown commands. 
@@ -492,12 +495,18 @@ def setup( dest.mkdir(parents=True, exist_ok=True) script_type = opts.get("script_type", "sh") - arg_placeholder = self.registrar_config.get("args", "$ARGUMENTS") if self.registrar_config else "$ARGUMENTS" + arg_placeholder = ( + self.registrar_config.get("args", "$ARGUMENTS") + if self.registrar_config + else "$ARGUMENTS" + ) created: list[Path] = [] for src_file in templates: raw = src_file.read_text(encoding="utf-8") - processed = self.process_template(raw, self.key, script_type, arg_placeholder) + processed = self.process_template( + raw, self.key, script_type, arg_placeholder + ) dst_name = self.command_filename(src_file.stem) dst_file = self.write_file_and_record( processed, dest / dst_name, project_root, manifest @@ -512,6 +521,7 @@ def setup( # TomlIntegration — TOML-format agents (Gemini, Tabnine) # --------------------------------------------------------------------------- + class TomlIntegration(IntegrationBase): """Concrete base for integrations that use TOML command format. 
@@ -603,13 +613,17 @@ def _render_toml_string(value: str) -> str: if "'''" not in value and not value.endswith("'"): return "'''\n" + value + "'''" - return '"' + ( - value.replace("\\", "\\\\") - .replace('"', '\\"') - .replace("\n", "\\n") - .replace("\r", "\\r") - .replace("\t", "\\t") - ) + '"' + return ( + '"' + + ( + value.replace("\\", "\\\\") + .replace('"', '\\"') + .replace("\n", "\\n") + .replace("\r", "\\r") + .replace("\t", "\\t") + ) + + '"' + ) @staticmethod def _render_toml(description: str, body: str) -> str: @@ -628,7 +642,9 @@ def _render_toml(description: str, body: str) -> str: toml_lines: list[str] = [] if description: - toml_lines.append(f"description = {TomlIntegration._render_toml_string(description)}") + toml_lines.append( + f"description = {TomlIntegration._render_toml_string(description)}" + ) toml_lines.append("") body = body.rstrip("\n") @@ -665,13 +681,19 @@ def setup( dest.mkdir(parents=True, exist_ok=True) script_type = opts.get("script_type", "sh") - arg_placeholder = self.registrar_config.get("args", "{{args}}") if self.registrar_config else "{{args}}" + arg_placeholder = ( + self.registrar_config.get("args", "{{args}}") + if self.registrar_config + else "{{args}}" + ) created: list[Path] = [] for src_file in templates: raw = src_file.read_text(encoding="utf-8") description = self._extract_description(raw) - processed = self.process_template(raw, self.key, script_type, arg_placeholder) + processed = self.process_template( + raw, self.key, script_type, arg_placeholder + ) _, body = self._split_frontmatter(processed) toml_content = self._render_toml(description, body) dst_name = self.command_filename(src_file.stem) @@ -684,6 +706,188 @@ def setup( return created +# --------------------------------------------------------------------------- +# YamlIntegration — YAML-format agents (Goose) +# --------------------------------------------------------------------------- + + +class YamlIntegration(IntegrationBase): + """Concrete base for 
integrations that use YAML recipe format. + + Mirrors ``TomlIntegration`` closely: subclasses only need to set + ``key``, ``config``, ``registrar_config`` (and optionally + ``context_file``). Everything else is inherited. + + ``setup()`` processes command templates through the same placeholder + pipeline as ``MarkdownIntegration``, then converts the result to + YAML recipe format (version, title, description, prompt block scalar). + """ + + def command_filename(self, template_name: str) -> str: + """YAML commands use ``.yaml`` extension.""" + return f"speckit.{template_name}.yaml" + + @staticmethod + def _extract_frontmatter(content: str) -> dict[str, Any]: + """Extract frontmatter as a dict from YAML frontmatter block.""" + import yaml + + if not content.startswith("---"): + return {} + + lines = content.splitlines(keepends=True) + if not lines or lines[0].rstrip("\r\n") != "---": + return {} + + frontmatter_end = -1 + for i, line in enumerate(lines[1:], start=1): + if line.rstrip("\r\n") == "---": + frontmatter_end = i + break + + if frontmatter_end == -1: + return {} + + frontmatter_text = "".join(lines[1:frontmatter_end]) + try: + fm = yaml.safe_load(frontmatter_text) or {} + except yaml.YAMLError: + return {} + + return fm if isinstance(fm, dict) else {} + + @staticmethod + def _split_frontmatter(content: str) -> tuple[str, str]: + """Split YAML frontmatter from the remaining body content.""" + if not content.startswith("---"): + return "", content + + lines = content.splitlines(keepends=True) + if not lines or lines[0].rstrip("\r\n") != "---": + return "", content + + frontmatter_end = -1 + for i, line in enumerate(lines[1:], start=1): + if line.rstrip("\r\n") == "---": + frontmatter_end = i + break + + if frontmatter_end == -1: + return "", content + + frontmatter = "".join(lines[1:frontmatter_end]) + body = "".join(lines[frontmatter_end + 1 :]) + return frontmatter, body + + @staticmethod + def _human_title(identifier: str) -> str: + """Convert an 
identifier to a human-readable title. + + Strips a leading ``speckit.`` prefix and replaces ``.``, ``-``, + and ``_`` with spaces before title-casing. + """ + text = identifier + if text.startswith("speckit."): + text = text[len("speckit.") :] + return text.replace(".", " ").replace("-", " ").replace("_", " ").title() + + @staticmethod + def _render_yaml(title: str, description: str, body: str, source_id: str) -> str: + """Render a YAML recipe file from title, description, and body. + + Produces a Goose-compatible recipe with a literal block scalar + for the prompt content. Uses ``yaml.safe_dump()`` for the + header fields to ensure proper escaping. + """ + import yaml + + header = { + "version": "1.0.0", + "title": title, + "description": description, + "author": {"contact": "spec-kit"}, + "extensions": [{"type": "builtin", "name": "developer"}], + "activities": ["Spec-Driven Development"], + } + + header_yaml = yaml.safe_dump( + header, + sort_keys=False, + allow_unicode=True, + default_flow_style=False, + ).strip() + + # Indent each line for YAML block scalar + indented = "\n".join(f" {line}" for line in body.split("\n")) + + lines = [header_yaml, "prompt: |", indented, "", f"# Source: {source_id}"] + return "\n".join(lines) + "\n" + + def setup( + self, + project_root: Path, + manifest: IntegrationManifest, + parsed_options: dict[str, Any] | None = None, + **opts: Any, + ) -> list[Path]: + templates = self.list_command_templates() + if not templates: + return [] + + project_root_resolved = project_root.resolve() + if manifest.project_root != project_root_resolved: + raise ValueError( + f"manifest.project_root ({manifest.project_root}) does not match " + f"project_root ({project_root_resolved})" + ) + + dest = self.commands_dest(project_root).resolve() + try: + dest.relative_to(project_root_resolved) + except ValueError as exc: + raise ValueError( + f"Integration destination {dest} escapes " + f"project root {project_root_resolved}" + ) from exc + 
dest.mkdir(parents=True, exist_ok=True) + + script_type = opts.get("script_type", "sh") + arg_placeholder = ( + self.registrar_config.get("args", "{{args}}") + if self.registrar_config + else "{{args}}" + ) + created: list[Path] = [] + + for src_file in templates: + raw = src_file.read_text(encoding="utf-8") + fm = self._extract_frontmatter(raw) + description = fm.get("description", "") + if not isinstance(description, str): + description = str(description) if description is not None else "" + title = fm.get("title", "") or fm.get("name", "") + if not isinstance(title, str): + title = str(title) if title is not None else "" + if not title: + title = self._human_title(src_file.stem) + + processed = self.process_template( + raw, self.key, script_type, arg_placeholder + ) + _, body = self._split_frontmatter(processed) + yaml_content = self._render_yaml( + title, description, body, f"templates/commands/{src_file.name}" + ) + dst_name = self.command_filename(src_file.stem) + dst_file = self.write_file_and_record( + yaml_content, dest / dst_name, project_root, manifest + ) + created.append(dst_file) + + created.extend(self.install_scripts(project_root, manifest)) + return created + + # --------------------------------------------------------------------------- # SkillsIntegration — skills-format agents (Codex, Kimi, Agy) # --------------------------------------------------------------------------- @@ -713,9 +917,7 @@ def skills_dest(self, project_root: Path) -> Path: Raises ``ValueError`` when ``config`` or ``folder`` is missing. """ if not self.config: - raise ValueError( - f"{type(self).__name__}.config is not set." 
- ) + raise ValueError(f"{type(self).__name__}.config is not set.") folder = self.config.get("folder") if not folder: raise ValueError( diff --git a/src/specify_cli/integrations/goose/__init__.py b/src/specify_cli/integrations/goose/__init__.py new file mode 100644 index 000000000..0fc4d9d57 --- /dev/null +++ b/src/specify_cli/integrations/goose/__init__.py @@ -0,0 +1,21 @@ +"""Goose integration — Block's open source AI agent.""" + +from ..base import YamlIntegration + + +class GooseIntegration(YamlIntegration): + key = "goose" + config = { + "name": "Goose", + "folder": ".goose/", + "commands_subdir": "recipes", + "install_url": "https://block.github.io/goose/docs/getting-started/installation", + "requires_cli": True, + } + registrar_config = { + "dir": ".goose/recipes", + "format": "yaml", + "args": "{{args}}", + "extension": ".yaml", + } + context_file = "AGENTS.md" diff --git a/src/specify_cli/integrations/goose/scripts/update-context.ps1 b/src/specify_cli/integrations/goose/scripts/update-context.ps1 new file mode 100644 index 000000000..eeb31f629 --- /dev/null +++ b/src/specify_cli/integrations/goose/scripts/update-context.ps1 @@ -0,0 +1,33 @@ +# update-context.ps1 — Goose integration: create/update AGENTS.md +# +# Thin wrapper that delegates to the shared update-agent-context script. +# Activated in Stage 7 when the shared script uses integration.json dispatch. +# +# Until then, this delegates to the shared script as a subprocess. + +$ErrorActionPreference = 'Stop' + +# Derive repo root from script location (walks up to find .specify/) +$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Definition +$repoRoot = try { git rev-parse --show-toplevel 2>$null } catch { $null } +# If git did not return a repo root, or the git root does not contain .specify, +# fall back to walking up from the script directory to find the initialized project root. 
+if (-not $repoRoot -or -not (Test-Path (Join-Path $repoRoot '.specify'))) {
+    $repoRoot = $scriptDir
+    $fsRoot = [System.IO.Path]::GetPathRoot($repoRoot)
+    while ($repoRoot -and $repoRoot -ne $fsRoot -and -not (Test-Path (Join-Path $repoRoot '.specify'))) {
+        $repoRoot = Split-Path -Parent $repoRoot
+    }
+}
+
+$sharedScript = "$repoRoot/.specify/scripts/powershell/update-agent-context.ps1"
+
+# Always delegate to the shared updater; fail clearly if it is unavailable.
+# With $ErrorActionPreference = 'Stop', the first Write-Error terminates the
+# script, so both lines go into a single call to keep the full message visible.
+if (-not (Test-Path $sharedScript)) {
+    Write-Error ("Error: shared agent context updater not found: $sharedScript`n" +
+        "Goose integration requires support in scripts/powershell/update-agent-context.ps1.")
+}
+
+& $sharedScript -AgentType goose
+exit $LASTEXITCODE
diff --git a/src/specify_cli/integrations/goose/scripts/update-context.sh b/src/specify_cli/integrations/goose/scripts/update-context.sh
new file mode 100755
index 000000000..759ae3045
--- /dev/null
+++ b/src/specify_cli/integrations/goose/scripts/update-context.sh
@@ -0,0 +1,38 @@
+#!/usr/bin/env bash
+# update-context.sh — Goose integration: create/update AGENTS.md
+#
+# Thin wrapper that delegates to the shared update-agent-context script.
+# Activated in Stage 7 when the shared script uses integration.json dispatch.
+#
+# Until then, this delegates to the shared script as a subprocess.
+
+set -euo pipefail
+
+# Derive repo root from script location (walks up to find .specify/)
+_script_dir="$(cd "$(dirname "$0")" && pwd)"
+_root="$_script_dir"
+while [ "$_root" != "/" ] && [ !
-d "$_root/.specify" ]; do _root="$(dirname "$_root")"; done +if [ -z "${REPO_ROOT:-}" ]; then + if [ -d "$_root/.specify" ]; then + REPO_ROOT="$_root" + else + git_root="$(git rev-parse --show-toplevel 2>/dev/null || true)" + if [ -n "$git_root" ] && [ -d "$git_root/.specify" ]; then + REPO_ROOT="$git_root" + else + REPO_ROOT="$_root" + fi + fi +fi + +shared_script="$REPO_ROOT/.specify/scripts/bash/update-agent-context.sh" + +# Always delegate to the shared updater; fail clearly if it is unavailable. +if [ ! -x "$shared_script" ]; then + echo "Error: shared agent context updater not found or not executable:" >&2 + echo " $shared_script" >&2 + echo "Goose integration requires support in scripts/bash/update-agent-context.sh." >&2 + exit 1 +fi + +exec "$shared_script" goose diff --git a/tests/integrations/test_integration_base_toml.py b/tests/integrations/test_integration_base_toml.py index fcded1834..4d0bfe2cf 100644 --- a/tests/integrations/test_integration_base_toml.py +++ b/tests/integrations/test_integration_base_toml.py @@ -84,7 +84,9 @@ def test_setup_writes_to_correct_directory(self, tmp_path): m = IntegrationManifest(self.KEY, tmp_path) created = i.setup(tmp_path, m) expected_dir = i.commands_dest(tmp_path) - assert expected_dir.exists(), f"Expected directory {expected_dir} was not created" + assert expected_dir.exists(), ( + f"Expected directory {expected_dir} was not created" + ) cmd_files = [f for f in created if "scripts" not in f.parts] assert len(cmd_files) > 0, "No command files were created" for f in cmd_files: @@ -134,6 +136,12 @@ def test_toml_uses_correct_arg_placeholder(self, tmp_path): # At least one file should contain {{args}} from the {ARGS} placeholder has_args = any("{{args}}" in f.read_text(encoding="utf-8") for f in cmd_files) assert has_args, "No TOML command file contains {{args}} placeholder" + has_dollar_args = any( + "$ARGUMENTS" in f.read_text(encoding="utf-8") for f in cmd_files + ) + assert not has_dollar_args, ( + "TOML command 
still contains $ARGUMENTS instead of {{args}}" + ) @pytest.mark.parametrize( ("frontmatter", "expected"), @@ -156,19 +164,13 @@ def test_toml_uses_correct_arg_placeholder(self, tmp_path): ), ], ) - def test_toml_extract_description_supports_block_scalars(self, frontmatter, expected): + def test_toml_extract_description_supports_block_scalars( + self, frontmatter, expected + ): assert TomlIntegration._extract_description(frontmatter) == expected def test_split_frontmatter_ignores_indented_delimiters(self): - content = ( - "---\n" - "description: |\n" - " line one\n" - " ---\n" - " line two\n" - "---\n" - "Body\n" - ) + content = "---\ndescription: |\n line one\n ---\n line two\n---\nBody\n" frontmatter, body = TomlIntegration._split_frontmatter(content) @@ -205,7 +207,7 @@ def test_toml_prompt_excludes_frontmatter(self, tmp_path, monkeypatch): assert "---" not in parsed["prompt"] def test_toml_no_ambiguous_closing_quotes(self, tmp_path, monkeypatch): - """Multiline body ending with `"` must not produce `""""` (#2113).""" + """Multiline body ending with a double quote must not produce an ambiguous TOML multiline-string closing delimiter (#2113).""" i = get_integration(self.KEY) template = tmp_path / "sample.md" template.write_text( @@ -230,7 +232,9 @@ def test_toml_no_ambiguous_closing_quotes(self, tmp_path, monkeypatch): assert '"""\n' in raw, "body must use multiline basic string" parsed = tomllib.loads(raw) assert parsed["prompt"].endswith('specified?"') - assert not parsed["prompt"].endswith("\n"), "parsed value must not gain a trailing newline" + assert not parsed["prompt"].endswith("\n"), ( + "parsed value must not gain a trailing newline" + ) def test_toml_triple_double_and_single_quote_ending(self, tmp_path, monkeypatch): """Body containing `\"\"\"` and ending with `'` falls back to escaped basic string.""" @@ -254,11 +258,15 @@ def test_toml_triple_double_and_single_quote_ending(self, tmp_path, monkeypatch) assert len(cmd_files) == 1 raw = 
cmd_files[0].read_text(encoding="utf-8") - assert "''''" not in raw, "literal string must not produce ambiguous closing quotes" + assert "''''" not in raw, ( + "literal string must not produce ambiguous closing quotes" + ) parsed = tomllib.loads(raw) assert parsed["prompt"].endswith("'single'") assert '"""triple"""' in parsed["prompt"] - assert not parsed["prompt"].endswith("\n"), "parsed value must not gain a trailing newline" + assert not parsed["prompt"].endswith("\n"), ( + "parsed value must not gain a trailing newline" + ) def test_toml_closing_delimiter_inline_when_safe(self, tmp_path, monkeypatch): """Body NOT ending with `"` keeps closing `\"\"\"` inline (no extra newline).""" @@ -284,8 +292,9 @@ def test_toml_closing_delimiter_inline_when_safe(self, tmp_path, monkeypatch): raw = cmd_files[0].read_text(encoding="utf-8") parsed = tomllib.loads(raw) assert parsed["prompt"] == "Line one\nPlain body content" - assert raw.rstrip().endswith('content"""'), \ + assert raw.rstrip().endswith('content"""'), ( "closing delimiter should be inline when body does not end with a quote" + ) def test_toml_is_valid(self, tmp_path): """Every generated TOML file must parse without errors.""" @@ -354,7 +363,14 @@ def test_sh_script_is_executable(self, tmp_path): i = get_integration(self.KEY) m = IntegrationManifest(self.KEY, tmp_path) i.setup(tmp_path, m) - sh = tmp_path / ".specify" / "integrations" / self.KEY / "scripts" / "update-context.sh" + sh = ( + tmp_path + / ".specify" + / "integrations" + / self.KEY + / "scripts" + / "update-context.sh" + ) assert os.access(sh, os.X_OK) # -- CLI auto-promote ------------------------------------------------- @@ -369,10 +385,20 @@ def test_ai_flag_auto_promotes(self, tmp_path): try: os.chdir(project) runner = CliRunner() - result = runner.invoke(app, [ - "init", "--here", "--ai", self.KEY, "--script", "sh", "--no-git", - "--ignore-agent-tools", - ], catch_exceptions=False) + result = runner.invoke( + app, + [ + "init", + "--here", + 
"--ai", + self.KEY, + "--script", + "sh", + "--no-git", + "--ignore-agent-tools", + ], + catch_exceptions=False, + ) finally: os.chdir(old_cwd) assert result.exit_code == 0, f"init --ai {self.KEY} failed: {result.output}" @@ -390,13 +416,25 @@ def test_integration_flag_creates_files(self, tmp_path): try: os.chdir(project) runner = CliRunner() - result = runner.invoke(app, [ - "init", "--here", "--integration", self.KEY, "--script", "sh", "--no-git", - "--ignore-agent-tools", - ], catch_exceptions=False) + result = runner.invoke( + app, + [ + "init", + "--here", + "--integration", + self.KEY, + "--script", + "sh", + "--no-git", + "--ignore-agent-tools", + ], + catch_exceptions=False, + ) finally: os.chdir(old_cwd) - assert result.exit_code == 0, f"init --integration {self.KEY} failed: {result.output}" + assert result.exit_code == 0, ( + f"init --integration {self.KEY} failed: {result.output}" + ) i = get_integration(self.KEY) cmd_dir = i.commands_dest(project) assert cmd_dir.is_dir(), f"Commands directory {cmd_dir} not created" @@ -406,8 +444,15 @@ def test_integration_flag_creates_files(self, tmp_path): # -- Complete file inventory ------------------------------------------ COMMAND_STEMS = [ - "analyze", "checklist", "clarify", "constitution", - "implement", "plan", "specify", "tasks", "taskstoissues", + "analyze", + "checklist", + "clarify", + "constitution", + "implement", + "plan", + "specify", + "tasks", + "taskstoissues", ] def _expected_files(self, script_variant: str) -> list[str]: @@ -425,23 +470,38 @@ def _expected_files(self, script_variant: str) -> list[str]: files.append(f".specify/integrations/{self.KEY}/scripts/update-context.sh") # Framework files - files.append(f".specify/integration.json") - files.append(f".specify/init-options.json") + files.append(".specify/integration.json") + files.append(".specify/init-options.json") files.append(f".specify/integrations/{self.KEY}.manifest.json") - files.append(f".specify/integrations/speckit.manifest.json") + 
files.append(".specify/integrations/speckit.manifest.json") if script_variant == "sh": - for name in ["check-prerequisites.sh", "common.sh", "create-new-feature.sh", - "setup-plan.sh", "update-agent-context.sh"]: + for name in [ + "check-prerequisites.sh", + "common.sh", + "create-new-feature.sh", + "setup-plan.sh", + "update-agent-context.sh", + ]: files.append(f".specify/scripts/bash/{name}") else: - for name in ["check-prerequisites.ps1", "common.ps1", "create-new-feature.ps1", - "setup-plan.ps1", "update-agent-context.ps1"]: + for name in [ + "check-prerequisites.ps1", + "common.ps1", + "create-new-feature.ps1", + "setup-plan.ps1", + "update-agent-context.ps1", + ]: files.append(f".specify/scripts/powershell/{name}") - for name in ["agent-file-template.md", "checklist-template.md", - "constitution-template.md", "plan-template.md", - "spec-template.md", "tasks-template.md"]: + for name in [ + "agent-file-template.md", + "checklist-template.md", + "constitution-template.md", + "plan-template.md", + "spec-template.md", + "tasks-template.md", + ]: files.append(f".specify/templates/{name}") files.append(".specify/memory/constitution.md") @@ -457,15 +517,26 @@ def test_complete_file_inventory_sh(self, tmp_path): old_cwd = os.getcwd() try: os.chdir(project) - result = CliRunner().invoke(app, [ - "init", "--here", "--integration", self.KEY, "--script", "sh", - "--no-git", "--ignore-agent-tools", - ], catch_exceptions=False) + result = CliRunner().invoke( + app, + [ + "init", + "--here", + "--integration", + self.KEY, + "--script", + "sh", + "--no-git", + "--ignore-agent-tools", + ], + catch_exceptions=False, + ) finally: os.chdir(old_cwd) assert result.exit_code == 0, f"init failed: {result.output}" - actual = sorted(p.relative_to(project).as_posix() - for p in project.rglob("*") if p.is_file()) + actual = sorted( + p.relative_to(project).as_posix() for p in project.rglob("*") if p.is_file() + ) expected = self._expected_files("sh") assert actual == expected, ( 
f"Missing: {sorted(set(expected) - set(actual))}\n" @@ -482,15 +553,26 @@ def test_complete_file_inventory_ps(self, tmp_path): old_cwd = os.getcwd() try: os.chdir(project) - result = CliRunner().invoke(app, [ - "init", "--here", "--integration", self.KEY, "--script", "ps", - "--no-git", "--ignore-agent-tools", - ], catch_exceptions=False) + result = CliRunner().invoke( + app, + [ + "init", + "--here", + "--integration", + self.KEY, + "--script", + "ps", + "--no-git", + "--ignore-agent-tools", + ], + catch_exceptions=False, + ) finally: os.chdir(old_cwd) assert result.exit_code == 0, f"init failed: {result.output}" - actual = sorted(p.relative_to(project).as_posix() - for p in project.rglob("*") if p.is_file()) + actual = sorted( + p.relative_to(project).as_posix() for p in project.rglob("*") if p.is_file() + ) expected = self._expected_files("ps") assert actual == expected, ( f"Missing: {sorted(set(expected) - set(actual))}\n" diff --git a/tests/integrations/test_integration_base_yaml.py b/tests/integrations/test_integration_base_yaml.py new file mode 100644 index 000000000..b0f59a627 --- /dev/null +++ b/tests/integrations/test_integration_base_yaml.py @@ -0,0 +1,459 @@ +"""Reusable test mixin for standard YamlIntegration subclasses. + +Each per-agent test file sets ``KEY``, ``FOLDER``, ``COMMANDS_SUBDIR``, +``REGISTRAR_DIR``, and ``CONTEXT_FILE``, then inherits all verification +logic from ``YamlIntegrationTests``. + +Mirrors ``TomlIntegrationTests`` closely — same test structure, +adapted for YAML recipe output format. +""" + +import os + +import yaml + +from specify_cli.integrations import INTEGRATION_REGISTRY, get_integration +from specify_cli.integrations.base import YamlIntegration +from specify_cli.integrations.manifest import IntegrationManifest + + +class YamlIntegrationTests: + """Mixin — set class-level constants and inherit these tests. + + Required class attrs on subclass:: + + KEY: str — integration registry key + FOLDER: str — e.g. 
".goose/" + COMMANDS_SUBDIR: str — e.g. "recipes" + REGISTRAR_DIR: str — e.g. ".goose/recipes" + CONTEXT_FILE: str — e.g. "AGENTS.md" + """ + + KEY: str + FOLDER: str + COMMANDS_SUBDIR: str + REGISTRAR_DIR: str + CONTEXT_FILE: str + + # -- Registration ----------------------------------------------------- + + def test_registered(self): + assert self.KEY in INTEGRATION_REGISTRY + assert get_integration(self.KEY) is not None + + def test_is_yaml_integration(self): + assert isinstance(get_integration(self.KEY), YamlIntegration) + + # -- Config ----------------------------------------------------------- + + def test_config_folder(self): + i = get_integration(self.KEY) + assert i.config["folder"] == self.FOLDER + + def test_config_commands_subdir(self): + i = get_integration(self.KEY) + assert i.config["commands_subdir"] == self.COMMANDS_SUBDIR + + def test_registrar_config(self): + i = get_integration(self.KEY) + assert i.registrar_config["dir"] == self.REGISTRAR_DIR + assert i.registrar_config["format"] == "yaml" + assert i.registrar_config["args"] == "{{args}}" + assert i.registrar_config["extension"] == ".yaml" + + def test_context_file(self): + i = get_integration(self.KEY) + assert i.context_file == self.CONTEXT_FILE + + # -- Setup / teardown ------------------------------------------------- + + def test_setup_creates_files(self, tmp_path): + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.setup(tmp_path, m) + assert len(created) > 0 + cmd_files = [f for f in created if "scripts" not in f.parts] + for f in cmd_files: + assert f.exists() + assert f.name.startswith("speckit.") + assert f.name.endswith(".yaml") + + def test_setup_writes_to_correct_directory(self, tmp_path): + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.setup(tmp_path, m) + expected_dir = i.commands_dest(tmp_path) + assert expected_dir.exists(), ( + f"Expected directory {expected_dir} was not created" + ) + 
cmd_files = [f for f in created if "scripts" not in f.parts] + assert len(cmd_files) > 0, "No command files were created" + for f in cmd_files: + assert f.resolve().parent == expected_dir.resolve(), ( + f"{f} is not under {expected_dir}" + ) + + def test_templates_are_processed(self, tmp_path): + """Command files must have placeholders replaced.""" + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.setup(tmp_path, m) + cmd_files = [f for f in created if "scripts" not in f.parts] + assert len(cmd_files) > 0 + for f in cmd_files: + content = f.read_text(encoding="utf-8") + assert "{SCRIPT}" not in content, f"{f.name} has unprocessed {{SCRIPT}}" + assert "__AGENT__" not in content, f"{f.name} has unprocessed __AGENT__" + assert "{ARGS}" not in content, f"{f.name} has unprocessed {{ARGS}}" + + def test_yaml_has_title(self, tmp_path): + """Every YAML recipe should have a title field.""" + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.setup(tmp_path, m) + cmd_files = [f for f in created if "scripts" not in f.parts] + for f in cmd_files: + content = f.read_text(encoding="utf-8") + assert "title:" in content, f"{f.name} missing title field" + + def test_yaml_has_prompt(self, tmp_path): + """Every YAML recipe should have a prompt block scalar.""" + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.setup(tmp_path, m) + cmd_files = [f for f in created if "scripts" not in f.parts] + for f in cmd_files: + content = f.read_text(encoding="utf-8") + assert "prompt: |" in content, f"{f.name} missing prompt block scalar" + + def test_yaml_uses_correct_arg_placeholder(self, tmp_path): + """YAML recipes must use {{args}} placeholder.""" + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.setup(tmp_path, m) + cmd_files = [f for f in created if "scripts" not in f.parts] + has_args = any("{{args}}" in 
f.read_text(encoding="utf-8") for f in cmd_files)
+        assert has_args, "No YAML recipe contains {{args}} placeholder"
+        has_dollar_args = any(
+            "$ARGUMENTS" in f.read_text(encoding="utf-8") for f in cmd_files
+        )
+        assert not has_dollar_args, (
+            "YAML recipe still contains $ARGUMENTS instead of {{args}}"
+        )
+
+    def test_yaml_is_valid(self, tmp_path):
+        """Every generated YAML file must parse without errors."""
+        i = get_integration(self.KEY)
+        m = IntegrationManifest(self.KEY, tmp_path)
+        created = i.setup(tmp_path, m)
+        cmd_files = [f for f in created if "scripts" not in f.parts]
+        for f in cmd_files:
+            content = f.read_text(encoding="utf-8")
+            # Strip trailing source comment before parsing
+            lines = content.split("\n")
+            yaml_lines = [line for line in lines if not line.startswith("# Source:")]
+            try:
+                parsed = yaml.safe_load("\n".join(yaml_lines))
+            except yaml.YAMLError as exc:
+                raise AssertionError(f"{f.name} is not valid YAML: {exc}") from exc
+            assert "prompt" in parsed, f"{f.name} parsed YAML has no 'prompt' key"
+            assert "title" in parsed, f"{f.name} parsed YAML has no 'title' key"
+
+    def test_yaml_prompt_excludes_frontmatter(self, tmp_path, monkeypatch):
+        i = get_integration(self.KEY)
+        template = tmp_path / "sample.md"
+        template.write_text(
+            "---\n"
+            "description: Summary line one\n"
+            "scripts:\n"
+            "  sh: scripts/bash/example.sh\n"
+            "---\n"
+            "Body line one\n"
+            "Body line two\n",
+            encoding="utf-8",
+        )
+        monkeypatch.setattr(i, "list_command_templates", lambda: [template])
+
+        m = IntegrationManifest(self.KEY, tmp_path)
+        created = i.setup(tmp_path, m)
+        cmd_files = [f for f in created if "scripts" not in f.parts]
+        assert len(cmd_files) == 1
+
+        content = cmd_files[0].read_text(encoding="utf-8")
+        # Strip source comment for parsing
+        lines = content.split("\n")
+        yaml_lines = [line for line in lines if not line.startswith("# Source:")]
+        parsed = yaml.safe_load("\n".join(yaml_lines))
+
+        assert "description:" not in parsed["prompt"]
+        assert "scripts:" not in
parsed["prompt"] + assert "---" not in parsed["prompt"] + + def test_all_files_tracked_in_manifest(self, tmp_path): + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.setup(tmp_path, m) + for f in created: + rel = f.resolve().relative_to(tmp_path.resolve()).as_posix() + assert rel in m.files, f"{rel} not tracked in manifest" + + def test_install_uninstall_roundtrip(self, tmp_path): + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.install(tmp_path, m) + assert len(created) > 0 + m.save() + for f in created: + assert f.exists() + removed, skipped = i.uninstall(tmp_path, m) + assert len(removed) == len(created) + assert skipped == [] + + def test_modified_file_survives_uninstall(self, tmp_path): + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.install(tmp_path, m) + m.save() + modified_file = created[0] + modified_file.write_text("user modified this", encoding="utf-8") + removed, skipped = i.uninstall(tmp_path, m) + assert modified_file.exists() + assert modified_file in skipped + + # -- Scripts ---------------------------------------------------------- + + def test_setup_installs_update_context_scripts(self, tmp_path): + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + created = i.setup(tmp_path, m) + scripts_dir = tmp_path / ".specify" / "integrations" / self.KEY / "scripts" + assert scripts_dir.is_dir(), f"Scripts directory not created for {self.KEY}" + assert (scripts_dir / "update-context.sh").exists() + assert (scripts_dir / "update-context.ps1").exists() + + def test_scripts_tracked_in_manifest(self, tmp_path): + i = get_integration(self.KEY) + m = IntegrationManifest(self.KEY, tmp_path) + i.setup(tmp_path, m) + script_rels = [k for k in m.files if "update-context" in k] + assert len(script_rels) >= 2 + + def test_sh_script_is_executable(self, tmp_path): + i = get_integration(self.KEY) + m = 
IntegrationManifest(self.KEY, tmp_path) + i.setup(tmp_path, m) + sh = ( + tmp_path + / ".specify" + / "integrations" + / self.KEY + / "scripts" + / "update-context.sh" + ) + assert os.access(sh, os.X_OK) + + # -- CLI auto-promote ------------------------------------------------- + + def test_ai_flag_auto_promotes(self, tmp_path): + from typer.testing import CliRunner + from specify_cli import app + + project = tmp_path / f"promote-{self.KEY}" + project.mkdir() + old_cwd = os.getcwd() + try: + os.chdir(project) + runner = CliRunner() + result = runner.invoke( + app, + [ + "init", + "--here", + "--ai", + self.KEY, + "--script", + "sh", + "--no-git", + "--ignore-agent-tools", + ], + catch_exceptions=False, + ) + finally: + os.chdir(old_cwd) + assert result.exit_code == 0, f"init --ai {self.KEY} failed: {result.output}" + i = get_integration(self.KEY) + cmd_dir = i.commands_dest(project) + assert cmd_dir.is_dir(), f"--ai {self.KEY} did not create commands directory" + + def test_integration_flag_creates_files(self, tmp_path): + from typer.testing import CliRunner + from specify_cli import app + + project = tmp_path / f"int-{self.KEY}" + project.mkdir() + old_cwd = os.getcwd() + try: + os.chdir(project) + runner = CliRunner() + result = runner.invoke( + app, + [ + "init", + "--here", + "--integration", + self.KEY, + "--script", + "sh", + "--no-git", + "--ignore-agent-tools", + ], + catch_exceptions=False, + ) + finally: + os.chdir(old_cwd) + assert result.exit_code == 0, ( + f"init --integration {self.KEY} failed: {result.output}" + ) + i = get_integration(self.KEY) + cmd_dir = i.commands_dest(project) + assert cmd_dir.is_dir(), f"Commands directory {cmd_dir} not created" + commands = sorted(cmd_dir.glob("speckit.*.yaml")) + assert len(commands) > 0, f"No command files in {cmd_dir}" + + # -- Complete file inventory ------------------------------------------ + + COMMAND_STEMS = [ + "analyze", + "checklist", + "clarify", + "constitution", + "implement", + "plan", + 
"specify",
+        "tasks",
+        "taskstoissues",
+    ]
+
+    def _expected_files(self, script_variant: str) -> list[str]:
+        """Build the expected file list for this integration + script variant."""
+        i = get_integration(self.KEY)
+        cmd_dir = i.registrar_config["dir"]
+        files = []
+
+        # Command files (.yaml)
+        for stem in self.COMMAND_STEMS:
+            files.append(f"{cmd_dir}/speckit.{stem}.yaml")
+
+        # Integration scripts
+        files.append(f".specify/integrations/{self.KEY}/scripts/update-context.ps1")
+        files.append(f".specify/integrations/{self.KEY}/scripts/update-context.sh")
+
+        # Framework files
+        files.append(".specify/integration.json")
+        files.append(".specify/init-options.json")
+        files.append(f".specify/integrations/{self.KEY}.manifest.json")
+        files.append(".specify/integrations/speckit.manifest.json")
+
+        if script_variant == "sh":
+            for name in [
+                "check-prerequisites.sh",
+                "common.sh",
+                "create-new-feature.sh",
+                "setup-plan.sh",
+                "update-agent-context.sh",
+            ]:
+                files.append(f".specify/scripts/bash/{name}")
+        else:
+            for name in [
+                "check-prerequisites.ps1",
+                "common.ps1",
+                "create-new-feature.ps1",
+                "setup-plan.ps1",
+                "update-agent-context.ps1",
+            ]:
+                files.append(f".specify/scripts/powershell/{name}")
+
+        for name in [
+            "agent-file-template.md",
+            "checklist-template.md",
+            "constitution-template.md",
+            "plan-template.md",
+            "spec-template.md",
+            "tasks-template.md",
+        ]:
+            files.append(f".specify/templates/{name}")
+
+        files.append(".specify/memory/constitution.md")
+        return sorted(files)
+
+    def test_complete_file_inventory_sh(self, tmp_path):
+        """Every file produced by specify init --integration --script sh."""
+        from typer.testing import CliRunner
+        from specify_cli import app
+
+        project = tmp_path / f"inventory-sh-{self.KEY}"
+        project.mkdir()
+        old_cwd = os.getcwd()
+        try:
+            os.chdir(project)
+            result = CliRunner().invoke(
+                app,
+                [
+                    "init",
+                    "--here",
+                    "--integration",
+                    self.KEY,
+                    "--script",
+                    "sh",
+                    "--no-git",
+                    "--ignore-agent-tools",
+                ],
+                catch_exceptions=False,
+            )
+        finally:
+            os.chdir(old_cwd)
+        assert result.exit_code == 0, f"init failed: {result.output}"
+        actual = sorted(
+            p.relative_to(project).as_posix() for p in project.rglob("*") if p.is_file()
+        )
+        expected = self._expected_files("sh")
+        assert actual == expected, (
+            f"Missing: {sorted(set(expected) - set(actual))}\n"
+            f"Extra: {sorted(set(actual) - set(expected))}"
+        )
+
+    def test_complete_file_inventory_ps(self, tmp_path):
+        """Every file produced by specify init --integration --script ps."""
+        from typer.testing import CliRunner
+        from specify_cli import app
+
+        project = tmp_path / f"inventory-ps-{self.KEY}"
+        project.mkdir()
+        old_cwd = os.getcwd()
+        try:
+            os.chdir(project)
+            result = CliRunner().invoke(
+                app,
+                [
+                    "init",
+                    "--here",
+                    "--integration",
+                    self.KEY,
+                    "--script",
+                    "ps",
+                    "--no-git",
+                    "--ignore-agent-tools",
+                ],
+                catch_exceptions=False,
+            )
+        finally:
+            os.chdir(old_cwd)
+        assert result.exit_code == 0, f"init failed: {result.output}"
+        actual = sorted(
+            p.relative_to(project).as_posix() for p in project.rglob("*") if p.is_file()
+        )
+        expected = self._expected_files("ps")
+        assert actual == expected, (
+            f"Missing: {sorted(set(expected) - set(actual))}\n"
+            f"Extra: {sorted(set(actual) - set(expected))}"
+        )
diff --git a/tests/integrations/test_integration_goose.py b/tests/integrations/test_integration_goose.py
new file mode 100644
index 000000000..6483666f3
--- /dev/null
+++ b/tests/integrations/test_integration_goose.py
@@ -0,0 +1,11 @@
+"""Tests for GooseIntegration."""
+
+from .test_integration_base_yaml import YamlIntegrationTests
+
+
+class TestGooseIntegration(YamlIntegrationTests):
+    KEY = "goose"
+    FOLDER = ".goose/"
+    COMMANDS_SUBDIR = "recipes"
+    REGISTRAR_DIR = ".goose/recipes"
+    CONTEXT_FILE = "AGENTS.md"
diff --git a/tests/test_agent_config_consistency.py b/tests/test_agent_config_consistency.py
index 35d8c02f7..9cfe1ddbc 100644
--- a/tests/test_agent_config_consistency.py
+++ b/tests/test_agent_config_consistency.py
@@ -50,16 +50,25 @@ def test_init_ai_help_includes_roo_and_kiro_alias(self):
 
     def test_devcontainer_kiro_installer_uses_pinned_checksum(self):
         """Devcontainer installer should always verify Kiro installer via pinned SHA256."""
-        post_create_text = (REPO_ROOT / ".devcontainer" / "post-create.sh").read_text(encoding="utf-8")
-
-        assert 'KIRO_INSTALLER_SHA256="7487a65cf310b7fb59b357c4b5e6e3f3259d383f4394ecedb39acf70f307cffb"' in post_create_text
+        post_create_text = (REPO_ROOT / ".devcontainer" / "post-create.sh").read_text(
+            encoding="utf-8"
+        )
+
+        assert (
+            'KIRO_INSTALLER_SHA256="7487a65cf310b7fb59b357c4b5e6e3f3259d383f4394ecedb39acf70f307cffb"'
+            in post_create_text
+        )
         assert "sha256sum -c -" in post_create_text
         assert "KIRO_SKIP_KIRO_INSTALLER_VERIFY" not in post_create_text
 
     def test_agent_context_scripts_use_kiro_cli(self):
         """Agent context scripts should advertise kiro-cli and not legacy q agent key."""
-        bash_text = (REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh").read_text(encoding="utf-8")
-        pwsh_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
+        bash_text = (
+            REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh"
+        ).read_text(encoding="utf-8")
+        pwsh_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
 
         assert "kiro-cli" in bash_text
         assert "kiro-cli" in pwsh_text
@@ -89,8 +98,12 @@ def test_extension_registrar_includes_tabnine(self):
 
     def test_agent_context_scripts_include_tabnine(self):
         """Agent context scripts should support tabnine agent type."""
-        bash_text = (REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh").read_text(encoding="utf-8")
-        pwsh_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
+        bash_text = (
+            REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh"
+        ).read_text(encoding="utf-8")
+        pwsh_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
 
         assert "tabnine" in bash_text
         assert "TABNINE_FILE" in bash_text
@@ -121,7 +134,9 @@ def test_kimi_in_extension_registrar(self):
 
     def test_kimi_in_powershell_validate_set(self):
         """PowerShell update-agent-context script should include 'kimi' in ValidateSet."""
-        ps_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
+        ps_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
 
         validate_set_match = re.search(r"\[ValidateSet\(([^)]*)\)\]", ps_text)
         assert validate_set_match is not None
@@ -155,8 +170,12 @@ def test_trae_in_extension_registrar(self):
 
     def test_trae_in_agent_context_scripts(self):
         """Agent context scripts should support trae agent type."""
-        bash_text = (REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh").read_text(encoding="utf-8")
-        pwsh_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
+        bash_text = (
+            REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh"
+        ).read_text(encoding="utf-8")
+        pwsh_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
 
         assert "trae" in bash_text
         assert "TRAE_FILE" in bash_text
@@ -165,7 +184,9 @@ def test_trae_in_agent_context_scripts(self):
 
     def test_trae_in_powershell_validate_set(self):
         """PowerShell update-agent-context script should include 'trae' in ValidateSet."""
-        ps_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
+        ps_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
 
         validate_set_match = re.search(r"\[ValidateSet\(([^)]*)\)\]", ps_text)
         assert validate_set_match is not None
@@ -200,7 +221,9 @@ def test_pi_in_extension_registrar(self):
 
     def test_pi_in_powershell_validate_set(self):
         """PowerShell update-agent-context script should include 'pi' in ValidateSet."""
-        ps_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
+        ps_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
 
         validate_set_match = re.search(r"\[ValidateSet\(([^)]*)\)\]", ps_text)
         assert validate_set_match is not None
@@ -210,8 +233,12 @@ def test_pi_in_powershell_validate_set(self):
 
     def test_agent_context_scripts_include_pi(self):
         """Agent context scripts should support pi agent type."""
-        bash_text = (REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh").read_text(encoding="utf-8")
-        pwsh_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
+        bash_text = (
+            REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh"
+        ).read_text(encoding="utf-8")
+        pwsh_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
 
         assert "pi" in bash_text
         assert "Pi Coding Agent" in bash_text
@@ -242,8 +269,12 @@ def test_iflow_in_extension_registrar(self):
 
     def test_iflow_in_agent_context_scripts(self):
         """Agent context scripts should support iflow agent type."""
-        bash_text = (REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh").read_text(encoding="utf-8")
-        pwsh_text = (REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1").read_text(encoding="utf-8")
+        bash_text = (
+            REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh"
+        ).read_text(encoding="utf-8")
+        pwsh_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
 
         assert "iflow" in bash_text
         assert "IFLOW_FILE" in bash_text
@@ -253,3 +284,37 @@ def test_iflow_in_agent_context_scripts(self):
     def test_ai_help_includes_iflow(self):
         """CLI help text for --ai should include iflow."""
         assert "iflow" in AI_ASSISTANT_HELP
+
+    # --- Goose consistency checks ---
+
+    def test_goose_in_agent_config(self):
+        """AGENT_CONFIG should include goose with correct folder and commands_subdir."""
+        assert "goose" in AGENT_CONFIG
+        assert AGENT_CONFIG["goose"]["folder"] == ".goose/"
+        assert AGENT_CONFIG["goose"]["commands_subdir"] == "recipes"
+        assert AGENT_CONFIG["goose"]["requires_cli"] is True
+
+    def test_goose_in_extension_registrar(self):
+        """Extension command registrar should include goose targeting .goose/recipes."""
+        cfg = CommandRegistrar.AGENT_CONFIGS
+
+        assert "goose" in cfg
+        assert cfg["goose"]["dir"] == ".goose/recipes"
+        assert cfg["goose"]["format"] == "yaml"
+        assert cfg["goose"]["args"] == "{{args}}"
+
+    def test_goose_in_agent_context_scripts(self):
+        """Agent context scripts should support goose agent type."""
+        bash_text = (
+            REPO_ROOT / "scripts" / "bash" / "update-agent-context.sh"
+        ).read_text(encoding="utf-8")
+        pwsh_text = (
+            REPO_ROOT / "scripts" / "powershell" / "update-agent-context.ps1"
+        ).read_text(encoding="utf-8")
+
+        assert "goose" in bash_text
+        assert "goose" in pwsh_text
+
+    def test_ai_help_includes_goose(self):
+        """CLI help text for --ai should include goose."""
+        assert "goose" in AI_ASSISTANT_HELP