📂 Add new directory for image-scale-cli tool

2026-04-11 11:10:53 +08:00
parent 82e293f4c2
commit 9454512b7b
60 changed files with 3186 additions and 0 deletions


@@ -0,0 +1,150 @@
---
description: Implement tasks from an OpenSpec change (Experimental)
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
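As a rough sketch, the two fields can be pulled out with `jq` (the payload below is hypothetical; the real `openspec status --json` shape may differ):

```bash
# Hypothetical payload; the real `openspec status --json` shape may differ
status_json='{"schemaName":"spec-driven","artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"ready"}]}'

# Which workflow is in use, and what is the status of the tasks artifact?
echo "$status_json" | jq -r '.schemaName'
echo "$status_json" | jq -r '.artifacts[] | select(.id == "tasks") | .status'
```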
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
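The state handling above amounts to a three-way branch; a minimal sketch (the payload is hypothetical, not the exact CLI output):

```bash
# Hypothetical payload; the real `openspec instructions apply --json` shape may differ
apply_json='{"state":"blocked"}'
state=$(echo "$apply_json" | jq -r '.state')

case "$state" in
  blocked)  echo "Missing artifacts. Try /opsx:continue." ;;
  all_done) echo "All tasks complete. Consider /opsx:archive." ;;
  *)        echo "Proceeding to implementation." ;;
esac
```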
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
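The checkbox flip can be done with a targeted substitution; a self-contained sketch using GNU `sed` (the task names and the `/tmp` file are made up for illustration):

```bash
# Throwaway tasks file for illustration (task names are made up)
cat > /tmp/tasks-demo.md <<'EOF'
- [x] Scaffold the CLI entry point
- [ ] Wire up the image scaler
- [ ] Add tests
EOF

# Flip one specific checkbox: `- [ ]` -> `- [x]` (GNU sed -i shown)
sed -i 's/^- \[ \] Wire up the image scaler$/- [x] Wire up the image scaler/' /tmp/tasks-demo.md
grep -c '^- \[x\]' /tmp/tasks-demo.md
```

Anchoring the pattern on the full task line avoids flipping the wrong checkbox when several tasks are pending.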
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,149 @@
description = "Implement tasks from an OpenSpec change (Experimental)"
prompt = """
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
"""


@@ -0,0 +1,155 @@
---
description: Archive a completed change in the experimental workflow
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
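The incomplete/complete counts can come straight from `grep -c`; a self-contained sketch (the tasks file and its contents are illustrative):

```bash
# Throwaway tasks file for illustration
cat > /tmp/archive-check.md <<'EOF'
- [x] Design the API
- [ ] Write docs
- [ ] Migrate config
EOF

# Count completed vs. incomplete checkboxes
complete=$(grep -c '^- \[x\]' /tmp/archive-check.md)
incomplete=$(grep -c '^- \[ \]' /tmp/archive-check.md)
echo "$complete complete, $incomplete incomplete"
```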
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If the user chooses sync, use the Task tool (subagent_type: "general-purpose", prompt: "Use the Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of the choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
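Putting the existence check and the dated move together, a self-contained sketch under `/tmp` (the change name is illustrative; the real layout is `openspec/changes/<name>`):

```bash
# Self-contained sketch under /tmp; the real layout is openspec/changes/<name>
root=/tmp/openspec-archive-demo
name="add-user-auth"                      # illustrative change name
mkdir -p "$root/changes/$name"

# Target name uses today's date: YYYY-MM-DD-<change-name>
target="$root/changes/archive/$(date +%F)-$name"
if [ -e "$target" ]; then
  echo "Target archive directory already exists: $target" >&2
else
  mkdir -p "$root/changes/archive"
  mv "$root/changes/$name" "$target"
fi
```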
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use the artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,154 @@
description = "Archive a completed change in the experimental workflow"
prompt = """
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If the user chooses sync, use the Task tool (subagent_type: "general-purpose", prompt: "Use the Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of the choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use the artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
"""


@@ -0,0 +1,171 @@
---
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,170 @@
description = "Enter explore mode - think through ideas, investigate problems, clarify requirements"
prompt = """
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
"""


@@ -0,0 +1,104 @@
---
description: Propose a new change - create it and generate all artifacts in one step
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run `/opsx:apply`.
---
**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
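The apply-ready check in step 4b can be sketched as follows. The field names (`applyRequires`, `artifacts`, `status`) come from the status JSON described above, but the exact shape is an assumption for illustration, not the CLI's documented output:

```typescript
// Hypothetical shape of one entry in the `artifacts` array.
interface ArtifactStatus {
  id: string;
  status: string; // e.g. "done", "ready", "pending"
}

// A change is apply-ready when every required artifact reports "done".
function isApplyReady(
  applyRequires: string[],
  artifacts: ArtifactStatus[],
): boolean {
  return applyRequires.every((id) =>
    artifacts.some((a) => a.id === id && a.status === "done")
  );
}
```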
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next

View File

@@ -0,0 +1,103 @@
description = "Propose a new change - create it and generate all artifacts in one step"
prompt = """
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx:apply
---
**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next
"""

View File

@@ -0,0 +1,11 @@
{
"permissions": {
"allow": [
"Bash(openspec *)",
"Bash(mkdir *)",
"Bash(deno *)",
"Bash(mv *)"
]
},
"$version": 3
}

View File

@@ -0,0 +1,7 @@
{
"permissions": {
"allow": [
"Bash(openspec *)"
]
}
}

View File

@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
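The checkbox update in step 6 amounts to a one-line text substitution. A minimal sketch (the helper name is hypothetical; `tasksText` stands in for the contents of the change's tasks file):

```typescript
// Flip the first matching unchecked task to checked: `- [ ]` → `- [x]`.
// Using a string pattern (not a regex) replaces only the first occurrence.
function markTaskDone(tasksText: string, taskLabel: string): string {
  return tasksText.replace(`- [ ] ${taskLabel}`, `- [x] ${taskLabel}`);
}
```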
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

View File

@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
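The date-stamped target name from step 5 can be sketched as below; this assumes the UTC date is acceptable for the `YYYY-MM-DD` stamp and mirrors the `mv` target shown above:

```typescript
// Build the archive destination path for a change, e.g.
// archivePath("add-auth") → "openspec/changes/archive/2026-04-11-add-auth"
function archivePath(changeName: string, date = new Date()): string {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD (UTC)
  return `openspec/changes/archive/${stamp}-${changeName}`;
}
```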
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

View File

@@ -0,0 +1,288 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

View File

@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx:apply
---
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next


image-scale-cli/cli.ts Normal file
View File

@@ -0,0 +1,135 @@
#!/usr/bin/env -S deno run --allow-read --allow-write
import { parseArgs } from "@std/cli/parse-args";
import { scaleImage, type ScaleMode } from "./src/scaler.ts";
import { scanDirectory } from "./src/scanner.ts";
import { processBatch } from "./src/batch.ts";
const HELP = `
scale-image-fixer - Scale images to fixed dimensions
Usage:
deno run cli.ts <input> [options]
deno run cli.ts <input...> [options] # Multiple files
deno run cli.ts <directory> [options] # Directory mode
Arguments:
input Input file path, multiple file paths, or directory path
Options:
-w, --width <number> Output width (required)
-h, --height <number> Output height (required)
-m, --mode <mode> Scale mode: stretch, fit, cover (default: fit)
-o, --output <path> Output path (file or directory)
--help Show this help message
Scale Modes:
stretch Resize to exact dimensions (may distort aspect ratio)
fit Fit within dimensions while preserving aspect ratio
cover Cover dimensions while preserving aspect ratio (crops excess)
Examples:
deno run cli.ts input.png -w 800 -h 600
deno run cli.ts input.jpg -w 400 -h 400 -m cover -o output.jpg
deno run cli.ts ./images -w 200 -h 200 -o ./resized
`;
interface Args {
width?: number;
height?: number;
mode?: string;
output?: string;
help?: boolean;
_: string[];
}
function validateArgs(args: Args): { valid: boolean; error?: string } {
if (args.help) {
return { valid: true };
}
if (args._.length === 0) {
return { valid: false, error: "Input file, files, or directory is required" };
}
if (args.width === undefined || args.height === undefined || Number.isNaN(args.width) || Number.isNaN(args.height)) {
return { valid: false, error: "Width (-w) and height (-h) are required and must be numbers" };
}
if (args.width <= 0 || args.height <= 0) {
return { valid: false, error: "Width and height must be positive numbers" };
}
const validModes = ["stretch", "fit", "cover"];
if (args.mode && !validModes.includes(args.mode)) {
return { valid: false, error: `Mode must be one of: ${validModes.join(", ")}` };
}
return { valid: true };
}
async function main() {
const raw = parseArgs(Deno.args, {
string: ["width", "height", "mode", "output"],
boolean: ["help"],
alias: {
w: "width",
h: "height",
m: "mode",
o: "output",
},
default: {
mode: "fit",
},
});
// parseArgs returns width/height as strings; convert them to numbers
// so validation and the scaler receive the types Args declares.
const args = {
...raw,
width: raw.width !== undefined ? Number(raw.width) : undefined,
height: raw.height !== undefined ? Number(raw.height) : undefined,
} as Args;
const validation = validateArgs(args);
if (!validation.valid) {
console.error(`Error: ${validation.error}`);
console.log("\nUse --help for usage information");
Deno.exit(1);
}
if (args.help) {
console.log(HELP);
Deno.exit(0);
}
const inputs = args._.map(String);
const scaleMode: ScaleMode = (args.mode as ScaleMode) || "fit";
const width = args.width!;
const height = args.height!;
const output = args.output;
try {
// Check if inputs are files or directory
const firstInput = await Deno.stat(inputs[0]);
if (firstInput.isDirectory) {
// Directory mode: scan and batch process
const images = await scanDirectory(inputs[0]);
if (images.length === 0) {
console.log("No supported images found in directory");
Deno.exit(0);
}
await processBatch(images, { width, height, mode: scaleMode, outputDir: output });
} else if (inputs.length > 1) {
// Multiple files mode
await processBatch(inputs, { width, height, mode: scaleMode, outputDir: output });
} else {
// Single file mode
const result = await scaleImage(inputs[0], { width, height, mode: scaleMode, output });
console.log(`Scaled: ${result.input} → ${result.output}`);
}
} catch (error) {
if (error instanceof Deno.errors.NotFound) {
console.error(`Error: File or directory not found: ${inputs[0]}`);
} else if (error instanceof Error) {
console.error(`Error: ${error.message}`);
} else {
console.error("An unexpected error occurred");
}
Deno.exit(1);
}
}
await main();
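The three scale modes listed in `HELP` differ only in how the target dimensions are derived. A minimal sketch of the fit/cover geometry (illustrative only; `scaler.ts`'s actual implementation may differ, and `fitDimensions` is a hypothetical helper):

```typescript
// "fit" scales so the whole image fits inside the box (min ratio);
// "cover" scales so the image fills the box, with excess to be cropped
// (max ratio). "stretch" would simply use dstW x dstH directly.
function fitDimensions(
  srcW: number,
  srcH: number,
  dstW: number,
  dstH: number,
  mode: "fit" | "cover",
): { width: number; height: number } {
  const scale = mode === "fit"
    ? Math.min(dstW / srcW, dstH / srcH)
    : Math.max(dstW / srcW, dstH / srcH);
  return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
}
```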

View File

@@ -0,0 +1,70 @@
import { assert, assertEquals } from "@std/assert";
import { join } from "@std/path";
const CLI_PATH = join(import.meta.dirname!, "cli.ts");
const TEST_IMAGES = {
png: "./test-images/test-100x100.png",
jpg: "./test-images/test-100x100.jpg",
};
async function runCli(args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
const cmd = new Deno.Command("deno", {
args: ["run", "--allow-read", "--allow-write", "--allow-ffi", "--allow-env", CLI_PATH, ...args],
stdout: "piped",
stderr: "piped",
});
const output = await cmd.output();
return {
code: output.code,
stdout: new TextDecoder().decode(output.stdout),
stderr: new TextDecoder().decode(output.stderr),
};
}
Deno.test("CLI - should show help with --help flag", async () => {
const result = await runCli(["--help"]);
assertEquals(result.code, 0);
assert(result.stdout.includes("scale-image-fixer"));
assert(result.stdout.includes("--width"));
assert(result.stdout.includes("--height"));
assert(result.stdout.includes("--mode"));
});
Deno.test("CLI - should scale a single image", async () => {
const outputPath = "./test-images/cli-test-output.png";
const result = await runCli([TEST_IMAGES.png, "-w", "50", "-h", "50", "-o", outputPath]);
assertEquals(result.code, 0, `CLI failed: ${result.stderr}`);
assert(result.stdout.includes("Scaled:"));
// Verify output file exists
const stat = await Deno.stat(outputPath);
assert(stat.isFile);
// Clean up
await Deno.remove(outputPath);
});
Deno.test("CLI - should handle missing required arguments", async () => {
const result = await runCli([TEST_IMAGES.png]);
assertEquals(result.code, 1);
assert(result.stderr.includes("required") || result.stderr.includes("Error"));
});
Deno.test("CLI - should handle non-existent file", async () => {
const result = await runCli(["./non-existent.png", "-w", "100", "-h", "100"]);
assertEquals(result.code, 1);
assert(result.stderr.includes("not found") || result.stderr.includes("Error"));
});
Deno.test("CLI - should handle invalid mode", async () => {
const result = await runCli([TEST_IMAGES.png, "-w", "100", "-h", "100", "-m", "invalid"]);
assertEquals(result.code, 1);
assert(result.stderr.includes("Mode must be one of") || result.stderr.includes("invalid"));
});

image-scale-cli/deno.json Normal file

@@ -0,0 +1,23 @@
{
"name": "scale-image-fixer",
"version": "1.0.0",
"license": "MIT",
"exports": "./cli.ts",
"tasks": {
"start": "deno run --allow-read --allow-write --allow-env --allow-ffi cli.ts",
"test": "deno test --allow-read --allow-write --allow-ffi --allow-run --allow-env --no-check",
"dev": "deno run --allow-read --allow-write --watch cli.ts",
"gen:test-images": "deno run --allow-read --allow-write --allow-env scripts/generate-test-images.ts"
},
"compilerOptions": {
"strict": true,
"lib": ["deno.window"]
},
"imports": {
"@std/path": "jsr:@std/path@1",
"@std/fs": "jsr:@std/fs@1",
"@std/cli": "jsr:@std/cli@1",
"@std/assert": "jsr:@std/assert@1",
"sharp": "npm:sharp@^0.33.0"
}
}

image-scale-cli/deno.lock generated Normal file

@@ -0,0 +1,282 @@
{
"version": "5",
"specifiers": {
"jsr:@std/assert@1": "1.0.18",
"jsr:@std/cli@*": "1.0.27",
"jsr:@std/cli@1": "1.0.27",
"jsr:@std/fs@1": "1.0.22",
"jsr:@std/internal@^1.0.12": "1.0.12",
"jsr:@std/path@1": "1.1.4",
"jsr:@std/path@^1.1.4": "1.1.4",
"npm:sharp@0.33": "0.33.5"
},
"jsr": {
"@std/assert@1.0.18": {
"integrity": "270245e9c2c13b446286de475131dc688ca9abcd94fc5db41d43a219b34d1c78",
"dependencies": [
"jsr:@std/internal"
]
},
"@std/cli@1.0.27": {
"integrity": "eba97edd0891871a7410e835dd94b3c260c709cca5983df2689c25a71fbe04de",
"dependencies": [
"jsr:@std/internal"
]
},
"@std/fs@1.0.22": {
"integrity": "de0f277a58a867147a8a01bc1b181d0dfa80bfddba8c9cf2bacd6747bcec9308",
"dependencies": [
"jsr:@std/path@^1.1.4"
]
},
"@std/internal@1.0.12": {
"integrity": "972a634fd5bc34b242024402972cd5143eac68d8dffaca5eaa4dba30ce17b027"
},
"@std/path@1.1.4": {
"integrity": "1d2d43f39efb1b42f0b1882a25486647cb851481862dc7313390b2bb044314b5",
"dependencies": [
"jsr:@std/internal"
]
}
},
"npm": {
"@emnapi/runtime@1.8.1": {
"integrity": "sha512-mehfKSMWjjNol8659Z8KxEMrdSJDDot5SXMq00dM8BN4o+CLNXQ0xH2V7EchNHV4RmbZLmmPdEaXZc5H2FXmDg==",
"dependencies": [
"tslib"
],
"tarball": "https://registry.npmmirror.com/@emnapi/runtime/-/runtime-1.8.1.tgz"
},
"@img/sharp-darwin-arm64@0.33.5": {
"integrity": "sha512-UT4p+iz/2H4twwAoLCqfA9UH5pI6DggwKEGuaPy7nCVQ8ZsiY5PIcrRvD1DzuY3qYL07NtIQcWnBSY/heikIFQ==",
"optionalDependencies": [
"@img/sharp-libvips-darwin-arm64"
],
"os": ["darwin"],
"cpu": ["arm64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-darwin-arm64/-/sharp-darwin-arm64-0.33.5.tgz"
},
"@img/sharp-darwin-x64@0.33.5": {
"integrity": "sha512-fyHac4jIc1ANYGRDxtiqelIbdWkIuQaI84Mv45KvGRRxSAa7o7d1ZKAOBaYbnepLC1WqxfpimdeWfvqqSGwR2Q==",
"optionalDependencies": [
"@img/sharp-libvips-darwin-x64"
],
"os": ["darwin"],
"cpu": ["x64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-darwin-x64/-/sharp-darwin-x64-0.33.5.tgz"
},
"@img/sharp-libvips-darwin-arm64@1.0.4": {
"integrity": "sha512-XblONe153h0O2zuFfTAbQYAX2JhYmDHeWikp1LM9Hul9gVPjFY427k6dFEcOL72O01QxQsWi761svJ/ev9xEDg==",
"os": ["darwin"],
"cpu": ["arm64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-libvips-darwin-arm64/-/sharp-libvips-darwin-arm64-1.0.4.tgz"
},
"@img/sharp-libvips-darwin-x64@1.0.4": {
"integrity": "sha512-xnGR8YuZYfJGmWPvmlunFaWJsb9T/AO2ykoP3Fz/0X5XV2aoYBPkX6xqCQvUTKKiLddarLaxpzNe+b1hjeWHAQ==",
"os": ["darwin"],
"cpu": ["x64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-libvips-darwin-x64/-/sharp-libvips-darwin-x64-1.0.4.tgz"
},
"@img/sharp-libvips-linux-arm64@1.0.4": {
"integrity": "sha512-9B+taZ8DlyyqzZQnoeIvDVR/2F4EbMepXMc/NdVbkzsJbzkUjhXv/70GQJ7tdLA4YJgNP25zukcxpX2/SueNrA==",
"os": ["linux"],
"cpu": ["arm64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-libvips-linux-arm64/-/sharp-libvips-linux-arm64-1.0.4.tgz"
},
"@img/sharp-libvips-linux-arm@1.0.5": {
"integrity": "sha512-gvcC4ACAOPRNATg/ov8/MnbxFDJqf/pDePbBnuBDcjsI8PssmjoKMAz4LtLaVi+OnSb5FK/yIOamqDwGmXW32g==",
"os": ["linux"],
"cpu": ["arm"],
"tarball": "https://registry.npmmirror.com/@img/sharp-libvips-linux-arm/-/sharp-libvips-linux-arm-1.0.5.tgz"
},
"@img/sharp-libvips-linux-s390x@1.0.4": {
"integrity": "sha512-u7Wz6ntiSSgGSGcjZ55im6uvTrOxSIS8/dgoVMoiGE9I6JAfU50yH5BoDlYA1tcuGS7g/QNtetJnxA6QEsCVTA==",
"os": ["linux"],
"cpu": ["s390x"],
"tarball": "https://registry.npmmirror.com/@img/sharp-libvips-linux-s390x/-/sharp-libvips-linux-s390x-1.0.4.tgz"
},
"@img/sharp-libvips-linux-x64@1.0.4": {
"integrity": "sha512-MmWmQ3iPFZr0Iev+BAgVMb3ZyC4KeFc3jFxnNbEPas60e1cIfevbtuyf9nDGIzOaW9PdnDciJm+wFFaTlj5xYw==",
"os": ["linux"],
"cpu": ["x64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-libvips-linux-x64/-/sharp-libvips-linux-x64-1.0.4.tgz"
},
"@img/sharp-libvips-linuxmusl-arm64@1.0.4": {
"integrity": "sha512-9Ti+BbTYDcsbp4wfYib8Ctm1ilkugkA/uscUn6UXK1ldpC1JjiXbLfFZtRlBhjPZ5o1NCLiDbg8fhUPKStHoTA==",
"os": ["linux"],
"cpu": ["arm64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-libvips-linuxmusl-arm64/-/sharp-libvips-linuxmusl-arm64-1.0.4.tgz"
},
"@img/sharp-libvips-linuxmusl-x64@1.0.4": {
"integrity": "sha512-viYN1KX9m+/hGkJtvYYp+CCLgnJXwiQB39damAO7WMdKWlIhmYTfHjwSbQeUK/20vY154mwezd9HflVFM1wVSw==",
"os": ["linux"],
"cpu": ["x64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-libvips-linuxmusl-x64/-/sharp-libvips-linuxmusl-x64-1.0.4.tgz"
},
"@img/sharp-linux-arm64@0.33.5": {
"integrity": "sha512-JMVv+AMRyGOHtO1RFBiJy/MBsgz0x4AWrT6QoEVVTyh1E39TrCUpTRI7mx9VksGX4awWASxqCYLCV4wBZHAYxA==",
"optionalDependencies": [
"@img/sharp-libvips-linux-arm64"
],
"os": ["linux"],
"cpu": ["arm64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-linux-arm64/-/sharp-linux-arm64-0.33.5.tgz"
},
"@img/sharp-linux-arm@0.33.5": {
"integrity": "sha512-JTS1eldqZbJxjvKaAkxhZmBqPRGmxgu+qFKSInv8moZ2AmT5Yib3EQ1c6gp493HvrvV8QgdOXdyaIBrhvFhBMQ==",
"optionalDependencies": [
"@img/sharp-libvips-linux-arm"
],
"os": ["linux"],
"cpu": ["arm"],
"tarball": "https://registry.npmmirror.com/@img/sharp-linux-arm/-/sharp-linux-arm-0.33.5.tgz"
},
"@img/sharp-linux-s390x@0.33.5": {
"integrity": "sha512-y/5PCd+mP4CA/sPDKl2961b+C9d+vPAveS33s6Z3zfASk2j5upL6fXVPZi7ztePZ5CuH+1kW8JtvxgbuXHRa4Q==",
"optionalDependencies": [
"@img/sharp-libvips-linux-s390x"
],
"os": ["linux"],
"cpu": ["s390x"],
"tarball": "https://registry.npmmirror.com/@img/sharp-linux-s390x/-/sharp-linux-s390x-0.33.5.tgz"
},
"@img/sharp-linux-x64@0.33.5": {
"integrity": "sha512-opC+Ok5pRNAzuvq1AG0ar+1owsu842/Ab+4qvU879ippJBHvyY5n2mxF1izXqkPYlGuP/M556uh53jRLJmzTWA==",
"optionalDependencies": [
"@img/sharp-libvips-linux-x64"
],
"os": ["linux"],
"cpu": ["x64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-linux-x64/-/sharp-linux-x64-0.33.5.tgz"
},
"@img/sharp-linuxmusl-arm64@0.33.5": {
"integrity": "sha512-XrHMZwGQGvJg2V/oRSUfSAfjfPxO+4DkiRh6p2AFjLQztWUuY/o8Mq0eMQVIY7HJ1CDQUJlxGGZRw1a5bqmd1g==",
"optionalDependencies": [
"@img/sharp-libvips-linuxmusl-arm64"
],
"os": ["linux"],
"cpu": ["arm64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-linuxmusl-arm64/-/sharp-linuxmusl-arm64-0.33.5.tgz"
},
"@img/sharp-linuxmusl-x64@0.33.5": {
"integrity": "sha512-WT+d/cgqKkkKySYmqoZ8y3pxx7lx9vVejxW/W4DOFMYVSkErR+w7mf2u8m/y4+xHe7yY9DAXQMWQhpnMuFfScw==",
"optionalDependencies": [
"@img/sharp-libvips-linuxmusl-x64"
],
"os": ["linux"],
"cpu": ["x64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-linuxmusl-x64/-/sharp-linuxmusl-x64-0.33.5.tgz"
},
"@img/sharp-wasm32@0.33.5": {
"integrity": "sha512-ykUW4LVGaMcU9lu9thv85CbRMAwfeadCJHRsg2GmeRa/cJxsVY9Rbd57JcMxBkKHag5U/x7TSBpScF4U8ElVzg==",
"dependencies": [
"@emnapi/runtime"
],
"cpu": ["wasm32"],
"tarball": "https://registry.npmmirror.com/@img/sharp-wasm32/-/sharp-wasm32-0.33.5.tgz"
},
"@img/sharp-win32-ia32@0.33.5": {
"integrity": "sha512-T36PblLaTwuVJ/zw/LaH0PdZkRz5rd3SmMHX8GSmR7vtNSP5Z6bQkExdSK7xGWyxLw4sUknBuugTelgw2faBbQ==",
"os": ["win32"],
"cpu": ["ia32"],
"tarball": "https://registry.npmmirror.com/@img/sharp-win32-ia32/-/sharp-win32-ia32-0.33.5.tgz"
},
"@img/sharp-win32-x64@0.33.5": {
"integrity": "sha512-MpY/o8/8kj+EcnxwvrP4aTJSWw/aZ7JIGR4aBeZkZw5B7/Jn+tY9/VNwtcoGmdT7GfggGIU4kygOMSbYnOrAbg==",
"os": ["win32"],
"cpu": ["x64"],
"tarball": "https://registry.npmmirror.com/@img/sharp-win32-x64/-/sharp-win32-x64-0.33.5.tgz"
},
"color-convert@2.0.1": {
"integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
"dependencies": [
"color-name"
],
"tarball": "https://registry.npmmirror.com/color-convert/-/color-convert-2.0.1.tgz"
},
"color-name@1.1.4": {
"integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
"tarball": "https://registry.npmmirror.com/color-name/-/color-name-1.1.4.tgz"
},
"color-string@1.9.1": {
"integrity": "sha512-shrVawQFojnZv6xM40anx4CkoDP+fZsw/ZerEMsW/pyzsRbElpsL/DBVW7q3ExxwusdNXI3lXpuhEZkzs8p5Eg==",
"dependencies": [
"color-name",
"simple-swizzle"
],
"tarball": "https://registry.npmmirror.com/color-string/-/color-string-1.9.1.tgz"
},
"color@4.2.3": {
"integrity": "sha512-1rXeuUUiGGrykh+CeBdu5Ie7OJwinCgQY0bc7GCRxy5xVHy+moaqkpL/jqQq0MtQOeYcrqEz4abc5f0KtU7W4A==",
"dependencies": [
"color-convert",
"color-string"
],
"tarball": "https://registry.npmmirror.com/color/-/color-4.2.3.tgz"
},
"detect-libc@2.1.2": {
"integrity": "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==",
"tarball": "https://registry.npmmirror.com/detect-libc/-/detect-libc-2.1.2.tgz"
},
"is-arrayish@0.3.4": {
"integrity": "sha512-m6UrgzFVUYawGBh1dUsWR5M2Clqic9RVXC/9f8ceNlv2IcO9j9J/z8UoCLPqtsPBFNzEpfR3xftohbfqDx8EQA==",
"tarball": "https://registry.npmmirror.com/is-arrayish/-/is-arrayish-0.3.4.tgz"
},
"semver@7.7.4": {
"integrity": "sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==",
"bin": true,
"tarball": "https://registry.npmmirror.com/semver/-/semver-7.7.4.tgz"
},
"sharp@0.33.5": {
"integrity": "sha512-haPVm1EkS9pgvHrQ/F3Xy+hgcuMV0Wm9vfIBSiwZ05k+xgb0PkBQpGsAA/oWdDobNaZTH5ppvHtzCFbnSEwHVw==",
"dependencies": [
"color",
"detect-libc",
"semver"
],
"optionalDependencies": [
"@img/sharp-darwin-arm64",
"@img/sharp-darwin-x64",
"@img/sharp-libvips-darwin-arm64",
"@img/sharp-libvips-darwin-x64",
"@img/sharp-libvips-linux-arm",
"@img/sharp-libvips-linux-arm64",
"@img/sharp-libvips-linux-s390x",
"@img/sharp-libvips-linux-x64",
"@img/sharp-libvips-linuxmusl-arm64",
"@img/sharp-libvips-linuxmusl-x64",
"@img/sharp-linux-arm",
"@img/sharp-linux-arm64",
"@img/sharp-linux-s390x",
"@img/sharp-linux-x64",
"@img/sharp-linuxmusl-arm64",
"@img/sharp-linuxmusl-x64",
"@img/sharp-wasm32",
"@img/sharp-win32-ia32",
"@img/sharp-win32-x64"
],
"scripts": true,
"tarball": "https://registry.npmmirror.com/sharp/-/sharp-0.33.5.tgz"
},
"simple-swizzle@0.2.4": {
"integrity": "sha512-nAu1WFPQSMNr2Zn9PGSZK9AGn4t/y97lEm+MXTtUDwfP0ksAIX4nO+6ruD9Jwut4C49SB1Ws+fbXsm/yScWOHw==",
"dependencies": [
"is-arrayish"
],
"tarball": "https://registry.npmmirror.com/simple-swizzle/-/simple-swizzle-0.2.4.tgz"
},
"tslib@2.8.1": {
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
"tarball": "https://registry.npmmirror.com/tslib/-/tslib-2.8.1.tgz"
}
},
"workspace": {
"dependencies": [
"jsr:@std/assert@1",
"jsr:@std/cli@1",
"jsr:@std/fs@1",
"jsr:@std/path@1",
"npm:sharp@0.33"
]
}
}


@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-04-11


@@ -0,0 +1,34 @@
## Context
A Deno-based TypeScript utility is needed to scale images to fixed dimensions. The solution must work in the Deno runtime environment and support common image formats.
## Goals / Non-Goals
**Goals:**
- Scale images to specified fixed dimensions
- Support PNG, JPEG, WebP, and GIF formats
- Preserve aspect ratio with configurable options
- Provide a simple CLI interface
- Run on Deno runtime
**Non-Goals:**
- Advanced image editing (filters, cropping, rotation)
- Server-side API or web interface
- Real-time image processing pipelines
## Decisions
| Decision | Choice | Rationale |
|----------|--------|-----------|
| Runtime | Deno | User specified TypeScript with Deno - built-in TypeScript support, modern security model |
| Image library | `deno-image` or similar Deno-compatible library | Native Deno support, no Node.js compatibility layer needed |
| CLI approach | Deno args parsing | Simple, no external dependencies for argument handling |
| Aspect ratio | Configurable (stretch or fit) | Flexibility for different use cases |
## Risks / Trade-offs
| Risk | Mitigation |
|------|------------|
| Limited Deno image processing libraries | Evaluate available options; fallback to WASM-based solutions if needed |
| Performance with large images | Add warnings for very large files; consider streaming for batch operations |
| Format compatibility | Test with common formats; document supported formats clearly |


@@ -0,0 +1,24 @@
## Why
Images need to be scaled to consistent dimensions for uniform display across the application. This change adds a Deno-based TypeScript utility to scale images to fixed sizes, ensuring visual consistency and optimized file sizes.
## What Changes
- New CLI utility to scale images to specified dimensions
- Support for common image formats (PNG, JPEG, WebP, GIF)
- Configurable output size with aspect ratio preservation options
- Batch processing capability for multiple images
## Capabilities
### New Capabilities
- `image-scaling`: Core capability to resize images to fixed dimensions using Deno
### Modified Capabilities
- None
## Impact
- New Deno-based image processing module
- Dependencies on image processing libraries compatible with Deno
- CLI interface for image scaling operations


@@ -0,0 +1,57 @@
## ADDED Requirements
### Requirement: Scale image to fixed dimensions
The system SHALL resize images to specified width and height dimensions while supporting configurable aspect ratio handling.
#### Scenario: Scale with exact dimensions (stretch)
- **WHEN** user provides width and height with stretch mode
- **THEN** image is resized to exact dimensions regardless of original aspect ratio
#### Scenario: Scale with fit mode (preserve aspect ratio)
- **WHEN** user provides width and height with fit mode
- **THEN** image is resized to fit within dimensions while preserving aspect ratio
#### Scenario: Scale with cover mode (preserve aspect ratio)
- **WHEN** user provides width and height with cover mode
- **THEN** image is resized to cover dimensions while preserving aspect ratio, cropping excess
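The three modes differ only in how the output size is derived from the requested box. A minimal sketch of that arithmetic (a hypothetical helper for illustration, not the CLI's actual code):

```typescript
// Hypothetical sketch of the size math behind the three scale modes.
type Mode = "stretch" | "fit" | "cover";

function targetSize(
  ow: number, // original width
  oh: number, // original height
  tw: number, // target box width
  th: number, // target box height
  mode: Mode,
): { width: number; height: number } {
  if (mode === "stretch") return { width: tw, height: th }; // ignore aspect ratio
  const ratio = mode === "fit"
    ? Math.min(tw / ow, th / oh) // largest scale that still fits inside the box
    : Math.max(tw / ow, th / oh); // smallest scale that still covers the box
  return { width: Math.round(ow * ratio), height: Math.round(oh * ratio) };
}

console.log(targetSize(200, 150, 100, 100, "fit")); // { width: 100, height: 75 }
console.log(targetSize(200, 150, 100, 100, "cover")); // { width: 133, height: 100 }
```

In cover mode the scaled image still needs a center crop down to the requested box to produce the exact target dimensions.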
### Requirement: Support multiple image formats
The system SHALL process PNG, JPEG, WebP, and GIF image formats for both input and output.
#### Scenario: Process PNG image
- **WHEN** user provides a PNG file as input
- **THEN** system successfully scales and outputs the image
#### Scenario: Process JPEG image
- **WHEN** user provides a JPEG file as input
- **THEN** system successfully scales and outputs the image
#### Scenario: Process WebP image
- **WHEN** user provides a WebP file as input
- **THEN** system successfully scales and outputs the image
#### Scenario: Process GIF image
- **WHEN** user provides a GIF file as input
- **THEN** system successfully scales and outputs the image
### Requirement: Batch processing
The system SHALL process multiple images in a single operation when provided with a directory or multiple file paths.
#### Scenario: Process directory of images
- **WHEN** user provides a directory path
- **THEN** all supported images in the directory are scaled
#### Scenario: Process multiple files
- **WHEN** user provides multiple file paths
- **THEN** all specified images are scaled
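"Supported images" here means selecting files by extension. A self-contained sketch of that check, with an assumed extension list mirroring the formats named above:

```typescript
// Sketch of the extension filter used to pick "supported images" in a batch;
// the extension list is assumed to mirror the formats in the requirements.
const SUPPORTED_EXTS = [".png", ".jpg", ".jpeg", ".webp", ".gif"];

function hasSupportedExt(path: string): boolean {
  const ext = path.toLowerCase().match(/\.[^.]+$/)?.[0];
  return ext !== undefined && SUPPORTED_EXTS.includes(ext);
}

console.log(hasSupportedExt("photos/cat.JPG")); // true (case-insensitive)
console.log(hasSupportedExt("notes.txt")); // false
```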
### Requirement: CLI interface
The system SHALL provide a command-line interface for specifying input, output, and scaling options.
#### Scenario: Scale single image
- **WHEN** user runs command with input file, output path, width, and height
- **THEN** scaled image is saved to output path
#### Scenario: Invalid input handling
- **WHEN** user provides non-existent file or unsupported format
- **THEN** system displays clear error message and exits with non-zero code


@@ -0,0 +1,35 @@
## 1. Project Setup
- [x] 1.1 Initialize Deno project with deno.json configuration
- [x] 1.2 Set up directory structure (src/, cli.ts)
- [x] 1.3 Add image processing dependency compatible with Deno
## 2. Core Image Scaling Implementation
- [x] 2.1 Create image loader module supporting PNG, JPEG, WebP, GIF
- [x] 2.2 Implement scale function with exact dimensions (stretch) mode
- [x] 2.3 Implement scale function with fit mode (preserve aspect ratio)
- [x] 2.4 Implement scale function with cover mode (preserve aspect ratio, crop)
- [x] 2.5 Create image saver module with format detection
## 3. Batch Processing
- [x] 3.1 Implement directory scanner for supported image formats
- [x] 3.2 Create batch processor with progress reporting
- [x] 3.3 Add error handling for failed images in batch
## 4. CLI Interface
- [x] 4.1 Create CLI argument parser (input, output, width, height, mode)
- [x] 4.2 Implement single image scaling command
- [x] 4.3 Implement batch scaling command (directory/multiple files)
- [x] 4.4 Add error messages and exit codes for invalid input
- [x] 4.5 Add help documentation (--help flag)
## 5. Testing & Validation
- [x] 5.1 Create test images for each supported format
- [x] 5.2 Write unit tests for scale functions
- [x] 5.3 Write integration tests for CLI commands
- [x] 5.4 Test batch processing with mixed formats
- [x] 5.5 Verify output dimensions match specifications


@@ -0,0 +1,20 @@
schema: spec-driven
# Project context (optional)
# This is shown to AI when creating artifacts.
# Add your tech stack, conventions, style guides, domain knowledge, etc.
# Example:
# context: |
# Tech stack: TypeScript, React, Node.js
# We use conventional commits
# Domain: e-commerce platform
# Per-artifact rules (optional)
# Add custom rules for specific artifacts.
# Example:
# rules:
# proposal:
# - Keep proposals under 500 words
# - Always include a "Non-goals" section
# tasks:
# - Break tasks into chunks of max 2 hours


@@ -0,0 +1,53 @@
// Script to generate test images for each supported format
import sharp from "sharp";
const TEST_DIR = "./test-images";
// Ensure test directory exists
await Deno.mkdir(TEST_DIR, { recursive: true });
// Create a simple test image (100x100 red square)
const testBuffer = await sharp({
create: {
width: 100,
height: 100,
channels: 3,
background: { r: 255, g: 0, b: 0 },
},
})
.png()
.toBuffer();
// Save in different formats
await sharp(testBuffer).toFile(`${TEST_DIR}/test-100x100.png`);
console.log("Created: test-100x100.png");
await sharp(testBuffer).jpeg({ quality: 85 }).toFile(`${TEST_DIR}/test-100x100.jpg`);
console.log("Created: test-100x100.jpg");
await sharp(testBuffer).webp({ quality: 85 }).toFile(`${TEST_DIR}/test-100x100.webp`);
console.log("Created: test-100x100.webp");
// Create a larger test image (200x150 blue rectangle)
const largeBuffer = await sharp({
create: {
width: 200,
height: 150,
channels: 3,
background: { r: 0, g: 0, b: 255 },
},
})
.png()
.toBuffer();
await sharp(largeBuffer).toFile(`${TEST_DIR}/test-200x150.png`);
console.log("Created: test-200x150.png");
await sharp(largeBuffer).jpeg({ quality: 85 }).toFile(`${TEST_DIR}/test-200x150.jpg`);
console.log("Created: test-200x150.jpg");
// Create a GIF test image
await sharp(largeBuffer).gif().toFile(`${TEST_DIR}/test-200x150.gif`);
console.log("Created: test-200x150.gif");
console.log("\nTest images created successfully!");


@@ -0,0 +1,94 @@
import { scaleImage, type ScaleMode } from "./scaler.ts";
export interface BatchOptions {
width: number;
height: number;
mode: ScaleMode;
outputDir?: string;
}
export interface BatchResult {
total: number;
successful: number;
failed: number;
results: Array<{
input: string;
output?: string;
success: boolean;
error?: string;
}>;
}
/**
* Process multiple images with progress reporting
*/
export async function processBatch(
inputPaths: string[],
options: BatchOptions
): Promise<BatchResult> {
const { width, height, mode, outputDir } = options;
const results: BatchResult = {
total: inputPaths.length,
successful: 0,
failed: 0,
results: [],
};
console.log(`Processing ${inputPaths.length} image(s)...`);
console.log(`Mode: ${mode}, Size: ${width}x${height}`);
if (outputDir) {
console.log(`Output directory: ${outputDir}`);
}
console.log();
for (let i = 0; i < inputPaths.length; i++) {
const inputPath = inputPaths[i];
const current = i + 1;
// Progress indicator
const progress = `[${current}/${inputPaths.length}]`;
Deno.stdout.writeSync(new TextEncoder().encode(`${progress} Processing: ${inputPath} ... `));
try {
// Determine output path
let outputPath: string | undefined;
if (outputDir) {
// Ensure output directory exists
await Deno.mkdir(outputDir, { recursive: true });
const fileName = inputPath.split("/").pop() || inputPath.split("\\").pop() || "image";
outputPath = `${outputDir}/${fileName}`;
}
const result = await scaleImage(inputPath, {
width,
height,
mode,
output: outputPath,
});
results.results.push({
input: inputPath,
output: result.output,
success: true,
});
results.successful++;
Deno.stdout.writeSync(new TextEncoder().encode(`${result.newWidth}x${result.newHeight}\n`));
} catch (error) {
const errorMessage = error instanceof Error ? error.message : "Unknown error";
results.results.push({
input: inputPath,
success: false,
error: errorMessage,
});
results.failed++;
Deno.stdout.writeSync(new TextEncoder().encode(`${errorMessage}\n`));
}
}
console.log();
console.log(`Batch complete: ${results.successful} succeeded, ${results.failed} failed`);
return results;
}


@@ -0,0 +1,67 @@
import sharp from "sharp";
const SUPPORTED_FORMATS = [".png", ".jpg", ".jpeg", ".webp", ".gif"];
/**
* Check if a file extension is supported
*/
export function isSupportedFormat(filePath: string): boolean {
const ext = filePath.toLowerCase().match(/\.[^.]+$/)?.[0];
return ext ? SUPPORTED_FORMATS.includes(ext) : false;
}
/**
* Get the format from a file path
*/
export function getFormat(filePath: string): string {
const ext = filePath.toLowerCase().match(/\.[^.]+$/)?.[0];
if (!ext) {
throw new Error(`Unable to determine format from file: ${filePath}`);
}
// Remove leading dot and normalize
const format = ext.slice(1);
if (format === "jpg") return "jpeg";
return format;
}
/**
* Load an image and return sharp instance
*/
export async function loadImage(filePath: string): Promise<sharp.Sharp> {
try {
const stat = await Deno.stat(filePath);
if (!stat.isFile) {
throw new Error(`Not a file: ${filePath}`);
}
if (!isSupportedFormat(filePath)) {
const ext = filePath.match(/\.[^.]+$/)?.[0] || "unknown";
throw new Error(`Unsupported format: ${ext}. Supported: ${SUPPORTED_FORMATS.join(", ")}`);
}
return sharp(filePath);
} catch (error) {
if (error instanceof Deno.errors.NotFound) {
throw new Error(`File not found: ${filePath}`);
}
throw error;
}
}
/**
* Get image metadata
*/
export async function getImageMetadata(filePath: string): Promise<{ width: number; height: number; format: string }> {
const image = await loadImage(filePath);
const metadata = await image.metadata();
if (!metadata.width || !metadata.height) {
throw new Error(`Unable to get dimensions for: ${filePath}`);
}
return {
width: metadata.width,
height: metadata.height,
format: metadata.format || "unknown",
};
}


@@ -0,0 +1,6 @@
// Module exports
export { scaleImage, type ScaleOptions, type ScaleMode, type ScaleResult } from "./scaler.ts";
export { scanDirectory, getSupportedExtensions } from "./scanner.ts";
export { processBatch, type BatchOptions, type BatchResult } from "./batch.ts";
export { loadImage, isSupportedFormat, getFormat, getImageMetadata } from "./loader.ts";
export { saveImage, determineOutputPath, type SaveOptions } from "./saver.ts";


@@ -0,0 +1,55 @@
import type sharp from "sharp";
import { getFormat } from "./loader.ts";
export interface SaveOptions {
quality?: number;
compressionLevel?: number;
}
/**
* Save a sharp instance to a file with format-specific options
*/
export async function saveImage(
image: sharp.Sharp,
outputPath: string,
options: SaveOptions = {}
): Promise<string> {
const { quality = 85, compressionLevel = 6 } = options;
const format = getFormat(outputPath);
let processed: sharp.Sharp;
switch (format) {
case "jpeg":
processed = image.jpeg({ quality });
break;
case "png":
processed = image.png({ compressionLevel });
break;
case "webp":
processed = image.webp({ quality });
break;
case "gif":
processed = image.gif();
break;
default:
throw new Error(`Unsupported output format: ${format}`);
}
await processed.toFile(outputPath);
return outputPath;
}
/**
* Determine output path based on input and options
*/
export function determineOutputPath(
inputPath: string,
explicitOutput?: string
): string {
if (explicitOutput) {
return explicitOutput;
}
// Default: add -scaled suffix before extension
return inputPath.replace(/\.[^.]+$/, `-scaled${inputPath.match(/\.[^.]+$/)?.[0] || ""}`);
}


@@ -0,0 +1,148 @@
import sharp from "sharp";
import { loadImage, getFormat } from "./loader.ts";
export type ScaleMode = "stretch" | "fit" | "cover";
export interface ScaleOptions {
width: number;
height: number;
mode: ScaleMode;
output?: string;
}
export interface ScaleResult {
input: string;
output: string;
originalWidth: number;
originalHeight: number;
newWidth: number;
newHeight: number;
}
/**
* Calculate dimensions for fit mode (preserve aspect ratio, fit within bounds)
*/
function calculateFitDimensions(
originalWidth: number,
originalHeight: number,
targetWidth: number,
targetHeight: number
): { width: number; height: number } {
const widthRatio = targetWidth / originalWidth;
const heightRatio = targetHeight / originalHeight;
const ratio = Math.min(widthRatio, heightRatio);
return {
width: Math.round(originalWidth * ratio),
height: Math.round(originalHeight * ratio),
};
}
/**
* Calculate dimensions for cover mode (preserve aspect ratio, cover bounds)
*/
function calculateCoverDimensions(
originalWidth: number,
originalHeight: number,
targetWidth: number,
targetHeight: number
): { width: number; height: number } {
const widthRatio = targetWidth / originalWidth;
const heightRatio = targetHeight / originalHeight;
const ratio = Math.max(widthRatio, heightRatio);
return {
width: Math.round(originalWidth * ratio),
height: Math.round(originalHeight * ratio),
};
}
/**
* Scale an image to specified dimensions
*/
export async function scaleImage(
inputPath: string,
options: ScaleOptions
): Promise<ScaleResult> {
const { width, height, mode, output } = options;
const image = await loadImage(inputPath);
const metadata = await image.metadata();
if (!metadata.width || !metadata.height) {
throw new Error(`Unable to get image dimensions: ${inputPath}`);
}
const originalWidth = metadata.width;
const originalHeight = metadata.height;
const format = getFormat(inputPath);
let processed: sharp.Sharp;
switch (mode) {
case "stretch":
// Resize to exact dimensions (may distort)
processed = image.resize(width, height, {
fit: "fill",
});
break;
case "fit": {
// Fit within dimensions while preserving aspect ratio
const fitDims = calculateFitDimensions(originalWidth, originalHeight, width, height);
processed = image.resize(fitDims.width, fitDims.height, {
fit: "contain",
background: { r: 255, g: 255, b: 255, alpha: 1 },
});
break;
}
case "cover":
// Resize straight to the target box; sharp's fit: "cover" preserves
// aspect ratio and crops the excess, so the output is exactly width x height
processed = image.resize(width, height, {
fit: "cover",
});
break;
}
// Determine output path
const outputPath = output || inputPath.replace(/\.[^.]+$/, `-scaled.${format}`);
// Get output format
const outputFormat = getFormat(outputPath);
// Apply format-specific output
switch (outputFormat) {
case "jpeg":
processed = processed.jpeg({ quality: 85 });
break;
case "png":
processed = processed.png({ compressionLevel: 6 });
break;
case "webp":
processed = processed.webp({ quality: 85 });
break;
case "gif":
processed = processed.gif();
break;
}
// Write the output file
await processed.toFile(outputPath);
// Get actual output dimensions
const outputImage = await sharp(outputPath).metadata();
const newWidth = outputImage.width || width;
const newHeight = outputImage.height || height;
return {
input: inputPath,
output: outputPath,
originalWidth,
originalHeight,
newWidth,
newHeight,
};
}


@@ -0,0 +1,103 @@
import { assertEquals, assertExists } from "@std/assert";
import { scaleImage } from "./scaler.ts";
import { isSupportedFormat, getFormat } from "./loader.ts";
const TEST_IMAGES = {
png: "./test-images/test-100x100.png",
jpg: "./test-images/test-100x100.jpg",
webp: "./test-images/test-100x100.webp",
gif: "./test-images/test-200x150.gif",
};
Deno.test("isSupportedFormat - should return true for supported formats", () => {
assertEquals(isSupportedFormat("image.png"), true);
assertEquals(isSupportedFormat("image.jpg"), true);
assertEquals(isSupportedFormat("image.jpeg"), true);
assertEquals(isSupportedFormat("image.webp"), true);
assertEquals(isSupportedFormat("image.gif"), true);
assertEquals(isSupportedFormat("IMAGE.PNG"), true);
});
Deno.test("isSupportedFormat - should return false for unsupported formats", () => {
assertEquals(isSupportedFormat("image.bmp"), false);
assertEquals(isSupportedFormat("image.tiff"), false);
assertEquals(isSupportedFormat("image.txt"), false);
assertEquals(isSupportedFormat("image"), false);
});
Deno.test("getFormat - should extract format from file path", () => {
assertEquals(getFormat("image.png"), "png");
assertEquals(getFormat("image.jpg"), "jpeg");
assertEquals(getFormat("image.jpeg"), "jpeg");
assertEquals(getFormat("image.webp"), "webp");
assertEquals(getFormat("image.gif"), "gif");
});
Deno.test("getFormat - should throw for files without extension", () => {
// A try/catch that throws its own "should have thrown" error would catch
// that same error and pass vacuously; track the throw with a flag instead.
let threw = false;
try {
getFormat("image");
} catch {
threw = true;
}
assertEquals(threw, true, "getFormat should throw when the path has no extension");
});
Deno.test("scaleImage - stretch mode should resize to exact dimensions", async () => {
const result = await scaleImage(TEST_IMAGES.png, {
width: 50,
height: 75,
mode: "stretch",
output: "./test-images/output-stretch.png",
});
assertEquals(result.originalWidth, 100);
assertEquals(result.originalHeight, 100);
assertEquals(result.newWidth, 50);
assertEquals(result.newHeight, 75);
});
Deno.test("scaleImage - fit mode should preserve aspect ratio", async () => {
const result = await scaleImage(TEST_IMAGES.gif, {
width: 100,
height: 100,
mode: "fit",
output: "./test-images/output-fit.png",
});
// Original is 200x150, fitting in 100x100 should result in 100x75
assertEquals(result.originalWidth, 200);
assertEquals(result.originalHeight, 150);
assertEquals(result.newWidth, 100);
assertEquals(result.newHeight, 75);
});
Deno.test("scaleImage - cover mode should preserve aspect ratio and crop", async () => {
const result = await scaleImage(TEST_IMAGES.png, {
width: 50,
height: 50,
mode: "cover",
output: "./test-images/output-cover.png",
});
// Original is 100x100, covering 50x50 should result in 50x50
assertEquals(result.originalWidth, 100);
assertEquals(result.originalHeight, 100);
assertEquals(result.newWidth, 50);
assertEquals(result.newHeight, 50);
});
Deno.test("scaleImage - should handle different input formats", async () => {
const formats = ["png", "jpg", "webp"] as const;
for (const format of formats) {
const result = await scaleImage(TEST_IMAGES[format], {
width: 50,
height: 50,
mode: "fit",
output: `./test-images/output-${format}.png`,
});
assertExists(result);
assertEquals(result.newWidth, 50);
}
});

View File

@@ -0,0 +1,47 @@
import { walk } from "@std/fs/walk";
import { isSupportedFormat } from "./loader.ts";
/**
* Scan a directory for supported image files
*/
export async function scanDirectory(
dirPath: string,
options: { recursive?: boolean } = {}
): Promise<string[]> {
const { recursive = false } = options;
const images: string[] = [];
try {
const dirStat = await Deno.stat(dirPath);
if (!dirStat.isDirectory) {
throw new Error(`Not a directory: ${dirPath}`);
}
} catch (error) {
if (error instanceof Deno.errors.NotFound) {
throw new Error(`Directory not found: ${dirPath}`);
}
throw error;
}
for await (
const entry of walk(dirPath, {
includeDirs: false,
includeFiles: true,
maxDepth: recursive ? 10 : 1,
// Reuse getSupportedExtensions() so the extension list lives in one place.
exts: getSupportedExtensions(),
})
) {
if (entry.isFile && isSupportedFormat(entry.path)) {
images.push(entry.path);
}
}
return images.sort();
}
/**
* Get supported image extensions
*/
export function getSupportedExtensions(): string[] {
return [".png", ".jpg", ".jpeg", ".webp", ".gif"];
}

View File

@@ -0,0 +1,37 @@
import { assert } from "@std/assert";
import { getSupportedExtensions, scanDirectory } from "./scanner.ts";
const TEST_DIR = "./test-images";
Deno.test("scanDirectory - should find all supported images", async () => {
const images = await scanDirectory(TEST_DIR);
assert(images.length >= 6, `Expected at least 6 images, found ${images.length}`);
// Check that all returned paths are in the test directory
for (const path of images) {
assert(path.includes("test-images"), `Path should be in test-images: ${path}`);
}
});
Deno.test("scanDirectory - should handle non-existent directory", async () => {
try {
await scanDirectory("./non-existent-dir");
throw new Error("scanDirectory should have rejected");
} catch (e) {
// assert() narrows the type, so the nested instanceof check is unneeded.
assert(e instanceof Error, "expected an Error instance");
assert(e.message.includes("Directory not found"), `Unexpected error: ${e.message}`);
}
});
Deno.test("getSupportedExtensions - should return correct extensions", () => {
const exts = getSupportedExtensions();
assert(exts.includes(".png"));
assert(exts.includes(".jpg"));
assert(exts.includes(".jpeg"));
assert(exts.includes(".webp"));
assert(exts.includes(".gif"));
});

View File

@@ -0,0 +1,52 @@
// Verify output dimensions match specifications
import { assertEquals } from "@std/assert";
import { getImageMetadata } from "../src/loader.ts";
const RESIZED_DIR = "./test-images/resized";
// Test that 100x100 images scaled to 80x80 with fit mode result in 80x80
Deno.test("Verify dimensions - square images with fit mode", async () => {
const files = [
"test-100x100.png",
"test-100x100.jpg",
"test-100x100.webp",
];
for (const file of files) {
const path = `${RESIZED_DIR}/${file}`;
const metadata = await getImageMetadata(path);
assertEquals(metadata.width, 80, `${file} width should be 80`);
assertEquals(metadata.height, 80, `${file} height should be 80`);
}
});
// Test that 200x150 images scaled to 80x80 with fit mode result in 80x60 (preserving aspect ratio)
Deno.test("Verify dimensions - rectangular images with fit mode", async () => {
const files = [
"test-200x150.png",
"test-200x150.jpg",
"test-200x150.gif",
];
for (const file of files) {
const path = `${RESIZED_DIR}/${file}`;
const metadata = await getImageMetadata(path);
// 200x150 with fit in 80x80: ratio = min(80/200, 80/150) = 0.4
// new width = 200 * 0.4 = 80, new height = 150 * 0.4 = 60
assertEquals(metadata.width, 80, `${file} width should be 80`);
assertEquals(metadata.height, 60, `${file} height should be 60`);
}
});
// Test stretch mode - exact dimensions regardless of aspect ratio
Deno.test("Verify dimensions - stretch mode", async () => {
const path = `${RESIZED_DIR}/output-stretch.png`;
const metadata = await getImageMetadata(path);
// Original was scaled with stretch to 50x75, then batch resized to fit 80x80
// The 50x75 image has ratio 0.667, fitting in 80x80: min(80/50, 80/75) = 1.067
// new = 50*1.067 x 75*1.067 = 53x80
assertEquals(metadata.width, 53);
assertEquals(metadata.height, 80);
});
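The fit-mode arithmetic worked through in the comments above (scale by the smaller axis ratio) can be captured as a standalone helper. A sketch under the assumption that dimensions round to the nearest pixel (`fitDimensions` is a hypothetical name, not part of the CLI source):

```typescript
// Hypothetical helper mirroring the fit-mode math verified above:
// scale by min(maxW/srcW, maxH/srcH), preserving aspect ratio.
function fitDimensions(
  srcW: number,
  srcH: number,
  maxW: number,
  maxH: number,
): { width: number; height: number } {
  const ratio = Math.min(maxW / srcW, maxH / srcH);
  return {
    width: Math.round(srcW * ratio),
    height: Math.round(srcH * ratio),
  };
}

// 200x150 into 80x80: ratio = min(0.4, 0.533...) = 0.4, giving 80x60
// 50x75  into 80x80: ratio = min(1.6, 1.066...) = 1.066..., giving 53x80
```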

Binary file not shown.

After

Width:  |  Height:  |  Size: 158 B
