AI Coding · 14 min read · April 22, 2026

Claude Code Skills: The 2026 Guide to Writing and Using Them

Skills are Claude Code's most underrated feature. The six bundled skills you should know, the mental model, and a custom skill you can install in 60 seconds.

Alex Rivera
Senior AI Tools Analyst

The best Claude Code tip in circulation reduces to one sentence: if you do something more than once a day, turn it into a skill. Skills are Claude Code's most underrated feature. They replace the reflex of pasting the same multi-step procedure into chat, they keep your `CLAUDE.md` lean, and once you understand how they work, they become the primary way you extend what Claude can do.


Despite this, most developers using Claude Code in 2026 have never written a skill. Part of the reason is that the existing documentation focuses on the frontmatter syntax instead of the mental model. The question everyone actually has — *when should something be a skill versus a line in `CLAUDE.md`?* — is left implicit. The killer features (`/batch`, `context: fork`, shell command injection) rarely surface in guides aimed at beginners.

This article fills that gap. You will leave with a clear rule for when to reach for a skill, a tour of the six bundled skills that ship with Claude Code (including `/batch`, which is the single biggest productivity multiplier of 2026), and a custom skill you can drop into `~/.claude/skills/` today. If you already know the basics, skim to the bundled skills section — that is where the surprises are.

What is a Claude Code skill, exactly?

A skill is a directory with a `SKILL.md` file inside. The file has YAML frontmatter (metadata) and markdown content (the actual instructions). When the skill is invoked — either by you typing `/skill-name` or by Claude deciding to load it based on the `description` field — the markdown content enters your conversation as a single message and stays there for the rest of the session.

That one-sentence definition hides the most important distinction: a skill loads only when invoked. Content in `CLAUDE.md` is loaded at the start of every session, consuming context tokens permanently. A skill's body consumes zero tokens until you use it. This is what makes skills cheap — you can write long reference material, detailed playbooks, or multi-step procedures without paying for them unless you need them.

Tip

The rule of thumb: facts and conventions that apply to every task live in CLAUDE.md. Procedures and playbooks that apply to specific situations live in skills. If a section of your CLAUDE.md describes sequential steps (first do X, then Y, then Z), migrate it to a skill.
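As a concrete illustration of that migration, here is what a "When deploying" section pulled out of `CLAUDE.md` might look like as a skill. The name `deploy-staging` and the steps are placeholders for your own procedure, not anything from the official docs — note the `disable-model-invocation` flag, since deploys have side effects:

```yaml
---
name: deploy-staging
description: Deploy the current branch to staging. Use when the user explicitly asks to deploy or ship to staging.
disable-model-invocation: true
---

1. Run the test suite and stop if anything fails
2. Build the production bundle
3. Push the build to the staging environment
4. Smoke-test the staging URL and report the result
```

The procedure now costs zero context tokens until you actually type `/deploy-staging`.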

Skills follow the open Agent Skills standard, which means the same skill works across multiple AI tools. Claude Code extends the standard with additional features: controlled invocation, subagent execution, and dynamic context injection. We cover all three below.

Anatomy of a SKILL.md file

Here is the minimal skill, straight from the official docs. It teaches Claude to explain code with analogies and ASCII diagrams:

```yaml
---
name: explain-code
description: Explains code with visual diagrams and analogies. Use when explaining how code works, teaching about a codebase, or when the user asks "how does this work?"
---

When explaining code, always include:

1. **Start with an analogy**: Compare the code to something from everyday life
2. **Draw a diagram**: Use ASCII art to show the flow, structure, or relationships
3. **Walk through the code**: Explain step-by-step what happens
4. **Highlight a gotcha**: What's a common mistake or misconception?

Keep explanations conversational. For complex concepts, use multiple analogies.
```

Save this as `~/.claude/skills/explain-code/SKILL.md`. You can now invoke it two ways. Type `/explain-code src/auth/login.ts` and Claude runs the skill explicitly. Or ask a question like 'how does this code work?' — Claude reads the `description` field, sees it matches, and loads the skill automatically. Either path produces the same behavior.

The frontmatter is where skills get interesting. Only `description` is essential — everything else is optional. But the optional fields are where the power lives.

The frontmatter fields that actually change behavior

  • `description` + `when_to_use` — together, they determine whether Claude auto-invokes the skill. Think of them as the skill's discoverability metadata. Front-load the key use case: Claude only sees the first 1,536 characters of this combined text in the skill listing.
  • `disable-model-invocation: true` — only you can invoke the skill with `/name`. Claude will not trigger it automatically. Use this for skills with side effects (deploy, commit, send-slack-message).
  • `user-invocable: false` — Claude can invoke the skill but it is hidden from your `/` menu. Use for background knowledge like `legacy-system-context` that is not a meaningful user action.
  • `allowed-tools: Bash(git add *) Bash(git commit *)` — while the skill is active, these tools are pre-approved. You stop getting permission prompts for routine operations within that skill's workflow.
  • `model: haiku-4-5` — override the model for this skill. A research skill on Haiku is fast and cheap. A critical-logic skill on Opus 4.7 gets the strongest reasoning. The override applies only for the turn.
  • `paths: "src/api/**/*.ts"` — glob patterns that make the skill auto-activate only when Claude is working on matching files. Great for framework-specific or domain-specific skills.
  • `context: fork` + `agent: Explore` — the killer feature. Covered in its own section below.
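To see several of these fields working together, here is a sketch of a manual-only release-notes skill. The name, commands, and wording are illustrative, not taken from the docs:

```yaml
---
name: release-notes
description: Draft release notes from recent commits
disable-model-invocation: true
allowed-tools: Bash(git log *) Bash(git tag *)
model: haiku-4-5
---

Draft release notes from `git log` since the last tag.
Group changes by type: features, fixes, internal.
Keep each entry to one line.
```

Manual-only invocation, pre-approved read-only git commands, and a cheap model — a common combination for low-stakes utility skills.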

Dynamic arguments with $ARGUMENTS and named params

Skills accept arguments passed on invocation. The simplest pattern: put `$ARGUMENTS` in your skill content and it gets replaced with whatever the user typed after the skill name. `/fix-issue 123` makes `$ARGUMENTS` expand to `123`.

For positional arguments, use `$ARGUMENTS[N]` or the shorthand `$N`. For named arguments, declare them in the frontmatter `arguments:` list. Running `/migrate-component SearchBar React Vue` on the skill below expands `$0` to `SearchBar`, `$1` to `React`, `$2` to `Vue`:

```yaml
---
name: migrate-component
description: Migrate a component from one framework to another
---

Migrate the $0 component from $1 to $2.
Preserve all existing behavior and tests.
```

Two built-in substitutions are especially useful. `${CLAUDE_SESSION_ID}` gives you the current session ID — perfect for logs that need to correlate with sessions. `${CLAUDE_SKILL_DIR}` returns the directory containing the SKILL.md, which you use to reference scripts bundled with your skill regardless of the current working directory.
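A sketch of both substitutions in one skill — `lint-report` and the bundled script path are hypothetical examples, not bundled skills:

```yaml
---
name: lint-report
description: Run the bundled lint script and summarize the results
---

Run `bash ${CLAUDE_SKILL_DIR}/scripts/lint-report.sh` and summarize the output.
Mention session ${CLAUDE_SESSION_ID} in the summary so the report can be
correlated with the session transcript later.
```

Because `${CLAUDE_SKILL_DIR}` resolves to the skill's own directory, the script is found no matter which project directory you invoke the skill from.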

The six bundled skills everyone should know

Claude Code ships with a growing set of bundled skills. They are marked Skill in the commands reference. Unlike hardcoded commands like `/clear` or `/compact`, bundled skills use the same prompt-based mechanism as skills you write yourself — which means you can read their prompts, learn from them, and imitate the patterns.

/batch — the killer feature

`/batch` is the feature that most justifies Claude Code's existence. Give it a large-scale change and it orchestrates the work in parallel: it researches the codebase, decomposes the job into 5 to 30 independent units, presents a plan for your approval, then spawns one background agent per unit in an isolated git worktree. Each agent implements its unit, runs tests, and opens a pull request. You review the PRs, merge the good ones, reject or retry the broken ones.

```bash
# Concrete example from the docs
/batch migrate src/ from Solid to React

# Other jobs that fit the pattern
/batch add TypeScript strict mode file by file
/batch add tests for every untested file in src/utils/
/batch rename all `fetchUser` calls to `getUser` and update callsites
```

Requirements: a git repository (otherwise there is nothing to worktree). The cost: significant — you are running dozens of parallel Claude Code sessions. Budget accordingly. On Pro, a single `/batch` run can consume a meaningful chunk of your monthly quota. If you plan to use `/batch` more than once a week, Max ($100/month) pays for itself.

Why this is the killer feature: the bottleneck in any large migration has always been human attention, not keyboard speed. `/batch` parallelizes the attention. You review a dozen PRs instead of writing a dozen migrations. For most engineering teams, this is the difference between 'we will do this next quarter' and 'we did this today.'

/simplify — parallel review and fix

`/simplify` spawns three review agents in parallel on your recently changed files, aggregates their findings, and applies fixes. You can focus the review with an argument: `/simplify focus on memory efficiency` or `/simplify focus on error handling`. Run it before opening a PR — it catches the things a self-review misses because you wrote the code.

/debug — structured troubleshooting

Debug logging in Claude Code is off by default unless you start with `claude --debug`. `/debug` enables it mid-session and walks through the session debug log to analyze an issue. Optionally pass a description: `/debug the deploy hook is firing twice`. It reads the log, identifies the anomaly, and proposes a fix.

/loop — self-paced repetition

`/loop [interval] [prompt]` runs a prompt repeatedly while the session stays open. With no interval, Claude self-paces between iterations. With no prompt, Claude runs an autonomous maintenance check (or the prompt in `.claude/loop.md` if present). Canonical use: `/loop 5m check if the deploy finished`. This is how you poll without leaving the session and without burning a subagent.

/claude-api — auto-active on anthropic imports

`/claude-api` loads Claude API reference material for your project's language (Python, TypeScript, Java, Go, Ruby, C#, PHP, or cURL), plus the Managed Agents reference. It covers tool use, streaming, batches, structured outputs, and common pitfalls. The notable part: it activates automatically when Claude detects your code imports `anthropic` or `@anthropic-ai/sdk`. You do not have to remember to invoke it.

/fewer-permission-prompts — auto-build your allowlist

Permission prompts are the tax you pay for Claude Code's safety model. `/fewer-permission-prompts` scans your transcripts for common read-only Bash and MCP tool calls you have approved, then adds a prioritized allowlist to your project's `.claude/settings.json`. After running it once, your daily sessions involve noticeably fewer approval dialogs. This is also a good template for understanding how skills can introspect session state.
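The result in `.claude/settings.json` looks roughly like this — the specific entries are made-up examples; yours will reflect whatever you actually approved in your own transcripts:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(git log *)",
      "Bash(npm run lint)",
      "mcp__github__get_pull_request"
    ]
  }
}
```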

Dynamic context injection with !command

Here is the feature that separates beginners from power users. You can run shell commands before the skill content is sent to Claude. The output replaces a placeholder. Claude receives the actual data, not the command.

The pattern is an exclamation mark followed by a backtick-wrapped command in your SKILL.md. A skill that summarizes a pull request might look like this:

```yaml
---
name: pr-summary
description: Summarize the current pull request
context: fork
agent: Explore
allowed-tools: Bash(gh *)
---

## Pull request context
- Diff: !`gh pr diff`
- Comments: !`gh pr view --comments`
- Changed files: !`gh pr diff --name-only`

## Your task
Summarize this pull request. Call out: the intent of the change, risky areas, any tests that might be missing.
```

When you invoke `/pr-summary`, three `gh` commands run immediately. Their output gets inlined into the skill content. Claude receives a fully-rendered prompt with the actual diff, the actual comments, and the actual file list — no tool calls needed to go fetch them. This makes the skill fast and deterministic.

For multi-line commands, use a fenced code block opened with the backtick-bang-backtick pattern. The commands run sequentially and their combined output replaces the block. Organizations can disable shell injection via `"disableSkillShellExecution": true` in managed settings — useful for security-sensitive deployments.
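For reference, the managed-settings entry mentioned above would be a fragment along these lines (the file location varies by platform and deployment method):

```json
{
  "disableSkillShellExecution": true
}
```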

context: fork — turn a skill into a subagent

Set `context: fork` in the frontmatter and your skill stops running in your main conversation. It spawns a fresh subagent with an isolated context window, gets the skill body as its prompt, and returns a summary to your main session. The `agent` field picks which subagent configuration drives it — `Explore` for read-only research, `Plan` for plan mode, `general-purpose` for everything else, or any custom subagent from `.claude/agents/`.

```yaml
---
name: deep-research
description: Research a topic thoroughly
context: fork
agent: Explore
---

Research $ARGUMENTS thoroughly:

1. Find relevant files using Glob and Grep
2. Read and analyze the code
3. Summarize findings with specific file references
```

Why this matters: your main conversation's context window stays clean. The skill does the grepping, reading, and analyzing in its own session. You only see the summary. For research-heavy tasks, `context: fork` dramatically reduces the noise in your main conversation.

Warning

Gotcha: `context: fork` only makes sense for skills with actionable instructions. If your skill just contains conventions like 'use these API patterns' without a task, the subagent receives the guidelines but no actionable prompt and returns with no meaningful output. Always include a task, not just reference material.

Where skills live — the four scopes

Skills live in one of four locations, each with a different scope. Higher-priority locations win when skills share the same name.

Skill scopes

  • Enterprise (deployed via managed settings) — organization-wide. Takes precedence over everything else.
  • Personal (`~/.claude/skills/<skill-name>/SKILL.md`) — available in all your projects.
  • Project (`.claude/skills/<skill-name>/SKILL.md`) — scoped to one project. Check it into git and everyone on the team gets it.
  • Plugin (`<plugin>/skills/<skill-name>/SKILL.md`) — namespaced as `plugin-name:skill-name`, cannot conflict with other levels.

Claude Code watches these directories for changes. Adding, editing, or removing a skill takes effect within the current session without restart. The one exception: creating a top-level `skills/` directory that did not exist at session start requires a restart so the directory can be watched.
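Creating a project-scoped skill is just a directory and a file. A minimal sketch — `review-checklist` and its contents are hypothetical placeholders for your own team conventions:

```shell
# Create a project-scoped skill; once committed, every teammate
# gets it in their next session.
mkdir -p .claude/skills/review-checklist

cat > .claude/skills/review-checklist/SKILL.md <<'EOF'
---
name: review-checklist
description: Run the team's code review checklist on recent changes.
---

Review the recent diff against the team checklist:
1. Naming follows the project conventions
2. New code paths have tests
3. No stray debug logging
EOF
```

Commit the file with git and, because the skills directories are watched, it becomes available in the current session without a restart.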

Best public skills to install today

The Claude Code ecosystem is young but growing fast. The canonical starting point is hesreallyhim/awesome-claude-code — a curated list of skills, hooks, slash commands, agent orchestrators, and plugins. Clone a few, read their SKILL.md files, adapt them to your workflow.

A second valuable resource is FlorianBruniaux/claude-code-ultimate-guide, which ships production-ready templates for Claude Code features along with a cheatsheet and quizzes. If you want to see what an opinionated Claude Code setup looks like from someone using it daily, check shanraisshan/claude-code-best-practice.

A custom skill you can install in 60 seconds

To make this article practical, here is a skill we actually use. It enforces a 'verify before shipping' pattern — giving Claude a checklist to run through before declaring a task done. Drop it into `~/.claude/skills/verify-done/SKILL.md`:

```yaml
---
name: verify-done
description: Verification checklist before marking a task done. Use when completing a feature, closing a ticket, or preparing a PR. Runs tests, type checks, and lint, then surfaces anything suspicious.
allowed-tools: Bash(npm *) Bash(pnpm *) Bash(yarn *) Bash(git *) Read Grep
---

Before declaring this task done, run through the checklist in order. Stop on the first failure and surface it.

## 1. Tests pass
Run the project's test command (check package.json scripts). Report pass/fail counts.

## 2. Types check
Run the TypeScript compiler with `tsc --noEmit` or the project's type-check script. Surface any errors.

## 3. Lint passes
Run the project's lint command. If autofix is available, propose running it.

## 4. Git status is clean intent
Run `git status` and check for:
- Unrelated modified files (user may have forgotten to revert)
- New files not added to git (user may have forgotten to commit)
- Large binaries or secrets that should not be committed

## 5. Recent changes review
Run `git diff HEAD` and review the actual changes. Flag anything that looks:
- Out of scope for the stated task
- Left as debug logging
- A shortcut that bypasses a check (e.g., hardcoded test values, skipped tests)

## 6. Verdict
After running all checks, give a clear verdict:
- **SHIP IT** — all checks pass, no concerns
- **FIX FIRST** — list what to fix before shipping
- **INVESTIGATE** — list things that need human judgment before shipping
```

Invoke it with `/verify-done` at the end of any task. Claude runs the full checklist, surfaces failures, and either clears you to ship or tells you what needs fixing. This is the simplest possible implementation of the single most impactful tip in Claude Code usage: *give Claude a way to verify its own output*.

Note

Customize the commands for your stack. If you use Jest, pytest, cargo test, or something else, edit the step text. The `allowed-tools` line pre-approves the common package managers so you stop getting permission prompts for routine test runs.

The anti-patterns to avoid

Skills fail in predictable ways. These are the five mistakes we see most often.

Skill anti-patterns

  • Skills that never trigger. The description is too vague or missing the keywords users actually type. Fix: front-load the use case, include trigger phrases in `when_to_use`, and test by asking the kind of question the skill should handle.
  • Skills that trigger too often. The description is too broad. Fix: make it more specific, or set `disable-model-invocation: true` for manual-only invocation.
  • Multi-step procedures in CLAUDE.md. A section titled 'When deploying' that contains 10 numbered steps is not a rule — it is a skill. Move it. Your CLAUDE.md gets shorter, adherence improves, and the procedure only loads when you deploy.
  • SKILL.md files over 500 lines. The official guidance is to keep SKILL.md focused. Move detailed reference material into separate files in the skill directory and reference them with markdown links so Claude knows what each contains and when to load it.
  • Forgetting `disable-model-invocation: true` on skills with side effects. A `/deploy` or `/send-slack-message` skill should never auto-trigger — if Claude decides the conversation looks like the right time to deploy, you have a problem. Gate these with explicit manual invocation.

FAQ

What is the difference between CLAUDE.md and a skill? CLAUDE.md loads at the start of every session and costs tokens permanently. Skills load only when invoked and cost zero tokens until used. Rule: facts and conventions go in CLAUDE.md, procedures and playbooks go in skills.

Can I share a skill with my team? Yes. Put it in `.claude/skills/<skill-name>/SKILL.md`, commit to git, and every teammate gets it in their next session. For personal skills only you want, use `~/.claude/skills/`.

Do skills consume context after invocation? Yes. The rendered content enters the conversation and stays. Post-compaction, Claude Code re-attaches the most recent invocation of each skill after the summary, keeping the first 5,000 tokens of each with a combined budget of 25,000 tokens. Older skills can be dropped if you invoke many.

Can I use a skill for deployment or other destructive actions? Yes, but always set `disable-model-invocation: true` so only you can trigger it. You do not want Claude deciding the conversation looks ready to ship.

Skills versus subagents — which should I use? Different primitives. A skill is a prompt injected into your current conversation. A subagent runs in a separate context window with its own tools. Use `context: fork` in a skill to get the best of both — write a skill body that executes in a fresh subagent.

Where do I go from here? If you have not already, read our Claude Code hub guide for orientation across the whole product. If you want to master the `CLAUDE.md` side, see the anatomy of a great CLAUDE.md file. To evaluate whether Claude Code fits your workflow, our Claude Code review has the benchmarks and honest pros and cons.

Disclosure: Some links on this page are affiliate links. If you buy through them, we may earn a commission at no extra cost to you. This does not influence our editorial content or our reviews.
