Critical thinking analysis command for AI-generated responses
You can install this plugin from any of these themed marketplaces. Choose one, add it as a marketplace, then install the plugin.
Choose your preferred installation method below
A marketplace is a collection of plugins. Every plugin gets an auto-generated marketplace JSON for individual installation, plus inclusion in category and themed collections. Add a marketplace once (step 1), then install any plugin from it (step 2).
One-time setup for access to all plugins
When to use: If you plan to install multiple plugins now or later
Step 1: Add the marketplace (one-time)
/plugin marketplace add https://claudepluginhub.com/marketplaces/all.json
Run this once to access all plugins
Step 2: Install this plugin
/plugin install criticalthink@all
Use this plugin's auto-generated marketplace JSON for individual installation
When to use: If you only want to try this specific plugin
Step 1: Add this plugin's marketplace
/plugin marketplace add https://claudepluginhub.com/marketplaces/plugins/criticalthink.json
Step 2: Install the plugin
/plugin install criticalthink@criticalthink
Modern LLMs (Large Language Models) excel at generating confident, plausible-sounding responses. However, these responses often ignore real-world constraints or contain logical flaws.
The /criticalthink command is a custom command that embeds healthy skepticism into the dialogue process itself, as a countermeasure against the AI's "confirmation bias" and humans' "authority bias" of blindly trusting AI responses. By having the AI critically analyze its own previous response, it reveals hidden assumptions and overlooked risks.
Place the command file in the appropriate directory for your tool:
- Claude Code: .claude/commands/criticalthink.md (in project root or home directory)
- Codex CLI: ~/.codex/prompts/criticalthink.md (in home directory)
- Gemini CLI: .gemini/commands/criticalthink.toml (in project root)

Create the directory and copy the file:
For Claude Code:
mkdir -p .claude/commands
cp criticalthink.md .claude/commands/
For Codex CLI:
mkdir -p ~/.codex/prompts
cp criticalthink.md ~/.codex/prompts/
For Gemini CLI:
mkdir -p .gemini/commands
cp commands/criticalthink.toml .gemini/commands/
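For reference, Gemini CLI custom commands are defined as TOML files with a `prompt` field (and an optional `description`). The sketch below shows the general shape only; the prompt text is an illustrative placeholder, not the actual command body shipped in this repository:

```toml
# .gemini/commands/criticalthink.toml (illustrative sketch; the real file
# ships with the repository and contains the full analysis framework)
description = "Critically analyze the previous AI response"

prompt = """
Critically re-examine your previous response: state its core thesis,
list high-impact assumptions, flag overlooked risks, and give a
revised recommendation.
"""
```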
Run these commands inside Claude Code:
Add this repository to your plugin marketplace:
/plugin marketplace add https://github.com/abagames/slash-criticalthink.git
Install the criticalthink command:
/plugin install criticalthink
Install directly from GitHub:
gemini extensions install https://github.com/abagames/slash-criticalthink
After installation, restart your tool and verify it works:
User: What is 2 + 2?
AI: 4
User: /criticalthink
If a critical analysis begins, the command is working.
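If nothing happens, a quick way to check whether the command file is in the place your tool reads from is to probe the three locations listed in the installation section (a small sketch; run it from your project root):

```shell
# Report whether a command file exists at a given path.
check_install() {
  if [ -f "$1" ]; then
    echo "found:   $1"
  else
    echo "missing: $1"
  fi
}

# The three locations listed in the installation section.
check_install .claude/commands/criticalthink.md
check_install "$HOME/.codex/prompts/criticalthink.md"
check_install .gemini/commands/criticalthink.toml
```

Any path reported as missing means that tool will not see the command; re-run the corresponding mkdir/cp step above.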
Type /criticalthink immediately after the AI generates a response. The AI will then analyze its previous response from several structured perspectives, as the following examples show:
User: I want to implement microservices, how should I design it?
Claude: I propose a configuration with Kubernetes + Istio + gRPC as follows...
[Detailed design proposal]
User: /criticalthink
Claude: ### 1. Core Thesis & Confidence Score (Initial)
- **1-1. Core Thesis:** A Kubernetes-based full-stack solution
is optimal for microservices architecture
- **1-2. Initial Confidence:** 8/10
### 2. Foundational Analysis: Assumptions & Context
- **2-1. High-Impact Assumptions:**
1. Team has Kubernetes operational experience
2. Traffic scale justifies orchestration adoption
3. Initial investment and learning costs are acceptable
...
User: How should I implement rate limiting for this API?
Claude: You can implement it perfectly with Redis + Sliding Window algorithm.
With the following code...
User: /criticalthink
Claude: ### 3. AI-Specific Pitfall Analysis
- **3-2. "Happy Path" Bias:** Fail
Reason: No consideration of fallback strategy during Redis failure
- **4-1. Overlooked Risks:**
1. Redis becomes a SPOF (Single Point of Failure)
2. Network latency affects all requests
3. Redis memory usage estimation not performed
...
User: I want to modernize this legacy code
Claude: Let's rewrite everything in TypeScript + React Hooks...
User: /criticalthink
Claude: ### 3. AI-Specific Pitfall Analysis
- **3-3. Over-Engineering:** Fail
Reason: Proposing complete rewrite instead of gradual migration
- **4-2. Alternative Scenarios:**
Gradual migration strategy: First add type definition files (.d.ts),
then gradually TypeScript-ify starting from critical paths.
This approach reduces business risk and costs.
...
User: Review the README.md
Claude: This README describes slash-criticalthink, a custom command for AI coding assistants...
[Summary of contents]
User: /criticalthink
Claude: ### 2. Foundational Analysis: Assumptions & Context
- **2-1. High-Impact Assumptions:**
1. The user wanted a summary rather than specific information extraction
2. The criticalthink.md file actually exists in this repository
3. The user has context about what slash commands are
- **2-2. Contextual Integrity:** I assumed the user's intent without clarification:
"Review" could mean summarize, analyze quality, check for errors, verify
completeness, or prepare to use/modify.
### 3. AI-Specific Pitfall Analysis
- **3-2. "Happy Path" Bias:** Fail
Reason: Didn't verify whether criticalthink.md actually exists or check
if .claude/commands/ directory is present
### 5. Synthesis & Revised Recommendation
- **5-3. Actionable Next Step:** First, clarify what aspect of the README
you want reviewed: (a) content summary (already done), (b) verify the
slash command is installed and working, (c) critique the documentation
quality, or (d) help you set it up/use it.
Since the /criticalthink command makes the AI self-criticize, erroneous concerns or overly negative analysis may remain in the dialogue history and distort the AI's subsequent thinking (context contamination).
To avoid this, the following workflow is recommended:
1. Run /criticalthink for critical analysis.
2. Press Esc-Esc to rewind to your previous message, returning the conversation to the state before /criticalthink was run (this removes the critical analysis from history).

This allows you to benefit from critical thinking while avoiding context contamination.
The command is based on a general critical-thinking framework applicable to any claim. However, the general framework alone risked producing formal, superficial criticism, so failure patterns specific to LLMs and coding agents were incorporated into the analysis items. For these items, the command asks the AI for an explicit Pass/Fail self-evaluation.
This command is a guideline for finding gaps in your own thinking. Use AI's critical analysis not as the "correct answer" but as a tool to gain new perspectives.
The final judgment is always made by humans. AI's criticism is also just one generated artifact that should be critically examined. Pay special attention to:
Users doubt AI responses while simultaneously using AI (and commands) to doubt their own assumptions and biases. This mutual criticism process leads to more robust decision-making.
Critical analysis consumes additional tokens and increases response time. Rather than applying it to all responses, selective use is recommended before important decisions such as:
The approach of introducing a self-critical process to AI to improve its reasoning capabilities has also attracted attention in academic research. Particularly notable is the paper "Critical-Questions-of-Thought (CQoT)" by Federico Castagna et al.
CQoT and /criticalthink share the common purpose of improving the quality and reliability of output by having the AI perform metacognitive self-evaluation. Both introduce a step that verifies the reasoning process behind a response, rather than simply generating a single answer and stopping.
While sharing a common purpose, they differ in their objectives and methodologies.
| Comparison | CQoT (Academic Paper) | /criticalthink (This Tool) |
|---|---|---|
| Main Goal | Pursuit of logical soundness | Verification of practical viability |
| Domain | Domains with clear correct answers (math problems, logic puzzles, etc.) | Domains with trade-offs (software design, refactoring, etc.) |
| Timing | Integrated, automated pipeline before answer generation | Post-hoc analysis tool after answer generation |
| Criteria | Universal logical principles (e.g., are the premises clear? is the conclusion derived from them?) | AI- and development-specific failure patterns (e.g., happy-path bias, over-engineering, SPOF) |
In summary, while CQoT aims to make the AI a better "logician," /criticalthink aims to make it a wiser "design review partner" for developers. This tool specializes in uncovering the real-world risks, costs, and trade-offs (including alternative approaches) that developers face in practice and that cannot be measured by theoretical correctness alone. And while CQoT is a closed-loop system that automatically corrects flaws in the reasoning process, /criticalthink is an open-loop tool that provides insights for humans to make the final judgment.
1.0.0