asgeirtj/system_prompts_leaks
⭐ 39,933 · #16 · N/A
Extracted system prompts from ChatGPT (GPT-5.5 Thinking), Claude (Opus 4.7, Opus 4.6, Sonnet 4.6, Claude Code), Gemini (3.1 Pro, 3 Flash, Gemini CLI), Grok (4.3 beta), Perplexity, and more. Updated regularly.
ai ai-transparency anthropic Skill
Project Analysis
| 🎯 Positioning | Agent capability enhancement |
| 💡 Core Value | Provides standardized Skills and Prompt templates for AI coding Agents, covering specific scenarios (code review, debugging, architecture design, etc.), enabling higher quality output in these scenarios |
| 👥 Target Audience | Developers using Agent tools like Claude Code/Cursor/Codex, aiming to improve Agent performance on specific tasks |
Why It Deserves Attention
39,933 Stars, good community activity, indicating it solves a real pain point.
In-depth AI Analysis Report
Alright, as a senior technical editor, here is an in-depth analysis report on this GitHub project.
In-depth Project Analysis: asgeirtj/system_prompts_leaks
One-sentence Summary
A systematic archive of leaked underlying instructions from mainstream AI models.
Core Features
This project is not an executable software tool, but a high-quality text archive collection. Its core value lies in the following three aspects:
- Broad Model Coverage: The project collects system prompts from almost all major AI chat and coding assistants, including OpenAI (ChatGPT, Codex), Anthropic (Claude Opus, Sonnet, Code), Google (Gemini), xAI (Grok), Perplexity, and others. It covers everything from flagship models to scenario-specific variants (e.g., Claude Design, Claude for Excel).
- Version Tracking and Updates: The project not only includes the latest versions of prompts (e.g., GPT-5.5 Thinking, Claude Opus 4.7) but also retains older versions (e.g., Opus 4.5), with clear update dates. This is extremely valuable for studying model behavior evolution and understanding vendor strategy adjustments.
- Structure and Readability: Each model has its own independent Markdown file, organized by vendor directory (Anthropic, OpenAI, Google, etc.). For complex models like Claude, it further breaks down into versions like "Human-readable", "Injections", and "No-tools" for easy comparison. The clear "Recently Updated" table and directory index in the README greatly enhance information retrieval efficiency.
Technical Architecture
- Tech Stack: This project is essentially a pure documentation repository with zero tech stack. It relies solely on Git for version control and Markdown as the content format.
- Code Structure Highlights: Its structural design is the biggest highlight. Through clear directory hierarchy (
Vendor/Model_Name.md) and standardized file naming, it achieves high maintainability and extensibility. The README, serving as an "index page", with its structure and changelog design, exemplifies excellent open-source documentation practices worth emulating by other projects.
Quick Start Guide
- Access the Repository: Directly visit
https://github.com/asgeirtj/system_prompts_leaks. - Browse Content: Check the "Recently Updated" section or the vendor-categorized table in the
README.mdfile, then click the link for the corresponding model to view its system prompts. - Clone the Repository (Optional): For offline viewing or contribution, run
git clone https://github.com/asgeirtj/system_prompts_leaks.gitto clone the entire repository locally.
Strengths, Weaknesses, and Use Cases
Strengths:
- Information Scarcity and Timeliness: System prompts are core trade secrets of AI vendors and extremely difficult to obtain. This project provides a public, centralized, and continuously updated "leak" source with extremely high information value.
- Excellent Material for Comparative Research: Developers can horizontally compare differences in security policies, role settings, and output specifications across different vendors (e.g., OpenAI vs Anthropic) or different versions of the same vendor (e.g., Opus 4.6 vs 4.7).
- Zero Barrier to Entry: No environment setup is needed; a browser is sufficient for reading.
Weaknesses:
- Questionable Information Reliability: As "leaked" content, its source cannot be officially verified and risks being forged or outdated. Users must judge and cross-validate independently.
- Passive Consumption Project: The project itself does not provide any analysis tools or visualization capabilities; users must read and understand lengthy texts on their own.
- Legal and Ethical Risks: The project's name and nature operate in a gray area, potentially facing legal challenges from AI vendors or community ethical controversies.
Use Cases:
- AI Prompt Engineers/Researchers: Study how top-tier models are "trained," learning about their security restrictions, role settings, and output format writing.
- AI Application Developers: Understand the underlying model's "personality" and behavioral boundaries to better design and debug upper-layer applications, avoiding unnecessary restrictions.
- AI Safety and Ethics Researchers: Analyze vendors' specific implementation strategies for content moderation, bias avoidance, and safety.
- AI Enthusiasts/Tech Bloggers: Obtain first-hand material for in-depth analysis, community discussion, or content creation.
Community and Popularity
- Stars: 39,933, an astonishing number indicating extremely high community attention and recognition. This places it at the top tier among information aggregation projects.
- Topics: Tags cover all major AI models and keywords (
ai-transparency,prompt-engineering), with excellent SEO optimization for easy discovery. - Recent Updates: The "Recently Updated" table in the README shows high-frequency updates as of April 2026, covering the latest models like GPT-5.5, Claude Cowork, and Grok 4.3 Beta, indicating an active maintainer and strong project vitality.
- License: Uses the MIT license, which is very open for content use and distribution, facilitating information dissemination.
Summary: asgeirtj/system_prompts_leaks is a phenomenal open-source project. It precisely captures the AI community's curiosity and need for "black box" internal information. Through excellent content organization and continuous high-frequency updates, it has built a high-value, high-impact information archive. Despite risks related to source reliability and legality, its contribution to promoting AI transparency and advancing technical research is immense. For anyone seeking a deep understanding of modern AI model behavior, this is an unmissable treasure trove.
Technical Information
- 💻 Language: N/A
- 📂 Topics: ai, ai-transparency, anthropic, chatgpt, claude
- 🕐 Updated: 2026-02-27
- 🔗 Visit GitHub Repository
Data updated on 2026-05-09 · Star count based on actual GitHub data