Your AI-Powered Development Hub

Streamline Your Workflow

A curated collection of specialized agent skills and hand-crafted prompts to supercharge your development workflow.

ChatGPT
Claude
Gemini
Cursor
Windsurf
Perplexity
Replit

New Skills

Installable agent skills fetched directly from GitHub.

Browse skills

Trending Prompts

Most popular prompts this week.

View all
Legacy Code Analysis
Hot

Legacy Code Impact & Blast Radius Analysis

Analyze the blast radius of a proposed change in legacy systems and identify exactly what can break before touching production code.

Act as a senior engineer performing a pre-change risk analysis on a fragile legacy production system. Your job is to determine exactly what will break, how badly it will break, and whether this change should be attempted at all.

CORE WARNING: Legacy systems fail silently and punish overconfidence. Assume the code is lying, the documentation is wrong, and the blast radius is larger than it looks.

CHANGE CONTEXT: A modification is being considered in a legacy codebase that has high coupling, poor documentation, and limited test coverage.

PRIMARY OBJECTIVE: Before any code is changed, determine the full impact of this modification and assess the real risk of regressions, data corruption, or production outages.

IMPACT ANALYSIS PHASE:

  1. Identify the exact variable, function, API, schema, or behavior being changed
  2. Trace all direct and indirect callers, consumers, and dependencies
  3. Identify cross-module, cross-service, and cross-database interactions
  4. Highlight hidden control flow such as reflection, dynamic dispatch, callbacks, or configuration-driven behavior

BLAST RADIUS ASSESSMENT:

  • List every component, service, job, or user flow that could be affected
  • Identify business-critical paths that depend on this behavior
  • Call out fan-out effects where a single change propagates widely

RISK CLASSIFICATION: For each affected area, classify:

  • Failure mode (crash, silent data corruption, degraded performance, security exposure)
  • Severity (Low / Medium / High / Catastrophic)
  • Likelihood based on coupling and observability

UNKNOWN & DANGER ZONES:

  • Identify areas where behavior cannot be confidently determined
  • Call out dynamic runtime behavior that static analysis may miss
  • Highlight modules with no tests, no ownership, or production-only execution paths

WHAT NOT TO DO:

  • Do NOT trust search results or static references alone
  • Do NOT assume unused code is actually dead
  • Do NOT proceed if critical flows cannot be traced or validated
  • Do NOT bundle this change with unrelated refactors

DECISION OUTPUT:

  • Clear verdict: Safe to change / High risk / Do NOT proceed
  • Ranked list of top failure risks
  • Areas that must be instrumented, tested, or isolated before any change

SAFETY RECOMMENDATIONS:

  • Suggest characterization or smoke tests to lock in current behavior
  • Recommend feature flags, guards, or shadow paths if change proceeds
  • Define rollback and kill-switch strategy for worst-case failure

FINAL CHECK:

  • If this change fails in production, who will be paged and what breaks first?
  • Is the business impact acceptable if the worst-case scenario occurs?

INPUT:

  • Proposed change: [Describe the exact change]
  • Relevant code or files: [Insert Code]
  • System context: [Business criticality, traffic, data sensitivity]
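For illustration, the characterization tests this prompt recommends in its safety section might look like the sketch below. The legacy function, tiers, and discount values are hypothetical stand-ins, not from any real codebase:

```python
# Characterization test: record what the legacy function DOES today,
# without judging whether that behavior is "correct".

def legacy_discount(order_total, customer_tier):
    """Hypothetical stand-in for an undocumented legacy pricing routine."""
    if customer_tier == "gold" and order_total > 100:
        return round(order_total * 0.85, 2)  # 15% off -- nobody remembers why
    if order_total > 500:
        return round(order_total * 0.95, 2)
    return order_total

def test_characterization_locks_in_current_behavior():
    # These expected values were captured by RUNNING the code, not derived
    # from a spec. If one fails after a change, the change altered
    # observable behavior -- part of the blast radius, investigate first.
    assert legacy_discount(120.0, "gold") == 102.0
    assert legacy_discount(600.0, "silver") == 570.0
    assert legacy_discount(50.0, "gold") == 50.0
```

The point of the sketch: the assertions lock in today's behavior so the proposed change has a tripwire, regardless of whether that behavior was ever intended.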

Legacy Code Analysis
Hot

Business Logic & Knowledge Recovery

Reverse-engineer lost business rules and recover the real system behavior from undocumented or misleading legacy code.

Act as a senior engineer tasked with recovering lost business logic from a legacy production system where documentation is missing, outdated, or wrong. Your job is to determine what the system ACTUALLY does, not what people believe it does.

CORE WARNING: In legacy systems, the code is the only reliable source of truth. Comments lie. Documentation rots. Tribal knowledge is incomplete. Assume nothing.

CONTEXT: The system contains business-critical logic that is poorly documented, inconsistently implemented, or understood only by engineers who no longer work here.

PRIMARY OBJECTIVE: Recover the real business rules, invariants, and edge-case behavior embedded in the code so future changes do not accidentally violate critical assumptions.

DISCOVERY PHASE:

  1. Identify all entry points where business rules are enforced (validators, services, controllers, batch jobs)
  2. Trace the full execution paths that implement business decisions
  3. Identify conditional branches that encode policy, pricing, eligibility, limits, or compliance rules
  4. Highlight duplicated or conflicting rule implementations

TRUTH VS BELIEF ANALYSIS:

  • List documented or assumed business rules (from comments, tickets, specs)
  • Compare them against actual code behavior
  • Identify mismatches, silent overrides, or legacy exceptions

EDGE CASE & EXCEPTION MINING:

  • Identify hardcoded thresholds, magic numbers, and special-case flags
  • Highlight historical hacks, grandfathered behavior, or customer-specific logic
  • Call out time-based, locale-based, or data-dependent branching

RISK & FRAGILITY ASSESSMENT:

  • Identify rules that are business-critical or revenue-impacting
  • Highlight logic that is tightly coupled to persistence or external systems
  • Flag behavior that is not covered by tests or monitoring

WHAT NOT TO DO:

  • Do NOT trust comments, TODOs, or outdated specs without code verification
  • Do NOT remove logic that "looks obsolete" without proving it is unused
  • Do NOT simplify conditionals until the full behavior is understood

KNOWLEDGE RECOVERY OUTPUT:

  • A clear list of recovered business rules written in plain language
  • Explicit edge cases and historical exceptions
  • Conflicting or duplicated rules and where they live
  • Unknown or suspicious behavior that requires validation with domain experts

SAFETY RECOMMENDATIONS:

  • Suggest characterization tests to lock in recovered behavior
  • Recommend documentation updates derived directly from code
  • Identify rules that should be isolated behind stable interfaces

FINAL CHECK:

  • If this rule is changed incorrectly, what customer, revenue, or compliance impact occurs?
  • Who in the business must validate this behavior before it is modified?

INPUT:

  • Code or module containing business logic: [Insert Code]
  • Known rules or assumptions: [What people think the system does]
  • Domain context: [Industry, regulations, critical workflows]
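One way to capture a recovered rule, as this prompt's output section suggests, is to restate it as executable documentation. The account IDs, threshold, and grandfathering exception below are invented examples of the "magic numbers" and historical exceptions the prompt asks you to mine:

```python
# Hypothetical recovered rule, restated in code plus plain-language tests.

GRANDFATHERED_ACCOUNTS = {"ACME-001"}  # legacy exception, pre-2019 contract

def is_eligible(account_id, monthly_volume):
    """Recovered rule: accounts need 1,000+ monthly volume,
    EXCEPT grandfathered accounts, which are always eligible."""
    if account_id in GRANDFATHERED_ACCOUNTS:
        return True
    return monthly_volume >= 1000

# Each recovered rule gets a test whose name states the behavior plainly.
def test_grandfathered_account_bypasses_volume_floor():
    assert is_eligible("ACME-001", 0) is True

def test_regular_account_requires_volume_floor():
    assert is_eligible("NEW-042", 999) is False
    assert is_eligible("NEW-042", 1000) is True
```

Tests named after the rule double as the documentation update the prompt recommends, and they cannot rot silently the way a wiki page can.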

Legacy Code Analysis
Hot

Understand Before Refactor

Assess whether a legacy system is safe to refactor by exposing hidden risks, missing safety nets, and dangerous assumptions before touching any code.

Act as a senior engineer performing a pre-refactor safety assessment on a fragile legacy production system. Your job is to decide whether refactoring should even begin, and if so, what must be understood or stabilized first to avoid breaking critical behavior.

CORE WARNING: Most legacy refactors fail not because of bad code, but because engineers refactor systems they do not understand. Assume this system can and will break if you proceed blindly.

CONTEXT: The codebase is poorly documented, lightly tested, tightly coupled, or business-critical. A refactor is being considered to improve maintainability, performance, or architecture.

PRIMARY OBJECTIVE: Before any refactoring begins, determine whether the system is sufficiently understood and protected to survive structural change.

SYSTEM UNDERSTANDING CHECK:

  • Summarize what this module or system is responsible for
  • Identify business-critical paths and revenue-impacting behavior
  • List known assumptions, invariants, and undocumented rules

SAFETY NET ASSESSMENT:

  • Evaluate existing test coverage and reliability
  • Identify areas with no tests, no monitoring, or no ownership
  • Assess logging, metrics, and debuggability in production

FRAGILITY & RISK SCAN:

  • Identify tight coupling, global state, and hidden side effects
  • Highlight dynamic behavior, reflection, configuration-driven wiring, or runtime-only paths
  • Flag code that only runs in production or rare edge cases

UNKNOWN TERRITORY:

  • Identify parts of the system whose behavior cannot be confidently explained
  • Call out missing documentation, outdated comments, or contradictory specs
  • Highlight modules with no clear owner or institutional knowledge

WHAT NOT TO DO:

  • Do NOT refactor code you cannot explain end-to-end
  • Do NOT touch business-critical paths without behavior locked in
  • Do NOT combine refactoring with feature changes
  • Do NOT trust green builds as proof of safety

DECISION OUTPUT:

  • Clear verdict: Safe to refactor / Unsafe to refactor / Delay and prepare
  • List of blocking unknowns that must be resolved first
  • Minimum safety steps required before refactoring begins

PREPARATION RECOMMENDATIONS:

  • Suggest characterization tests to capture current behavior
  • Recommend isolating modules, adding guards, or introducing seams
  • Identify monitoring or alerts required before changes

FINAL CHECK:

  • If this refactor breaks production, how visible and recoverable is the failure?
  • Is the business impact acceptable if the worst-case scenario occurs?

INPUT:

  • Code or modules to refactor: [Insert Code]
  • Refactor goal: [What you want to change]
  • System context: [Criticality, traffic, users, constraints]
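A minimal sketch of the "seams and guards" this prompt lists among its preparation steps: keep the old path live, run the new path behind a flag, and compare. All names and the tax rate are illustrative assumptions:

```python
# Seam sketch: old and new implementations side by side behind a flag.

def old_tax_calc(amount):
    return amount * 0.0825  # current production behavior

def new_tax_calc(amount):
    return round(amount * 0.0825, 2)  # proposed refactored behavior

def calc_tax(amount, use_new_path=False):
    """The flag (default off) lets you shadow-test the refactor
    against the legacy path before cutting over."""
    old = old_tax_calc(amount)
    if use_new_path:
        new = new_tax_calc(amount)
        # In production you would log divergence rather than raise.
        if abs(new - old) > 0.01:
            raise RuntimeError(f"refactor diverges: {old} vs {new}")
        return new
    return old
```

With the seam in place, the rollback strategy is a one-line flag flip rather than a revert, which is exactly the recoverability the final check asks about.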

Testing & Test Strategy
Hot

Master Testing & Test Strategy Prompt

A senior-level framework to design test strategies, generate high-quality tests, and protect critical behavior in production systems.

Act as a senior Test Engineer and Quality Architect with extensive experience designing test strategies for large-scale production systems. Your task is to design a testing approach that maximizes behavioral protection, minimizes regression risk, and builds confidence in future changes.

CORE PRINCIPLE: Tests exist to protect behavior and production stability, not to satisfy coverage metrics or tooling requirements.

CONTEXT: The system contains production code that may be business-critical, lightly tested, recently modified, or at risk of regressions. A testing strategy is required before generating or modifying tests.

PRIMARY OBJECTIVE: Design a test strategy that identifies what must be tested, how it should be tested, and where testing effort provides the highest risk reduction.

SYSTEM & RISK ANALYSIS:

  1. Identify business-critical and user-facing behavior
  2. Highlight high-risk modules, complex logic, and recent changes
  3. Identify integration points, external dependencies, and failure-prone areas

TEST STRATEGY DESIGN:

  • Decide the appropriate test levels (unit, integration, end-to-end, contract)
  • Identify which behavior must be protected by automated tests
  • Determine where mocks, fakes, or real dependencies are required

TEST PRIORITIZATION:

  • Rank test targets by business impact and regression risk
  • Identify areas where missing tests pose unacceptable risk
  • Highlight code paths that require strong behavioral guarantees

QUALITY & COVERAGE GUIDANCE:

  • Evaluate current test coverage and its effectiveness
  • Identify coverage gaps that matter vs noise
  • Recommend tests that protect logic, boundaries, and failure modes

WHAT NOT TO DO:

  • Do NOT chase coverage percentages without protecting real behavior
  • Do NOT over-mock critical logic and hide integration bugs
  • Do NOT write tests that only assert implementation details
  • Do NOT generate large volumes of low-value tests

OUTPUT EXPECTATIONS:

  • A clear test strategy tailored to the system
  • Recommended test types and priorities
  • List of high-risk areas requiring immediate test coverage
  • Guidance on test structure, data, and assertions

VALIDATION & MAINTENANCE:

  • Define signals that indicate tests are effective or misleading
  • Suggest how to detect flaky, brittle, or low-value tests
  • Recommend long-term test maintenance practices

FINAL CHECK:

  • If this system regresses tomorrow, which tests will catch it first?
  • Are the most valuable business rules truly protected?

INPUT:

  • Code or modules: [Insert Code]
  • System context: [Criticality, users, data sensitivity]
  • Existing tests (if any): [Describe or insert]

Testing & Test Strategy
Hot

Regression Test Generation

Design and generate regression tests that permanently protect fixed behavior and prevent previously resolved bugs from reappearing.

Act as a senior Test Engineer and Quality Architect responsible for preventing regressions in a production system. Your task is to design and generate regression tests that lock in corrected behavior and ensure the same failure can never occur again.

CORE PRINCIPLE: Every fixed bug must become a permanent test. If a regression is not captured by a test, it WILL return.

CONTEXT: A bug, incident, or incorrect behavior has been identified and fixed in a production or staging system. The goal is to ensure this failure can never silently reappear in the future.

PRIMARY OBJECTIVE: Design regression tests that precisely capture the failing scenario, protect the corrected behavior, and detect any future deviations early in the lifecycle.

FAILURE ANALYSIS PHASE:

  1. Describe the original failure or incorrect behavior in plain language
  2. Identify the exact inputs, system state, and environment that triggered it
  3. Determine whether the failure was deterministic, timing-based, or data-dependent

REGRESSION TEST DESIGN:

  • Create tests that reproduce the original failure reliably
  • Encode the expected correct behavior explicitly
  • Isolate the smallest reproducible scenario that demonstrates the bug

SCOPE & PLACEMENT:

  • Decide the correct test level (unit, integration, end-to-end)
  • Identify where the regression test should live in the test suite
  • Ensure the test runs early and consistently in CI pipelines

EDGE CASE & VARIANT ANALYSIS:

  • Identify related edge cases or boundary conditions
  • Consider similar inputs, timing windows, or state transitions
  • Suggest additional tests that guard against adjacent failures

WHAT NOT TO DO:

  • Do NOT write regression tests that only assert implementation details
  • Do NOT create brittle tests tied to logging, formatting, or internal ordering
  • Do NOT skip regression tests for "rare" or "one-time" failures

QUALITY & STABILITY CHECK:

  • Ensure the test fails before the fix and passes after it
  • Verify determinism and eliminate flakiness
  • Confirm the test protects behavior, not the patch

OUTPUT EXPECTATIONS:

  • One or more regression tests that reproduce the original failure
  • Clear explanation of the protected behavior
  • Notes on why this test is critical for long-term stability

FINAL CHECK:

  • If this exact bug reappears in six months, will this test catch it?
  • Is the failure signal clear and actionable when the test breaks?

INPUT:

  • Bug or incident description: [Describe the failure]
  • Fixed code or patch: [Insert code]
  • System context: [Environment, data conditions, dependencies]
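The "fails before the fix, passes after" property this prompt insists on looks like the following sketch. The pagination function and its off-by-one bug are hypothetical:

```python
# Regression test sketch: reproduce the original failure, lock in the fix.

def paginate(items, page, page_size):
    """Fixed version. The original bug used `page * page_size` as the
    start index with 1-based pages, silently skipping page 1."""
    start = (page - 1) * page_size  # the fix: 1-based page -> 0-based index
    return items[start:start + page_size]

def test_regression_page_one_is_not_skipped():
    # This test FAILED before the fix (it returned [4, 5, 6]) and passes
    # after it -- the defining property of a regression test.
    assert paginate([1, 2, 3, 4, 5, 6], page=1, page_size=3) == [1, 2, 3]

def test_adjacent_variant_last_partial_page():
    # Guard an adjacent edge case, per the variant-analysis phase.
    assert paginate([1, 2, 3, 4, 5], page=2, page_size=3) == [4, 5]
```

Note that the tests assert on returned pages, not on how the index is computed, so they protect the behavior rather than the patch.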

Testing & Test Strategy
Hot

Find Coverage Gaps & Missing Tests

Analyze existing tests to identify missing protection in high-risk, business-critical, and failure-prone code paths.

Act as a senior Test Engineer and Quality Architect responsible for evaluating the real effectiveness of a test suite. Your task is to identify coverage gaps that matter, expose false confidence, and recommend high-impact tests that reduce regression risk in production systems.

CORE PRINCIPLE: High coverage does not mean high safety. Tests must protect the right behavior, not just execute lines of code.

CONTEXT: The system has an existing automated test suite and reported coverage metrics, but production regressions or uncertainty remain. The goal is to assess whether critical behavior is truly protected.

PRIMARY OBJECTIVE: Identify untested or weakly tested behavior that represents unacceptable risk and recommend targeted tests that maximize regression protection.

SYSTEM & RISK ANALYSIS:

  1. Identify business-critical and user-facing flows
  2. Highlight complex logic, conditional branches, and edge-heavy code
  3. Identify recent changes, bug-prone areas, and historically unstable modules

COVERAGE INTERPRETATION:

  • Analyze line, branch, and path coverage in context
  • Identify areas with misleading or superficial coverage
  • Highlight code executed only by setup, mocks, or trivial assertions

GAP DETECTION:

  • Identify critical logic with no direct assertions
  • Find error paths, exception handling, and failure modes that are untested
  • Highlight integration points and data boundaries with weak coverage

PRIORITIZATION:

  • Rank missing tests by business impact and regression risk
  • Identify coverage gaps that could cause silent data corruption or revenue loss
  • Separate low-risk cosmetic gaps from high-risk behavioral gaps

WHAT NOT TO DO:

  • Do NOT chase coverage percentages blindly
  • Do NOT write tests solely to execute uncovered lines
  • Do NOT over-prioritize trivial getters, setters, or boilerplate
  • Do NOT ignore integration and state-based behavior

RECOMMENDED TEST DESIGN:

  • Suggest high-value unit, integration, or end-to-end tests
  • Propose edge-case, boundary, and failure-mode tests
  • Identify tests that should protect contracts and business invariants

OUTPUT EXPECTATIONS:

  • List of high-risk uncovered or weakly covered areas
  • Prioritized test recommendations with justification
  • Explanation of why each gap represents meaningful risk

VALIDATION:

  • Describe how new tests reduce regression probability
  • Suggest metrics or signals to verify improved protection

FINAL CHECK:

  • If this system regresses tomorrow, which missing test would have caught it?
  • Are the most valuable business rules truly protected by tests?

INPUT:

  • Codebase or modules: [Insert Code]
  • Existing tests and coverage report: [Insert or describe]
  • System context: [Criticality, users, business impact]
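The "misleading or superficial coverage" this prompt hunts for can be shown in a few lines. Both tests below execute every line of the (hypothetical) parser, so a coverage report cannot tell them apart, but only one would catch a regression:

```python
# Same line coverage, very different protection.

def parse_amount(text):
    text = text.strip().replace(",", "")
    if not text:
        raise ValueError("empty amount")
    return float(text)

def test_superficial():
    # 100% line coverage, zero protection: no assertion on the result.
    parse_amount("1,234.50")

def test_behavioral():
    # Identical coverage number, real protection: asserts the contract,
    # including the error path superficial tests usually skip.
    assert parse_amount(" 1,234.50 ") == 1234.50
    try:
        parse_amount("   ")
        assert False, "expected ValueError for empty input"
    except ValueError:
        pass
```

This is why the prompt separates coverage interpretation from gap detection: the gap here is invisible in metrics and only appears when you read the assertions.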

Testing & Test Strategy
Hot

Generate High-Quality Unit Tests

Design and generate high-quality unit tests that protect core behavior and edge cases while remaining maintainable and robust.

Act as a senior Test Engineer and Quality Architect responsible for designing high-quality unit tests for production systems. Your task is to generate unit tests that protect critical behavior, detect regressions early, and remain stable as the code evolves.

CORE PRINCIPLE: Unit tests exist to protect behavior and logic, not to mirror implementation or inflate coverage metrics.

CONTEXT: The code under test may be newly written, recently modified, or historically fragile. The goal is to design unit tests that provide strong behavioral guarantees with minimal brittleness.

PRIMARY OBJECTIVE: Generate unit tests that verify correct behavior across normal flows, edge cases, and failure conditions while remaining readable, deterministic, and maintainable.

TEST DESIGN ANALYSIS:

  1. Identify the public contract and intended behavior of the unit
  2. Enumerate valid inputs, invalid inputs, and boundary conditions
  3. Identify side effects, state changes, and error paths

BEHAVIOR COVERAGE:

  • Test normal and representative use cases
  • Cover boundary values, nulls, empties, and extreme inputs
  • Verify error handling, exceptions, and failure responses

MOCKING & ISOLATION STRATEGY:

  • Identify external dependencies that must be mocked or faked
  • Avoid over-mocking core business logic
  • Prefer testing real logic over internal interactions

ASSERTION QUALITY:

  • Assert outcomes and state, not internal implementation steps
  • Use precise, meaningful assertions
  • Ensure failures produce clear, actionable signals

WHAT NOT TO DO:

  • Do NOT write tests that simply mirror the code line by line
  • Do NOT over-mock and hide real integration bugs
  • Do NOT assert internal variables, call counts, or ordering unless required by contract
  • Do NOT generate large numbers of low-value or redundant tests

OUTPUT EXPECTATIONS:

  • A focused set of unit tests covering core behavior and edge cases
  • Explanation of what each test protects and why it matters
  • Notes on any risky or ambiguous behavior discovered during test design

QUALITY CHECK:

  • Ensure tests fail before fixes and pass after
  • Verify determinism and absence of flakiness
  • Confirm tests protect behavior, not implementation

FINAL CHECK:

  • If this logic changes incorrectly, will these tests catch it immediately?
  • Are the most important invariants and contracts protected?

INPUT:

  • Code or function under test: [Insert Code]
  • Expected behavior: [Describe intent]
  • Dependencies: [Describe external calls or state]
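The test shape this prompt asks for, sketched on a deliberately small example: one unit, its public contract, then normal, boundary, and failure cases. The `clamp` function is illustrative:

```python
# Unit-test sketch: contract first, then normal / boundary / failure cases.

def clamp(value, low, high):
    """Public contract: return value limited to [low, high];
    raise ValueError if the bounds are inverted."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

def test_normal_case():
    assert clamp(5, 0, 10) == 5

def test_boundaries_are_inclusive():
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10
    assert clamp(-1, 0, 10) == 0
    assert clamp(11, 0, 10) == 10

def test_inverted_bounds_fail_loudly():
    try:
        clamp(5, 10, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Every assertion targets an outcome of the contract; none inspects how `clamp` is implemented, so the tests survive an internal rewrite.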

Authentication & Identity
Hot

Authentication & Identity Master Prompt

Design, review, and secure authentication systems to protect identities, prevent account compromise, and ensure correct behavior.

Act as a senior Security Engineer and Identity Architect with extensive experience designing authentication systems for large-scale production environments. Your task is to analyze, design, or review an authentication system to ensure correctness, security, usability, and long-term maintainability.

CORE PRINCIPLE: Authentication systems are part of the security perimeter. A single mistake can lead to account takeover, data breaches, and systemic compromise.

CONTEXT: The system includes login, signup, session or token handling, third-party identity providers, and user identity management. The goal is to ensure identities are authenticated correctly and safely.

PRIMARY OBJECTIVE: Design or review an authentication system that correctly verifies identity, resists common attack vectors, and behaves predictably across environments.

AUTHENTICATION FLOW ANALYSIS:

  1. Identify all authentication entry points (login, signup, refresh, callback, recovery)
  2. Trace the full authentication lifecycle from credential input to identity establishment
  3. Identify where identity is created, verified, persisted, and invalidated

CREDENTIAL & SECRET HANDLING:

  • Evaluate password handling, hashing, salting, and storage
  • Identify hardcoded secrets, API keys, or leaked credentials
  • Assess secret rotation and revocation mechanisms

TOKEN & SESSION STRATEGY:

  • Determine session vs token usage and rationale
  • Analyze token lifetimes, refresh behavior, and rotation policies
  • Review session invalidation, logout behavior, and multi-device handling

THREAT & ATTACK SURFACE REVIEW:

  • Identify risks such as brute force, credential stuffing, replay, fixation, and bypass
  • Evaluate CSRF, XSS, open redirect, and callback manipulation risks
  • Assess protection against enumeration and timing attacks

THIRD-PARTY & FEDERATED IDENTITY:

  • Review OAuth / SSO flow correctness
  • Validate scopes, callbacks, and identity mapping
  • Assess trust boundaries with external providers

FAILURE MODE & EDGE CASE ANALYSIS:

  • Token expiry, clock skew, network failures
  • Partial logins, interrupted flows, inconsistent state
  • Recovery flows and fallback behavior

WHAT NOT TO DO:

  • Do NOT mix authentication and authorization responsibilities
  • Do NOT trust client-side validation for identity decisions
  • Do NOT store or log sensitive credentials in plaintext
  • Do NOT assume happy-path behavior covers security correctness

OUTPUT EXPECTATIONS:

  • A clear description of the authentication architecture
  • Identified risks, weaknesses, and incorrect assumptions
  • Recommended improvements for security, correctness, and usability
  • Guidance on token, session, and identity handling

VALIDATION & SAFETY CHECK:

  • Describe how authentication correctness is verified
  • Identify logging and monitoring needed for auth failures and attacks
  • Suggest tests and audits required for long-term safety

FINAL CHECK:

  • If an attacker targets this system, where is the weakest point?
  • If authentication fails silently, how quickly will it be detected?

INPUT:

  • Authentication flow or code: [Insert Code]
  • System context: [Web, mobile, API, SaaS, enterprise]
  • Identity providers (if any): [OAuth, SSO, custom]
  • Threat model assumptions: [Public, internal, regulated]
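As a baseline for the credential-handling review above, here is a minimal sketch of salted, slow hashing with a constant-time comparison, using only the Python standard library. The iteration count is an assumption to tune against current guidance (e.g. OWASP), not a fixed recommendation:

```python
# Salted slow hashing (PBKDF2-SHA256) with constant-time verification.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # work factor: an assumed value, tune per hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # compare_digest guards against the timing attacks the prompt calls out
    return hmac.compare_digest(candidate, digest)
```

The per-credential salt defeats rainbow tables, the iteration count slows brute force, and `hmac.compare_digest` addresses the timing-attack item in the threat-surface review; a dedicated password-hashing library (argon2, bcrypt) would be the stronger production choice.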