Making Security Ambient: Seamless Checks in the Age of Vibe-Coding
How Claude Code's Custom Commands Transform Security from Checkpoint to Atmosphere
In the world of vibe-coding, where developers "fully give in to the vibes," as Andrej Karpathy put it, I've been exploring ways to integrate essential security thinking into this new workflow paradigm without sacrificing the speed and flexibility that make AI-assisted coding so powerful.
The Security Dilemma in Vibe-Coding
Vibe-coding represents a fundamental shift in how software gets built. Instead of meticulously crafting each line, developers describe what they want and let AI generate the implementation. While this approach dramatically accelerates development, it creates blind spots around security considerations that traditional manual code reviews would catch.
I've found a practical middle ground by creating a comprehensive security check custom command for Claude Code that aligns with my philosophy of "performing security checks as early as feasible in the development lifecycle."
Claude Code's Custom Commands: Security Knowledge Transfer
Claude Code—originally developed as a research project for Anthropic's internal engineering teams before its public release—offers a powerful but often overlooked feature: custom commands stored as markdown files. This feature enables something remarkably valuable: codifying security expertise in a form that travels with your projects.
By creating a detailed security check command, I've essentially embedded security review capabilities directly into the development environment. This represents a form of "security knowledge transfer" that makes security practices more accessible without requiring context switching to specialized tools.
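As a minimal sketch of what that looks like (the file name and instructions here are illustrative, and the file is shown project-local; the next section moves it into a shared repository):
# Project-scoped commands live under .claude/commands/
mkdir -p .claude/commands
# A deliberately small example; a real command would be far more detailed
cat > .claude/commands/security-check.md <<'EOF'
Review the code changed in this session for security issues:
authentication, session handling, cryptography, authorization
boundaries, and business logic. Report each finding with its
file, location, and severity, but do not change any code yet.
EOF
# Inside Claude Code, this file is now invocable as /project:security-check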
The Pull-and-Link Pattern for Security Commands
What's worked particularly well is a pull-and-link pattern for these commands:
Maintain a central repository of security-focused Claude commands
Let developers selectively pull commands that align with their work
Use symbolic links to make these commands available across multiple projects
This approach addresses a common problem in security: the friction between development speed and security thoroughness. With linked commands, security thinking becomes ambient rather than intrusive.
For example, when I discover a new class of vulnerability or security pattern, I can update the command in my central repository, and all linked projects immediately benefit from this enhanced security awareness.
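The pull step itself can be a few lines of shell. A sketch, assuming the central repository layout shown later in this post and illustrative command names:
# Link selected commands from the central repository into the current project
CENTRAL=~/global-claude-commands/.claude/commands
mkdir -p .claude/commands
for cmd in security-check bugfix; do
  ln -sf "$CENTRAL/$cmd.md" ".claude/commands/$cmd.md"
done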
Beyond Checkbox Security
What makes this approach valuable is how it moves beyond "checkbox security" into something more integral to the development process. The security check command doesn't just look for textbook vulnerabilities; it examines code through multiple lenses (an illustrative excerpt follows this list):
Custom authentication implementations
Session management patterns
Cryptographic choices
Authorization boundaries
Business logic flaws
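To make that concrete, here is an illustrative excerpt (not the full command) showing how those lenses can be encoded as a checklist in the command file:
# Append the review lenses to the sketch from earlier; wording is illustrative
cat >> .claude/commands/security-check.md <<'EOF'

## Review lenses
- Custom authentication: password storage, token validation, bypass paths
- Session management: fixation, expiry, cookie flags, logout handling
- Cryptography: algorithm and mode choices, key handling, randomness sources
- Authorization: ownership and role checks on every state-changing action
- Business logic: race conditions, workflow bypasses, abuse of intended features
EOF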
Perhaps most importantly, it helps bridge the gap between vibe-coding's rapid development style and the need for security thoroughness by making security considerations accessible at any point in the workflow.
A Practical Example
I recently applied this approach to a project where we were rapidly developing with AI assistance. Running /project:security-check flagged several issues that wouldn't have been immediately obvious during normal development:
A custom session management implementation that, while not vulnerable, represented unnecessary design complexity and potential maintenance issues
Several inline SQL statements that could be better handled through parameterized queries
console.log statements left in production code that output potentially sensitive information
None of these were critical security flaws, but they represent exactly the kind of "security debt" that accumulates in rapid development and becomes increasingly expensive to address later. Catching these early in the development process allowed us to refactor before patterns became established across the codebase.
When to Run Security Checks
I've found that running security checks after "significant" code changes works best—typically after a couple of sessions with Claude Code. This aligns with my approach of chunking work into smaller units where possible, which helps maintain focus and makes security issues more manageable to address.
Rather than waiting until a major feature is complete, these incremental checks help identify problems while the context is still fresh and the scope of potential fixes is limited. This reduces the cognitive load of addressing multiple issues across disparate parts of the codebase.
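If you'd rather not rely on memory for this cadence, a small git hook can nudge you. A low-tech sketch, with an arbitrary threshold of five commits:
#!/bin/sh
# .git/hooks/post-commit -- remind me to run a security pass every few commits
# (make it executable with: chmod +x .git/hooks/post-commit)
COUNT_FILE=.git/commits-since-security-check
COUNT=$(( $(cat "$COUNT_FILE" 2>/dev/null || echo 0) + 1 ))
echo "$COUNT" > "$COUNT_FILE"
if [ "$COUNT" -ge 5 ]; then
  echo "Reminder: consider running /project:security-check in Claude Code."
  echo 0 > "$COUNT_FILE"
fi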
The Challenge of Sustainable Fixes
Identifying issues is only half the battle. Getting Claude to fix them in a sustainable way presents its own set of challenges. Current limitations make this particularly difficult:
Context Understanding: Claude may not fully grasp your architectural patterns and design principles
Holistic Design: Fixing isolated issues can introduce inconsistencies with the broader codebase
Long-term Maintainability: Quick fixes might address immediate issues but create technical debt
Security vs. Performance Tradeoffs: Some security improvements may impact other system properties
These challenges stem from fundamental limitations in current AI systems:
Context Window Constraints: LLMs have finite context windows, making it difficult to both find AND fix all issues in one pass
Task Switching Penalties: Models perform better when focused on a single task rather than context-switching
Cognitive Load: Combining detection and remediation increases complexity for the AI
The Case for Task Separation
Given these constraints, I've found that separating issue detection from remediation leads to better outcomes. Rather than trying to accomplish everything in a single command, consider creating a complementary command specifically for remediation:
/project:bugfix [type]
This command could guide Claude through the remediation process for a specific class of issues, providing (see the sketch after this list):
Design principles to follow when fixing each issue type
Performance considerations to maintain
Guidelines for consistency with existing code
Documentation requirements for changes
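Here is a sketch of such a command file; the wording is illustrative. Claude Code substitutes the $ARGUMENTS placeholder with whatever follows the command name, which is what makes the [type] parameter work:
cat > .claude/commands/bugfix.md <<'EOF'
Fix the previously identified issues of type: $ARGUMENTS

For each fix:
- Follow the project's existing design patterns and principles
- Preserve current performance characteristics where practical
- Keep changes minimal and consistent with the surrounding code
- Note what changed and why, so the fix can be reviewed and documented
EOF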
The key insight here is that AI-driven bug detection is currently more reliable than AI-driven bug fixing. By separating these concerns, you can leverage the strengths of AI while compensating for its limitations.
This separation of concerns has another benefit: it allows you to focus the model on specific classes of issues during remediation, rather than attempting to fix everything at once. For example:
/project:bugfix sql-injection
This targeted approach aligns with how human security experts typically work—methodically addressing related issues together to ensure consistent remediation patterns.
Implementing This Approach
If you'd like to try this approach, start by creating a central directory for your Claude commands. Build a comprehensive security check command tailored to your technology stack, then symlink it into your projects:
# Create a global commands directory
mkdir -p ~/global-claude-commands/.claude/commands
# Then, for any project, create its .claude directory and link the commands in
mkdir -p /path/to/your-project/.claude
ln -s ~/global-claude-commands/.claude/commands /path/to/your-project/.claude/commands
You can even script this initialization step for new projects, removing any friction from adopting these security practices.
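For example, a small init script (the name is arbitrary) makes adoption a one-liner per project:
#!/bin/sh
# init-claude-commands.sh -- run from a new project's root directory
set -e
mkdir -p .claude
# -sfn replaces any existing link instead of nesting a new one inside it
ln -sfn ~/global-claude-commands/.claude/commands .claude/commands
echo "Linked shared Claude commands into $PWD/.claude/commands"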
Consider creating a family of related commands that work together:
/project:security-check - Identifies potential issues
/project:bugfix [type] - Focused remediation for specific issue types
This modular approach allows each command to focus on a specific task, which tends to yield better results from current AI systems. It also makes it easier to update and improve individual commands over time as you discover new patterns or edge cases.
AI Collaboration for Command Development
An interesting meta-aspect of this approach: I created the security-check.md file itself using a multi-model approach. I started with Claude Desktop to generate the initial command structure and content, then after a few iterations, sent that output to ChatGPT's Deep Research to further refine and enhance it.
This multi-model collaboration produced a more comprehensive security check command than either model would have created independently. Claude excelled at generating the overall structure and security coverage, while ChatGPT's research capabilities helped refine the specific checks and edge cases.
This highlights an important principle for working with AI tools: leveraging different models for their respective strengths often leads to superior results. Just as the security checks themselves are modular, the process of developing them can benefit from specialized AI capabilities working in concert.
Pro Tip: Keep Context Fresh for Better Results
One practice that significantly improves the effectiveness of security checks is periodically refreshing the project context. Here's a workflow that's worked well for me:
Maintain a comprehensive CLAUDE.md file that describes your project architecture, key components, and security principles
Periodically ask Claude Code to update this file based on the current state of the codebase
Create an "app context" document that your security-check command can read first
Having Claude read this context before running security checks gives it a much better understanding of your codebase's architecture, design patterns, and security requirements. This contextual awareness helps it identify issues that might otherwise be missed and reduces false positives from misunderstanding your intended patterns.
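In practice, this can be as simple as making context-loading the command's first instruction. A sketch of such an opening, with file names that are whatever your project actually uses:
# Give the security check a context-aware opening (file names illustrative)
cat > .claude/commands/security-check.md <<'EOF'
First, read CLAUDE.md and docs/app-context.md to understand the
project's architecture, conventions, and security boundaries.
Then review the recent changes through that lens.

[... the detailed review lenses from earlier ...]
EOF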
For example, your app context might include (an illustrative excerpt follows this list):
Authentication patterns the project uses consistently
Database access conventions
Logging standards and sensitive data handling rules
Technologies and frameworks in use
Security boundaries and trust transitions
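An excerpt of what such a document might contain (the path and conventions are project-specific):
mkdir -p docs
cat > docs/app-context.md <<'EOF'
## Authentication
All routes go through the shared OIDC middleware; no custom password handling.

## Database access
Queries go through the repository layer and are always parameterized.

## Logging
Structured logs via the shared logger; never log tokens, emails, or other PII.
EOF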
This might seem like extra work, but it pays dividends in the quality of security checks. As a bonus, maintaining this context document also serves as valuable documentation for new team members and helps ensure consistent patterns across the codebase.
Security as an Ambient Capability
This approach represents a shift in how we think about security in the age of AI-assisted development. Rather than treating security as a separate phase or specialized activity, it becomes an available perspective at any moment in the development journey.
The command doesn't demand attention; it's there when you choose to invoke it. This subtle difference changes the relationship between development speed and security consideration.
The success of this approach ultimately depends on finding the right balance—security checks that are thorough enough to catch meaningful issues but lightweight enough to use regularly. My experience suggests that small, frequent checks integrated into the natural development workflow lead to better outcomes than infrequent, comprehensive reviews.
As AI-assisted development continues to evolve, these patterns of integrating security thinking directly into the workflow will likely become increasingly important. The separation of concerns (finding issues vs. fixing them) represents a pragmatic accommodation to current AI limitations, but this pattern may shift as models improve in context length and reasoning abilities.
What approaches have you found effective for integrating security thinking into your AI-assisted development workflows? How are you balancing the rapid pace of vibe-coding with the need for robust security practices?