Automated Guard Rails for Vibe Coding
<p><img decoding="async" src="https://blog.gitguardian.com/content/images/2025/06/guardrailspng.png" alt="Automated Guard Rails for Vibe Coding"></p><p>There are countless warnings and horror stories about "vibe coding"—that flow state where you're cranking out features and everything feels effortless. Sure, it looks productive and the code works, but the end result is often an unmaintainable, insecure, unreliable, and untested mess. It works, for now. Vibe coding might sound like a trendy term, but it's really just developing software without automated checks and quality gates. Traditional engineering disciplines have always relied on safety measures and quality controls, and in my honest opinion, vibe coding should be no different.</p><p>All source code and configurations for this project are available at <a href="https://github.com/reaandrew/acronymcreator?ref=blog.gitguardian.com">https://github.com/reaandrew/acronymcreator</a>.</p><h2 id="real-example-acronym-creator">Real Example: Acronym Creator</h2><p>To demonstrate these concepts, I built a command-line tool that creates acronyms from phrases:</p><pre><code class="language-bash">acronymcreator "Hello World"
# Returns: HW

acronymcreator "The Quick Brown Fox" --include-articles
# Returns: TQBF
</code></pre><p>This project includes comprehensive automated guardrails. Every time I made changes—even during rapid prototyping—the system automatically checked for secrets, formatted the code, ran tests, and verified security. This meant I could code freely without worrying about accidentally introducing problems.</p>
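<p>To try the guardrails yourself, a minimal local setup looks something like this (a sketch: it assumes a working Python environment, and the project's own setup instructions may list additional dev dependencies):</p><pre><code class="language-bash"># Clone the example project and enter it
git clone https://github.com/reaandrew/acronymcreator
cd acronymcreator

# Install the pre-commit framework and register its git hooks
pip install pre-commit
pre-commit install

# Run every configured hook against the entire codebase once
pre-commit run --all-files
</code></pre>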
<p>The image below illustrates the automated guardrail cycle and process:<br> <img decoding="async" src="https://blog.gitguardian.com/content/images/2025/06/check-cycles-1.png" alt="Automated Guard Rails for Vibe Coding" loading="lazy"></p><h2 id="pre-commit-hooks-the-first-line-of-defense">Pre-commit Hooks: The First Line of Defense</h2><p><img decoding="async" src="https://blog.gitguardian.com/content/images/2025/06/pre-commit-checks-1.png" alt="Automated Guard Rails for Vibe Coding" loading="lazy"></p><p>Pre-commit hooks are automated checks that run locally on your machine before code enters the repository. They're particularly crucial when working with AI coding assistants, which have transformed how we write code but also introduce new challenges.</p><p>When AI assistants like Claude Code, GitHub Copilot, or Cursor are in "auto mode," they can rapidly generate and iterate on code. This speed is incredible for productivity, but it can also bypass human review of basic quality standards. Pre-commit hooks address this by providing immediate, consistent feedback that both human developers and AI assistants can learn from.</p><p>Here's where it gets really powerful: when an AI coding assistant encounters a failed pre-commit check, it doesn't just ignore it—it uses that feedback to iterate on and improve the code. The commit attempt is blocked until all checks pass, creating a feedback loop where the AI learns to write code that meets your standards. For example, if a commit fails because of missing test coverage, the AI can immediately add the necessary tests and try again; if the code formatting is wrong, the AI learns the project's style requirements.</p><p>This creates a collaborative relationship in which the AI handles the heavy lifting of code generation while the automated checks ensure quality standards are maintained. The developer gets the speed benefits of AI assistance without sacrificing code quality, security, or maintainability.</p>
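<p>In practice, the loop looks something like the following (the commit message is illustrative, and the exact output depends on which hooks fire):</p><pre><code class="language-bash"># Attempt a commit; every configured hook runs automatically
git add .
git commit -m "feat: add word filtering to acronym output"

# If a hook fails, the commit is blocked. Formatters like Black
# rewrite the offending files in place, so re-stage them and retry:
git add .
git commit -m "feat: add word filtering to acronym output"

# Or run the hooks directly to surface all failures at once
pre-commit run --all-files
</code></pre>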
<h2 id="ci-pipeline-the-second-line-of-defense">CI Pipeline: The Second Line of Defense</h2><p><img decoding="async" src="https://blog.gitguardian.com/content/images/2025/06/post-commit-checks-1.png" alt="Automated Guard Rails for Vibe Coding" loading="lazy"></p><p>While pre-commit hooks catch issues locally, the CI pipeline provides comprehensive validation in a clean, controlled environment. The multi-stage approach ensures that even if something bypasses local checks, it won't reach production.</p><p>The pipeline runs progressive stages: basic linting and testing first, followed by comprehensive secret scanning of the repository history with GitGuardian, then advanced quality analysis with SonarCloud and security scanning with Semgrep. Each stage builds on the previous one, and any failure stops the pipeline immediately. This catches issues that might be missed locally due to environment differences, skipped pre-commit hooks, or complex integration problems that only surface during full builds.</p><p>For AI agents, the CI pipeline becomes even more powerful when they can query job status and receive clear failure messages. This enables the same feedback loop that works with pre-commit hooks—the agent can push code, check the CI results, and iterate on the specific failure details until all checks pass. Clear, descriptive error messages from CI jobs help the agent understand exactly what needs to be fixed.</p>
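<p>With the GitHub CLI, for example, an agent (or a developer) can read pipeline results straight from the terminal. A sketch, assuming <code>gh</code> is installed and authenticated; <code>RUN_ID</code> is a placeholder:</p><pre><code class="language-bash"># List recent workflow runs for a branch
gh run list --branch main --limit 5

# Follow the most recent run until it finishes
gh run watch

# Print only the logs of the steps that failed
gh run view RUN_ID --log-failed
</code></pre>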
<h2 id="automated-guardrails">Automated Guardrails</h2><p>Now let's examine the specific automated checks that form the backbone of a robust development workflow. The Acronym Creator project demonstrates each of these guardrails with real-world configurations that you can copy and adapt for your own projects. The repository can also be used as a repository template, or as a reference for creating templates for other application types or languages.</p><h3 id="repository-configuration-files">Repository Configuration Files</h3><p>The following table describes the key files required for implementing the guardrails, using <strong>pre-commit</strong> as the foundation technology for local hooks:</p><table> <thead> <tr> <th>File</th> <th>Purpose</th> <th>Technology Used</th> </tr> </thead> <tbody> <tr> <td><code>.pre-commit-config.yaml</code></td> <td>Defines all pre-commit hooks including GitGuardian, Black, Flake8, and pytest</td> <td>pre-commit framework</td> </tr> <tr> <td><code>.flake8</code></td> <td>Flake8 linting configuration for code style and error checking</td> <td>Flake8</td> </tr> <tr> <td><code>pytest-precommit.ini</code></td> <td>Pytest configuration for pre-commit test execution</td> <td>pytest</td> </tr> <tr> <td><code>.coveragerc</code></td> <td>Coverage.py configuration with 80% threshold and temp file handling</td> <td>coverage.py</td> </tr> <tr> <td><code>pyproject.toml</code></td> <td>Main project configuration with dependencies and build settings</td> <td>Python packaging</td> </tr> <tr> <td><code>.releaserc.json</code></td> <td>Semantic-release configuration for automated versioning</td> <td>semantic-release</td> </tr> <tr> <td><code>.github/workflows/ci.yml</code></td> <td>Complete CI/CD pipeline with GitGuardian, SonarCloud, and Semgrep</td> <td>GitHub Actions</td> </tr> <tr> <td><code>sonar-project.properties</code></td> <td>SonarCloud analysis configuration for code quality scanning</td> <td>SonarCloud</td> </tr> </tbody> </table><h3 id="secret-detection">Secret Detection</h3><p><strong>Tools Used</strong>: GitGuardian ggshield <a href="https://www.gitguardian.com/ggshield?ref=blog.gitguardian.com">https://www.gitguardian.com/ggshield</a></p><p>GitGuardian automatically scans your code for accidentally committed passwords, API keys, and other sensitive information. This protection works at two levels: locally during commits and comprehensively in the CI pipeline.</p><p><strong>Pre-commit Hook</strong> – Scans staged changes before they enter the repository:</p><pre><code class="language-yaml">- repo: https://github.com/gitguardian/ggshield
  hooks:
    - id: ggshield
</code></pre><p><strong>CI Pipeline Stage</strong> – Scans the complete repository history:</p><pre><code class="language-yaml">- name: GitGuardian scan repository history
  env:
    GITGUARDIAN_API_KEY: ${{ secrets.GITGUARDIAN_API_KEY }}
  run: ggshield secret scan repo .
</code></pre><p>Why both? The pre-commit hook catches secrets in new changes, but the CI stage scans the entire git history. This is critical because secrets can exist in previous commits even if they've been removed from current files. Git preserves the complete history of changes, so a password committed six months ago and deleted the next day is still accessible in the repository history. The CI scan ensures comprehensive coverage and catches secrets that might have been introduced through merges, rebases, or commits made with <code>--no-verify</code>. For detailed setup instructions, see the <a href="https://docs.gitguardian.com/ggshield-docs/integrations/cicd-integrations/github-actions?ref=blog.gitguardian.com">GitGuardian GitHub Actions integration guide</a>.</p><h3 id="code-quality-automation">Code Quality Automation</h3><p><strong>Tools Used</strong>: Black (formatting) and Flake8 (linting)</p><p>Black automatically formats your code consistently, while Flake8 catches common programming errors and enforces PEP 8 compliance. This means you can focus on solving problems rather than worrying about spacing and style conventions.</p><pre><code class="language-yaml">- repo: https://github.com/psf/black
  hooks:
    - id: black

- repo: https://github.com/pycqa/flake8
  hooks:
    - id: flake8
</code></pre><p>For convenience, we simply run the pre-commit tool again in CI, which executes all of these checks once more. This matters for several reasons, most notably catching commits made with the <code>--no-verify</code> flag, which bypasses local pre-commit hooks.</p><h3 id="test-coverage-safety-net">Test Coverage Safety Net</h3><p><strong>Tools Used</strong>: pytest, coverage.py, and the pytest-cov plugin</p><p>The system automatically runs your tests using pytest and calculates coverage with coverage.py through the pytest-cov plugin. Coverage is enforced at multiple levels, with specific configuration files controlling the behavior.</p><p><strong>Pre-commit Configuration</strong> – Uses a dedicated config for clean execution:</p><pre><code class="language-ini"># pytest-precommit.ini
[pytest]
addopts = --cov=src --cov-report=term-missing --cov-fail-under=80
</code></pre><p><strong>Coverage Configuration</strong> – Controls coverage calculation and thresholds:</p><pre><code class="language-ini"># .coveragerc
[run]
branch = true
source = src
data_file = /tmp/.coverage_precommit

[report]
fail_under = 80
exclude_lines =
    pragma: no cover
    def __repr__
    if __name__ == "__main__":
</code></pre><p>The coverage enforcement requires 80% of code lines to be tested, including branch coverage (testing both sides of if/else statements). The pre-commit hook uses a separate data file in <code>/tmp</code> to avoid modifying the working directory, and it excludes common patterns like <code>__repr__</code> methods that don't need testing. If coverage drops below the threshold, the commit is blocked until more tests are added. Like the code quality checks, the CI pipeline runs the pre-commit tool again to re-validate all tests and coverage requirements.</p>
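<p>You can reproduce the coverage gate by hand. A sketch using the configuration above (the <code>-c</code> flag points pytest at the dedicated pre-commit config):</p><pre><code class="language-bash"># Run the suite with the dedicated pre-commit configuration
pytest -c pytest-precommit.ini

# Or pass the same pytest-cov flags explicitly; the run fails
# (non-zero exit code) if total coverage drops below 80%
pytest --cov=src --cov-report=term-missing --cov-fail-under=80
</code></pre>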
<h3 id="code-quality-and-security-analysis">Code Quality and Security Analysis</h3><p><strong>Tools Used</strong>: SonarCloud and Semgrep</p><p>Security scanning tools like SonarCloud and Semgrep examine your code for common vulnerability patterns, code quality issues, and security hotspots, identifying potential problems before they reach production. These checks run only in CI because they are slower; I have deliberately limited the pre-commit checks to those that are relatively fast, so you can fail fast.</p><pre><code class="language-yaml"># SonarCloud integration
- name: SonarCloud Scan
  uses: SonarSource/sonarqube-scan-action@master

# Semgrep security analysis
semgrep:
  container:
    image: semgrep/semgrep
  steps:
    - run: semgrep ci
</code></pre><h2 id="automated-release-management">Automated Release Management</h2><p>The system includes automated release management using semantic-release and semantic versioning. When you write commit messages using conventional formats like "feat:" for new features or "fix:" for bug fixes, the system automatically determines the appropriate version number and creates releases with generated documentation. For more details on conventional commit formats, see <a href="https://www.conventionalcommits.org/?ref=blog.gitguardian.com">conventionalcommits.org</a>.</p><pre><code class="language-json">{
  "branches": ["main"],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/changelog"
  ]
}
</code></pre><p>This eliminates the manual work of managing versions and release notes, while ensuring consistent documentation of changes.</p>
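<p>Day to day, this just means writing conventional commit messages. The messages below are illustrative, and semantic-release's <code>--dry-run</code> flag lets you preview the next release without publishing anything:</p><pre><code class="language-bash"># "fix:" commits trigger a patch release, "feat:" commits a minor one
git commit -m "fix: handle empty phrases without crashing"
git commit -m "feat: add --include-articles flag"

# Preview the version bump and release notes locally
npx semantic-release --dry-run
</code></pre>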
<h2 id="implementation-strategy">Implementation Strategy</h2><p>The important principle is to fix issues immediately when the automated checks find them, rather than accumulating technical debt. This keeps the guardrails effective and prevents the quality standards from degrading over time.</p><p>However, there's flexibility in how you handle CI failures when working with AI tools. Some CI tasks can fail without blocking the AI assistant from continuing development work. In this example, failed checks prevent builds and releases in the short term, but development can continue. While it's preferable to tackle technical debt as soon as it's detected, you may choose to let some accrue while focusing on delivering a feature. Just be aware that the more technical debt accumulates, the more difficult it becomes to pay back.</p><p>In trunk-based development workflows, failed CI checks mean you've broken the main build. It's advisable to return to a stable main branch as soon as possible to avoid blocking other team members and to maintain development velocity.</p><p>For this example, I experimented with a test-first approach using the AI assistant. For the first time, I let the AI assistant create empty test functions first, almost as a way of generating the specification, and then asked it to write the test code and the implementations being tested. This let me use the project as a tool to check each guardrail as I introduced it to the solution.</p><h2 id="benefits-beyond-safety">Benefits Beyond Safety</h2><p>These automated systems don't just prevent problems—they actually enable faster development. With confidence that basic issues will be caught automatically, developers can experiment more freely and iterate more quickly. Code reviews become more focused on architecture and business logic rather than formatting and style. Teams can deploy more frequently because they trust their quality gates to catch regressions.</p><h2 id="getting-started">Getting Started</h2><p>The Acronym Creator project serves as a template that other teams can copy and adapt for their own projects. It includes pre-configured automation for secret detection, code quality enforcement, test coverage, security scanning, and automated releases. New projects can inherit these protections immediately rather than starting from scratch.</p><p><img decoding="async" src="https://blog.gitguardian.com/content/images/2025/06/repo-from-template-1.png" alt="Automated Guard Rails for Vibe Coding" loading="lazy"></p><h2 id="the-bottom-line">The Bottom Line</h2><p>Automated guardrails don't slow down vibe coding—they make it sustainable. They let you maintain that productive flow state while ensuring that the code you're producing meets quality and security standards. The initial setup takes some effort, but the long-term result is faster, safer development with fewer production surprises.</p><p>For teams using premium LLMs or pay-as-you-go AI services, these guardrails also provide significant cost savings. By tackling issues immediately rather than letting technical debt accumulate, you avoid the far higher cost of having AI assistants work through increasingly complex problems. Simple fixes caught early require minimal AI interaction, while accumulated issues can require extensive back-and-forth conversations and multiple iterations to resolve.</p><p>Without guardrails, development time compounds with each feature as the codebase becomes increasingly complex and fragile. What starts as quick feature additions gradually turns into a "whack-a-mole" scenario where fixing one issue creates two more. Each new feature takes longer to implement as developers navigate around existing problems, and development cycles stretch further and further.</p><p>The goal isn't to restrict creativity or slow down development. It's to catch the common mistakes that happen when you're focused on solving hard problems, letting you maintain both speed and quality without having to constantly worry about the details that computers can handle automatically.</p><p>By using this repository template for new projects, you skip the initial setup overhead and start at the crossover point where the guardrails are already paying dividends from day one.</p><p><strong>The chart below is purely my opinion</strong>, reflecting the stories you hear about the results of not using guardrails, but I like it and I think it effectively conveys the benefits of using them:</p><p><img decoding="async" src="https://blog.gitguardian.com/content/images/2025/06/development-time-comparison-1.png" alt="Automated Guard Rails for Vibe Coding" loading="lazy"></p><p class="syndicated-attribution">*** This is a Security Bloggers Network syndicated blog from <a href="https://blog.gitguardian.com/">GitGuardian Blog - Take Control of Your Secrets Security</a> authored by <a href="https://securityboulevard.com/author/0/" title="Read other posts by Andy Rea">Andy Rea</a>. Read the original post at: <a href="https://blog.gitguardian.com/automated-guard-rails-for-vibe-coding/">https://blog.gitguardian.com/automated-guard-rails-for-vibe-coding/</a></p>