Frequently Asked Questions

Everything you need to know about ElevatedIQ.ai security scanning

Getting Started

How do I get started with ElevatedIQ?

Clone the repository from GitHub, configure your GCP project ID and target URL in config/.env, then run your first scan using .\scripts\scan-project.ps1 (PowerShell) or ./scripts/scan-project.sh (Bash).
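
For example, a first scan might look like this (the repository URL and the variable names in config/.env are placeholders; check the project README for the exact values):

    # Clone the scanner (URL is a placeholder for the actual GitHub repository)
    git clone https://github.com/your-org/elevatediq-scanner.git
    cd elevatediq-scanner

    # Point the scanner at your GCP project and scan target (variable names assumed)
    echo 'GCP_PROJECT_ID=my-gcp-project' > config/.env
    echo 'TARGET_URL=https://staging.example.com' >> config/.env

    # Kick off the first scan
    ./scripts/scan-project.sh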

What are the prerequisites?

You need a Google Cloud Platform account with billing enabled, the gcloud CLI installed and authenticated, and permissions to create Cloud Build triggers and access Cloud Storage. No local installation of security tools is required; everything runs serverless on GCP.
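
A minimal setup check, assuming the standard gcloud workflow (the project ID is a placeholder):

    # Authenticate and point gcloud at the project that will run the scans
    gcloud auth login
    gcloud config set project my-gcp-project

    # Make sure the Cloud Build and Cloud Storage APIs are enabled
    gcloud services enable cloudbuild.googleapis.com storage.googleapis.com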

Can I test it before committing?

Yes! Run a scan on a small test repository first. The tool generates detailed reports showing exactly what it found, so you can evaluate the quality and relevance before integrating into your CI/CD pipeline.

Pricing & Costs

How much does ElevatedIQ cost?

ElevatedIQ is open-source and free. You only pay for the GCP resources consumed during scans: Cloud Build execution time (~$0.003/minute), Cloud Storage for reports (~$0.026/GB/month), and minimal data egress. A typical scan costs $0.05-$0.30 depending on repository size.

What's the idle cost?

Zero. The architecture is 100% serverless with scale-to-zero. When you're not scanning, you only pay for stored reports in Cloud Storage (pennies per month). No compute instances, no containers running 24/7.

Can I set a budget limit?

Yes! Configure budget alerts in GCP Billing to get notified when scan costs exceed your threshold. You can also set Cloud Build timeout limits (default 20 minutes) to prevent runaway costs.
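
As an example, a monthly budget with alert thresholds can be created from the CLI (the billing account ID and amount are placeholders; older gcloud releases exposed this command under the beta component):

    # Alert at 50%, 90%, and 100% of a $25/month budget
    gcloud billing budgets create \
      --billing-account=0X0X0X-0X0X0X-0X0X0X \
      --display-name="elevatediq-scan-budget" \
      --budget-amount=25.00USD \
      --threshold-rule=percent=0.5 \
      --threshold-rule=percent=0.9 \
      --threshold-rule=percent=1.0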

Security & Privacy

Is my code sent outside my GCP project?

No. All scanning happens within your own GCP project. Your code never leaves your infrastructure. Cloud Build workers are ephemeral—code is loaded, scanned, and discarded. Only the scan reports are persisted to your Cloud Storage bucket.

What data is stored?

Only scan reports (HTML, JSON, SARIF files) and summary metadata are stored in Cloud Storage. Source code is never persisted. Reports contain vulnerability findings, file paths, and line numbers—no credentials or sensitive business logic.

Can I scan production applications?

SAST and dependency scans are safe for any environment—they analyze code, not runtime behavior. DAST uses OWASP ZAP's baseline (passive) mode, which is safe for production, but we recommend testing on staging first. Full active DAST should only be run against non-production environments.
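
If you want to see exactly what the baseline scan does before pointing it at production, the upstream ZAP image can be run by hand against a staging URL (the URL is a placeholder, and the pipeline's own invocation may differ slightly):

    # Passive ZAP baseline scan; writes an HTML report into the current directory
    docker run --rm -v "$(pwd):/zap/wrk" -t ghcr.io/zaproxy/zaproxy:stable \
      zap-baseline.py -t https://staging.example.com -r zap-baseline-report.html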

How do you handle secrets in code?

Semgrep includes rules to detect hardcoded secrets (API keys, tokens, passwords). Detected secrets are flagged in reports but not displayed in full—only their location and pattern type. Always rotate any exposed credentials immediately.
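
To see this class of findings on its own, Semgrep's public secrets ruleset can be run directly against a checkout (this shows standalone Semgrep usage; the pipeline's rule selection lives in config/semgrep-rules.yaml):

    # Scan the current repository with Semgrep's community secrets rules
    semgrep scan --config p/secrets .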

Technical Details

What languages are supported?

Semgrep supports 30+ languages including JavaScript/TypeScript, Python, Java, Go, PHP, Ruby, C/C++, and more. Trivy scans dependencies for npm, pip, Maven, Go modules, and container images. ZAP DAST works with any web application.
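
To illustrate the dependency side, Trivy can also be run standalone against a checkout (the flags below are standard Trivy options; the pipeline's exact invocation may differ):

    # Scan lockfiles and manifests in the current directory for known-vulnerable dependencies
    # (--scanners requires a reasonably recent Trivy; older releases used --security-checks)
    trivy fs --scanners vuln --severity HIGH,CRITICAL .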

How long do scans take?

Typical scans complete in 5-15 minutes depending on repository size and complexity. SAST (Semgrep) is fastest (~2-5 min), dependency scanning (Trivy) adds 1-3 min, and DAST (ZAP baseline) takes 3-10 min depending on site crawl depth.

Can I customize the rules?

Absolutely. Edit config/semgrep-rules.yaml to add custom Semgrep rules, adjust severity thresholds, or disable specific checks. You can also configure ZAP scan policies and Trivy severity filters in the respective config files.
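
For example, a new rule can be drafted in a standalone file and tried out before being merged into config/semgrep-rules.yaml (the rule below is purely illustrative):

    # Write a small rule file to prototype a custom check
    printf '%s\n' \
      'rules:' \
      '  - id: no-print-debugging' \
      '    pattern: console.log(...)' \
      '    message: Remove debug logging before committing' \
      '    languages: [javascript, typescript]' \
      '    severity: WARNING' \
      > my-rules.yaml

    # Run just this rule against the repository
    semgrep scan --config my-rules.yaml .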

How do I integrate with CI/CD?

Use the provided scripts in GitHub Actions, GitLab CI, Jenkins, or any CI/CD platform. The scripts exit with non-zero status codes when critical vulnerabilities are found, automatically failing builds. See our CI/CD integration guide for examples.
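
In practice the integration is just a matter of letting the script's exit status fail the job; a minimal shell step, which should translate directly to any CI platform, looks like this:

    # Fail the CI job if the scan reports critical findings (non-zero exit status)
    ./scripts/scan-project.sh
    status=$?
    if [ "$status" -ne 0 ]; then
      echo "Security scan failed with exit code $status" >&2
      exit "$status"
    fi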

What output formats are available?

Reports are generated in HTML (human-readable), JSON (machine-parseable), and SARIF (GitHub Security/IDE integration). Summary reports in Markdown format provide quick overviews with prioritized fix recommendations.
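
Because SARIF is a standardized format, the machine-readable output can be post-processed with ordinary tools; for example, counting error-level results with jq (the file name is a placeholder, and this assumes results carry an explicit level field):

    # Count error-level findings in a SARIF report
    jq '[.runs[].results[] | select(.level == "error")] | length' scan-results.sarif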

Troubleshooting

Scan failed with timeout error

Increase the Cloud Build timeout in cloudbuild.yaml (default 20 minutes). For large repositories, consider scanning specific directories or excluding test/vendor folders to reduce scan time.
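
The relevant setting is the top-level timeout field in cloudbuild.yaml; if you submit builds manually with gcloud you can also override it per run (40 minutes below is just an example value):

    # In cloudbuild.yaml, raise the build-level timeout, e.g.:  timeout: "2400s"
    # Or override it for a one-off manual submission:
    gcloud builds submit --config=cloudbuild.yaml --timeout=40m .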

Too many false positives

Adjust Semgrep rule severity in config/semgrep-rules.yaml. Start with high/critical severity only, then gradually add medium/low as you address major issues. Custom rules can be tuned to your codebase patterns.
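
Semgrep can also filter at scan time by its own severity levels (ERROR, WARNING, INFO), which is a quick way to start with only the most serious findings:

    # Report only ERROR-level findings from the project's rule set
    semgrep scan --severity ERROR --config config/semgrep-rules.yaml .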

Can't download reports

Ensure your GCP service account has storage.objectViewer role on the results bucket. Check that the build ID is correct and the scan completed successfully. Use gsutil ls gs://BUCKET_NAME/ to verify reports were created.
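
A quick way to check both, with the bucket and service account names below replaced by your own:

    # Confirm the reports actually exist in the results bucket
    gsutil ls gs://BUCKET_NAME/

    # Grant read access (storage.objectViewer) to the account that downloads reports
    gsutil iam ch \
      serviceAccount:scanner@my-gcp-project.iam.gserviceaccount.com:objectViewer \
      gs://BUCKET_NAME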

DAST scan found nothing

Verify the target URL is accessible from Cloud Build workers (check firewall rules). Ensure the application is running during the scan. Review zap-baseline-report.html for crawl details—the site may require authentication or have anti-bot protections.
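
A quick reachability check from any shell (the URL is a placeholder) will usually tell you whether the target is responding at all before you dig into the ZAP report:

    # Print the HTTP status code the target returns; a 401/403 or a CAPTCHA page
    # suggests authentication or anti-bot protection is blocking the crawl
    curl -sS -o /dev/null -w "%{http_code}\n" https://staging.example.com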

Support & Community

Where can I get help?

Open an issue on the GitHub repository, join our community discussions, or contact us directly for enterprise support options.

Can I contribute?

Yes! We welcome contributions—custom Semgrep rules, new scan integrations, documentation improvements, and bug fixes. See CONTRIBUTING.md in the repository for guidelines.

Do you offer managed services?

We offer consulting for enterprise deployments, custom rule development, and integration support. Contact us to discuss your requirements and get a customized solution.

Still have questions?

We're here to help. Reach out and we'll get back to you within 24 hours.