Automated Security Scanning for Small IT Teams
Most vulnerability scanning advice assumes you have a dedicated security team, a six-figure tooling budget and a CISO who signs off on quarterly pen tests. If you are running IT for a small or mid-sized organisation with one to five people on your team, that advice is useless.
You still need to find vulnerabilities before attackers do. You just need to do it with free tools, limited time and no dedicated security analyst. The good news is that the open source ecosystem has matured to the point where a small team can build an automated scanning pipeline that runs daily, catches real issues and costs nothing beyond the server it runs on.
I have built exactly this kind of pipeline across several organisations. This guide covers the tools I actually use, how to stitch them together, and how to avoid drowning in false positives when you do not have the headcount to triage thousands of findings.
The Problem with Enterprise Scanning Tools
Enterprise vulnerability scanners like Qualys, Rapid7 InsightVM and Tenable Nessus Professional are excellent products. They are also designed for organisations with security operations centres, dedicated vulnerability management teams and annual budgets north of fifty thousand pounds.
When you license one of these for a small team, three things tend to happen:
- The scan runs, produces 4,000 findings, and nobody has time to look at them. A vulnerability scanner that generates noise you cannot act on is worse than no scanner at all. It creates a false sense of security.
- The tool sits idle after the initial excitement wears off. Without a dedicated person to maintain scan schedules, update plugins and chase remediation, the scanner becomes shelfware.
- You spend your entire security budget on detection and have nothing left for remediation. Finding vulnerabilities is only half the job. Fixing them is where the real work happens.
Small teams need a different approach. Fewer findings, higher confidence, automated scheduling and zero licensing cost.
The Scanning Stack I Recommend
After testing dozens of tools over the years, I have settled on a stack of four open source scanners that cover different layers of the infrastructure. None of them cost anything. All of them can be automated with cron jobs or CI pipelines.
1. Nuclei - Network and Web Application Scanning
Nuclei from ProjectDiscovery has become my default scanner for anything HTTP-facing. It uses YAML-based templates to check for specific vulnerabilities, misconfigurations and exposed services. The community template library covers over 8,000 checks and grows daily.
What makes Nuclei exceptional for small teams is its signal-to-noise ratio. Each template targets a specific, known issue. You do not get vague "possible vulnerability" findings - you get "this exact CVE is present on this endpoint" or nothing. That precision means you can actually act on every finding.
A basic scan looks like this:
nuclei -u https://yoursite.com -t cves/ -t misconfigurations/ -severity critical,high -o results.txt
Filter by severity from day one. Critical and high findings only. You can expand to medium later once you have a handle on the serious stuff.
2. Trivy - Container and Infrastructure Scanning
If you run Docker containers (and most organisations do at this point), Trivy from Aqua Security is essential. It scans container images, filesystem paths, Git repositories and Kubernetes clusters for known CVEs in OS packages and application dependencies.
trivy image your-app:latest --severity CRITICAL,HIGH
trivy fs /path/to/project --severity CRITICAL,HIGH
Trivy also scans Infrastructure as Code files (Terraform, CloudFormation, Dockerfiles) for misconfigurations. One tool covers both your running containers and your deployment templates.
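The IaC side is a one-liner. Assuming your Terraform files and Dockerfiles live in a directory such as ./infrastructure (the path here is just an example), a sketch looks like:

```shell
# Scan IaC files (Terraform, CloudFormation, Dockerfiles) for misconfigurations
trivy config ./infrastructure --severity CRITICAL,HIGH
```

Run it in CI against the repository that holds your deployment templates and it catches misconfigurations before they are ever deployed.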
3. OpenVAS - Traditional Network Vulnerability Scanning
OpenVAS (now part of Greenbone Community Edition) is the open source equivalent of Nessus. It runs authenticated and unauthenticated scans against network hosts, checking for missing patches, weak configurations and known vulnerabilities across operating systems, network devices and services.
OpenVAS has a steeper learning curve than Nuclei or Trivy. The initial setup involves deploying the Greenbone Community containers, syncing the vulnerability feed (which takes hours on first run) and configuring scan targets. But once it is running, it provides the deep network-level scanning that web-focused tools miss.
I deploy OpenVAS on a dedicated VM and schedule weekly full scans overnight. The web interface generates PDF reports that you can hand directly to management or auditors.
4. CIS Benchmarks with Lynis - Host Hardening Audits
Vulnerability scanners find known CVEs. They do not tell you whether your Linux servers are actually hardened. Lynis fills that gap by auditing system configurations against CIS benchmarks and security best practices.
lynis audit system --quick
Lynis checks SSH configuration, firewall rules, file permissions, kernel parameters, authentication settings and dozens of other hardening controls. The output is a prioritised list of suggestions with a hardening index score. Run it monthly against your server fleet and track the score over time.
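Tracking that score is easy to script. This is a sketch under a couple of assumptions: Lynis writes its machine-readable report to /var/log/lynis-report.dat by default, and the CSV path is an example for your environment.

```shell
#!/bin/bash
# track-hardening.sh - run Lynis and append the hardening index to a trend CSV
# (report path is the Lynis default; CSV location is an example)
REPORT=/var/log/lynis-report.dat
CSV=/opt/scanning/results/hardening-trend.csv

lynis audit system --quick --quiet

# The report file stores key=value pairs, including hardening_index=NN
SCORE=$(grep '^hardening_index=' "$REPORT" | cut -d= -f2)
echo "$(date +%Y-%m-%d),$(hostname),${SCORE:-unknown}" >> "$CSV"
```

A few months of entries in that CSV gives you the trend line for your monthly management report.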
Building the Automation Pipeline
Individual tools are useful. An automated pipeline that runs them on a schedule and delivers actionable results is transformative. Here is how I structure it.
The Architecture
Everything runs from a single management VM or server. In my current setup, that is a Debian VM on Proxmox with 4GB of RAM and 2 CPU cores - nothing expensive. The pipeline is orchestrated by simple bash scripts triggered by cron.
Management VM
├── Nuclei (daily, web targets)
├── Trivy (daily, container images)
├── OpenVAS (weekly, network hosts)
├── Lynis (monthly, server audit)
└── Report aggregator (sends summary)
Step 1: Define Your Targets
Before you scan anything, build a target inventory. You cannot secure what you do not know about. I use NetBox for this, but a simple spreadsheet works if you are starting out. What matters is having a definitive list of:
- All public-facing domains and subdomains
- All internal IP ranges and subnets
- All container images you deploy
- All servers and network devices
If you have already segmented your network, you will have a natural structure for organising scan targets by zone.
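The inventory translates directly into the flat files the scanners read. A minimal layout might look like this (the paths, hostnames and ranges are examples - substitute your own):

```shell
mkdir -p /opt/scanning/targets

# One URL per line for Nuclei
cat > /opt/scanning/targets/web-targets.txt <<'EOF'
https://www.example.com
https://portal.example.com
EOF

# One CIDR range or host per line for the network scanner
cat > /opt/scanning/targets/network-targets.txt <<'EOF'
192.168.10.0/24
192.168.20.0/24
EOF
```

Keeping targets in flat files means updating the inventory and updating the scan scope are the same action.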
Step 2: Create the Scan Scripts
I keep each scanner in its own script so they can run independently or together. Here is a simplified version of the daily web scan:
#!/bin/bash
# daily-web-scan.sh
DATE=$(date +%Y-%m-%d)
TARGETS="/opt/scanning/targets/web-targets.txt"
OUTPUT="/opt/scanning/results/${DATE}-nuclei.json"
nuclei -l "$TARGETS" \
-t cves/ -t misconfigurations/ -t exposures/ \
-severity critical,high \
-json-export "$OUTPUT" \
-silent
# Count findings
# Count findings (grep -c prints nothing if the file is missing, so default to 0;
# note that `grep -c ... || echo 0` would emit a second "0" on a zero-match file)
CRITICAL=$(grep -c '"severity":"critical"' "$OUTPUT" 2>/dev/null)
HIGH=$(grep -c '"severity":"high"' "$OUTPUT" 2>/dev/null)
if [ "${CRITICAL:-0}" -gt 0 ] || [ "${HIGH:-0}" -gt 0 ]; then
echo "ALERT: ${CRITICAL} critical, ${HIGH} high findings" | \
mail -s "Security Scan Alert - ${DATE}" [email protected]
fi
The container scan follows the same pattern:
#!/bin/bash
# daily-container-scan.sh
DATE=$(date +%Y-%m-%d)
IMAGES=$(docker image ls --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>')
for IMAGE in $IMAGES; do
# One output file per image - appending JSON documents to a single file
# would produce a file that is not valid JSON
SAFE=$(echo "$IMAGE" | tr '/:' '__')
trivy image "$IMAGE" --severity CRITICAL,HIGH --format json \
--output "/opt/scanning/results/${DATE}-trivy-${SAFE}.json"
done
Step 3: Schedule with Cron
# Daily scans at 02:00
0 2 * * * /opt/scanning/daily-web-scan.sh
0 3 * * * /opt/scanning/daily-container-scan.sh
# Weekly network scan on Sunday at 01:00
0 1 * * 0 /opt/scanning/weekly-network-scan.sh
# Monthly hardening audit on 1st at 04:00
0 4 1 * * /opt/scanning/monthly-lynis-audit.sh
Run scans outside business hours. Network scans in particular can generate noticeable traffic, and you do not want to interfere with production systems during the working day.
Step 4: Aggregate and Report
The biggest mistake small teams make is generating scan reports that nobody reads. My approach is brutal simplicity: one daily email with three numbers.
- Critical findings: drop everything and fix these today
- High findings: plan remediation this week
- Delta from yesterday: are things getting better or worse?
If all three numbers are zero, the email is one line: "No critical or high findings. All clear." That takes two seconds to read and confirms your systems are in good shape.
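A sketch of that aggregator, assuming the result-file naming from the scripts above (paths, recipient address and GNU date, as found on Debian, are all assumptions):

```shell
#!/bin/bash
# report-summary.sh - daily email: critical count, high count, delta vs yesterday
RESULTS=/opt/scanning/results
TODAY=$(date +%Y-%m-%d)
YESTERDAY=$(date -d yesterday +%Y-%m-%d)

# Count occurrences of a severity in a results file; 0 if the file is missing
count() {
  local n
  n=$(grep -c "\"severity\":\"$1\"" "$2" 2>/dev/null)
  echo "${n:-0}"
}

CRIT=$(count critical "$RESULTS/$TODAY-nuclei.json")
HIGH=$(count high "$RESULTS/$TODAY-nuclei.json")
PREV=$(( $(count critical "$RESULTS/$YESTERDAY-nuclei.json") + $(count high "$RESULTS/$YESTERDAY-nuclei.json") ))
DELTA=$(( CRIT + HIGH - PREV ))

if [ "$CRIT" -eq 0 ] && [ "$HIGH" -eq 0 ]; then
  BODY="No critical or high findings. All clear."
else
  BODY="Critical: $CRIT | High: $HIGH | Delta vs yesterday: $DELTA"
fi
echo "$BODY" | mail -s "Daily Security Summary - $TODAY" [email protected]
```

Schedule it after the daily scans finish and the three numbers arrive in your inbox before you sit down with your coffee.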
For management reporting, I generate a monthly summary showing the trend over time. A chart that shows critical findings decreasing week over week is the most powerful evidence you can present to justify your security programme.
Handling the Results Without Drowning
A scanning pipeline that generates findings is only valuable if you can act on them. With a team of one to five people who also handle helpdesk tickets, infrastructure projects and everything else, you need a triage system that is ruthless about prioritisation.
The Three-Bucket Approach
Bucket 1: Fix immediately (critical severity, internet-facing). These are the findings that could lead to a breach tomorrow. Known exploited vulnerabilities on public-facing systems. Exposed admin panels. Default credentials. Drop whatever you are doing and patch.
Bucket 2: Fix this sprint (high severity, or critical on internal systems). These are serious but not imminently exploitable. Missing patches on internal servers, weak TLS configurations, outdated container base images. Schedule them into your normal work cycle.
Bucket 3: Accept or suppress (medium/low, or known exceptions). Some findings are informational. Some are false positives for your environment. Some are on systems you are decommissioning next month. Mark these as accepted with a reason and move on. Do not let them clutter your dashboard.
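Bucket 3 works best when the acceptance lives in a file the scanner reads, so suppressed findings stay out of the daily email. Trivy supports this natively via an ignore file; the CVE ID and paths below are placeholders to show the shape:

```shell
# Record accepted findings with the reason noted alongside
# (CVE ID and paths are placeholders - substitute your own accepted findings)
cat > /opt/scanning/.trivyignore <<'EOF'
# Package not reachable in our deployment; accepted, review quarterly
CVE-2023-12345
EOF

trivy image your-app:latest --severity CRITICAL,HIGH \
  --ignorefile /opt/scanning/.trivyignore
```

Nuclei offers similar filtering (for example -exclude-templates; check nuclei -h for what your version supports), so the same principle applies across the stack: accept once, in writing, with a reason.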
Tracking Remediation
You do not need a GRC platform. A simple tracking method works - I have used everything from a shared spreadsheet to issues in a Git repository. What matters is recording:
- What was found and when
- Who is responsible for fixing it
- The target remediation date
- When it was actually fixed
This creates an audit trail. When someone asks "how do you manage vulnerabilities?" you can show them a documented process with evidence of findings being identified and resolved. That is what auditors and cyber insurers actually want to see.
If you are building out your security monitoring more broadly, a SIEM strategy can help you correlate scan findings with other security events across your environment.
What Free Tools Cannot Do
I am a strong advocate for open source scanning, but I am not going to pretend it covers everything. There are genuine gaps you should be aware of.
Authenticated web application scanning is where commercial tools still have an edge. Nuclei excels at unauthenticated checks, but crawling behind a login page and testing business logic requires tools like Burp Suite Professional or a manual pen test. Budget for an annual pen test from a CREST-certified provider if your organisation handles sensitive data.
Compliance-specific scanning for PCI DSS, ISO 27001 or Cyber Essentials often requires specific tooling or assessor-approved scan providers. OpenVAS can help you prepare, but the official scan needs to come from an approved vendor.
Cloud-native security posture management (CSPM) for AWS, Azure or GCP is not covered by these tools. If you run cloud infrastructure, look at open source options like Prowler (AWS) or ScoutSuite (multi-cloud) to complement your scanning pipeline.
Attack surface management - discovering assets you did not know you had - requires a different approach. Tools like Subfinder and httpx from ProjectDiscovery can help enumerate your external attack surface, but that is a topic for another post.
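As a taste of what that looks like, the ProjectDiscovery tools chain together on the command line (example.com is a placeholder for your own domain):

```shell
# Enumerate subdomains, then probe which ones actually respond over HTTP(S)
subfinder -d example.com -silent | httpx -silent
```

Anything that appears in that output but not in your target inventory is an asset you were not scanning.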
Getting Started This Week
If you do nothing else, do this:
- Install Nuclei on any Linux machine. It is a single binary with no dependencies. Scan your public-facing domains tonight.
- Install Trivy and scan your Docker images. You will likely find critical CVEs in base images you have not updated.
- Set up a cron job to run both scans daily. Even without fancy reporting, the scan results accumulate and give you a baseline.
- Create a simple target list. Every IP address, every domain, every container image. You cannot scan what you have not listed.
You can add OpenVAS and Lynis later once you have the basics running. Start small, automate early, and expand as you build confidence.
If a scan does find something critical and you suspect a compromise, having a ransomware response playbook ready means you are not making decisions under pressure.
The Mindset Shift
The real value of automated scanning is not the tools. It is the shift from reactive to proactive security. Most small IT teams only think about vulnerabilities when something goes wrong - a failed audit, a near-miss incident, a news story about the latest critical CVE.
Running daily scans changes that dynamic. You start each morning knowing the state of your infrastructure. You catch issues before auditors do. You build a track record of proactive security that makes a material difference to your organisation's risk posture.
You do not need a security operations centre to do this. You need a VM, four open source tools and an hour to set up some cron jobs. The hardest part is starting.
Daniel J Glover
IT Leader with experience spanning IT management, compliance, development, automation, AI, and project management. I write about technology, leadership, and building better systems.