April 17, 2026 · Devon Booker

I Ran My SOC 2 Assessment Tool Against My Own AWS Account. It Scored 35/100.

I spent the last few months building kumo-assess, an agent-driven SOC 2 readiness tool for AWS. Last week, before I pitched it to anyone else, I pointed it at my own account. It gave me a 35 out of 100.

This post is the full walkthrough. What it found, why each finding matters, what the fix looks like, and what it says about the state of SOC 2 readiness in the average startup AWS account.

If you run AWS and you think you're anywhere close to audit-ready, this post is for you.


Why I built the tool in the first place

I'm a security analyst. I've watched dozens of Series A and B startups burn six figures getting SOC 2 compliant. The spend usually breaks down into three buckets: the readiness assessment, the remediation work, and the audit itself.

The most interesting part to me was always the first bucket. The readiness assessment. That's where a consultant spends weeks reading AWS console screenshots, running aws iam commands, and assembling a markdown report. Most of that work is pattern matching against the same 30 to 50 checks every time.

It looked like a good job for an agent.

So I built one. The premise: run a read-only scan against any AWS account, collect the control evidence that matters for SOC 2 CC6 (Logical and Physical Access Controls) and CC7 (System Operations), and produce a report that tells you exactly where your gaps are and what to do about them.

Then I ran it against myself. Here is what it found.


How the scan actually works

Before we get to the findings, a quick architecture note. This matters because "AI does the audit" is not a trust-building statement. You want to know what the tool is actually doing in your account.

The scan runs in four stages:

1. Collectors. Pure Go, read-only. Each collector hits one AWS service (IAM, CloudTrail, S3, Security Hub, Config) and pulls the state that matters for compliance. No mutating API calls exist anywhere in the collector code. You can audit this yourself; the repo is public.

2. Deterministic rules engine. Before any AI gets involved, a hand-written rules engine evaluates the collector output against every SOC 2 check I care about. Is root MFA on? Does the password policy require 14 characters? Is CloudTrail multi-region and actively logging? Each check produces a PASS, FAIL, or PARTIAL with evidence attached. Fourteen automated tests cover this layer.

3. Claude agents. Only after the rules engine has rendered a verdict does Claude enter the picture. A sub-agent per control reads the raw evidence and the rules engine's findings, then produces a narrative summary and specific remediation steps. Claude cannot override a PASS or FAIL; it can only translate the deterministic findings into readable language.

4. Synthesis. A lead agent rolls up the control family. An orchestrator produces the executive summary. A PM agent produces a sales-readiness score and a sprint plan for closing the gaps.
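The PASS/FAIL/PARTIAL contract in stage 2 is easiest to see in code. Here is a minimal Go sketch of one deterministic check; the type and field names are illustrative, not the actual kumo-assess API:

```go
package main

import "fmt"

// Verdict is the only output the rules engine produces; Claude never changes it.
type Verdict string

const (
	Pass    Verdict = "PASS"
	Fail    Verdict = "FAIL"
	Partial Verdict = "PARTIAL"
)

// Finding pairs a verdict with the raw evidence that justifies it.
type Finding struct {
	Control  string
	Verdict  Verdict
	Evidence []string
}

// AccountState is a slice of collector output (illustrative fields only).
type AccountState struct {
	TrailMultiRegion bool
	TrailLogging     bool
}

// checkCloudTrail is a hand-written rule: multi-region AND logging is a PASS,
// exactly one of the two is a PARTIAL, neither is a FAIL.
func checkCloudTrail(s AccountState) Finding {
	f := Finding{Control: "CC6.6"}
	switch {
	case s.TrailMultiRegion && s.TrailLogging:
		f.Verdict = Pass
	case s.TrailMultiRegion || s.TrailLogging:
		f.Verdict = Partial
	default:
		f.Verdict = Fail
	}
	f.Evidence = append(f.Evidence,
		fmt.Sprintf("multi_region=%t logging=%t", s.TrailMultiRegion, s.TrailLogging))
	return f
}

func main() {
	f := checkCloudTrail(AccountState{TrailMultiRegion: true, TrailLogging: false})
	fmt.Println(f.Control, f.Verdict, f.Evidence[0])
}
```

The point of the shape: the verdict and the evidence travel together, so the Claude layer in stage 3 has nothing to invent.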

The total scan took about 90 seconds against my account. A human consultant doing the same work would have spent a week.


What it found

My account is a working development environment. It has an IAM admin user, a CloudTrail trail pointed at an S3 bucket, a Terraform state bucket, and not much else. I have not attempted to harden it. This is intentional; it represents roughly the baseline state of a pre-seed or seed stage startup AWS account.

Here are the five findings that mattered most.

Finding 1 — CC6.3 FAILED: AdministratorAccess attached directly to an IAM user

What the tool saw: My IAM user iamadmin has the AWS managed policy AdministratorAccess attached directly. No groups, no scoped role, no break-glass pattern. Just full admin on a long-lived identity.

Why it matters: This is the finding every SOC 2 auditor will flag on day one. The control is CC6.3, which requires "logical access to information assets is restricted to authorized users." Giving one identity full control fails both the least privilege and the role-based access expectations.

What the fix looks like:

# Replace the directly attached AdministratorAccess with an assumable admin role

# Trust policy: who may assume the role. ACCOUNT_ID is a placeholder.
data "aws_iam_policy_document" "admin_trust" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::ACCOUNT_ID:root"]
    }
  }
}

resource "aws_iam_role" "admin" {
  name                 = "admin-role"
  assume_role_policy   = data.aws_iam_policy_document.admin_trust.json
  max_session_duration = 3600
}

resource "aws_iam_role_policy_attachment" "admin" {
  role       = aws_iam_role.admin.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

Better yet, move humans off long-lived IAM users entirely. Use IAM Identity Center (formerly AWS SSO) with federation. This is typically a two-day project for a founder who has never done it before; about 90 minutes for someone who has.

Finding 2 — CC6.1 PARTIAL: No account password policy configured

What the tool saw: A NoSuchEntity error from aws iam get-account-password-policy (an HTTP 404 under the hood), which in AWS-speak means no password policy exists at all.

Why it matters: Every enterprise security questionnaire on earth asks about password policy. Minimum length, complexity, rotation, reuse. If you can't produce a policy, you cannot answer those questions honestly, which means you cannot sell to regulated buyers.

What the fix looks like:

aws iam update-account-password-policy \
  --minimum-password-length 14 \
  --require-symbols \
  --require-numbers \
  --require-uppercase-characters \
  --require-lowercase-characters \
  --max-password-age 90 \
  --password-reuse-prevention 24

Ten seconds of work. Permanently resolves one of the most commonly asked-about controls in every SOC 2 assessment. Nobody does it until an auditor forces them.
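For what it's worth, the rules-engine side of this check is just threshold comparisons. A sketch in Go, with struct fields mirroring the CLI flags above (illustrative names, not an AWS SDK type):

```go
package main

import "fmt"

// PasswordPolicy mirrors the account password policy fields the CLI sets above.
// Field names are illustrative, not an AWS SDK type.
type PasswordPolicy struct {
	Exists           bool
	MinimumLength    int
	RequireSymbols   bool
	RequireNumbers   bool
	RequireUppercase bool
	RequireLowercase bool
	MaxAgeDays       int
	ReusePrevention  int
}

// evaluate returns PASS only when every threshold is met; a missing policy
// (the NoSuchEntity case) is an outright FAIL, anything weaker is PARTIAL.
func evaluate(p PasswordPolicy) string {
	if !p.Exists {
		return "FAIL"
	}
	ok := p.MinimumLength >= 14 &&
		p.RequireSymbols && p.RequireNumbers &&
		p.RequireUppercase && p.RequireLowercase &&
		p.MaxAgeDays <= 90 && p.ReusePrevention >= 24
	if ok {
		return "PASS"
	}
	return "PARTIAL"
}

func main() {
	fmt.Println(evaluate(PasswordPolicy{}))                               // no policy at all
	fmt.Println(evaluate(PasswordPolicy{Exists: true, MinimumLength: 8})) // weak policy
}
```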

Finding 3 — CC6.6 PARTIAL: CloudTrail without log file validation or KMS encryption

What the tool saw: A multi-region trail is actively logging. Good. But log file validation is off, and the logs are not KMS encrypted at rest.

Why it matters: CloudTrail is your audit trail. It is the single most important forensic tool in your AWS account. If an attacker (or a disgruntled admin) tampers with the logs, you need to know. Log file validation uses SHA-256 hash chains to detect tampering. Without it, your audit trail is only as trustworthy as your S3 bucket access controls, which is not very.
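The chain idea is simple enough to sketch: each digest folds in the previous one, so editing any older entry changes every digest after it. This is a conceptual illustration in Go, not CloudTrail's actual digest file format (which additionally signs each digest):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// link hashes a log entry together with the previous digest, forming a chain.
func link(prevDigest, entry string) string {
	h := sha256.Sum256([]byte(prevDigest + entry))
	return hex.EncodeToString(h[:])
}

func main() {
	logs := []string{"DeleteTrail", "StopLogging", "PutBucketPolicy"}

	// Build the chain over the original entries.
	d := ""
	for _, e := range logs {
		d = link(d, e)
	}
	original := d

	// Tamper with the first entry and rebuild: the final digest no longer matches.
	logs[0] = "DescribeTrails"
	d = ""
	for _, e := range logs {
		d = link(d, e)
	}
	fmt.Println("digests match:", d == original) // prints "digests match: false"
}
```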

KMS encryption matters for a different reason: an identity with raw S3 read access but no kms:Decrypt permission cannot read the logs. That split between storage access and decryption access is the whole point of defense in depth.

What the fix looks like:

# Enable log file validation
aws cloudtrail update-trail \
  --name primary-trail \
  --enable-log-file-validation

# KMS encryption requires a CMK; create one with a trail-appropriate policy
aws cloudtrail update-trail \
  --name primary-trail \
  --kms-key-id arn:aws:kms:us-east-1:ACCOUNT:key/KEY_ID

Maybe 45 minutes of work if you have the KMS key policy already written. Two hours if not.

Finding 4 — CC6.7 PARTIAL: Unencrypted Terraform state bucket

What the tool saw: My Terraform state bucket (in a different region than the scan's home region, which is actually how I found a region bug in my own collector) has no server-side encryption and no Block Public Access configured.

Why it matters: Your Terraform state file is often the single highest-value target in your AWS account. It can contain database passwords, API keys, private keys, and any other secret ever passed to a resource argument, plus a complete map of your infrastructure.

Leaving this bucket unencrypted with no public access block is the AWS equivalent of leaving a filing cabinet of passwords in the parking lot.

What the fix looks like:

resource "aws_s3_bucket_server_side_encryption_configuration" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.tf_state.arn
    }
  }
}

resource "aws_s3_bucket_public_access_block" "tf_state" {
  bucket                  = aws_s3_bucket.tf_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

Thirty minutes if you have your Terraform setup dialed in. Two hours if you're also setting up the KMS key and bucket lifecycle policies at the same time.

Finding 5 — CC6.8 FAILED: AWS Config and Security Hub entirely disabled

What the tool saw: The AWS Config recorder is not running. No delivery channels. Security Hub returned an InvalidAccessException, meaning it was never enabled. Zero continuous monitoring, zero drift detection, zero security standards evaluation.

Why it matters: Preventive controls (IAM policies, password policy, MFA) only get you so far. Detective controls (Config, Security Hub, GuardDuty) are what catch the things preventive controls missed. If your preventive controls fail and your detective controls were never enabled, you will not know anything is wrong until a customer, auditor, or attacker tells you.

Beyond that, "Security Hub not enabled" is a table-stakes expectation on every SOC 2 vendor questionnaire I have ever seen. Buyers will fail you on this one check before they even get to the rest.
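In rules-engine terms, this finding reduces to a two-input check, with the never-enabled exception treated as "disabled." A hedged Go sketch of how a tool might map the raw service state to a verdict (names illustrative, not the actual kumo-assess code):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotEnabled stands in for the InvalidAccessException Security Hub returns
// when it was never enabled in the account.
var errNotEnabled = errors.New("InvalidAccessException")

// detectiveControls maps raw service state to a CC6.8 verdict: both services
// running is PASS, exactly one is PARTIAL, neither is FAIL.
func detectiveControls(configRecording bool, hubErr error) string {
	hubEnabled := hubErr == nil
	switch {
	case configRecording && hubEnabled:
		return "PASS"
	case configRecording || hubEnabled:
		return "PARTIAL"
	default:
		return "FAIL"
	}
}

func main() {
	// My account: recorder stopped, Security Hub never enabled.
	fmt.Println(detectiveControls(false, errNotEnabled)) // prints "FAIL"
}
```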

What the fix looks like: Enable Config with a delivery channel, subscribe to Security Hub, enable the CIS AWS Foundations Benchmark and AWS Foundational Security Best Practices standards, and optionally enable GuardDuty for threat detection.

There's a well-known Terraform module for this (cloudposse/aws-config + cloudposse/security-hub or similar) that makes it a one-day project for a founder, a few hours for an engineer with AWS experience.


What it would actually cost to close every gap

For the record: closing every gap the scan flagged would take me (the person who built the tool) about one full day of focused work. For a founder without AWS experience, probably two to three days spread across a week. For a compliance consultant billing $250 per hour, probably $3,000 to $6,000 of work.

Plus two weeks of sustained operation so the controls are actually auditable (Config needs history, Security Hub needs findings to triage, CloudTrail needs logs written under the new config).

Total calendar time from "nothing hardened" to "CC6 audit-ready": 3 to 4 weeks. Not 6 to 12 months. Not $30,000. The scanning part is the easy part. The remediation is where the work actually lives.


The bigger lesson

The state of my own AWS account is not unusual. It's close to median for an early-stage startup. Most founders I've talked to would score somewhere between 30 and 50 out of 100 on a CC6 scan if you ran one right now.

The compliance market sold startups the idea that SOC 2 is a huge, expensive, many-month endeavor. For the AWS-technical part, that's mostly wrong. Most of the gaps are small, well-understood, and fixable with an afternoon of Terraform work per finding. What makes it expensive is the long-tail interpretation problem: what does this specific finding mean in the context of your specific environment, and what's the right fix for your specific architecture?

That interpretation is where a human practitioner adds value. Not the scanning. Not the list of findings. The judgment call about what order to fix things in, which controls are genuinely load-bearing for your business, and what the Terraform should actually look like for your stack.

That's the business I'm building.


Technical appendix

The full scrubbed scan report is available here as markdown. The tool is at github.com/kumo-security/kumo-assess.

Architecture: Go collectors and scan engine, React and TypeScript UI, SQLite for persistence, Claude Sonnet 4.6 for sub-agents, Claude Opus 4.7 for lead/orchestrator/PM synthesis. Read-only by construction. Fourteen evaluator tests covering the deterministic rules engine. Contributions welcome.

Thinking about SOC 2 for your AWS environment?

I run scoped readiness engagements for AWS-native startups. Scan, walkthrough, remediation delivery. If that fits what you need, book a 30-minute scoping call and you'll leave with a fixed price and a start date.