Writing Professional Security Reports

General · Foundations · Intermediate · 14 min

Learn to write security reports that drive action. Covers finding structure, impact framing, audience awareness, and the quality standards that separate professional deliverables from forgettable ones.

Theory

Skill Goal

Write a security report where every finding is specific enough to reproduce, clear enough for a non-technical reader to understand the risk, and actionable enough that the remediation team knows exactly what to fix.

Why It Matters

The report is the only artifact most stakeholders ever see from a security engagement. A technically excellent assessment delivered through a weak report wastes everyone's time. Findings that are vague, poorly structured, or disconnected from business impact get deprioritized or ignored entirely.

Strong reports are what build trust with clients, earn repeat work, and establish professional credibility. They are also what interviewers ask about when they want to know whether a candidate can do the full job, not just the technical part.

Key Concepts

  • Specificity is the single most important quality in a security finding — vague findings get ignored, precise findings get fixed
  • Every finding must answer three questions: what is the vulnerability, what is the real-world impact, and what should be done about it
  • Reports serve multiple audiences simultaneously — technical remediators need reproduction steps, executives need business impact, and compliance teams need severity context
  • Severity ratings must be defensible — overscoring erodes trust faster than underscoring, because it signals the tester cannot distinguish critical risk from noise
  • A report is a professional deliverable, not a personal notebook — it should be readable by someone who was not on the engagement

Recommended Workflow

  1. Organize your raw notes and evidence by finding before you start writing. Do not write the report linearly from memory.
  2. Write each finding title as a specific, descriptive sentence. Tell the reader exactly what is wrong and where: "Unauthenticated SQL Injection in /api/users Exposes Customer Database", not "SQL Injection Found".
  3. Write the description to explain the root cause. Explain what the vulnerability actually is and why it exists, not just that you found it.
  4. Include reproduction steps that a different tester could follow. Assume the reader was not on the engagement and does not have your environment or context.
  5. Frame the impact in business terms. Describe what an attacker could actually do with this, and what the realistic consequence to the organization would be.
  6. Assign severity using a defensible framework like CVSS. Document your reasoning, especially for anything rated High or Critical.
  7. Write remediation guidance that is specific and actionable. "Implement parameterized queries in the /api/users endpoint", not "fix the SQL injection".
  8. Review the full report for consistency. Verify that severity ratings align with described impact, all screenshots are labeled, and the executive summary accurately reflects the findings.
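Step 1's organize-before-writing advice can be enforced mechanically. A minimal sketch of a completeness check for a finding, assuming you keep notes as simple key-value records (the field names here are illustrative, not a standard):

```python
# Required sections for a complete finding, mirroring the workflow above.
# Field names are illustrative; adapt them to your own report template.
REQUIRED_FIELDS = (
    "title",
    "severity",
    "description",
    "reproduction_steps",
    "impact",
    "remediation",
)

def finding_gaps(finding: dict) -> list:
    """Return the names of required sections that are missing or empty."""
    return [field for field in REQUIRED_FIELDS if not finding.get(field)]
```

Running this over your notes before drafting shows at a glance which findings still need evidence gathered, rather than discovering the gap mid-report.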

Strong vs Weak Execution

Strong reporting starts with a descriptive title that tells the reader exactly what is vulnerable, where, and why it matters. The description explains root cause. Reproduction steps are complete enough that a different tester could follow them. Impact is framed in business terms. Severity is defensible and consistent. Remediation is specific.

Weak reporting uses generic titles like "XSS Vulnerability" with no location or context. The description restates the title instead of explaining the root cause. Reproduction steps assume the reader has the tester's environment and knowledge. Impact says "an attacker could do bad things" without specifics. Severity is inflated to make findings look more impressive. Remediation says "fix the vulnerability" without explaining how.

The clearest test: could someone who was not on the engagement read this finding and understand exactly what is wrong, why it matters, and what to do about it? If the answer is no, the finding is not finished.

Common Mistakes

  • Writing vague titles that could apply to any instance of the same vulnerability type: "Cross-Site Scripting" instead of "Stored XSS in User Profile Bio Field Executes in Admin Dashboard Context"
  • Overscoring severity to make the report look more impressive — this destroys credibility with experienced readers and makes it harder to prioritize real critical issues
  • Writing reproduction steps that only work in the tester's exact environment — if a reader cannot follow the steps, the finding becomes unverifiable
  • Describing impact in purely technical terms without connecting to business consequences: "the attacker can read the database" means nothing to an executive without "which exposes 50,000 customer records including payment data"

Quality Bar

Every finding in the report should be reproducible by a different tester, understandable by a non-technical reader, and actionable by a remediation team — without any of them needing to ask the original tester for clarification.

What Good Output Looks Like

Finding: Unauthenticated SQL Injection in /api/users Exposes Full Customer Database

Severity: Critical (CVSS 9.8 — AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)
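The vector string in a severity line like this is machine-readable, which makes it easy to sanity-check that the recorded vector matches the metrics you intended. A tiny parser sketch (it only splits the string; it does not compute a score):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3 vector like 'AV:N/AC:L/PR:N/...' into a
    metric -> value map, e.g. {'AV': 'N', 'AC': 'L', ...}."""
    return dict(part.split(":", 1) for part in vector.split("/"))
```

A quick check like this catches transcription errors before a client does, such as a Critical rating paired with a vector that says PR:H.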

Description: The /api/users endpoint does not sanitize the sort query parameter before including it in a SQL query. An unauthenticated attacker can inject arbitrary SQL through this parameter to read, modify, or delete any data in the application database.

Reproduction Steps:

  1. Send a GET request to /api/users?sort=name;SELECT+*+FROM+users--
  2. Observe that the response includes all user records from the database
  3. Confirm injection with a time-based payload: /api/users?sort=name;WAITFOR+DELAY+'0:0:5'--

Impact: An external attacker with no credentials can extract the full customer database, including names, email addresses, hashed passwords, and payment tokens. This affects approximately 50,000 active customer records. The attacker could also modify or delete data, potentially disrupting service availability.

Remediation: Replace string concatenation in the sort parameter handler with parameterized queries. Implement an allowlist of valid sort column names. The vulnerable code is in src/api/controllers/users.js at line 47.
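The remediation above names two controls: an allowlist for the sort column (column and table names cannot be bound as query parameters, so they must be validated instead) and parameterized queries for any values. The real fix belongs in the JavaScript handler the finding cites; this is a minimal Python/sqlite3 sketch of the same pattern, with an assumed schema:

```python
import sqlite3

# Column names cannot be passed as '?' placeholders, so sort columns
# must be validated against an explicit allowlist.
ALLOWED_SORT_COLUMNS = {"name", "email", "created_at"}  # assumed schema

def fetch_users_sorted(conn, sort_column):
    """Return user names ordered by a validated column name."""
    if sort_column not in ALLOWED_SORT_COLUMNS:
        raise ValueError("invalid sort column: %r" % sort_column)
    # sort_column is now a known-safe literal; any user-supplied VALUES
    # (filters, search terms) would be bound with '?' placeholders
    # rather than concatenated into the SQL string.
    return conn.execute("SELECT name FROM users ORDER BY " + sort_column).fetchall()
```

With this structure, the injection payload from the reproduction steps never reaches the SQL string at all; it is rejected at the validation step.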

Practice Prompt

Take a finding you have written before — or invent one from a practice lab — and rewrite it using this lesson's structure. Write the title, severity with CVSS justification, description, reproduction steps, impact, and remediation. Then read it as if you were a developer who was not on the engagement: could you reproduce it, understand the risk, and fix it without asking anyone?

Communication

How to Explain It in an Interview

When asked about report writing, the strongest answer demonstrates that you write for the reader, not for yourself.

A good response: "I structure every finding so a developer can reproduce it, a manager can understand the risk, and a compliance team can verify the severity. I write titles that are specific enough to triage without opening the finding. I score severity using CVSS with documented reasoning so clients can trust the ratings. The test I use is whether someone who was not on the engagement could pick up the report and act on every finding without calling me."


Likely Follow-Up Questions

  • A client pushes back on a Critical severity rating, arguing the vulnerable system is behind a VPN. How do you respond, and would you change the rating?
  • You discover a low-severity finding that affects every application the client runs. How do you communicate this in the report so it gets appropriate attention without overscoring?
  • Your engagement found 40 findings. How do you structure the executive summary so leadership understands the overall risk without reading all 40?