    How It Works

    The 30-second version

    Upload an evidence document. Graphletter reads it against the Secure Controls Framework using AI, then maps the result to every regulatory framework that references those controls. Out comes a per-control pass / partial / fail verdict with reasoning and a gap list.

    Under the hood we build an Evidence Requirement List for each control, extract Evidence Atoms from your document, compute a Mapping from SCF to external frameworks, roll the atoms up into Coverage, and score implementation Maturity on a 0–5 scale.

    1. Upload: a policy, procedure, or evidence file.
    2. Extract: text extracted; graph built when reliable.
    3. Map to SCF: SCF controls scoped to the artifact.
    4. AI assess: objectives evaluated with reasoning.
    5. Graph scoring: weak/moderate/strong support rolled up.
    6. Coverage report: gaps + recommendations per framework.

    Graphletter turns uploaded evidence into structured compliance decisions by mapping content to SCF controls, testing against assessment objectives, and surfacing clear coverage and gaps.

    1,264 SCF controls · 79 frameworks · 25,957 control mappings

    Workflow

    From document upload to compliance insight in six concrete steps.

    1

    Upload an artifact

    What Happens

    You choose the documentation artifact and upload supporting evidence such as a policy, screenshot, or record.

    Why It Matters

    Artifact choice determines which SCF controls are evaluated first.

    Where You See It

    Upload Evidence dialog in Dashboard

    2

    Extract evidence signals

    What Happens

    Text and visual signals are extracted from your file, then normalized into chunked content so evidence can be traced to source locations.

    Why It Matters

    Reliable extraction is the foundation for both objective-level AI assessment and graph-native evidence mapping.

    Where You See It

    Assessment progress and evidence history
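    The chunking in step 2 can be sketched as overlapping slices that keep character offsets, so any downstream evidence claim can point back to its source span. Window and overlap sizes here are illustrative defaults, not Graphletter's actual settings:

```python
# Overlapping chunking with source offsets (illustrative sketch).
# size/overlap values are assumptions, not Graphletter's real parameters.
def chunk_text(text: str, size: int = 800, overlap: int = 200) -> list[dict]:
    """Split text into overlapping windows, recording char offsets per chunk."""
    chunks, start = [], 0
    step = size - overlap  # advance less than the window so slices overlap
    while start < len(text):
        end = min(start + size, len(text))
        chunks.append({"text": text[start:end],
                       "char_start": start,
                       "char_end": end})
        if end == len(text):
            break
        start += step
    return chunks
```

The overlap means a sentence cut at one window boundary is still intact in the next chunk, which keeps atom extraction from missing boundary-straddling claims.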

    3

    Map to SCF controls

    What Happens

    Graphletter creates evidence atoms from chunks and maps those atoms to one or more SCF controls with mapping polarity and coverage strength.

    Why It Matters

    This creates a reusable evidence graph where one artifact can support many controls and framework projections.

    Where You See It

    Dashboard coverage and framework views

    4

    Run AI objective assessment

    What Happens

    GPT-5 evaluates each SCF assessment objective and returns pass, partial, fail, or not applicable with confidence and reasoning.

    Why It Matters

    Objective-by-objective reasoning makes control interpretation auditable instead of subjective.

    Where You See It

    Assessment Results and assessment review output

    5

    Compute graph-native control status

    What Happens

    Coverage status is computed from graph mappings using strongest support rank and contradiction rank to classify each control as compliant, partial, missing, or conflicting.

    Why It Matters

    Deterministic graph logic keeps coverage and gap reporting traceable to specific atoms.

    Where You See It

    Analytics and control cards

    6

    Project coverage and gaps

    What Happens

    SCF control outcomes are projected across mapped frameworks so one set of evidence can inform SOC 2, ISO 27001, NIST, and more.

    Why It Matters

    You can prioritize remediation where it creates the largest cross-framework impact.

    Where You See It

    Compliance Overview and Framework Explorer
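    The projection in step 6 amounts to a join between SCF control statuses and a cross-framework mapping table. A minimal sketch, with illustrative control identifiers and a hypothetical `project_statuses` helper:

```python
# Project SCF control outcomes onto external frameworks via a mapping table.
# Control IDs and the helper name are illustrative, not Graphletter's API.
from collections import defaultdict

def project_statuses(scf_status: dict[str, str],
                     mappings: list[tuple[str, str, str]]) -> dict:
    """mappings: (scf_control, framework, external_control).
    Returns framework -> external control -> list of SCF statuses feeding it."""
    out = defaultdict(lambda: defaultdict(list))
    for scf_id, framework, ext_id in mappings:
        if scf_id in scf_status:
            out[framework][ext_id].append(scf_status[scf_id])
    return out

statuses = {"IAC-01": "compliant", "CRY-01": "partial"}
maps = [("IAC-01", "SOC 2", "CC6.1"),
        ("IAC-01", "ISO 27001", "A.5.15"),
        ("CRY-01", "SOC 2", "CC6.7")]
# One SCF outcome informs several frameworks at once.
projected = project_statuses(statuses, maps)
```

Because one SCF control fans out to many external controls, fixing the weakest SCF control with the most outgoing mappings is what creates the "largest cross-framework impact" mentioned above.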

    How Graph Analysis Works

    Graphletter does not jump directly from a document to a final compliance conclusion. It first builds a traceable evidence graph, then applies consistent scoring rules to determine coverage and gaps.

    AI Objective Review (AI reasoning, GPT-5)

    Tests SCF assessment objectives using structured reasoning over extracted evidence.

    Output: objective-level pass, partial, fail, or not applicable with confidence.

    Graph Coverage Scoring (rules-based graph computation)

    Computes explainable coverage and gap statuses from mapped evidence atoms.

    Output: control-level compliant, partial, missing, or conflicting with traceable evidence links.

    Evidence Relationship Flow

    Consistent Rules-Based Scoring
    Uploaded Document -> Evidence Chunks -> Evidence Atoms -> Control Links -> Coverage & Gaps


    1

    Document Record

    Captures: Source file metadata, content hash, and ingestion metadata per upload.

    Why it matters: Creates a stable root node so every downstream decision can be traced to an upload.

    2

    Chunked Content

    Captures: Overlapping content slices with char offsets and token counts.

    Why it matters: Preserves source location context so evidence claims are not detached from original text.

    3

    Evidence Atoms

    Captures: Atomic evidence claims, supporting text, confidence, and source locator.

    Why it matters: Turns long files into reusable evidence units that can support multiple controls.

    4

    Control Mappings

    Captures: Edges from atom -> SCF control with mapping polarity and coverage strength.

    Why it matters: Captures how strongly each atom supports or contradicts a control.

    5

    Gap Results

    Captures: Computed status, gap type, summary, and supporting atom IDs per control.

    Why it matters: Materializes report-ready, traceable gap outputs for dashboards and exports.
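    The five node types above can be sketched as plain records. Field names are assumptions inferred from the descriptions, not Graphletter's actual schema:

```python
# Illustrative records for the evidence graph's five node types.
# All field names are assumptions based on the documented descriptions.
from dataclasses import dataclass, field

@dataclass
class DocumentRecord:
    doc_id: str
    filename: str
    content_hash: str          # stable root: same bytes -> same hash

@dataclass
class Chunk:
    chunk_id: str
    doc_id: str
    text: str
    char_start: int            # source offsets keep claims traceable
    char_end: int

@dataclass
class EvidenceAtom:
    atom_id: str
    chunk_id: str
    claim: str                 # one atomic assertion from the text
    confidence: float

@dataclass
class ControlMapping:
    atom_id: str
    control_id: str            # e.g. an SCF control identifier
    polarity: str              # "supports" | "contradicts"
    strength: str              # "strong" | "moderate" | "weak" | "none"

@dataclass
class GapResult:
    control_id: str
    status: str                # "compliant" | "partial" | "missing" | "conflicting"
    gap_type: str
    supporting_atom_ids: list[str] = field(default_factory=list)
```

Walking the foreign keys backwards (GapResult → atoms → chunk → document) is what makes every reported gap traceable to a specific upload and source span.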

    Graph Signals

    • Mapping polarity "supports": the atom provides supporting evidence for a control.

    • Mapping polarity "contradicts": the atom indicates contradictory evidence; any contradiction drives a conflicting status.

    • Coverage strength (Strong, Moderate, Weak, None): stronger mapped evidence produces better coverage outcomes when there is no contradiction.

    Coverage Decision Rules

    Control coverage is classified using clear, consistent rules.


    Condition

    Any contradictory mapped evidence exists

    Status: conflicting (conflicting_evidence)

    If any contradiction exists, the control is marked conflicting regardless of support.

    Condition

    No contradiction and support is moderate or strong

    Status: compliant (covered_by_strong_or_moderate_evidence)

    Moderate or strong support with no contradiction is treated as compliant coverage.

    Condition

    No contradiction and support is only weak

    Status: partial (covered_by_weak_evidence)

    Weak support is visible as partial coverage and should be strengthened with better evidence.

    Condition

    No contradiction and no meaningful support is mapped

    Status: missing (no_evidence_mapping)

    No meaningful supporting atom mapping exists for the control yet.
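    The four rules above are simple enough to state directly in code. A minimal sketch mirroring them (the names `EvidenceLink` and `classify_control` are illustrative, not Graphletter's API):

```python
# Rules-based coverage classification, matching the four documented rules:
# contradiction wins; otherwise the strongest supporting link decides.
from dataclasses import dataclass

STRENGTH_RANK = {"none": 0, "weak": 1, "moderate": 2, "strong": 3}

@dataclass
class EvidenceLink:
    polarity: str   # "supports" or "contradicts"
    strength: str   # "none", "weak", "moderate", "strong"

def classify_control(links: list[EvidenceLink]) -> tuple[str, str]:
    """Return (status, gap_type) for one control from its mapped atoms."""
    # Rule 1: any contradiction marks the control conflicting, regardless of support.
    if any(link.polarity == "contradicts" for link in links):
        return "conflicting", "conflicting_evidence"
    # Rules 2-4: use the strongest supporting link (default 0 when none mapped).
    best = max((STRENGTH_RANK[link.strength]
                for link in links if link.polarity == "supports"), default=0)
    if best >= STRENGTH_RANK["moderate"]:
        return "compliant", "covered_by_strong_or_moderate_evidence"
    if best == STRENGTH_RANK["weak"]:
        return "partial", "covered_by_weak_evidence"
    return "missing", "no_evidence_mapping"
```

Because the function is deterministic, re-running it over the same graph always yields the same statuses, which is what keeps coverage reporting traceable.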

    Objective Result States (AI Layer)

    How objective-level AI outcomes translate into action. Graph coverage states (compliant, partial, missing, conflicting) are computed separately in the How Graph Analysis Works section above.

    pass

    Evidence clearly supports the objective or control requirement.

    Next: Keep evidence current and improve documentation quality if confidence is low.

    partial

    Evidence supports part of the requirement but important elements are missing or unclear.

    Next: Address the missing objective elements and upload updated evidence.

    fail

    Current evidence does not demonstrate the requirement is met.

    Next: Prioritize remediation, then upload stronger evidence mapped to the same controls.

    not applicable

    The objective does not apply to the provided evidence or current context.

    Next: Validate applicability assumptions and attach context for audit traceability.
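    Objective outcomes roll up into a control-level status. One plausible rollup rule, shown here purely as an assumption (Graphletter's actual policy may differ): ignore not-applicable objectives, pass only when every applicable objective passes, fail only when all fail, and report partial for any mix:

```python
# One possible rollup of objective-level outcomes into a control-level result.
# The rollup rule itself is an assumption for illustration.
def roll_up(objective_results: list[str]) -> str:
    scored = [r for r in objective_results if r != "not applicable"]
    if not scored:
        return "not applicable"   # nothing applied to this evidence
    if all(r == "pass" for r in scored):
        return "pass"
    if all(r == "fail" for r in scored):
        return "fail"
    return "partial"              # mixed or partial objective outcomes
```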

    Maturity Levels

    SCF uses a Cybersecurity & Privacy Capability Maturity Model (C|P-CMM) with six levels. Graphletter assesses your evidence against these levels for each control.

    0Not Performed

    No evidence of a capability to implement the control. Processes are absent or entirely ad hoc.

    1Performed Informally

    Efforts are ad hoc and inconsistent. Controls may exist but lack formal documentation, ownership, or repeatable processes.

    2Planned & Tracked

    Efforts are requirements-driven and formally governed at a local or regional level, but not consistent across the organization.

    3Well Defined

    Efforts are standardized across the organization and centrally managed to ensure consistency. Policies, procedures, and metrics are documented and enforced.

    4Quantitatively Controlled

    Efforts are metrics-driven with sufficient management insight to predict performance and identify deviations proactively.

    5Continuously Improving

    Processes are optimized through continuous feedback loops, adapting to evolving threats and organizational changes.

    After assessment, each control shows its assessed maturity level, an optional target level with gap analysis, and AI-generated recommendations for reaching the next level.
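    The level names above can be paired with a small helper that phrases the assessed-versus-target gap. The level table follows the C|P-CMM scale; the helper itself is illustrative:

```python
# C|P-CMM level names (from the SCF maturity model) plus an illustrative
# gap-analysis helper; the helper is a sketch, not Graphletter's implementation.
CPM_LEVELS = {
    0: "Not Performed",
    1: "Performed Informally",
    2: "Planned & Tracked",
    3: "Well Defined",
    4: "Quantitatively Controlled",
    5: "Continuously Improving",
}

def maturity_gap(assessed: int, target: int) -> str:
    """Describe the distance between assessed and target maturity levels."""
    gap = target - assessed
    if gap <= 0:
        return f"At or above target ({CPM_LEVELS[assessed]})."
    return (f"{gap} level(s) below target: "
            f"{CPM_LEVELS[assessed]} -> {CPM_LEVELS[target]}")
```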

    Core Terms

    Plain-language definitions with Graphletter context.

    SCF

    Secure Controls Framework — a meta-framework with ~1,200 controls that map to 79+ regulatory standards.

    ERL

    Evidence Requirement List — the evidence expected for a control.

    Evidence Atom

    A single extracted assertion from your document that supports or contradicts a control.

    Mapping

    A link between an SCF control and an external framework's control.

    Coverage

    The rolled-up support level (weak/moderate/strong) for a control given the atoms mapped to it.

    Maturity

    A 0–5 scale (Not Performed → Continuously Improving) scoring how well the control is implemented.

    Related assessment terms

    Other vocabulary that shows up in Graphletter's assessment results.

    SCF Assessment Objective

    Definition: A testable statement used to verify whether a control is actually satisfied.

    In Graphletter: Graphletter evaluates each objective separately and then rolls those results into a control-level status.

    Where you see it: Assessment Results and assessment review dialogs

    Assessment Procedure

    Definition: The expected method for checking whether an objective is met.

    In Graphletter: Used as structured guidance for how evidence should be interpreted during objective evaluation.

    Where you see it: Assessment objective data in API and detailed records

    Expected Results

    Definition: The condition or outcome that should be observable when a control is implemented correctly.

    In Graphletter: Compared against evidence claims to determine objective-level pass, partial, or fail outcomes.

    Where you see it: Assessment objective records and outputs

    Pass / Partial / Fail / Not Applicable

    Definition: Standard assessment outcomes describing whether evidence meets an objective.

    In Graphletter: Objective-level outcomes that roll up into control-level status and dashboard metrics.

    Where you see it: Assessment Results, control cards, reports

    Confidence Score

    Definition: An estimate of how strongly the current evidence supports an assessment result.

    In Graphletter: Used to flag weaker conclusions even when a control appears to pass.

    Where you see it: Assessment output, analytics, report exports

    Data Model

    How Graphletter organizes compliance data under the hood.

    SCF Catalog

    • scf_controls — 1,200+ controls across 33 domains
    • scf_frameworks — 79+ regulatory standards
    • scf_control_mappings — cross-framework mapping table
    • scf_assessment_objectives — testable criteria per control
    • scf_evidence_request_list — required artifact types

    Graph Runtime + Assessments

    • documents — graph document root records linked to uploads
    • document_chunks — chunked content with source offsets
    • evidence_atoms — atomic evidence claims with provenance
    • evidence_control_map — atom-to-control mappings with strength/polarity
    • control_gap_analysis — materialized control gap statuses
    • assessments — AI objective and control assessment outputs
    • Multi-tenant isolation via Row-Level Security

    Sources & Attribution

    SCF concepts are grounded in official SCF materials and linked for reference.

    Secure Controls Framework
    SCF Download and resource hub
    SCF release updates

    Ready to apply this? Start in the Dashboard or explore control mappings in Framework Explorer.
