Institutional Diligence Evidence Map
Authority boundary, identity attribution, evidence lineage, audit reconstruction, export snapshot posture, and protected access control.
This page turns the common first-read questions from ministries, DFIs, climate funds, auditors, and sovereign programme reviewers into a public control-evidence map. It does not expose live records. It shows what is explained publicly, what must be evidenced in a protected walkthrough, which proof artefacts reviewers should request before relying on a deployment, and which claims require formal attestations or contractual commitments.
Signal hierarchy for reviewers
Use the hierarchy below to separate committee-critical controls from deeper probes.
- Committee-critical controls: residency, retention, RLS/access tests, integration boundaries, security controls, SLA/RPO/RTO, support, and contractual commitments.
- Deeper probes: IRI calibration outputs, reviewer consistency signals, bias/divergence reporting, programme comparability, and dashboard configuration.
Use this page as your walkthrough script
This page is the practical testing surface. Pair it with the procurement-safe evidence request so each reviewer probe leads to a specific artefact: audit events, evidence lineage, reviewer roles, MRV attachment, export snapshots, residency proof, integration boundaries and contractual commitments.
What this closes
The public narrative already explains the authority boundary, reviewer journey, governance spine, MRV attachment posture, export readiness, and deployment isolation. This evidence map adds the next layer reviewers usually ask for: identity and access proof, audit and RLS proof, evidence-lineage proof, MRV signal definitions, committee-pack evidence-to-conclusion mapping, exact critical audit events, export snapshot proof, sovereignty proof, interoperability proof, operating-resilience proof, and minimum pass criteria for protected walkthrough testing.
Important boundary: public pages can describe controls and provide redacted examples. Deployment-specific records, access matrices, logs, reviewer names, live MRV artefacts, export snapshots, and security configuration evidence remain protected and purpose-bound.
Public narrative is not the same as diligence-complete control evidence
Institutional reviewers should treat public pages as orientation and control mapping, not as final security, procurement, residency, or certification evidence. Concrete attestations, penetration-test posture, incident-response procedures, exact tenancy design, encryption and key-management details, formal residency guarantees, and full worked committee-pack examples should be reviewed under the appropriate protected or confidential diligence process.
| Missing public proof area | Why it is not fully public | What to request |
|---|---|---|
| SOC 2 / ISO / compliance attestations | Public pages can describe alignment, but certifications require current independent artefacts. | Current attestation, readiness assessment, security questionnaire, or no-claim statement. |
| Pen-test and IR posture | Reports and remediation records are sensitive and deployment/vendor-risk specific. | Pen-test summary, remediation tracker, incident-response plan, tabletop evidence, and notification route. |
| Tenancy, encryption, keys, residency | Exact configuration depends on hosting region, deployment model, contractual commitments, and support-access rules. | Tenancy diagram, isolation test, encryption/key-management statement, BYOK availability, residency addendum, backup/log region posture. |
| Worked export / committee-pack examples | Full examples may contain reviewer actions, evidence objects, decision state, recipient scope, or programme data. | Redacted pack, immutable snapshot envelope, watermark/expiry sample, export log, and reproducibility test. |
| Full control evidence | Live control evidence belongs in protected workspaces or confidential diligence packs, not public pages. | Protected walkthrough with RBAC, SoD, audit replay, RLS/access tests, evidence lineage, MRV-to-decision chain, and admin permission-change log. |
Control domains reviewers can test
Reviewer risk / gap test matrix
This matrix converts the main public-diligence gaps into minimum tests for a protected walkthrough. It is intended for ministries, DFIs, climate funds, auditors, and sovereign programme reviewers who need to move from public narrative to demonstrable control evidence.
| Risk / gap | Why it matters | What to test | Pass criteria (minimum bar) |
|---|---|---|---|
| Boundary claims not enforced in product | Accountability and legal risk if system implies automated authority. | Walkthrough of approvals, overrides, decision capture. | Human approvals required; system records decision-maker and rationale. |
| Insufficient RBAC / SoD | Fraud and governance drift risk. | Permission matrix + demo of restricted actions. | Clear roles; admin actions logged; SoD supported or explicit compensating controls. |
| Audit trail not tamper-evident | Audit defensibility risk. | Audit log design + change history + export logging. | Append-only or tamper-evident controls; full traceability of critical events. |
| Evidence integrity / lineage unclear | Evidence drift undermines defensibility. | Evidence object lifecycle demo; versioning; lineage map. | Evidence provenance and changes visible; lineage preserved; controlled reuse. |
| Export pack not snapshot-based | Committee decisions become non-reproducible. | Export from state T, then change record, show export remains reproducible. | Export versioning and immutable snapshot semantics. |
| Sovereignty / residency controls unproven | Regulatory non-compliance. | Deployment region options; data access policies; export controls. | Documented residency; contractual commitments; demonstrable controls. |
| MRV “signals” ambiguous | Methodology and assurance credibility risk. | Define MRV signal types; provenance; validation workflow. | Method provenance recorded; third-party verification steps traceable. |
Use in review: this table does not certify a deployment. It defines the minimum evidence a protected walkthrough should produce before a reviewer relies on Terra Vita Hub for a named deployment, programme, export pack, or committee decision process.
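The export row above asks reviewers to export at state T, amend a record, and confirm the pack is still reproducible. The sketch below illustrates what "immutable snapshot semantics" means mechanically, using a hypothetical `create_export` helper and a content hash; it is not the product's implementation, only a minimal model of the pass criterion.

```python
import copy
import hashlib
import json

def snapshot_hash(records: dict) -> str:
    """Deterministic hash of record state (canonical JSON, sorted keys)."""
    canonical = json.dumps(records, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def create_export(records: dict) -> dict:
    """Freeze record state at export time; later edits must not affect the pack."""
    return {
        "snapshot": copy.deepcopy(records),  # decoupled from the live store
        "snapshot_hash": snapshot_hash(records),
    }

# State T: export a pack, then amend the live record afterwards.
live = {"milestone-7": {"status": "evidence_complete", "version": 3}}
pack = create_export(live)
live["milestone-7"]["status"] = "reopened"  # post-export change

# The exported pack still reproduces state T, and its hash still verifies.
assert snapshot_hash(pack["snapshot"]) == pack["snapshot_hash"]
assert pack["snapshot"]["milestone-7"]["status"] == "evidence_complete"
```

A walkthrough pass is the equivalent of the final two assertions holding after the post-export change: the committee pack and its version/hash identifier remain stable while the live record moves on.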
Reviewer-assurance evidence: IRI Whitepaper
The Institutional Review Index Whitepaper is now part of the public diligence set. It defines how reviewer performance, reviewer consistency, bias and divergence detection, programme calibration, and oversight dashboard outputs should remain evidence-linked, explainable, role-governed, privacy-aware, and subject to human review.
| Reviewer question | Where answered | Minimum proof request |
|---|---|---|
| Can the reviewer process itself be trusted? | IRI Whitepaper — governance stack and methodology sections. | Reviewer identity binding, evidence-to-decision links, rationale fields, criteria references, thresholds, and audit events. |
| Are IRI signals black-box scores? | IRI Whitepaper — scoring boundary and safeguards. | Explainable calculation version, source records, threshold basis, role-governed visibility, and human review gate. |
| Can bias, divergence, or drift be acted on automatically? | IRI Whitepaper — governance and safeguards. | Human review, context capture, appeal/correction pathway, and audit-ready exception register. |
Tier‑1 reviewer probe register
This section turns the institutional review observations into specific walkthrough tests. It is deliberately phrased as what reviewers should test, not as a substitute for protected control evidence.
| Reviewer probe | Why it matters | What to test in walkthrough | Minimum pass condition |
|---|---|---|---|
| Assurance claims versus implementation reality | Terms such as audit reconstruction, deployment isolation, and export posture must be testable, not only narrative. | Ask which controls are technically enforced, which are policy-controlled, which are deployment-specific, and what failure modes exist. | Control owner, enforcement layer, evidence artefact, limitation, and escalation route are explicit for each material claim. |
| Model risk and algorithmic decisioning | Committees need assurance that AI, TV-CRI, analytics, or MRV signals do not become shadow approvals. | Trigger an AI/analytics/MRV-supported review and show that no approval, rejection, eligibility, release, or ranking becomes binding without human action. | No automated approvals; every material decision has named reviewer identity, rationale, authority basis, timestamp, and override/condition route where relevant. |
| Override governance and two-person integrity | Overrides are necessary in real governance, but uncontrolled overrides create legal, fiduciary, and audit risk. | Replay an override, show approval authority, rationale, bypassed control, escalation route, follow-up condition, and whether high-risk actions require dual control or compensating review. | Override cannot be silent; high-risk overrides are authority-bound, logged, reviewable, and reportable to committee or audit where required. |
| Export posture drift | Committee packs lose evidentiary value if later record changes silently alter what a reviewer relied on. | Create an export at state T, amend a record, then reproduce the exported pack and audit log. | Export has immutable snapshot semantics, version/snapshot ID, included-record list, watermark/expiry, recipient purpose, and reproducible evidence state. |
| Access model, deprovisioning, and cross-scope visibility | RBAC is only credible if roles map to institution, programme, geography, purpose, and removal events. | Demonstrate restricted actions, external reviewer expiry, admin permission change, deprovisioning, and attempted access outside assigned scope. | Access is denied outside scope, changes are logged, external/elevated access can expire, and deprovisioned users lose access without losing historical audit attribution. |
| Data protection and DPIA readiness | Protected workspaces may contain personal data, reviewer notes, operational records, and programme-sensitive material. | Review data categories, lawful basis/DPA treatment, retention/deletion posture, access logs, export controls, data-subject or legal-hold handling, and DPIA artefacts where required. | Processing roles, data categories, retention/deletion rules, access logs, export boundaries, and privacy responsibilities are documented for the deployment. |
| E-signature and formal approval compatibility | Some institutions require formal electronic signature, committee minute, or statutory approval outside the Hub. | Show how Hub decisions, committee packs, and audit trails attach to external e-signature, minutes, or approval systems without replacing them. | Hub record preserves evidence, rationale, reviewer action, and export posture while external signature/statutory approval remains the authority where applicable. |
| Proof-object governance and tamper resistance | Public proof objects must be credible governance samples, not ad-hoc marketing fragments. | Inspect how a proof object is selected, redacted, versioned, linked to a template or source record, and prevented from exposing live data. | Proof object has selection rationale, redaction status, version/date, source boundary, and no live personal, financial, or programme-sensitive data. |
MRV signals, committee-pack mapping, and exact audit events
This section addresses three specific diligence asks that reviewers commonly raise after reading the public governance narrative: what an MRV signal means in the system, how a committee pack ties conclusions to evidence and reviewer action, and which audit events must be generated by critical actions. It remains a public control contract; deployment-specific payloads, reviewer names, record IDs, and live evidence remain protected.
1. Definition of an MRV signal
MRV signal means a governed evidence input or derived indicator that can inform routing, conditions, monitoring posture, funding readiness, committee review, or closeout. It is not an approval of a methodology by Terra Vita Hub. Each signal should carry source, method or methodology version, programme or site context, timestamp, limitations, reviewer visibility, and the decision-use boundary.
Public MRV posture: Terra Vita Hub should be read as a methodology-agnostic governance wrapper unless a deployment-specific module or integration is explicitly configured. It can attach, route, version, and explain MRV artefacts and signals; it does not certify carbon claims, approve methodologies, or replace external verification authority.
Integration posture: satellite layers, GIS/spatial files, field-verification records, third-party verifier reports, sensors, registries, and national MRV systems can be referenced or connected where the deployment permits. Each source must carry provenance, method/version, scope limits, reviewer visibility, and audit treatment.
| MRV signal example | Concrete object in the system | How it may influence governance | Minimum proof to show |
|---|---|---|---|
| Spatial / remote-sensing signal | Site boundary, satellite-derived vegetation or coastal-risk indicator, land-cover change, NDVI-style vegetation trend, or mangrove-zone change record. | May trigger a condition, monitoring prompt, variance note, escalation, or evidence request when the signal conflicts with claimed progress. | Source, date, geometry/site link, method version, processing note, indicator value, limitations, and linked condition or reviewer note. |
| Field verification signal | Field officer observation, geotagged photo set, restoration event log, farm/site survey, community verification note, or ground-truth record. | May support milestone readiness, validate an evidence claim, create an exception, or require follow-up where field evidence is incomplete. | Collector identity or source, timestamp, location, programme context, file/object link, review status, and reviewer action taken. |
| Third-party methodology / verification signal | Verifier report, methodology document, safeguards assessment, lab result, ecological survey, auditor finding, or national MRV reference artefact. | May support committee confidence, constrain claims, create a conditional approval, or preserve the authority boundary for external methodology decisions. | Document version, issuing body, scope, methodology provenance, applicable indicators, reviewer finding, and decision-use limitation. |
2. Committee pack evidence-to-conclusion map
A committee pack should never present a conclusion without the evidence objects, reviewer actions, unresolved conditions, and export snapshot that support that conclusion. The minimum mapping below should be visible in a protected walkthrough.
| Committee-pack conclusion | Evidence objects that must be mapped | Reviewer actions that must be mapped | Pass criteria |
|---|---|---|---|
| Funding readiness / conditional go | Milestones, tranche records, release dependencies, evidence links, conditions, MRV attachments where relevant, and authority-boundary note. | Submit, review, approve, mark eligible, raise condition, block, reopen, or log release event. | Conclusion shows the evidence state at export time, who reviewed it, what remains conditional, and why readiness does not equal disbursement authority. |
| Safeguards / grievance / incident posture | Safeguards register, grievance intake record, incident escalation log, response evidence, closure evidence, and residual-risk note. | Classify, escalate, request evidence, close condition, or record residual risk. | Committee can see open, closed, escalated, and residual items without losing the responsible owner or rationale. |
| MRV-supported performance or monitoring posture | MRV artefacts, spatial boundary, methodology/provenance record, indicator values, threshold rules, monitoring evidence, and variance note. | Attach artefact, verify source, record finding, raise threshold breach, approve interpretation, or escalate contradiction. | MRV influence is visible without implying Terra Vita replaced national, sectoral, verifier, or methodology authority. |
| Final submission / release gate posture | Submission pack index, reviewer access log, export snapshot, data-room release register, watermark/expiry record, redaction decision, and authority boundary. | Approve release, hold release, condition release, revoke access, renew access, or record recipient purpose. | Exported pack is reproducible as a snapshot; later record changes do not silently change what the committee reviewed. |
3. Exact audit events to demonstrate
For a protected walkthrough, reviewers should ask the operator to perform or replay the four actions below and show the generated audit event. The event names below define the public minimum contract for critical-event reconstruction.
| User action | Required audit event | Minimum event fields | Pass criteria |
|---|---|---|---|
| Approve | tvh.approval.recorded | event_id, event_type, actor_user_id, actor_role, organisation_id, programme_or_project_id, target_record_type, target_record_id, previous_state, new_state, evidence_object_ids, condition_ids, rationale, authority_basis, timestamp, session_id, schema_version. | Approval is attributable, evidence-bound, time-stamped, and reconstructable from the state visible to the reviewer at the time of approval. |
| Override | tvh.override.authorised | event_id, actor_user_id, actor_role, override_scope, target_record_id, blocked_or_bypassed_control, condition_or_exception_id, rationale_required, rationale_text, approving_authority, evidence_object_ids, previous_route, authorised_route, timestamp, schema_version. | Override cannot be silent; it records the bypassed route, authority basis, rationale, evidence context, and follow-up condition. |
| Export | tvh.export.snapshot_created | event_id, exporter_user_id, exporter_role, recipient_or_recipient_class, purpose, export_pack_type, export_version, snapshot_hash_or_version_id, included_record_ids, redaction_status, watermark_id, expiry_date, jurisdiction_or_residency_rule, timestamp, schema_version. | Export is a reproducible snapshot with recipient purpose, version, redaction posture, watermark/expiry, and audit trail. |
| Admin permission change | tvh.admin.permission_changed | event_id, admin_user_id, admin_role, target_user_id, previous_role_or_scope, new_role_or_scope, programme_or_workspace_scope, approval_request_id, reason, effective_from, expires_at where applicable, sensitive_permission_flag, timestamp, schema_version. | Permission changes are attributable, scoped, justified, logged, and reviewable; elevated or external access can be time-bound and revoked. |
Public limitations that must be closed with evidence
Institutional reviewers should treat these as priority evidence requests once the public narrative is accepted as credible.
| Limitation | What the public site can say | Protected proof to request |
|---|---|---|
| Security certifications / independent audits | Public materials can state control intent and alignment, but they do not replace SOC 2, ISO, pen-test, or independent security evidence. | Current certificate/report where available, readiness assessment if not yet certified, pen-test summary, remediation status, vulnerability/patch process, and security questionnaire response. |
| SLAs and reliability targets | Public materials can identify reliability domains, but universal uptime commitments should not be implied without a deployment-specific SLA. | Uptime target band, maintenance policy, monitoring coverage, incident response, support response bands, escalation matrix, RPO/RTO, backup/restore and failover evidence. |
| Interoperability with institutional architecture | Public materials can describe integration surfaces and governance boundaries, but deployment fit depends on specific systems, identity providers, data flows, and export formats. | API documentation, integration inventory, data-flow diagrams, SAML/OIDC requirements, service-account scope, ingress/egress formats, connector logs, and audit treatment. |
Operational reliability, security, data lifecycle, integration, and safeguards
Institutional reviewers also need operational proof beyond governance design. The public assurance posture now maps service reliability, incident response, monitoring, disaster recovery, change management, security hardening, data lifecycle/sovereignty precision, APIs, secure ingestion, identity federation, safeguards alignment, and risk-management interfaces to protected diligence evidence.
| Area | What the public site explains | Protected proof to request |
|---|---|---|
| Operational reliability | Availability/SLA posture, monitoring/alerting boundaries, maintenance windows, incident response, redundancy, disaster recovery, support escalation, RPO/RTO, and deployment controls are deployment-specific commitments. | SLA or service schedule, uptime target, monitoring coverage, incident-response plan, maintenance-window policy, RPO/RTO expectations, backup/restore evidence, support severity matrix, escalation route, and named support/account owner. |
| Security controls | Security review extends beyond governance design into encryption, key management, environment isolation, network segregation, pen testing, third-party security review, patching, vulnerability disclosure, and privileged access logging. | Security questionnaire response, encryption/key-management statement, tenancy/environment isolation diagram, privileged access log sample, pen-test/remediation summary where available, patch-management process, and vulnerability disclosure route. |
| Data lifecycle and sovereignty | Data sovereignty includes retention, destruction/withdrawal logic, raw vs derived data, evidence-state treatment, export controls, temporary/rejected/overridden evidence, and multi-tenant isolation. | Retention schedule, deletion/withdrawal workflow, raw/derived taxonomy, evidence-state model, lineage map, export policy, residency addendum, RLS/access tests, storage boundary, and cross-tenant inference test. |
| Integration surfaces | Data can enter and leave the Hub through governed APIs, secure file drops, evidence/document intake, satellite/GIS/MRV lanes, registry connectors, reporting exports, committee templates, machine-readable outputs, and identity federation where configured. | Integration inventory, data-flow map, API documentation, service-account scope, SAML/OIDC requirements, ingress validation, export snapshot rules, export format list, error/retry logs, and audit event treatment. |
| Risk and safeguards | Safeguard evidence and risk interfaces are governed as evidence objects, conditions, escalations, incident/grievance routes, and committee/export outputs. | Safeguard framework map, ESG/ESS/IFC alignment note where applicable, risk-register linkage, grievance/incident escalation route, committee-pack example, and closeout evidence. |
Public evidence to protected proof map
| Reviewer question | Public evidence | Protected proof to request | Primary route |
|---|---|---|---|
| Who can see, review, approve, override, export, or administer records? | Governance Spine & Assurance Annexes — Annex 2; reviewer accountability pages; public role-bound authority narrative. | Deployment role matrix, access approval trail, privileged-access log, external reviewer scope and expiry, and separation-of-duties test. | Assurance Annexes; Reviewer Accountability |
| Can a decision be reconstructed later? | Audit/RLS assurance narrative, audit reconstruction registers, committee-pack and evidence-handover pages. | Sample reconstruction pack showing actor, role, evidence state, condition, escalation, approval, export posture, timestamp, and rationale. | Annex 3; Protected audit register |
| How is evidence kept attached to context? | Evidence lineage, governance flow, and MRV attachment rules explain source, programme context, metadata, and version history. | Redacted evidence-object record, metadata fields, version chain, document-linkage example, and context-reuse restriction test. | Governance Flow; Protected traceability matrix |
| What counts as an MRV signal and how is methodology provenance controlled? | Annex 4 describes MRV artefact classes, methodology attachment, thresholds, versioning, role visibility, and MRV-to-decision influence. | Methodology document, artefact metadata, signal-source record, verifier/source identity, threshold configuration, and MRV-linked condition or approval example. | Annex 4; Institutional Assurance Layer |
| Are exports controlled and attributable? | Export posture pages, data-room release registers, watermarking/expiry standard, and private-storage/signed-URL control note. | Export snapshot envelope, export log, recipient purpose, expiry date, watermarking sample, revocation event, and redaction approval. | Protected export standard; Controlled sharing |
| Can deployment data leave the jurisdiction or be inferred by another deployment? | Data sovereignty posture, Annex 1, Annex 5, deployment models, and public deployment-isolation statement. | Deployment configuration record, hosting/residency evidence, backup posture, storage bucket policy, integration scope, and cross-deployment isolation test. | Data posture; Deployment models |
| What does procurement or vendor-risk review need to see? | Compliance, DPA, SOC2 control summary, ISO 27001 alignment, enterprise SLA, risk disclosure, and institutional evaluator pack. | Security questionnaire response, incident-response process, business-continuity/disaster-recovery evidence, support model, uptime target, and deployment-specific security pack. | Compliance; Enterprise SLA |
| How do external systems connect without weakening governance? | Interoperability and deployment architecture sections explain purpose-bound integrations, API-level isolation, national-system interoperability, and integration logs. | Integration approval record, API/service-account scope, data-sharing purpose, retry/error logs, MRV partner artefact log, and export-control treatment. | Annex 5; Governance Architecture |
| How are operational guarantees, integrations, and safeguards tested? | Operational Assurance page, Commercial Scope, Governance Architecture, Enterprise SLA, safeguard/grievance registers, and assurance annexes. | SLA target, monitoring/alerting boundary, incident-response plan, maintenance window, RPO/RTO, change-management register, encryption/key-management statement, privileged access log, data-retention schedule, deletion/withdrawal workflow, integration inventory, SAML/OIDC requirements, API/service-account scope, export-format list, safeguard framework map, incident/grievance route, and risk-register linkage. | Operational Assurance; Commercial Scope |
Protected walkthrough evidence checklist
When a formal reviewer moves beyond public review, the protected walkthrough should demonstrate, against a named deployment, programme, or review purpose, the artefacts mapped in the tables above: role matrices, audit events, evidence lineage, MRV signal records, export snapshots, residency evidence, integration scopes, and operational commitments.
Assurance position
This page should be read as a diligence bridge, not as a substitute for a protected review. Terra Vita Hub’s institutional claim becomes reviewable when the public governance spine is tested against protected proof: identity, evidence, routing, MRV, audit, export, sovereignty, and deployment isolation must remain connected throughout the decision chain.