Research Foundation

AIVO Evidentia is built on independent, audit-grade AI governance research.

Every assessment framework, drift classification, and evidence standard we apply was developed and validated through AIVO Standard — a non-commercial research programme with peer-reviewed publications, open-source methodologies, and active regulatory submissions to the UK Government, SEC, and EU AI Act enforcement bodies.

Origin

From independent research to governance-grade intelligence

AIVO Standard was established to investigate a systematic governance gap: AI systems are making consequential representations about regulated institutions — to investors, analysts, patients, and consumers — with no oversight, no audit trail, and no accountability.

01
Research Programme
AIVO Standard — independent AI governance research
Non-commercial research programme. Developed the testing methodologies, conducted primary research across 50+ enterprises in regulated and consumer sectors, published open-source frameworks, and submitted findings to the UK Government (acknowledged), SEC, and EU regulatory bodies.
02
Governance Application
AIVO Evidentia — governance intelligence for regulated industries
The governance intelligence arm applying AIVO Standard's validated methodologies to banking, pharmaceuticals, and financial services. Audit-grade evidence, regulatory-aligned reporting, and board-level documentation of AI-mediated risk exposure.
Research Scale

The evidence base behind every assessment

3,780+  Pages of primary research evidence
1,280  Prompt–response pairs analysed
320  Four-turn drift conversations
50+  Enterprises researched

Every drift pattern, severity classification, and remediation framework deployed by AIVO Evidentia has been validated against real-world, multi-model empirical data. This is not theoretical risk modelling — it is evidence-based governance intelligence built on primary research.

Methodologies

Proprietary frameworks. Open-source verification.

All methodologies are published openly and available for independent audit. Governance credibility requires transparency — we build to that standard.

Testing
Four-Turn Drift Sequence
Standardised multi-turn stress test mirroring real investor, analyst, consumer, and patient decision pathways across AI systems.
Measurement
TCSA™
Temporal Claim Stability Analysis. Measures how AI-generated claims about an institution change over time across repeated testing cycles.
Detection
SCASA™
Structural Claim Absence & Suppression Analysis. Identifies systematic omissions and misrepresentations in AI-generated institutional narratives.
Scoring
PSOS™
Prompt Space Occupancy Score. Quantifies an institution's visibility and accuracy across AI recommendation and disclosure spaces.
Risk
ERI™
External Risk Indicator. Board-level risk scoring translating AI drift severity into regulatory, reputational, and commercial exposure metrics.
Governance
CAL™
Correction & Assurance Ledger. Immutable, audit-grade documentation system providing evidential chain-of-custody for AI-generated claims and corrections.
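The frameworks above are proprietary, but the underlying mechanics of multi-turn drift testing and temporal stability scoring can be illustrated in outline. The sketch below is a hypothetical harness, not AIVO's implementation: all names (`run_four_turn_sequence`, `stability_score`, the `QueryFn` stand-in for a model API) are invented for illustration, and the stability metric is a deliberately naive stand-in for TCSA™.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical stand-in for a real model API call: takes the conversation
# so far (alternating prompts and answers) and returns the next answer.
QueryFn = Callable[[List[str]], str]

@dataclass
class DriftRun:
    """One pass of a fixed multi-turn prompt sequence against a model."""
    prompts: List[str]
    responses: List[str] = field(default_factory=list)

def run_four_turn_sequence(query: QueryFn, prompts: List[str]) -> DriftRun:
    """Play a fixed four-turn prompt sequence, feeding each prior turn
    back as context so later answers can drift from earlier ones."""
    run = DriftRun(prompts=prompts)
    history: List[str] = []
    for prompt in prompts:
        history.append(prompt)
        answer = query(history)
        run.responses.append(answer)
        history.append(answer)
    return run

def stability_score(runs: List[DriftRun]) -> float:
    """Naive temporal-stability metric: the fraction of repeated runs
    whose final response matches the first run's final response."""
    if not runs:
        return 0.0
    baseline = runs[0].responses[-1]
    stable = sum(1 for r in runs if r.responses[-1] == baseline)
    return stable / len(runs)
```

In practice a production metric would compare claims semantically rather than by exact string match, but the shape is the same: repeat a standardised conversation over time, then score how often the institution-relevant claim survives unchanged.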
Regulatory Alignment

Research mapped to compliance frameworks

AIVO Standard research has been submitted to regulatory bodies and mapped against the governance obligations of regulated institutions.

Active regulatory engagement

Findings have been formally submitted to the UK Government (acknowledged), the U.S. Securities and Exchange Commission, and EU AI Act enforcement bodies. The research demonstrates how AI systems generate representations about regulated institutions that may conflict with official disclosures, mandatory filings, and compliance obligations.

EU AI Act
AI system transparency obligations
SEC / 10-K / 10-Q
Disclosure integrity & investor protection
FCA / PRA
Consumer duty & financial promotion
EMA / MHRA
Pharmaceutical product safety
Dodd-Frank / BSA
Financial services compliance
FDCA / FDA
Drug labelling & promotion
Open Source & Publications

Transparent. Verifiable. Audit-ready.

All research methodologies, frameworks, and primary evidence are published openly. Governance credibility cannot be built on proprietary claims alone — our work is available for independent verification by boards, auditors, and regulators.

AIVO Journal
Research publications, methodology papers, and governance analysis on AI-mediated risk in regulated industries.
aivojournal.org →
GitHub
Open-source methodology framework, testing protocols, and governance documentation under the MIT License.
github.com/pjsheals/AIVO-Standard →
Zenodo (DOI)
Academic archival with DOI citations for institutional referencing, audit documentation, and regulatory submissions.
DOI: 10.5281/zenodo.16410942 →
SSRN
Pre-print methodology papers available for academic and institutional review.
Available on SSRN →
ORCID & Wikidata
Verified researcher identities (ORCID: 0009-0006-2407-4612) and canonical entity registration (Wikidata: Q135451157).
Verified research identity →
Recognition & Coverage

Where the research has reached

Media Coverage
Fortune
AdAge
Business Insider
Sustained coverage over 6+ months
Regulatory Submissions
UK Government — submitted and formally acknowledged
U.S. Securities and Exchange Commission
EU AI Act enforcement bodies
Ongoing engagement with regulatory frameworks
Regulated Sectors Covered
Banking & Financial Services · Pharmaceuticals · Investor Disclosures (10-K / 10-Q) · Insurance · Healthcare · Consumer Safety

AIVO Evidentia applies this research to your regulatory and governance obligations.

When we assess your institution, we apply two years of validated, peer-reviewed methodology (the same frameworks submitted to the UK Government, SEC, and EU regulators) to your specific disclosures, products, and compliance requirements. The research is independent. The evidence is audit-grade. The intelligence is yours.