AI Explainability
Last updated: May 11, 2026
Purpose
This document summarizes how Vala uses AI, what information is visible to users, and which controls and review practices help make AI-assisted outputs understandable and auditable. It is intended for customer security, procurement, and governance review. It does not replace Vala’s product documentation, provider terms, or customer-specific data processing agreements.
Scope Of AI Use
Vala uses GenAI primarily as an internal staff-facing assistant for VA claims work. The main user-facing GenAI surfaces are vChat and the Reports Tool, which help staff ask questions about a veteran’s record, retrieve supporting evidence, draft claim-related work product, and export reviewed drafts.
AI is also used in supporting document workflows, including document categorization, summarization, claims evaluation, conditions extraction, ratings extraction, biographical extraction, embeddings for retrieval, and selected form/transcription workflows. These workflows assist staff by organizing and extracting information from uploaded records; they do not autonomously submit claims or make final benefits determinations.
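As an illustration only, the fan-out of an upload to these assistive workflows can be sketched as follows. All names here are hypothetical and do not reflect Vala’s actual code or task names; the point is that each workflow is assistive and none submits a claim.

```python
# Hypothetical sketch of how an uploaded document might fan out to the
# assistive workflows named above. Workflow and function names are
# illustrative, not Vala's actual API.

WORKFLOWS = [
    "categorization",
    "summarization",
    "conditions_extraction",
    "ratings_extraction",
    "biographical_extraction",
    "embedding",
]

def process_upload(document_id: str, ai_enabled: bool) -> list[str]:
    """Queue assistive workflow tasks for a new upload.

    None of these tasks autonomously submit claims or make final
    benefits determinations; they only organize and extract data.
    """
    if not ai_enabled:
        return []  # uploads skip AI processing when AI is disabled
    return [f"{workflow}:{document_id}" for workflow in WORKFLOWS]
```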
Model Providers
Vala primarily uses Google Gemini models for chat, report drafting, document processing, extraction, categorization, summarization, and embeddings. Some lighter services can optionally use Vertex AI when configured. Vala also uses OpenAI-backed workflows for certain form filling and transcription-related processing as described in the system architecture.
Provider-side retention, training, abuse monitoring, and logging are governed by the applicable Google Cloud, Gemini, Vertex AI, OpenAI, and customer contract terms for the deployed environment. This document describes Vala’s application-level controls and transparency mechanisms.
Human Oversight
Vala AI outputs are assistive. Staff are expected to review generated answers, extracted facts, citations, and drafts before relying on them. vChat displays a persistent beta disclaimer stating that outputs should be reviewed by an accredited representative before use.
For drafting workflows, staff decide what to ask, which outputs to export, whether to revise the draft, and whether the final work product is appropriate for the client’s matter. Vala does not treat AI output as a substitute for legal, medical, or accredited representative judgment.
Explainability And Transparency Mechanisms
Vala’s chat and report generation workflows are designed around document-grounded retrieval rather than unconstrained generation. When a question requires client evidence, the system retrieves relevant document or app-context chunks, ranks them, and uses those retrieved sources to generate an answer.
User-facing transparency mechanisms include:
Visible cited evidence and source documents in chat answers when document evidence is used.
Numbered source markers and a matching Sources section in exported PDF, DOCX, and Markdown report artifacts.
CFR source information, including section, citation, and URL, when an answer uses eCFR lookup data.
Processing status events in the streaming chat UI, such as question analysis, document search, chunk validation, answer generation, and completion.
Form-filling transparency that reports what data was filled, what is missing, and why values were not available.
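To make the retrieval-grounded flow and its transparency signals concrete, the following is a minimal sketch, under assumed function names (retrieve, generate, on_status stand in for the real services), of how an answer might be produced with streaming status events and numbered source markers. It is illustrative only, not Vala’s implementation.

```python
# Minimal sketch of a document-grounded answer flow that emits the
# streaming status events listed above and returns numbered sources.
# All callables are stand-ins for the real retrieval and model services.
from typing import Callable

STAGES = ("question_analysis", "document_search", "chunk_validation",
          "answer_generation", "complete")

def grounded_answer(question: str,
                    retrieve: Callable[[str], list[dict]],
                    generate: Callable[[str, dict], str],
                    on_status: Callable[[str], None]) -> tuple[str, list[str]]:
    on_status("question_analysis")
    on_status("document_search")
    chunks = retrieve(question)                      # evidence retrieval
    on_status("chunk_validation")
    numbered = {i + 1: c for i, c in enumerate(chunks)}
    on_status("answer_generation")
    answer = generate(question, numbered)            # model sees numbered chunks
    on_status("complete")
    sources = [f"[{n}] {c['title']}" for n, c in numbered.items()]
    return answer, sources
```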
Data Grounding
Vala narrows most claims-specific answers to information already present in the customer’s client record, uploaded documents, extracted document chunks, app object chunks, and official external sources such as eCFR when used. For form filling, the system uses database sources for client, user, organization, and collated document data, and the user remains responsible for approving generated forms and documents.
The system can still generate incorrect, incomplete, or overbroad language if the uploaded source material is incomplete, ambiguous, poorly extracted, or if the model interprets context incorrectly. Citations and source excerpts are provided to make review practical, not to eliminate the need for professional verification.
Quality Evaluation
Vala includes an AI evaluation system that measures chat quality against stored query logs and retrieval context. The evaluation process tracks groundedness, context relevance, answer relevance, user feedback, and selected citation-related checks. These measures are used for monitoring and improvement, not as a guarantee that any individual answer is correct.
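As a rough illustration of the kind of check such an evaluation pipeline might run, the sketch below computes a simple lexical-overlap proxy for groundedness. A production metric would typically use an LLM judge or an entailment model rather than word overlap; this function and its name are assumptions, not Vala’s actual metric.

```python
# Hedged sketch of a groundedness-style check: what fraction of the
# answer's word types also appear in the retrieved context. A real
# evaluation system would use a stronger semantic method.

def lexical_groundedness(answer: str, context: str) -> float:
    """Fraction of answer word types present in the retrieved context."""
    answer_words = set(answer.lower().split())
    context_words = set(context.lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & context_words) / len(answer_words)
```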
Access Controls And Disablement
AI features can be disabled at the organization level through the organizations.settings.disable_ai setting. When AI is disabled for an organization, chat launchers are hidden or blocked, chat endpoints reject requests, the client Conditions tab is hidden, and new document uploads skip AI processing.
AI usage also depends on deployment configuration and provider credentials, such as GEMINI_API_KEY and selected environment flags. Without the required keys or enabled settings, the corresponding AI services are unavailable.
Vala’s current AI disablement is primarily organization-wide and configuration-based. It is not a fine-grained per-user AI permission model within an enabled organization.
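The gating described above can be sketched as a single availability check that combines the organization-level flag with deployment credentials. The function name and settings shape below are assumptions for illustration; only the disable_ai setting and the GEMINI_API_KEY variable come from this document.

```python
# Illustrative availability gate (names assumed) combining the
# organization-level disable flag with deployment configuration.
import os

def ai_available(org_settings: dict) -> bool:
    if org_settings.get("disable_ai", False):
        return False  # org opted out: launchers hidden, endpoints reject
    if not os.environ.get("GEMINI_API_KEY"):
        return False  # missing provider credentials: services unavailable
    return True
```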
Retention And Audit Trail
Vala stores chat and retrieval audit data in PostgreSQL, including query text, assistant answers, session metadata, feedback, selected document information, and, for GraphRAG chat, the prompt assembled for the model. Generated report artifacts are stored in Google Cloud Storage with metadata in PostgreSQL.
These records support user session history, feedback, debugging, and quality evaluation. Customer deletion and retention obligations depend on the applicable account workflow, database backups, storage lifecycle, provider terms, and contract requirements.
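For review purposes, the shape of a chat audit record can be approximated as below. The field names are hypothetical; Vala’s actual PostgreSQL tables and columns may differ, and only the categories of stored data (query text, answers, session metadata, feedback, document references, assembled GraphRAG prompt) come from this document.

```python
# Hypothetical shape of a chat audit record; actual column names and
# tables in Vala's PostgreSQL schema may differ.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ChatAuditRecord:
    session_id: str
    query_text: str
    assistant_answer: str
    source_document_ids: list = field(default_factory=list)
    feedback: Optional[str] = None
    assembled_prompt: Optional[str] = None  # stored for GraphRAG chat
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```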
Limitations
Vala’s explainability controls are designed to show the evidence, workflow stage, source documents, citations, and quality signals behind an answer. They do not expose or reconstruct the full internal reasoning of the underlying foundation model. AI outputs remain probabilistic and may contain errors, omissions, or unsupported inferences.
Users should review outputs against cited sources, original documents, official VA guidance, and applicable professional standards before use.