Scientific Foundation

45 years of validated science. One proprietary engine.

MEVA is grounded in the Circumplex Model of Affect — the most widely validated framework in affective science — and in the Honest Signals research conducted at MIT's Media Lab, which showed that the nonverbal dimensions of human interaction can predict outcomes with accuracy comparable to expert human judgment.

Multi-Modal Inference Engine — Valence · Arousal · Dissonance Framework

[Diagram: the circumplex plane, with valence (negative → positive) on one axis and arousal (low → high) on the other. Quadrants: TENSE (high arousal · negative), EXCITED (high arousal · positive), DEPRESSED (low arousal · negative), CALM (low arousal · positive). Three signal streams — VERBAL (linguistic signal), ACOUSTIC (vocal prosody), VISUAL (micro-expression) — feed the MEVA Score, plotted as a computed centroid. Dissonance is the gap between the stated and physiological signal.]
45+
Years of Validation

The Circumplex Model of Affect has been continuously validated since Russell (1980), with over 50,000 peer-reviewed publications.

68–93%
Nonverbal Signal

Arena & Pentland (MIT Media Lab, 2010) demonstrated that nonverbal signals predict outcomes with expert-level accuracy. Standard assessments capture none of this signal.

3
Signal Streams

MEVA fuses verbal, acoustic, and visual signals simultaneously, detecting dissonance between what a person says and what their physiology reveals.

Verbal

LLM analysis of linguistic patterns, word choice, and narrative construction. What a person says and how they construct meaning in language.

Acoustic

Vocal prosody analysis capturing tone, cadence, pitch, and tremor. The physiological truth beneath the spoken word.

Visual

Computer vision detecting micro-expressions and behavioral markers in the face and body. Signals that emerge under real conditions.
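To make the framework concrete, the fusion described above can be sketched in a few lines: each stream yields a point in the valence–arousal plane, the MEVA Score is the centroid of those points, and dissonance is the distance between the stated (verbal) signal and the physiological (acoustic + visual) signals. The stream names, coordinate values, and unweighted averaging below are illustrative assumptions, not MEVA's proprietary implementation.

```python
from math import hypot

# Hypothetical per-stream estimates in the circumplex plane:
# valence in [-1, 1] (negative -> positive), arousal in [-1, 1] (low -> high).
streams = {
    "verbal":   (0.6, 0.2),   # stated signal: linguistic analysis
    "acoustic": (-0.1, 0.7),  # physiological signal: vocal prosody
    "visual":   (-0.2, 0.6),  # physiological signal: micro-expressions
}

def meva_score(streams):
    """Centroid of the per-stream (valence, arousal) points."""
    n = len(streams)
    valence = sum(v for v, _ in streams.values()) / n
    arousal = sum(a for _, a in streams.values()) / n
    return (valence, arousal)

def dissonance(streams):
    """Gap between the stated (verbal) point and the centroid
    of the physiological (acoustic + visual) points."""
    vx, vy = streams["verbal"]
    phys = [streams["acoustic"], streams["visual"]]
    px = sum(v for v, _ in phys) / len(phys)
    py = sum(a for _, a in phys) / len(phys)
    return hypot(vx - px, vy - py)
```

With the sample values above, the verbal stream sits in the positive-valence half while the acoustic and visual streams cluster in the tense quadrant, so the dissonance score is large: the person's words and physiology disagree.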