State of the Art Performance. 1% of the Compute.

Introducing the Ensemble Listening Model

Bypassing scaling limits through dynamic ensemble orchestration.

The Challenge

Monolithic models don't scale

The last few years have seen an explosion in the capabilities of Large Language Models (LLMs) and other monoliths, driven in no small part by sheer scale. But that's about to end.

“Pre-training will run out of data. What do you do next? ... But now that compute is [so] big...we are back to the age of research.”

— Ilya Sutskever, Nov 25, 2025


Scaled models scale costs

The larger the model, the more it costs to train, retrain, and deploy at inference time. As GPUs become more expensive and even their raw materials become scarce, there's a need to fit more intelligence into existing compute envelopes, rather than simply scaling up.

The Solution

What is an Ensemble Listening Model?

An ELM is not a single monolithic model. It is a coordinated ensemble of dozens or hundreds of specialized sub-models, all managed under a shared orchestration layer.

It’s well known that ensemble architectures can improve accuracy at lower cost. Yet orchestrating larger numbers of sub-models - especially heterogeneous models with different specialties - requires more sophisticated orchestration than has been attempted before.

The ELM is a new AI architecture - already battle-tested in real applications for Fortune 500 companies, handling millions of queries each day. By leveraging hierarchical processing and dynamic ensemble blocks, ELMs unlock the ability to match state-of-the-art accuracy from monolithic models - but at a tiny fraction of the training and inference costs.

Example: Velma, an ELM built for voice understanding, exceeded the accuracy of leading foundation models while delivering >100x cost improvements.

The core benefit of the ELM is straightforward: even though it may use hundreds of sub-models, each sub-model can afford to be much smaller than a generalist monolith, thanks to its specialized role. When each sub-model uses thousands of times less compute than a fully general monolith, and not all sub-models need to run at all times, the end result is a compute advantage of multiple orders of magnitude.
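For a rough sense of the arithmetic (illustrative numbers, not measured benchmarks): if a generalist monolith must run on the order of 100 billion parameters for every query, while an ELM activates ten of its hundreds of 100-million-parameter specialists, each query touches roughly 1 billion parameters' worth of compute - about a 100x reduction before any further optimization.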

Inside an Ensemble Listening Model

Dynamic Ensemble Blocks manage the models across multiple layers of analysis, all coordinated in real time.

An example diagram of an Ensemble Listening Model focused on audio processing and analysis.



Dynamic Ensemble Blocks

Dynamic Ensemble Blocks are collections of homogeneous models: their input and output formats are the same and they are intended to answer the same question, but each performs differently in different circumstances.

Each Dynamic Ensemble Block has its own orchestrator, which determines which sub-model(s) are best suited given what it knows about the data being processed, ensuring that at each moment the best possible results are being generated. In some cases, a Dynamic Ensemble Block may also use weighted aggregates of sub-models rather than simply selecting one. (A minimal code sketch of this selection logic follows the example list below.)

For instance, a Speech to Text Dynamic Ensemble Block might have a series of models activated at different times, such as:

Generalist Model

With adequate performance on any input, making it a strong starting point for unknown audio.

Noise-Robust Model

With superior performance under significant background noise, swapped in by the orchestrator after receiving context from other blocks about the noise level of the audio.

Topic-Specialized Model

For instance, a healthcare-specific model that best handles that kind of jargon, and may be swapped in by the orchestrator after receiving context from other blocks regarding the conversation content.

Fine-Tuned One-Off Model

Optimized by a specific provider for only the healthcare conversations they manage - potentially activated early based on input metadata, but also usable automatically once the ELM derives from context who the healthcare provider is and that they are indeed the hosts of the conversation.
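As a rough sketch of how such a block might select among these models - the class names, context fields, and scoring rules here are illustrative assumptions, not Modulate's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a Speech-to-Text Dynamic Ensemble Block.
# The model stubs, context fields, and scoring rules are illustrative
# assumptions, not the production ELM implementation.

@dataclass
class SubModel:
    name: str
    run: Callable[[bytes], str]           # audio in, transcript out
    suitability: Callable[[dict], float]  # score given shared context

def stub_model(label: str) -> Callable[[bytes], str]:
    """Stand-in for a real speech-to-text model."""
    return lambda audio: f"<transcript from {label} model>"

class DynamicEnsembleBlock:
    def __init__(self, models: list[SubModel]):
        self.models = models

    def orchestrate(self, audio: bytes, context: dict) -> str:
        # Score each sub-model against context supplied by other blocks
        # (noise level, detected topic, ...), then run the best match.
        # A weighted aggregate over several models is the other option
        # mentioned above; simple selection is shown for brevity.
        best = max(self.models, key=lambda m: m.suitability(context))
        return best.run(audio)

stt_block = DynamicEnsembleBlock([
    SubModel("generalist", stub_model("generalist"),
             lambda ctx: 0.5),  # adequate on any input
    SubModel("noise-robust", stub_model("noise-robust"),
             lambda ctx: ctx.get("noise_level", 0.0)),
    SubModel("healthcare", stub_model("healthcare"),
             lambda ctx: 0.9 if ctx.get("topic") == "healthcare" else 0.1),
])

# Other blocks report noisy audio, so the noise-robust model is
# swapped in automatically for this window.
print(stt_block.orchestrate(b"<audio bytes>", {"noise_level": 0.8}))
```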

Hierarchical Layers

Hierarchical Layers are ways of organizing different modalities of Dynamic Ensemble Blocks. While not strictly required for an ELM, they provide valuable structure to collections of Dynamic Ensemble Blocks which operate on similar context windows.

The lowest Hierarchical Layers focus on short context windows and narrow judgements, while higher layers focus on longer context windows and more complex, multifaceted analysis.

It's critical to note that Hierarchical Layers are not independent. Higher levels in the hierarchy conduct inference using the outputs from the lower levels, while lower levels improve inference using context obtained from the higher levels. (A brief sketch of this two-way flow follows the layer examples below.)

Each ELM may have unique Hierarchical Layers as appropriate. For illustration, a few of the Hierarchical Layers depicted above for voice processing include:

Low-Level Signals

Extraction of mostly acoustic data based on short context windows, such as emotion or transcription timeseries data.

Local Feature Analysis

Analysis of patterns over a short time scale, which may rely on some estimation. For instance, "did that comment signal resolution of the current issue?"

Non-Local Feature Analysis

Analysis of patterns across a context covering multiple exchanges, such as "is this a mentor/mentee relationship or unsolicited feedback?"
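For a loose illustration of how these layers might exchange context - the layer functions and shared-context mechanism below are hypothetical assumptions, not a reference implementation:

```python
# Hypothetical sketch of hierarchical layers exchanging context.
# Layer functions and the shared-context dict are illustrative
# assumptions, not a reference implementation.

def low_level_signals(audio_window: bytes, context: dict) -> dict:
    # Short context window: extract acoustic signals; context fed
    # back from higher layers (e.g. topic) can sharpen extraction.
    return {"emotion": "frustrated", "transcript": "ok, that fixed it"}

def local_feature_analysis(signals: dict, context: dict) -> dict:
    # Medium window: narrow judgements over low-level outputs, e.g.
    # "did that comment signal resolution of the current issue?"
    return {"issue_resolved": "fixed" in signals["transcript"]}

def non_local_feature_analysis(features: dict, context: dict) -> dict:
    # Long window, multi-exchange analysis; its conclusions flow back
    # down as shared context for the lower layers' next pass.
    return {"relationship": "mentor/mentee"}

shared_context: dict = {}
signals = low_level_signals(b"<audio>", shared_context)
features = local_feature_analysis(signals, shared_context)
shared_context.update(non_local_feature_analysis(features, shared_context))
# On the next window, lower layers see the higher-level conclusions.
```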

Monoliths vs Ensembles

Monolithic models are "easy" to train - just add data. In contrast, ensemble architectures like ELMs require more up-front design - but you're rewarded for the effort.

| Capability | Monoliths | ELMs |
| --- | --- | --- |
| Efficient Training | No. Massively data- and compute-hungry. | Yes. Sub-models are trained highly efficiently, and ELM orchestration is few-shot by design. |
| Efficient Inference | No. | Yes. Cost reductions typically around 50-100x versus monoliths. |
| Purpose-Specific Training | No. Generalist by design. | Yes. Underlying intelligence can be broadly applicable, but purpose-built orchestrators slot in trivially. |
| Explainability | Limited. Black-box decisions, reliant on complex explainability techniques. | Yes. Any output traceable to specific detectors. |
| Incremental Improvements | Hard. Full retrain required. | Easy. Update individual models. |

Why "Listening" Models?

Voice is one of the hardest real-world data types for AI to understand.

Most leading models today were trained on text tokens. When processing multifaceted data like audio, this forces the input to be flattened into something like a transcript, which drops the most important context – emotion, suspicious pauses, synthetic-sounding voices, signs of vulnerability or aggression, etc.

This was exactly the challenge facing one voice intelligence company, Modulate, as they sought to crack the code on multidimensional voice understanding. Recognizing the limitations of text-based systems, Modulate started from scratch with several models intended to shed light on different aspects of voice, and found itself immediately pushing the limits of ensemble technology.

This research ultimately resulted in the first ELM, for which Modulate coined the name Velma - a unique voice understanding tool soon employed by top game developers including Activision (Call of Duty) and Rockstar Games (Grand Theft Auto). Modulate has since continued to expand both Velma's capabilities and the underlying research into ELM design, and despite the much broader applicability of the ELM architecture, chose to keep the term "listening" in the ELM name to reflect this voice-first origin.

Why ELMs are the Answer to Enterprise AI

ELMs don't just improve model performance — they change what enterprises can safely, reliably, and economically deploy in production.

Increased Precision

Since ELMs include a myriad of specialist models, enterprises with unique data environments can rely on ELMs to account for what makes their context special, and derive more accurate results through that expertise.

Fine-Tuning

Each enterprise's data and problems are unique, yet LLMs are generic, only fine-tunable through months of retraining or awkward, imprecise prompt engineering. In contrast, ELMs allow immediate fine-tuning, as Dynamic Ensemble Blocks and their sub-models can be adapted individually, increasing iteration speed by more than 10x compared to monolithic models.

Transparency and Compliance

Because of their ensemble design, ELMs deliver a clear audit trail explaining where each decision comes from, equipping enterprises with the transparency and reliability they need to trust ELMs in their own systems (and maintain compliance).

Customizable Outputs

With a language model as an orchestrator, you can "speak" to an ELM just as you can an LLM; but most enterprises actually want something more structured. By swapping in different orchestrator models, ELMs can also deliver results in exact, exportable formats, making it trivial to pass their results into traditional software applications or directly assist human-in-the-loop decision making.
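As a minimal sketch of what such a structured output could look like - the schema and field names here are hypothetical, not a published ELM interface:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical structured-output schema for an ELM orchestrator; the
# field names are illustrative assumptions, not a published interface.

@dataclass
class ELMResult:
    transcript: str
    emotion: str
    issue_resolved: bool
    contributing_models: list[str]  # audit trail of sub-models used

result = ELMResult(
    transcript="I think we're all set, thanks.",
    emotion="calm",
    issue_resolved=True,
    contributing_models=["generalist-stt", "emotion-v2"],
)

# An exact, exportable format for downstream enterprise software or
# human-in-the-loop review, rather than free-form chat output.
print(json.dumps(asdict(result), indent=2))
```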

Read the White Paper

Dive deep into the ELM architecture.

FAQ

Answers to common questions about ELMs, architecture, and deployment.

Understanding ELMs

What is an Ensemble Listening Model (ELM)?

An Ensemble Listening Model (ELM) is an AI architecture that orchestrates many specialized sub-models into a single coherent intelligence. ELMs outperform monolithic architectures by combining narrower expert models, each optimized for a specific signal or task.

How are ELMs different from monolithic foundation models?

Traditional foundation models are trained as a single monolith, with opaque internals and extremely broad, general mandates for what they can do. ELMs achieve that same breadth at much lower cost by adding a carefully designed internal structure in which individual specialist models contribute to the overall understanding - much as humans rely on a myriad of different sensory organs before the brain can form a real picture of the surrounding world.

Why move beyond single-model AI?

As models scale, they become harder to retrain, more expensive to run, and less predictable in high-stakes settings. ELMs offer an alternative path forward by coordinating many smaller models that can be updated and governed independently.

Architecture & Design

How many models are in a typical ELM?

A production ELM may include dozens or hundreds of specialized models, activated dynamically based on context and need. Models with similar purposes are grouped into Dynamic Ensemble Blocks, each of which fulfills a unique purpose as it contributes back to the overall orchestrator.

What is a Dynamic Ensemble Block?

A Dynamic Ensemble Block is a collection of models with the same purpose, but which perform differently under different conditions. Each Dynamic Ensemble Block has a sub-orchestrator which reviews the current context and determines which component models to run and how to aggregate or weight the results of each.

What does the orchestration layer do?

The orchestration layer determines which models run, how their outputs are combined, and how confidence is assessed, producing structured and traceable results.

What kind of model handles the orchestration?

It depends - this is one of the key free parameters in the ELM architecture. If your use case requires the ability to "talk" to the ELM's intelligence, you might use a language model as your top-level orchestrator - simulating the experience and intelligence of foundation LLMs, but with significantly reduced costs. In other environments, you might deploy models designed for reinforcement learning, or even use deterministic algorithms for certain enterprise use cases that prefer structure and transparency rather than "collective" takeaways.

What does hierarchical processing mean in an ELM?

Hierarchical layers are a way of categorizing the context window required by different Dynamic Ensemble Blocks within the ELM. Each layer processes differently structured inputs, but all layers operate in parallel, and the conclusions in one layer may impact the behavior of another.

Can individual models be updated independently?

Yes. Individual models or entire Dynamic Ensemble Blocks can be retrained, replaced, or added without retraining the entire system.

Enterprise Considerations

What kinds of problems are ELMs best suited for?

Problems involving multifaceted data with multiple layers of relevant context (audio conversations, books, stock market trades, large bodies of work, or corpora of data from multiple potentially biased sources, etc.)

Beyond cost, do ELMs present any benefits to businesses?

Yes! The ensemble nature of the ELM means its decisions are explainable - any conclusion can be traced back to the individual component models that impacted it, and this can be easily converted into an audit trail or compliance report. In addition, ELMs are much easier to fine-tune, as individual component models can be tuned, added, or replaced significantly more swiftly than retraining or uncertain prompt engineering of larger monoliths.

Can ELMs operate in real time?

Yes. ELMs support both real-time and batch deployment modes. There is no latency increase in an ELM compared to other AI architectures.

How do ELMs support transparency and compliance?

ELMs provide traceable outputs tied to explicit model components, making auditing and governance straightforward.

How do ELMs integrate with existing systems?

ELMs have configurable orchestrators, and can be designed to expose structured outputs via APIs that integrate easily with enterprise software and workflows.