Avera Research
Research for decision-critical growth systems
Avera Research is the core of the lab. Our mission is to develop models and algorithms that improve growth decisions in measurable ways: which narratives are pushed, which segments are prioritized, how creative sequences are constructed, how budgets are allocated, and how experiments are run.
We work at the intersection of:
Multimodal representation learning for creative, brand, and experience.
Audience dynamics and behavioral sequences across digital and real-world channels.
Causal inference and counterfactual estimation in high-dimensional, biased environments.
Sequential decision-making and portfolio optimization under uncertainty and constraints.
Scalable experiment design and evaluation for real-world growth systems.
01. Representations
We develop vision–language and sequence models that represent the full surface area through which customers experience a brand:
- Joint embeddings of ads, landing pages, product detail pages, organic content, user flows, and offline creative.
- Ontologies that factor content into narrative structure, emotional register, offers, visual grammar, and brand codes.
- Metrics for similarity, resonance, and saturation that connect directly to performance and fatigue.
These representations are designed to be actionable: they power creative construction, diversification, simulation, and generation—not just offline analysis.
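As a concrete, simplified illustration, a similarity or resonance score over joint embeddings can be as simple as cosine similarity between vectors. The embeddings and page names below are invented for illustration; they are not outputs of our models.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical joint embeddings of an ad and two landing pages
ad = [0.8, 0.1, 0.5]
page_a = [0.7, 0.2, 0.6]   # narratively aligned with the ad
page_b = [-0.3, 0.9, 0.1]  # off-brand page

resonance_a = cosine(ad, page_a)
resonance_b = cosine(ad, page_b)  # scores lower than page_a
```

In practice such scores come from learned vision-language encoders; the point is only that a shared embedding space turns "does this page match this ad?" into a number a policy can act on.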
02. Audience Dynamics
We model how individuals and cohorts move through complex, multi-touch journeys:
- Sequence models over heterogeneous event streams (impressions, visits, purchases, churn signals, cross-app interactions).
- Dynamic segmentations that reflect stable roles in the system rather than brittle, one-off clusters.
- Temporal models that capture lagged effects, seasonality, memory, and interference across campaigns.
Our goal is not just to forecast, but to provide actionable structure that makes downstream policies more robust and interpretable.
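A minimal sketch of structure over heterogeneous event streams: a first-order Markov transition estimate fit to a few invented journeys. Production sequence models are far richer (lagged effects, memory, interference), but this shows the basic shape of the data.

```python
from collections import defaultdict

def transition_probs(journeys):
    """Estimate next-event probabilities from observed journeys
    (first-order Markov sketch over event-type strings)."""
    counts = defaultdict(lambda: defaultdict(int))
    for journey in journeys:
        for a, b in zip(journey, journey[1:]):
            counts[a][b] += 1
    return {state: {nxt: c / sum(succ.values()) for nxt, c in succ.items()}
            for state, succ in counts.items()}

# Invented example journeys over event types
journeys = [
    ["impression", "visit", "purchase"],
    ["impression", "visit", "churn_signal"],
    ["impression", "impression", "visit", "purchase"],
]
probs = transition_probs(journeys)
```

Here `probs["visit"]` gives the empirical distribution over what follows a visit, which is the kind of structure downstream policies can consume.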
03. Causal Estimation
We emphasize methods that separate signal from selection bias:
- Uplift modeling and heterogeneous treatment effect estimation for incrementality by segment, channel, and creative class.
- Methods that fuse randomized experiments and observational data, balancing extrapolation with identifiability.
- Counterfactual evaluation frameworks for full policies, not just individual treatments.
We care about stability, calibration, and sensitivity: every estimator is evaluated for how it behaves under plausible misspecification, not just how it scores on a single offline metric.
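In its simplest randomized form, uplift by segment is just a difference in conversion rates between treated and control units. The sketch below uses invented data and segment names; real heterogeneous treatment effect estimators fit models rather than raw rate differences, but the quantity being estimated is the same.

```python
def segment_uplift(rows):
    """Per-segment uplift: treated minus control conversion rate.
    rows: (segment, treated_flag, converted_flag) triples from a randomized test."""
    stats = {}
    for segment, treated, converted in rows:
        s = stats.setdefault(segment, {"t": [0, 0], "c": [0, 0]})
        arm = s["t"] if treated else s["c"]
        arm[0] += converted  # conversions
        arm[1] += 1          # exposures
    return {seg: s["t"][0] / s["t"][1] - s["c"][0] / s["c"][1]
            for seg, s in stats.items()}

# Invented randomized data: (segment, treated, converted)
rows = [
    ("new", 1, 1), ("new", 1, 0), ("new", 0, 0), ("new", 0, 0),
    ("loyal", 1, 1), ("loyal", 1, 1), ("loyal", 0, 1), ("loyal", 0, 1),
]
uplift = segment_uplift(rows)
```

The "loyal" segment converts either way, so its incrementality is zero; only the "new" segment is actually moved by treatment. That distinction is exactly what naive conversion models miss.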
04. Simulation
We study growth as a problem of allocating a finite budget of money and experimentation across a combinatorial space of actions:
- Simulators that roll out trajectories under different policies, while explicitly modeling uncertainty and structural error.
- Portfolio optimization techniques that trade off exploration vs. exploitation, respecting operational constraints like ramp limits, brand guardrails, and regional capacity.
- Risk-aware policies that reason about distributions, not points—choosing actions that are robust to model uncertainty and market volatility.
We evaluate success by how much these policies improve realized risk-adjusted outcomes when deployed, not just simulated rewards.
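One way to make "reason about distributions, not points" concrete is Thompson sampling: split budget by each channel's posterior probability of being best. The Beta posteriors, channel names, and budget below are illustrative assumptions, not a description of our production allocator.

```python
import random

def allocate(posteriors, budget, draws=2000, seed=7):
    """Thompson-sampling sketch: allocate budget in proportion to each
    channel's probability of having the highest conversion rate."""
    random.seed(seed)
    wins = dict.fromkeys(posteriors, 0)
    for _ in range(draws):
        samples = {ch: random.betavariate(a, b) for ch, (a, b) in posteriors.items()}
        wins[max(samples, key=samples.get)] += 1
    return {ch: budget * wins[ch] / draws for ch in posteriors}

# Hypothetical Beta(successes + 1, failures + 1) posteriors per channel
posteriors = {"search": (40, 60), "social": (30, 70), "video": (10, 90)}
plan = allocate(posteriors, budget=100_000)
```

Because allocation follows the full posterior, uncertain channels still receive exploratory spend instead of being zeroed out by a point estimate.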
05. Experimentation
We treat experimentation as a first-class research topic:
- Design of sample-efficient experiments for high-stakes environments where traffic or budget is limited.
- Unified experiment graphs that track hypotheses, configurations, and outcomes across channels and time.
- Continual learning systems that update models as new data arrives, with explicit guardrails against feedback loops and drift.
Our internal evaluation framework combines:
- Offline benchmarks on historical data with strict train/test separation and stress tests.
- Prospective experiments with pre-registered analysis plans and pre-defined success criteria.
- Longitudinal studies on how recommendations affect the broader system over weeks and quarters.
Principles
- Decision-centrism: Models are built to inform decisions; learnings are not surfaced in isolation but translated into actions with measurable impact.
- Quantitative rigor: We emphasize robustness, calibration, and precision; conclusions must be grounded in controlled experiments and high-quality data.
- Operational realism: Applied algorithms are designed with real constraints in mind: latency, incomplete data, shifting incentives, and human stakeholders.
- Iterative deployment: Research and engineering work together; the path from concept to implementation is kept short for rapid learning and product velocity.
Working With Avera Research
For partners, Avera Research serves as an embedded lab: we formulate questions jointly, design experiments, and deploy new methods against live economics.
For researchers and engineers, it is an environment where:
- You work with large, high-stakes production datasets.
- Your models become policies that control real allocation of capital and attention.
- You can publish open-source research and contribute back to the community where it accelerates the field.
If you are excited by research that treats growth as a quantitatively rigorous domain, and by systems that must be both theoretically sound and operationally trusted, we’d love to talk.