No. 019

Three Pillars: A Research Strategy for the Age of Agentic AI

After months of exploring, reading, and reflecting on what genuinely excites me across coursework, research, and industry, I've noticed patterns crystallizing into something coherent. Rather than scatter my energy across a dozen interests, I'm converging on three pillars that will anchor my work going forward.


The Three Pillars

Pillar 1: Causality

Causality isn't just a technique. It's a substrate—a way of thinking that cuts across nearly everything I care about.

Consider how problem-solving has evolved. First came computational thinking. Then statistical thinking. Big data arrived, and now everyone's rushing into LLMs. But causality remains the connective tissue that makes all of it meaningful.

Why? Because causality touches:

Advanced Analytics — Even the most "typical" business problems—A/B testing, marketing mix modeling, experimental inference—fundamentally require causal reasoning. You can't answer "what if we change X?" without it.

Econometrics — Studying people, policy, and economic systems demands understanding not just correlations but mechanisms.

Systems Science — Any real-world phenomenon can be viewed through a mechanistic lens. Complexity, emergence, non-linear behavior—all of this lives in the causal paradigm.

Representation Learning — Learning causal representations from data unveils what's actually happening, not just what patterns surface in plots. Think protein folding, health records, or any domain where "why" matters more than "what."

AI Interpretability — This is where it gets exciting. Whether we're doing LLM interpretability, VLM interpretability, or building embodied agents with world models, causal reasoning is essential for understanding how these systems actually work.

The topics here span causal inference, quasi-experiments, natural experiments, regression discontinuity, counterfactual simulation, and invariance. Causality becomes the lens through which I revisit all of data science and machine learning.
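To make the lens concrete, here's a minimal sketch of the simplest causal design, the randomized A/B test, where a difference in means identifies the average treatment effect. The data-generating process, effect size, and variable names below are all made up for illustration:

```python
# Illustrative only: difference-in-means estimate of the average treatment
# effect (ATE) in a simulated randomized experiment.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Randomized assignment makes treatment independent of potential outcomes,
# which is what licenses the causal reading of the difference in means.
treated = rng.integers(0, 2, size=n).astype(bool)
baseline = rng.normal(loc=10.0, scale=2.0, size=n)
true_effect = 0.5  # hypothetical ground truth, unknown in practice
outcome = baseline + true_effect * treated + rng.normal(scale=1.0, size=n)

ate_hat = outcome[treated].mean() - outcome[~treated].mean()

# Standard error of the difference in means (Neyman-style).
se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
             + outcome[~treated].var(ddof=1) / (~treated).sum())

print(f"ATE estimate: {ate_hat:.3f} +/- {1.96 * se:.3f} (95% CI)")
```

The randomization is doing the real work here: it's what turns a correlational difference into an answer to "what if we change X?" Everything else in the list above (quasi-experiments, discontinuities, invariance) is about recovering that license when you don't get to randomize.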


Pillar 2: Theory and Frontier Machine Learning

Last semester, I studied probabilistic machine learning and reinforcement learning. This semester brings machine learning theory and continual learning. The progression is intentional.

When we talk about "learning" in AI—learning to gain intelligence, learning from experience, developing agency—we're touching something fundamental. To build better systems, we need to understand why learning works, not just that it works.

This pillar covers:

Statistical Learning Theory — The mathematical foundations of generalization, capacity, and learnability.

Computational Learning Theory — What can be efficiently learned? What are the limits?

Online Learning and Bandits — How do we learn while acting? How do we balance exploration and exploitation in real time? (A minimal sketch follows this list.)

Reinforcement Learning — Especially its role in current AI: RLHF, alignment training, decision-making under uncertainty.

Lifelong and Continual Learning — Systems that accumulate knowledge over time without catastrophic forgetting.

Advanced Architectures — State space models, world modeling (think Dreamer), energy-based models, and whatever comes next.

Training Paradigms and Alignment — Not just empirical tricks, but the theoretical underpinnings of how we align models to human intent.
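As promised above, here's a minimal sketch of the exploration/exploitation trade-off: UCB1 on a toy Bernoulli bandit. The arm probabilities and horizon are made-up values, not from any real system:

```python
# Illustrative only: UCB1 on a toy Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(1)
arm_probs = np.array([0.3, 0.5, 0.7])  # hypothetical true success rates
n_arms, horizon = len(arm_probs), 5_000

pulls = np.zeros(n_arms)
rewards = np.zeros(n_arms)

for t in range(1, horizon + 1):
    if t <= n_arms:
        arm = t - 1  # pull every arm once to initialize
    else:
        means = rewards / pulls
        bonus = np.sqrt(2 * np.log(t) / pulls)  # UCB1 exploration bonus
        arm = int(np.argmax(means + bonus))
    reward = rng.random() < arm_probs[arm]  # Bernoulli reward draw
    pulls[arm] += 1
    rewards[arm] += reward

print("pull counts:", pulls.astype(int))          # most pulls go to arm 2
print("empirical means:", np.round(rewards / pulls, 3))
```

The bonus term shrinks as an arm accumulates pulls, so the algorithm explores early and exploits once confident; the regret analysis of UCB1 is exactly the kind of result this pillar is about.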

The goal isn't to collect techniques. It's to understand learning itself—so that innovation comes from depth, not just iteration.
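To give a taste of what "why learning works" means formally, here is the classic finite-class generalization bound from statistical learning theory (Hoeffding's inequality plus a union bound), where R is the true risk, R-hat the empirical risk on m i.i.d. samples, and delta the failure probability:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size m,
% uniformly over a finite hypothesis class \mathcal{H}:
\forall h \in \mathcal{H}:\quad
  R(h) \;\le\; \widehat{R}(h)
  \;+\; \sqrt{\frac{\ln\lvert\mathcal{H}\rvert + \ln(1/\delta)}{2m}}
```

Capacity (here, the ln|H| term) trades off directly against sample size m; most of the deeper theory in this pillar refines that single tension.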


Pillar 3: Agentic Decision Sciences (AI-Human Coexistence and Prosperity)

This is the top layer. The canvas where everything comes together.

Causality and machine learning theory are foundations. But they don't directly answer the question: How do we build a world where AI and humans coexist and flourish?

Agentic Decision Sciences is my framing for this challenge. It encompasses:

Alignment and Safety — Ensuring AI systems do what we actually want.

Interpretability — Understanding what's happening inside these systems (connecting back to causal reasoning).

Multi-Agent Coordination — When multiple agents—human, AI, or hybrid—interact, coordination problems emerge. How do we solve them?

Policy and Governance — The societal structures that shape how these technologies are deployed.

The Inversion Problem — A concept from behavioral science: how do we infer intentions from observed behavior? This becomes critical when AI systems and humans interact in complex environments.
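A toy version of that inversion, sketched as Bayesian inverse planning: assume the agent chooses actions softmax-rationally given a goal, then invert with Bayes' rule. The goals, utilities, and rationality parameter below are entirely hypothetical:

```python
# Illustrative only: the "inversion problem" as Bayesian inverse planning.
# Observe an action, infer a posterior over goals by inverting a softmax
# (Boltzmann-rational) choice model.
import numpy as np

goals = ["coffee", "printer", "exit"]
actions = ["go_left", "go_right"]

# Hypothetical utility of each action under each goal (rows: goals).
utility = np.array([
    [2.0, 0.1],   # coffee is to the left
    [0.2, 1.5],   # printer is to the right
    [1.0, 1.0],   # exit is equidistant
])

beta = 2.0  # higher beta = more deterministic (more "rational") behavior

# Likelihood P(action | goal): row-wise softmax over utilities.
exp_u = np.exp(beta * utility)
likelihood = exp_u / exp_u.sum(axis=1, keepdims=True)

prior = np.full(len(goals), 1 / len(goals))  # uniform prior over goals

observed = actions.index("go_right")
posterior = likelihood[:, observed] * prior
posterior /= posterior.sum()

for g, p in zip(goals, posterior):
    print(f"P(goal={g} | go_right) = {p:.2f}")  # "printer" dominates
```

The same move, positing a generative model of behavior and inverting it, scales up to inferring human intent from demonstrations, which is one place where the alignment and interpretability threads of this pillar meet.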

This pillar is also where ventures and products emerge. We can't sell theory to the world. We sell utility. Agentic Decision Sciences is where theoretical insights become tools that actually help people.


The Underlying Logic

Two beliefs drive this architecture:

Belief 1: I'm interested in methods that help us study systems, behavior, and intelligence. Not just prediction—understanding.

Belief 2: As AI grows more capable, the question of AI-human coexistence and prosperity becomes paramount. This isn't an abstract concern. It's the defining challenge of our field.

The leverage for addressing both comes from theory. Going deep into causal inference, statistical learning theory, probabilistic modeling, uncertainty quantification—this is where novel solutions originate. Innovation isn't about being first to try something. It's about understanding deeply enough to see what others miss.


Four Phenomena, One Framework

Across these three pillars, I'm really examining four fundamental phenomena:

  1. Inference — How do we draw conclusions from evidence?
  2. Learning — How do systems improve from experience?
  3. Intelligence — What does it mean to be intelligent, and how do we build it?
  4. Agency — How do agents act in the world to achieve goals?

Causality helps with inference. Machine learning theory illuminates learning. Together, they give us purchase on intelligence. And Agentic Decision Sciences is where agency—human and artificial—gets examined, designed, and deployed.


Why This Framing Matters

The temptation in a field moving this fast is to chase every new thing. A paper drops, everyone pivots. A technique trends, everyone implements.

But lasting contributions come from coherent programs of research. From understanding phenomena deeply enough that when the next paradigm shift happens, you're not starting from scratch—you're extending a foundation.

Causality, machine learning theory, and agentic decision sciences aren't three separate interests. They're three perspectives on the same underlying questions: How do we understand systems? How do we build intelligence? How do we ensure it goes well?

The phenomena are there. To understand them, we need theory and apparatus. That's what this framework provides.


The world doesn't need more surface-level takes on AI. It needs people who've gone deep enough to see the connections others miss. That's the bet I'm making.