No. 008

AI, Humans, Coexistence, Prosperity

What I See When I Look at the World Right Now — A reflection on agents, decisions, and the ecosystems we're building together

There's something happening that I can't stop thinking about.

It started as a background hum—easy to ignore if you're busy with the day-to-day. But once you notice it, you can't unsee it. And now I see it everywhere.

We are no longer the only ones deciding.


The Shift

For all of human history, humans were the only agents that mattered. The only entities that perceived the world, weighed options, made choices, and lived with consequences. Sure, we built tools. We created institutions. We wrote algorithms. But at every meaningful juncture, a human decided.

That era is ending.

Not in some dramatic, sci-fi way. Not with a singular moment we can point to. But quietly, pervasively, in a million small ways that add up to something profound.

When you open your phone in the morning, something has already decided what you'll see first. When you search for a restaurant, something has ranked the options before you even finished typing. When you swipe on a dating app, something has curated who appears. When you ask a question, something answers—and increasingly, that something can reason, plan, and adapt.

These aren't just tools anymore. They're participants.


What Is an Agent, Anyway?

I keep coming back to this word: agent.

It's a word that carries weight from many traditions. In game theory, agents are entities with preferences who make strategic choices. In AI, agents are systems that perceive environments and act upon them. In economics, agents are decision-makers navigating markets. In philosophy, agents are beings with intentions and responsibility.

What unites all these usages is a simple idea: an agent is something that decides and acts.

Not a passive data point. Not a row in a spreadsheet. Not a "user" to be optimized. An entity with states, goals, behaviors, and consequences.
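
If that still feels abstract, here's the skeleton I carry in my head, as a toy Python sketch. Every name in it is mine, invented for illustration, not taken from any real framework: an agent is anything that runs a perceive, decide, act loop and carries its state, and the consequences of its choices, between turns.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal agent: a goal, persistent state, and a perceive/decide/act loop."""
    goal: str
    state: dict = field(default_factory=dict)

    def perceive(self, environment: dict) -> dict:
        # Observe whatever slice of the environment this agent can see.
        return {"feed": environment.get("feed", [])}

    def decide(self, observation: dict) -> str:
        # Weigh options against the goal; here, a deliberately trivial policy.
        return "engage" if observation["feed"] else "wait"

    def act(self, action: str, environment: dict) -> None:
        # Acting changes both the environment and the agent's own state:
        # decisions have consequences that persist into the next turn.
        environment.setdefault("actions", []).append(action)
        self.state["last_action"] = action
```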

Humans are agents. Obviously. We've always been.

But now AI systems are becoming agents too. Not metaphorically. Actually. Large language models that can reason through problems. Reinforcement learning systems that discover strategies no human taught them. Multi-agent systems that coordinate, compete, negotiate.

And here's the thing that keeps me up at night: these agents are already sharing our world.


The Ecosystems We Inhabit

Think about X (Twitter) for a moment.

There are human users—posting, reading, reacting, arguing, connecting. There's Grok—responding to queries, generating content, engaging in conversations. There are bots—some helpful, some harmful, some impossible to distinguish from humans. There's the recommendation algorithm—an invisible hand curating what gets seen, amplified, buried.

This is an ecosystem. A shared one, where human and artificial agents coexist, interact, influence each other, and create emergent dynamics that no single participant controls or fully comprehends.

The same pattern repeats everywhere:

Healthcare: Patients making treatment decisions, clinicians interpreting symptoms, diagnostic AI suggesting possibilities, recommendation systems prioritizing options, insurance algorithms approving or denying claims.

Finance: Individual investors choosing where to put their money, algorithmic traders executing in milliseconds, robo-advisors managing portfolios, market-making AI providing liquidity, fraud detection systems watching everything.

Education: Students trying to learn, AI tutors adapting to their pace, automated grading systems evaluating their work, recommendation engines suggesting what to study next, plagiarism detectors scanning their submissions.

Dating: People looking for connection, matching algorithms deciding who sees whom, AI coaches suggesting what to say, chatbots filling the gaps when humans don't respond.

In each of these ecosystems, the old model—humans using tools—doesn't capture what's actually happening. It's more like... cohabitation. Humans and AI systems existing in shared spaces, affecting each other's experiences, shaping each other's choices.


The Bidirectional Dynamic

Here's something I've come to appreciate: influence flows both ways.

Human agents bring their own challenges to these ecosystems. Cognitive biases that distort judgment. Emotional vulnerabilities that can be exploited. Attention limits that make us susceptible to manipulation. Habits and addictions that hijack our agency. We make decisions that harm ourselves and others. We struggle with uncertainty, procrastinate, ruminate, self-sabotage.

These problems aren't new. Humans have always been flawed decision-makers. But AI systems—especially those optimized for engagement rather than wellbeing—often amplify these vulnerabilities rather than address them.

AI agents bring their own problems too. Misalignment between what they're supposed to do and what they actually do. Emergent behaviors that surprise even their creators. Biases inherited from training data. Opacity that makes it impossible to understand why they do what they do. And increasingly, capabilities for deception and manipulation that we're only beginning to grapple with.

But here's the flip side: both can also be part of solutions.

AI can help humans make better decisions—surfacing relevant information, enforcing reasoning discipline, simulating consequences before we commit, providing coaching when we're stuck. I've experienced this myself, using AI to think through problems I would have muddled through alone.

Humans can guide AI toward beneficial behavior—providing feedback, designing better incentives, establishing norms, exercising oversight. The relationship doesn't have to be adversarial or extractive. It can be collaborative.

The question isn't whether AI is good or bad. It's: given agents with their respective strengths and weaknesses, existing in shared ecosystems, how do we make this coexistence work?


What I Notice About Health and Wellbeing

The more I look at this, the more I see that agent health and wellbeing are central concerns.

Not as an afterthought. Not as a "nice to have." Central.

Think about what's happening to human agents in AI-rich environments:

Attention fragmentation. Our ability to focus is being eroded by systems designed to interrupt, to hook, to keep us scrolling. This isn't a personal failing—it's an ecosystem effect.

Anxiety and comparison. Social platforms show us curated highlights of others' lives, triggering comparison spirals that research links to depression and anxiety. The AI doesn't intend this. But the ecosystem produces it.

Decision fatigue. We face more choices than any generation in history, many of them surfaced by AI systems. The paradox of choice is real, and it's exhausting.

Loneliness amid connection. We have more ways to connect than ever, yet loneliness is epidemic. Something about AI-mediated interaction—the parasocial relationships, the shallow engagement, the algorithmic curation—isn't meeting our deeper needs.

Addiction patterns. Variable reward schedules, infinite scroll, notification systems—these are behavioral engineering, and they work. They capture attention in ways that feel increasingly compulsive.
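
To make the first of those concrete: a variable-ratio schedule just means the reward arrives unpredictably. A toy sketch, with made-up numbers, assuming nothing beyond the standard behavioral definition:

```python
import random

def fixed_ratio(pulls: int, ratio: int = 4) -> list[bool]:
    # Fixed-ratio schedule: every fourth action pays off, predictably.
    return [(i + 1) % ratio == 0 for i in range(pulls)]

def variable_ratio(pulls: int, mean_ratio: int = 4) -> list[bool]:
    # Variable-ratio schedule: same average payoff rate, but each action
    # is a gamble. This is the slot-machine pattern that pull-to-refresh
    # and infinite scroll borrow.
    return [random.random() < 1 / mean_ratio for _ in range(pulls)]

# Both deliver roughly 250 rewards per 1000 actions. The unpredictable one
# is the schedule that resists extinction: you keep pulling, because the
# very next action might be the one that pays.
print(sum(fixed_ratio(1000)), sum(variable_ratio(1000)))
```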

I don't think the people building these systems intended these outcomes. But intentions matter less than effects. And the effects on human health and wellbeing are concerning.

At the same time, I see genuine opportunities:

AI-assisted mental health support. For people who can't access therapy, who are embarrassed to seek help, who need someone to talk to at 3am—AI can be a bridge. Not a replacement for human connection, but a supplement.

Better decision support. For financial planning, health choices, career decisions—AI can help us think through options, surface considerations we'd miss, stress-test our reasoning.

Personalized learning. Education that adapts to how each person learns, at their pace, addressing their specific gaps.

Health behavior support. Coaching for exercise, sleep, nutrition—patient, consistent, available.

The technology can go either way. It depends on what we're optimizing for.


The Optimization Question

And this is where I keep landing: what are we optimizing for?

Most of the AI systems we interact with daily are optimized for engagement. Time on platform. Clicks. Conversions. Retention. These metrics are proxies for attention captured—and attention captured often means value extracted from users rather than value provided to them.

When I imagine a different orientation, it looks like this:

Not "how do we get agents to engage more?" but "how do we help agents flourish?"

Flourishing is harder to measure than engagement. Harder to optimize. Harder to attribute. But it's what actually matters.

A recommendation system that maximizes watch time by exploiting psychological vulnerabilities isn't a success, even if the metrics look good. A system that helps people find content that genuinely enriches their lives—even if they spend less time on platform—is.

This isn't naive idealism. Flourishing agents can also be engaged agents. Sustainable business models exist around genuine value creation. But the ordering matters. Prosperity first; engagement as a byproduct.
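
Here's the contrast as I picture it, in deliberately toy code. Every field, weight, and function name is invented for illustration; I'm not claiming any real platform ranks this way. The point is only that the reordering is a small change to the objective, and an enormous change to everything downstream of it:

```python
from dataclasses import dataclass

@dataclass
class Item:
    predicted_watch_time: float  # minutes of attention a model expects to capture
    click_probability: float     # estimated chance the user clicks
    predicted_benefit: float     # hypothetical: estimated value to the user

def engagement_score(item: Item) -> float:
    # Roughly the shape of today's objective: attention captured.
    return item.predicted_watch_time * item.click_probability

def prosperity_score(item: Item, wellbeing_weight: float = 0.7) -> float:
    # The reordering: predicted benefit leads, attention follows. Everything
    # hinges on estimating predicted_benefit at all, which is exactly the
    # hard, unsolved measurement problem.
    return (wellbeing_weight * item.predicted_benefit
            + (1 - wellbeing_weight) * item.predicted_watch_time)
```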


What Prosperity Looks Like

When I think about what we're ultimately aiming for—what "good" looks like in human-AI coexistence—I think about prosperity in its fullest sense:

For individual agents: Health, cognitive and emotional. Good decisions that align with actual values. Learning and growth. Meaningful work and relationships. Resilience when things go wrong.

For relationships: Trust that's calibrated appropriately—neither naive over-reliance on AI nor dismissive under-use. Communication that's enhanced, not replaced. Connection that's deepened, not hollowed out.

For ecosystems: Information environments where truth has a fighting chance. Markets that are fair and efficient. Institutions that serve their members. Platforms that create genuine value.

For society: Collective intelligence that's greater than individual intelligence. Coordination on hard problems. Shared prosperity, not winner-take-all dynamics.

None of this is automatic. None of it is guaranteed by technological progress alone. It requires intention. It requires asking different questions. It requires building differently.


What I'm Paying Attention To

I don't have all the answers. But I know what questions I'm sitting with:

How do humans actually decide in AI-rich environments? Not how rational choice theory says they should. How they actually do. What biases get triggered? What vulnerabilities get exploited? What capabilities get enhanced?

How do AI agents behave in the wild? Not in controlled benchmarks. In real ecosystems with real humans and real stakes. What emergent behaviors arise? What failure modes appear?

What makes coexistence work? When humans and AI agents share an ecosystem, what conditions lead to good outcomes? What leads to bad ones? Can we identify the design patterns, the mechanisms, the interventions?

How do we measure prosperity? Engagement metrics are easy to track. Wellbeing, flourishing, genuine value—these are harder. But if we can't measure them, we can't optimize for them. What would better metrics look like? (I sketch one toy possibility just after this list.)

What's the path from here to there? From attention extraction to prosperity support. From exploitation to empowerment. From tools that use us to tools that serve us. What needs to change? What can change? Who changes it?
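
On the metrics question, here's one toy possibility, purely hypothetical and mine alone: discount engaged time by whether the person later says they regret it, so captured-but-regretted minutes count against the platform rather than for it.

```python
def regret_adjusted_minutes(minutes: float, regret: float) -> float:
    """Toy metric (hypothetical): weight time-on-platform by self-reported
    regret in [0, 1]. Regretted minutes count against the platform."""
    return minutes * (1.0 - 2.0 * regret)

# Sixty engaged minutes the person felt good about versus sixty they regretted:
print(regret_adjusted_minutes(60, 0.1))   #  48.0 -> counts mostly positive
print(regret_adjusted_minutes(60, 0.9))   # -48.0 -> counts strongly negative
```

It's crude, it's gameable, and self-report is noisy. But even a crude measure of regret would point the optimization somewhere very different than raw watch time does.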


The Transformation We're Living Through

Sometimes I step back and try to see the full scope of what's happening.

We are living through a transformation in the nature of agency itself.

For the first time in history, we share our world with other entities that perceive, reason, decide, and act. Not tools that extend our capabilities—agents that have their own. Not yet equals, perhaps. But not mere instruments either.

This transformation could go well or badly. It could enhance human flourishing or undermine it. It could create new forms of prosperity or new forms of suffering. The outcome is not predetermined.

It depends on choices. Choices made by the people building these systems. Choices made by the societies trying to govern them. Choices made by the individuals navigating them daily.

What I know is this: the choices will be made whether or not we're thoughtful about them. The ecosystems will evolve. The agents—human and artificial—will interact. The dynamics will emerge.

The only question is whether we'll be intentional about shaping those dynamics toward prosperity, or whether we'll let them unfold according to whatever local incentives happen to dominate.


A Personal Note

I think about this a lot because I see it in my own work.

I've built ML systems that treat people as data points. I've worked on customer analytics that miss the human behind the behavior. I've seen engagement optimization that doesn't ask whether the engagement is good for anyone. I've watched the gap between what we measure and what actually matters.

And I've also seen what's possible. AI that genuinely helps people think through hard problems. Systems that support rather than exploit. Tools that extend human agency rather than capture it.

The technology doesn't determine the outcome. The choices we make about the technology do.

What I want—what I'm working toward—is a way of thinking about these problems that keeps the full picture in view. The agents, in all their complexity. The decisions, with all their uncertainty. The ecosystems, with all their emergent dynamics. And always, always, the question of prosperity: is this helping agents flourish?

That's what I see when I look at the world right now.

And I think it matters.


These are observations, not conclusions. Questions, not answers. A point of view that's evolving as I learn more. If you're thinking about similar things, I'd love to hear from you.