No. 010

Spring 2026: Setting a Prior

The spring semester at Columbia starts January 20th, and I want to capture some learning from Fall 2025—how I want to avoid certain troubles, what worked, what didn't. Just setting a prior, you know?


How Was Fall 2025?

For me, it was a very good start. I was able to ground myself on the theoretical side of data science and machine learning.

Algorithms for Data Science was the first time I properly studied algorithms. In my undergrad, control systems engineering, we never had analysis of algorithms, and over time I found it really interesting. Combining LLMs with specific algorithms (greedy algorithms, metaheuristics) can solve many problems. I was reading about AlphaEvolve, which combines LLMs with evolutionary algorithms for algorithmic discovery. Even someone into survey design or mechanism design might need graph structures, heaps, bipartite matching. I can see the potential; I can think very broadly now. I didn't score well on this one because I was so deep into the other three courses that I never fully focused. But the long-term value is there.

Probabilistic Machine Learning: for a very long time I wanted theoretical grounding in this. In the past I worked on Bayesian networks, state-space models, latent variable models, but always through libraries. This course teaches you to build your own architecture, your own model. The PhD students and professor who reviewed my project said it was strong and had potential to develop further. The comments were really good, which is what I expected, and I worked for it. That personal satisfaction matters.

Reinforcement Learning: considering the scaling of LLMs and their applications with RL, Richard Sutton's bitter lesson, and RL being a neighbor of control engineering, I had to take it. The course covers everything from MDP formulation to multi-agent RL. I got to work on a multi-agent RL project, produced results, and got very good comments and directions from the reviewers.

Causal Inference: in industry, whatever causal inference I worked on was mostly effort estimation and lift calculation, stemming from A/B testing or double machine learning. Maybe regression discontinuity, but rarely. This course is purely the Pearlian framework. Even though I've been reading The Book of Why, the mathematical treatment here was really good. And it leads naturally into next semester: I'm TAing another causal inference course. Take the course, then reinforce it through teaching.

So course-wise, a very good start. A very good stepping stone towards ML research and what I want to explore. It also shows me what problem statements I can work on going forward in NLP, causal inference, and reinforcement learning.


What I Have to Change

The thing I have to change: I have to start very early.

The problem is I always dwell on a lot of other personal things: conceptualizing a product idea but not building it fully, maybe just prototyping it. Getting into books and reading a lot of theoretical or very philosophical explorations of how thinking works, how the world works. Last year I was in a rabbit hole of complexity science and the history of artificial intelligence. I'm always interested because it gives a good thought process for current things too. But still, I have to balance it out.

Most of the time I dwell, or I chase old texts and very conceptual material. So I end up starting the assignments and readings midway through the semester. I have to start early so I have the freedom to read as much as I want. That's where to prioritize.


Collaborations

Collaborations were kind of limited last semester. I collaborated with Sanjiv from chemical engineering, a PhD student, and Kavin from CAS doing his advanced master's research. Many of my projects were individual; even my probabilistic machine learning project was individual.

This semester I might need to start collaborating directly with professors. Email them, connect with them. More than working on a project, I'd like to collaborate with a professor on conceptualizing a thesis itself: proposing that a particular way of looking at a specific machine learning problem might help us. Very early aspirations, I would say.

Sometimes collaborations get diluted because people don't trust them or aren't curious. I usually ask people, can we collaborate on some ideas, but they're not interested in building the idea itself. They always want the complete recipe: let me join, let me cut vegetables, then we put both names on it. No, that's not how research works, right? Collaboration works at the brainstorming level itself. How we brainstorm, how our brains exchange ideas, process, synthesize, discuss, play devil's advocate, have all sorts of conversations, and then keep building and iterating. But many people are like, "Bro, take me onto your project, but I need the complete plan." That's not very inviting.

I had better experiences with some PhD students who were hiring for their research. Maybe I seem overqualified, or they expect too much in terms of contribution. Sometimes they're not so open in conversation; I understand they have other commitments. But I might need to try multiple times. Reduce my ego too. Ego in the sense that I should try connecting with them, expressing interest, pitching twice or thrice, and keep trying. Right now I say, "I have an idea, I'm expressing it to you," and I expect results very quickly. Most of the time it won't happen. It's more or less like dating: we keep meeting people, discussing, seeing if they're interested, and then things happen. Not "Let us work." That needs a mindset change.


People and Wavelength

I interacted with people who were inviting and welcoming in conversations. I talked elaborately about LLMs, projects, and research experiments with a handful of people. But considering the number of people here, I shouldn't expect to talk to everyone. Wavelength, including how people grasp things, differs from person to person. Areas of interest vary too: many people are into finance, or only into building interview-ready things.

With my years of experience, my understanding of industry should help a lot of people. Whoever is seeking that, I'm helping them out. This spring, I'll work on that. If people find some barrier to building conversations with me, maybe I need to be more welcoming, more friendly. I've been maintaining that, but I have to be more accommodating.


Spring 2026 Courses

This semester I'm taking Advanced Machine Learning, Advanced Deep Learning from Professor Zoran Kostic, and Machine Learning, which reinforces all the fundamentals of ML theory.

There are two other courses I've emailed professors about, and I'll fight hard to get into them once cross-registration starts: LLM Alignment and Interpretability, because I worked on alignment training before, and Continual Learning, based on my reading and interest in systems behavior and intelligence. I find myself inclined towards continual learning.

If you look at the three electives from fall and the spring electives, they complement each other quite well. It's building a very good setup for me.


The Two Goals This Semester

Last semester we talked, explored, and experimented a lot: year of convergence and all that. This semester has two main goals:

One: Build a product.

Two: Build experiments and take them to publishable research.

I'm making it very structured. Weekly sprints: every week there should be an experiment. Work on something, come up with some idea and learning insights, and publish it.

This year started well; I've been writing daily blogs. It's helping me reinforce what I think, what I want to pursue, what I want to build. It's giving theoretical shape to Pracha Labs. I hope over time it evolves into a good venture.

For that, I need to develop one strong mindset: stick with one specific thing. Love one specific thing and love it every day, discuss it every day, build it every day. That's how I conceptualize this thing called Agentic Decision Sciences: machine learning and humans, AI and humans, AI agents and decision science. Not the operations research side, but the side of individual decisions and AI agents.

It's at a very initial, aspirational stage. I have to develop it further and take up one very specific thing.


Teaching Assistantships

Causal Inference for Data Science with Professor Adam Keller. He's an adjunct professor who worked at Barclays as head of data science research. I've read his blogs. I previously thought the course would be only on the practical side, but he's really into conceptualizing and pitching new ideas for data science. One blog was about systems data science: how systems thinking can be seen as causal inference in a natural way. Any economic system or business operation can be seen as a system, and any system can be seen as a Pearlian graph, a Pearlian DAG. With that, one can develop causal inference models. He talks about the limitations too, which made me happy. Working with him will help in having such conversations.
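The "any system can be seen as a Pearlian DAG" idea can be sketched in a few lines. This is my own toy illustration, not from Professor Keller's blog; the variable names (seasonality, marketing_spend, and so on) are hypothetical, and edges point from cause to effect.

```python
# Toy business operation as a causal DAG (hypothetical variables).
# Adjacency list: each key's children are its direct effects.
dag = {
    "seasonality": ["marketing_spend", "sales"],
    "marketing_spend": ["traffic"],
    "traffic": ["sales"],
    "pricing": ["sales"],
    "sales": [],
}

def is_acyclic(graph):
    """Depth-first check that no directed cycle exists (a DAG requirement)."""
    visiting, done = set(), set()
    def visit(node):
        if node in done:
            return True
        if node in visiting:
            return False  # back edge found: cycle
        visiting.add(node)
        ok = all(visit(child) for child in graph.get(node, []))
        visiting.discard(node)
        done.add(node)
        return ok
    return all(visit(n) for n in graph)

def parents(graph, target):
    """Direct causes of `target`: nodes with an edge into it."""
    return sorted(n for n, children in graph.items() if target in children)

print(is_acyclic(dag))        # True: a valid causal DAG has no cycles
print(parents(dag, "sales"))  # ['pricing', 'seasonality', 'traffic']
```

Once a system is written down like this, the graph itself tells you which variables are direct causes of an outcome and (with a criterion like Pearl's backdoor adjustment, not shown here) what to adjust for when estimating effects.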

QMSS TA with Professor Gregory Erik. I had a very good experience last semester; he was supportive and helped me conduct sessions. This semester I'll be conducting recitations again. The course is Advanced Analytic Techniques, concentrating on statistical methods, causal inference techniques, and econometric techniques.

I'm quite happy because many people in AI come from the tool side: building software, building web applications. They're good at programming; they learn the framework and build it. But I see things through problem solving with data, through systems thinking. My lineage runs not from software engineering to AI but through the foundations of data science, machine learning, and frontier AI.

If you look at what I've explored and associated with, last semester's courses and the TA and RA roles all revolve around different ML paradigms and data science. Probabilistic machine learning and causal inference have uses in both data science and AI. Risk analytics covers AI risk, cybersecurity risk, and banking and financial risk.

This semester too: LLM interpretability, continual learning, and advanced deep learning, along with causal inference, again reinforce the basic thing I want to concentrate on, the foundations of machine learning, data science, and AI. I don't want to dilute this with any specific domain. I want depth here rather than the application side, because across six years I've worked with many industries: commerce, retail, finance, banking, insurance, energy, aquaculture, fashion, software systems.

Learning business nuances is something I've figured out; I believe I'll be able to do that in the future too. I'm not saying I know everything, but I'll be able to learn. Right now I want to ground myself in theoretical depth, conceptual depth, and mathematical rigor, so the thinking process gets refined and we can build very good solutions, not just what we see on blogs or LinkedIn or Twitter.

Going to the root of these concepts. Learning from Professor Elias, a pioneer in causal inference. Professor Blei, author of foundational topic modeling and variational inference research. Professor Shipra Agrawal, one of the finest reinforcement learning professors in academia. Associating with such people gives a different thought process and very good direction in conceptual depth.


Mindset and Behavior Changes

I have to start early. Stick with very specific things. I should not wander around.

One thing I noticed last semester: analysis paralysis. I keep validating, validating, validating, expecting the idea to be perfect or very unique in the mathematical sense. I don't want to put those constraints on anymore. If we find something interesting and pretty new based on observations, keep doing it, iterate on it.

This is very common advice, always around. But the reality is in adopting it, given all the other thoughts we have and aspects we focus on. It takes time. When we keep focusing on nuances, expectations increase. These are the things I'm battling.


Internships and Outreach

I should reach out to founders and labs for internships, which I'm not doing religiously. Many people apply to 50 internships a day, whatever they find. I am particular and picky: frontier AI labs or very good applied AI and ML science teams.

I don't want to do dashboarding work or very simple ML engineering things. I want to contribute at the conceptual level, the design level, the core of the problem statement. I don't want to be in secondary teams. I want to be the core contributor.

For that I have to build certain things. Ramp up open source work rather than keeping it private. Be courageous enough to build in public. It can go right or wrong, but it has to keep going.

Learn fast, do fast, fail fast. Rather than doing a lot of mental validations.


Seeking Right Collaborations

I'm going to connect with professors and good PhD students. Planning to apply for MATS fellowship and Anthropic Safety fellowship. Good things.

Continue outreach—keep sharing ideas, keep building thought process so it helps people.

Make new friends, in as many places as possible. Spring is so good. New York is going to be beautiful, and the weather is going to be pampering.


Health

I'm fine, doing well. I have to eat properly: breakfast, something in the afternoon, dinner. I'm setting up nudges for that: getting good bowls, keeping the refrigerator filled so I don't run to fast food.

In winter I sometimes crave piping hot pizza. I ate whole pizzas three or four times, which I didn't do in the fall. This semester I shouldn't eat so much processed food. More protein: chicken, meat, milk, calcium.

Walk 5,000 steps every day. Walking gives good thinking.


Writing and Reading

Continue to write on LinkedIn. No need to worry about Twitter outreach right now; if we keep writing, we can build that too. I believe I have a unique point of view that's helpful for people in industry and academia. It's also needed for improving my leadership skills and how I belong in the industry.

Take up small mathematical exercises to dive into concepts—these courses will be a good nudge, a good canvas.

I might audit one course for good conversations and finding good people. If the professor is good, I shouldn't miss it. Maybe sit in on Advanced Reinforcement Learning again. Professor Blei is also teaching Applied Causality. Causality plus reinforcement learning is a very good pairing for decision making, recommendations, forecasting, consumer analytics.

Reading: Journey of the Mind will help with continual learning and LLM alignment; we can draw parallels. The Alignment Problem connects to agentic decision science, human-centered AI, AI safety. And Debt, because I'm going to focus on economics and finance, understanding markets and the American and global economy. As I build leadership skills, I need an understanding of these things.


The One Thing to Keep in Mind

Don't worry about outcomes. Keep building it, keep doing it. Do not get into analysis paralysis.

Develop one specific thesis continuously. That's what I'm naming Agentic Decision Sciences. It's going to take further shape, maybe into very good products or prototypes.

Winter was about finding myself, grounding myself in the right position, reading, exploring, doing experiments, and testing my capacity. A very good exercise.

Let's reduce that ego and apply to some good internships also.

That's what I have for spring. That's what I'm going to do.