In this blog, I want to present something I've been observing in real life—bounded rationality and how it shapes the way graduate students choose their courses. You can actually relate this to many scenarios: how your friend ends up finalizing an offer from Company A over other options, how I ended up choosing Columbia over other admits, or how I decided to pursue a master's degree rather than switching companies. You can take any such situation, but since we're getting started with 2026, I wanted to talk about this particular phenomenon.
Bounded rationality is something I happened to read about in the book Irrationally Rational. The book is essentially a compilation of different case studies and concepts from ten different Nobel laureates in economics, covering how our decisions work within the realm of behavioral economics. If you're someone interested in behavioral economics or how psychology impacts decision-making, I'd also recommend Misbehaving by Richard Thaler—it covers behavioral biases and paradoxes that exist in everyday situations, including classrooms.
What is Bounded Rationality?
Bounded rationality is the idea that people—and even organizations—try to be rational, but their decisions are constrained by various factors:
- Limited time – We can't deliberate forever
- Limited information – We don't have access to everything we need to know
- Limited attention and working memory – Cognitive constraints are real
- Limited computation – We can't optimize across all possibilities
- Imperfect models of the world – Our understanding is incomplete
Because of these limitations, humans tend to make decisions that feel satisfactory rather than truly optimal.
Think about it: every time we make decisions, we want to be optimal. We want to make the best possible decision from the options we have. Take air tickets when traveling back to India: the process should be optimal, meaning we bring the cost down to the minimum possible. Or consider investments: we want to invest in whatever gives the highest returns. It's as if we have an objective function and we want to maximize or minimize it.
But the problem is, we have a lot of constraints. You might not have time, so you can't think through all possibilities and calculate everything. You might not have enough cognitive capacity to think that way, or you might not even fully understand the situation. You might be in a hurry. You might lack computational ability. You might have imperfect models about how the world works.
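To make that concrete, here is a minimal Python sketch of the difference between optimizing and satisficing. Everything in it is made up for illustration: the flight names, the costs, the aspiration level, and the evaluation budget. The fully rational agent evaluates every option; the boundedly rational one can only afford to look at a couple and stops at the first one that is good enough.

```python
import random

# Toy setup: option names, costs, aspiration level, and evaluation budget are
# all invented purely to illustrate satisficing vs. optimizing.
options = ["flight_A", "flight_B", "flight_C", "flight_D", "flight_E"]
true_cost = {o: random.randint(300, 900) for o in options}

def research(option):
    """Stand-in for the expensive work of actually evaluating one option."""
    return true_cost[option]

# Fully rational agent: evaluates everything and picks the global optimum.
optimal = min(options, key=research)

# Boundedly rational agent: looks at only a few options (limited time) and
# settles as soon as something clears the aspiration level (good enough).
def satisfice(options, aspiration=500, budget=2):
    best_option, best_cost = None, float("inf")
    for option in options[:budget]:
        cost = research(option)
        if cost <= aspiration:      # good enough, stop searching
            return option
        if cost < best_cost:        # otherwise remember the best seen so far
            best_option, best_cost = option, cost
    return best_option              # settle for the best of what we managed to see

print("optimal:", optimal, "satisficed:", satisfice(options))
```

Both agents walk away with a flight; the only difference is how much of the search space they could afford to look at before settling.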
Consider people who know how to do tax-saving investments: they can handle things well because they have the right apparatus or help from chartered accountants. Those who don't have that knowledge end up getting it badly wrong.
Or consider someone choosing between Company A and Company B. They might have very imperfect models of how the tech ecosystem works. They might just compare salaries without considering how the industry would evolve or how things might play out down the line.
I always compare this to people who joined Mu Sigma versus those who joined other companies. In the beginning, in the first month, salary-wise, TCS, Infosys, or even some product companies were paying anywhere from 30-35,000 to 60,000 per month, while Mu Sigma was around 25,000. But it's not about how things look at that particular moment; it's about how they play out over time. "This company is going to yield a lot of experience. It's going to be a very good platform." Versus someone with less understanding of the tech ecosystem who might just prioritize whoever pays more.
This is what it's like when we're looking for internships or jobs—we don't know what hurdles or issues might come in the future. It might be related to immigration, or we might get stuck somewhere. We're not sure about things because human decision-making is always under uncertainty. Uncertainty comes with different dimensions, and there are constraints as well—constraints that aren't fixed but change over time.
We also have cognitive limitations. One person might choose a course in a very calculated way, while someone else might not even want to go through the process of comparing multiple things or checking multiple syllabi. They lack the attention, the working memory—the RAM, so to speak. They might also be emotionally weakened, unable to take certain risks.
Then there are computational limitations. You may not be able to validate all the options you have. When I was getting a loan, I wasn't able to go through all possible banks and loan products. I just looked at one or two, compared them against basic things like annual interest rate, whether it's cumulative, whether it has partial interest—very basic stuff. Maybe if I'd explored further, there would be something very underrated but monetarily efficient. But I lacked the ability to validate all possibilities. I couldn't compute things in a very critical way.
And we have time limits. We're bounded by time, so we can't do everything perfectly. We have incentives and goals that are sometimes limiting. We might need to balance things out: one option might have considerable positives and some negatives, but we can't dismiss it based on just one or two aspects. We need to see the overall goal.
Since these limitations exist, the decisions we end up making are ones that feel satisfying rather than truly optimal.
This morning, I wanted to eat something. I had multiple things available, but my bounded rationality wasn't going to let me calculate macros and micros and make it perfectly healthy. Instead, I opened my refrigerator and saw a milkshake: tasty, with some protein and not too many calories. One bottle might give me 20 grams of protein and enough calories for breakfast. I also had avocados, so I just ate those. Sometimes our emotions or feelings push us toward a feast that is high in calories, fat, and carbs, and we're not optimizing at all. It just feels satisfying.
How Bounded Rationality Influences Course Selection
Now let's get to the actual business: how bounded rationality influences how people choose their courses.
In a very optimal, ambitious, calculated world, every student would want to pick courses that are great. What makes a course great? It's demanding, relevant to industry or research, very fundamental, and taught by world-class faculty. It has no exams but a great set of projects. The professor brings in different guest lecturers. Your objective function is parameterized with multiple elements: a good professor, good content, good setup, a good curriculum, good projects, no exams, the ability to collaborate with many people. Sometimes people also think, "The professor has a lot of industry connections and could help me get a job," or "Taking this course might help me work with this professor in upcoming semesters."
So logically, a great course has multiple elements put together. But the problem is: the course is great, but are we able to take it?
Here come the constraints. There's a waitlist. Priority is given to computer science students, so data science students can't register on the first day. Priority is given to students who have taken the professor's previous course. Sometimes only PhD students are allowed. Such constraints keep an individual from acting on the optimal decision of "I have to take this course." Because of that, the decision gets redirected to something else.
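One way to picture this, purely as a sketch, is a weighted objective over course attributes plus a hard feasibility filter for the constraints. The course names, attribute scores, weights, and eligibility flags below are all hypothetical; the only point is that the unconstrained optimum and the choice you can actually act on need not be the same course.

```python
# Hypothetical courses with attributes scored 0-10; "eligible" collapses the hard
# constraints (waitlist, CS-only priority, PhD-only, etc.) into a single flag.
courses = {
    "Advanced ML Systems":      {"faculty": 9, "projects": 9, "no_exams": 1, "industry": 8, "eligible": False},
    "Applied Causal Inference": {"faculty": 8, "projects": 7, "no_exams": 0, "industry": 7, "eligible": True},
    "Intro Data Visualization": {"faculty": 6, "projects": 5, "no_exams": 1, "industry": 5, "eligible": True},
}

# A made-up objective function: a weighted sum over the things this student cares about.
weights = {"faculty": 0.4, "projects": 0.3, "no_exams": 0.1, "industry": 0.2}

def score(attrs):
    return sum(weights[k] * attrs[k] for k in weights)

unconstrained_best = max(courses, key=lambda c: score(courses[c]))
feasible = {c: a for c, a in courses.items() if a["eligible"]}
constrained_best = max(feasible, key=lambda c: score(feasible[c]))

print("What the objective function says:  ", unconstrained_best)
print("What registration actually allows: ", constrained_best)
```

The objective function still exists; the constraints just remove its maximizer from the feasible set, which is exactly when the decision gets redirected to something else.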
There are other constraints too. You might be ambitious or bullish about a course, but you're not aware of essential things: the assignments are very tedious, or there are industry visits, or many people failed the course last semester.
Even when we do have the opportunity to take a course, we presume or assume things, because these judgments can't simply be stated as universal truths. I might see a course as very theoretical or mathematical, while another person might see it as redundant. These limitations in cognition and information distort the way we assess courses.
In reality, a course might be very good. But when people evaluate it, they might say, "This is not good at all," because biases and heuristics exist. Someone really into applied research or theoretical foundations might see any database or computer systems course as boring. Someone interested in systems and distributed computing might wonder, "Why should I even care about computational learning theory?"
In reality, such courses might benefit their career or give them a better outlook on the program. But sometimes people just stay away: "I don't want to do that." They follow the current trend—agentic AI is the trend, or computer vision, or high-performance machine learning sounds fancy.
Not everyone does it the same way. A few do very critical analysis and review; they have their own approach. At the end of the day, whether everyone agrees or not, the individual believes their choice is optimal: optimal to them, and more importantly, satisfying to them.
If you ask someone what courses they're taking and how they ended up selecting them for their career, we may not be able to compute that optimization function. As I mentioned, everyone lacks that computational ability, that ability to optimize. I may not be able to say that, for a particular person's career, Course A makes more sense than Course B. Even if I could calculate it, the person selecting the course may not be able to.
Consider a beginner taking three or four courses that are completely scattered but very satisfying to them; they're not able to optimize the selection. The lack of all these things produces what look like irrational decisions, or at least non-optimal ones. But for whom are they not optimal or rational? For everybody other than this particular person. They believe their choice is optimal or rational, but really it is satisfying rather than rational.
A Case Study: The LLMs and Generative AI Course
I've observed that many people have gravitated toward a particular course on LLMs and generative AI, which covers almost everything on the computational and systems side of deep learning and LLMs. Lots of people are joining; there are two sections, and they're expanding. But the questions are:
- Is everyone interested in that?
- Is everyone going to benefit from it?
- Considering that LLMs, agents, and such things are becoming more abstracted frameworks—seeming like fundamental software engineering things—do we even need to take a course for it?
There are no right or wrong answers here. I don't want to defend any particular position. But think through this. People still take it because it's the current thing. "This is the current one"—that might be the perfect answer you'd hear from everyone. But whether it's rational or satisfying is something we might need consensus on. How can we determine whether something is rational? It has to come through some consensus or common sense, because one person cannot decide for everyone.
That's why most decisions end up being boundedly rational: bounded by specific constraints, limitations, and the lack of certain things like cognitive strength.
Some Weird Patterns I've Observed
I've seen people select courses just to get grades. They know the course isn't going to help them. They're not buying into the course content at all. But it's like, "Okay, this course might be a chill pill kind of thing."
There's also a signal-to-noise problem playing a huge role in admitting students to courses. A professor wants to admit people but has a capacity limit of 30 students. Lots of people want to get in, creating noise: lots of emails, lots of requests, everyone with different pitches that look very similar. Extracting the signal from that noise, figuring out who can really do well and who can really contribute to the class, is a big deal. The signal gets submerged or overshadowed by noisy information. Somebody says one thing, somebody else mentions a particular paper, and whatever is irrelevant to the admission decision is just noise because it doesn't help decide anything.
This causes people in critical decision-making positions to fumble or make boundedly rational decisions. They choose people based on very specific things, based on their heuristics. If I wanted to select people, I might have my own conditions: I might prioritize people who have taken causal inference for some opportunity, or prefer people from a specific school, like economics, for a behavioral economics course.
Such heuristics create unfair advantages for specific people and disadvantages for others.
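As a toy illustration of that gap, and not a description of how any real course admits students, here is a sketch where a professor with a capacity of 30 admits from noisy applications using a cheap heuristic, while the "true fit" they would ideally measure stays hidden. The applicant pool, its attributes, and the heuristic itself are all invented.

```python
import random

random.seed(0)

# Invented applicant pool: "true_fit" is what the professor would ideally measure
# but cannot observe; the other fields are the noisy signals visible in emails.
applicants = [
    {
        "name": f"student_{i}",
        "true_fit": random.random(),
        "took_causal_inference": random.random() < 0.3,
        "from_econ_school": random.random() < 0.2,
    }
    for i in range(100)
]
capacity = 30

# The unobservable ideal: admit the 30 students with the highest true fit.
ideal = sorted(applicants, key=lambda a: a["true_fit"], reverse=True)[:capacity]

# The boundedly rational shortcut: a cheap heuristic over the visible signals.
def heuristic(a):
    return 2 * a["took_causal_inference"] + a["from_econ_school"]

admitted = sorted(applicants, key=heuristic, reverse=True)[:capacity]

overlap = {a["name"] for a in ideal} & {a["name"] for a in admitted}
print(f"Heuristic picks that also appear in the ideal list: {len(overlap)}/{capacity}")
```

Whoever happens to match the heuristic gets in; everyone else depends on luck, which is the unfair advantage described above.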
From the outside, this seems simple: it doesn't have any big economic or business value around it. Everyone registers for three or four courses for the semester. Most courses fill up. But there's no mechanism guiding anyone to take this or that course, and there can't really be one anyway.
As everyone makes their choices, it looks more like self-organizing behavior. Some people head toward a particular course, they talk to each other, they realize another professor or course is coming up that overlaps with their plan. That conversation triggers the next step in the decision-making process, and they end up doing something else.
At the end of the day, information gets consumed and diffused at different levels. It can be private, half-baked, or very critical insider information like "this is how it's going to be." Something that looks simple from the outside has a lot encoded in it. Most of the time we don't even give a damn about this. It's just how human brains and behavior are intertwined and how decisions are minted.
All such decisions are boundedly rational. All are constrained by any number of things. There's no right or wrong; it's all subjective because satisfaction is subjective. If I'm satisfied with Course ABC, it may not make any sense or give any satisfaction to somebody else.
How Can We Deal With It?
Some limitations can be overcome. For the unknown unknowns, over time and through exploration, we can build up proxy information. Cognition and computational ability come with experience: exposure to different concepts, trends, how people see things, and what kinds of companies are using what.
Otherwise, we might end up selecting a rudimentary course that sounds fancy and simple. After the semester ends, we realize it doesn't have much substance—just reading random papers or implementing very fundamental things.
Sometimes none of us were aware of a specific course's grading policy. Most people chose it thinking, "This course will help me prepare for interviews, it's more applied, I might end up doing a project." But the final outcome was completely different. Many people were dissatisfied and disappointed because it didn't go into depth or cover the critical parts; it just walked through "here's the introduction, what is what, and how to do this with some Python packages."
You can't judge this a priori, because you may not have a complete picture of all these things, and the information may not even exist.
So yeah, this is what I wanted to cover about bounded rationality—how our decisions are bounded by constraints and limitations, and how we still feel very happy about them. Very few might question it, but most of us live with our satisfying choices, and perhaps that's okay.
We optimize what we can, satisfice what we must, and live with the rest.