No. 011

Bootcamps, Online Courses, and LinkedIn Influencers: A Misleading Start to Your Career

This post is about the misleading, overhyped things happening in online learning. There are still good courses and good content from the right people; I'm not dismissing everything. But I want to talk about the common patterns, hypes, and misconceptions you see across the internet in blogs, courses, and YouTube videos, especially around AI, machine learning, data science, and business analytics. That's where I want to focus.


LinkedIn Influencers

You can see a very common tone among many LinkedIn influencers. Sometimes people assume I'm also that kind of influencer. I'm not a typical influencer. Yes, I'm also trying to influence, but what I want is to help people think differently about the field: think more about concepts and right practices, explore as much as possible, and be more skeptical about methodologies, rather than writing every day about "this is what the interview will ask, this is what you need, this is the only technology, if you don't learn this you're not doing it right."

I don't want to recirculate what most people are saying about this industry, because it doesn't help people build the personality of a data scientist or machine learning engineer. More or less, it builds yet another interview-prep candidate. You cannot keep confining people within this interview-preparation mentality.

Maybe this is because I never prepared for interviews; everything happened very naturally, and serendipity played a major role. I believe that if you do the right things, things fall into place. Getting an interview or clearing one is a byproduct. It shouldn't be the main objective you tune yourself against.

Look at all these LinkedIn profiles: they always say "these are the three methods you need to prep, these are the four algorithms people will ask about in interviews." None of it kindles the data scientist or machine learning engineer mentality. These people refrain from talking about different problem spaces, discussing methodologies and directions, or sharing their efforts building something from scratch or replicating something.

I would suggest: if you're building a career, stick with the right set of people. Do not get swayed by all this LinkedIn yapping about interview preparation and "take only these courses, you'll be fine." First develop the mindset that learning is never going to end. See everything as an opportunity. Develop your own intuition about what is right and what is wrong. Do not think very linearly.

Many data scientists and machine learning engineers have the same flat perspective on problem spaces; they don't even discuss the problem. They just say, "Oh, you just apply this algorithm, you just use this tool." I remember one conversation where I was talking about a modeling technique, and someone said, "Oh, I saw that on Krishna X's YouTube channel, in this project he talked about." But the two projects were completely different: the modeling paradigms were different, the outcomes were different. People remember things only as interview references or as whatever online resource they've seen.

If some YouTuber comes up with an algorithm or talks about one, everyone just follows. If someone brings up some meme notion, they follow it. That's how, last year, we got very rudimentary comparisons between "Agentic AI" and "AI Agents," which are literally the same thing, just wordplay. But some irresponsible LinkedIn profiles kept posting about it based on half-baked arXiv papers.

We should have the intuition not to get carried away by the hype these people create. It isn't actually helpful.

Another thing: I'm quite bored of these people's mindset and targets. Many of them do this just to fabricate evidence for EB-1 or O-1 visas. Not everyone, but some people I've observed do the LinkedIn writing and courses and stories just to accumulate evidence for visa petitions. Legally, conceptually, logically, there's nothing wrong with that. But are they really helpful to the learner? That's the question.

They recirculate existing material. They don't let people think further. They build very illusory pictures for beginners, and the beginners can't move forward; they get stuck on three or four courses or YouTube videos. Multiple things contribute to sabotaging beginners, but one major one is not setting the right direction or giving the right information, all in the name of LinkedIn or Twitter content.

Too much hyping is bad. Too much marketing of something irrelevant is not going to help beginners.


Bootcamps

Recently I observed a bootcamp. Everyone on Twitter was happily sharing, "I got into this bootcamp, so happy to be part of this cohort." The curiosity is good. But the previous batch's students were disappointed: the courses weren't good, the main contributor wasn't taking the classes (a teaching assistant was), and they weren't covering as much as promised.

This keeps happening because people running online courses face real constraints. Another thing: many of the people actually teaching this material have never worked in industry, or sometimes even in academia. What they do is learn and practice by themselves. Most of the time they haven't applied anything at scale or built anything very realistic, but they teach. That's fine for the basics or introductory parts, but when it comes to giving a full perspective or helping students go further, it's very insufficient.

Sometimes the course progression feels weird. Many bootcamps start with Python, Pandas, and NumPy, then jump directly into machine learning: scikit-learn, how you implement things. But they don't cover the theoretical aspects or even the conceptual side. The projects are dated, and almost everyone builds the same ones. So the essential thing, enabling someone to think uniquely, never happens. There isn't enough conceptual grounding. You jump straight into one or two notebooks, keep running them, and feel like "I've built it." And often it's a pre-built project anyway.

I myself taught in an online program where support from the program was not there at all. It was just me and the students. I had to show and teach everything; the program wasn't helping at all.

The overselling is happening a lot. People with three-year gaps, four-year gaps, ten-year career gaps: everyone bought the dream that "you will get a job, this doesn't need math, this is very easy, you don't need a programming background." These programs simplify the entry prerequisites, but the expectations drop along with them. People can't learn everything, or even the necessary parts. They struggle: "I don't know, I couldn't understand, I don't like this, SQL is so tough," or "I like SQL but I don't see statistics helping me." They start with data science or machine learning and end up doing only SQL or only dashboarding, or they just struggle a lot.

Now everything is agentic AI or AI engineering. Many courses are overloaded: they cover machine learning, deep learning, agentic AI, and AI engineering all at once. People want to do everything, but they don't have the bandwidth to focus on so many things. Uniqueness gets spoiled, bandwidth isn't there, conceptual grounding isn't there. All that's left is "okay, I'm in this cohort."


Understanding What Roles Actually Require

The world is evolving very fast. I feel that in 2025 or 2026, someone who wants to become a data analyst should not even touch much machine learning or statistics. Many data science programs and data science jobs in the current industry, at many companies, are really dashboarding. That requires SQL, Python, Tableau, and at most some scripting tools. These jobs don't demand many statistical or analytical skills. They're not even doing analysis; they do reporting and number crunching.

People ask: what were last year's sales, what is the sales drop, what is the footfall, what are gross sales and net sales. That's what's happening. They're not doing real investigation or hypothesis testing or seeking specific evidence from data. In many roles, the daily battle is how to run SQL queries fast or how to fix dashboards.
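To be concrete about what that day-to-day reporting looks like, here's a minimal pandas sketch. The data and column names are synthetic, my own stand-ins for a real warehouse table:

```python
import pandas as pd

# Synthetic sales data standing in for a real warehouse table
sales = pd.DataFrame({
    "year": [2023, 2023, 2024, 2024],
    "region": ["north", "south", "north", "south"],
    "gross_sales": [120.0, 80.0, 100.0, 95.0],
    "returns": [10.0, 5.0, 12.0, 6.0],
})
sales["net_sales"] = sales["gross_sales"] - sales["returns"]

# The typical reporting ask: yearly net sales and the year-over-year change
yearly = sales.groupby("year")["net_sales"].sum()
print(yearly)
print("YoY change:", yearly.loc[2024] - yearly.loc[2023])
```

There's nothing wrong with this work, but notice there is no hypothesis, no inference, and no model anywhere in it.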

Understanding what a role requires and checking your alignment with it is much needed. Starting from that alignment, or transitioning with it, is likely to be far more fruitful than being scattered.

The same goes for machine learning engineering. People who learn machine learning from a CS background versus a data science or other background turn out completely different. Even at Columbia I've seen people learning machine learning who never take probability or statistics courses. They go directly into machine learning theory, and that's perfectly fine. Most ML algorithms (XGBoost, logistic regression, linear regression) don't demand many statistical prerequisites from the ML perspective. But from the statistical-inference perspective, the same models demand much more statistical inquiry, and those roles are completely different.
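To make that split concrete, here's a minimal sketch (synthetic data, my own example) of the same logistic regression used two different ways: scikit-learn for pure prediction, where nobody asks for a p-value, and statsmodels for inference, where the coefficients and their uncertainty are the whole point.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                     # two synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# ML perspective: fit, predict, measure accuracy. No p-values in sight.
clf = LogisticRegression().fit(X, y)
print("accuracy:", clf.score(X, y))

# Inference perspective: same model family, but now the questions are
# about coefficients, standard errors, and p-values.
model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())
```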

What kind of machine learning you want to focus on matters. If you want NLP or computer vision, you can start with neural networks, BERT, and transformers straight away and fully focus on that math. Come back whenever needed: for example, if you're doing contrastive learning or representation learning and find you need a better understanding of information theory, come back and learn it then.
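As an illustration of that "come back when needed" moment, here's a minimal numpy sketch of an InfoNCE-style contrastive loss (my own example, not from any particular course). You can type this out and run it mechanically, but understanding why it works pulls you straight back into information theory, since InfoNCE is a lower bound on the mutual information between the two views.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of paired embeddings.

    z1[i] and z2[i] are two augmented 'views' of the same example;
    every other row of z2 acts as a negative for z1[i].
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)

    logits = z1 @ z2.T / temperature   # (batch, batch) similarity matrix
    # Cross-entropy where the "correct class" for row i is column i
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
batch = rng.normal(size=(8, 16))
# Nearly identical views should give a low loss
print(info_nce_loss(batch, batch + 0.05 * rng.normal(size=(8, 16))))
```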

The starting point is often defined very generically, so people miss it. In my previous applied AI/ML blogs, I detailed some areas to start with. But I'm deliberately not giving very specific checklists: I want to let people explore and set their own path.

Certain career directions need certain fundamentals. If you want to become a data scientist with deeper statistical knowledge, start with probability and statistics. But if you want computer vision or ML systems, you don't need to worry too much about statistics or probability. Start with how the algorithms work, what it takes to deploy them, and what systems are required.

Years back there was just "machine learning engineer" or "data scientist." Now many more nuanced roles are emerging.

Forward deployed engineer is a role everyone is fond of right now. But one cannot become a forward deployed engineer by doing courses. The job is wearing multiple hats: customer support, understanding client problem statements, building tailor-made solutions, and having a deep understanding of whatever AI, data science, or data platform products the company has built.

What we did at Mu Sigma was quite similar to forward deployed engineering: we completely owned the client's problem and built everything needed to solve it. If the client needed end-to-end data engineering, then a machine learning model, then a front-end layer and an operational layer, we built all of it. That's what forward deployed engineers are doing.

Considering that this role is emerging, I would say: get very good at systems integration and back-end programming, and develop your niche in machine learning, AI engineering, or data platforms and data management.

At companies like Databricks, Informatica, Salesforce, or Palantir, you'd deal with data platforms. As a forward deployed engineer at Anthropic, OpenAI, or Perplexity, you'd need more understanding of AI engineering: the nuances of different models, prompting, agents.

Actually, to put it simply: it's going to be like the Salesforce developers or Informatica developers of the old days, just under a pretty new name, forward deployed engineer. At the end of the day, they're going to build agentic AI workflows: building agents, putting in guardrails, writing evals, handling deployments and sandboxes. That's what they do.
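To give a flavor of that work, here's a toy, self-contained sketch of an agent loop with two simple guardrails: a tool allowlist and a bounded step budget. Everything here is a stand-in I made up for illustration; `fake_model` and the tool table are not any vendor's actual API.

```python
# Toy agent loop. fake_model and TOOLS are made-up stand-ins for a real
# model API and real tool implementations.

def fake_model(messages):
    """Stand-in for a chat model: requests one tool call, then answers."""
    if any(m["role"] == "tool" for m in messages):
        return {"final": "Revenue last quarter was 1.2M (via sql_query)."}
    return {"tool": "sql_query", "args": {"query": "SELECT SUM(sales) ..."}}

TOOLS = {"sql_query": lambda args: "1.2M"}     # pretend database result
ALLOWED_TOOLS = set(TOOLS)                     # guardrail: explicit allowlist

def agent_loop(user_request, max_steps=5):
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):                 # guardrail: bounded step budget
        action = fake_model(messages)
        if "final" in action:                  # model decided to answer
            return action["final"]
        if action["tool"] not in ALLOWED_TOOLS:
            messages.append({"role": "system", "content": "Tool not allowed."})
            continue                           # guardrail: refuse unknown tools
        result = TOOLS[action["tool"]](action["args"])
        messages.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted."

print(agent_loop("What was revenue last quarter?"))
```

The real job is everything around this skeleton: evals that catch regressions, sandboxes for the tools, and the deployment plumbing.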

If you're really interested in that, don't waste your time on unwanted courses or on fundamentals meant for some other path. Get started directly with understanding how AI models, meaning modern deep learning and generative AI models, actually work.

If you imagine yourself as not at all inclined toward math, not going to train a model, not wanting to do any of that, but wanting to focus on building AI products or solutions, then stick with the basics: how the models work, how to control and steer them, and what it takes to build the apparatus around them to deliver end-to-end solutions.
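What "apparatus around the models" means in practice is things like the sketch below: steering the model with a strict prompt, then validating its output before anything downstream trusts it. `call_model` is a hypothetical stand-in that returns a canned reply so the example runs; swap in whichever model API you actually use.

```python
import json

SYSTEM_PROMPT = (
    "You are an order-intake assistant. Reply ONLY with JSON of the form "
    '{"product": str, "quantity": int}.'
)

def call_model(system_prompt, user_message):
    """Hypothetical stand-in for a real model API call."""
    return '{"product": "widget", "quantity": 3}'   # canned reply for the demo

def get_structured_order(user_message, max_retries=2):
    """Steer with a strict prompt, then validate before trusting the output."""
    for _ in range(max_retries + 1):
        raw = call_model(SYSTEM_PROMPT, user_message)
        try:
            order = json.loads(raw)
            assert isinstance(order["quantity"], int) and order["quantity"] > 0
            return order                            # passed validation
        except (ValueError, KeyError, AssertionError):
            continue                                # retry on malformed output
    raise RuntimeError("Model never produced valid JSON.")

print(get_structured_order("I need three widgets"))
```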

If you're interested in the data side, or the econometrics or business side, then obviously build your leverage in statistical inference and statistical modeling. I would say causal inference in particular is a point of leverage. People should start advocating for it; that's how companies will start adopting it, seeing where they can use it and how they can play around with different models.
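Here's a small simulated example (my own illustration) of why causal thinking is leverage: a naive comparison of treated versus untreated users looks impressive until you account for the confounder that drove who got treated in the first place.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

engagement = rng.normal(size=n)                            # confounder
p_treat = 1 / (1 + np.exp(-2 * engagement))                # engaged users opt in more
treated = rng.random(n) < p_treat
outcome = engagement + 0.2 * treated + rng.normal(size=n)  # true effect = 0.2

naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"naive difference:    {naive:.2f}")                 # inflated by confounding

# Adjust by comparing treated vs. untreated within narrow engagement strata
edges = np.quantile(engagement, np.linspace(0, 1, 21)[1:-1])
bins = np.digitize(engagement, edges)
effects = [outcome[(bins == b) & treated].mean()
           - outcome[(bins == b) & ~treated].mean() for b in range(20)]
print(f"stratified estimate: {np.mean(effects):.2f}")      # close to the true 0.2
```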

Another thing: with all these OpenAI-style models available, you can deepen your understanding of things like the interpretation of p-values and how they can be misleading. There are a lot of misconceptions about linear regression and other ML algorithms, and you can now really get into the details. Whatever your level, you can use AI models to learn things in depth.
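One classic misconception you could dig into this way: a p-value is not the probability that the null hypothesis is true. Under a true null, p-values are uniformly distributed, so about 5% of null experiments still come out "significant." A small simulation (my own example) makes that concrete:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 10,000 experiments where the null is TRUE: both groups share the same mean
pvals = np.array([
    stats.ttest_ind(rng.normal(size=50), rng.normal(size=50)).pvalue
    for _ in range(10_000)
])

# Roughly 5% come out "significant" purely by chance; the p-values are
# uniform on [0, 1], not evidence of anything.
print("fraction with p < 0.05:", (pvals < 0.05).mean())
```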


YouTube Binge Watching

Binge-watching courses might give you satisfaction, but it doesn't build the problem-solving muscle. It only gives you familiarity: "oh, okay, this is what rerankers are, this is what retrievers are, this is what he said." That's one thing.
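The gap between familiarity and muscle shows up the moment you try to build even a toy version yourself. A retriever is nearest-neighbor search over embeddings; a reranker re-scores the shortlist with a slower, more careful model. Here's a minimal numpy sketch, with a made-up scoring function standing in for a real cross-encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1000, 64))   # pretend corpus embeddings
query = rng.normal(size=64)

def retrieve(query, docs, k=20):
    """Retriever: fast cosine-similarity top-k over the whole corpus."""
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    scores = docs_n @ (query / np.linalg.norm(query))
    return np.argsort(scores)[::-1][:k]

def rerank(query, docs, candidate_ids, k=5):
    """Reranker: slower, more careful scoring, but only on the shortlist.
    The dot product here is a placeholder; in practice, a cross-encoder."""
    scores = [float(docs[i] @ query) for i in candidate_ids]
    order = np.argsort(scores)[::-1][:k]
    return [int(candidate_ids[i]) for i in order]

shortlist = retrieve(query, doc_embeddings)
print(rerank(query, doc_embeddings, shortlist))
```

Watching someone explain this gives you familiarity; writing and debugging it yourself is what builds the muscle.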

Another thing: sometimes people don't believe in the courses they take. I see people attend a professor's course, but all they do is watch some other YouTube videos. There's a real issue with adapting to the language professors use and following the nuances a professor or teacher brings. People always want "oh, he explained it in a way I could easily understand."

When we go deeper into mathematical terminology, the concepts vary and take more time. We have to keep practicing, keep talking about it, keep thinking about it; only then do we get it.

For example, there's a model called JEPA, the Joint Embedding Predictive Architecture. On LinkedIn and elsewhere, people say "LLMs are not the end, LLMs are not good, something something Yann LeCun said." But as an applied ML engineer or AI engineer, you need not care whether JEPA will work out or whether LLMs are going to fail. All this research is far ahead of its time and very early in terms of scale. We cannot do anything quickly with these different narratives and points of view.

All we can do as applied machine learning engineers, ML engineers, and AI engineers is build whatever is possible at this particular moment. Customers and industries will not wait, and they cannot bet on very futuristic things.

We need that filtering capacity: distinguishing aspirational, long-term thinking from what is needed right now. Many people confuse the two. They talk like Andrej Karpathy on certain topics, but that's completely detached from what they want to be or what they're preparing for.

If you're into research or experiments, by all means keep that work as a reference. But preparing for an interview while getting tangled in other points of view can be intimidating. I have observed this in many people.


What I Would Suggest

If you've decided to learn online, stick with very good industry personas: their writing, their detailed work in books. You can also watch lecture series from many universities; they have detailed, comprehensive material, with readings and assignments.

Other than that, in this agentic AI era, going back to very rudimentary beginner tutorials might be underwhelming.

Always think bigger. Set bigger expectations and keep improving toward them.