Leadership Bytes for Coders
Goodbye bug squashing, hello people problems! Your guide to navigating the tech-to-leadership transition.
AI Is Not Here to Save You
Published on 2025-10-23 14:43
What do you get when you cross Kenneth Jennings with Google?
No one’s denying Ken Jennings is brilliant. Ask him any trivia question and he’ll fire back an answer. That’s why he’s the top Jeopardy! player of all time (and now its host).
“What’s the chemical name for laughing gas?” Nitrous oxide. Easy.
But try this: “Explain the molecular mechanism by which nitrous oxide functions as an anesthetic.” Crickets.
PS – the answer has something to do with how it interacts with GABA_A and NMDA receptors…
That doesn’t mean he’s not smart. It just means his intelligence lives in a specific lane… parroting facts, not synthesizing them.
Remember back before ChatGPT when we’d poke fun at Google’s autocomplete?
Why is the moon…
- Orange tonight?
- Red tonight?
- Made of cheese?
- Not real?
Good times.
Now? We’re asking ChatGPT to write book reports on Gravity’s Rainbow and losing our minds when the lit professor fails us for claiming the central premise revolves around a sentient, Gnostic lightbulb.
AI is not AI
I get it. AI is such a vague term that unless you’ve actually done your homework, there’s a ton of conflation happening (especially with the version the AI Mythicists out there love to portray: OMG, SKYNET IS GOING TO KILL US ALL TOMORROW).
You hear about some AI program predicting the structures of millions of proteins and compressing a century of research into a few years. So naturally, you think AI is super intelligent.
Then you hand Anthropic’s Claude your three pages of system documentation, expecting it to optimize your infrastructure… and your CISO starts screaming.
TLDR: AI isn’t just ChatGPT. There’s an entire field of machine learning engineering most people forget exists. The problem? When folks think AI, they’re thinking GPTs… and then attributing everything else machine learning does to it.
So when someone says they’re replacing software engineers with LLMs? Yeah. Everyone laughs.
The Problem With Being Ernest
So what exactly would it take to get LLMs, specifically GPTs, to do what folks want them to?
Parroting vs. Synthesizing
- Commercial LLMs train on billions of documents. Huge knowledge pool. But parroting information is fundamentally different from synthesizing it. Synthesis requires understanding concepts, not just word patterns. Feed one a bunch of chemical inputs and it’ll feed you a bunch of chemical outputs (that were in the training data). But ask it why those outputs turned out the way they did? Crickets.
- That gets us into the whole accuracy problem. Train an LLM on the internet, and half the time you ask it about the moon landing it’ll talk about a CIA coverup. With that much data in the training set, there’s very little information assurance. Remember: the goal with LLMs so far isn’t factual accuracy so much as language prediction.
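To make “language prediction” concrete, here’s a toy sketch in Python (all data and names hypothetical): a bigram model that predicts the most frequent next word in its training text. Truth never enters the objective; the distribution of the data does.

```python
from collections import Counter, defaultdict

# Hypothetical training corpus: conspiracies outnumber facts, as on much of the internet.
corpus = [
    "the moon landing was faked",
    "the moon landing was faked",
    "the moon landing was real",
]

# Count which word follows which.
next_word = defaultdict(Counter)
for doc in corpus:
    words = doc.split()
    for prev, cur in zip(words, words[1:]):
        next_word[prev][cur] += 1

def predict(prev: str) -> str:
    """Return the most frequent continuation. Pure prediction, no fact-checking."""
    return next_word[prev].most_common(1)[0][0]

print(predict("was"))  # prints "faked", because that's what the data says most often
```

Real LLMs are vastly more sophisticated, but the training objective is in the same family: predict the next token, not verify it.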
Memory Problems
- A little thing called the context window. Most GPTs handle a few dozen to a couple hundred pages of information at best; halve that if you need meaningful output. Beyond that? Hallucinations. Even if you do feed one hundreds of pages of documentation for your system, it’s going to lose track of most of the details past page 50.
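To put rough numbers on that, here’s a sketch using OpenAI’s tiktoken tokenizer; the 128k-token window, the 500-words-per-page figure, and the placeholder text are all assumptions for illustration, not properties of any specific model.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # stand-in for whatever your model uses

def fits_in_context(document: str, context_window: int = 128_000) -> bool:
    """Check whether a document even fits before output quality degrades."""
    n_tokens = len(enc.encode(document))
    print(f"{n_tokens:,} tokens vs. a {context_window:,}-token window")
    return n_tokens <= context_window

# Roughly 500 words per page of documentation (hypothetical stand-in text).
page = "lorem ipsum " * 250
fits_in_context(page * 400)  # a 400-page system doc blows well past the budget
```

And “fits” is the easy bar: quality inside a long context tends to degrade well before you hit the advertised maximum.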
Documentation (or Lack Thereof)
- Since the dawn of tech, documentation has been our collective nemesis. Nobody wants to do it, so it rarely gets done. Do you honestly think you have all the documentation needed to train a model on your systems? And how long would it take to create?
Generic Training Data
- These models are trained on generic internet content, not highly technical material. They’ve got Wikipedia and Reddit AMA threads. They don’t have architecture diagrams, engineering concepts, or best practices.
Technical Depth
- Even if you had perfect documentation, most LLMs weren’t trained on engineering principles, best practices, or academic research. To build an effective engineering LLM, you’d need mountains of domain-specific papers, textbooks, and concepts. Most companies don’t have EBSCO access, or the time to pull thousands of academic papers on whatever problem you’re solving.
Business Context
- But it’s not just engineering. You’d also need all your business-specific details: work processes, P&L fundamentals, industry sales cycles, regulatory constraints. All the messy, human stuff that shapes what “good” actually looks like in your context. If you think Ken in accounting is overwhelmed now, just wait until you try to get him to brain-dump everything he knows and does to keep the books balanced.
The Cost Reality
- Let’s say you somehow have everything. The training cost alone exceeds most companies’ five-year revenue projections. And then what? You’ve got a hyper-specific LLM that… still can’t write production code, because you forgot to train it on those pieces too. And now the model is so big, you need a whole fleet of GPUs just to run it. And for what? Some design documents and coding answers? What happens when you need to update your code base?
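For a sense of scale, here’s a back-of-envelope sketch using the common ~6 × parameters × tokens estimate for training compute. Every number here (model size, token count, GPU throughput, utilization, hourly rate) is an assumption for illustration, and it only prices one successful training run.

```python
params = 70e9             # assumed model size, Chinchilla-style
tokens = 1.4e12           # assumed ~20 training tokens per parameter
flops = 6 * params * tokens

gpu_flops = 312e12        # one A100's peak BF16 throughput
utilization = 0.4         # real training jobs rarely sustain peak
gpu_seconds = flops / (gpu_flops * utilization)
gpu_hours = gpu_seconds / 3600

cost_per_gpu_hour = 2.50  # assumed cloud rate
print(f"{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * cost_per_gpu_hour:,.0f} in compute")
# Roughly 1.3M GPU-hours and low-single-digit millions, for compute alone:
# no data pipeline, no failed experiments, no staff, no serving fleet, no retraining.
```

Scale the parameter count up, wrap an engineering org around it, and repeat for every iteration, and the bill climbs fast.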
The Next Problem
- Great, you’ve spent millions building out an LLM to redesign one of your systems. But what about your other systems? I mean, what’s another couple hundred million at this point anyway?
AI Is Not Here To Save You
You know what you could’ve done with that budget? Hired five senior engineers to just do the work.
Look, LLMs are useful. They can handle surface-level tasks. Need help picking button colors for your homepage? Sure. Want boilerplate code suggestions? Fine.
But the depth required to solve actual engineering problems? It’s not there (not yet at least).
We need to stop conflating things and start understanding where AI, in its current state, can actually help us.
Otherwise we’re just asking Ken Jennings to perform brain surgery because he once answered a trivia question about scalpels.
#ArtificialIntelligence #EngineeringLeadership #TechReality #MachineLearning #SoftwareDevelopment