😺GPT 4.5 is a bust
PLUS: And by “bust,” we mean it’s mid AF

Welcome, humans.
Looks like we spoke too soon about the AI crash yesterday, huh? Missed calling it by ~two hours. TL;DR—lots of bumpy economic news, NVIDIA no longer impresses, and GPT-4.5’s “mid” release sorta put a dent in the scale hypothesis. More on that below.
Last Call: We've teamed up with our pals at DZone on that GenAI survey—and today is your final chance to get it done before it closes! It'll take less than 10 minutes, promise.
Seriously, we timed ourselves filling it out and still had time left to wonder if GPT-4.5's price tag will require a second mortgage or just a car loan—you’ll get that joke in a sec.
What's in it for you? Early access to their trend report data (perfect for impressing your boss with industry insights), a free Getting Started with Agentic AI ref card, and a chance to win one of two $125 gift cards.
Check it out here before close of business today—think of all the things you could buy with that gift card: a fancy mechanical keyboard, 25 cups of overpriced coffee, or approximately 4 minutes of GPT-4.5 compute time!
Here’s what you need to know about AI today:
OpenAI released GPT-4.5 to mixed reactions.
Meta planned a standalone AI app.
IBM released an AI family for enterprises.
Meta unveiled the Aria Gen 2 research glasses.

Was GPT-4.5 so “mid” that it crashed the stock market?

Yesterday, OpenAI released GPT-4.5—its “largest and most knowledgeable model yet”, prioritizing emotional intelligence over raw reasoning power (Pro only atm).
You knew things were gonna be rough when OpenAI positioned this release more about “vibes” than anything else.
AI researcher Gary Marcus, a frequent critic of the current AI hype train, called it a “nothing burger release.”
The truth is… somewhat in the middle? Very fitting, for a model called “4.5”…
First, the vibe take…
Sam Altman called GPT-4.5 “the first model that feels like talking to a thoughtful person.”
Ben Hylak declared it “the midjourney-moment for writing.”
Dan Shipper (Every) finds it “more extroverted and less neurotic,” but still prone to hallucinations.
Ethan Mollick notes it “can write beautifully” but gets “oddly lazy on complex projects.”
And several testers noted it will confidently share opinions rather than deflecting with “As an AI...” responses.
Now, the “nothing burger” take…
Sam also acknowledged it's “a giant, expensive model” that “won't crush benchmarks.”
Former OpenAI researcher Andrej Karpathy explained it required 10X more compute for “diffuse” improvements.
Gary Marcus calls it evidence that “scaling data and compute is not a physical law.”
The biggest issue with GPT-4.5? The pricing is prohibitive: $75 per million input tokens and $150 per million output tokens (roughly 10-25X more than competitors).
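To put that pricing in perspective, here’s a quick back-of-envelope sketch using the per-million-token rates quoted above. The token counts in the example are hypothetical, just to show the math:

```python
# Back-of-envelope cost estimate for a single GPT-4.5 API call,
# using the quoted rates of $75/M input tokens and $150/M output tokens.

INPUT_PRICE_PER_M = 75.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 150.00  # USD per 1M output tokens

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one API call at the quoted rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A modest chat turn: 2,000 tokens in, 500 tokens out (hypothetical sizes)
print(f"${call_cost(2_000, 500):.4f}")  # → $0.2250
```

Fractions of a cent per turn sounds cheap until you multiply by millions of users sending dozens of turns a day, which is exactly the scale problem discussed below.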
As one observer perfectly summed up: “Half the TL saying it's bad and too expensive. Half the TL saying it's good and too expensive.”
In fact, GPT-4.5 perfectly encapsulates the AI industry's current dilemma: incredible technological achievements that can't yet justify their astronomical costs.
See, GPT-4.5 is the first major reality check in the AI scaling race: its marginal improvements suggest we're hitting fundamental limits.
Andrej Karpathy explained it well: “everything is a little bit better and it's awesome”, but in ways that are hard to notice—slightly better word choice, marginally improved understanding, reduced hallucinations—but nothing revolutionary.
Meanwhile, the economics are brutal: it cost roughly $500M to train GPT-4.5, and OpenAI plans to burn a lot more than that in 2025. Sam also says the company is “out of GPUs.” Hence, Stargate.
While all the new chips and servers will remain valuable for running ChatGPT, a model like GPT-4.5 simply can't achieve mass adoption if its economics don't work at scale.
Our take: Call us conspiratorial, but we don’t think it’s a coincidence that NVIDIA stock sold off right around the time GPT-4.5 was released…
The question isn't whether GPT-4.5 offers better vibes or not—it's whether any amount of vibes can justify burning billions on models most people will never use (and by “models”, we mean you, GPT-4.5).
For OpenAI, this ‘tweener release buys time while they search for a more sustainable approach to pay for new GPUs. Why else put out such a womp womp model?
For investors, yesterday’s market reaction was about uncertainty. And the truth is, nobody knows what happens next with AI. Sam doesn’t know. NVIDIA CEO Jensen Huang doesn’t know. And Wall Street CERTAINLY doesn’t know.
The only thing everybody DOES know is that the days of blank-check AI funding are numbered. As with everything in AI, it’s just a matter of how big that number is…
Goes without saying, but not financial advice!

FROM OUR PARTNERS
This tech company grew 32,481%...
No, it's not NVIDIA… It's Mode Mobile, 2023’s fastest-growing software company according to Deloitte.2
Their disruptive tech, the EarnPhone and EarnOS, has helped users earn and save an eye-popping $325M+, driving $60M+ in revenue and a massive 45M+ consumer base. And having secured partnerships with Walmart and Best Buy, Mode’s not stopping there…
Like Uber turned vehicles into income-generating assets, Mode is turning smartphones into an easy passive income source. The difference is that you have a chance to invest early in Mode’s pre-IPO offering3 at just $0.26/share.
They’ve just been granted the stock ticker $MODE by Nasdaq1 and the time to invest at their current share price is running out.
Disclaimers
1 Mode Mobile recently received their ticker reservation with Nasdaq ($MODE), indicating an intent to IPO in the next 24 months. An intent to IPO is no guarantee that an actual IPO will occur.
2 The rankings are based on submitted applications and public company database research, with winners selected based on their fiscal-year revenue growth percentage over a three-year period.
3 A minimum investment of $1,950 is required to receive bonus shares. 100% bonus shares are offered on investments of $9,950+.

Prompt Tip of the Day
Andrej Karpathy released a new video in his “general audience” series on language models and how to use them, with over 15 tips for prompting and best practices when using AI tools.

Treats To Try.
*Join Fiddler AI and Datastax to build better, safer RAG applications with comprehensive observability tools + LLM monitoring via Fiddler’s Trust Model. Register + get the replay here.
Deep Review finds you the most relevant academic papers by thinking critically (like a researcher).
Basalt helps you integrate AI into your product in seconds with tools to create, test, deploy, and monitor prompts that actually work in real conditions.
OpenArt Consistent Characters helps you create characters you can pose, place, and combine in any scene.
Pinch translates your voice in real-time during video calls so you sound like a native speaker in 30+ languages.
Quanta gives you instant, automated accounting services instead of making you wait weeks for your accounting data (raised $4.7M).
Forage Mail cleans up your inbox by filtering out low-priority emails and sending you one digestible summary.
*This is sponsored content. Advertise in The Neuron here.

Around the Horn.
Meta planned a standalone AI app for Q2 2025 to compete with ChatGPT, and also planned to raise $35B for more data centers in a new financing deal with Apollo.
IBM debuted Granite 3.2, a large language model family aimed at practical enterprise problems, focused on real-world utility rather than benchmarks.
Meta also announced Aria Gen 2 glasses, an upgraded research device with advanced sensors that enables researchers to explore machine perception, contextual AI, and robotics applications.

FROM OUR PARTNERS
Building Reliable AI Agents
AI agents are tricky—bugs, hallucinations, and edge cases can break workflows.
In this exclusive AI Engineering Summit talk, Anita from Vellum unpacks how we got here, how TDD improves reliability, and even demos her SEO agent. Get access here!

Intelligent Insights
Ethan Mollick boiled the “multiple paths in AI” down to three levers: pre-training (scale), post-training, and reasoning—and breaks out where each major model excels.
Check out this interview with Nobel economist Daron Acemoglu who argues we're “driving 200 miles an hour” in the wrong direction by prioritizing automation over tools that could actually enhance human capabilities.
Ed Zitron wrote the ultimate bear take on the genAI industry that’s worth a read.
Coracle and University of Hertfordshire are developing an offline AI tutor for UK prisoners that’s surprisingly wholesome?

A Cat's Commentary.


That’s all for today. For more AI treats, check out our website. The best way to support us is by checking out our sponsors: today’s are Mode Mobile, Vellum, and Fiddler. See you cool cats on Twitter: @noahedelman02
