Sunday, August 17, 2025

 Mama, don't let your kids grow up to be Vibe Coders!

 

Gary Marcus and Nathan Hamiel explain why in this article. 

 

"Cybersecurity has always been a game of cat and mouse, back to early malware like the Morris Worm in 1988 and the anti-virus solutions that followed. Attackers seek vulnerabilities, defenders try to patch those vulnerabilities, and then attackers seek new vulnerabilities. The cycle repeats. There is nothing new about that.

But two new technologies are radically increasing what is known as the attack surface (or the space for potential vulnerabilities): LLMs and coding agents.

... 

The best defense would be not using agentic coding altogether. But the tools are so seductive that we doubt many developers will resist. Still, the arguments for abstinence, given the risks, are strong enough to merit consideration.

...

 

Don’t treat LLM coding agents as highly capable superintelligent systems

 

Treat them as lazy, intoxicated robots."

https://open.substack.com/pub/garymarcus/p/llms-coding-agents-security-nightmare?r=joc82&utm_campaign=post&utm_medium=email


Thursday, August 14, 2025

 

AI critic vindicated.

"I endlessly challenged these people to debate, to discuss the facts at hand. None of them accepted. Not once. Nobody ever wanted to talk science."

https://open.substack.com/pub/garymarcus/p/openais-waterloo


Tuesday, August 12, 2025

"Critically, as I argued at the end of June (and going back to 2019) LLMs never induce proper world models, which is why, for example, they still can’t even play chess reliably, and continue to make stupid, head-scratching errors with startling regularity."

LLMs are not like you and me - and never will be

 

The mystery religion of ML-based AI, from its first miracles to its latest incarnation, LLMs, has announced since Day Zero: "we don't need no steenkin' models." Classic anti-intellectual techbro arrogance.

Monday, August 11, 2025

 Posted this on LinkedIn first: a response to the unveiling of GPT-5.

 

'Reading the abstract (Chain of Thought reasoning is “a brittle mirage that vanishes when it is pushed beyond training distributions”) practically gave me deja vu. In 1998 I wrote that “universals are pervasive in language and reasoning” but showed experimentally that neural networks of that era could not reliably “extend universals outside [a] training space of examples”.

The ASU team showed that exactly the same thing was true even in the latest, greatest models. Throw in every gadget invented since 1998, and the Achilles’ Heel I identified then still remains. That’s startling. Even I didn’t expect that.

And, crucially, the failure to generalize adequately outside distribution tells us why all the dozens of shots on goal at building “GPT-5 level models” keep missing their target. It’s not an accident. That failing is principled.'
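
To make the failure Marcus describes concrete, here is a minimal sketch of my own (not from his post or the ASU paper, and every name and number in it is illustrative): a small tanh network fit to f(x) = x^2 on the interval [-2, 2] does fine inside that training range and falls apart at x = 5, because the learned function never extends beyond the training space.

# Toy illustration of out-of-distribution failure (illustrative only):
# fit a small tanh network to f(x) = x^2 on [-2, 2], then ask it about x = 5.
import numpy as np

rng = np.random.default_rng(0)

# Training data drawn only from inside the training range [-2, 2].
x = rng.uniform(-2.0, 2.0, size=(256, 1))
y = x ** 2

# One hidden layer of tanh units, trained by plain full-batch gradient descent.
hidden = 32
W1 = rng.normal(0.0, 0.5, size=(1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, size=(hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(20000):
    a = np.tanh(x @ W1 + b1)            # hidden activations
    pred = a @ W2 + b2                  # network output
    err = pred - y                      # residual for squared-error loss
    gW2 = a.T @ err / len(x); gb2 = err.mean(axis=0)
    da = (err @ W2.T) * (1.0 - a ** 2)  # backpropagate through tanh
    gW1 = x.T @ da / len(x); gb1 = da.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(v):
    return (np.tanh(np.array([[v]]) @ W1 + b1) @ W2 + b2).item()

print("inside the training range:  f(1.5) ->", predict(1.5), "(true value 2.25)")
print("outside the training range: f(5.0) ->", predict(5.0), "(true value 25.0)")
# The in-range prediction is typically close to 2.25; the out-of-range one is
# nowhere near 25, because the tanh units saturate outside the region they were
# trained on. The "universal" (squaring) was never induced, only a local fit.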


And the principle is far older than LLMs: it goes back to the AI wars of the 60s and 70s. ML-based AI was a mystery religion that produced miracles that could not be explained. The miracles were flashy enough to get the plodding tortoises of symbolic logic and linguistics out of Big AI (universities and tech bro startups) and banish them to the margins. Gary Marcus, who wrote the critique below, was one of the survivors.


"In his first book, The Algebraic Mind (2001), Marcus challenged the idea that the mind might consist of largely undifferentiated neural networks. He argued that understanding the mind would require integrating connectionism with classical ideas about symbol-manipulation."

Gary Marcus Wikipedia entry

 

GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it.