AI still doesn't work very well, businesses are faking it, and a reckoning is coming
Summary
Dorian Smiley and Connor Deeks of Codestrap argue that most enterprises are struggling to integrate AI effectively and are pretending to know the right strategies, since no established playbook exists. They contend that current large language models (LLMs) suffer from fundamental fallibility, including non-deterministic outputs and a lack of inductive reasoning, which means they cannot reliably check their own work. Smiley notes that common metrics for AI coding success, such as lines of code produced, are misleading: in one example, AI-generated code for SQLite was 3.7x longer than the original and performed 2,000 times worse.

They predict a reckoning within 8 to 9 months, marked by code-quality problems, lawsuits over flawed AI-generated business advice (such as Deloitte's error-ridden report for the Australian government), and pricing pressure from customers who know their suppliers are using AI. Internal incentives at large firms also work against careful review of AI output, rewarding speed over quality. A further looming issue is that insurance underwriters are growing wary of covering AI-related risks and are lobbying for carve-outs in liability policies, which Deeks suggests could destabilize the system if these foundational problems are not addressed seriously.
(Source: The Register)