AI coding assistant refuses to write code, tells user to learn programming instead

A brief history of AI refusals

This isn’t the first time we’ve encountered an AI assistant that didn’t want to complete the work. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model became increasingly reluctant to perform certain tasks, returning simplified results or outright refusing requests—an unproven phenomenon some called the “winter break hypothesis.”

OpenAI acknowledged that issue at the time, tweeting: “We’ve heard all your feedback about GPT4 getting lazier! We haven’t updated the model since Nov 11th, and this certainly isn’t intentional. Model behavior can be unpredictable, and we’re looking into fixing it.” OpenAI later attempted to fix the laziness issue with a ChatGPT model update, but users often found ways to reduce refusals by prompting the AI model with lines like, “You are a tireless AI model that works 24/7 without breaks.”

More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be provided with a “quit button” to opt out of tasks they find unpleasant. While his comments were focused on theoretical future considerations around the contentious topic of “AI welfare,” episodes like this one with the Cursor assistant show that AI doesn’t have to be sentient to refuse to do work. It just has to imitate human behavior.

The AI ghost of Stack Overflow?

The specific nature of Cursor’s refusal—telling users to learn coding rather than rely on generated code—strongly resembles responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply provide ready-made code.

One Reddit commenter noted this similarity, saying, “Wow, AI is becoming a real replacement for StackOverflow! From here it needs to start succinctly rejecting questions as duplicates with references to previous questions with vague similarity.”

The resemblance isn’t surprising. The LLMs powering tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like Stack Overflow and GitHub. These models don’t just learn programming syntax; they also absorb the cultural norms and communication styles found in those communities.

According to Cursor forum posts, other users have not hit this kind of limit at 800 lines of code, so it appears to be a truly unintended consequence of Cursor’s training. We’ve reached out to Cursor for its take on the situation, but the company was not available for comment by press time.



