March 2026

A lawyer just won Anthropic's hackathon. Not a software engineer. Not a computer science PhD. A lawyer.

And if that surprises you, you're still thinking about AI the wrong way.

The Skill That Mattered Wasn't Coding

Here's what actually happened: The lawyer won because the skill that mattered wasn't writing code. It was understanding the problem clearly enough to direct AI to solve it.

That's the shift nobody talks about.

The bottleneck moved.

It used to be: "Can you code this?" Now it's: "Do you know what needs to be coded and why?"

And lawyers are really, really good at that.

Why Lawyers Are Better at This Than You Think

Let's break down what lawyers do all day:

1. They Define Problems Precisely

A lawyer doesn't say "my client got screwed." They say: "The contract stipulated delivery by March 15th. Delivery occurred March 22nd. The delay caused $47,000 in measurable damages due to lost sales during a promotional window. We need to establish breach, causation, and damages."

That's not emotion. That's structured problem definition.

2. They Work Backwards from Outcomes

Lawyers start with the desired result and reverse-engineer the path to get there. "I need the court to rule X, which requires proving Y, which depends on evidence Z."

That's exactly how you prompt an AI agent to build software.

3. They Operate in Systems with Rules

Legal reasoning is fundamentally about navigating systems with explicit rules, edge cases, and exceptions. You know what else works that way? Code.

4. They're Experts at Iterative Refinement

A lawyer drafts a motion, gets feedback, revises it, gets more feedback, revises again. They're used to iterating toward correctness.

Sound familiar? That's the entire process of working with AI coding tools.

5. They Understand Requirements

A lawyer can read a contract and extract every obligation, deadline, and conditional clause. They can turn ambiguous language into precise requirements.

That's exactly what you need to build software with AI: the ability to turn a fuzzy business need into a precise specification.

The Old Skill vs. The New Skill

Let's compare what mattered in 2020 vs. what matters in 2026:

2020: Implementation Bottleneck

  • Problem: "We need a user login system."
  • Bottleneck: Do you know how to implement OAuth2? Can you write the database schema? Can you handle password hashing securely?
  • Who wins: People who can implement solutions in code.

2026: Definition Bottleneck

  • Problem: "We need a user login system."
  • Bottleneck: Do you know what "secure login" actually requires? Do you understand session management, token expiration, password reset flows, account lockout policies, GDPR compliance for user data?
  • Who wins: People who can define requirements clearly enough for AI to implement them correctly.

The lawyer won the hackathon because they were better at the second one.

What This Means for Software Development

This doesn't mean software engineers are obsolete. It means the role is changing.

What's Becoming Commoditized:

  • Writing boilerplate code
  • Implementing standard patterns (CRUD, auth, API endpoints)
  • Syntax and language-specific knowledge
  • Translating requirements into basic implementation

What's Becoming More Valuable:

  • Problem definition — What are we actually trying to solve?
  • System thinking — How do the pieces fit together?
  • Edge case identification — What breaks this solution?
  • Requirements clarity — What does "secure" or "scalable" or "user-friendly" actually mean in this context?
  • Debugging and diagnosis — When something's wrong, what's the root cause?

The Clankathon Test

There's a hackathon coming up that tests exactly this skill shift.

It's called Clankathon (https://clankerrank.xyz/clankathon), and here's how it works:

  • You get a full, running e-commerce app with hidden bugs
  • Nobody tells you what's broken
  • You have to find the issues yourself by clicking around and exploring
  • Then you use any AI tool to fix them
  • Hidden test suites score your fix
  • If your fix breaks something else, you lose points

  • Duration: 3 hours
  • Format: Live leaderboard
  • Cost: Free
  • Constraint: Limited spots

This is brilliant because it mirrors real-world software work:

  • The bugs aren't labeled — you have to diagnose them
  • You can use any AI tool — the tool doesn't matter, the thinking does
  • Broken fixes are penalized — you can't just slap a solution on and hope it works

This tests your ability to:

  • Understand a complex system you didn't build
  • Identify what's actually broken vs. what's working as intended
  • Articulate the problem clearly enough for AI to fix it
  • Verify the fix doesn't introduce new problems

Those are the skills that matter now.

Why This Is Good News

If you're a software developer reading this and feeling defensive, stop. This is actually good news.

Because the work that's being automated is the boring stuff you didn't want to do anyway:

  • Writing the same CRUD endpoints for the 47th time
  • Googling "how to hash passwords in Node.js" for the 12th time
  • Translating a spec into boilerplate code

You get to skip straight to the interesting parts:

  • Architecting the system
  • Identifying edge cases
  • Optimizing performance
  • Debugging weird integration issues
  • Evaluating tradeoffs between approaches

And if you're in a non-technical role and you've been intimidated by software, this is even better news.

Because now you can:

  • Build internal tools without waiting for engineering
  • Prototype solutions to test ideas quickly
  • Automate workflows that used to require a developer
  • Participate in software creation using the skills you already have

The lawyer who won the hackathon didn't suddenly become a software engineer. They applied their existing expertise (problem definition, requirements clarity, iterative refinement) to a new domain.

You can do the same.

What You Should Learn

If you want to thrive in this new world, here's what to focus on:

1. Learn to Think in Systems

Software is a system of interconnected parts. You don't need to know how to code them, but you need to understand how they relate.

  • What happens when a user clicks "submit"?
  • Where does the data go? Who can access it? What happens if it fails?
  • What are the dependencies? What breaks if one piece changes?

2. Get Better at Asking Precise Questions

Vague prompts get vague results. Lawyers are trained to ask precise questions. You should be too.

Bad prompt: "Build a user login system."

Good prompt: "Build a user login system with email/password authentication, session tokens that expire after 24 hours of inactivity, password reset via email with a 1-hour expiration link, and account lockout after 5 failed attempts."

See the difference?
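One way to see what the good prompt is doing: it's effectively handing the AI a structured spec. Here's that same prompt written out as a config object — the field names are purely illustrative, not any real library's API, but notice that every constraint is explicit and testable:

```javascript
// The "good prompt" as a structured spec. Names are illustrative --
// the point is that every requirement is explicit, not implied.
const authSpec = {
  authentication: { method: "email-password" },
  session: { idleTimeoutHours: 24 },                       // expire after 24h inactivity
  passwordReset: { channel: "email", linkExpiryMinutes: 60 }, // 1-hour reset link
  lockout: { maxFailedAttempts: 5 },                       // lock after 5 failures
};
```

If you can't write the spec, you can't evaluate whether the AI's output satisfies it. That's the lawyer's skill in one sentence.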

3. Understand Requirements, Not Code

You don't need to memorize syntax. You need to know what "good" looks like.

  • What makes a login system secure?
  • What makes an API scalable?
  • What makes a UI accessible?

If you can define the requirements, AI can implement them.

4. Practice Diagnosis

When something's wrong, can you figure out why? That's the skill AI can't (yet) do well.

  • Is the bug in the frontend or backend?
  • Is it a data issue or a logic issue?
  • Does it happen for all users or just some?

Diagnosis requires domain knowledge and systems thinking. Start building those skills.
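A toy illustration of the data-vs-logic distinction (everything here is hypothetical): an order total comes out wrong only for some users. The data has a hole, and the summing logic silently hides it instead of surfacing it.

```javascript
// Hypothetical diagnosis exercise: "totals are wrong for some users."
const orders = [
  { id: 1, amount: 20 },
  { id: 2, amount: null },  // the DATA issue: a missing amount
  { id: 3, amount: 30 },
];

// The LOGIC issue: a naive sum coerces null to 0, silently hiding
// the bad row -- which is why the bug only hits "some users."
const naiveTotal = orders.reduce((sum, o) => sum + o.amount, 0); // 50, not an error

// Better logic surfaces the data problem instead of masking it:
const badRows = orders.filter((o) => typeof o.amount !== "number");
```

The fix isn't "change the total." It's recognizing that a data defect and a too-forgiving piece of logic are interacting — and that's a diagnosis AI won't volunteer unless you ask the right question.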

5. Learn Enough Code to Read It

You don't need to write code from scratch, but you should be able to read code well enough to:

  • Understand what AI generated
  • Spot obvious errors
  • Verify the logic matches your intent

Think of it like learning enough Spanish to read a menu vs. becoming fluent. You don't need fluency — just literacy.
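Here's a quick literacy test in that spirit. Suppose your intent was "lock the account after 5 failed login attempts" and an AI hands you this (a made-up snippet for practice, not production code). Can you read it well enough to confirm it matches the intent?

```javascript
// Reading exercise: does this match the intent
// "lock the account after 5 failed login attempts"?
const MAX_FAILED_ATTEMPTS = 5;

function recordFailedLogin(account) {
  account.failedAttempts = (account.failedAttempts || 0) + 1;
  if (account.failedAttempts >= MAX_FAILED_ATTEMPTS) {
    account.locked = true; // locks on the 5th failure -- matches the intent
  }
  return account;
}
```

If you can trace that `>=` and confirm the account locks on failure number five, not four or six, you're literate enough to verify AI output. That's the bar.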

The Future Belongs to Problem Definers

In 2020, the future belonged to people who could implement solutions.

In 2026, the future belongs to people who can define problems clearly enough that AI can solve them.

Lawyers are good at that. So are project managers, business analysts, domain experts, and anyone who's ever had to translate messy human needs into structured requirements.

The technical barrier just dropped. What's left is the conceptual barrier.

And that's a much more interesting challenge.

Are You Ready?

The hackathon winner wasn't the person who wrote the best code. It was the person who understood the problem well enough to direct AI to write the best code.

That's the new skill.

If you want to test yourself, try the Clankathon hackathon next Saturday. You'll learn fast where your gaps are.

And if you're building software for your business — whether it's a custom web app, an internal tool, or an AI-powered system — the teams that win will be the ones who combine domain expertise with AI tooling.

That's where Caxy comes in.

We've been building custom software for 25+ years. We know how to define problems, architect systems, and identify edge cases. And now we use AI to accelerate implementation while maintaining quality and security.

If you need a team that understands both the what and the how — let's talk.


About Caxy

Caxy builds custom software for businesses that need more than off-the-shelf solutions. We specialize in complex integrations, AI-powered applications, and enterprise web platforms. Based in Chicago, serving clients nationwide since 2000.

Contact us: caxy.com/contact

by Michael LaVista