We took 50 real marketing take-home assignments from companies ranging from Series A startups to Fortune 500 enterprises. Then we ran every single one through Claude, GPT-4, and Gemini.

The results were sobering: for 83% of the assignments, at least one model produced output that would have passed the hiring screen.

The Experiment

We collected take-home assignments across five categories:

  1. Copy and messaging
  2. Content strategy briefs
  3. Campaign planning
  4. Brand positioning
  5. Data analysis

For each assignment, we generated responses using all three leading AI models with minimal prompting — just the assignment instructions, no additional context about the company or role.
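
For anyone who wants to reproduce this step, here is a minimal sketch of what "minimal prompting" looks like in practice. The article doesn't pin down exact model versions or SDKs, so the client libraries and model IDs below are assumptions, and `assignment_text` stands in for the raw assignment instructions.

```python
# Sketch of the generation step: send the raw assignment text to all
# three models with no extra context. Model IDs are illustrative; swap
# in whichever versions you have access to.
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI


def generate_submissions(assignment_text: str) -> dict[str, str]:
    """Return one AI-generated submission per model, keyed by provider."""
    submissions = {}

    # Claude (Anthropic) -- reads ANTHROPIC_API_KEY from the environment
    claude = anthropic.Anthropic()
    msg = claude.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=4096,
        messages=[{"role": "user", "content": assignment_text}],
    )
    submissions["claude"] = msg.content[0].text

    # GPT-4 (OpenAI) -- reads OPENAI_API_KEY from the environment
    openai_client = OpenAI()
    resp = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": assignment_text}],
    )
    submissions["gpt-4"] = resp.choices[0].message.content

    # Gemini (Google)
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    gemini = genai.GenerativeModel("gemini-1.5-pro")
    submissions["gemini"] = gemini.generate_content(assignment_text).text

    return submissions
```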

Then we had the original hiring managers blind-review the AI submissions alongside the real candidate submissions.
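
Blinding here can be as simple as stripping authorship, shuffling, and keying every submission to a random ID before the packets go out for review. The sketch below is illustrative of that idea, not the study's actual tooling; the function and variable names are hypothetical.

```python
import random
import uuid


def blind_pool(human_submissions: list[str], ai_submissions: list[str]):
    """Mix human and AI submissions under random IDs; keep the answer key separate."""
    pool = [(text, "human") for text in human_submissions] + \
           [(text, "ai") for text in ai_submissions]
    random.shuffle(pool)

    review_packet = {}  # what the hiring manager sees: id -> submission text
    answer_key = {}     # held back by the experimenters: id -> true source
    for text, source in pool:
        sid = uuid.uuid4().hex[:8]
        review_packet[sid] = text
        answer_key[sid] = source
    return review_packet, answer_key
```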

The Results

In 83% of cases, at least one AI model produced work that the hiring manager rated as "pass" or "strong pass." In 34% of cases, the AI submission was rated higher than the median human submission.

The categories most vulnerable to AI:

  1. Copy and messaging — 94% pass rate
  2. Content strategy briefs — 88% pass rate
  3. Campaign planning — 82% pass rate

The categories where humans still had an edge:

  1. Brand positioning — only 62% AI pass rate (required deep contextual understanding)
  2. Data analysis with ambiguous data — 71% AI pass rate (required judgment calls)

What This Means for Hiring

If your take-home assignment can be completed by AI to a passing standard, it's not testing for human value. It's testing for competence that's been commoditized.

The fix isn't to ban AI from the process. It's to redesign your assessments to test for the skills that AI can't replicate: contextual judgment, strategic originality, and the ability to navigate ambiguity.

That's exactly what ProofOwl is built to do.