Saturday, January 4, 2025

Will Smith eating spaghetti and other bizarre AI benchmarks that took off in 2024


When a company releases a new AI video generator, it's not long before someone uses it to make a video of actor Will Smith eating spaghetti.

It's become something of a meme as well as a benchmark: seeing whether a new video generator can realistically render Smith slurping down a bowl of noodles. Smith himself parodied the trend in an Instagram post in February.

Will Smith and pasta is but one of several bizarre "unofficial" benchmarks to take the AI community by storm in 2024. A 16-year-old developer built an app that gives AI control over Minecraft and tests its ability to design structures. Elsewhere, a British programmer created a platform where AIs play games like Pictionary and Connect 4 against one another.

It's not as if there aren't more academic tests of an AI's performance. So why did the weirder ones blow up?

LLM Pictionary
Image Credits: Paul Calcraft

For one, many of the industry-standard AI benchmarks don't tell the average person very much. Companies often cite their AI's ability to answer questions on Math Olympiad exams, or to figure out plausible solutions to PhD-level problems. Yet most people (yours truly included) use chatbots for things like responding to emails and basic research.

Crowdsourced industry measures aren't necessarily better or more informative.

Take, for example, Chatbot Arena, a public benchmark many AI enthusiasts and developers follow obsessively. Chatbot Arena lets anyone on the web rate how well AI performs on particular tasks, like creating a web app or generating an image. But raters tend not to be representative (most come from AI and tech industry circles) and cast their votes based on personal, hard-to-pin-down preferences.

The Chatbot Arena interface.
Image Credits: LMSYS

Ethan Mollick, a professor of management at Wharton, recently pointed out in a post on X another problem with many AI industry benchmarks: they don't compare a system's performance to that of the average person.

"The fact that there aren't 30 different benchmarks from different organizations in medicine, in law, in advice quality, and so on is a real shame, as people are using systems for these things, regardless," Mollick wrote.

Bizarre AI benchmarks like Connect 4, Minecraft, and Will Smith eating spaghetti are most certainly not empirical, or even all that generalizable. Just because an AI nails the Will Smith test doesn't mean it'll generate, say, a burger well.

Mcbench
Note the typo; there's no such model as Claude 3.6 Sonnet.
Image Credits: Adonis Singh

One expert I spoke to about AI benchmarks suggested that the AI community focus on the downstream impacts of AI instead of its abilities in narrow domains. That's sensible. But I have a feeling that weird benchmarks aren't going away anytime soon. Not only are they entertaining (who doesn't like watching AI build Minecraft castles?) but they're easy to understand. And as my colleague Max Zeff wrote recently, the industry continues to grapple with distilling a technology as complex as AI into digestible marketing.

The only question in my mind is: which odd new benchmarks will go viral in 2025?

TechCrunch has an AI-focused newsletter! Sign up here to get it in your inbox every Wednesday.



