A Blog by Jonathan Low

 

Oct 10, 2021

Can Humor Be Reduced To An Algorithm?

An algorithm walked into a bar graph. JL 

Tristan Greene reports in The Next Web:

Jokes can be reduced to formulas. Just about anything can be reduced to a formula, but “funny” isn’t a thing. It’s a perception. Just like you can’t hand me an ounce of satisfaction or purchase a mile’s worth of courage, you can’t quantitatively produce funniness in a lab. But people are inclined to accept the idea that an AI can be intentionally funny because we have an irrational tendency to imagine computers as having more agency than a chemistry set. Stop me if you’ve heard this one: A robot walks into a bar and the bartender takes its order. The robot says: “I’ll have whatever my developer likes.”

If you’re not laughing right now it’s because the joke isn’t funny. And if you are laughing, it’s because the joke is funny. That’s how jokes work. It’s also how people work.

Humorous or not, the premise of the joke is that robots don’t have personalities, ideas, thoughts, or desires. Any human-like qualities we could attribute to a machine or its output are merely reflections of ourselves or its programmers.

That doesn’t sit well with the mainstream perceptions of AI. We’ve seen a hundred or more variations on the “A robot wrote this article” trope that The Guardian got caught up in last year. Each one promises a near-future where human creators are either displaced or forced to work in tandem with machines.

The common refrain is that AI isn’t human-like yet, but it will be sooner than you think!

And, maybe after reading some carefully-curated outputs from OpenAI’s text generator, GPT-3, it starts to sound less like hyperbole and more like good old common sense.

We see back-flipping robots doing parkour and deepfake face-swaps in our social media feeds every day. We have every reason to believe a text-generator can do things that seem straight out of the realm of science fiction.

At least, until we start picking at the seams. Because, unfortunately, a functional understanding of the machinations of deep learning-based AI systems doesn’t fall within the realm of common sense.

Prestidigitation

Here at Neural, we refer to most of what AI does as prestidigitation. That’s because there’s only a handful of things a typical deep learning system can actually do. Much like a real-world magician, developers create incredible programs out of some fairly basic algorithmic foundations.

The only difference between a disappearing coin trick and what David Copperfield does is scale. There is no more or less “real magic” involved in the former’s illusions than in the latter’s.

And it’s the same with AI. Tesla’s computer vision systems are no more or less human-like than Not Hotdog’s. They essentially perform the exact same function at different scales.

It’s hard to explain the simplicity of a massively complex AI system to the average person.

So let’s take something uniquely human and break down exactly what happens when you try to codify it for machines in the simplest possible way.

Can an AI be funny?

Luckily, a former Microsoft intern named Nabil Hossain has already done all the groundwork for us. A few years back, Hossain and a pair of Microsoft AI researchers developed a machine learning system to generate humorous headlines from existing news articles.

The big idea was that the AI would make microedits by changing a single word in a serious headline to make it a funny one. 

[Image: a list of headlines with a single word changed by the AI system to generate supposedly humorous headlines. Do you think these are funny?]

Basically, Microsoft invented Mad Libs for AI to try and demonstrate that computers can be funny.

The AI picks a noun or verb from a headline and replaces it with a word that can be objectively quantified as humorous.
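To make that loop concrete, here’s a minimal sketch of the micro-edit idea in Python. Everything in it is a hypothetical stand-in rather than the Microsoft team’s actual code: the candidate words are made up and score_funniness is a placeholder for whatever learned model assigns a funniness rating. The point is only that “pick the funniest word” reduces to scoring substitutions against some rating function.

from typing import Callable, List, Tuple

def best_microedit(
    headline: str,
    candidate_words: List[str],
    score_funniness: Callable[[str], float],
) -> Tuple[str, float]:
    # Try swapping each word in the headline for each candidate word and
    # keep the single-word edit the scorer rates as funniest.
    tokens = headline.split()
    best_edit, best_score = headline, score_funniness(headline)
    for i in range(len(tokens)):
        for candidate in candidate_words:
            edited = " ".join(tokens[:i] + [candidate] + tokens[i + 1:])
            score = score_funniness(edited)
            if score > best_score:
                best_edit, best_score = edited, score
    return best_edit, best_score

# Toy demo with a dummy scorer that simply prefers one word -- in a real
# system this would be a model trained on human funniness ratings.
dummy_scorer = lambda text: 1.0 if "llamas" in text else 0.0
print(best_microedit("Senate passes sweeping budget bill",
                     ["llamas", "karaoke"], dummy_scorer))

Notice that the loop itself contains nothing resembling a sense of humor; all of the “funny” lives in whatever the scoring function was trained to reward.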

So here’s the simple answer to the question of whether AI can be funny or not: If you get to define what is and isn’t funny, sure. AI can be just as funny as you decide it is or isn’t.

Which brings us right back to the joke that opened this article. Is it funny? Is the author who wrote it funny?

What is funny?

Jokes can be reduced to formulas. Just about anything can be reduced to a formula, but “funny” isn’t a thing. It’s a perception. Just like you can’t hand me an ounce of satisfaction or purchase a mile’s worth of courage, you can’t quantitatively produce funniness in a lab.

If a scientist told the mainstream media they were creating jokes in beakers with liquids that frothed and changed colors, we’d assume they were a 1980s cartoon character.

But people are inclined to accept the idea that an AI can be intentionally funny because we have an irrational tendency to imagine computers as having more agency than a chemistry set.

So, how do we make a computer spit out a joke? Or, in the case of the Microsoft headline system, how do we make an AI spit out a funny Mad Lib?

The very first problem the Microsoft team ran into was data. If you want to teach an AI to recognize pictures with cats in them, you train it on pictures of cats. Ergo, if you want to teach an AI to be funny, you have to train it on… things that are funny.

[Image: a screenshot showing how the Microsoft team built its database of supposedly funny headlines.]
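To give a feel for what “training on things that are funny” means in practice, here’s an illustrative sketch, not the actual Microsoft dataset or pipeline: the kind of labeled examples such a system learns from, fed into the simplest possible regression model. The headlines, ratings, and field layout below are all made up for the example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

# Hypothetical labeled data: (edited headline, average funniness rating from judges).
training_examples = [
    ("Senate passes sweeping llama bill", 2.4),
    ("Senate passes sweeping budget bill", 0.1),
    ("Mayor opens new karaoke bridge downtown", 1.8),
    ("Mayor opens new highway bridge downtown", 0.0),
]

texts = [headline for headline, _ in training_examples]
ratings = [rating for _, rating in training_examples]

# Bag-of-words features plus ridge regression: a bare-bones stand-in for
# whatever model actually learns "funniness" from the judges' ratings.
vectorizer = CountVectorizer()
model = Ridge().fit(vectorizer.fit_transform(texts), ratings)

# The model can only echo back the tastes of the people who labeled the data.
print(model.predict(vectorizer.transform(["City council bans interpretive dance"])))

Whatever number that model predicts says nothing about whether a headline is funny in any objective sense; it estimates whether the particular people who labeled the data would have rated it funny. Which raises the obvious question.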

Who decides?

Arguably, nobody should get to decide what is and isn’t objectively funny. Because humor is subjective.

But, if there is anyone qualified to determine what is and isn’t funny, it most certainly isn’t a team of internal judges at Microsoft or workers on Amazon Mechanical Turk. You don’t have to be Marc Maron to know that big tech employees and micro-gig workers aren’t the world’s foremost experts on what’s funny.

But what’s the alternative? A coalition of world-recognized funny people could create a database of Mad Libs they find hilarious, yet that’s no guarantee any given person would get a chuckle out of any of them. Or that an AI could use that database to generate new funny headlines. 

[Image: a screenshot depicting how the Microsoft team selected judges and editors.]

The bottom line: AI can’t be funny. Funny as interpreted by the recipient of a joke is subjective. And funny as an intended construct requires intent.

Just like fashion, being funny is as complex as the people you surround yourself with. What a group of AI devs might find funny or fashionable will likely differ from the tastes at Fashion Week or in a Comedy Central writers’ room. And both individual and public perceptions of what’s humorous are constantly changing and evolving.

Beloved characters and routines from yesteryear, such as Archie Bunker or Eddie Murphy’s homophobic jokes, would likely fail to find the same acclaim and comedic praise in the modern zeitgeist as they did in the past.

When it comes to imitating even the most simple of human experiences, there are certain aspects of our existence that cannot be codified or quantified. It would be hard to argue that humor isn’t among them.

The Microsoft team didn’t develop an AI that creates funny headlines. They codified a small sample of Mad Libs a statistically-insignificant group of humans found funny, and trained a deep learning system on that data. 

What they’ve accomplished is no more or less human than teaching a chatbot which pre-written message it should select in response to a customer query. It’s more complicated. But it isn’t more human.
