A Blog by Jonathan Low

 

Aug 2, 2019

A New Tool Uses AI To Spot Text Written By Other AIs

Setting an algorithm to catch an algorithm. JL

Huffpost reports:

A tool looks at statistical patterns in text to determine if it was composed by a human or a machine. (It) applies baseline statistical methods that can detect generation artefacts across common sampling schemes. AI algorithms used to generate text bring a lot of predictability. In contrast, genuine news and scientific abstracts were less statistically predictable. The annotation scheme improves the human detection-rate of fake text from 54% to 72% without any prior training.
AI algorithms are already being used to write text that could pass as something a human wrote. This can be misused to create fake news or, for example, to attack a business competitor by leaving a mass of realistic-sounding reviews on sites such as Zomato. Now, though, a new AI tool that can spot AI-generated text has been developed by researchers from Harvard working with the MIT-IBM Watson AI Lab, reports Technology Review.
The researchers created a tool called the Giant Language Model Test Room (GLTR), which looks at statistical patterns in text to determine if it was composed by a human or a machine. According to the creators, GLTR applies a suite of baseline statistical methods that can detect generation artefacts across common sampling schemes.
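To make those “statistical patterns” concrete, here is a minimal sketch of the kind of per-token analysis GLTR performs, assuming the Hugging Face transformers library and the public GPT-2 model (the language model behind GLTR’s demo). The function name and structure below are illustrative, not GLTR’s actual code: for each token, it asks how highly the model ranked that token given the preceding context.

```python
# A minimal sketch of GLTR-style per-token rank analysis -- not GLTR's code.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[tuple[str, int]]:
    """For each token, return its rank in the model's predicted
    distribution given the preceding context (rank 0 = most likely)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(1, ids.shape[1]):
        # The logits at position pos-1 are the model's prediction
        # for the token that actually appears at position pos.
        order = torch.argsort(logits[0, pos - 1], descending=True)
        rank = (order == ids[0, pos]).nonzero().item()
        ranks.append((tokenizer.decode([int(ids[0, pos])]), rank))
    return ranks
```

Machine-sampled text tends to produce a stream of very low ranks, because common sampling schemes draw heavily from the head of the model’s distribution; human prose is likelier to include tokens the model ranked far down the list.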
“In a human-subjects study, we show that the annotation scheme provided by GLTR improves the human detection-rate of fake text from 54% to 72% without any prior training. GLTR is open-source and publicly deployed, and has already been widely used to detect generated outputs,” the researchers wrote. In countries like India, we’ve seen platforms such as WhatsApp being heavily targeted by political parties, which also put together huge teams of people to post on Facebook and Twitter. Automating that process with AI could scale such operations up even further, raising the reach of disinformation and propaganda to entirely new levels.
As it turns out, though, the researchers found that the AI algorithms used to generate text introduce a lot of predictability. In contrast, genuine news stories and actual scientific abstracts were less statistically predictable, and this gap forms the basis of the GLTR model. The tool still relies on humans to study the output; it simply highlights the predictable snippets of text. “Our goal is to create human and AI collaboration systems,” said Sebastian Gehrmann, a PhD student involved in the work.
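As an illustration of that predictability gap, one can aggregate the per-token ranks from the sketch above into a single number: the fraction of tokens falling inside the model’s top-k predictions. GLTR’s interface buckets ranks for highlighting (top 10, top 100, top 1,000, and beyond); the helper below and its use of k = 10 are an assumption for this example, not a published detection threshold.

```python
# Illustrative aggregate of the per-token ranks (reuses token_ranks above).
def top_k_fraction(text: str, k: int = 10) -> float:
    """Fraction of tokens that the model ranked inside its top k."""
    ranks = [rank for _, rank in token_ranks(text)]
    return sum(rank < k for rank in ranks) / max(len(ranks), 1)

print(f"{top_k_fraction('The quick brown fox jumps over the lazy dog.'):.0%}")
# A high fraction suggests machine-like predictability; as the article
# notes, genuine news and scientific abstracts score lower, and GLTR
# leaves the final judgment to the human reading the highlights.
```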
Text generation is only one way in which AI is being used to fool human observers, though. Today it is easier than ever to fake a photograph using AI, and one developer even created a computer program that would process photos of women (and only women, not men) to digitally remove their clothes in seconds. Although that app was shut down, fears of digitally manipulated images remain.
Equally controversial are ‘deepfake’ videos, in which AI is used to create realistic morphed footage. This, again, has been misused to make celebrity nude clips as well as to generate fake videos of political leaders saying things they didn’t actually say.
A new deepfake-detection algorithm that can spot these videos has been developed, and digital forensics techniques keep evolving. But as the tools for making fakes grow more sophisticated, it is going to become harder and harder to know what’s real.
