A Blog by Jonathan Low


Oct 9, 2016

Wanted: Neural Networks That Can Tell the Difference Between Turtles and Tuna

To help conservation efforts as well as cooking...JL

Michael Coren reports in Quartz:

Computer vision advances are allowing computers to recognize images of birds, name plant species, catalog rainforest species and even name individual whales. Artificial neural networks—algorithms modeled on the human brain—“learn” objects by seeing thousands or millions of examples. Every time an algorithm identifies a specific species, it increases the probability it will recognize the next one.
In one of the world’s last great high seas fisheries, the western and central tropical Pacific, it’s still a free-for-all. Only 2% of the fishing vessels are monitored by authorities, and as much as 40% of the catch may be unintended or illegal.
But the Shen Lian Chen, with thousands of hooks trailing behind it, is playing by the rules. The vessel is one of four boats in the region that have allowed The Nature Conservancy (TNC) to install cameras recording every fish coming over the side.
It’s the first step by TNC to outfit all of the thousands of long-line tuna vessels in the tropical Pacific fleet with a suite of cameras and sensors, says Matt Merrifield, the chief technology officer at TNC of California. Within a few years, he wants to see 100% coverage. “We can’t manage these stocks efficiently if we don’t have the data,” said Merrifield in an interview.
But watching all the video is a huge data challenge. On a typical two-month fishing trip, each boat records about 800 hours of footage. Even a highly trained fisheries monitor would need about 10 weeks to watch all of it from a single boat. With thousands of boats, the effort looks nearly impossible.
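For a sense of scale, the arithmetic behind that estimate is simple. A minimal sketch, assuming a reviewer works standard 40-hour weeks and can scan footage at roughly double real-time speed (assumptions; the article gives only the totals):

footage_hours = 800      # hours recorded per boat on a two-month trip (from the article)
review_speed = 2.0       # assumed: a trained monitor scans footage at ~2x real time
hours_per_week = 40      # assumed: standard work week

weeks_per_boat = footage_hours / review_speed / hours_per_week
print(f"{weeks_per_boat:.0f} weeks of review per boat")  # -> 10 weeks, matching the article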
So TNC has turned to Silicon Valley. Computer vision advances are allowing computers to recognize images of birds, name plant species, catalog rainforest species and even name individual whales. The field has made strides by applying artificial neural networks—algorithms modeled on the human brain—that “learn” objects by seeing thousands or millions of examples. Every time an algorithm identifies a specific species, it increases the probability it will recognize the next one. Over time, these algorithms have made impressive progress: Caltech researchers recently announced they were able to use satellite and street-level images (such as Google Maps) to inventory and identify 80,000 street trees with 80% accuracy.
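To make that concrete, here is a minimal sketch of the kind of species classifier the article describes, not TNC's actual system. It fine-tunes a network pretrained on millions of generic images, assuming a hypothetical folder of labeled video frames (data/train/<species>/*.jpg):

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing so the pretrained backbone sees familiar inputs.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset: one folder per species (e.g. bigeye_tuna, yellowfin_tuna, turtle).
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network that has already "learned" from millions of examples,
# then retrain only the final layer to emit our species labels.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one pass over the labeled frames
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

Freezing the backbone and retraining only the last layer is the cheapest form of transfer learning; with more labeled frames, unfreezing deeper layers usually buys the extra accuracy needed to separate lookalike species.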

Camera traps capture wildlife imagery so algorithms can extract identifiable features. (Yu et al., http://jivp.eurasipjournals.springeropen.com/articles/10.1186/1687-5281-2013-52)

TNC now wants to use these algorithms to see what’s on the end of a fishing line. Building an algorithm to tell the difference between a tuna and a turtle is hard. Recognizing the difference between a bigeye and a yellowfin tuna, as well as variations among adult and juvenile fish, is even harder.
To solve that challenge, the organization is sponsoring a $150,000 competition on the data-science platform Kaggle, inviting thousands of data scientists to test their algorithms against reams of video from the long-line tuna fishery. Competitors must identify what is caught, and when and where, catalog catches with 90% accuracy, and reduce the time needed to review the footage by about 40%. The organization has also launched the “This is Our Future” campaign, including up-close-and-personal views of the fishery in VR, to raise awareness.

If it works, TNC plans to push for global electronic monitoring of fishing fleets. It’s starting with 24 vessels in Palau, the Federated States of Micronesia, the Marshall Islands, Japan and the Solomon Islands. Fully outfitted, the fleet would send home more than 600,000 hours of footage per year. That’s a volume that a team of about 200 human observers could manage, with some help from intelligent machines.
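Those closing numbers hang together on simple back-of-envelope arithmetic. A sketch, assuming review would otherwise happen at real-time speed and that each observer logs roughly a full-time year of review (both assumptions, not figures from the article):

fleet_footage = 600_000   # hours of footage per year, fully outfitted (from the article)
algo_reduction = 0.40     # competition target: cut review time by ~40%
observers = 200           # team size cited in the article

remaining_hours = fleet_footage * (1 - algo_reduction)
per_observer = remaining_hours / observers
print(f"{remaining_hours:,.0f} review hours, {per_observer:,.0f} per observer per year")
# -> 360,000 review hours, or 1,800 per observer: roughly one full-time
#    year each, which is why the machines have to make the first pass.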
