A Blog by Jonathan Low


Sep 20, 2019

The Big One: AI Is Taking On Earthquake Prediction - And Appears To Be Succeeding

If consistently replicable, such AI-generated early warning would truly be 'The Big One.' JL

Ashley Smart reports in Quanta:

Groups are using machine learning to demystify earthquake physics and tease out warning signs of impending quakes. Using pattern-finding algorithms similar to those behind image and speech recognition and other forms of artificial intelligence, collaborators successfully predicted temblors in a model laboratory — a feat since duplicated in Europe. They have since tested their algorithm on quakes in the Pacific Northwest, and the researchers report that it can predict the start of a slow slip earthquake to “within a few days — and possibly better.”
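The lab result described above reportedly came from training regression models on statistical features of the acoustic signal a laboratory fault emits before it slips. The sketch below is illustrative only, not the researchers' code: the data is synthetic, and the choice of features and model (gradient boosting on simple window statistics) is an assumption about the general approach.

```python
# Illustrative sketch (NOT the published pipeline): train a tree-based
# regressor on summary statistics of acoustic-emission windows to predict
# time-until-slip. Data is synthetic; signal variance grows as "failure"
# approaches, standing in for the real acoustic precursor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def acoustic_features(window):
    """Summary statistics of one acoustic-emission window."""
    return [window.mean(), window.std(), np.abs(window).max(),
            np.percentile(window, 90)]

# Synthetic stand-in: 400 windows, noise amplitude rising toward failure.
n_windows = 400
time_to_failure = np.linspace(10.0, 0.0, n_windows)  # seconds until slip
X = np.array([acoustic_features(rng.normal(0, 1 + (10 - t), 256))
              for t in time_to_failure])
y = time_to_failure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
mae = np.abs(model.predict(X_te) - y_te).mean()
print(f"mean absolute error: {mae:.2f} s")
```

The point of the sketch is the shape of the problem: the model never sees time directly, only statistics of the signal, yet it learns to map signal character to time-to-failure.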

AI and Machine Learning Applied To Make Data Storage More Efficient and Productive

As the compilation and analysis of data expands exponentially, finding ways to make its storage and access more effective could be one of the greatest challenges - and financial successes - of the techno-socio-economic future.

And it appears AI can help achieve that goal. JL

Jim Salter reports in ars technica:

As the scale and complexity of storage workloads increase, it becomes more and more difficult to manage them efficiently. With thousands of services competing for resources with differing performance and confidentiality targets, management of storage outpaces the human ability to make informed and useful changes. An AI architect might choose a convolutional or recurrent neural network to discover patterns in storage availability. Neural networks learn to spot anomalies and performance problems. AI management may also be able to provide a degree of efficiency not otherwise possible.
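The article's core idea is that a model can learn what "normal" storage behavior looks like and flag deviations. As a minimal, dependency-light stand-in for that idea (the article mentions neural networks; this sketch uses a linear reconstruction model instead, and the metrics and thresholds are invented for illustration):

```python
# Hedged sketch: learn the normal relationship between latency and
# throughput, then flag samples that reconstruct poorly. All numbers
# here are synthetic; no real storage system is being modeled.
import numpy as np

rng = np.random.default_rng(1)

# Normal workload: latency (ms) and throughput (MB/s) rise together with load.
load = rng.uniform(0.2, 1.0, 500)
normal = np.column_stack([5 + 10 * load + rng.normal(0, 0.5, 500),
                          50 + 200 * load + rng.normal(0, 5, 500)])

# Fit: standardize, then keep the leading principal component.
mu, sigma = normal.mean(axis=0), normal.std(axis=0)
Z = (normal - mu) / sigma
_, _, vt = np.linalg.svd(Z, full_matrices=False)
pc = vt[0]

def recon_error(x):
    """Distance from the learned 'normal' direction."""
    z = (x - mu) / sigma
    return np.linalg.norm(z - (z @ pc) * pc)

threshold = np.quantile([recon_error(x) for x in normal], 0.99)

# A stalling device: very high latency despite low throughput.
anomaly = np.array([40.0, 60.0])
is_anomaly = recon_error(anomaly) > threshold
print("flagged:", is_anomaly)
```

A neural autoencoder, as the article suggests, generalizes this same recipe: learn a compressed representation of normal behavior, and treat high reconstruction error as an anomaly signal.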

How Did Drones Manage To Take Out 5 Percent Of the World's Oil Supply?

These were not those cute - if sometimes annoying - little quad-propeller drones seen at the beach or local park.

They were more like subsonic cruise missiles capable of sophisticated evasive maneuvers - and carrying large explosive payloads. JL

Kyle Mizokami reports in Jalopnik:

How did the Saudi military, with a defense budget of $67.6 billion, allow this drone attack to cause such massive economic damage? Saudi Arabia has large numbers of Patriot missiles, but most of the missile batteries were looking south. The limited coverage arc of the radar makes it easy to fly around—particularly if you’re coming from the east. Drones encompass a variety of objects, from quadcopters to high-speed subsonic pilotless aircraft similar to cruise missiles, which fly at low altitudes, making detection with ground-based radars difficult.

The Reason An AI Facial Recognition Ban May Be Likely In the US

There will be legal challenges based on the constitutional right to be presumed innocent until proven guilty, a presumption undermined by current uses of facial recognition.

That, plus growing concern about abuses of the technology, may limit its spread. JL

Charlotte Jee reports in MIT Technology Review:

“Proper use” of facial recognition by government is supported by 80% of Americans. Without specific examples of what proper use is or is not, though, it’s hard to be sure of public opinion. “There will be legal challenges, and there will eventually be regulation. A constitutional right we have is innocent until proven guilty. Facial recognition could flip that around.” A campaign in New York by tenants to stop a plan for using facial recognition instead of keys to access their apartments mostly affected poor, black, and brown women. The tenants involved human rights lawyers, and more affluent groups started to ally with them.

China Introduces Social Credit Scores For Companies

Chinese and foreign companies will be surveilled, evaluated and assigned a score, which will impact everything from export and import licenses to access to new business opportunities.

While a good score may not necessarily improve a corporation's lot, a bad score can definitely hurt it. JL

Nathaniel Taplin reports in the Wall Street Journal:

A key target of China’s coming “social credit” system, which triggers visions of “1984”-style monitoring of people, is misbehaving businesses. 80% of information on the data-sharing platform relates to companies. Social credit will make falling afoul of regulations more costly. Compliance or noncompliance with important regulations will be assigned a value and fed into an algorithm to produce a company’s overall rating shared across agencies through a central database. A bad rating will have ripple effects. Data will be gathered from company submissions and inspections, but also from video surveillance, instrument monitoring and third-party sources.
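The scoring mechanism as described is an aggregation: each compliance item gets a value, and an algorithm combines them into one rating. A toy sketch, with weights, categories, and the cutoff all invented for illustration (nothing here reflects the actual system):

```python
# Hypothetical illustration of "values fed into an algorithm to produce
# an overall rating." Categories, weights, and the 0.5 cutoff are made up.
weights = {"customs": 0.4, "tax": 0.35, "environmental": 0.25}
compliance = {"customs": 1.0, "tax": 1.0, "environmental": 0.0}  # 1 = compliant

rating = sum(weights[k] * compliance[k] for k in weights)
flagged = rating < 0.5  # a bad rating triggers downstream consequences
print(f"rating={rating:.2f}, flagged={flagged}")
```

The "ripple effects" in the excerpt follow from the last line: a single shared number, computed once, gates decisions across many agencies.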

Who Isn't A Media Company These Days?

It's all about attempts to grab attention, engage and monetize. And if consumers want to mention your brand on their social media feed, so much the better. JL

Josh Sternberg reports in Ad Week:

“We’re at the nexus of content, technology and distribution.” A content strategy needs to fit into a company’s business strategy. All tie into a basic yet vital function: the relationship with the customer. Marketers say you have to measure what you can measure, and trust the rest. “We never ask a member or anyone to post on our behalf. This is about their experience, what they feel comfortable doing. We will see that they tag us or talk about us. And this is a content lever, with zero media spend.” “Media companies structure what content performs around topics that have audience behavior underneath them. Why wouldn’t brands insert themselves into that?”

Why Algorithms Encode the Subjectivities Of Their Human Designers

The notion that algorithms, or data, or technology is somehow objective and unbiased, neutral in all it surveys, has become a matter of faith for those disillusioned with their human colleagues.

But the reality is that algorithms and the data from which they are derived, and the devices on which they are devised or reported, are infused with the education, training and belief systems of their creators. JL

Sidney Fussell reports in The Atlantic:

Algorithms interpret millions of data points, and the exact path from input to conclusion can be difficult to make plain. But the effects are clear. This is a powerful asymmetry: Anyone can notice a change in search results, but it’s difficult to prove what caused it. That gives algorithm designers deniability. Because of their opacity, algorithms can privilege or discriminate without their creators designing them to do so. Algorithms provide “strategic ignorance.” Try as companies might to minimize personal accountability, it is humans who build, train, and deploy algorithms. Human subjectivities are encoded every step of the way.