A Blog by Jonathan Low

 

Aug 16, 2023

Why Investors Believe Legal Threats Cause Corporate Hesitation To Invest In AI

The engine driving investment in generative AI is the future prospect of corporate adaptation. Companies recognize that AI is going to be important to the future, but investors are seeing slower embrace at scale than expected. 

The reason is not that companies don't understand AI or its potential, it's actually the opposite: they understand those prospects quite well, which means they also have a sophisticated understanding of the risks, and not just fairness or transparency, but the legal and financial implications which they perceive to be more significant than do some VCs. The result is that there is more demand for legal rules, detailed chains of responsibility and clear understanding of risk assumptions. This may be contrary to the 'move fast and break things' ethos that has often prevailed in tech, but the perception is that AI is a far more powerful development and that that brings with it a need for more structure. JL

Venture Beat reports:

Generative AI uses data that exists, cobbles (it) together, and generates new content, videos, images, text. It’s generating not only privacy and transparency concerns, but IP infringement, false content and more. Customer data and proprietary company IP is at risk as these models hoover up the data available to them. When an AI engine is asked a question or sent a prompt, there’s a danger of sending data that shouldn’t be public. It’s also increasingly easy for these AI engines to learn and train on proprietary data sets. The risk is growing as the technology becomes more powerful and more deeply embedded into company infrastructure. “Trustworthy AI is (what) we're starting to look at, making sure we’re not generating faulty content, or violating copyright laws.”

Generative AI has actually been around for a while, but in just months, ChatGPT democratized AI for every single person with a connection to the internet, taking hold of the imagination of business leaders and the public alike. With the technology evolving at a record-breaking pace and getting implemented far and wide, embracing responsible AI to manage its ethical, privacy and safety risks has become urgent, says Vijoy Pandey, senior vice president at Outshift by Cisco. (Outshift is Cisco’s incubation engine, exploring the kinds of transformative technology critical to today’s environment in new and emerging markets.)

“Every aspect of our personal and business lives, across industries, has been impacted by generative AI,” Pandey says. “Putting a responsible AI framework in place is crucial, now that AI has broken free from specific use cases for specific products, and is embedded in everything we do, every day.”

The risks and real-world cost of irresponsibility

AI is enabling tremendous innovation, but technology leaders must understand that there’s a real-world cost involved. When AI does good, it can transform lives. When AI goes unattended, it can have a profound impact not just on a company’s bottom line, but on the humans whose lives it touches. And generative AI brings its own brand-new set of issues. It’s a big swing of the pendulum away from predictive AI, recommendations and anomaly detection, to an AI that actually delivers ostensibly new content.

“We’re not only looking at privacy and transparency, we’re starting to look at IP infringement, false content, hallucinations, and more.”

“I call it regenerative AI, because it uses things that exist, cobbles them together, and generates new audio content, videos, images, text,” Pandey says. “Because it’s generating content, new issues creep in. We’re not only looking at privacy and transparency, we’re starting to look at IP infringement, false content, hallucinations, and more.”

Customer data and proprietary company IP is at risk as these generative AI models hoover up all the data available to them across the internet. When an AI engine is asked a question or sent a prompt, there’s a real danger of sending data that shouldn’t be public, if there are no guardrails in place. It’s also increasingly easy for these AI engines to learn and train on proprietary data sets – Getty Image’s lawsuit against Stable Vision, a generative AI art tool, is a stark example. And the risk is growing as the technology becomes more powerful and more deeply embedded into company infrastructure.

A framework for trustworthy and responsible AI

With the emergence of generative AI, a responsible AI framework must place an emphasis on IP infringement, unanticipated output, false content, security and trust.

The move toward security and trust in this framework also means ensuring that there is responsibility baked into every AI initiative.

“Trustworthy AI is the bigger umbrella we’re all starting to look at,” Pandey explains. “It’s not just about being unbiased, transparent and fair. It’s also about making sure we’re not generating faulty or distorted content, or violating copyright laws.”

The move toward security and trust in this framework also means ensuring that there is responsibility baked into every AI initiative, with clear lines of authority, so that it’s easy to identify who or what is liable, if something goes wrong.

Transparency reinforces trustworthiness, because it gives agency back to customers, in situations where AI is being used to make decisions that affect them in material and consequential ways. Keeping communications channels open helps build the trust of customers and stakeholders. It’s also a way to mitigate harmful bias and discriminatory results in decision-making, to create technology that promotes inclusion.

For instance, the new security product Outshift is developing, Panoptica, helps provide context and prioritization for cloud application security issues — which means it’s handling hugely sensitive information. So to ensure that it doesn’t expose any private information, Outshift will be transparent about the unbiased synthetic data it trains the model on.

Accountability is about taking responsibility for all consequences of the AI solution, including the times it does jump the fence.

And when Cisco added AI for noise suppression in Webex for video meetings, which cancels any noise besides the voices of the attendees in front of their computers, it was crucial to ensure the model wasn’t being trained on conversations that included sensitive information, or private conversations. When the feature rolled out, the company was transparent about how the model was trained, and how the algorithms work to ensure it remains bias-free, fair and stays fixed in its lane, training only on the correct data.

Accountability is about taking responsibility for all consequences of the AI solution, including the times it does jump the fence and suddenly begins operating outside its intended parameters. It also includes making privacy, security and human rights the foundation of the entire AI life cycle, which encompasses protection against potential cyberthreats to improve attack resiliency, data protection, threat modeling, monitoring and third-party compliance.

Even if a system isn’t threatened from the outside by malicious actors, there’s always a risk of inaccurate results, for generative AI, in particular. That requires systematic testing of an AI solution once it’s launched to maintain consistency of purpose and intent, across unforeseen conditions and use cases.

“Responsible AI is core to our mission statement, and we’ve been a champion of the responsible AI framework for predictive AI since 2021,” Pandey says. “To us, it’s part of the software development life cycle. It’s as embedded in our processes as a security assessment.”

Implementing trustworthy and responsible AI: Beyond people and processes

“First and foremost, it’s imperative that C-suites start educating their teams and start seriously thinking about responsible AI, given the pervasiveness of the technology, and the dangers and the risks,” Pandey says. “If you look at the framework, you see it requires cross-functional teams, from the security and trust side to engineering, IT, government and regulatory teams, legal, and even HR, because there are ramifications both internally and in partnerships with other companies.”

“It requires cross-functional teams, from the security and trust side to engineering, IT, government and regulatory teams, legal, and even HR because there are ramifications both internally and in partnerships with other companies.”

It starts with education concerning the risks and pitfalls, and then building a framework that matters, customized to your own use cases and using language that every team member can rally behind, so that you’re all on the same page. The C-suite then needs to build out required business outcomes, because without that, all of these remain best-effort initiatives.

“If the entirety of the world is moving toward digitization, then AI, data and responsible AI become a business imperative,” he says. “Without building a business value into every use case, these efforts will just disappear over time.”

He also notes that as we move from predictive to generative AI, the world becomes increasingly digitized, and the number of use cases multiply, the machines and software and tools that power these solutions independently will also need to operate within these frameworks.

Deploying and using AI in every facet of a business is incredibly complex — and the churning regulatory landscape makes it clear that it will keep getting more complicated. Companies will need to keep an eye on how regulations evolve, as well as invest in products and work with companies that can help solve the pain points that flare up when pursuing a responsible AI strategy.

Getting started on the trustworthy and responsible AI journey

Launching a responsible AI initiative is a tricky process, Pandey says. But the first step is to ensure you’re not AI-washing, or using AI no matter the use case, but instead, identifying business outcomes as well as where and when AI and machine learning is actually required to make a difference. In other words, where does the business bring differentiation, and what can you offload?

“Just because there’s AI everywhere, throwing AI at every problem is expensive and adds unnecessary complexity,” he says. “You need to be very particular about where you use AI, as you would with any other tool.”

“I definitely believe technology solutions to these problems will come out of the industry.”

Once you determine the most appropriate use cases, you must build the right abstraction layers in people, process, software and so on in order to handle the inevitable churn as you build the organizational structure required to use AI in a responsible way.

“And finally, have hope and faith that technology will solve technology’s problems,” Pandey says. “I definitely believe technology solutions to these problems will come out of the industry. They’ll solve for this complexity, for this churn, for the responsible AI framework, for the data leakage, privacy, IP and more. But for now, ensure that you’re ready for these evolutions.”

0 comments:

Post a Comment