A Blog by Jonathan Low

 

Mar 25, 2019

Why Not Appoint An Algorithm To Your Corporate Board?

Think of the savings from not having to give it stock or options, on top of not having to schedule meetings in strategically important but expensive places like Hawaii or Paris. JL


Will Pugh reports in Slate:

That's not to say the high-level supervision that human directors provide isn't important. But if a company were to supplement that with supervision from A.I., the algorithm could independently monitor goals and balance competing interests. Technology could help human board members transition from high-level supervisors to effective micromanagers. Machine learning is ideal when you need to find hidden patterns in troves of data. An A.I. director could consume huge amounts of information about the company and the business environment to make decisions. If machine learning algorithms can reveal their internal logic, and can analyze and communicate well, they may do a better job of helping humans focus on the right details by filtering out noise in the data.
Though Elon Musk has famously warned humanity about the dangers of artificial intelligence, his shareholders might be well-served by having an algorithm on Tesla’s board of directors.
In recent years, Tesla has become a cautionary tale for how difficult it is for part-time directors to oversee charismatic, strong-willed CEOs—especially ones who are the founding visionaries of their companies. Given how Elon Musk has landed the company in hot water with the Securities and Exchange Commission with his erratic tweets and mocking disregard for the regulatory regime dictating the proper behavior of a publicly traded company, it’s little wonder that Tesla’s board has been accused of being “asleep at the wheel.” Perhaps their seeming unwillingness to rein him in is due to the Tesla directors’ personal loyalty to Musk. Or maybe they simply don’t want to spend the time to “preapprove” Musk’s tweets about the company, especially with the less conventional hours and fast pace the CEO keeps. Either way, they can’t seem to keep him—or the company—out of trouble.
The Tesla example may be extreme, but it illustrates two key difficulties that boards often face in overseeing CEOs and their management teams: first, that boards typically consist of notable people with limited time and attention spans, and second, that their flow of information regarding corporate affairs is typically controlled by the CEO. All of this makes it rather intriguing that at least one company has tried a tech-forward way around these conundrums: appointing an algorithm to one of the directors' chairs.
But why bring in an algorithm to solve these very human problems? First, it's worth looking at what boards are tasked with doing. Generally, board seats are not hard to fill. They usually carry prestige and can pay seven figures for part-time work. With this great power comes great responsibility, and if a director is careless with the company's resources or acts on a conflict of interest to benefit someone other than the shareholders, then the director can be sued.
A.I. could help human board members transition from high-level supervisory entities to effective micromanagers.
While it may seem like a daunting task to oversee a large company as a part-time job, directors do have a right to rely on the company’s officers (like its CEO, CFO, and COO) and to delegate responsibility to a management team. For example, Apple’s full board only met four times during 2018. Because directors have limited time, boards authorize employees to handle the company’s day-to-day management.
Director rosters at large companies are usually a who's who of business, political, and academic standouts. While they may be well-respected, directors may not be subject matter experts in the company's specific industry. Take, for example, Apple's board, which includes Ronald Sugar (retired chairman of Northrop Grumman), Sue Wagner (former vice chairman of BlackRock), and Al Gore. Or Tesla's board, which includes Brad Buss (former CFO of SolarCity), James Murdoch (currently the CEO of 21st Century Fox), Larry Ellison (co-founder of Oracle), and Linda Johnson Rice (CEO of Johnson Publishing Company). Both demonstrate the kind of high-level supervision that shareholders expect boards to provide. Because directors have limited time to study details, they mostly focus on steering the company around corporate icebergs and not granular issues. But that can lead to trouble when tasks like, say, giving a close read to the thousands of tweets that Musk produces in a given year are added to their other obligations.
That’s not to say the high-level supervision that human directors provide isn’t important. But think of how much further it could go if a company were to supplement that with supervision from, say, sophisticated A.I. that could independently monitor fine-tuned goals (is Tesla really going to produce 500,000 vehicles in 2019?) and even balance competing interests on a more nuanced level. It’s the kind of technology that could help those human board members transition from high-level supervisory entities to effective micromanagers.
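To make the monitoring idea concrete, here is a minimal sketch of the kind of automated goal tracker such a director could run. The quarterly figures, the alert logic, and the 500,000-vehicle target as a hard-coded constant are all illustrative assumptions, not a description of any real system:

```python
# A minimal sketch of an automated production-goal monitor.
# The figures below are hypothetical, not actual Tesla data.

ANNUAL_TARGET = 500_000  # vehicles promised for the year (illustrative)

def flag_production_risk(quarterly_output, quarters_elapsed):
    """Project full-year output from the quarters seen so far and
    flag the gap if the run rate falls short of the target."""
    run_rate = sum(quarterly_output) / quarters_elapsed
    projected = run_rate * 4
    shortfall = ANNUAL_TARGET - projected
    if shortfall > 0:
        return f"ALERT: projected {projected:,.0f} vehicles, {shortfall:,.0f} short of target"
    return f"On track: projected {projected:,.0f} vehicles"

# Hypothetical first-half numbers
print(flag_production_risk([77_100, 95_200], quarters_elapsed=2))
```

A human board meeting four times a year checks this sort of number four times a year; a monitor like this checks it continuously.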
Consider the data-hungry environments where A.I. thrives. Machine learning is ideal when you need to find hidden patterns in vast troves of data. An A.I. director could consume huge amounts of information about the company and the business environment to make good decisions on issues like the future demand for the company's products or whether the company should expand to China. This is exactly how the first A.I. director, appointed by the Hong Kong company Deep Knowledge Ventures, is being used: It's tasked with consuming data about life science companies and then voting on which companies are good investments. The company says that it relies on the A.I.'s recommendations by refraining from making any investments that the A.I. doesn't approve—which, it says, has helped eliminate some kinds of bias and avoid "overhyped" investments (looking at you, Theranos backers).
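Deep Knowledge has not published its model, but a veto rule of this general shape is easy to sketch. Here is a minimal, hypothetical version: a classifier trained on invented features (cash runway, trial success rate, insider selling) that only approves candidates whose estimated success probability clears a threshold. Everything here is an assumption for illustration:

```python
# A minimal sketch of an algorithmic investment screen with a veto rule,
# loosely in the spirit of Deep Knowledge Ventures' setup. The features,
# training data, and threshold are all invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per candidate: [cash runway, trial success rate,
# insider selling ratio], each scaled to 0-1.
X_train = rng.random((200, 3))
# Hypothetical labels: 1 = investment worked out, 0 = it didn't.
y_train = (X_train[:, 0] + X_train[:, 1] - X_train[:, 2] > 1.0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def board_vote(candidate_features, approval_threshold=0.5):
    """Vote 'approve' only if the estimated success probability clears
    the threshold -- otherwise the A.I. effectively vetoes the deal."""
    p = model.predict_proba([candidate_features])[0, 1]
    return ("approve" if p >= approval_threshold else "veto", round(p, 2))

print(board_vote([0.8, 0.7, 0.1]))  # strong fundamentals -> likely approve
print(board_vote([0.2, 0.3, 0.9]))  # heavy insider selling -> likely veto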
But why go to the extreme of giving A.I. its own seat when, theoretically, the board could just consult such algorithmic assessments to inform its decisions? This gets back to the issues of time, loyalty, and access to information. Unlike a human, an A.I. director is an appealing candidate for independent tiebreaker on any disagreement between the human board members. What's more, if such algorithms cast votes, it will be harder for other directors to disregard those votes, and it will force those directors to find compelling reasons to oppose them. In some cases, an A.I. director's vote could be a red flag, an antidote to groupthink. In others, it may force human directors to confront potential biases in their thinking, like loyalty to a particularly charismatic CEO. Think of what an A.I. director at General Electric might have focused on in recent years, when the company appeared to disregard its plummeting cash flow from operations and its mounting pension liabilities over many years.
There are, of course, limitations and issues to overcome before giving software a seat at the directors' table. For one, many forms of A.I. "learn" from human-generated and human-curated data—which has been known to replicate human bias. This kind of bias can be hard to fix because it can creep in at many different stages of A.I. training, including the goals programmers assign the A.I. to achieve, the data sets they feed it, the data attributes they choose to focus on, and the data they use to test it. Many programmers are becoming more cognizant of these issues, however, and are looking at better ways to address these biases while developing these tools—including projects like an A.I. that aims to "de-bias" other A.I. tools.
There's also the explainability problem: deep learning techniques are currently "black boxes." A self-driving car may be able to identify a crosswalk, and a valuation algorithm may be able to say that a company is worth $X, but if A.I. directors are going to interact with shareholders and human directors, they need to be able to explain their conclusions. If we can't look under the hood and see their reasoning, A.I. directors will be hard to trust, and courts won't be able to ensure that they are fulfilling their legal duty of "candor" to shareholders—i.e., disclosing all information that would be important to a shareholder. Under securities law, one of the most common disclosure items for directors is an explanation of how and why directors are handling risk in a specific way. If machine learning algorithms can reveal their internal logic and are designed to analyze and communicate such risks well, they may even do a better job at providing such disclosures, helping humans focus on the right details by filtering out noise in the data.
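Not every model is a black box, though. For a simple linear model, each feature's contribution to a decision is just its coefficient times its value, so the "candor" can be printed out directly. Here is a minimal sketch along those lines; the feature names (cash flow trend, pension liability, CEO tweet risk) and the data are invented for illustration:

```python
# A minimal sketch of an interpretable vote: a linear model whose
# per-feature contributions can be shown to shareholders. Feature
# names and training data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["cash_flow_trend", "pension_liability", "ceo_tweet_risk"]
rng = np.random.default_rng(1)
X = rng.random((100, 3))
y = (X[:, 0] - X[:, 1] - X[:, 2] > -0.5).astype(int)
model = LogisticRegression().fit(X, y)

def explain_vote(x):
    """Break the model's score into per-feature contributions,
    largest in magnitude first, then report the overall vote."""
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True):
        print(f"  {name:>18}: {c:+.2f}")
    score = contributions.sum() + model.intercept_[0]
    print(f"  vote: {'for' if score > 0 else 'against'} (score {score:+.2f})")

explain_vote(np.array([0.2, 0.9, 0.8]))  # weak cash flow, big liabilities
```

A deep network wouldn't decompose this cleanly, which is exactly why explainability remains the sticking point for anything board-shaped.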


This also gets at another advantage that a transparent algorithm could have: a refreshing lack of personal ambition or interests. Assuming sufficient advancement in A.I. technology, shareholders and stakeholders alike could trust A.I. directors to be forthcoming about why they are taking a specific action—an attribute not always found in their human counterparts. Courts have recognized that, while directors may ostensibly be trying to benefit shareholders, there's an "omnipresent specter" that members of the board are, intentionally or not, actually pursuing self-interest. On a hybrid board with both humans and A.I., the A.I. could provide shareholders, as well as other directors, with a more objective analysis of questions like how a potential merger could affect directors' own net worth.

A.I. directors may also be better at balancing hard-to-reconcile interests, such as when shareholder and employee interests conflict. Under current Delaware law, directors can consider "the impact [of their actions] on 'constituencies' other than shareholders." However, directors are only to consider these constituencies to the extent that they affect long-term shareholder value (e.g., happy employees are usually better employees in the long run). But new proposals, such as Elizabeth Warren's Accountable Capitalism Act, call for directors to consider both shareholders' and other stakeholders' interests. This could be achieved by requiring a subset of human directors to look out for employees while others remain focused on shareholders—or it could be achieved by fine-tuning an individual A.I. director's ultimate goals (see the sketch below). If A.I. technology advances to the point where A.I. directors could explain how they reach their conclusions, then a single A.I. director could, for example, be programmed to consider both shareholder and stakeholder interests in a more transparent way than a human director could.

To date, it seems Deep Knowledge is the only firm to have taken such a step. And it's worth noting that, as of now, A.I. directors would be illegal under U.S. corporate law, which requires directors to be "natural persons." But the idea of putting A.I. on a corporate board isn't as far-fetched as it may seem. In a 2015 World Economic Forum study that surveyed over 800 IT executives, 45 percent of respondents expected that we'd see the first A.I. on a corporate board by 2025, and that such a breakthrough would be a tipping point for more.
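What "fine-tuning an A.I. director's ultimate goals" might mean in practice is simply making the shareholder/stakeholder trade-off an explicit, inspectable parameter. Here is a minimal sketch under invented assumptions; the weight, the scenarios, and their scores are all hypothetical:

```python
# A minimal sketch of a fine-tuned objective: the balance between
# shareholder and stakeholder interests is an explicit parameter that a
# charter (or a Warren-style statute) could fix. All values are invented.

def director_objective(shareholder_value, stakeholder_value,
                       stakeholder_weight=0.3):
    """Score a proposed action as a transparent blend of the two interests."""
    return ((1 - stakeholder_weight) * shareholder_value
            + stakeholder_weight * stakeholder_value)

# Hypothetical merger scenarios: (effect on shareholders, effect on employees)
proposals = {
    "merge_and_lay_off": (0.9, -0.6),   # blended score: 0.45
    "merge_and_retain":  (0.6,  0.4),   # blended score: 0.54
}
best = max(proposals, key=lambda k: director_objective(*proposals[k]))
print(best)  # with a 0.3 stakeholder weight, retention narrowly wins
```

The point isn't the arithmetic; it's that a human director's weighting of these interests is unstated and unverifiable, while an algorithm's sits in a single line anyone can read.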

Perhaps, with enough advancement in this specialized technology, Musk’s warning that A.I. could create an “immortal dictator from which we can never escape” may come true—at least when it comes to how much freedom certain CEOs have to fire off reckless tweets or miss production targets without “someone” on the board sounding the alarm.
