A Blog by Jonathan Low

 

Nov 21, 2019

Can Algorithms Actually Be Held Accountable?

Creators and users of algorithms can be held accountable, but the process of making that happen will be iterative, complicated, and time-intensive. JL

Joshua New reports for the Center For Data Innovation:

A typical software development process involves multiple iterations of beta testing, updates to existing software, adding new functionality, and so on. It would not be feasible to require a company to conduct a new impact assessment for every minor software update. A smart framework for algorithmic accountability would entail technological and procedural mechanisms to ensure the operator of an algorithmic system can verify it acts in accordance with its intentions, as well as identify and rectify harmful outcomes. Even the most extensive reviews will not predict all potential pitfalls. Hold companies accountable for their use of algorithms and for mitigating potential harms.
After months of advocates turning up the volume on concerns that algorithmic decision-making may exploit consumers, amplify bias, and foster discrimination, Congress made its first legislative foray into the issue of algorithmic governance with the Algorithmic Accountability Act of 2019. Unfortunately, as explained below, this bill misses the mark, primarily by holding algorithms to different standards than humans, not considering the non-linear nature of software development, and targeting only large firms despite the equal potential for small firms to cause harm. Because these issues are complex, important, and rapidly evolving, the best option would be for Congress to better study what is needed to achieve algorithmic accountability rather than embrace the precautionary principle. However, if Congress insists on rushing forward with this bill, it should fix these problems.
The Algorithmic Accountability Act, introduced in April by Senator Cory Booker (D-NJ), Senator Ron Wyden (D-OR), and Representative Yvette Clarke (D-NY), would direct the Federal Trade Commission (FTC) to develop regulations requiring large firms to conduct impact assessments for existing and new “high-risk automated decision systems.” The definition of a high-risk automated decision system is broad and includes many different types of automated systems, including those that pose a “significant risk” to individual data privacy or security or that result in biased or unfair decision-making; those that make decisions that significantly impact consumers using data about “sensitive aspects,” such as work performance and health; those that involve personal data such as race, political and religious beliefs, gender identity and sexual orientation, and genetic information; or those that monitor a large public space. These impact assessments would evaluate how an automated system is designed and used, including the training data it relies on, the risks the system poses to privacy or security, and various other factors. Companies would be required to reasonably address concerns these assessments identify, but they would not be required to disclose the assessments themselves. However, failure to comply would be considered an unfair or deceptive act under the Federal Trade Commission Act and thus subject to regulatory action.
The first and foremost problem with the Algorithmic Accountability Act is its framing. Targeting only automated high-risk decision-making, rather than all high-risk decision-making, is counterproductive. If a certain decision carries a high risk of harming consumers, such as by facilitating discrimination, it should make no difference whether an algorithm or a person makes that decision. Holding algorithmic decisions to a higher standard than human decisions implies that automated decisions are inherently less trustworthy or more dangerous than human ones, which is not the case. This would only serve to stigmatize and discourage AI use, which would reduce its beneficial social and economic impact. This provision would only be worthwhile if expanded to all high-risk decisions, regardless of the technology involved.
Second, this approach does not appropriately consider the non-linear nature of software development and deployment. A typical software development process can involve multiple iterations of beta testing, pushing minor updates to existing software, adding new functionality, and so on. It would not be feasible to require a company to conduct a new impact assessment for every minor software update, but the bill does not provide guidance on how to productively integrate impact assessments with the software development process.
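To make the integration problem concrete, the following sketch (in Python) shows one hypothetical way a company might gate reassessments inside its own release process, triggering a new impact assessment only when a change touches something material to an automated decision system. The change categories and the Change/needs_reassessment names are invented for illustration; nothing like this appears in the bill itself.

from dataclasses import dataclass, field

# Hypothetical change categories a release pipeline might track. These are
# illustrative assumptions about what a company could treat as material to
# an automated decision system; they do not come from the bill.
MATERIAL_CHANGES = {
    "training_data",        # new or re-labeled training data
    "model_architecture",   # different model family or feature set
    "decision_threshold",   # change to how scores map to outcomes
    "affected_population",  # system rolled out to new user groups
}

@dataclass
class Change:
    """A single item in a release, tagged by the developer."""
    description: str
    categories: set = field(default_factory=set)

def needs_reassessment(release: list) -> bool:
    """Return True if any change in the release is material enough to
    warrant a new impact assessment under this hypothetical policy."""
    return any(category in MATERIAL_CHANGES
               for change in release
               for category in change.categories)

if __name__ == "__main__":
    release = [
        Change("Fix null-pointer crash in scoring service", {"bug_fix"}),
        Change("Retrain credit model on new applicant data", {"training_data"}),
    ]
    if needs_reassessment(release):
        print("Material change: trigger a new impact assessment.")
    else:
        print("Minor release: the existing impact assessment still applies.")

Under a rule like this, routine bug fixes and performance tweaks would not restart the assessment process, while changes to training data or decision logic would; the bill, as written, draws no such line.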
Third, the bill only applies to companies with over $50 million in revenue or that possess data about 1 million consumers or consumer devices. It is unclear why these requirements should only apply to large or high-revenue companies. If a certain decision can harm consumers and thus warrants greater regulatory oversight, then the size of the companies making that decision or the total number of their customers is not relevant. If there is a serious risk of harm, it makes little sense to exempt the majority of companies from compliance.
Fourth, the bill does not require impact assessments to be made public, out of respect for the importance of protecting proprietary information. Protecting proprietary information matters, but it would be better to require that these impact assessments be publicly available, with the company publishing general information about what it did and redacting any proprietary details. This would make consumers more aware of any potential risks of engaging with a particular algorithmic system and create competitive pressure for companies to reduce those risks. Though the average consumer is unlikely to review these assessments, trusted third parties such as Consumer Reports likely would, and could provide consumers with easily digestible recommendations. Requiring this disclosure would lead to more transparency and consumer empowerment.
Fifth, the bill is correct to make a distinction between high-risk decisions, which are worthy of regulatory scrutiny, and low-risk ones, which are not. However, its definitions of high-risk automated decision and information systems are overly broad. For example, based on the bill’s definitions, any large information system that involves data about consumers’ gender identities would necessarily be high-risk. But in many cases, the use of this data is entirely benign: many companies simply want to know the breakdown of their customers by gender, such as a clothing retailer that wants to better allocate floor space for certain items. It makes little sense to require companies to conduct impact assessments for such innocuous applications of analytics.
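To illustrate the sort of benign, aggregate use described above, here is a small hypothetical sketch in Python: a retailer tallies purchases by self-reported gender to plan floor space, without scoring, ranking, or deciding anything about any individual customer. The records and field names are invented for illustration only.

from collections import Counter

# Hypothetical purchase records; the fields and values are invented purely
# to illustrate aggregate analytics, not drawn from any real dataset.
purchases = [
    {"category": "outerwear", "customer_gender": "female"},
    {"category": "outerwear", "customer_gender": "male"},
    {"category": "footwear",  "customer_gender": "female"},
    {"category": "footwear",  "customer_gender": "nonbinary"},
    {"category": "outerwear", "customer_gender": "female"},
]

# Aggregate breakdown by category and gender. No individual is scored or
# subject to an automated decision; the totals only inform how much floor
# space to allocate to each category.
breakdown = Counter(
    (p["category"], p["customer_gender"]) for p in purchases
)

for (category, gender), count in sorted(breakdown.items()):
    print(f"{category:10s} {gender:10s} {count}")

As argued above, under the bill’s definitions a large system holding this kind of data would apparently count as high-risk, even though nothing in this computation makes a decision about any individual.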
Overall, as the Center has explored at length in its report, “How Policymakers Can Foster Algorithmic Accountability,” impact assessments can be a valid tool for helping achieve algorithmic accountability, but not as described in this bill and not by themselves. A smarter regulatory framework for algorithmic accountability would entail a wide variety of technological and procedural mechanisms to ensure that the operator of an algorithmic system can verify it acts in accordance with its intentions, as well as identify and rectify harmful outcomes. Absent the additions laid out here, the Algorithmic Accountability Act would fall short of creating an environment in which companies are both meaningfully accountable and able to demonstrate this accountability to regulators effectively.
That said, even though policymakers are unlikely to expand this bill to cover all high-risk decisions rather than just automated ones, there are still several opportunities to improve it. First, the bill should direct the FTC to create guidance and identify best practices for impact assessments. Second, impact assessments should be voluntary and provide those companies that participate with some degree of liability protection. While firms can conduct and publish impact assessments for their algorithms now, they have little incentive to do so. This change would have important consequences for accountability. Should the FTC find evidence of consumer harm as a result of automated decision-making, companies that voluntarily conducted an impact assessment meeting the FTC’s guidelines would enjoy a presumption of no ill intent and could be given a window of time, such as 30 or 60 days, to fix their system and stop the harm from occurring. If a company never conducted an impact assessment, did not do so to the FTC’s standards, or failed to fix its software within the time window, the FTC could then take enforcement action. Companies would have a strong incentive to conduct high-quality impact assessments because doing so would reduce regulatory risk, and regulators would have a clear process for determining whether and to what degree a company should be liable for the harm its algorithms cause.
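The enforcement process proposed above reduces to a fairly simple decision rule. The sketch below encodes it in Python purely as described in this article (a qualifying voluntary assessment buys a presumption of no ill intent and a fix window of, say, 30 or 60 days); the CaseFacts and should_enforce names, and their fields, are invented for illustration and are not part of the bill or any actual FTC process.

from dataclasses import dataclass

@dataclass
class CaseFacts:
    """Hypothetical facts the FTC might weigh after finding consumer harm
    caused by an automated decision system."""
    assessment_conducted: bool   # did the company conduct an impact assessment?
    meets_ftc_guidelines: bool   # did the assessment meet the FTC's guidance?
    days_since_notice: int       # days since the FTC flagged the harm
    harm_remediated: bool        # has the company fixed its system?

def should_enforce(case: CaseFacts, fix_window_days: int = 60) -> bool:
    """Return True if enforcement action would proceed under the voluntary
    impact assessment scheme sketched in this article."""
    # No assessment, or one that falls short of the FTC's guidelines:
    # enforcement can proceed immediately.
    if not (case.assessment_conducted and case.meets_ftc_guidelines):
        return True
    # A qualifying assessment earns a presumption of no ill intent and a
    # window (e.g., 30 or 60 days) to fix the system.
    if case.days_since_notice <= fix_window_days:
        return False
    # Past the window, enforcement proceeds only if the harm persists.
    return not case.harm_remediated

The point is not the code but the clarity of the rule: a company that does the work up front knows exactly what protection it earns and for how long, and a regulator knows exactly when to act.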
The Algorithmic Accountability Act, if implemented as written, would create overreaching regulations that would still not protect consumers against many potential algorithmic harms while also inhibiting benign and beneficial applications of algorithms. The most prudent approach would be for Congress to act with forbearance and take the time to understand this issue better to avoid potential unintended consequences of misguided legislation. However, if Congress recognizes that the best way to achieve better outcomes for consumers is not to bog down companies using algorithms with new regulations—even the most extensive internal reviews will not be able to predict all potential pitfalls—but rather to hold companies strictly accountable for monitoring their use of algorithms and mitigating potential harms, then this bill is salvageable.
