
Hidden Bias in the Black Box: Info Gov as a Key Check to Algorithmic Bias

by Jason R. Baron, Drinker Biddle & Reath, as seen on Legaltech News

With each passing day, we are increasingly living in an algorithmic universe, due to the easy accumulation of big data. In our personal lives, we inhabit a 24/7 world of "filter bubbles," where Facebook customizes how liberal or conservative one's newsfeed is based on prior postings; Google personalizes the ads that appear in Gmail based on the content of our conversations; and merchants like Amazon and Pandora feed us personalized recommendations based on our prior purchases and everything we click on.

While (at least in theory) we remain free in our personal lives to choose whether to keep using these applications, what we see is increasingly the result of hidden bias in the software. Similarly, in the workplace, black box algorithms can introduce certain types of bias without an employee's or prospective employee's knowledge. The question we wish to address here: From an information governance perspective, how can management provide some kind of check on the sometimes naïve, sometimes sophisticated use of algorithms in the corporate environment?

Algorithms in the Wild

An early, well-known example of the surprising power of algorithms was Target's use of software that, based on purchasing data (e.g., who was buying unscented lotions, cotton balls, etc.), was spookily able to predict whether a customer was likely pregnant. Target sent coupons for baby products to a Minnesota teenager's home before the teenager's father knew about the pregnancy, leading to a public relations debacle. A different example is Boston's use of a mobile app called Street Bump, in which smartphones in cars passing over potholes and the like would automatically report the location for the city to fix. The problem: the resulting map of potholes corresponded closely with the more affluent areas of the city, as those were the neighborhoods where residents knew to download the app and could afford smartphones in the first place.

In workplace hiring decisions, facially neutral algorithms sometimes harbor a hidden bias arising from how features are selected and weighted, or from variables that essentially function as "proxies" for real-world racial or ethnic differences. For example, a software feature using the variable "commuting distance from work" as a factor in deciding which candidates to hire may, depending on local geography, discriminate based on race. As Gideon Mann and Cathy O'Neil stated in Harvard Business Review (12/9/16), "When humans build algorithmic screening software, they may unintentionally determine which applicants will be selected or rejected based on outdated information—going back to a time when there were fewer women in the workforce, for example—leading to a legally and morally unacceptable result."
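To make the proxy problem concrete, consider the following sketch in Python. The data, the two commute distributions, and the 10-mile cutoff are all invented for illustration; the point is only that a screen that never sees race or any other protected attribute can still select two groups at very different rates when residential patterns differ by group.

```python
import random

random.seed(0)

# Synthetic candidate pool. Group membership is a protected attribute the
# screen never sees directly, but (as in many metro areas) group B candidates
# are assumed to live farther, on average, from the office.
candidates = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    mean_commute = 8 if group == "A" else 18          # miles (invented)
    commute = max(1.0, random.gauss(mean_commute, 5))
    candidates.append({"group": group, "commute": commute})

# A "facially neutral" screening rule: prefer candidates within 10 miles.
def passes_screen(candidate, cutoff_miles=10):
    return candidate["commute"] <= cutoff_miles

# Selection rate by group: commuting distance acts as a proxy for group.
for g in ("A", "B"):
    members = [c for c in candidates if c["group"] == g]
    rate = sum(passes_screen(c) for c in members) / len(members)
    print(f"Group {g}: {rate:.0%} pass the screen")
```

On this invented data, the cutoff passes roughly two-thirds of group A and well under a tenth of group B, a disparity no one would notice simply by reading the screening rule itself.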

Once on the job, employees may experience a very different kind of filter bias through software targeting the risk of internal threats to the company. The more advanced programs coming onto the market use sentiment analysis (e.g., algorithms examining the language used in emails) to predict whether certain individuals are more likely to display anger or other inappropriate behavior in the workplace. This capacity can be combined with external data on individuals obtained online, including credit report updates, crime reports, and certain types of medical information, to triage the employee population into "high-risk" and lower-risk categories, so that intensive monitoring, down to the keystroke level, can be targeted at a few. If this all sounds like a pre-crime, Minority Report world, that is because it is.
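The vendors' models are proprietary, but the basic triage pattern described above can be sketched in a few lines. Everything below is invented for illustration (the keyword lexicon, the weights, the threshold, the messages); real products rely on far more sophisticated language models and data feeds.

```python
from collections import defaultdict

# Invented, illustrative lexicon; real products use far richer language models.
ANGER_TERMS = {"furious": 3, "hate": 3, "unfair": 2, "fed up": 2, "angry": 2}

def anger_score(text):
    """Crude sentiment score: sum the weights of lexicon terms found in the text."""
    lowered = text.lower()
    return sum(weight for term, weight in ANGER_TERMS.items() if term in lowered)

def triage(emails_by_employee, threshold=4):
    """Bucket employees into 'high-risk' / 'lower-risk' by total anger score."""
    buckets = defaultdict(list)
    for employee, emails in emails_by_employee.items():
        total = sum(anger_score(e) for e in emails)
        buckets["high-risk" if total >= threshold else "lower-risk"].append(employee)
    return dict(buckets)

# Fabricated messages, for illustration only.
print(triage({
    "employee_1": ["Thanks for the update.", "Happy to help."],
    "employee_2": ["This policy is unfair and I am fed up with it."],
}))
```

Note that the threshold, like the lexicon itself, encodes someone's judgment about what counts as "high-risk," which is precisely the kind of choice an oversight body would want to see documented.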

IG and Its Role with Algorithms

What can or should be done? Mann and O'Neil suggest avoiding decisions made solely by an algorithm and instead involving what they call "algorithm-informed" individuals. They further suggest, "[w]e need to audit and modify algorithms so that they do not perpetuate inequities in businesses and society," with audits to be carried out either by inside experts or by hiring outside professionals. Both are sound suggestions.
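What might such an audit look like in practice? One common heuristic, drawn from U.S. employment-selection guidance rather than from Mann and O'Neil's article, is the four-fifths (80 percent) rule: if the selection rate for any group falls below 80 percent of the rate for the most-selected group, the screen is flagged for human review. A minimal sketch, again with hypothetical numbers:

```python
def disparate_impact_ratio(outcomes):
    """outcomes maps group -> (number selected, number of applicants).
    Returns the lowest group selection rate divided by the highest."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

def audit_screen(outcomes, threshold=0.8):
    """Flag the screen for human review if it fails the four-fifths rule."""
    ratio = disparate_impact_ratio(outcomes)
    verdict = "flag for review" if ratio < threshold else "no adverse-impact flag"
    return f"impact ratio {ratio:.2f}: {verdict}"

# Hypothetical tallies from the commute-distance screen sketched earlier.
print(audit_screen({"A": (330, 500), "B": (60, 500)}))
```

The numbers and function names here are illustrative; the value of the exercise lies in routinely putting a screen's outcomes in front of people with the authority to question it.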

Advocates of information governance (IG) argue that corporations with an IG program in place have a built-in mechanism to escalate data-related issues to a standing committee, consisting of either C-suite representatives or their delegates. In a growing number of corporate models, an individual with some kind of IG designation in their title will have been given authority to call together ad hoc groups to resolve specific data policy issues.

One could well imagine a chief information governance officer convening an ad hoc task force of the IG council, including a C-suite representative of the corporate human resources (HR) department, along with the person who approved or manages the data analytics software used by HR and a senior counsel, to perform the kind of "audit" of hiring practices envisioned above. Similarly, an ad hoc task force including the chief information security officer, senior HR personnel, and other IT representatives and senior counsel could be asked to review how well internal monitoring of employees is working, and how much transparency or notice staff should be given about such monitoring.

Along these lines, organizations might consider tasking a group of individuals—under the auspices either of the IG structure or as a freestanding committee—to perform a function similar to that of a present-day institutional review board, but limited to the effects of predictive software on human subjects. Such an "algorithm review board" (ARB) would be tasked with approving and/or overseeing any use of analytics in the workplace aimed at targeting present employees or prospective hires, serving as a check against possible hidden bias or a lack of notice where appropriate.

Some corporations (Microsoft and Facebook among them) have taken initial steps to implement, at least on a selective basis, ethics review boards that function much as an ARB would. However, the practice remains rare across industry verticals, notwithstanding the growing power of analytics in all aspects of daily life.

In his book, "The Black Box Society: The Secret Algorithms That Control Money and Information," law professor Frank Pasquale states that "authority is increasingly expressed algorithmically," and that "[d]ecisions that used to be based on human reflection are now made automatically." But, as computer scientist Suresh Venkatasubramanian has put it, "The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations."

This new reality calls for some kind of human intervention to serve as a quality control check on the black box (even if that means humans employing a second algorithm to check for bias in the first!). In the world we will increasingly live and work in, adopting an IG framework that includes reviewing the possibility of algorithmic bias in the workplace will be appreciated by an increasingly sophisticated populace.

Jason R. Baron is Of Counsel at Drinker Biddle & Reath LLP in Washington, D.C.