In a company blog post, the chief privacy officer acknowledged that advertisers could misuse the tool for discriminatory and predatory purposes, and committed to updating ad guidelines so that vendors understand Facebook’s nondiscrimination policies.
The social media giant joins a host of other high-tech companies that find themselves wedged between the values of permissionless innovation, which seeks to remove barriers to entry for technology experimentation, and a social responsibility to protected classes, particularly the obligation to shelter racial and ethnic groups from explicit discrimination, unconscious bias, or both.
THE EFFECTS OF BIG DATA
Consumer data are collected daily through interactions with websites, social media, e-commerce platforms, and online searches. These small portions of data are compiled, mined, and eventually regenerated for marketplace use. Big data serves a variety of purposes, from helping to advance breakthroughs in science, health care, energy, and transportation to enhancing government efficiency by aggregating the input of citizens.
Big data can also include or exclude consumers, argues the Federal Trade Commission, an agency whose responsibilities include regulatory oversight of online content companies. When the analytics behind big data are misapplied, consumers can be tracked or profiled based on anything from their online preferences for goods and services to their public opinions. These profiles could lead to a denial of credit based on a browsing history that includes payday-lender websites, or to predictive algorithms that determine an individual’s suitability for future employment or educational opportunities. Online proxies, including zip code, can also be used by marketers to extrapolate an individual’s socioeconomic status from their neighborhood, resulting in subjective assumptions about their lifestyle.
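To make the proxy problem concrete, consider the short Python sketch below. Everything in it, including the populations, the correlation between zip code and group membership, and the decision rule, is invented for illustration; the point is simply that a scoring rule that never sees a protected attribute can still produce starkly disparate outcomes when a permitted feature stands in for it.

```python
# Hypothetical illustration: a scoring rule that never looks at group
# membership can still produce disparate outcomes when zip code is
# correlated with it. All data and rates below are invented.
import random

random.seed(0)

# Synthetic applicants: residents of zip "A" are 80% group_1,
# residents of zip "B" are 80% group_2.
applicants = []
for _ in range(10_000):
    zip_code = random.choice(["A", "B"])
    p_group_1 = 0.8 if zip_code == "A" else 0.2
    group = "group_1" if random.random() < p_group_1 else "group_2"
    applicants.append({"zip": zip_code, "group": group})

def approve(applicant):
    # "Blind" decision: only the zip code is consulted, standing in
    # for something like a historical default rate by neighborhood.
    return applicant["zip"] == "A"

# Approval rates diverge by group even though 'group' is never an input.
for g in ("group_1", "group_2"):
    members = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"{g}: approval rate = {rate:.0%}")
```

In this toy setting, one group is approved about 80 percent of the time and the other about 20 percent, without the rule ever being told who belongs to which group, which is exactly why proxy variables are so difficult to police.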
In these and other examples, big data, when misused, can aid and abet discrimination already experienced by disadvantaged populations, particularly people of color, women, and the poor.
BIG DATA AND EXPLICIT DISCRIMINATION
Algorithmic bias is one form of explicit discrimination perpetuated online. In 2013, Google search results for “black-sounding” names were more likely to surface suggestions of an arrest record, even when none existed. This year, comparative online searches for “three white teenagers” and “three black teenagers” returned smiling faces for the former and police mugshots for the latter. Computer programmers may not create algorithms that start out being discriminatory, but systems that learn from the collection and curation of users’ preferences can become “adaptive algorithms” that absorb and reinforce societal biases.
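A minimal sketch of that dynamic, with entirely invented numbers, appears below: a ranker that learns from clicks starts out neutral, but a small user-side bias in click behavior compounds over time until the favored result dominates.

```python
# Hypothetical sketch of an "adaptive algorithm" feedback loop.
# Two results start with identical scores; users click result "x"
# slightly more often (a stand-in for societal bias), and the learned
# scores drift until "x" dominates. All rates and rules are invented.
import random

random.seed(1)
scores = {"x": 1.0, "y": 1.0}          # the ranker starts out neutral
click_prob = {"x": 0.55, "y": 0.45}    # small pre-existing user bias

for _ in range(5_000):
    # Show the higher-scored result proportionally more often.
    total = scores["x"] + scores["y"]
    shown = "x" if random.random() < scores["x"] / total else "y"
    # Learn from feedback: each click nudges the shown result upward.
    if random.random() < click_prob[shown]:
        scores[shown] += 0.01

print(scores)  # "x" ends far ahead; the small initial bias has compounded
```

The feedback loop, not any single line of code, is what converts a modest difference in user behavior into a lopsided ranking, which is why audits of adaptive systems have to examine outcomes over time rather than the initial algorithm alone.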
Companies such as Airbnb, Uber, and Lyft are also confronting increased explicit discrimination among users of their online sharing applications. This year, Airbnb found that some hosts on the home-rental service were rejecting renters based on race, age, gender, and other factors. In its corporate report on tackling this bigotry and fostering inclusion, the company vowed to eradicate bias on its site through new “community commitment” agreements that reinforce legal compliance, and through policies that ensure all guests will be accommodated in cases of unjust treatment.
Researchers exposed similar discrimination when they found that the ride-sharing services Uber and Lyft were either cancelling rides or extending wait times for African-American customers in Boston and Seattle. In a sample of 1,500 rides across the two cities, the study found that Uber drivers were more likely to cancel on riders with “black-sounding” names, and that African-American men typically waited longer to be picked up. African-American customers were also “screened out” by Lyft drivers, who could review riders’ names and faces when an order came in. The study also found that women were often taken on longer routes, inflating the cost of the fare.
Countering explicit discrimination has become far easier in the U.S. under federal laws that guarantee equal opportunity for protected classes in housing, employment, and the extension of credit. In 1964, Congress passed Public Law 88-352, the Civil Rights Act, which “forbade discrimination on the basis of sex as well as race in hiring, promoting, and firing.” The Civil Rights Act of 1968 included the Fair Housing Act, which prohibits discrimination in the sale, rental, and financing of dwellings, and in other housing-related transactions, against federally mandated protected classes. Enacted in 1974, the Equal Credit Opportunity Act (ECOA) prohibits any creditor from discriminating against any applicant in any type of credit transaction based on protected characteristics.
However, these laws do not mitigate or resolve the implicit bias that can factor into algorithms and human decision-making within the online economy.
The Kirwan Institute for the Study of Race and Ethnicity defines implicit bias as “the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner.” Citing individuals’ common susceptibility to these biases, the Institute found that homogeneous associations and relationships tend to harbor particular feelings and attitudes about others based on race, ethnicity, age, and appearance.
In her research on implicit bias in higher education, researcher Katherine Milkman identified its persistence: white professors were less likely to respond to students of color requesting office hours, owing to preconceived stereotypes about their background, status, and competencies. On the campaign trail, former presidential candidate Hillary Clinton invoked the concept of implicit bias when referencing harmful conduct within law enforcement and the everyday stereotypes that more privileged classes apply to racial and ethnic groups.
Compared with more explicit expressions of discriminatory behavior, implicit bias is equally harmful in the online economy, and it may be more prevalent in Silicon Valley, where diverse populations are currently underrepresented.
The Obama administration was proactive on the issue of fairness, issuing a report on algorithmic systems and civil rights that pointed to the threats big data poses to vulnerable populations. The report suggested an “equal opportunity design framework” to mitigate discrimination across historical, social, and technological contexts. In other words, the findings proposed that technology not be designed in a vacuum, but rather account for potential disparities in its platform and execution. Under such a framework, discriminatory algorithms could be more readily identified, fixed, or abandoned.
Cultivating more diverse workforces in high-tech companies is another strategy for quelling online discrimination. Companies that are disrupting societal norms via the sharing economy, social media, and the internet of things must do more to address the underrepresentation of people of color as creators, influencers, and decision makers.
Recent diversity statistics show that African-Americans hold less than two percent of senior executive positions in high-tech companies and Hispanics three percent, compared with 83 percent held by whites; Asian-Americans comprise 11 percent of such executives. Among computer programmers, software developers, and database administrators, African-Americans and Hispanics collectively make up under six percent of the total workforce, while whites make up 68 percent.
On the technology investment side, less than eight percent of senior decision makers in venture capital firms are women, and African-Americans and Hispanics comprise just one percent of these roles. Not surprisingly, only three percent of funding goes to women, and one percent to African-American technology entrepreneurs.
Without confronting the challenges of workforce diversity, implicit bias is prone to go unnoticed in homogeneous work cultures, leading to innovations that can directly or indirectly discriminate against users.
As the U.S. directs most of its attention to addressing the ideological and racial rifts resulting from the most recent election, an opportunity also exists to curtail racial bias in the online economy, an effort that could serve as a reference point for broader reconciliation.
Facebook and Google are donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.
Follow Dr. Nicol Turner-Lee on Twitter @drturnerlee