How Can Regulation Facilitate Financial Inclusion in Fintech?

Authors

Carol Evans and Karen Pence, Federal Reserve Board of Governors

Volume 15, Issue 2 | August 19, 2021

The racial gaps in wealth and financial inclusion are large and persistent. Our concept of “financial inclusion” follows that of the World Bank: “individuals and businesses have access to useful and affordable financial products and services that meet their needs…delivered in a responsible and sustainable way.”2 Turning first to wealth gaps, the typical White family has eight times the wealth of the typical Black family and five times the wealth of the typical Hispanic family.3 These wealth gaps emerge early in life. For example, among young families in 2019, about 46 percent of White families owned their home, compared with 17 percent of Black families and 28 percent of Hispanic families.4 The racial homeownership gap was smaller for older families, but the later start in purchasing a home means that Black and Hispanic families accumulate less wealth from homeownership over their lifetimes.5

Turning next to financial inclusion, in nearly 14 percent of Black households and 12 percent of Hispanic households, no family member had a checking or savings account in 2019, compared with 3 percent of White households.6 Blacks and Hispanics also have less access to credit markets. In 2019, more than half of Blacks and 40 percent of Hispanics reported that they had been denied credit or approved for less credit than they requested, compared with only a quarter of Whites.7 This racial gap in credit access remained large even for individuals with family income greater than $100,000.

Can fintech help ameliorate these profound disparities? For purposes of this article, “fintech” means new algorithmic techniques, such as machine learning, and new data sources that have not been used previously in the provision of financial services. Fintech certainly has the potential to increase financial inclusion—for example, by expanding access to affordable credit. Yet, as we will discuss below, there are also ways that expanded data sources can be leveraged by sophisticated algorithms to amplify existing inequities in financial services, effectively moving analog discrimination—such as underwriting discrimination, redlining, and steering—to the digital world and compounding the legacies of structural inequity.8

In this article, we discuss how certain aspects of fintech have called into question two key assumptions that underlie consumer protection regulation: that more data make decisions more objective and that the Internet or other digital channels can equalize access to information. Then, we provide an overview of current consumer protection regulations and highlight areas that warrant further attention from regulators and policymakers. Specifically, we identify gaps in existing regulatory and legal frameworks in the United States with respect to fintech and financial inclusion, such as the lack of a federal law that prohibits discrimination in financial services outside of lending and the lack of a robust privacy framework. Finally, we address how regulators can support financial inclusion by taking a clear-eyed approach to the benefits and risks of innovation and by recognizing that the companies that may profit from new technologies may be more vocal than the consumers and communities who could ultimately be harmed.

Fintech and Financial Inclusion: What Is Different?

Many industry and advocate representatives are intrigued by the potential for fintech to promote financial inclusion. As illustrated in Figure 1, there is the tantalizing possibility that new artificial intelligence techniques, such as machine learning, can leverage newly available data to improve financial inclusion. For financial services, these expanded data include “alternative data,” which generally refer to information that is not included in consumers’ credit reports or traditionally provided by consumers in the credit application process.9 Some of these new data fields have a clear link to creditworthiness. For example, for small business lending, some new underwriting models are based on enhanced financial and business data. Similarly, for consumer loans, some firms consider more detailed financial information on inflows and outflows from consumers’ bank accounts, often called cash-flow data. A recent study by FinRegLab, a nonprofit research organization, suggests that cash-flow data may have promise for expanding credit to underserved borrowers, such as “thin file” consumers without extensive traditional credit histories.10
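
To make the cash-flow idea concrete, the sketch below shows, in miniature, how transaction inflows and outflows might be summarized into underwriting features and fed to a simple model. Everything in it—the feature definitions, the toy data, and the choice of logistic regression—is a hypothetical illustration of the general approach, not a description of any actual underwriting system or of the FinRegLab study’s methods.

```python
# Illustrative sketch only: a toy cash-flow-based credit model.
# The feature definitions, data, and model choice are hypothetical
# and are not drawn from any actual underwriting system.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cash_flow_features(transactions):
    """Summarize signed bank-account transactions (positive = inflow,
    negative = outflow) into simple cash-flow features."""
    amounts = np.asarray(transactions, dtype=float)
    inflows = amounts[amounts > 0].sum()
    outflows = -amounts[amounts < 0].sum()
    net = inflows - outflows
    coverage = inflows / outflows if outflows > 0 else 1.0
    return [inflows, outflows, net, coverage]

# Hypothetical training data: one row of features per past borrower,
# with 1 = repaid and 0 = defaulted.
X = np.array([
    cash_flow_features([2500, -1800, -300]),
    cash_flow_features([1200, -1500, -200]),
    cash_flow_features([3000, -1000, -500]),
    cash_flow_features([900, -1100, -400]),
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new "thin file" applicant who has bank transactions
# but little or no traditional credit history.
applicant = np.array([cash_flow_features([2200, -1600, -250])])
print("Estimated repayment probability:", model.predict_proba(applicant)[0, 1])
```

The point of the sketch is that a consumer with no credit-bureau record at all can still be scored, because the model’s inputs come entirely from observed account behavior.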

Figure 1
Fintech’s Potential to Increase Financial Inclusion

This figure shows that the combination of more data and new algorithms may add up to more accurate models, lower costs of underwriting, and the potential to expand access to credit.

Of course, as with many promising developments, it is fair to ask whether fintech can deliver on this promise of financial inclusion. In particular, as we describe in the next sections, fintech could challenge two fundamental underpinnings of the current regulatory and policy regime and create significant pathways to discrimination. First, the proliferation of data about consumers, coupled with new algorithms, challenges the traditional concept of what it means to be data-driven. Second, the ability to customize, at the level of the individual consumer, the information and prices shown for different products and services upends the bedrock principle that consumers can improve their options and decision-making by shopping around.

Figure 2 illustrates how new data sources and algorithmic methods have the potential to both enhance and undermine financial inclusion. As policymakers and stakeholders evaluate these concerns and consider potential solutions, it is important to note that the increased use of new data and algorithms is not limited to financial services but extends broadly to other areas, such as criminal justice, employment, and access to nonfinancial goods and services.11 These broader areas are relevant to financial inclusion because bias in areas outside of financial services also can exact a steep cumulative price on financial and physical health and well-being. Moreover, understanding how these risks intersect can better equip stakeholders to gauge their effects and identify solutions.

Figure 2
Can Fintech Deliver on the Promise to Increase Financial Inclusion?

This figure shows a seesaw demonstrating that some facets of fintech may improve financial inclusion, while some facets may push against financial inclusion. Specifically, the facets that may improve financial inclusion include the potential to underwrite the “credit invisible” and that machine learning and expanded data can lower costs and improve accuracy of underwriting. On the other hand, the potential to “automate” discrimination, including redlining and steering, may undermine financial inclusion.

Let’s Unpack What It Means to Be Data-Driven: How New Data May Actually Inject Bias Back Into Decision-Making

Some fintech proponents note that data-driven algorithms can improve decision-making and reduce human bias. As we explore below, although all algorithms have the potential to be more consistent than humans, the types of data that new algorithms use have the potential to inject bias back into decision-making.

It is true that automating decisions—rather than having humans make them—may reduce bias and promote consistency. But this is true of both fintech algorithms and more traditional models used in credit decisioning. In fact, biased and inconsistent human decision-making has long been recognized as a risk factor for discrimination in lending and has therefore been disfavored. In one seminal study of lending discrimination, economists at the Federal Reserve Bank of Boston found significant, unexplained racial disparities in underwriting decisions.12 The authors of the study suggested that the disparities might result from loan officers applying discretion inconsistently in cases where a loan application did not fully meet the underwriting criteria. More broadly, one might imagine loan officers offering more assistance to White borrowers or taking into account irrelevant factors that could be correlated with race, such as whether an applicant attended the same church as the loan officer, shared common hobbies, and the like.

As a result of these concerns, over the past decades there has been considerable focus on more objective decision-making through empirically derived credit scoring and automated underwriting systems. Credit scoring models and automated underwriting systems were promoted, in part, because they could reduce the potential for discrimination, as well as improve the accuracy of decisions. These systems typically rely on information with a well-established relationship to creditworthiness, such as loan-to-value ratios and credit history, rather than on vague criteria that could be applied unevenly and subjectively. Regulators, too, have emphasized the risk involved in subjective and inconsistent decision-making.13 Thus, the argument that fintech algorithms will lead to more equitable outcomes simply because they eliminate human bias overstates fintech’s potential: most lenders already rely on some type of empirically based decision-making process or, at least, a rules-based system.
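
As a stylized illustration of this point, consider the following sketch of a rules-based underwriting decision. The variables and cutoffs are invented for illustration—no actual lender’s policy is depicted—but they show how an explicit, empirically grounded rule applies the same criteria to every applicant, leaving no room for the kind of case-by-case discretion described above.

```python
# Illustrative sketch only: a rules-based underwriting decision.
# The variables and thresholds are hypothetical examples of explicit,
# well-established criteria, not an actual lender's policy.
from dataclasses import dataclass

@dataclass
class Application:
    loan_to_value: float   # loan amount / property value
    credit_score: int
    debt_to_income: float  # monthly debt payments / monthly income

def automated_decision(app: Application) -> str:
    """Apply the same explicit criteria to every application,
    leaving no room for case-by-case discretion."""
    if (app.loan_to_value <= 0.80
            and app.credit_score >= 680
            and app.debt_to_income <= 0.43):
        return "approve"
    if app.loan_to_value > 0.95 or app.credit_score < 620:
        return "deny"
    # Marginal files go to a documented, consistent second review
    # rather than to an individual officer's unguided discretion.
    return "refer"

print(automated_decision(
    Application(loan_to_value=0.75, credit_score=700, debt_to_income=0.35)))
# -> "approve" for any applicant with these numbers, whoever they are
```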

The better question is whether expanded data and more sophisticated models can improve equitable outcomes relative to the current models and decision-making methods. To evaluate the merits of new data, it is critical to unpack what data are being used and how they are being used. The fact that a model is data-driven does not necessarily make it fair or objective. When most of us think of the term “data-driven,” we assume it refers to hard, objective data that are clearly relevant to the decision at hand.14 But now, algorithms may increasingly use the kind of soft behavioral data—such as shopping patterns, leisure pursuits, and where you live—that lenders, advocates, and regulators have tried to keep out of lending decisions because they are not clearly related to creditworthiness and may be strongly correlated with race and other protected characteristics.15 Some of these nontraditional data also have not been vetted fully for quality and accuracy. If lenders start using models that incorporate the types of data that regulators discouraged human underwriters from using, they could automate bias, creating new pathways of discrimination that could calcify racial inequities.16

Thus, it is not enough to ask whether something is data-driven. In a prior article, one of the authors proposed a set of structured questions to evaluate the suitability of data and to guard against uses that could undermine equitable access to financial services.17 The first set of questions asks what the basis is for using the data, including whether there is a nexus with creditworthiness (in the case of lending); whether the data are accurate, reliable, and representative of all consumers; and whether the predictive relationship is ephemeral or stable over time. The second set of questions asks whether the data are being used for the purpose for which they have been validated; whether consumers know how their data are being used; whether data about consumers are used to determine what content consumers see; and which consumers are evaluated with the data. Regulators have also underscored the importance of conducting a careful analysis before using alternative data, noting that data “directly related to consumers’ finances and how consumers manage their financial commitments may present lower risks than other data.”18

How Do We Think About Consumer Choice When There Is No Longer a Single Website Experience and We Each May See Different Information Online?

Anonymity can be a powerful tool to promote inclusion. For example, one study found that having symphony musicians audition behind a screen increased the selection of women musicians.19 Initially, the anonymous nature of the Internet was seen as an equalizing force when it came to access to information and reducing bias. One early study found that Blacks and Hispanics purchasing cars online paid about the same prices as comparable Whites, but they paid more when purchasing without using the Internet.20

Now, increasingly, consumers are no longer anonymous. Consumers can be tracked across websites, and their browsing data may be matched with other information to generate detailed profiles.21 Where marketing once provided different populations with access to the same products, online marketing today is targeted at highly specific demographic segments, even down to the individual consumer. The possibility that consumers may see different information online, based on what websites know about them, turns on its head a core tenet of financial well-being advice and consumer protection regulation: that consumers should shop around and compare offers. Shopping around in the digital world provides no guarantee that one consumer will be shown the same ads or offered the same prices as another (Figure 3). For example, one academic study found evidence of digital steering and price discrimination on numerous popular e-commerce sites.22

Figure 3
Consumers No Longer See the Same Information Online

This figure shows that many fintech innovations may result in consumers no longer seeing the same information online. These innovations include the following behaviors: advertisers use online ad platforms to filter advertising; advertising filters may correlate with protected characteristics; ad platforms use machine learning to determine audiences; firms personalize websites; and web-tracking creates detailed consumer profiles.

Although targeted marketing and website personalization offer benefits by increasing the chance of showing consumers information about products of interest to them, they also create the risk of digital redlining. Just as many of us have become increasingly aware that we may see very different news than others do, we each also may see very different information about products and services. The risk is that some consumers may be digitally steered toward certain products and services based on assumptions about them, and these assumptions could be related to race or other protected characteristics.23

Facebook provides a case study of how targeted marketing may create new pathways to discrimination. In 2016, a news report revealed that advertisers placing a housing ad on the Facebook advertising platform could purposefully exclude Facebook users with certain “ethnic affinities” and other protected characteristics from viewing those ads.24 The journalists were able to purchase ads that sought to exclude consumers with an “affinity” for “African American,” “Asian American,” or “Hispanic.” More broadly, as of 2019, Facebook permitted advertisers to select ad audiences based on more than 200,000 attributes.25

Another Facebook feature allows advertisers to target “Lookalike” audiences that have characteristics in common with the advertisers’ current customers or other groups.26 Although this may seem like an efficient feature for an advertiser to use, it could magnify discrimination if the source list on which the Lookalike Audience is based is biased or skewed. Indeed, Facebook notes the following on its website for advertisers: “When you create a Lookalike Audience, you choose a source audience… We identify the common qualities of the people in it (for example, demographic information or interests).”27

Unfortunately, given the underlying structure and operation of the Internet, bias in online advertising is more complicated to fix than it might appear. In response to civil rights litigation, Facebook entered into a settlement and created new safeguards in the areas of housing, employment, and credit, where federal civil rights laws apply.28 For ads in those areas, Facebook limited the ability of advertisers to target ads based on protected characteristics. Research has shown, though, that despite these protections, the Facebook algorithms that determine which ads are shown to consumers might still skew ad delivery. As one example, a study constructed ads that varied the text and images used to describe residential properties for rent and purchase. Although the researchers used the same audience-targeting options for each ad—and housing is an area where Facebook has limited targeting—the advertising algorithm sent some ads to an audience with more than 70 percent Black users and others to an audience of about half Black users.29

Guardrails for Consumer Protection: The Current Regulatory Framework and What More May Be Needed Given Fintech Trends

Since two of the approaches that regulators have historically relied on to counter discrimination—encouraging financial service providers to base their decisions on data, and consumers to shop around—may be less effective in the fintech era, we now ask whether our current regulatory framework is well-suited to protect consumers or if Congress and policymakers should consider additional approaches. We outline first the existing fair lending laws and the laws prohibiting unfair, deceptive, or abusive practices.30 We then highlight two gaps that warrant further attention—discrimination outside of lending and the lack of a robust privacy framework—and discuss whether additional new tools might be needed.

Current Protections That Can Address Fintech Trends

The Fair Lending Laws: The Equal Credit Opportunity Act and the Fair Housing Act

The Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA) are the two key federal civil rights laws in the United States that apply to financial services. ECOA prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, receipt of income from any public assistance program, or because a person has exercised certain legal rights under ECOA and other consumer protection statutes.31 ECOA applies to both consumer and business loans. FHA applies to residential real estate‒related transactions, as well as other housing-related areas, and prohibits discrimination on the basis of race or color, national origin, religion, sex, familial status, and handicap.32 The Facebook litigation discussed above alleged violations of the FHA. Additionally, in 2019, the U.S. Department of Housing and Urban Development brought a charge of discrimination based on violations of the FHA that encompassed Facebook’s advertising algorithms.33

The fair lending laws broadly prohibit two kinds of discrimination: disparate treatment and disparate impact. Disparate treatment occurs when a lender treats a consumer differently because of a protected characteristic, such as race or sex. Disparate impact occurs when a lender’s policy or practice has a disproportionately negative impact on a prohibited basis, even though the lender may have no intent to discriminate and the practice appears neutral. Policies or practices that have a disparate impact may violate the law, unless the policy or practice meets a legitimate business necessity that cannot reasonably be achieved by a means that has less impact on protected classes.

In the context of fintech, if an algorithm expressly included race or another protected characteristic, that would be disparate treatment. If the algorithm included a variable—or complex interactions of variables—that disproportionately affected persons on a prohibited basis and did not meet a legitimate business necessity, or if a less discriminatory alternative existed, that may constitute disparate impact. Variables that may raise disparate impact concerns include those traditionally correlated with race, such as where people live, educational background, and hobbies or interests. Additionally, seemingly neutral variables may also result in discriminatory outcomes, especially when combined with other data points in ways that correlate with race or another protected characteristic. This risk may be heightened when models use many variables. Because neutral inputs do not necessarily result in neutral outcomes, the disparate impact framework within ECOA and FHA may be particularly important to addressing concerns about algorithmic bias and discrimination in this space.34
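
One way to see how the disparate impact framework translates into practice is the simple diagnostic sketched below. The data are made up, and the adverse impact ratio (often read against a rough four-fifths benchmark borrowed from employment law) is only one screening heuristic, not the legal standard under ECOA or the FHA; but it illustrates how a facially neutral model’s outcomes can be compared across groups.

```python
# Illustrative sketch only: measuring outcome disparities from a
# facially neutral model. The data are made up; the adverse impact
# ratio (with the four-fifths rule as a rough benchmark) is one
# common diagnostic, not a legal test under ECOA or the FHA.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = approved, 0 = denied, for two hypothetical groups of applicants
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
group_b = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]   # 40% approved

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)
air = rate_b / rate_a  # adverse impact ratio

print(f"Group A approval rate: {rate_a:.0%}")
print(f"Group B approval rate: {rate_b:.0%}")
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.80:
    print("Disparity warrants review: is there a business necessity,")
    print("and does a less discriminatory alternative exist?")
```

Note that the model’s inputs never appear in this check: the disparity is measured from outcomes alone, which is why neutral inputs offer no safe harbor.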

Unfair, Deceptive, or Abusive Acts or Practices: The Federal Trade Commission Act and the Dodd-Frank Act

Two federal statutes prohibit unfair play in the financial services marketplace. Section 5 of the Federal Trade Commission Act prohibits unfair or deceptive acts or practices (UDAP). The Dodd–Frank Wall Street Reform and Consumer Protection Act also prohibits unfair, deceptive, or abusive acts or practices (UDAAP). Many states have their own consumer protection statutes.

These statutes provide important protections against harm that could result from using consumer data in ways that are not accurately disclosed. For example, a Federal Trade Commission (FTC) complaint alleged that a lender failed to disclose to consumers that their credit limits could be reduced based on a behavioral scoring model, which penalized consumers for using their cards for certain types of transactions, such as paying for marriage counseling.35 Using data about consumers in ways that could expose them to harm or steer them to disadvantageous products may also raise UDAP concerns. In another FTC case, the complaint alleged that websites promised to connect consumers with the lenders that would offer the best terms, but instead sold the consumers’ data to the first willing buyer.36 In another illustration of potential risk, it was reported that a leading college information website filtered search results to favor for-profit schools for students who indicated a need for financial aid.37 Although this example does not involve financial services, it shows how information about consumers may be used in ways that exploit their vulnerabilities and undermine financial inclusion.

What Is Missing in the Current Regulatory Landscape Given Fintech Trends?

Anti-Discrimination Protections That Extend Beyond Lending

The current regulatory framework includes comprehensive federal laws prohibiting discrimination in housing (FHA), credit (ECOA and FHA), and employment (Title VII). However, there are no equivalent federal statutes prohibiting discrimination in other aspects of financial services, such as checking and savings account fees or investment advisory services.38 Discrimination in these areas is covered by a patchwork of less comprehensive state public accommodation laws that were designed to prohibit businesses from treating customers differently on the basis of a protected characteristic.

The main federal public accommodation law, Title II of the 1964 Civil Rights Act, covers only certain types of businesses, such as hotels, restaurants, and theaters, and does not cover retail businesses, including banks. In contrast, state public accommodation laws typically cover retail businesses. However, these laws vary widely across states, and because they generally were enacted in the 1950s and 1960s, many have not been updated for online commerce.39 Six states do not have these laws at all, and only five states have explicitly stated that their statutes apply to online commerce.40 States also vary in which protected characteristics they cover and whether individuals are allowed to sue on their own behalf.41

This lack of a comprehensive regulatory framework is jarring when juxtaposed with the large racial gaps in banking and investment services. As noted earlier, Blacks and Hispanics are less likely to have a checking or savings account at a bank or credit union. Some research suggests that banks with main branches in Black and Hispanic communities charge more for entry-level checking accounts across a range of costs and fees, including minimum opening deposit, regular maintenance or service fee, minimum balance, and overdraft fee.42 Black and Hispanic workers are less likely to participate in 401(k) plans offered by their employer; if they do participate, they tend to contribute less to their accounts and are less likely to invest in the stock market.43 These gaps alone are not evidence of discrimination but suggest that regulators would benefit from having more tools at their disposal in the event that discriminatory practices exist in these markets. In 2020, the Fair Access to Financial Services Act, which would prohibit discrimination in any aspect of financial services, was introduced in Congress.44

Additionally, given the ability of websites to track consumers and the potential to change terms and pricing based on information about consumers, the lack of comprehensive protections is troubling. Returning to the Facebook example, Facebook implemented additional protections for three key areas protected by federal civil rights laws: housing, employment, and credit. In the remaining areas of consumers’ lives, however, advertisers are free to target consumers based on all characteristics, including those closely related to race, sex, and other protected characteristics. This gap could create a significant risk of amplifying racial inequity.

A Comprehensive Approach to Privacy

The regulatory framework around privacy—by which we mean how consumers’ personal data are collected, stored, and used—has not kept pace with the explosion in fintech capabilities. Fintech has utilized “digital footprint” data, such as the consumer’s IP address, browser type, web search results, social media posts, and e-commerce purchases. Fintech has also made it easier to purchase and leverage these data and combine them with other data sources in ways that most consumers would not expect.

There is no comprehensive federal privacy statute in the United States, although the Gramm-Leach-Bliley Act imposes some privacy-related restrictions on financial firms. In the European Union (EU), the General Data Protection Regulation (GDPR) provides comprehensive standards on how companies collect, store, protect, use, and share their consumers’ data. The law applies to U.S. companies that collect data on persons in the EU, but otherwise U.S. companies are not subject to any federal regulations similar to the GDPR in scope. In the absence of such federal legislation, some states have stepped into the void. California and Virginia have already passed laws, and other states are considering legislation. Depending on the state, either financial institutions or certain data collected by financial institutions are carved out of the law, but financial institutions may still be affected—for example, through third-party vendors.45

These privacy statutes are important because firms are increasingly using digital footprint data in their decision-making. For example, Klarna, a large Swedish fintech payments company that provides point-of-sale financing for retail purchases, uses information on the time of day that customers purchase items and the customers’ IP addresses in its credit scoring model.46 If these digital footprints are correlated with protected characteristics, such as race and sex, they may lead to discriminatory outcomes. The GDPR provides a window into this process: it requires firms that use algorithms to make decisions that affect humans significantly to disclose the type of data and the logic used by the algorithm. Although these disclosures may not be particularly useful for individual consumers, they have the potential to be a tool that researchers and advocates can use to gauge firms’ algorithmic decision-making. Researchers inferred Klarna’s use of algorithms, for example, from the GDPR-mandated disclosures.47

In the United States, it is likely that the comprehensive regulation prohibiting discrimination in the areas of lending, housing, and employment has largely kept firms from using digital footprint data in their decision-making in these markets. Outside these markets, though, U.S. consumers have neither comprehensive regulation forbidding discrimination, nor the window into algorithmic decision-making brought about by the GDPR.

Other Tools to Promote Fairness and Transparency

The fact that algorithms with significant implications for financial inclusion can be so opaque raises the question of whether the public and regulators need more information about these algorithms in order to detect and monitor bias and discrimination. When advertisements and product offers are tailored to each individual consumer, regulators and other stakeholders can no longer tell whether certain demographic groups, for example, are getting systematically different advertisements simply by monitoring the TV channels and other media that target those markets.

One tool that has received attention in recent years is auditing algorithms. A 2016 report issued by the Executive Office of the President noted the importance of “promot[ing] academic research and industry development of algorithmic auditing… to ensure that people are being treated fairly.”48 The proposed Digital Services Act in Europe, which was submitted to the European Parliament in December 2020, would require annual audits of very large online platforms.49 Among other responsibilities, these audits “shall identify, analyse, and assess… any significant systemic risks…associated with the prohibition of discrimination…as enshrined in Article 21 of the [European] Charter.”50 The Digital Services Act would also require very large online platforms to allow access to their data to “vetted researchers” for the purpose of conducting research on systemic risks associated with discrimination.

Although the concept of auditing algorithms is appealing, many questions are unresolved. For example, who conducts the audit, and what mechanisms ensure that the audit is truly independent? What does the audit cover, and what standards are applied? How and with whom can the results of the audit be shared? Given that concerns over algorithmic bias span so many areas of critical importance to consumers’ well-being, including voting, health care, employment, and criminal justice, sharing information across sectors through audits could help identify the sources of bias and how it might be remedied. Regulators and academics are just starting to grapple with the answers to these questions.51

Finally, the role of large Internet platforms, such as Facebook and Google, raises the question of at what point a platform or activity becomes so fundamental that a higher level of regulation, or a greater focus on platforms’ market power from an antitrust perspective, is warranted. Although these areas are outside the scope of this article, we think they bear watching and could have important implications for financial inclusion.52

How Should Regulators Approach Fintech to Ensure That It Promotes Equity and Financial Inclusion?

In their regulatory stance toward financial innovation, regulators need to remain nimble enough to support new products that help consumers without relinquishing their traditional role of protecting consumers from things that are, in fact, too good to be true. Paul Volcker famously quipped in 2009, “The most important financial innovation that I have seen in the past 20 years is the automatic teller machine.”53 Although financial innovation is often positive, opaque models have been associated with significant harm in the financial sector in the past. The modeling assumptions used for the collateralized debt obligations that funded the riskiest part of the subprime mortgage market, for example, turned out to be fundamentally flawed.54

In addition, it may be easier for firms to promote their interests than for consumers to promote theirs. Firms have a strong profit motive, while consumers face informational barriers that may make it difficult for them to organize to protect their interests. This dynamic suggests that regulators should rely heavily on objective data and evidence, whenever possible, in making decisions. In recent years, some regulators have expanded their capacity to gather this evidence by developing “sandboxes” that allow companies to test innovative ideas in a controlled way that minimizes the potential for consumer harm. Regulators may benefit by helping to develop the tests and by receiving the results, and companies may benefit by getting early feedback from regulators. In the United Kingdom, more than 140 firms have been accepted into the Financial Conduct Authority’s sandbox program. In the United States, the Consumer Financial Protection Bureau (CFPB) has launched a multifaceted sandbox program, which includes a Compliance Assistance Sandbox, a No-Action Letter Policy, and a Trial Disclosure Sandbox. The CFPB issued its first No-Action Letter in 2017.55 It has continued to refine its innovation program, finalizing the Compliance Assistance Sandbox and revising the No-Action Letter and Trial Disclosure policies in 2019.56

Sandboxes, by their nature, are collaborative with industry, and some consumer protection advocates have raised concerns about the potential for consumer harm.57 As a complement to these initiatives, regulators will need to continue to build their own capacity for independent research and analysis. Regulators will want to take careful note, of course, of the outside research conducted by academics, civil rights groups, and consumer protection advocates. The investigative reporting that exposed the civil rights concerns about Facebook’s advertising, as well as the academic research documenting that housing advertisements could still be skewed along racial lines even after Facebook revamped its practices, highlights the valuable role that independent research can play. This research could also lead to ideas for effective methods of debiasing.58

However, outside research is unlikely to serve as a full counterweight to industry: academics, civil rights groups, and consumer protection advocates tend to lack the funding, access to data, and sophisticated marketing campaigns available to industry. Indeed, in order to gain access to data, many researchers have entered into collaborations with tech companies.59 Although these collaborations often allow researchers to publish their findings regardless of the results, Susan Athey and Michael Luca note that “firms may choose not to sign agreements around research topics where they are concerned about what the answers might be, potentially creating a bias towards papers favorable towards firms and creating an incomplete snapshot of an issue.”60 This dynamic may have influenced, for example, the research that Uber supported.61

Finally, regulators will need to ensure that their own staff are diverse. Involving staff from diverse backgrounds in algorithm design is a common recommendation for avoiding consumer harm.62 Many commentators have emphasized the importance of diversity and inclusion in financial services and fintech to ensure that firms are equipped to appreciate the potential implications of their work on bias and equity. We agree. We also note that a recent study highlighted that Blacks hold very few top positions at federal regulatory agencies.63 We believe that this sound principle of ensuring diversity for designing algorithms should apply to those who are charged with regulating them and protecting the public from potential harm.

Conclusion

Fintech-facilitated innovations in financial services are proceeding at a dizzying pace. The combination of sophisticated algorithms and new data sources has the potential to bring more affordable and tailored financial services to a broader population. Yet, as we have described in this article, these same factors also have the potential to amplify existing inequities. The enhanced ability to curate information at the consumer level and to create algorithms using vast amounts of personal data risks bringing analog forms of discrimination, such as redlining and steering, to the digital world.

The regulatory framework may need to adapt in response to ensure that it is providing effective guardrails. Gaps in the existing discrimination laws and the lack of a robust privacy framework may warrant attention. Facilitating the ability of independent research to continue to shed light on the potential harms of algorithms can serve as an important safeguard. Regulators may also need to enhance their own ability to analyze the inner workings of algorithms. Sustained attention to diversity and inclusion will better position both industry and regulators to identify the risks and benefits of fintech. These and other steps may help ensure that all consumers can benefit from the promise of fintech.

The Community Development Innovation Review focuses on bridging the gap between theory and practice, from as many viewpoints as possible. The goal of this journal is to promote cross-sector dialogue around a range of emerging issues and related investments that advance economic resilience and mobility for low- and moderate-income communities and communities of color. The views expressed are those of the authors and do not necessarily represent the views of the Federal Reserve Bank of San Francisco or the Federal Reserve System.


End Notes

1. The views in this article are the authors’ alone and are not necessarily those of the Board of Governors or its staff, the Consumer Financial Protection Bureau, or the United States. We are grateful to Kaitlin Asrow, Katrina Blodgett, Albert Chang, Scott Colgate, Patrice Ficklin, Susan Grutza, Jeremy Hochberg, Varda Hussein, Ena Koukourinis, Tim Lambert, Barbara Lipman, Stephanie Martin, Dana Miller, David Palmer, and Justin Warner for thoughtful comments.

2. See https://www.worldbank.org/en/topic/financialinclusion/overview, accessed March 29, 2021.

3. Neil Bhutta et al., “Disparities in Wealth by Race and Ethnicity in the 2019 Survey of Consumer Finances,” FEDS Notes (September 2020), https://www.federalreserve.gov/econres/notes/feds-notes/disparities-in-wealth-by-race-and-ethnicity-in-the-2019-survey-of-consumer-finances-20200928.htm.

4. Ibid. Younger families are those with a household head younger than age 35.

5. Ibid. Older families are those with a household head aged 35‒54.

6. Federal Deposit Insurance Corporation (FDIC), “How America Banks: Household Use of Banking and Financial Services, 2019 FDIC Survey” (Washington, DC: FDIC, October 2020), 12-13, https://www.fdic.gov/analysis/household-survey/2019report.pdf.

7. Board of Governors of the Federal Reserve System, “Report on the Economic Well-Being of U.S. Households in 2019 (Featuring Supplemental Data from April 2020)” (Washington, DC: Board of Governors of the Federal Reserve System, May 2020), 28, https://www.federalreserve.gov/publications/files/2019-report-economic-well-being-us-households-202005.pdf.

8. For an extended discussion of how technology may introduce discrimination into financial services, see Adair Morse and Karen Pence, “Technological Innovation and Discrimination in Household Finance,” Finance and Economics Discussion Series 2020-018 (Washington, DC: Board of Governors of the Federal Reserve System, 2020), https://doi.org/10.17016/FEDS.2020.018.

9. See “Interagency Statement on the Use of Alternative Data in Credit Underwriting” (December 3, 2019), note 1, https://www.federalreserve.gov/supervisionreg/caletters/CA%2019-11%20Letter%20Attachement%20Interagency%20Statement%20on%20the%20Use%20of%20Alternative%20Data%20in%20Credit%20Underwriting.pdf.

10. FinRegLab, “The Use of Cash-Flow Data in Underwriting Credit: Empirical Research Findings” (Washington, DC: FinRegLab, 2019), https://finreglab.org/wp-content/uploads/2019/07/FRL_Research-Report_Final.pdf.

11. For a full discussion of the impact of algorithms, see Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).

12. Alicia H. Munnell et al., “Mortgage Lending in Boston: Interpreting HMDA Data,” American Economic Review 86 (1) (March 1996): 25‒53.

13. See, for example, “Interagency Fair Lending Examination Procedures” (August 2009), p. 9, https://www.ffiec.gov/PDF/fairlend.pdf.

14. Of course, hard, objective data may also, in some cases, reflect and incorporate bias. For example, paying payday or car title loans on time generally does not help a consumer’s credit score, whereas repaying many other types of loans on time does. For a full exploration of these concerns, see Testimony of Lisa Rice, “Missing Credit: How the U.S. Credit System Restricts Access to Consumers of Color,” February 26, 2019, https://nationalfairhousing.org/wp-content/uploads/2019/04/Missing-Credit.pdf.

15. For example, Facebook reportedly patented technology to assist lenders in considering the creditworthiness of consumers’ online contacts when making credit decisions. Matt Vasilogambros, “Will Your Facebook Friends Make You a Credit Risk?” The Atlantic, August 7, 2015, https://www.theatlantic.com/politics/archive/2015/08/will-your-facebook-friends-make-you-a-credit-risk/432504/.

16. Fair lending laws prohibit discrimination based on numerous characteristics (often called “protected characteristics”), but in keeping with the theme of this special issue we will focus primarily on race.

17. Carol Evans, “Keeping FinTech Fair: Thinking About Fair Lending and UDAP Risks,” Consumer Compliance Outlook, Federal Reserve System, (Second Issue 2017), https://www.consumercomplianceoutlook.org/2017/second-issue/keeping-fintech-fair-thinking-about-fair-lending-and-udap-risks/.

18. See “Interagency Statement on the Use of Alternative Data in Credit Underwriting” (December 3, 2019), https://www.federalreserve.gov/supervisionreg/caletters/CA%2019-11%20Letter%20Attachement%20Interagency%20Statement%20on%20the%20Use%20of%20Alternative%20Data%20in%20Credit%20Underwriting.pdf.

19. Claudia Goldin and Cecilia Rouse, “Orchestrating Impartiality: The Impact of ‘Blind’ Auditions on Female Musicians,” American Economic Review 90 (4) (September 2000): 715‒41.

20. Fiona Scott Morton, Florian Zettelmeyer, and Jorge Silva-Risso, “Consumer Information and Discrimination: Does the Internet Affect the Pricing of New Cars to Women and Minorities?” Quantitative Marketing and Economics 1 (2003): 65‒92.

21. For a fuller discussion of the implications of targeted marketing, see Carol A. Evans and Westra Miller, “From Catalogs to Clicks: The Fair Lending Implications of Targeted, Internet Marketing,” Consumer Compliance Outlook, Third Issue (2019), https://www.consumercomplianceoutlook.org/2019/third-issue/from-catalogs-to-clicks-the-fair-lending-implications-of-targeted-internet-marketing/. For a study finding that many online lending sites tracked borrowers, see also Barbara Lipman and Ann Marie Wiersch, “Uncertain Terms: What Small Business Borrowers Find When Browsing Online Lender Websites” (Washington, DC: Board of Governors of the Federal Reserve System, December 2019), https://www.federalreserve.gov/publications/files/what-small-business-borrowers-find-when-browsing-online-lender-websites.pdf.

22. Aniko Hannak et al., “Measuring Price Discrimination and Steering on E-commerce Web Sites” (2014), http://www.ccs.neu.edu/home/cbw/static/pdf/imc151-hannak.pdf.

23. See Evans and Miller, “From Catalogs to Clicks” for further discussion.

24. Julia Angwin and Terry Parris Jr., “Facebook Lets Advertisers Exclude Users by Race” (New York: ProPublica, October 28, 2016), https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race.

25. Athanasios Andreou et al., “Measuring the Facebook Advertising Ecosystem.” In Proceedings of the Network and Distributed System Security Symposium (NDSS 2019), https://mislove.org/publications/Facebook-NDSS.pdf.

26. Ibid., p. 3.

27. https://www.facebook.com/business/help/164749007013531?id=401668390442328, accessed on April 1, 2021.

28. Joint statement from Facebook, NFHA, CWA, ECBA, O&G, and ACLU, Summary of Settlements Between Civil Rights Advocates and Facebook (March 19, 2019), https://www.aclu.org/other/summary-settlements-between-civil-rights-advocates-and-facebook.

29. Muhammad Ali et al., “Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes.” In Proceedings of the ACM on Human-Computer Interaction Vol. 3, CSCW, Article 199 (November 2019), p. 22, https://ccs.neu.edu/~mali/papers/facebook-delivery-cscw.pdf.

30. Although many other consumer financial services statutes are applicable to financial innovation, our focus here will be on regulatory requirements relating to the trends we have identified.

31. The Consumer Financial Protection Bureau recently issued an interpretive rule clarifying that ECOA’s prohibition against sex discrimination includes discrimination on the basis of sexual orientation or gender identity. See https://www.consumerfinance.gov/about-us/newsroom/cfpb-clarifies-discrimination-by-lenders-on-basis-of-sexual-orientation-and-gender-identity-is-illegal/.

32. The United States Department of Housing and Urban Development (HUD) also has announced that it will administer and enforce the Fair Housing Act to prohibit discrimination on the basis of sexual orientation and gender identity. See https://www.hud.gov/press/press_releases_media_advisories/hud_no_21_021.

33. HUD Charge of Discrimination, March 28, 2019, https://www.hud.gov/sites/dfiles/Main/documents/HUD_v_Facebook.pdf.

34. In 2020, the Department of Housing and Urban Development (HUD) published a revised rule that would make it more challenging to demonstrate disparate impact under the Fair Housing Act, but the future of the rule is in question. The rule was challenged by civil rights groups, and a federal court granted a preliminary injunction. See Massachusetts Fair Housing Center and Housing Works, Inc. v. United States Dept. of Housing and Urban Development, No. 3:20-cv-11765-MGM (D. Mass, October 25, 2020) (order granting preliminary injunction) at http://lawyersforcivilrights.org/wp-content/uploads/2020/10/Nationwide-PI-Against-HUD.pdf. On January 26, 2021, President Biden directed HUD to review the rule. Memorandum on Redressing Our Nation’s and the Federal Government’s History of Discriminatory Housing Practices and Policies, January 26, 2021, https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/26/memorandum-on-redressing-our-nations-and-the-federal-governments-history-of-discriminatory-housing-practices-and-policies/.

35. FTC v. CompuCredit Corp., No. 1:08-CV-1976-BBM-RGV (N.D. Ga. 2008), Complaint pp. 34‒35, 39, https://www.ftc.gov/sites/default/files/documents/cases/2008/06/080610compucreditcmplt.pdf.

36. FTC v. Blue Global, LLC, No. 2:17-CV-021177-ESW (D. Ariz. 2017), https://www.ftc.gov/enforcement/cases-proceedings/152-3225/blue-global-christopher-kay.

37. O’Neil, Weapons of Math Destruction, p. 78.

38. It is possible that discrimination in these areas could violate the Federal Trade Commission Act. See Elisa Jillson, “Aiming for truth, fairness, and equity in your company’s use of AI,” Federal Trade Commission Business Blog (April 19, 2021), https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai (“The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.”). See also Statement of Commissioner Rohit Chopra, In the Matter of Liberty Chevrolet, Inc. d/b/a Bronx Honda, Commission File No. 1623238 (May 27, 2020) (“Using disparate impact analysis and other tools, the Commission can use its unfairness authority to attack harmful discrimination in other sectors of the economy.”), https://www.ftc.gov/system/files/documents/public_statements/1576002/bronx_honda_final_rchopra_bronx_honda_statement.pdf.

39. See David Brody and Sean Bickford, “Discriminatory Denial of Service: Applying State Public Accommodations Laws to Online Commerce,” Lawyers’ Committee for Civil Rights Under Law (January 2020), https://lawyerscommittee.org/wp-content/uploads/2019/12/Online-Public-Accommodations-Report.pdf.

40. The six states that lack these laws entirely (except for individuals with disabilities) are Alabama, Georgia, Mississippi, North Carolina, Texas, and Virginia. The five states whose laws explicitly apply to online companies are California, Colorado, New Mexico, New York, and Oregon. See Brody and Bickford, “Discriminatory Denial of Service.”

41. Brody and Bickford, “Discriminatory Denial of Service.”

42. Jacob William Faber and Terri Friedline, “The Racialized Cost of Banking” (New York: New America Foundation, 2018), https://www.newamerica.org/family-centered-social-policy/reports/racialized-costs-banking/the-racialized-costs-of-banking/; Faber and Friedline, “The Racialized Costs of ‘Traditional’ Banking in Segregated America: Evidence from Entry-Level Checking Accounts,” Race and Social Problems 12 (2020): 344‒61.

43. Joanne K. Yoong et al., “Disparities in Minority Retirement Savings Behavior: Survey and Experimental Evidence from a Nationally-Representative Sample of US Households.” RAND Corporation Working Paper WR-1331 (October 2019), https://www.rand.org/content/dam/rand/pubs/working_papers/WR1300/WR1331/RAND_WR1331.pdf.

44. See https://www.congress.gov/bill/116th-congress/senate-bill/4801/text.

45. Penny Crosman, “State data privacy laws pose compliance headaches for banks,” American Banker, March 8, 2021, https://www.americanbanker.com/news/state-data-privacy-laws-pose-compliance-headaches-for-banks.

46. Tobias Berg et al., “On the Rise of FinTechs: Credit Scoring Using Digital Footprints,” Review of Financial Studies 33 (7) (2020): 2845‒97, Table A6; Penny Crosman, “The high-tech, low-effort loans winning over online shoppers,” American Banker, July 6, 2017, https://www.americanbanker.com/news/the-high-tech-low-effort-loans-winning-over-online-shoppers.

47. Berg et al., “On the Rise of FinTechs.”

48. Executive Office of the President, “Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights” (May 2016), https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf.

49. See https://ec.europa.eu/info/sites/info/files/proposal_for_a_regulation_on_a_single_market_for_digital_services.pdf.

50. See Article 26 of the proposed Digital Services Act.

51. For one example, see Christo Wilson et al., “Building and Auditing Fair Algorithms: A Case Study in Candidate Screening.” Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT’21), March 2021, https://www.ccs.neu.edu/home/amislove/publications/Pymetrics-FAccT.pdf.

52. Shannon Bond, “Biden Administration Gears Up For A Showdown With Big Tech,” March 10, 2021, https://www.npr.org/2021/03/10/975545509/biden-administration-gears-up-for-a-showdown-with-big-tech; Daisuke Wakabayashi, “The Antitrust Case Against Big Tech, Shaped by Tech Industry Exiles,” New York Times, December 20, 2020, https://www.nytimes.com/2020/12/20/technology/antitrust-case-google-facebook.html.

53. Mark Bourdillon, “Paul Volcker: Think More Boldly,” Wall Street Journal, December 14, 2009, https://www.wsj.com/articles/SB10001424052748704825504574586330960597134.

54. See, for example, FDIC Advisory Committee on Economic Inclusion, Meeting of October 22, 2019, Transcript pp. 168‒69, https://www.fdic.gov/about/comein/2019/2019-10-22-transcript.pdf, for a discussion of the harm that these modeling failures caused consumers. See Larry Cordell, Yilin Huang, and Meredith Williams, “Collateral Damage: Sizing and Assessing the Subprime CDO Crisis,” Federal Reserve Bank of Philadelphia Working Paper No. 11-30/R (2012), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1907299.

55. Consumer Financial Protection Bureau (CFPB), “CFPB Announces First No-Action Letter to Upstart Network.” Press release (Washington, DC: CFPB, September 14, 2017), https://www.consumerfinance.gov/about-us/newsroom/cfpb-announces-first-no-action-letter-upstart-network/.

56. See https://www.consumerfinance.gov/rules-policy/innovation/ for a full description of the program and policies.

57. For a statement by consumer protection advocates of potential concerns, see “CFPB to Approve Potentially Risky Fintech Products,” September 10, 2019, https://www.nclc.org/media-center/cfpb-to-approve-potentially-risky-fintech-products.html.

58. Some civil rights groups are exploring how to debias algorithms. See, for example, the Tech Equity Initiative, sponsored by the National Fair Housing Alliance, https://nationalfairhousing.org/tech-equity-initiative/.

59. Airbnb, Amazon, eBay, Facebook, Indeed, LinkedIn, Microsoft, Rover, TaskRabbit, Uber, Upwork, Yelp, and Zillow are among the companies that have entered into these collaborations. See Susan Athey and Michael Luca, “Economists (and Economics) in Tech Companies,” Journal of Economic Perspectives 33 (1) (2019): 209–30, p. 226, https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.33.1.209.

60. Ibid., p. 227.

61. Luigi Zingales, “Uber and the Sherlock Holmes Principle: How Control of Data Can Lead to Biased Academic Research” (Chicago, IL: Stigler Center at the University of Chicago Booth School of Business, October 9, 2019), https://promarket.org/2019/10/09/uber-and-the-sherlock-holmes-principle-how-control-of-data-can-lead-to-biased-academic-research/.

62. Nicol Turner Lee, Paul Resnick, and Genie Barton, “Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms” (Washington, DC: Brookings Institution, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

63. Chris Brummer, “What do the data reveal about (the absence of Black) financial regulators?” (Washington, DC: Brookings Institution, 2020), https://www.brookings.edu/wp-content/uploads/2020/09/ES-09.02.20-Brummer.pdf.