Creditworthiness in the United States: how can AI help disadvantaged borrowers?

It has long been known that biased data, and the biased algorithms trained on it, can skew automated decision-making to the detriment of low-income and minority groups. For example, software used by banks to predict whether someone will repay a credit card loan tends to favor wealthier white applicants. Many researchers and startups are trying to fix the problem by making these algorithms fairer.

But in the largest study of real US mortgage data to date, economists Laura Blattner of Stanford University and Scott Nelson of the University of Chicago now show that the differences in mortgage approvals between minority and majority groups are due not only to algorithmic bias but also to the fact that minority and low-income groups have less data in their credit histories.

This means that when this data is used to calculate a credit score, and that score is used to predict loan defaults, the prediction is less precise. It is this imprecision that leads to the inequality, not algorithmic bias. The implication is clear: better AI alone will not fix the problem.

"This is a really striking result," says Ashesh Rambachan, who studies machine learning and economics at Harvard University and was not involved in the study. Biased and sparse credit records have been a hot topic for some time, but this is the first large-scale effort to examine the loan applications of millions of real people.

Credit scores compress a range of socio-economic data, such as employment history, financial records, and purchasing habits, into a single number. In addition to deciding loan applications, credit scores are now used in the United States for many life-changing decisions, including decisions about insurance, renting an apartment, and hiring.

To find out why minority groups are treated differently by mortgage lenders, Blattner and Nelson collected credit reports for 50 million anonymized US consumers and linked each of them to socio-economic details from a marketing dataset, to their property deeds and mortgage transactions, and to data on the banks that lent them money.

One reason this is the first study of its kind is that such datasets are often proprietary and not available to researchers. "So we went to a credit bureau and basically had to pay them a lot of money to get the data," Blattner says. The researchers then tested various predictive algorithms and showed that the credit scores were not so much biased as noisy, a statistical term for data that cannot be used to make precise predictions.

Take a minority applicant with a score of 620. In a biased system, one might expect that score to consistently overstate the applicant's risk, so that a more accurate score would be 625. In theory, that bias could then be offset by some form of algorithmic affirmative action, such as lowering the approval threshold for minority applications.

Blattner and Nelson show, however, that this is not what is happening. They found that a score of 620 for a minority applicant was simply a poor proxy for her creditworthiness, and that the error can go either way: 620 might really mean 625, or it might mean 615.
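To see why that matters, consider a minimal sketch in Python. The numbers below are invented for illustration and are not taken from the study; they simply show that a systematic error in a score can be undone by shifting the approval threshold, while a symmetric, noisy error cannot.

```python
# Minimal sketch (invented numbers, not the study's data): a biased score can be
# repaired by moving the approval cutoff, a noisy score cannot.
import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(620, 30, 100_000)   # hypothetical "true" creditworthiness
CUTOFF = 620                                # hypothetical approval threshold

biased = true_score - 5                                    # always understates by 5 points
noisy = true_score + rng.normal(0, 15, true_score.size)    # unbiased on average, but imprecise

def decision_error(measured, measured_cutoff):
    """Share of applicants whose approve/reject decision differs from the one
    their true creditworthiness would produce."""
    return np.mean((measured >= measured_cutoff) != (true_score >= CUTOFF))

print(decision_error(biased, CUTOFF))       # errors caused by the systematic bias
print(decision_error(biased, CUTOFF - 5))   # 0.0: the bias is fully fixed by lowering the cutoff
print(decision_error(noisy, CUTOFF))        # errors caused by the noise
print(decision_error(noisy, CUTOFF - 5))    # still large: no cutoff shift repairs noise
```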

This distinction may seem subtle, but it matters. Because the inaccuracy comes from noise in the data rather than from bias in how the data is used, the unfairness cannot be corrected by better algorithms. "It's a self-perpetuating cycle," Blattner says. "We give loans to the wrong people, and a section of the population never gets the chance to build up the data needed to lend to them in the future."

Blattner and Nelson then tried to quantify the problem. They built their own simulation of a mortgage lender's prediction tool and estimated what would have happened to applicants whose outcome flipped across the accept/reject boundary because of inaccurate scores. To do this they used a range of techniques, such as comparing rejected applicants with similar accepted ones, or looking at other lines of credit that rejected applicants had taken out, such as car loans. Putting all of this together, they plugged these imaginary "perfect" credit scores into their simulation and measured the difference between groups again.
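A rough sketch of such a counterfactual exercise, again with invented data and a hypothetical risk model rather than the authors' own simulation, might look like this:

```python
# Toy counterfactual exercise (invented data, hypothetical model; not the authors'
# simulation). Noise is the only source of the gap here, so replacing the noisy
# scores with "perfect" ones closes it almost entirely; in the real data the
# authors find the gap shrinks by about half.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
minority = rng.integers(0, 2, n).astype(bool)        # hypothetical group label
true_risk = rng.beta(2, 8, n)                        # hypothetical true default probability

# Thin credit files make the observed risk estimate noisier for the minority group.
noise_sd = np.where(minority, 0.12, 0.04)
observed_risk = np.clip(true_risk + rng.normal(0, noise_sd, n), 0, 1)

CUTOFF = 0.2                                         # lender approves if predicted risk < CUTOFF

def approval_gap(predicted_risk):
    """Approval-rate gap between the majority and minority groups."""
    approved = predicted_risk < CUTOFF
    return approved[~minority].mean() - approved[minority].mean()

print("gap with noisy scores:    ", round(approval_gap(observed_risk), 3))
print("gap with 'perfect' scores:", round(approval_gap(true_risk), 3))
```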

They found that the difference between the groups fell by 50 percent when the scores for minority and low-income applicants were made as accurate as those for wealthier white applicants. For minority applicants, almost half of the gain in accuracy came from eliminating errors in which an applicant should have been approved but was not. For low-income applicants the gain was smaller, because it was offset by eliminating errors that went the other way: applicants who should have been rejected but were not. Blattner points out that eliminating these errors would benefit lenders as well as underserved applicants. "The economic approach allows us to quantify the cost of noisy algorithms in a meaningful way," she says. "We can estimate how much credit misallocation they cause."
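That breakdown into the two error types can be expressed with a small helper; the six applicants below are hypothetical and serve only to illustrate the bookkeeping:

```python
# Separates the two decision errors the study distinguishes (invented example data).
import numpy as np

def error_breakdown(approved_with_noisy_score, approved_with_true_score):
    """Return (share wrongly rejected, share wrongly approved) under the noisy score."""
    noisy = np.asarray(approved_with_noisy_score, dtype=bool)
    true = np.asarray(approved_with_true_score, dtype=bool)
    wrongly_rejected = np.mean(~noisy & true)   # should have been approved, was not
    wrongly_approved = np.mean(noisy & ~true)   # should have been rejected, was approved
    return wrongly_rejected, wrongly_approved

# Six hypothetical applicants: 1 = approved, 0 = rejected.
print(error_breakdown([0, 1, 1, 0, 1, 0], [1, 1, 0, 0, 1, 1]))  # -> (0.33..., 0.166...)
```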

