Case study: the significant harms and benefits that data can present. A married couple in their 30s, John and Mary, are seeking a business loan to help them realise their long-held dream of owning and operating their own restaurant. Mary is an experienced accountant, and John is a promising graduate of a prominent culinary school. They share a strong entrepreneurial desire to be "their own bosses" and to bring something new and exciting to their local culinary scene; outside consultants have reviewed their business plan and assured them that they have a very promising and creative restaurant concept, as well as the skills necessary to implement it successfully. According to the advisors, they should have no trouble receiving a loan to get the business off the ground.

John and Mary's local bank loan officer uses an off-the-shelf software tool to evaluate loan applications, which synthesises a wide range of data profiles obtained from hundreds of private data brokers. As a result, it has access to details about John and Mary's lives that go well beyond what was requested on their loan application. Some of this data is obviously relevant to the application, such as their history of on-time bill payments. However, much of the information used by the system's algorithms is information that no human loan officer would think to look at or have access to, such as inferences about their likely medical histories based on their pharmacy purchases, information from online genetic registries about health risk factors in their extended families, data about the books they read and the movies they watch, and inferences about their racial background. While much of this material is correct, some of it is not.

A few days after applying, John and Mary receive a call from the loan officer informing them that their loan has been denied. When they inquire as to why, they are merely told that they were evaluated as 'moderate-to-high risk' by the loan system. When they ask for further information, the loan officer says he does not have any, and that the software company that designed the loan system will not tell them anything about the proprietary algorithm or the data sources it uses, or even whether the data was validated. Even the system's designers do not know how the data led to any particular outcome, they are told; all they can say is that the system is 'usually' statistically dependable. When John and Mary inquire about appealing the judgement, they are informed that there is no point in doing so, since the system will simply process their application again using the same algorithm and data, yielding the same outcome.

1. What ethically significant harms have John and Mary endured as a result of their loan denial? (Be as specific as possible in your replies; think of as many different types of potential harm to their important life interests as you can.)

2. What ethically significant benefits could banks derive from adopting a big-data driven system to analyse loan applications?

3. Aside from the effects on John and Mary's lives, what broader societal harms could result from the widespread use of this particular loan appraisal process?

4. Could the loan officer, bank managers, and/or software system designers and marketers have anticipated the harms you described in Questions 1 and 3? Should these harms have been foreseen, and if so, why?

5. What steps could the loan officer, the bank's managers, or the software company's employees have taken to mitigate or avert such consequences?
