CMPT 125 Assignment 4

 

This assignment asks you to read (and write) about ethical issues related to autonomous vehicles. Your answers to Questions 2 and 3 should be written in English in complete sentences (and paragraphs where appropriate), not in note form. Marks will be awarded for both content and the quality of your writing.

 

Question 1 – Moral Machine: 20%

 

Navigate to the MIT Moral Machine website and watch the short introductory video. Then click Start Judging and judge the scenarios presented to you. Note that it's useful to click Show Description to find out what the choices are. Once you've judged all thirteen scenarios, take a screenshot of the page asking whether you want to see a summary of your results. If that page does not appear (the site may have changed), take a screenshot of the summary page itself. Either way, your screenshot is your proof that you have completed the activity. I also encourage you to look at the summary and answer their survey.

 

Question 2 – Ethics of Autonomous Cars: 30%

 

First read this article from The Atlantic, written by Patrick Lin, and then answer the questions that follow.

 

https://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/

 

a) Lin contrasts ethics and laws. Explain why programming autonomous vehicles to strictly follow the Motor Vehicle Act and the associated Rules of the Road may not be appropriate. Illustrate your answer with at least two examples: one from the article and one not covered in the article. Your answer should be 200 to 500 words in length.

 

b) Lin contrasts the level of responsibility borne by programmers of autonomous vehicles with that borne by drivers of conventional vehicles. Explain this difference and briefly state whether or not you agree with the author's conclusion about which party should bear greater responsibility for bad outcomes. Your answer should be 200 to 500 words in length.

 

Question 3 – Kate's Ethical Dilemma: 50%

 

Read this article from MIT Technology Review as background to the question that follows.

 

https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/

 

Kate is a programmer working for a manufacturer of autonomous vehicles (AVs). Her employer has advertised that its AVs may, under certain circumstances, decide to act in a way that is likely to result in the deaths of the AV's passengers. However, it has qualified these statements by stressing that any such decision will be "heavily weighted in favour of the survival of the passengers of our vehicles". It is a commonly held belief that this kind of qualification is required to encourage consumers to purchase AVs.

 

Essentially, if an AV (manufactured by the company that Kate works for) is in a situation where someone is likely to die, it compares the number of lives that would be lost under each of its possible actions and selects the action with the least loss of life, except that additional weight is given to the AV's passengers by multiplying the number of passengers by some constant greater than one. For example, if this constant were 2.1, the AV would preserve the life of a single passenger at the cost of two pedestrians, but would sacrifice the passenger to save three pedestrians.
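For concreteness, the decision rule described above can be sketched in a few lines of Python. This is only an illustration of the weighting scheme described in this question; the names (choose_action, PASSENGER_WEIGHT) and the representation of actions are assumptions made for the example, not anything from the manufacturer's actual code.

# Illustrative sketch of the weighted decision rule described above.
# PASSENGER_WEIGHT plays the role of the constant greater than one.
PASSENGER_WEIGHT = 2.1

def choose_action(actions):
    """Return the action with the smallest weighted loss of life.

    Each action is a (passenger_deaths, pedestrian_deaths) pair giving
    the expected deaths if the AV takes that action.
    """
    def weighted_loss(action):
        passenger_deaths, other_deaths = action
        return passenger_deaths * PASSENGER_WEIGHT + other_deaths
    return min(actions, key=weighted_loss)

# Swerving kills the single passenger; staying on course kills two pedestrians.
print(choose_action([(1, 0), (0, 2)]))   # -> (0, 2): the passenger is preserved
# Swerving kills the single passenger; staying on course kills three pedestrians.
print(choose_action([(1, 0), (0, 3)]))   # -> (1, 0): the passenger is sacrificed

Kate's observation, described below, amounts to discovering that the constant corresponding to PASSENGER_WEIGHT is set to 1 in the shipped code, so passenger and pedestrian lives are weighted equally.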

 

Kate noticed that the constant value is set to 1; that is, the lives of the car's passengers are weighted exactly the same as those of anyone else involved in an accident with the AV. Given the manufacturer's public statements, this concerned Kate, and she discussed the issue with her supervisor, Bob. Bob explained that she shouldn't worry about advertising claims "since nobody ever really believes them", and that it had been decided that it would be immoral to weigh the lives of the AV's passengers more heavily than anyone else's. Kate was not satisfied with this response, so she sent an email expressing her concerns to her department manager and to the CEO of the company. Her only response came from the company's legal department, reminding her that she had signed an NDA (Non-Disclosure Agreement), that violating this NDA would result in her termination, and that "the vast majority of our industry partners are reluctant to hire prospective employees who have previously been sued for NDA violations".

 

Kate is trying to decide whether or not she should make this information public.

 

Write a short essay (900 to 1,500 words) that describes the issues involved in Kate's decision and gives a recommendation on what actions (if any) Kate should take. Your recommendation should be based on either the Kantian or the utilitarian ethical perspective. Whichever perspective you choose, you should also briefly discuss whether your recommended course of action would differ if you chose the other perspective. In other words, if you choose to argue from the utilitarian perspective, explain whether or not your advice to Kate would be different under the Kantian perspective.

 

This article from Fortune is worth reading, if only to show that this question is not entirely hypothetical.

 

Submission

You should submit your assignment online to the CourSys submission server. Your solution should consist of a single .pdf file; please read the documentation on the site for further information. The assignment is due by 11:59pm on Monday the 17th of July.

 

 


 

John Edgar (johnwill@sfu.ca)