Ethical Issues a Data Scientist Could Face

Bryan

2019/01/15

    The Association for Computing Machinery (ACM) publishes a code of ethics and professional conduct (ACM, 2018). The code is aimed at people who work in computing and serves as a guide for decision-making. Section 2.9 outlines the need to design systems that are robustly secure, recognizing that breaches of computer systems can harm many people and organizations (ACM, 2018). The ACM advises against rolling out systems without thorough testing for security vulnerabilities (ACM, 2018).

    The Intelligence Advanced Research Projects Agency (IARPA) has revealed that there are ways to corrupt artificial intelligence and machine learning models, and many of these systems are already deployed (Miller, 2019). Text, facial, and voice recognition systems are at risk of exploitation: an attacker can coax a model into revealing information about its training data, ultimately the private information of the individuals whose data trained the model. Exposing the training data is a significant privacy concern and shows how these technologies were rolled out without thorough testing for security loopholes. One could assume the data science teams knew about the gaps and that upper management pressed them to get the systems to market, or one could assume ignorance, part of the growing pains of a new technology. Either way, releasing a system without rigorous security testing is an ethical issue. Further testing is the best way to harden the models and protect training data that is proprietary and sensitive (Leonard, 2018).
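    To make the exposure concrete, the sketch below illustrates one well-known probe, a confidence-based membership inference test: a model that is noticeably more confident on records it was trained on leaks whether a given individual's record was in the training set. Everything here, the synthetic data, the model, and the 0.9 threshold, is an illustrative assumption rather than a description of any specific deployed system.

```python
# Minimal, hypothetical sketch of a confidence-based membership
# inference test. The data, model, and threshold are illustrative
# assumptions, not any real deployed system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensitive records used to train a model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def max_confidence(model, X):
    """Highest predicted class probability for each record."""
    return model.predict_proba(X).max(axis=1)

# An attacker guesses "member of the training set" whenever the
# model is very confident about a record.
threshold = 0.9  # illustrative assumption
in_rate = (max_confidence(model, X_train) >= threshold).mean()
out_rate = (max_confidence(model, X_out) >= threshold).mean()

# A large gap between the two rates means the model leaks membership
# information about the individuals whose data trained it.
print(f"flagged as members: train={in_rate:.2f}, holdout={out_rate:.2f}")
```

    A pre-release check along these lines is one inexpensive way to act on the guidance about testing for security loopholes before a system reaches the market.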

    Section 1.6 of the ACM code, “Respect privacy,” states that data collection requires transparency: people should know that data is being collected and give consent, and they should be told explicitly how the information will be used, how long it will be stored, and when it will be removed from databases (ACM, 2018). A 2013 article reported that IARPA was working on a project called “Janus,” which would strengthen facial recognition algorithms to determine the identity of people from images and video (Locker, 2013). The algorithm can analyze the dynamic nature of human facial expressions (such as yawning, laughing, frowning, and smiling) and quickly match and retrieve data about the person (Locker, 2013). Note that the source material is not a driver’s license or passport photo; it is images and video taken of the person in “other” ways. The ACLU has argued that surveillance like this is an intrusion on civil privacy even when it is done in the name of national security (ACLU, n.d.).

    Three years later, in 2016, IARPA made public its program called “DIVA” (Deep Intermodal Video Analytics), a real-time behavior-monitoring algorithm that can identify people, objects, and activity from camera networks (Dalton, 2016). It was developed to prevent attacks on the public and is intended to be integrated with facial recognition systems (IARPA, n.d.). This is exciting work by the data science community, and it has a clear purpose as a preventive measure to protect the public at large from random attacks. However, at what point does it become a privacy issue, and what happens to the collected data? Data privacy is a tough topic to debate: a possible ethical challenge to some, but to others a price to pay for freedom. I am not sure there is any mitigation aside from posting signs in areas under surveillance and providing some way for people to consent to having their visual information captured and processed in a government security model.
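    For context on how such systems can match and retrieve identities so quickly: face recognition pipelines typically reduce each image to a numeric embedding and look up the nearest enrolled vector. The sketch below assumes a precomputed gallery of hypothetical 128-dimensional embeddings; the names, sizes, and similarity threshold are invented for illustration and do not describe the internals of Janus or DIVA.

```python
# Hypothetical sketch of embedding-based identity matching, the kind
# of retrieval step a face recognition system performs. The gallery,
# names, and 128-dimensional embeddings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    """Scale embeddings to unit length so a dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Enrolled gallery: one embedding per known identity (stand-in data).
gallery = normalize(rng.normal(size=(1000, 128)))
identities = [f"person_{i}" for i in range(len(gallery))]

def identify(probe, gallery, identities, min_score=0.5):
    """Return the best-matching identity and score, or None if no match."""
    scores = gallery @ normalize(probe)  # cosine similarity to each identity
    best = int(np.argmax(scores))
    if scores[best] < min_score:
        return None
    return identities[best], float(scores[best])

# A probe frame from video: a slightly noisy view of an enrolled person.
probe = gallery[42] + 0.05 * rng.normal(size=128)
print(identify(probe, gallery, identities))  # -> ('person_42', high score)
```

    Because the lookup is a single vector comparison against the gallery, identification scales to large camera networks, which is precisely why the privacy questions about consent and data retention matter.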

References

ACLU (n.d.). Privacy and surveillance. Retrieved from: https://www.aclu.org/issues/national-security/privacy-and-surveillance

ACM (2018). ACM code of ethics and professional conduct. Retrieved from: https://www.acm.org/code-of-ethics#h-2.9-design-and-implement-systems-that-are-robustly-and-usably-secure

Dalton, A. (2016, June 9). US intelligence wants real-time behavior monitoring software. Retrieved from: https://www.engadget.com/2016/06/09/us-intelligence-wants-real-time-behavior-monitoring-software/

IARPA (n.d.). Deep intermodal video analytics (DIVA). Retrieved from: https://www.iarpa.gov/index.php/research-programs/diva

IARPA (n.d.). Janus. Retrieved from: https://www.iarpa.gov/index.php/research-programs/janus

Leonard, M. (2018, December 19). Hardening algorithms against adversarial AI. Retrieved from: https://gcn.com/articles/2018/12/19/ai-security.aspx

Leonard, M. (2018, September 17). Is that algorithm safe to use? Retrieved from: https://gcn.com/articles/2018/09/17/ethics-algorithm-toolkit.aspx

Locker, R. (2013). Intelligence agency seeks facial recognition upgrade. Retrieved from: https://www.usatoday.com/story/nation/2013/11/12/facial-recognition-software-iarpa-upgrade/3506157/

Miller, S. (2019, January 04). IARPA seeks to plug privacy holes in AI. Retrieved from: https://gcn.com/articles/2019/01/04/iarpa-sails.aspx