Legal and Ethical Considerations for Hosting GenAI on the Cloud
DOI: https://doi.org/10.63282/3050-9416.IJAIBDCMS-V2I2P104

Keywords: Generative AI, Cloud Computing, Data Privacy, Intellectual Property, AI Ethics, Liability, Bias in AI, Legal Compliance

Abstract
Generative artificial intelligence (GenAI), which can create content, support decision-making, and automate tasks, has become a cornerstone of innovation. Hosting GenAI systems on the cloud offers scalability, accessibility, and integration benefits, but it also raises a rich set of legal and ethical challenges, including data privacy, unauthorized use of data, attribution of liability, and algorithmic fairness. While the technical advantages of cloud environments are numerous, they also amplify risks such as cross-border data flows, third-party data access, and the opacity of algorithmic behavior. This paper therefore comprehensively examines the legal frameworks and ethical principles relevant to cloud-based deployment of GenAI systems, drawing on foundational references to establish the underlying legal and ethical groundwork. The analysis highlights data protection laws, intellectual property restrictions, transparency, bias mitigation, and the requirement of human oversight as key concerns. The paper concludes with a set of recommendations to guide policymakers, developers, and organizations deploying GenAI on cloud platforms, aiming to provide a stable ethical and legal baseline as these technologies develop.
References
1. S. Zuboff, The Age of Surveillance Capitalism. New York, NY, USA: PublicAffairs, 2019.
2. J. Kroll et al., "Accountable algorithms," Univ. Pennsylvania Law Rev., vol. 165, no. 3, pp. 633–705, 2017.
3. T. Gillespie, "The politics of 'platforms'," New Media & Society, vol. 12, no. 3, pp. 347–364, 2010.
4. B. Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. New York, NY, USA: W. W. Norton, 2015.
5. L. Lessig, Code: And Other Laws of Cyberspace, Version 2.0. New York, NY, USA: Basic Books, 2006.
6. J. Zittrain, The Future of the Internet—And How to Stop It. New Haven, CT, USA: Yale Univ. Press, 2008.
7. N. Bostrom and E. Yudkowsky, "The ethics of artificial intelligence," in Cambridge Handbook of Artificial Intelligence, K. Frankish and W. Ramsey, Eds. Cambridge, U.K.: Cambridge Univ. Press, 2014, pp. 316–334.
8. J. Moor, "The nature, importance, and difficulty of machine ethics," IEEE Intell. Syst., vol. 21, no. 4, pp. 18–21, Jul. 2006.
9. P. Lin, K. Abney, and G. A. Bekey, Eds., Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA, USA: MIT Press, 2012.
10. M. D. Dubber, F. Pasquale, and S. Das, Eds., The Oxford Handbook of Ethics of AI. Oxford, U.K.: Oxford Univ. Press, 2020.
11. V. C. Müller, "Risks of general artificial intelligence," J. Exp. Theor. Artif. Intell., vol. 26, no. 3, pp. 297–301, 2014.
12. A. Jobin, M. Ienca, and E. Vayena, "The global landscape of AI ethics guidelines," Nature Machine Intelligence, vol. 1, no. 9, pp. 389–399, 2019.
13. T. Winfield, "Ethical frameworks for machine learning," in Proc. AAAI Workshop on AI Ethics, 2016.
14. M. Taddeo and L. Floridi, "The ethics of information warfare: An overview," Ethics Inf. Technol., vol. 15, no. 2, pp. 91–99, 2013.
15. E. Brynjolfsson and A. McAfee, The Second Machine Age. New York, NY, USA: W. W. Norton, 2014.
16. M. Hildebrandt, "Profiling and the rule of law," Identity Inf. Soc., vol. 1, no. 1, pp. 55–70, 2008.
17. F. Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA, USA: Harvard Univ. Press, 2015.
18. H. Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA, USA: Stanford Univ. Press, 2009.
19. K. Crawford and R. Calo, "There is a blind spot in AI research," Nature, vol. 538, no. 7625, pp. 311–313, 2016.
20. R. Binns, "Fairness in machine learning: Lessons from political philosophy," in Proc. Conf. Fairness, Accountability and Transparency (FAT), 2017.
21. F. Doshi-Velez and B. Kim, "Towards a rigorous science of interpretable machine learning," arXiv preprint arXiv:1702.08608, 2017.
22. L. Floridi et al., "The ethics of artificial intelligence," Minds and Machines, vol. 24, no. 4, pp. 555–565, 2014.
23. D. L. Chen, "The ethics of AI and responsibility in the context of generative models," IEEE Trans. Ethics, vol. 7, no. 2, pp. 155–169, 2017.
24. J. Angwin et al., "Machine bias," ProPublica, May 2016.
25. European Commission, "Ethics guidelines for trustworthy AI," European Commission, Brussels, Belgium, Apr. 2019.
26. AI Now Institute, "Discriminating systems: Gender, race, and power in AI," AI Now Institute, New York, NY, USA, 2019.
27. M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why should I trust you?' Explaining the predictions of any classifier," in Proc. ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
28. European Union, "General Data Protection Regulation (GDPR)," Regulation (EU) 2016/679, Apr. 2016.
29. T. Gebru et al., "Datasheets for datasets," arXiv preprint arXiv:1803.09010, 2018.
30. G. Braunschweig, "Data privacy and the law," Int. Data Privacy J., vol. 4, pp. 18–31, 2017.
31. I. D. Raji and J. Buolamwini, "Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products," in Proc. AAAI/ACM Conf. AI, Ethics, and Society (AIES), 2019.
32. C. O'Neil, Weapons of Math Destruction. New York, NY, USA: Crown, 2016.
33. J. Dastin, "Amazon scraps secret AI recruiting tool," Reuters, Oct. 2018.
34. S. S. Kesan and D. Hayes, "Ethical implications of AI and the law," Harvard J. Law Technol., vol. 31, no. 1, pp. 123–150, 2017.
35. J. M. Spector, "AI for social good," AI and Ethics, vol. 2, no. 3, pp. 277–289, 2017.
36. L. Wang and P. M. Lee, "Fairness and bias in AI," Artif. Intell. Rev., vol. 55, pp. 125–150, 2017.
37. A. G. Greenfield, "AI and ethics: Privacy, transparency, and justice," J. Law Technol., vol. 12, no. 2, pp. 96–120, 2017.
38. D. Gunkel, The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, MA, USA: MIT Press, 2012.
39. R. Calo, "The case for a federal robotics commission," Brooklyn Law Rev., vol. 78, pp. 508–553, 2013.
40. P. Lin, "Why ethics matters for autonomous cars," in Autonomes Fahren. Berlin, Germany: Springer Vieweg, 2015, pp. 69–85.
41. V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY, USA: St. Martin's Press, 2018.
42. OECD, "OECD principles on AI," OECD, Paris, France, 2019.
43. High-Level Expert Group on AI, "A definition of AI: Main capabilities and disciplines," European Commission, 2018.
44. M. Ananny and K. Crawford, "Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability," New Media Soc., vol. 20, no. 3, pp. 973–989, 2018.
45. L. Winner, "Do artifacts have politics?" Daedalus, vol. 109, no. 1, pp. 121–136, 1980.