CHALLENGES OF HUMAN RIGHTS AND SECURITY PROTECTION IN THE AGE OF ARTIFICIAL INTELLIGENCE
Abstract
The protection of human rights and security is a field in which many institutional, organizational, and humanitarian actors operate, and its effectiveness requires constant innovation in both theory and practice. In this context, the present study examines the challenges posed by the era of artificial intelligence, with the aim of improving the quality of work, cooperation, and experiences of all participants in the process of protecting human rights and security supported by artificial intelligence technologies. The research is based on the views of 330 respondents from the territory of the Republic of Serbia, who, through an online survey, expressed their satisfaction with the use of modern technology tools, including the latest AI solutions, in matters of protecting human and labor rights, as well as in communication with the media, the courts, company bodies, and public institutions. The results, analyzed using the chi-square test from a demographic perspective, indicate statistically significant differences in satisfaction between respondents with college and university education regarding the state of human rights protection and the work of organizations supported by new technologies. These findings underline the importance of digital education for participants and organizations in the process of protecting rights and security, but they also warn of the potential risks of solutions marked by a lack of algorithmic transparency, cybersecurity vulnerabilities, unfairness, bias, discrimination, negative effects on the workforce, threats to privacy, and unclear liability for possible harm. The paper contributes to the literature and to future analyses, with an emphasis on improving the interaction of decision-makers in the process of protecting human rights and security. It provides an incentive for greater investment in artificial intelligence and for advancing the knowledge and skills of all actors, in order to ensure better communication, transparency, and the adoption of fair solutions that guarantee the highest level of protection of people's rights and security.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 Unported License.
References
- Ananny, Mike, and Kate Crawford. 2018. “Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability.” New Media & Society 20 (3): 973–989. https://doi.org/10.1177/1461444816676645.
- Berk, Richard A. 2019. “Accuracy and Fairness for Juvenile Justice Risk Assessments.” Journal of Empirical Legal Studies. https://crim.sas.upenn.edu/sites/default/files/Berk_FairJuvy_1.2.2018.pdf.
- Berendt, Bettina. 2019. “AI for the Common Good?! Pitfalls, Challenges, and Ethics Pen-Testing.” Paladyn, Journal of Behavioral Robotics 10: 44–65. https://doi.org/10.1515/pjbr-2019-0004.
- Council of Europe. 2019. Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights.
- Coldewey, Devin. 2018. “AI Desperately Needs Regulation and Public Accountability, Experts Say.” TechCrunch, December 7. https://techcrunch.com/2018/12/07/ai-desperately-needs-regulation-and-public-accountability-experts-say/.
- Cath, Corinne. 2018. “Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. https://doi.org/10.1098/rsta.2018.0080.
- Courtland, Rachel. 2018. “Bias Detectives: The Researchers Striving to Make Algorithms Fair.” Nature. https://www.nature.com/articles/d41586-018-05469-3.
- Couchman, Hannah. 2019. Policing by Machine: Predictive Policing and the Threats to Our Rights. Liberty Human Rights. https://www.libertyhumanrights.org.uk/sites/default/files/LIB%2011%20Predictive%20Policing%20Report%20WEB.pdf.
- Čurčić, Nemanja, Aleksandar Grubor, and Bojan Jevtić. 2024. “Implementing Artificial Intelligence in Travel Services: Customer Satisfaction Gap Study at Serbian Airports.” Ekonomika 3/2024.
- Danks, David, and Alex J. London. 2017. “Algorithmic Bias in Autonomous Systems.” In Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI Press. https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-.
- Desai, Deven R., and Joshua A. Kroll. 2017. “Trust but Verify: A Guide to Algorithms and the Law.” Harvard Journal of Law & Technology 31:1.
- Eliot, Lance. 2022. “Responsible AI Relishes Preeminent Boost Via AI Ethics Proclamation by Top Professional Society the ACM.” Forbes.
- European Parliament. 2017. “Resolution of 14 March 2017 on Fundamental Rights Implications of Big Data: Privacy, Data Protection, Non-Discrimination, Security and Law Enforcement (2016/2225(INI)).”
- European Commission. 2018a. “Coordinated Plan on Artificial Intelligence.” COM (2018) 795 final, Brussels.
- European Commission. 2021. “Proposal for a Regulation on a European Approach for Artificial Intelligence.” COM(2021) 206 final.
- European Commission. 2020. “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.” COM(2020) 65 final, Brussels.
- Executive Office of the President. 2016. Preparing for the Future of Artificial Intelligence.
- Fralick, Courtney. 2019. “Artificial Intelligence in Cybersecurity Is Vulnerable.” SC Magazine. https://www.scmagazine.com/home/opinion/artificial-intelligence-in-cybersecurity-is-vulnerable/.
- Jansen, Pieter. 2018. SIENNA D4.1 State of the Art Review: AI and Robotics. SIENNA Project. https://www.sienna-project.eu/digitalAssets/787/c_787382-l_1-k_sienna-d4.1-state-of-the-art-review-final-v.04.pdf.
- Jevtić, Bojan, Stevica Deđanski, Milorad Beslać, Ratko Grozdanić, and Aleksandar Damnjanović. 2013. “SME Technology Capacity Building for Competitiveness and Export: Evidence from Balkan Countries.” Metalurgija International 18 (special issue 4): 162–170. București: Editura Științifică F.M.R. https://enauka.gov.rs/handle/123456789/676927.
- Jevtić, Bojan, Milorad Beslać, Dragana Janjušić, and Marija Jevtić. 2024. “The Effects of Digital Natives’ Expectations of Tech Hotel Services Quality on Customer Satisfaction.” International Journal for Quality Research 18 (1): 1–10. https://doi.org/10.24874/IJQR18.01-01.
- Lohr, Steve. 2019. “AI and Privacy Concerns Get White House to Embrace Global Cooperation.” The New York Times. https://www.nytimes.com/2019/04/03/technology/artificial-intelligence-privacy-oecd.html.
- Mittelstadt, Brent D., Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. 2016. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society. https://doi.org/10.1177/2053951716679679.
- Marr, Bernard. 2019. “Artificial Intelligence Has a Problem with Bias, Here’s How to Tackle It.” Forbes. https://www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-to-tackle-the-problem-of-bias-inartificial-intelligence/.
- Mitchell, Iain. 2019. “The Use of AI Gives Rise to Huge Potential Legal Issues.” The Scotsman. https://www.scotsman.com/lifestyle/iain-mitchell-the-use-of-ai-gives-rise-to-huge-potential-legal-issues-1-4924962.
- Makridakis, Spyros. 2017. “The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms.” Futures 90: 46–60. https://doi.org/10.1016/j.futures.2017.03.006.
- Mantelero, Alessandro, and Massimo S. Esposito. 2021. “An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems.” Computer Law & Security Review 41: 105561. https://doi.org/10.1016/j.clsr.2021.105561.
- Miškić, Milana, Bojana Srebro, Marko Rašković, Milan Vrbanac, and Bojan Jevtić. 2024. “Key Challenges Hindering SMEs’ Full Benefit from Digitalization: A Case Study from Serbia.” International Journal for Quality Research 19 (2). https://doi.org/10.24874/IJQR19.02-03.
- Niiler, Eric. 2019. “Can AI Be a Fair Judge in Court? Estonia Thinks So.” Wired. https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so.
- OECD. 2019. Artificial Intelligence in Society. Paris: OECD Publishing. https://doi.org/10.1787/eedfee77-en.
- Patel, Anya, Themis Hatzakis, Kevin Macnish, Mark Ryan, and Anatolii Kirichenko. 2019. “Cyberthreats and Countermeasures.” SHERPA Project. https://doi.org/10.21253/DMU.7951292.v3.
- Privacy International & Article 19. 2018. Privacy and Freedom of Expression in the Age of Artificial Intelligence. https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf.
- Raji, Inioluwa Deborah, and Joy Buolamwini. 2019. “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.” AAAI/ACM Conference on AI Ethics and Society.
- Raymond, Adam H., and Scott J. Shackelford. 2013. “Technology, Ethics, and Access to Justice: Should an Algorithm Be Deciding Your Case?” Michigan Journal of International Law 35: 485.
- Russell, Stuart J., and Peter Norvig. 2016. Artificial Intelligence: A Modern Approach. Pearson Education Limited.
- Rodrigues, Rowena, Alexandros Panagiotopoulos, David Wright, Themis Hatzakis, and Stella Laulhé Shaelou. 2020. “SHERPA Deliverable 3.3 Report on Regulatory Options.” SHERPA Project. https://doi.org/10.21253/DMU.8181827.v2.
- Schönberger, Daniel. 2018. “Deep Copyright: Up-and-Downstream Questions Related to Artificial Intelligence (AI) and Machine Learning (ML).” Zeitschrift für Geistiges Eigentum/Intellectual Property Journal 10 (1): 35.
- Srebro, Bojana, and Bojan Jevtić. 2024. “Improving Decision-Making Efficiency Through AI-Powered Fraud Detection and Prevention.” International Congress on Project Management of ICT, Aranđelovac, 2024.
- Smith, Laura. 2017. “Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making.” Future of Privacy Forum. https://fpf.org/2017/12/11/unfairness-by-algorithm-distilling-the-harms-of-automated-decision-making/.
- United Nations. 2019. United Nations Activities on Artificial Intelligence (AI). http://handle.itu.int/11.1002/pub/813bb49e-en.
- Williams, Holly. 2019. “Big Brother AI Is Watching You.” IT ProPortal. https://www.itproportal.com/features/big-brother-ai-is-watching-you/.
- Wachter, Sandra, and Brent D. Mittelstadt. 2019. “A Right to Reasonable Inferences: Rethinking Data Protection Law in the Age of Big Data and AI.” Columbia Business Law Review. https://ora.ox.ac.uk/objects/uuid:d53f7b6a-981c-4f87-91bc-743067d10167/download_file?file_format=pdf.