CHALLENGES OF PROTECTING HUMAN RIGHTS AND SECURITY IN THE AGE OF ARTIFICIAL INTELLIGENCE


Aleksandar S. Đorđević
Stevica Deđanski
Boris Jevtić

Abstract

The protection of human rights and security is a field in which many institutional, organizational, and humanitarian actors operate, and its effectiveness requires constant innovation in theory and practice. In this context, the present study addresses the challenges brought by the era of artificial intelligence, with the aim of improving the quality of work, cooperation, and experience of all participants in the process of protecting human rights and security supported by artificial intelligence technologies. The research is based on the views of 330 respondents from the territory of the Republic of Serbia, who expressed, through an online survey, their satisfaction with the use of modern technological tools, including the latest AI solutions, in matters of protecting human and labor rights, as well as in communication with the media, courts, company bodies, and public institutions. The results, analyzed using the chi-square test from a demographic perspective, indicate statistically significant differences in satisfaction among respondents with college and university education regarding the state of human rights protection and the work of organizations supported by new technologies. These findings underline the importance of digital education for participants and organizations in the process of protecting rights and security, but also warn of the potential risks of solutions marked by a lack of algorithmic transparency, cybersecurity vulnerabilities, unfairness, bias, discrimination, negative impacts on the workforce, threats to privacy, and unclear responsibility for possible harm. The paper contributes to the literature and to future analyses, with an emphasis on improving the interaction of decision-makers in the process of protecting human rights and security. It provides an incentive for greater investment in artificial intelligence and in advancing the knowledge and skills of all actors, so as to ensure better communication, transparency, and the adoption of fair solutions that guarantee the highest level of protection of people's rights and security.
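The abstract describes analyzing the 330 survey responses with a chi-square test of independence across demographic groups. A minimal sketch of that computation, with invented counts purely for illustration (the study's actual survey data are not reproduced here):

```python
# Sketch of a chi-square test of independence on a hypothetical 2x3
# contingency table: education level vs. satisfaction with AI-supported
# rights protection. All counts below are invented for illustration.

def chi_square_statistic(table):
    """Return (chi2, degrees_of_freedom) for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Hypothetical counts: rows = education (secondary, higher),
# columns = satisfaction (low, medium, high); totals sum to 330.
observed = [
    [40, 60, 30],   # secondary education
    [30, 70, 100],  # higher education
]
chi2, df = chi_square_statistic(observed)
print(round(chi2, 2), df)  # compare chi2 against the critical value for df
```

A chi-square value well above the critical value for the given degrees of freedom (at, say, the 0.05 level) would indicate the kind of statistically significant difference between education groups that the abstract reports.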


How to Cite
CHALLENGES OF PROTECTING HUMAN RIGHTS AND SECURITY IN THE AGE OF ARTIFICIAL INTELLIGENCE. (2025). Limes-plus, 20(2-3), 157-181. https://doi.org/10.69899/limes-plus24212-3157d

References

  1. Ananny, Mike, and Kate Crawford. 2018. “Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability.” New Media & Society 20 (3): 973–989. https://doi.org/10.1177/1461444816676645.
  2. Berk, Richard A. 2019. “Accuracy and Fairness for Juvenile Justice Risk Assessments.” Journal of Empirical Legal Studies. https://crim.sas.upenn.edu/sites/default/files/Berk_FairJuvy_1.2.2018.pdf.
  3. Berendt, Bettina. 2019. “AI for the Common Good?! Pitfalls, Challenges, and Ethics Pen-Testing.” Paladyn, Journal of Behavioral Robotics 10: 44–65. https://doi.org/10.1515/pjbr-2019-0004.
  4. Council of Europe. 2019. Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights.
  5. Coldewey, Devin. 2018. “AI Desperately Needs Regulation and Public Accountability, Experts Say.” TechCrunch, December 7. https://techcrunch.com/2018/12/07/ai-desperately-needs-regulation-and-public-accountability-experts-say/.
  6. Cath, Corinne. 2018. “Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. https://doi.org/10.1098/rsta.2018.0080.
  7. Courtland, Rachel. 2018. “Bias Detectives: The Researchers Striving to Make Algorithms Fair.” Nature. https://www.nature.com/articles/d41586-018-05469-3.
  8. Couchman, Hannah. 2019. Policing by Machine: Predictive Policing and the Threats to Our Rights. Liberty Human Rights. https://www.libertyhumanrights.org.uk/sites/default/files/LIB%2011%20Predictive%20Policing%20Report%20WEB.pdf.
  9. Čurčić, Nemanja, Aleksandar Grubor, and Bojan Jevtić. 2024. “Implementing Artificial Intelligence in Travel Services: Customer Satisfaction Gap Study at Serbian Airports.” Ekonomika 3/2024.
  10. Danks, David, and Alex J. London. 2017. “Algorithmic Bias in Autonomous Systems.” In Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI Press. https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-.
  11. Desai, Deven R., and Joshua A. Kroll. 2017. “Trust but Verify: A Guide to Algorithms and the Law.” Harvard Journal of Law & Technology 31:1.
  12. Eliot, Lance. 2022. “Responsible AI Relishes Preeminent Boost Via AI Ethics Proclamation by Top Professional Society the ACM.” Forbes.
  13. European Parliament. 2017. “Resolution of 14 March 2017 on Fundamental Rights Implications of Big Data: Privacy, Data Protection, Non-Discrimination, Security and Law Enforcement (2016/2225(INI)).”
  14. European Commission. 2018a. “Coordinated Plan on Artificial Intelligence.” COM (2018) 795 final, Brussels.
  15. European Commission. 2021. “Proposal for a Regulation on a European Approach for Artificial Intelligence.” COM(2021) 206 final.
  16. European Commission. 2020. “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.” COM(2020) 65 final, Brussels.
  17. Executive Office of the President. 2016. Preparing for the Future of Artificial Intelligence.
  18. Fralick, Courtney. 2019. “Artificial Intelligence in Cybersecurity Is Vulnerable.” SC Magazine. https://www.scmagazine.com/home/opinion/artificial-intelligence-in-cybersecurity-is-vulnerable/.
  19. Jansen, Pieter. 2018. SIENNA D4.1 State of the Art Review: AI and Robotics. SIENNA Project. https://www.sienna-project.eu/digitalAssets/787/c_787382-l_1-k_sienna-d4.1-state-of-the-art-reviewfinal-v.04.pdf.
  20. Jevtić, Bojan, Stevica Deđanski, Milorad Beslać, Ratko Grozdanić, and Aleksandar Damnjanović. 2013. “SME Technology Capacity Building for Competitiveness and Export: Evidence from Balkan Countries.” Metalurgija International 18 (special issue 4): 162–170. București: Editura Științifică F.M.R. https://enauka.gov.rs/handle/123456789/676927.
  21. Jevtić, Bojan, Milorad Beslać, Dragana Janjušić, and Marija Jevtić. 2024. “The Effects of Digital Natives’ Expectations of Tech Hotel Services Quality on Customer Satisfaction.” International Journal for Quality Research 18 (1): 1–10. https://doi.org/10.24874/IJQR18.01-01.
  22. Lohr, Steve. 2019. “AI and Privacy Concerns Get White House to Embrace Global Cooperation.” The New York Times. https://www.nytimes.com/2019/04/03/technology/artificial-intelligence-privacy-oecd.html.
  23. Mittelstadt, Brent D., Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. 2016. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society. https://doi.org/10.1177/2053951716679679.
  24. Marr, Bernard. 2019. “Artificial Intelligence Has a Problem with Bias, Here’s How to Tackle It.” Forbes. https://www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-to-tackle-the-problem-of-bias-in-artificial-intelligence/.
  25. Mitchell, Iain. 2019. “The Use of AI Gives Rise to Huge Potential Legal Issues.” The Scotsman. https://www.scotsman.com/lifestyle/iain-mitchellthe-use-of-ai-gives-rise-to-huge-potential-legal-issues-1-4924962.
  26. Makridakis, Spyros. 2017. “The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms.” Futures 90: 46–60. https://doi.org/10.1016/j.futures.2017.03.006.
  27. Mantelero, Alessandro, and Massimo S. Esposito. 2021. “An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems.” Computer Law & Security Review 41: 105561. https://doi.org/10.1016/j.clsr.2021.105561.
  28. Miškić, Milana, Bojana Srebro, Marko Rašković, Milan Vrbanac, and Bojan Jevtić. 2024. “Key Challenges Hindering SMEs’ Full Benefit from Digitalization: A Case Study from Serbia.” International Journal for Quality Research 19 (2). https://doi.org/10.22874/IJQR1902-03.
  29. Niiler, Eric. 2019. “Can AI Be a Fair Judge in Court? Estonia Thinks So.” Wired. https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so.
  30. OECD. 2019. Artificial Intelligence in Society. Paris: OECD Publishing. https://doi.org/10.1787/eedfee77-en.
  31. Patel, Anya, Themis Hatzakis, Kevin Macnish, Mark Ryan, and Anatolii Kirichenko. 2019. “Cyberthreats and Countermeasures.” SHERPA Project. https://doi.org/10.21253/DMU.7951292.v3.
  32. Privacy International & Article 19. 2018. Privacy and Freedom of Expression in the Age of Artificial Intelligence. https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf.
  33. Raji, Inioluwa Deborah, and Joy Buolamwini. 2019. “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.” AAAI/ACM Conference on AI Ethics and Society.
  34. Raymond, Adam H., and Scott J. Shackelford. 2013. “Technology, Ethics, and Access to Justice: Should an Algorithm Be Deciding Your Case?” Michigan Journal of International Law 35: 485.
  35. Russell, Stuart J., and Peter Norvig. 2016. Artificial Intelligence: A Modern Approach. Pearson Education Limited.
  36. Rodrigues, Rowena, Alexandros Panagiotopoulos, David Wright, Themis Hatzakis, and Stella Laulhé Shaelou. 2020. “SHERPA Deliverable 3.3 Report on Regulatory Options.” SHERPA Project. https://doi.org/10.21253/DMU.8181827.v2.
  37. Schönberger, Daniel. 2018. “Deep Copyright: Up-and-Downstream Questions Related to Artificial Intelligence (AI) and Machine Learning (ML).” Zeitschrift für Geistiges Eigentum/Intellectual Property Journal 10 (1): 35.
  38. Srebro, Bojana, and Bojan Jevtić. 2024. “Improving Decision-Making Efficiency Through AI-Powered Fraud Detection and Prevention.” International Congress on Project Management of ICT, Aranđelovac, 2024.
  39. Smith, Laura. 2017. “Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making.” Future of Privacy Forum. https://fpf.org/2017/12/11/unfairness-by-algorithm-distilling-the-harms-of-automated-decision-making/.
  40. United Nations. 2019. United Nations Activities on Artificial Intelligence (AI). http://handle.itu.int/11.1002/pub/813bb49e-en.
  41. Williams, Holly. 2019. “Big Brother AI Is Watching You.” IT ProPortal. https://www.itproportal.com/features/big-brother-ai-is-watching-you/.
  42. Wachter, Sandra, and Brent D. Mittelstadt. 2019. “A Right to Reasonable Inferences: Rethinking Data Protection Law in the Age of Big Data and AI.” Columbia Business Law Review. https://ora.ox.ac.uk/objects/uuid:d53f7b6a-981c-4f87-91bc-743067d10167/download_file?file_format=pdf.
