Philosophical consideration of social risks of intellectual automation of social management

Natalia G. Mironova
Bashkir State University


Abstract. Over the last decade, the digital transformation of processes and control systems has been accompanied by the introduction of artificial intelligence technologies. The purpose of this study is to investigate the conditions for the safe use of intelligent technologies and tools in managing social infrastructure. The research methodology is based on an integrated approach, comparative analysis, and logical synthesis. The author offers a philosophical analysis of the existential risks of the intellectual automation of social management and the mechanisms by which those risks are realized, and also investigates the conditions for a safer use of technologies for the intelligent automation of socially significant decisions. Generalized measures and directions of inquiry are proposed to reduce the risks associated with the intelligent automation of control.

Keywords: philosophical problems of knowledge engineering, intelligent models, decision support systems, artificial intelligence, risks

DOI: 10.32326/2618-9267-2021-4-2-125-144


