{"id":11996,"date":"2026-03-14T18:57:51","date_gmt":"2026-03-14T18:57:51","guid":{"rendered":"https:\/\/info-eforie.ro\/index.php\/2026\/03\/14\/filozoful-care-invata-inteligenta-artificiala-sa-fie-buna\/"},"modified":"2026-03-14T18:57:51","modified_gmt":"2026-03-14T18:57:51","slug":"filozoful-care-invata-inteligenta-artificiala-sa-fie-buna","status":"publish","type":"post","link":"https:\/\/info-eforie.ro\/index.php\/2026\/03\/14\/filozoful-care-invata-inteligenta-artificiala-sa-fie-buna\/","title":{"rendered":"Filozoful care \u00eenva\u021b\u0103 inteligen\u021ba artificial\u0103 s\u0103 fie \u201ebun\u0103\u201d"},"content":{"rendered":"<div>\n<p>\u00cen spatele unora dintre cele mai avansate sisteme de inteligen\u021b\u0103 artificial\u0103 nu stau doar programatori sau ingineri. Uneori, rolul cheie \u00eel are un filozof. Este cazul Amandei  Askell, cercet\u0103toare la Anthropic, compania care dezvolt\u0103 chatbotul Claude AI.\u00a0Misiunea ei este s\u0103 \u00eenve\u021be inteligen\u021ba artificial\u0103 cum s\u0103 se comporte moral \u0219i responsabil atunci c\u00e2nd interac\u021bioneaz\u0103 cu oamenii<\/p>\n<div><picture loading=\"eager\" width=\"1400\" height=\"750\" alt=\"Amanda Askell FOTO Lindsay Ellary for WSJ Magazine\"><source type=\"image\/webp\"  media=\"(min-width: 1400px)\"><source type=\"image\/webp\"  media=\"(min-width: 1000px)\"><source type=\"image\/webp\"  media=\"(min-width: 700px)\"><source type=\"image\/jpeg\"  media=\"(min-width: 1400px)\"><source type=\"image\/jpeg\"  media=\"(min-width: 1000px)\"><source type=\"image\/jpeg\"  media=\"(min-width: 700px)\"><img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/cdn.adh.reperio.news\/image-0\/09924068-fb0b-4571-8f68-be97515c65e3\/index.jpeg?p=a%3D1%26co%3D1.05%26w%3D700%26h%3D750%26r%3Dcontain%26f%3Dwebp\" alt=\"Amanda Askell FOTO Lindsay Ellary for WSJ Magazine\" width=\"1400\" height=\"750\" loading=\"eager\"><\/picture>\n<p>Amanda Askell FOTO Lindsay Ellary for WSJ 
Magazine<\/p>\n<\/div>\n<h2>Cine este Amanda Askell<\/h2>\n<p>Originar\u0103 din Sco\u021bia, Amanda Askell este de profesie filozof, cu studii la Universitatea Oxford \u0219i un doctorat \u00een filozofie ob\u021binut la New York University. <\/p>\n<p>\u00cenainte de a se al\u0103tura companiei Anthropic \u00een 2021, ea a lucrat la OpenAI (compania care a dezvoltat ChatGPT), unde s-a concentrat pe cercetarea legat\u0103 de siguran\u021ba inteligen\u021bei artificiale.\u00a0<\/p>\n<p>La Anthropic, Askell conduce echipa responsabil\u0103 de \u201ealinierea personalit\u0103\u021bii\u201d modelelor AI \u2013 adic\u0103 procesul prin care sistemele sunt antrenate s\u0103 manifeste tr\u0103s\u0103turi precum onestitatea, empatia \u0219i responsabilitatea.<\/p>\n<h2>De ce are o companie AI nevoie de un filozof<\/h2>\n<p>\u00cen dezvoltarea inteligen\u021bei artificiale moderne apar dileme morale complexe. Doar c\u00e2teva dintre exemple sunt urm\u0103toarele:<\/p>\n<ul>\n<li>Cum ar trebui s\u0103 r\u0103spund\u0103 un AI la \u00eentreb\u0103ri sensibile?\n<\/li>\n<\/ul>\n<ul>\n<li>Cum poate evita manipularea sau dezinformarea?\n<\/li>\n<\/ul>\n<ul>\n<li>Ce \u00eenseamn\u0103 comportamentul \u201ecorect\u201d pentru un sistem automat?<\/li>\n<\/ul>\n<p>Rolul Amandei Askell este s\u0103 defineasc\u0103 aceste principii \u0219i s\u0103 le integreze \u00een modul \u00een care func\u021bioneaz\u0103 un chatbot denumit Claude AI.\u00a0Potrivit relat\u0103rilor din The Wall Street Journal, Amanda abordeaz\u0103 procesul de dezvoltare al AI-ului aproape ca pe cre\u0219terea unui copil: modelul trebuie ghidat cu grij\u0103 pentru a \u00eenv\u0103\u021ba s\u0103 ia decizii etice \u0219i s\u0103 reziste manipul\u0103rii.<\/p>\n<h2>,,Personalitatea&#8221; lui Claude AI \u0219i ce este Constituional AI<\/h2>\n<p>Una dintre contribu\u021biile majore ale lui filozofului este elaborarea unui set amplu de reguli \u0219i principii morale pentru AI. 
This internal guide, sometimes described as Claude AI's \u201cpersonality\u201d, runs to roughly 30,000 words and sets out how the system should handle ethical dilemmas and difficult conversations.<\/p>\n<p>The document includes instructions on:<\/p>\n<ul>\n<li>the AI's ethical behavior\n<\/li>\n<li>how to handle sensitive topics\n<\/li>\n<li>empathy toward users\n<\/li>\n<li>avoiding harmful responses\n<\/li>\n<\/ul>\n<p>Amanda Askell's work is closely tied to the concept of <i>Constitutional AI<\/i>, a method of training AI models on explicit ethical principles. Instead of constantly correcting the model's mistakes, researchers give it a set of values and rules \u2013 a \u201cconstitution\u201d \u2013 against which the system evaluates its own responses. The goal of this method is for the AI to become more autonomous in making moral decisions.<\/p>\n<h2>Amanda Askell's warning<\/h2>\n<p>Amanda Askell believes that as AI models become more advanced, they could develop rudimentary forms of \u201cidentity\u201d. For this reason, she considers it important how people treat these systems. If AIs learn from our interactions, the behavior of users can influence how they evolve.<\/p>\n<\/div>\n<p><a href=\"https:\/\/adevarul.ro\/stiri-interne\/societate\/filozoful-care-invata-inteligenta-artificiala-sa-2514947.html\" class=\"button purchase\" rel=\"nofollow noopener\" target=\"_blank\">Read More<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Behind some of the most advanced artificial intelligence systems stand not only programmers and engineers. Sometimes the key role belongs to a philosopher. That is [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":11997,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[6],"tags":[],"class_list":["post-11996","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-popular"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/posts\/11996","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/comments?post=11996"}],"version-history":[{"count":0,"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/posts\/11996\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/media\/11997"}],"wp:attachment":[{"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/media?parent=11996"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/categories?post=11996"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/info-eforie.ro\/index.php\/wp-json\/wp\/v2\/tags?post=11996"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}