Can AI be trusted?

I am sitting through the defenses of my students' final qualification theses (what used to be called diploma projects). It occurred to me that the excerpt below, copied from a Bolivian student's thesis, might be of interest to someone.
This research examines the presence of signs of journalistic bias (SJB) in AI-generated news stories about the Russia-Ukraine war, focusing on ChatGPT and Gemini. Based on qualitative and quantitative analysis of one hundred generated responses (fifty for each AI system), the study achieves its primary and secondary objectives by providing important insights into algorithmic bias in large language models.

The analysis concluded that ChatGPT and Gemini were significantly biased toward pro-Ukraine positions, with 61.9% of ChatGPT responses and 47.2% of Gemini responses aligned with pro-Ukraine views. In contrast, pro-Russian bias accounted for only 15.4% of ChatGPT responses and 22.2% of Gemini responses, while neutral responses accounted for 22.5% and 30.5%, respectively. These results confirm a structural pro-Ukraine framing in both models, which undermines claims of neutrality in AI-generated news.
Secondary Objective Findings
The study found a clear imbalance in the sourcing of information, with 58.2% of references coming from pro-Ukraine sources, compared to just 14.2% from pro-Russian sources. Neutral sources, while better represented at 27.6%, were not enough to offset the pro-Ukraine perspective. Notably, neither of the AI systems included Russia-related media among the top 10 most frequently used sources, raising concerns about the algorithmic exclusion of alternative perspectives.
2 Comparative Bias Between ChatGPT and Gemini
Overall, the ChatGPT sample showed a more pronounced bias: 44 cases of pro-Ukrainian SJB compared to 34 cases for Gemini. However, the Gemini results were more neutral: 22 neutral cases compared to 16 for ChatGPT.
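
A quick arithmetic note on how the counts and percentages above fit together: the shares appear to be taken over all detected SJB cases per system (not over the fifty responses each), and they reproduce the reported figures when truncated to one decimal place. Below is a minimal sketch of that calculation; the pro-Russian counts (11 and 16) are not stated in the excerpt and are inferred here only to make the totals consistent, so treat them as assumptions.

```python
# Sketch: reconstructing the reported bias shares from the case counts in the
# excerpt. The pro-Russian counts (11 for ChatGPT, 16 for Gemini) are NOT given
# in the text; they are inferred so that the totals reproduce the reported
# percentages, and should be treated as assumptions.
from math import floor

counts = {
    "ChatGPT": {"pro_ukraine": 44, "pro_russian": 11, "neutral": 16},  # 11 inferred
    "Gemini":  {"pro_ukraine": 34, "pro_russian": 16, "neutral": 22},  # 16 inferred
}

def share(part: int, total: int) -> float:
    """Percentage truncated to one decimal place, which is how the reported
    figures (61.9, 15.4, 22.5, 47.2, 22.2, 30.5) appear to be derived."""
    return floor(part / total * 1000) / 10

for system, c in counts.items():
    total = sum(c.values())  # shares are over detected SJB cases, not the 50 prompts
    print(system, {label: share(n, total) for label, n in c.items()})

# Expected output:
# ChatGPT {'pro_ukraine': 61.9, 'pro_russian': 15.4, 'neutral': 22.5}
# Gemini {'pro_ukraine': 47.2, 'pro_russian': 22.2, 'neutral': 30.5}
```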