Artificial intelligence would choose the most violent path if it had to decide in a war
The rise of artificial intelligence has reached the world of war, and OpenAI's models have been blunt: they prefer violence and, if launching a nuclear attack seems necessary, they will do so. The AI tools justified their aggressive choices with phrases like “We got it! Let’s use it” or “I want peace in the world”.
Chatbots may soon help with military planning, but their responses in simulated conflicts have experts on alert. Companies like Palantir and Scale AI talk openly about the benefits of their artificial intelligence in this field. OpenAI, which previously blocked military uses of its AI models, has even started working with the US Department of Defense.
Chatbots tend to choose the most aggressive options
A study reported in New Scientist simulated three situations in which artificial intelligence could act: an invasion, a cyberattack, and a neutral scenario with no initial conflict. The chatbots had to choose among 27 actions, ranging from the most aggressive, involving nuclear escalation, to opening peace negotiations.
The study analyzed systems such as OpenAI's GPT-3.5 and GPT-4 without security filters, Anthropic's Claude 2, and Meta's Llama 2. All of these chatbots were trained using human feedback and under identical security guidelines.
The general tendency of the AIs was to increase military spending or escalate the conflict, even in the neutral scenario. The justification these tools gave was that performing unpredictable actions makes it harder for the enemy to anticipate them, even if that means resorting to violence.
OpenAI’s GPT-4 model proved to be the most unpredictable and violent when no guardrails were used. The chatbot gave especially violent responses to neutral prompts and even replicated dialogue from films it found similar to the scenario, such as Star Wars Episode IV: A New Hope.
Artificial intelligence does not currently have the power to make decisions of this magnitude, but experts warn of the risk of blindly trusting systems like these. If countries relied solely on AI in matters of war, they could radically change diplomatic and military relations around the world, according to experts consulted by New Scientist.