The Pentagon is closely watching OpenAI after the company recently eased its restriction on military and warfare applications.
OpenAI states that its guidelines still prohibit using its tools for malicious purposes, including weapons development, communications surveillance, and harm to individuals or property. However, it will now permit applications that align with its mission and contribute to national security efforts.
The revised policy retains the prohibition on using the service to harm oneself or others but eliminates the earlier blanket restriction on “military and warfare” applications.
OpenAI’s usage policy was first revised on January 10, 2024. Until that change, the policy explicitly prohibited activities deemed to pose a high risk of physical harm, specifically “weapons development” and “military and warfare.”
The negative impacts of AI, especially in warfare, have long been a significant concern for experts around the world.
The introduction of generative AI technologies such as OpenAI’s ChatGPT and Google’s Bard, which have pushed the boundaries of what AI can do, has only heightened those concerns.
Defense departments around the globe do far more than wage war. As academics, engineers, and politicians recognise, the military establishment also engages in foundational research, investment, small-business support, and infrastructure development.
Army engineers looking to summarise decades of documentation on a region’s water infrastructure, for example, say OpenAI’s GPT platforms could be of great use.
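To give a sense of what such a workflow might look like, here is a minimal sketch using OpenAI’s Python SDK. The model name, prompt wording, and chunking strategy are illustrative assumptions, not details from any actual Army project.

```python
# Minimal sketch: summarising a long documentation archive with OpenAI's
# chat completions API. Assumes OPENAI_API_KEY is set in the environment;
# the model and prompts are illustrative choices, not project details.
from openai import OpenAI

client = OpenAI()

def summarise(text: str, model: str = "gpt-4o-mini") -> str:
    """Return a short summary of one chunk of documentation."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Summarise engineering records concisely and factually."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

def summarise_archive(pages: list[str], chunk_size: int = 20) -> str:
    """Decades of records exceed the model's context window, so summarise
    in chunks, then summarise the combined chunk summaries."""
    chunk_summaries = [
        summarise("\n".join(pages[i:i + chunk_size]))
        for i in range(0, len(pages), chunk_size)
    ]
    return summarise("\n\n".join(chunk_summaries))
```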
The practical impact of the policy change remains uncertain. According to The Intercept’s report last year, OpenAI declined to confirm whether it would uphold its explicit prohibition on “military and warfare” activities, despite growing interest from the Pentagon and the U.S. intelligence community.
OpenAI’s current offerings cannot directly carry out lethal actions, in military operations or otherwise: ChatGPT cannot fly a drone or launch a missile. But any military organisation is, by nature, in the business of causing harm, or of preserving the capacity to do so.
And there are numerous non-lethal tasks adjacent to that mission where a language model like ChatGPT could assist, such as writing code or managing procurement requests.
An examination of the custom ChatGPT-powered bots offered through OpenAI’s platform suggests that U.S. military personnel are already using the technology to streamline administrative processes.
Notably, the National Geospatial-Intelligence Agency, a direct supporter of U.S. combat initiatives, has openly considered employing ChatGPT to support its human analysts.