The US Space Force has temporarily banned the use of web-based generative artificial intelligence tools, and the large language models that power them, citing data security concerns among other issues.

A memorandum sent to the Guardian Workforce — the service’s term for Space Force personnel — halts the use of government data with web-based generative artificial intelligence tools that can create text, images or other media from simple prompts. The memo says such tools are “not permitted” on government systems unless specifically approved.

Chatbots such as OpenAI’s ChatGPT have surged in popularity. They use language models trained on vast amounts of data to predict and generate new text. Such LLMs have spawned a whole generation of artificial intelligence tools that can, for example, search through documents, extract key details and present them as coherent reports in a variety of writing styles.

“Generative AI will undoubtedly revolutionize our workforce and enhance Guardians’ ability to act quickly,” Lisa Costa, the Space Force’s chief technology and innovation officer, said in the memo. But Costa also cited concerns about cybersecurity, data handling and procurement requirements, saying the adoption of AI and LLMs must be “responsible”.

No further explanation was given. Experts have noted the risk that the voluminous, potentially non-public data fed into these models through uploaded documents and prompts could, under certain conditions, be leaked to the public.

A Space Force spokesperson at the Defense Department confirmed the authenticity of the Sept. 29 memo seen by Bloomberg and said a strategic pause was being taken to protect personnel and Space Force data while officials decide how best to integrate the capability into the service’s missions.

At least 500 people who used Ask Sage, a generative AI platform, have already been affected by the Space Force’s decision, according to company founder Nicolas Chaillan. Ask Sage aims to provide a secure generative AI platform that works with multiple LLMs, including those from Microsoft and Alphabet, according to Chaillan.

Chaillan, a former chief software officer for the Air Force and Space Force, criticized the agency’s decision to halt the use of generative AI — especially as the Defense Department has called for accelerated deployment of artificial intelligence. In August, the Pentagon launched a generative AI task force to explore use cases for LLMs and how to evaluate and integrate them across the Department of Defense. “It’s a very short-sighted decision,” Chaillan said.

The CIA has already developed a generative AI tool for widespread use across the intelligence community. Chaillan said his platform already meets the security requirements needed to protect such data.

Source: Bloomberg