
FAQs about AI Chatbots in Military Law


ChatGPT, Google Gemini, Microsoft Copilot, Claude: AI chatbots have become ubiquitous in law, and servicemembers facing adversity often turn to them for assistance.

What do these terms mean?

Broadly speaking, “artificial intelligence” is the simulation of human intelligence by computer systems.

“LLMs” are “large language models”: computer programs that analyze vast pools of text to learn the statistical relationships between words and phrases. They can then generate language (i.e., write text) in response to a user’s prompts.

“GPTs” are “generative pre-trained transformers.” They are a specific subset of LLMs. GPTs excel at generating human-like text in response to user prompts.

Can AI chatbots generate reliable answers in military justice and military employment matters?

Sometimes, these models generate correct answers. But their accuracy rate is far too low to be relied upon.

The biggest problem: AI chatbots either hallucinate regulatory provisions that do not exist or misread the regulations that do exist. Worse, they attach very specific citations to these provisions, making them seem plausible and accurate.

Is this a problem unique to military law?

All AI chatbots make mistakes, but in our experience, they perform worse on military matters. We have a few theories as to why:

  • Military law usually involves a great deal of command discretion, unwritten rules, and cultural norms that are difficult to understand or express in writing. If you have served, you instinctively know this: reading a manual cannot tell you what the military is like, or how military commanders or investigators think. Only experience can do that.
  • Military administrative matters turn on complicated webs of hierarchy and authorities: statutes, DoDIs, service regulations, ALARACTs, the MILPERSMAN, NAVADMINs, local policy memoranda, SOPs, and so on. Many of those are not publicly available and are not digested by AI chatbots.
  • Military regulations are updated so frequently that AI chatbots struggle to keep up. We have often found that they cite out-of-date regulations as if they were still in force.
  • Military regulations are not as clear-cut or detailed as statutes and regulations in other fields. They leave many matters open to interpretation, and AI chatbots do not handle these ambiguities well.
  • The vast majority of military matters (both administrative and criminal) are never published. Therefore, AI chatbots cannot digest or analyze them. Remember, AI chatbots function by absorbing enormous amounts of data in order to identify patterns and make predictions based on associations. If the chatbot cannot absorb that data, it cannot reliably make the predictions.
  • Despite these issues, AI chatbots are designed to give you the answers you are looking for. If the answer does not exist, there is pressure on the chatbot to hallucinate it. We have witnessed AI chatbots inventing regulatory provisions out of thin air, even fabricating quotations from regulations they directly cite.

Can a chatbot help me craft the right legal strategy?

In our opinion, no. We have tested this ourselves with several AI chatbots. Very frequently, when we ask general questions like “What should I do in XYZ situation?”, the chatbots recommend legal strategies that would backfire in the real world.

It is becoming more common for servicemembers to attempt to DIY their problem with an AI chatbot, then turn to an attorney only after that reliance has damaged their case.

We strongly encourage servicemembers in need of legal assistance to set up a free consultation with an experienced attorney.

Are my chatlogs confidential?

No. You should never type anything into an AI chatbot that you would not want your commander, CID, NCIS, or AFOSI to read. Your conversations with AI chatbots are not confidential in any way. Military investigators can easily acquire your chatlogs with a subpoena, and your prompts could be used against you at court-martial.