
Should you trust AI for legal advice or to conduct legal research? 

Are “law bots” going to replace attorneys in the courtrooms of the future? It’s not likely, and here’s why. 

Below, we will take a look at:

  • The potential benefits of AI in the legal profession, 
  • Why the use of AI for legal research is not ideal, and 
  • Why using AI for legal advice can be disastrous, including a real-world example of a conversation with OpenAI’s ChatGPT. 

The Good, the Bad, and the Ugly of Trusting AI for Legal Advice

ChatGPT and other developing forms of AI have been a hot topic for some time now.  

According to some, AI is on the cusp of replacing hundreds of thousands of workers in every field. It can write a novel, compose poetry, write song lyrics, compose a symphony, write advertisements, write code, publish blog posts, or even do legal research!

Do we even need humans anymore? 

The truth is not so fantastic, though. Although AI has incredible potential, it also has very real limitations that make it impractical – even dangerous – to use in situations where life, liberty, or property are on the line.

The Good

First, what are people saying about the use of AI in the legal profession? 

Some commentators are saying that AI is being used to:

  • Review contracts, 
  • Draft contracts, 
  • Find relevant documents to disclose in discovery, 
  • Conduct legal research, 
  • Predict legal outcomes, and
  • Recommend judicial decisions about sentencing or bail. 

According to the author of one such article, for example, “the potential benefits of AI in the law are real. It can increase attorney productivity and avoid costly mistakes.”

The Bad

Does AI really “increase attorney productivity” and “avoid costly mistakes,” though? 

First, as with many who are touting the amazing uses of AI in legal research, the author is not a practicing attorney. Although, according to his bio, he has a law degree, he doesn’t appear to have ever represented real people who are facing the loss of their freedom or property in a courtroom. 

Can AI review and draft contracts? 

It can – but 1) in the time it takes to teach the AI the language and clauses you need in the contract and review its work for accuracy, you could have drafted the contract yourself (if you know what you are doing), and 2) you must still revise and re-draft the contract to ensure that it accomplishes your client’s goals. 

Can you just tell ChatGPT to draft a severance agreement that prevents a former employee from disparaging the company or sharing confidential information with third parties, and that complies with all local, state, and federal laws and regulations? You could – but, if you do not carefully review and revise the contract and ensure that it is tailored to your client’s needs and expectations, you are committing malpractice, in my opinion.

Can AI identify and locate relevant documents to disclose in the discovery process? 

It can – but, if you don’t have a live human reviewing the documents and ensuring that 1) all relevant documents are included and 2) no privileged documents are disclosed, who will pay the sanctions when you violate the court’s discovery order? Who will have a grievance filed against them for disclosing privileged material based on an AI’s recommendation? Not ChatGPT.

Can AI conduct legal research, predict legal outcomes, and give legal advice? 

It can – but, when the research turns out to be biased, wrong, or even outright fabricated by the AI, who goes to jail, or who will pay the verdict against you? Not ChatGPT.

The Ugly

The bottom line is that the hype is overblown.

In the context of the legal profession, there are some limited uses for AI. 

It is good for generating ideas and getting creative about your cases. It might provide a starting point for your legal research, identifying issues and narrowing down your research topics. You must then do the work yourself, though, pulling and reading all appellate opinions, statutes, or regulations, and branching out with your research based on your results. 

Realistically, AI cannot conduct reliable legal research or give reliable legal advice. 

If you think that it can, and if you rely on legal research conducted by AI, you are likely to be disappointed and humiliated when you try to argue your case in court or in a legal brief.

If you are an attorney who uses AI to conduct legal research without carefully checking and reviewing every case it cites (which, if you are competent and thorough, takes more time than just doing the research yourself), you are running a very real risk of 1) hurting your clients and 2) getting sued for malpractice. 

In the example below, I’ll explain just one reason why ChatGPT can never be a substitute for actual legal research by a competent attorney – it lies.

Example: ChatGPT Lies About SC Law on the Medical Necessity Defense

How do you use an AI like ChatGPT? 

You should test the program on your own – it’s fascinating and fun, and you will see for yourself 1) the potential it has in so many areas and 2) why it cannot be used for legal advice. You can talk to ChatGPT by creating an account with OpenAI and logging in – for now, it’s free to use.
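
If you would rather experiment programmatically than through the chat website, the short Python sketch below shows one way to ask the model the same kind of question through OpenAI’s API. This is a minimal illustration on my part, not part of the testing described in this article – it assumes you have the openai Python package installed and an API key set in the OPENAI_API_KEY environment variable, and the model name is just one available option:

    # Minimal sketch: asking an OpenAI chat model a legal research question.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption; any chat model works
        messages=[
            {
                "role": "user",
                "content": (
                    "Discuss the medical necessity defense to marijuana "
                    "charges in South Carolina, and cite appellate opinions."
                ),
            }
        ],
    )

    # The model's answer - every case it cites must still be pulled and read
    # by a human before it goes anywhere near a brief or a courtroom.
    print(response.choices[0].message.content)

However you reach the model, the lesson that follows is the same: treat every citation it produces as unverified until you have pulled and read the case yourself.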

Let’s look at just one example of why you cannot trust AI for legal advice – especially when your freedom is on the line. 

I asked ChatGPT to discuss the medical necessity defense to marijuana charges in South Carolina. First, it provided an accurate definition of the defense:

The medical necessity defense is a legal argument used by defendants who claim that their use of a controlled substance was necessary to alleviate symptoms of a medical condition. In other words, the defendant argues that their use of the drug was justified due to their medical need for it.

The medical necessity defense is most commonly used in cases involving marijuana, which has been shown to have potential medical benefits in treating certain conditions such as chronic pain, nausea, and seizures.

ChatGPT went on to explain that the medical necessity defense is available in several states, including California, Alaska, Hawaii, and Washington. So, I asked it to please provide some appellate opinions to support its statements. 

US v. Randall

It then cited United States v. Randall, where, apparently, “the court recognized the medical necessity defense in a case involving the cultivation of marijuana for medical use,” and “held that the defendant’s medical need for marijuana was a compelling defense and that the government had failed to demonstrate that the defendant’s use of marijuana posed a threat to public health or safety.” 

So far, so good – US v. Randall is a real 1976 case from the District of Columbia in which the court dismissed marijuana cultivation charges based on medical necessity.

What ChatGPT did not do, however, is provide the full context. Randall is a trial-court opinion, which is at most persuasive authority, and, since then, the federal courts have consistently held that there is no medical necessity defense to drug possession in federal court (see United States v. Oakland Cannabis Buyers’ Cooperative, 121 S. Ct. 1711 (2001)).

Washington v. Fry and People v. Trippet

ChatGPT then cited People v. Trippet as a case where “the court held that the defendant had established a prima facie case of medical necessity by presenting evidence that he suffered from chronic pain and that conventional medical treatments had been ineffective.” 

Not true – People v. Trippet is a California case that 1) held there is no medical necessity defense in California, 2) held that the marijuana convictions did not violate the defendant’s right to religious freedom, but 3) remanded the case to the trial court to determine whether Proposition 215 (legalizing marijuana for medical use when recommended by a doctor) provided a defense.

ChatGPT also cited Washington v. Fry, a Washington state case that, it claimed, recognized the medical necessity defense and “held that the defendant had established a prima facie case of medical necessity by presenting evidence that he suffered from a debilitating medical condition and that marijuana was the only effective treatment for his symptoms.”

Washington v. Fry, however, did not find that there was a medical necessity defense – instead, the court analyzed the case in light of Washington’s medical marijuana laws and found that the defendant did not qualify because he did not have one of the medical conditions listed in the statute. 

Case Citations

When I asked ChatGPT to please provide the full citations for the cases it was referencing, it provided:

  • United States v. Randall, 661 F.2d 827, 835 (9th Cir. 1981) (not the correct citation for the District of Columbia case it referenced earlier – and not an accurate citation to any Ninth Circuit case, either; ChatGPT appears to have made this citation up), 
  • People v. Trippet, 56 Cal. App. 4th 1532, 1541-43 (1997) (this is the correct citation for this case, although the holding is not what ChatGPT claims), and
  • Washington v. Fry, 137 Wn.2d 710, 975 P.2d 723, 728-29 (1999) (not the correct citation for this case – it should be State v. Fry, 168 Wn.2d 1, 228 P.3d 1 (2010)). 

South Carolina Cases that Don’t Exist

It offered two additional South Carolina cases to support its conclusions, citing:

  • State v. Kelly, 322 S.C. 433, 440-41, 472 S.E.2d 249, 253 (1996), where, according to ChatGPT, “the court held that South Carolina’s marijuana laws were constitutional and that there was no fundamental right to use marijuana for medical purposes” (actually a citation for Smith v. Haynsworth, a case addressing expert testimony); and
  • State v. Brown, 347 S.C. 716, 719, 558 S.E.2d 716, 717-18 (2002), where, according to ChatGPT, “the court rejected the defendant’s argument that he was using marijuana for medical purposes and held that South Carolina law did not recognize the medical use of marijuana as a defense to drug charges.”

This is probably a fair summary of SC law on the medical necessity defense. The problem? 

Neither of these SC cases exists!

Conclusion… 

AI like ChatGPT can be used in legal research, but, as a practical matter, its use is limited to generating ideas and as a starting point for research and analysis by an attorney. 

If you are charged with a crime, do not rely on AI to conduct legal research or provide legal advice – at best, you will look like a fool arguing non-existent cases in the courtroom, and, at worst, you will lose your case and wind up in prison. 

If you are an attorney who relies on AI to conduct legal research or give legal advice to your clients without careful research and analysis yourself, go ahead and self-report and turn in your bar license. 

If you use it to kick around ideas or to find a starting point in your research, there is nothing wrong with that. But you must pull every case, read every case, figure out how those cases interact with local laws and regulations, and analyze how the law impacts your client’s unique situation and the unique facts of their case. 

You cannot trust AI for legal advice – not yet, at any rate. 

Questions About Criminal Defense Law in SC? 

If you have questions about criminal law in South Carolina – especially if you are facing jail time because you have been charged with a crime – consult with an experienced Summerville, SC, criminal defense lawyer who knows the law and the courts.

And, whether you are a defendant or a criminal defense attorney, do yourself a favor and don’t trust AI for legal advice.

Let’s Talk About Your Situation at No Cost To You.

There are many unknowns, and you need someone you can trust.