AI: What is it Good For? Absolutely... Something

Let's address the elephant in the room: AI. Depending on who you ask, it's either the end of human creativity, a factory for low-quality slop, a machine coming to steal your job, or the first chapter of a robot uprising. What it rarely gets credit for being is a tool, and a genuinely useful one, when you understand what it actually does.

That misunderstanding isn't surprising. Every new technology goes through a phase where the noise drowns out the signal. Search engines were going to make libraries obsolete. Social media was going to fix democracy. Each time, the reality ended up being more boring and more practical than either the hype or the panic suggested. AI is no different.

What AI Actually Is (and Isn't)

At a technical level, the AI tools most people interact with today are large language models. They've been trained on enormous amounts of text and learned to predict what a useful, coherent response looks like given a particular input. They're very good at generating fluent, organized, plausible-sounding text. They're not databases. They don't look things up in real time (unless explicitly given that ability). They don't "know" things the way you know your case files. They pattern-match at a scale humans can't.

That distinction matters because it explains both where AI shines and where it fails. Ask it to help you brainstorm angles for a motion, draft a first pass at a client letter, or explain a concept in plain English, and it can do that well. Ask it to tell you what the current statute says in your jurisdiction and trust that answer without verifying it, and you're setting yourself up for trouble. It's not that AI is malicious. It's that it doesn't know what it doesn't know, and it has no hesitation about filling in gaps with confident-sounding text.

This is often called "hallucination," which is a polite word for making things up. It happens, it's well-documented, and anyone using AI for anything consequential needs to understand that going in.

The Google Problem, Revisited

Here's the thing: this isn't a new problem. It's a familiar one wearing a new hat.

When Google became the default starting point for research in the late 1990s and early 2000s, there was a legitimate concern that people would take whatever ranked first as fact. And many did. The information literacy problem didn't start with AI; it's been compounding for decades. Reddit threads presented as lived experience turn out to be fabricated. Wikipedia articles get vandalized. News articles misquote studies. Sources get cited out of context.

The answer was never "don't use Google." The answer was: verify what you find. Trace claims back to primary sources. Be skeptical of anything you can't independently confirm. That principle doesn't change with AI. It just needs to be applied more consciously, because AI is better at sounding authoritative than any search result ever was.

For legal matters specifically, the stakes of skipping that step are higher than in most fields. A bad recipe ruins dinner. A bad legal conclusion can cost a client their case, their money, or their freedom. AI doesn't know which situation it's in.

Where It Actually Helps

With that grounding in place, here's where AI genuinely earns its keep for solo attorneys and small firms:

First-Draft Everything

AI is a strong first-draft machine. Client intake emails, engagement letters, FAQ pages for your website, explanations of legal concepts for non-lawyer audiences: give it a clear prompt and a decent first draft comes back in seconds. You still edit it. You still apply your judgment. But starting from something is faster than starting from nothing, and for a solo practitioner wearing every hat in the office, that matters.

Plain-Language Translation

Translating dense legal language into something a client can actually understand is time-consuming work. AI handles this well. Feed it a clause, ask for a plain-English explanation, and use that as a starting point for your client communication. Again, verify it reflects your actual interpretation before sending it anywhere.

Brainstorming and Outlining

Stuck on how to structure an argument? Not sure what angles you might be missing? AI can be a useful thinking partner for getting ideas on the table. Treat it like a junior associate who reads a lot but hasn't passed the bar: useful for generating options, not for making final calls.

Research Starting Points

AI can help you identify what you should be researching, even if it can't reliably tell you what the research will say. Ask it what legal issues are typically implicated in a given fact pattern, and use that as a map for where to dig using actual legal research tools. The map isn't the territory, but having a map beats wandering.

Administrative Overhead

Summarizing long documents, drafting agendas, writing follow-up notes from a set of bullet points: AI handles this kind of work well and it's exactly the kind of work that eats hours without generating billable value. The more of this you can offload, the more time you have for work that actually requires your expertise.

Should You Use AI for Legal Research and Prep?

A few practical cautions specific to legal practice:

  • Confidentiality. Whatever you paste into an AI tool may be used to improve that tool's model, depending on the platform and your settings. Don't paste in client names, case details, or anything that would constitute confidential information without understanding the platform's data policies first. Many tools have options to opt out of training data use, but you need to actively set that up.
  • Jurisdiction specificity. AI tends to give answers that reflect the most common or general rule, which may not be the rule in your state. Always verify against your jurisdiction's actual statutes, regulations, and case law.
  • Citation verification. There have been documented cases of attorneys submitting briefs citing cases that don't exist, because they trusted AI-generated citations without checking them. This is not a hypothetical risk. Check every citation against a real legal database before it goes anywhere official.
  • Unauthorized practice and ethics rules. If you're using AI to generate advice that goes directly to clients without attorney review, you're in murky territory. The ABA and most state bars are still working through the ethics guidance here, but the through-line is consistent: the attorney is responsible for what goes out under their name, regardless of how it was generated.

The Bottom Line

AI is a productivity tool with real limitations and real value. It's not a replacement for legal judgment, and it's not a reason to panic. The attorneys who will get the most out of it are the ones who treat it the way they should have always treated any research starting point: useful input that still requires a human brain to verify, contextualize, and apply.

AI is good for more than just "something." Quite a few things, actually. You just have to know what those things are, and when to put it down and do the work yourself.