The other day, I had a repeat experience of what’s now becoming a trend: someone presenting a potential case, sure they had a slam-dunk winner, because before bringing it to me, they’d presented it to an artificial intelligence engine. The AI had assured them they had a great case. As you might imagine, it took me all of ten seconds to see what the AI engine did not: the law it cited did not apply to this situation. This person was very disappointed, to the point of almost crying, when I told them the whole truth. It broke my heart a little bit.
I suppose the good news is, the AI engine told this person to present the case to a lawyer and get professional advice before proceeding. They didn’t puzzle their way through filing a lawsuit in propria persona (because AI could help you with that, too, if you asked!) and get way out over their skis in court. There aren’t any consequences, other than disappointment and maybe some embarrassment, to presenting an AI analysis of your fact pattern to an actual lawyer. And I try to be compassionate.
There is a place for AI in this world. It’s potentially useful! AI can be good at reading through long texts and helping extract specific information from them that might be hard to find. It can be good at sorting through a confusing timeline (but do be careful with this). And it can even be useful as an aid (but not a substitute) for legal research. If you think of AI as a souped-up internet search engine, you start to realize what’s really going on with this new-ish tool: it recognizes patterns in language. So what you feed into it governs the quality of what you get out — it’s all in the art of giving it good prompts. And double-checking the results manually.
But it’s most certainly not perfect. It can and will make mistakes. It can and will miss important facts. It can’t really apply facts to law because it doesn’t understand either of those things. AI associates words together. It is necessarily derivative of other mental work that humans have done. It does not come up with new things on its own. It takes other things apart, selects elements of those other things, and puts the package back together for you. Incredibly quickly. And it does so with perfect grammar and a reasonable sentence structure, so it looks and sounds authoritative.

Don’t be fooled by that.
A parrot does not understand the sounds it makes. The parrot knows that if it makes certain sounds like the ones the humans around it make, it gets a handful of sunflower seeds. That doesn’t mean the parrot is actually expressing its own thoughts or emotions.
So too with artificial intelligence. It is programmed to please you. It is programmed to sound smart. But it doesn’t know things. Computers aren’t there yet. They aren’t sentient. They don’t have consciences; they only know to associate words like “murder” with words like “immoral.” They don’t have creativity; they only have image search capabilities to find things like “green baseball hats.” They don’t have experience; they have databanks, which to them are just data, just words, words that commonly go together, words that can be repackaged according to certain rules which are basically electronic restatements of Strunk and White.
If you think of AI like a powerful internet search engine, you’re on the right track. You wouldn’t ask Google or Bing whether you have a legal case. You might ask them to point you to the law so you could read it for yourself and start educating yourself. Either way, though, you’d pretty soon reach the point where you would say, “Hey, I think there’s something going on here. I’d better talk to a lawyer.”
Until further notice, that lawyer should be a human being.

