As a child, I had a number of family nicknames.

Some I’m not going to repeat – although I have little doubt that my parents could be bribed into spilling the beans.

But a few still come to mind from time to time.

At my brother’s Bar Mitzvah, I was listed in the introductory menu as “number one trouble maker”. Ironically, I now get paid to cause trouble, usually to the other side!

I was also regularly referred to as an accident waiting to happen, due to my ability to get into mischief at a moment’s notice. That one may not be so attributable to me these days, but it can often apply to the circumstances around me. My clients are regularly getting into difficulties and need me to get them out of them.

As a result, I spend as much time trying to help them avoid those difficulties as I do resolving them.

Part of the challenge is keeping up to date with all the ways in which new dangers can arise, because whilst all the old ways still exist, new problems are evolving every day.

The latest area of concern is AI, because although it can help clients save a lot of time and money, it can also cost them a lot of both if it’s used incorrectly.

Did you know that if you use AI to review a document you’ve created, to see if it can be improved, the data in the document can end up embedded in the AI? That means that if the document contained confidential data, that data is no longer confidential. And if you think it couldn’t happen, ask Samsung, whose staff fed source code into ChatGPT and then found out they’d accidentally leaked confidential information to their competitors.

They were fortunate – if the data had included personal information, they’d have been in breach of GDPR as well!

Part of the problem is that AI is so far reaching. What if you’re using AI to help with selecting people for redundancy, dealing with disciplinaries or grievances, setting shift patterns, or drafting contracts or policies? How do you know it’s not going to accidentally discriminate against someone and leave you with an expensive Tribunal claim? And can you reject a CV or application that’s been created by AI?

What about research? If your team are using AI to come up with answers or statistics, are they double checking that the information they’re relying on is accurate and comes from a reliable source? There have been several high-profile legal cases recently where lawyers relied on law reports that they’d obtained from AI, without checking whether the information was correct. Spoiler alert! It wasn’t. Now the lawyers are facing all kinds of trouble, including disciplinary proceedings by their regulator.

As with all risks, there is a way to give yourselves some protection, and it doesn’t involve banning the use of AI or taking your business offline!

Want to know the answer?

Well, you could Google it, but I’d suggest that you ask an expert instead!

Kleyman & Co Solicitors. The full service law firm. We’re the experts – in case that wasn’t obvious.