While AI is changing the business landscape for the better for many ventures, a legal expert is urging businesses to get up to speed on the technology, as they can inadvertently fall into legal mires in everything from copyright to data privacy.
The F&B sector is one of the latest to report positive ROI from AI investments. However, there is also a reported AI skills gap in the UK, meaning companies may be eager to deploy AI while lacking the people with the skills to do so safely.
This could lead to mistakes with potentially huge implications, both financially and reputationally.
Copyright calamities
Speaking to IFA Magazine, Kirstin McKnight, practice group leader at commercial law firm LegalVision, said that education is absolutely key for businesses if they want to use AI and avoid legal wrangles or compliance issues.
According to McKnight, the number one risk for businesses is using AI-generated material and unintentionally infringing copyright in the process. In particular, there have been high-profile legal battles after material protected as intellectual property was reportedly used to train AI models.
This means that someone using AI to create content could find themselves entering prompts and being presented with a response that is actually close to, or identical to, a copyrighted work – whether text, a logo, or an image.
McKnight advised: “To protect your business, it is essential to carefully review the licensing and terms of service of any AI tool you use. Implement internal review processes to check outputs for potential infringement, and clearly define ownership rights in contracts.” The advice is particularly pertinent if you are using the AI content commercially.
Misleading information
Caution is also advised as generative AI can be prone to “hallucinations” – false or misleading information that the system presents as correct. If you are a business owner using AI to create marketing material, for example, make sure that you have a human fact-checking process in place. This is something job hunters have neglected at their peril when using AI to help write their CVs.
According to New Scientist, hallucinations are getting worse. The site reported on a hallucination-rate leaderboard showing that some models’ more recent releases had seen double-digit rises in hallucination rates compared with their predecessors.
When organisations deploy AI in customer-facing roles, this has already led to potentially dangerous situations. A chatbot run by the New York City authorities, for example, gave out “dangerously inaccurate” advice on everything from housing policy to workers’ rights.
Put frameworks in place
While the EU is working on its own AI framework, businesses can’t be lax about their own governance. McKnight said: “Many businesses adopt AI tools without establishing clear policies. This lack of governance can quickly turn into a serious legal and operational risk as employees may misuse AI, input inappropriate or sensitive data, or fail to recognise harmful outputs, which could lead to data breaches or escalate into costly lawsuits.”
If you are using AI that’s given access to your customers’ personal data, for example, protocols need to be in place to make sure this data is protected and anonymised, and that permissions have been gained.
Governance is key to protecting against these kinds of data protection or compliance slip-ups. Create a robust AI policy and train your employees on it. That training also needs to cover the compliance rules and regulations already in place for AI usage.
McKnight added: “It is essential to stay informed about evolving regulations, conduct regular audits of AI systems, and design strategies with flexibility so you can adapt quickly to new legal requirements as they emerge.”
While AI uptake is moving at a dizzying pace, businesses cannot use a lack of understanding or ignorance of the regulations as an excuse if things go wrong. If you want to deploy AI without falling foul of the law, constant awareness and alertness are your best defence.