International law firm Hill Dickinson has restricted general access to several artificial intelligence (AI) tools following a “significant increase in usage” by its employees.
The update comes after concerns that much of this usage did not comply with the firm's AI policy, which was launched in September 2024.
In an email, Hill Dickinson’s chief technology officer told staff that access to AI tools like ChatGPT and Grammarly would now be granted only through a request process.
The email noted more than 32,000 visits to ChatGPT and 50,000 to Grammarly in a seven-day period between January and February.
During the same period, more than 3,000 visits were made to DeepSeek, a Chinese AI service recently banned from Australian government devices due to security concerns.
The email warned: “We have been monitoring usage of AI tools, particularly publicly available generative AI solutions, and have noticed a significant increase in usage of, and uploading of files to, such tools.”
Hill Dickinson, which has offices across England and internationally, said it aims to “positively embrace” AI while ensuring proper and secure use.
In a statement, the law firm said: “Like many law firms, we are aiming to positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients.
“AI can have many benefits for how we work, but we are mindful of the risks it carries and must ensure there is human oversight throughout.
“Last week, we sent an update to our colleagues regarding our AI policy, which was launched in September 2024.
“This policy does not discourage the use of AI, but simply ensures that our colleagues use such tools safely and responsibly – including having an approved case for using AI platforms, prohibiting the uploading of client information and validating the accuracy of responses provided by large language models.
“We are confident that, in line with this policy and the additional training and tools we are providing around AI, its usage will remain safe, secure and effective.”
Under the firm's AI policy, access to AI tools is restricted until an employee's use case has been approved, at which point access is reinstated.
The Information Commissioner’s Office (ICO), the UK’s data watchdog, advised companies to provide AI tools that comply with organisational policies, rather than banning them outright.
A spokesperson said: “With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar.”
Ian Jeffery, chief executive of the Law Society of England and Wales, highlighted the potential benefits of AI:
“AI could improve the way we do things a great deal.
“These tools need human oversight, and we will support legal colleagues and the public as they navigate this brave new digital world.”
However, the Solicitors Regulation Authority (SRA) has raised concerns about a lack of digital skills in the legal sector.
A spokesperson said: “This could present a risk for firms and consumers if legal practitioners do not fully understand the new technology that is implemented.”
A survey by legal software provider Clio in September 2024 found that 62% of UK solicitors expected AI usage to rise over the next year, with many firms already using it for tasks such as drafting documents, analysing contracts and conducting legal research.
The Department for Science, Innovation and Technology described AI as a “technological leap” that would create new opportunities and reduce repetitive work.
A spokesperson said: "We are committed to bringing forward legislation which allows us to safely realise AI's enormous benefits. We will launch a public consultation to ensure our approach effectively addresses this fast-evolving technology."
Hill Dickinson confirmed that since the update was circulated, the firm has received and approved requests for use.
The firm reiterated its focus on maintaining security and client confidentiality while exploring AI’s potential to enhance its services.