I have heard many attorneys express confusion about when it is OK to use AI in their practice. That curiosity is healthy, and AI can be extremely useful (as evidenced below). However, I have also heard attorneys ask that their AI assistant sit in on confidential mediation sessions, so I thought it worthwhile to let Google’s large language model tell you, in its own words, what to watch out for regarding confidentiality:
Artificial intelligence (AI) is transforming the legal field, offering tools that boost efficiency and streamline legal research. But for attorneys, a critical question remains: when can AI be a helpful partner, and when does it threaten the bedrock principle of attorney-client confidentiality?
Here’s where attorneys should tread carefully with AI:
- Third-party AI with client data: Cloud-based AI tools might be convenient, but they raise confidentiality concerns. If client information is stored on third-party servers, a data breach or unauthorized access could expose confidential details. Attorneys should be wary of using such tools for sensitive cases unless they understand the data security measures in place. Look for cloud-based solutions from reputable providers with robust security features.
- Generative AI and inadvertent disclosure: Generative AI can be a powerful tool for drafting documents or creating summaries. However, these AI models often learn from vast datasets, some of which may be open source. This raises the possibility that confidential information from other clients, inadvertently included in open-source data, could be woven into the generated text. Attorneys should avoid using generative AI for highly sensitive matters and carefully vet the data sources used to train the AI model.
- Lack of informed consent: Before using any AI tool that involves sharing client data, attorneys must obtain informed consent from the client. This means clearly explaining the potential risks of confidentiality breaches associated with AI, including the open-source nature of some training data, and giving the client the option to decline its use.
- AI digital assistants and recorded meetings: While AI digital assistants can be helpful for scheduling or taking notes, their always-listening nature presents a confidentiality risk. If a client meeting is held within earshot of an active AI assistant, there’s a chance confidential conversations could be recorded or stored. Attorneys should disable such assistants during client meetings or use designated conference rooms with AI assistant functionality turned off.
Remember, maintaining client confidentiality is an ethical obligation. While AI offers undeniable benefits, attorneys should prioritize client trust and carefully consider the confidentiality risks before integrating AI into their practice.
Additional Tips:
- Explore on-premise AI solutions that keep data within your own infrastructure (a brief illustration of what that can look like follows this list).
- Look for AI tools with robust security features and clear data-privacy policies, and ask vendors which data sources were used to train the model.
- Consult with your state’s bar association on ethical considerations for using AI.
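For attorneys with in-house technical support who are curious what an on-premise setup can look like, here is a minimal sketch, assuming the open-source Hugging Face transformers library and an open-weight model that has been downloaded to hardware the firm controls (the model name below is only an illustrative placeholder). The point is simply that the client text stays on your own machine rather than being sent to a third-party service.

```python
# A minimal sketch of "on-premise" AI: an open-weight model running locally,
# so the prompt (which may contain client information) is never sent to a
# third-party server. Assumes the open-source `transformers` library is
# installed and the model weights are already available on this machine.
from transformers import pipeline

# "gpt2" is only an illustrative placeholder; a firm would substitute whatever
# open-weight model its IT staff has approved and hosted internally.
generator = pipeline("text-generation", model="gpt2")

prompt = "Draft a short, plain-language explanation of attorney-client confidentiality."
result = generator(prompt, max_new_tokens=100)

print(result[0]["generated_text"])
```

Whether such a setup is appropriate still depends on your firm’s own security review; the sketch only illustrates where the data lives.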
By understanding the confidentiality risks and taking appropriate precautions, attorneys can leverage AI while upholding confidentiality, the cornerstone of the attorney-client relationship.