Open Letter to the Chief Justice and the Supreme Court of New South Wales Regarding Practice Note SC Gen 23 on Generative Artificial Intelligence
Dear Chief Justice Bell and Members of the Supreme Court,
I write this letter in response to the recently issued Practice Note SC Gen 23 and associated judicial guidelines on the use of Generative Artificial Intelligence (Gen AI) within the legal profession. While I acknowledge and commend the Supreme Court’s initiative to address the integration of technology in legal practice, I must express significant concerns about the approach taken in this framework.
As a professional working at the intersection of law and technology, I believe this Practice Note reflects a fundamental misunderstanding of both the potential and the limitations of Gen AI. More importantly, the blanket restrictions outlined risk stifling innovation, undermining practitioner autonomy, and inadvertently exacerbating existing challenges within the legal profession.
1. Lack of Meaningful Consultation with Technology Experts
The Practice Note emphasises consultation with the New South Wales Bar Association and the Law Society of New South Wales, yet it does not mention engagement with technology companies or experts specialising in Gen AI. This omission is concerning, as it results in a document based on incomplete and, at times, factually inaccurate assumptions about the capabilities and risks of Gen AI.
For example, the Practice Note highlights the “lack of adequate safeguards to preserve the confidentiality, privacy or legal professional privilege that may attach to information or otherwise sensitive material submitted to a public Gen AI chatbot” (paragraph 7(e)). While this is a valid concern for public AI tools, it overlooks the existence of secure, purpose-built legal AI platforms designed with robust privacy measures that prevent data from being used to train models or being exposed to third parties.
The Practice Note lumps together general-purpose AI chatbots like ChatGPT with purpose-built enterprise-level applications designed specifically for legal practice. This is akin to suggesting that a junior lawyer fresh out of law school performs the “same job” as a specialist lawyer with 25 years of experience. Such a comparison overlooks the significant differences in capability, reliability, and suitability for professional use.
By not differentiating between these types of AI tools, the Practice Note demonstrates a lack of nuance and understanding of the current AI landscape. General-purpose chatbots may not have the necessary safeguards for legal applications, whereas specialised legal AI platforms are developed with robust privacy measures, compliance features, and domain-specific knowledge bases.
This lack of differentiation not only misrepresents the functionalities of these tools but also discriminates against purpose-built systems that avoid most of the pitfalls associated with general-purpose systems.
Engaging with technology experts could have addressed these nuances, providing an accurate and balanced perspective and more practical recommendations.
2. Mischaracterisation of Generative AI as a Threat Rather than a Tool
The Practice Note seems to treat Gen AI primarily as a risk to the legal system, highlighting potential issues such as “hallucinations,” data inaccuracies, and confidentiality breaches. While these risks exist, they are not inherent to all Gen AI tools, especially those specifically designed for legal use. This perspective fails to recognise that AI is a tool—like any other—that can be used responsibly or irresponsibly, depending on the user’s expertise and judgement.
By conflating general-purpose AI tools like ChatGPT with specialised legal AI platforms, the Practice Note undermines the credibility of secure, tailored technologies that can significantly enhance efficiency and accuracy in legal practice. To penalise the entire profession for potential misuse by a few is akin to banning scalpels in surgery because of isolated incidents of malpractice.
The problem is not the tool itself but rather a lack of education and the conduct of bad practitioners within the industry. Bad practitioners, whether careless or acting in bad faith, will continue to misuse generative AI regardless of any regulatory restrictions you impose. The reality is that this Practice Note will not stop poor-quality work or unethical behaviour; such practitioners will find ways to continue unnoticed by the judicial system.
Furthermore, the quality of AI outputs is improving rapidly, reaching a level where it can often go undetected. This makes blanket restrictions ineffective in addressing the root of the problem. Instead of mitigating risks, these rules inhibit ethical practitioners from leveraging AI to reduce unnecessary workloads, maintain work-life balance, and alleviate the pressures of an overburdened and mentally strained profession.
You cannot address the challenges of a broken industry by reverting to outdated practices. Each week, new legislation is introduced with minimal repeal of old laws, compounding complexity and workload.
Simultaneously, reports of mental health issues and burnout among legal professionals continue to rise. Focusing on governance frameworks and education rather than sweeping prohibitions would empower practitioners to use these tools responsibly while addressing the profession’s systemic issues at their core.
3. Overreaching Restrictions That Undermine Practitioner Autonomy
The Practice Note imposes strict limitations on the use of Gen AI, such as prohibiting its use in drafting affidavits, witness statements, and expert reports (paragraphs 10-12, 20-22).
While safeguarding the integrity of legal processes is important, these restrictions reflect an implicit assumption that legal practitioners lack the ability to responsibly use tools and exercise their professional judgement in verifying and validating facts. This assumption not only undermines the credibility of the legal profession but also fails to address the root causes of potential misuse.
Legal practitioners operate under stringent professional and ethical obligations, including the responsibility to ensure the accuracy, reliability, and appropriateness of all submitted work.
Gen AI is simply a tool—a modern extension of existing technologies that lawyers have integrated into their practice for decades. Restricting its use in specific areas, such as drafting witness statements or expert reports, dismisses the fact that all such outputs, whether AI-assisted or not, require rigorous review and approval by a qualified practitioner.
The requirement to verify every output of Gen AI “without using a Gen AI tool” (paragraph 17) further compounds the problem. This mandate negates the primary efficiency benefits of these tools, forcing practitioners to duplicate efforts that AI could streamline.
For your information, there are AI tools specialising in verification that routinely detect mistakes and oversights in the work of senior lawyers.
We agree that ultimately work must be verified by a human; however, we see no reason why the verification process cannot be assisted by Gen AI.
It is a misstep to enforce additional layers of regulation on tools that, when used correctly, can enhance productivity, reduce errors, and alleviate some of the burdens of an increasingly overworked profession.
Moreover, these blanket restrictions fail to recognise the diversity of Gen AI tools. Purpose-built legal AI platforms are specifically designed to address the profession’s unique needs, offering features such as robust privacy safeguards, compliance with confidentiality requirements, and minimisation of inaccuracies.
The Practice Note’s sweeping approach penalises the entire profession, including practitioners who use secure, specialised systems, to prevent misuse by a minority who may lack education or ethical standards. This one-size-fits-all prohibition stifles innovation and discourages the adoption of tools that could provide significant value to the profession.
4. Contradictions and Inefficiencies in Implementation
The Practice Note introduces contradictory positions that reflect an inconsistent and reactionary approach to regulating Gen AI. While it acknowledges the risks associated with public Gen AI tools, it fails to adequately differentiate them from secure, enterprise-grade AI platforms specifically designed for legal practice. This lack of nuance results in policies that are misaligned with the realities of modern technology and practice.
How can this Practice Note be relied upon as a guiding framework when it contains demonstrable inaccuracies? For example, it refers to Google Bard, a product that was officially renamed Gemini nine and a half months prior to the publication of this document. This glaring error, easily verifiable with basic research, raises serious questions about the reliability of the information underpinning this Practice Note.
If such a public-facing and widely known fact is incorrect, it raises the question: what other inaccuracies, assumptions, or omissions might exist within the Practice Note that could undermine its validity and applicability? Accurate and up-to-date information is the cornerstone of any regulatory framework, especially one that seeks to govern the intersection of law and rapidly advancing technology.
For example, the Note prohibits the use of Gen AI on sensitive materials, such as those subject to non-publication orders or obtained under compulsion (General prohibition point 9). However, it isolates these types of data while ignoring broader and equally critical confidentiality issues in legal practice. Notably, it does not address:
- The Use of Unsecured Email: Unsecured email is widely used to transmit the very same confidential information, including subpoenaed documents, material subject to non-publication or suppression orders or other statutory prohibitions, and privileged material. This practice poses significant security risks, yet you have zeroed in on purpose-built Gen AI as the greater threat.
- Sensitive Personal Records: Legal matters often involve the handling of confidential medical records, financial records, and other sensitive personal data. These types of data are equally, if not more, susceptible to misuse or unauthorised access if handled improperly. The Practice Note’s narrow focus on specific types of sensitive material creates an inconsistent standard for safeguarding client confidentiality.
- Cloud Storage and Document Management Systems: Many legal professionals use cloud-based document storage and management systems, some of which may lack adequate encryption or privacy controls. These tools pose similar risks to purpose-built Gen AI systems but are not subject to the equivalent scrutiny or regulation that you are proposing.
- Chronology Software Tools: The Practice Note permits the use of software tools for generating chronologies (paragraph 6(a)(ii)), even though such tools are prone to the very issues it attributes to Gen AI, such as inaccuracies and hallucinations. These tools often rely on algorithms to extract and organise data, yet they are not held to the same standard or subjected to similar prohibitions. This inconsistency raises a critical question: why are certain tools given a pass while others are disproportionately restricted?
By failing to address these broader confidentiality concerns, the Practice Note singles out purpose-built legal-grade Gen AI as a unique threat, ignoring the reality that other widely used technologies may present comparable or greater risks.
A Reactionary Rather Than Comprehensive Approach
This selective focus suggests that the Practice Note is a reaction to rising public and media concerns about Gen AI, rather than a well-rounded strategy for technology governance. Its sweeping statements put it at odds with many established best practices and current industry standards.
5. Missed Opportunity for Education and Governance Frameworks
Instead of imposing restrictive rules, the legal profession would benefit more from a well-defined governance framework supported by education and certification programmes. The Court could consider mechanisms such as:
- Certification of AI tools that meet strict privacy, accuracy, and security standards.
- Mandatory training for legal practitioners on the proper use of Gen AI.
- Clear guidelines on disclosure and verification processes for AI-generated content.
These measures would enable practitioners to use AI responsibly, ensure accountability and adherence to ethical standards, and simultaneously address many of the industry’s biggest challenges.
6. Proposal for Collaboration and Demonstration
As someone highly experienced in this technology and deeply invested in advancing the responsible use of AI in legal practice, I extend an invitation to the Supreme Court to participate in a live demonstration of secure, purpose-built AI tools.
Such a session would illustrate the potential of AI to enhance—not undermine—legal practice while addressing misconceptions about its risks and capabilities.
Furthermore, I propose convening a panel of legal and technology experts to collaboratively refine the Practice Note, ensuring it reflects both the realities of legal practice and the rapid evolution of AI technology.
7. The Bigger Picture: Protecting the Legal Profession’s Future
The legal profession is already facing unprecedented challenges, including increasing workloads and high rates of mental health issues among practitioners. Gen AI offers a way to alleviate these pressures, allowing lawyers to focus on high-value tasks while maintaining a healthier work-life balance. Overly cautious regulation risks hindering progress and innovation within the industry.
A significant danger that arises from penalising the legal profession for using AI is that the public may increasingly bypass legal professionals altogether, turning directly to technology for legal advice or solutions. When people perceive the legal profession as constrained, too expensive, or out of touch with modern tools, they may choose to rely on general-purpose AI systems for assistance, systems that are not designed to provide nuanced or jurisdiction-specific legal advice. This could result in a proliferation of poorly informed or outright incorrect legal decisions made by individuals without adequate understanding of the law.
The fallout from this shift will ultimately burden the courts and legal practitioners even further, as they are left to address and resolve the mistakes made by the general public using unregulated technology. This compounds the problem, creating inefficiencies and additional stress for an already overburdened system. Rather than solving problems, penalising AI use within the legal profession could exacerbate existing challenges by driving more cases of misrepresentation, procedural errors, and legally unsound decisions.
By fostering a culture of education, collaboration, and responsible innovation, we can empower lawyers to use AI tools effectively and appropriately, positioning New South Wales as a global leader in integrating technology into the practice of law. Supporting legal professionals in adopting these tools ensures that the public continues to trust and rely on qualified practitioners for their legal needs, maintaining the integrity of the profession and the legal system as a whole.
Conclusion: Key Impacts of the Restrictions
- Disempowering Practitioners: The restrictions imply that lawyers cannot be trusted to differentiate between appropriate and inappropriate uses of Gen AI. This undermines their professional autonomy and disregards their capacity to adapt to and govern the use of emerging technologies responsibly.
- Ignoring Existing Oversight Mechanisms: The legal profession already has comprehensive oversight mechanisms, ethical frameworks, and accountability structures in place. Instances of malpractice arise not from the tools themselves but from a failure to adhere to these standards. Holding the tools accountable instead of addressing the practitioner’s conduct misplaces the focus and risks impeding the profession’s evolution.
- Perpetuating Inefficiency and Burnout: The legal industry faces high levels of stress, burnout, and inefficiency, with many practitioners overwhelmed by increasing workloads. Gen AI has the potential to alleviate some of these pressures by automating repetitive tasks, allowing lawyers to focus on high-value work. Imposing restrictions that eliminate these benefits exacerbates the existing challenges of the profession.
- Failing to Address Root Causes: Poor outcomes attributed to Gen AI often stem from a lack of training or understanding of the tools, not the tools themselves. Instead of banning their use, investment in education and governance frameworks could address the actual problem—ensuring practitioners know how to use Gen AI responsibly and effectively.
- Slowing Progress in Legal Innovation: By discouraging the use of innovative tools, these restrictions risk positioning the legal profession as resistant to progress. This could result in New South Wales falling behind other jurisdictions that are embracing technology to modernise their legal systems.
A Better Path Forward
Rather than imposing blanket restrictions that stifle progress, the Practice Note should focus on enabling practitioners to use the right types of Gen AI responsibly.
Blanket prohibitions are not the solution. Instead, empowering practitioners with the knowledge and tools to use legal-grade Gen AI responsibly will lead to a more innovative, efficient, and adaptive legal profession. By recognising the capabilities of both the technology and the practitioners who use it, the Court can ensure the responsible adoption of AI without inhibiting progress or undermining legal practitioners’ autonomy.
This letter, drafted with the assistance of AI and thoroughly reviewed for accuracy (including by human lawyers), reflects my genuine concerns and recommendations regarding Practice Note SC Gen 23. I urge the Supreme Court to reconsider its approach and engage with stakeholders from the technology sector to develop a more informed, balanced framework.
The stakes are too high to let misconceptions about technology dictate the future of the legal profession. Let us seize this opportunity to create a thoughtful, progressive path forward.
Yours sincerely,
Samuel Junghenn
CEO, AI Legal Assistant