My Submission to the NSW Supreme Court on AI in Legal Practice: Why Your Voice Matters Too


The NSW Supreme Court is currently reviewing Practice Note SC Gen 23—the framework governing how generative AI can be used in court proceedings. The deadline for submissions is today, 18 December 2025.

This isn’t just a regulatory matter for lawyers to sort out behind closed doors. The decisions made now will shape access to justice, the cost of legal services, and the integrity of our court system for years to come. They’ll determine whether AI helps level the playing field or widens the gap between those who can afford sophisticated legal help and those who can’t.

I’ve spent the past 12 months coaching hundreds of legal practitioners—from sole practitioners to King’s Counsel—on AI implementation. I’ve seen what works, what doesn’t, and what’s coming. Earlier this year, I was invited to advise another Australian Supreme Court on AI integration, and we’ve since been asked to demonstrate how our platform could assist their operations.

That experience has given me a perspective I felt obligated to share.

Below is my full submission to Chief Justice Bell. I’ve tried to go beyond the immediate regulatory questions to address what I believe the Court will actually face over the next 3-5 years: an evidence authenticity crisis, a flood of AI-assisted self-represented litigants, and fundamental questions about judicial capacity that current frameworks aren’t designed to handle.

I’ve made 10 specific recommendations, ranging from tiered tool recognition to a five-year regulatory roadmap. Some are practical and immediate. Others are forward-looking and admittedly ambitious.

You may disagree with parts of it. Good. The legal profession needs robust debate on these questions, not passive acceptance of whatever framework emerges.

If you have views on how AI should be regulated in Australian courts—whether you’re a practitioner, a technologist, or simply someone who cares about access to justice—I’d encourage you to make your voice heard. The Court has invited public submissions for a reason.

The technology isn’t waiting for us to figure this out. We need to be deliberate about shaping it, or it will shape us.

Samuel Junghenn
CEO, AI Legal Assistant


The full submission follows below.

SUBMISSION TO THE SUPREME COURT OF NEW SOUTH WALES

Response to Review of Practice Note SC Gen 23

Use of Generative Artificial Intelligence

Submitted to: The Honourable A.S. Bell, Chief Justice of New South Wales
Via: Sebastian Braham, Tipstaff to the Chief Justice
[email protected]
Date: 18 December 2025
Submitted by: Samuel Junghenn, CEO, AI Legal Assistant Pty Ltd


DISCLOSURE OF INTEREST


I am the founder and CEO of AI Legal Assistant, an Australian-owned legal AI platform designed specifically for Australian and New Zealand legal practitioners. I disclose this interest at the outset.

This submission reflects both my commercial perspective and my practical experience from coaching hundreds of legal practitioners—including managing partners, Senior Counsel, and King’s Counsel—in AI implementation across Australian legal practice.

Earlier this year, I was invited to provide advice to another Australian Supreme Court on AI implementation in judicial and registry workflows. Following that presentation, we have been requested to provide a demonstration of how our platform could assist that Court’s operations. I mention this not to suggest any particular standing, but to indicate that the issues raised in this submission reflect practical engagement with courts beyond my commercial interest in practitioner-facing tools.

When the original Practice Note was released, I published an open letter to the Chief Justice expressing concerns about its approach. This submission expands on those concerns with the benefit of 12 months’ additional experience and evidence.

Open Letter to Chief Justice (January 2025)


EXECUTIVE SUMMARY


This submission makes seven key arguments:

    1. The Practice Note treats all AI tools as equivalent, failing to distinguish between consumer-grade chatbots and purpose-built legal platforms with materially different risk profiles.
    2. Evidence from hundreds of practitioner implementations demonstrates that properly implemented AI is improving the quality and accuracy of legal work, not degrading it.
    3. The pace of technological change means that a framework designed for January 2025 capabilities is already disconnected from December 2025 reality—and will be obsolete well before its next scheduled review.
    4. Overly restrictive regulation risks driving AI use underground, creating worse outcomes than permissive-with-accountability approaches adopted elsewhere.
    5. The industry needs more education, not more rules. Mandatory AI-focused CPD requirements would address the root cause of AI misuse more effectively than prohibition.
    6. The immediate challenges addressed by the Practice Note will be dwarfed by structural challenges emerging over the next 3-5 years—including an evidence authenticity crisis, a self-represented litigant tsunami, and fundamental questions about judicial capacity.
    7. The Court has an opportunity to position NSW as a leader in responsible AI integration rather than a cautious follower reacting to crises as they emerge.


PART A: POINTS OF AGREEMENT


We cannot compromise on the principle of verification. Regardless of the method by which legal research or drafting is conducted, only true and verifiable information should ever be submitted to courts. Paragraph 16’s requirement that practitioners verify all citations reflects a sound professional obligation.

I agree that AI must not generate the content of affidavits, witness statements, or character references. These documents must reflect the deponent’s own knowledge.

The concern about practitioners uploading privileged information to public AI systems reflects a genuine risk. Practitioners using consumer-grade tools like ChatGPT, Google Gemini, or DeepSeek are uploading client-privileged information to systems with uncertain data handling practices. I have researched and published on the particular risk posed by the US CLOUD Act—see Appendix B.


PART B: THE PACE OF TECHNOLOGICAL CHANGE


  • Exponential Progress

Six months of AI progress now represents the equivalent of 5-10 years of progress in traditional technology fields. This is measurable.

Based on internal tracking we have conducted over the past 18 months: the world’s leading large language model in March 2024 demonstrated cognitive capability benchmarked at approximately 83 IQ-equivalent points. By September 2024, this had risen to 100. By April 2025, to 116. The version we deployed in December 2025 benchmarks above 130. The average IQ of lawyers is approximately 110.

I am not suggesting AI should replace professional judgement or verification. My point is that the technology is not slowing down—it is progressing at a rate the legal profession has not yet fully grasped. By Q1-Q2 2026, we anticipate deploying legal AI with cognitive capability benchmarked between 145 and 150 IQ points.

What this technology does is democratise intelligence. It allows those who previously could not afford access to justice to level the playing field.

  • Hallucination Rates Are Declining

The hallucination concerns that motivated the Practice Note were well-founded for early 2025 technology. Studies showed general-purpose models hallucinating on legal queries 69-88% of the time.

Current purpose-built legal AI platforms achieve dramatically better results. Some platforms now achieve accuracy rates exceeding 94%. Retrieval-Augmented Generation (RAG) from authoritative databases reduces hallucination rates by up to 71% compared to general-purpose models. The trend is towards continued improvement—a regulatory framework anchored to January 2025 failure rates will be addressing a rapidly diminishing problem.


PART C: EVIDENCE FROM THE FIELD


AI is Improving Quality

After coaching hundreds of firms and barristers in AI implementation, I can report consistent observations that directly contradict assumptions underlying the Practice Note.

Every week, practitioners report that AI has identified issues they missed, found more relevant authorities than their manual research, or caught errors in their work product. We did not expect to see this level of quality improvement. It is consistent across practice areas and seniority levels.


Case Study 1: The Dismissed Judgment

A senior barrister used our research platform, which flagged a judgment as relevant to her case. She already knew of the judgment but believed the AI was wrong about its relevance. Exercising appropriate critical thinking, she set out to understand why the AI considered it related to her matter: she re-read the judgment, questioned her initial view, and ultimately took the question to the presiding judge. The judge agreed with the AI’s assessment. Without AI assistance, she would have excluded a directly relevant authority from her submissions.


Case Study 2: Previously Rejected Submissions

Practitioners have used AI to redraft submissions that had previously been rejected multiple times. The AI helped structure the submissions in accordance with court requirements, avoiding the hearsay and inadmissible material that had caused the prior rejections. The same substantive arguments, properly presented, succeeded.


Case Study 3: Final Review

Practitioners routinely put their final letters of advice through AI review. The AI identifies errors, gaps, or missing considerations that human review missed. This is not replacing human judgement—it is augmenting it with a second layer of analysis that catches what humans overlook.


Case Study 4: AI as a Sparring Partner

Senior practitioners, including King’s Counsel, are using AI as a thinking partner to critically analyse matters, identify gaps and weaknesses, and stress-test arguments at speed. This reduces the cognitive load on practitioners in a profession plagued by stress, burnout, and mental illness. The wellbeing implications of AI assistance deserve consideration alongside the risk implications.


Case Study 5: Barrister acknowledgment

Several solicitors who use our platform report that their barristers have described the resulting briefs and submissions as “the best” they have ever seen.


The Critical Insight

Practitioners will use AI whether the Practice Note permits it or not. As the technology’s capabilities increase, properly supervised AI-assisted work becomes indistinguishable from purely human work; you simply cannot tell. The regulatory question is therefore not whether to permit AI use, but whether to drive it underground or bring it into a framework of transparency and accountability.


PART D: TECHNICAL DEFICIENCIES IN THE CURRENT FRAMEWORK

Failure to Distinguish Between Tool Categories

The Practice Note’s most significant deficiency is treating all generative AI as functionally equivalent. Paragraph 3 lists consumer-grade chatbots alongside purpose-built legal platforms without recognising their materially different risk profiles.

The concerns articulated in paragraph 7 apply with full force to consumer-grade tools but are substantially or entirely addressed by enterprise legal platforms. A risk-proportionate framework would recognise this distinction.


The Expert Report Leave Requirement

Paragraph 20 requires prior leave for AI use in “any part” of an expert report. This may inadvertently prohibit legitimate analytical uses:

    • A forensic accountant using AI to analyse 50,000 transactions for anomaly detection
    • A medical expert using AI-assisted diagnostic imaging analysis
    • A construction expert using AI to process building management system data
    • A data scientist using machine learning for pattern identification

These uses improve accuracy. The Chief Justice notes that no leave applications have been sought in seven months. This likely indicates not that experts have stopped using AI, but that the leave requirement creates enough uncertainty that legitimate AI-assisted analysis is either going undisclosed or being abandoned. We have had direct reports from practitioners who have received clearly AI-generated expert reports.


The Affidavit Process: A Counterpoint

The prohibition on AI in affidavit content generation deserves closer examination of current practice.

Current process: Practitioners interview clients. They handwrite notes reflecting their interpretation of what the client said. These notes are typed up, often by a junior. The final affidavit passes through multiple transmission points, each introducing potential distortion—from the event itself, through the client’s memory, through verbalisation, through the practitioner’s interpretation, through the note-taking, through the typing.

AI-assisted process: The conversation with the client is recorded (with consent). The transcript is provided to AI to produce an affidavit structured in accordance with court guidelines, using the client’s actual words.

We are seeing higher quality and accuracy from the AI-assisted process because it reduces transmission distortion. The affidavit more faithfully reflects what the client actually said. I am not suggesting AI should generate witness content from nothing. I am suggesting that the current human process is not the accuracy benchmark the Practice Note implicitly assumes.


PART E: ACCESS TO JUSTICE IMPLICATIONS

The Two-Tier Justice Risk

The Chief Justice correctly identifies the risk of a “two-tier” system. However, overly restrictive regulation may widen this gap. Large firms can absorb compliance costs such as leave applications, disclosure requirements, and audit trails. Small firms and sole practitioners, who make up the bulk of the profession, cannot. They may simply stop using beneficial technology, falling behind in efficiency and competitiveness. The compliance burden created by the Practice Note falls disproportionately on those least able to bear it.


The Self-Represented Litigant Challenge

I have received direct reports from judicial officers that AI-powered self-represented matters already constitute a significant and growing proportion of their caseloads—in one jurisdiction, approximately 20%. We are at the early stages of this trend.

Over the next 12-24 months, I expect this number to more than double. The entry barrier for self-representation has been lowered dramatically. AI tools can now draft proceedings that are indistinguishable from junior solicitor work. They can coach litigants through procedural requirements in real time. They can analyse opponent submissions and generate response strategies.

This will occur regardless of what regulation is put in place for practitioners. If the Practice Note restricts AI use by practitioners while SRLs face no equivalent constraints, it places the legal profession at a structural disadvantage.


The Underground Use Problem

When rules are experienced as overly restrictive or technically unrealistic, practitioners respond in one of two ways: they avoid helpful tools even where risk is low (lost productivity), or they use the tools but under-disclose, whether from fear of breach, uncertainty about definitions, or the grey zone of embedded AI. The result is less transparency, not more, which undermines the Practice Note’s stated objective of maintaining integrity.


PART F: COMPARATIVE REGULATORY ANALYSIS

The Chief Justice has acknowledged adopting “a more cautious approach to the use of Gen AI in the courts of New South Wales than most other jurisdictions.”


United Kingdom (October 2025):
    • Permits AI use by judicial officers for research and drafting
    • No blanket prohibitions on AI in submissions
    • Judges now have Microsoft Copilot access on their computers
    • Ministry of Justice scaling AI assistants department-wide by end of 2025

Singapore:
    • “Neutral stance” on AI use by court users
    • Active collaboration with Harvey AI for access to justice
    • Experimenting with AI-assisted judgment writing (“red-teaming”)
    • Project Sea-Lion developing jurisdiction-specific LLM
    • Small Claims Tribunal implementing AI translation and summarisation

United States:
    • Over 25 federal judges have issued standing orders on AI
    • Focus on disclosure rather than prohibition
    • Courts actively developing AI tools for self-represented litigants


UNESCO Guidelines (December 2025):
    • 15 Principles emphasising human oversight rather than prohibition
    • Recognition of access to justice benefits

NSW’s restrictive approach places the NSW legal industry at a potential competitive disadvantage. Legal work may increasingly be performed in AI-permissive jurisdictions. Talent may migrate. Investment in legal technology may go elsewhere. These are real risks that should be weighed against the protective benefits of restriction.


PART G: WHAT’S COMING—A DIRECT ASSESSMENT

The recommendations in this submission address immediate concerns. But I want to be direct with the Court about what is coming. The changes we have seen in the past 12 months will be dwarfed by the changes in the next 36 months. A Practice Note designed for current technology will be obsolete well before its next review.

The Court should be planning not for incremental change but for transformation of how legal services are delivered and disputes are resolved. Here are the structural challenges I believe the Court will face.


The Next 12-24 Months: Near-Term Certainties

The Self-Represented Litigant Tsunami

The barrier to producing competent-looking legal documents has collapsed. Within 18 months, AI will be able to draft proceedings indistinguishable from senior solicitor work, generate evidence that passes superficial scrutiny, coach litigants through procedural requirements in real time, and analyse opponent submissions and generate response strategies.

The current court system assumes friction—that producing legal documents requires either money (lawyers) or time (learning). That friction is disappearing. The Court will see:

    • Massive increase in filed proceedings
    • Dramatic increase in applications, interlocutories, and procedural disputes
    • Judicial officers overwhelmed by volume
    • Registry backlogs extending dramatically


The Verification Problem

The Practice Note assumes human verification is the solution. But verification does not scale. A sole practitioner using AI can produce 10x the output. Are they verifying 10x as much? No—they are verifying the same amount and hoping the AI got the rest right. Within 24 months, there will likely be a major citation scandal involving a reputable firm (not just SRLs), insurance claims from AI-assisted work product failures, and courts unable to distinguish AI-assisted from human-only work.


The 3-5 Year Horizon: High-Probability Developments

Agentic AI Changes Everything

Current AI is reactive—you prompt, it responds. Agentic AI is proactive—you give it a goal, and it executes multi-step plans. This is already emerging and will be mainstream within 3 years.

What this means for legal practice:

    • AI that does not just draft a letter of demand but sends it, monitors for response, prepares the statement of claim if no response, files it, and serves it
    • AI that manages entire litigation workflows with minimal human intervention
    • AI that conducts discovery by systematically analysing all documents and generating categorised summaries with privilege flags
    • AI that prepares trial bundles, identifies witnesses, and drafts examination outlines

The human lawyer becomes a supervisor, not a producer. The Practice Note’s framework—which assumes AI is a tool humans wield—becomes obsolete when AI is conducting entire workflows.


The Evidence Authenticity Crisis

The Chief Justice’s concerns about deepfakes and fabricated evidence are legitimate and will intensify dramatically. What is coming:

    1. AI-generated documents (contracts, emails, texts) that are forensically indistinguishable from authentic
    2. AI-generated audio recordings of conversations that never happened
    3. AI-generated CCTV and photographic evidence
    4. AI-generated medical records, financial statements, and corporate documents

Current evidence law assumes documents are either authentic or forged, with forgery being difficult and detectable. That assumption collapses entirely. Within 5 years, courts will need standards for digital evidence authentication, approved methods for AI-assisted forgery detection, requirements for metadata preservation and chain-of-custody, and presumptions about evidence created without authentication safeguards.

This argues for AI-assisted detection, not AI prohibition. Human working memory holds roughly seven items at once; AI can map inconsistencies, patterns, and conflicts across evidence sets far too large for any human to process. If the Practice Note inhibits AI use, courts will be less equipped to distinguish real evidence from fake, not more.

The Judicial Capacity Crisis

Judges are already stretched. Add: the SRL tsunami; evidence authentication disputes; AI disclosure disputes; increased interlocutory applications; and more appeals (because AI makes appellate work easier). The current system cannot absorb this.

The Court will likely see a dramatic expansion of registrar and associate powers; AI-assisted case management and triage (whether officially sanctioned or not); pressure for algorithmic resolution of certain dispute categories; and serious discussion of AI-assisted judgment writing (not just research). The UK is already moving in this direction. NSW can lead or follow.


Judicial Mapping and Outcome Prediction

AI will enable “judicial mapping”—predicting how specific judges will rule based on their entire judgment history. Some practitioners are already using such analysis. This creates ethical questions not addressed by SC Gen 23: Should practitioners be permitted to use AI for argument tailoring based on judicial profiling? This may represent a new form of “forum shopping” where well-resourced litigants use the most powerful predictors to tilt the scales. I raise this not to suggest prohibition, but to highlight that the regulatory challenges ahead are more complex than the current Practice Note contemplates.


The 5-10 Year Horizon: Structural Transformation

The Legal Services Market Restructures

Large firms survive by becoming AI orchestrators—their value is judgement, relationships, and reputation, not document production. Mid-tier firms face existential pressure—they cannot compete on cost with AI-enabled small practitioners, and they lack the brand of large firms.

Small practitioners bifurcate: those who master AI become hyper-productive sole operators handling volumes previously requiring teams; those who do not become obsolete. The legal profession will likely lose 20-40% of junior positions within 5 years—not because firms want to cut jobs, but because AI eliminates the economic rationale for hiring humans to do document review, basic research, and initial drafting.

The access to justice gap potentially narrows—but only if regulation permits AI-assisted services for those who cannot afford traditional lawyers.


The Professional Identity Crisis

What does it mean to be a lawyer when AI can research more comprehensively than any human, draft more consistently than any human, analyse documents faster than any human, and predict outcomes more accurately than any human?

The profession will need to redefine its value proposition around judgement and wisdom in complex situations, relationship management and client trust, ethical oversight and accountability, and creative problem-solving that AI cannot replicate. Some lawyers will resist this transition. They will fail.


PART H: RESPONDING TO SPECIFIC CONCERNS

The Hallucination Examples

The Chief Justice’s examples of ChatGPT fabricating sources demonstrate why consumer-grade tools are dangerous for legal use. However, these examples illustrate the need to distinguish between tool categories, not to restrict all AI. Purpose-built legal platforms with citation verification do not emit false references, because every citation is retrieved from, and checked against, a verified database rather than produced from the model’s training data alone.


The Deskilling Concern

The concern about AI encouraging “laziness in research and analysis” is legitimate. However, education, not prohibition, addresses this risk. Junior practitioners need to learn foundational skills while also learning to work with AI effectively. Prohibition develops neither skill set; structured education develops both.


Pre-AI Errors

Citation errors and factual inaccuracies in legal submissions predate AI. Practitioners made mistakes before AI existed. The relevant question is whether AI-assisted work is more or less accurate than purely human work. The evidence I see from implementations suggests properly supervised AI-assisted work is often more accurate than purely human work because it catches errors that human review misses. AI is being used as a scapegoat for poor practice that existed long before these tools emerged.


The “Punishment” Question

If there is to be punishment for AI-related failures, the punishment should be directly in line with standard disciplinary practices—not some special class of punishment because a practitioner used a tool without education and did not verify results they should have been verifying anyway. The problem with punishing people—especially publicly—is that it makes practitioners fearful of the technology instead of focusing their energy on learning a skill set that is going to be mandatory whether they like it or not.


PART I: RECOMMENDATIONS

Recommendation 1: Risk-Proportionate Framework

Amend the Practice Note to distinguish between:

    1. High-risk AI use: Content generation without human review, automated filing, AI-generated evidence
    2. Medium-risk AI use: Research assistance, document summarisation, chronology preparation
    3. Low-risk AI use: Spell-checking, formatting, translation, transcription

Recommendation 2: Tiered Tool Recognition

Establish criteria for recognising legal AI platforms meeting minimum standards:

    • Australian data sovereignty (not merely Australian data residency)
    • Citation verification capabilities
    • Audit trail functionality
    • Professional indemnity coverage
    • Demonstrated accuracy metrics on Australian legal benchmarks

Platforms meeting these criteria, and their users, should face a reduced compliance burden compared with consumer-grade tools. This creates incentives for quality without mandating specific technical approaches.


Recommendation 3: Expert Report Clarification

Amend paragraphs 20-22 to:

    • Distinguish AI as an analytical tool from AI as a drafter
    • Permit AI-assisted data processing and analysis without leave where the methodology is disclosed
    • Establish threshold criteria for when leave is required
    • Provide standard disclosure language in place of leave applications


Recommendation 4: Mandatory AI Education

The most effective response to AI misuse is education, not prohibition. Practitioners cannot understand the implications of their actions without understanding the technology.

I recommend mandatory CPD points for AI education from qualified providers. After over one thousand hours of coaching practitioners, I can confirm the need for education is time-sensitive and growing. Surface-level guidance is insufficient; practitioners need structured training on effective and ethical AI use from quality leaders in the space—not self-appointed experts with no practical implementation experience.


Recommendation 5: Mandatory Evidence Authentication Standards

The Practice Note focuses on AI in document production. It should also address AI in document verification. Within 3 years, courts will need:

    • Standards for digital evidence authentication
    • Approved methods for AI-assisted forgery detection
    • Requirements for metadata preservation and chain-of-custody
    • Presumptions about evidence created without authentication safeguards

The court that gets ahead of this—that establishes NSW as a leader in digital evidence standards—creates a competitive advantage. The court that waits until the crisis hits will be overwhelmed.


Recommendation 6: AI-Assisted Court Intake and Triage System

The Court should consider implementing a master AI platform through which all submissions and evidence are processed before consuming judicial resources. This system would:

    • Automatically assess filed documents for completeness and compliance
    • Flag issues requiring human review
    • Provide predictive analytics for case duration, complexity, and resource requirements
    • Identify inconsistencies, missing information, or potentially fraudulent filings
    • Detect vexatious patterns across matters
    • Enable automated scheduling optimisation

This is technologically achievable today, and it is not a replacement for human judgement—it is a shield against the volume that is coming. Without it, courts will face a backlog unlike any they have seen before, with no efficient way to distinguish legitimate matters from vexatious or fraudulent filings.


Recommendation 7: SRL-Specific Measures

Instead of restricting SRL AI use (which will not work anyway), the Court should consider:

    • Publishing approved AI prompts for common SRL needs (drafting affidavits, understanding procedures)
    • Creating an official court AI assistant for procedural questions
    • Establishing a “verified SRL document” pathway where AI-assisted documents meeting certain standards get expedited processing
    • Partnering with community legal centres to provide supervised AI assistance

The goal is channelling SRL AI use into productive patterns, not pretending it will not happen. Singapore is already moving in this direction with its Small Claims Tribunal AI initiatives.


Recommendation 8: Judicial AI Capability Building

The Practice Note regulates practitioners but says nothing about judicial AI use. Judges need:

    • Training on AI capabilities and limitations
    • Access to AI tools for legal research (as UK judges now have)
    • Understanding of how to assess AI-related evidence
    • Frameworks for identifying AI-generated submissions and evidence

A judiciary that does not understand AI cannot effectively regulate its use or assess AI-related disputes.


Recommendation 9: AI Transparency Register

Establish an initially voluntary register where legal AI platforms can:

    • Submit to accuracy testing on Australian legal benchmarks
    • Demonstrate data sovereignty compliance
    • Provide evidence of citation verification capabilities
    • Show professional indemnity arrangements

Platforms on the register get safe harbour provisions. This incentivises quality without mandating specific technical approaches.


Recommendation 10: Five-Year Regulatory Roadmap

The Practice Note should acknowledge that it is a point-in-time response and commit to structured evolution:

    • Year 1: Current framework with enhanced education requirements
    • Year 2: Introduction of tiered tool recognition
    • Year 3: Evidence authentication standards
    • Year 4: AI-assisted case management pilot
    • Year 5: Comprehensive review and next-generation framework

This signals that the Court is thinking ahead, not just reacting to crises as they emerge.


CONCLUSION

The landscape has changed materially since January 2025. Purpose-built legal AI platforms with Australian data sovereignty, citation verification, and demonstrated accuracy now exist. The hallucination rates that justified caution are declining rapidly. Other jurisdictions are adopting less restrictive frameworks without apparent harm.

Tighter regulation will not prevent AI use—it will drive it underground, creating worse outcomes than transparent, accountable frameworks. What the industry needs is a shift towards education and collaboration, not additional rules.

Yes, I have a commercial interest in this submission. But everything I have said I believe wholeheartedly, and my predictions will come true whether the Court regulates AI further or not. Practitioners will use it. Self-represented litigants will use it. Businesses will use it. Organised actors with malicious intent will use it. The question is not whether AI transforms legal practice—that is already happening. The question is whether NSW courts lead that transformation or are overwhelmed by it.

The Court has an opportunity. NSW can establish itself as a leader in responsible AI integration—a jurisdiction that harnesses AI benefits while managing genuine risks, that builds judicial AI capability, that establishes evidence authentication standards, and that creates frameworks other jurisdictions will follow.

Or NSW can continue with cautious incrementalism and find itself reacting to crises rather than preventing them. I respectfully urge the Court to choose leadership.

Respectfully submitted,

Samuel Junghenn
CEO, AI Legal Assistant Pty Ltd
18 December 2025


APPENDIX A: SUMMARY OF RECOMMENDATIONS

#  | Recommendation                    | Current Position               | Proposed Amendment
1  | Risk-proportionate framework      | Uniform treatment of all AI    | Tiered approach by risk level
2  | Tiered tool recognition           | No distinction between tools   | Criteria for compliant platforms with safe harbour
3  | Expert report clarification       | Leave required for any AI use  | Distinguish analysis from drafting
4  | Mandatory AI education            | No requirement                 | CPD points for AI training
5  | Evidence authentication standards | Not addressed                  | Digital evidence standards and AI detection
6  | Court intake AI system            | No system                      | AI triage and verification platform
7  | SRL-specific measures             | Professional focus only        | Court AI assistant, approved prompts
8  | Judicial AI capability            | Not addressed                  | Training, tools, assessment frameworks
9  | AI transparency register          | No system                      | Voluntary register with safe harbour
10 | Five-year regulatory roadmap      | “Periodic review”              | Structured annual evolution

— END OF SUBMISSION —

This submission was prepared with the assistance of AI tools. The thoughts are my own, augmented by those tools. I, the author, have verified all citations and factual claims. Not using AI tools to prepare it would have been inconsistent with my position.

Author

Samuel is the founder and CEO of AI Legal Assistant. He has been building and scaling tech companies for over 17 years and began developing with AI in 2017, when it was expensive and not especially useful. He has been invited to speak to a range of organisations and audiences, including legal education bodies, a Supreme Court Justice, managing partners, King’s Counsel, and technology committees.
