The following analysis—entirely drafted by an AI model (including this introduction) and subsequently scrutinised by human reviewers (though the document remains unedited)—identifies several key inconsistencies and ambiguities in the referenced Practice Note and associated Guidelines. 

Ironically, while the human-authored Practice Note contains a few wrinkles of its own, the AI-generated commentary has been diligently fact-checked by humans to ensure we're pointing out only genuine contradictions. Think of it as a polite robotic eyebrow-raise, verified by a team of flesh-and-blood professionals.

Below are several points of potential inconsistency, ambiguity, or questionable assertion identified within the Practice Note and its accompanying Guidelines:

Conflicting Treatment of Legal Research Tools

In paragraph 3 of the Practice Note, certain products commonly associated with advanced legal research and analysis, such as "Lexis Advance AI" and "Westlaw Precision", are explicitly listed as examples of generative AI tools, alongside well-known general-purpose large language models like ChatGPT, Google Bard, and Claude.

However, in paragraph 6(b)(ii), the Practice Note states that it does not apply to the use of “dedicated legal research software which uses AI or machine learning to conduct searches across legislation, judgments, or legal articles.” By implication, such tools are exempt from the stricter conditions imposed on generative AI.

This creates uncertainty: if Lexis Advance AI and Westlaw Precision are defined as Gen AI in paragraph 3, yet paragraph 6(b)(ii) provides an exception for legal research AI tools, it is not clear whether these named services fall inside or outside the scope of the new restrictions. The Practice Note simultaneously characterises them as generative AI tools and as possibly exempt “dedicated legal research software.” This represents the most direct internal inconsistency within the document.

Ambiguity Regarding Enterprise or Privacy-Enabled Gen AI Tools

Paragraph 7(d) of the Practice Note warns that user prompts “may, unless disabled, be automatically added to the large language model database” and thus be accessible to other users. Similarly, paragraph 8 warns that data entered into Gen AI programs “may be used to train the large language model.”

While these statements are generally accurate for many public Gen AI tools, they overlook the fact that certain enterprise or private deployments do not retain or share user inputs. Although the Practice Note later contemplates “privacy and/or confidentiality settings” (see paragraphs 15(c) and 21(b)), it never reconciles these initial blanket cautions with the possibility of tailored privacy safeguards. This does not constitute a direct contradiction, but it may mislead by implying that all Gen AI tools inherently pose identical data-sharing risks.

The Treatment of Tools That Generate Non-Substantive Content

Paragraph 6(a) states that Gen AI "does not include" technology that simply corrects spelling, grammar, or formatting, or that generates chronologies from original source documents. This exemption hinges on the notion that such functions are non-substantive.

However, many AI-driven tools can provide both non-substantive (e.g., grammar correction) and substantive (e.g., case summarisation) functions, often relying on the same underlying technology. The line between "substantive" and "non-substantive" is not clearly delineated. While not a direct contradiction, this could confuse practitioners who may be unsure how to classify certain integrated AI functionalities.

Retroactive Application to Expert Reports

Paragraph 25 states that expert reports prepared between the date of issue of this Practice Note (21 November 2024) and its commencement date (3 February 2025) must identify any parts of the report that relied on Gen AI.

Although not strictly contradictory, this requirement has a retroactive flavour, effectively imposing compliance standards on documents prepared before the rules formally commence. This unusual approach could cause confusion, or be seen as inconsistent, since it expects adherence to requirements that were not yet in force at the time some reports were prepared.

Breadth of Assumptions About Data Sources and Compliance

Both the Practice Note and the Guidelines warn of biases, misinformation, and copyright breaches associated with Gen AI data sets without differentiating between curated, jurisdiction-specific models and broad public models. While these cautions are understandable, the blanket approach does not acknowledge that some Gen AI tools may be designed with jurisdictionally appropriate, high-quality datasets. This generalisation, although not strictly contradictory, is less nuanced than other parts of the Practice Note that acknowledge privacy and confidentiality variations.

Summary of Key Inconsistency

The clearest inconsistency arises from naming certain advanced legal research platforms as restricted generative AI tools in paragraph 3, while indicating in paragraph 6(b)(ii) that dedicated legal research tools are excluded from these restrictions. This dual characterisation leaves practitioners uncertain about whether tools such as "Lexis Advance AI" or "Westlaw Precision" must be treated as restricted generative AI or as permissible specialised legal research software.
