Introduction
This is an ongoing project to document best practices for law students using AI in legal research and writing. This includes using generative AI to author research papers and law school essays, as well as legal opinions and briefs.
This document should be read in its entirety, with parts applied as relevant to the situation.
We are currently seeking feedback on this draft. Please send your comments to ademers@uwindsor.ca.
This guide will be updated from time to time and may ultimately be replaced.
Definition
This Guide uses the following definition of "generative AI":
“...a computer system capable of generating new content and independently creating or generating information or documents, usually based on prompts or information provided to the system. This Notice does not apply to AI that lacks the creative ability to generate new content. For example, this Notice does not apply to AI that only follows pre-set instructions, including programs such as system automation, voice recognition, or document editing. It bears underscoring that this Notice only applies to content that was created or generated by AI.”
Adapted from: Federal Court, “Notice to the Parties and the Profession: The Use of Artificial Intelligence in Court Proceedings” (December 20, 2023), online: 2023-12-20-notice-use-of-ai-in-court-proceedings.pdf (fct-cf.gc.ca)
Generally, for many legal matters, there is no "right answer". Outcomes are based on a complex and nuanced array of factors.
Accordingly, consider the following:
1. a) Familiarize yourself with any relevant rules originating from the University (for example University of Windsor, Senate Bylaw 31, Academic Integrity, last amended 11 November 2022), as well as Windsor Law, Policy Statement on Student Discipline, Policy no Law-4 (established 2010).
1. b) Familiarize yourself with, and closely abide by, all instructions and prohibitions contained in a course syllabus pertaining to the use of AI on classroom work.
2. If doing work for a clinic, familiarize yourself with any relevant practice directions originating from the court itself. A preliminary list of court practice directions on AI can be found here.
3. The Rules of Professional Conduct that govern the profession in your jurisdiction may also have guidance.
4. Does the product purport to give you "the right answer" or a single answer? If so, avoid using it.
5. Instead, a high-quality product will clearly state the assumptions, provisos or conditions upon which the output is based, and will notify the user that the response may change depending on changes to the inputs provided.
6. Also, the product should clearly state any provisos or conditions to notify the user of possible errors or omissions.
7. Be aware of the Terms and Conditions of any product used. The supplier will almost certainly disclaim liability for damages caused by errors or omissions in the outputs it provides.
1. Do not provide any personally identifying information about yourself and/or your client to an AI system. Doing so may compromise your practice or your professional reputation, and/or breach your obligations to maintain client confidentiality under the governing rules of professional conduct.
2. Is the AI system using your question, and the generated answer to it, to further train the AI system? Consider if this may be problematic from a confidentiality, copyright or privacy standpoint.
Ensure that the tool is drawing on data from the correct jurisdiction for the task (e.g. Canadian domestic law, including federal, provincial, municipal or Indigenous territorial laws).
If the product or service does not clearly specify that it is built on data from the relevant jurisdiction, do not use it for this purpose.
Recognize that complete datasets for Canadian legal information do not exist. Large providers such as CanLII, Westlaw and Lexis+ have quite comprehensive collections that can robustly support AI applications. Start-ups are quickly building their collections of Canadian legal data as well.
For these reasons, it is important to ascertain the scope of the dataset upon which an AI solution is built prior to using or acquiring it.
Look for information such as:
1. Does the dataset include:
- consolidated statutes and regulations?
  - if so, how often is the data updated?
- judicial decisions?
  - if so:
    - what court and tribunal content is provided?
    - how far back does the data go (what is the earliest content provided on the system)?
    - how often are the court and tribunal datasets updated?
- secondary material?
  - if so:
    - who prepared the commentary (AI or human)?
    - what are the credentials of the author?
    - what was the publishing date or authorship date of the source?
    - how often is this database updated?
If you are unable to ascertain this information about the data you are using, make note of this uncertainty.
Do not assume that the dataset is current or complete; something could be missed.
2. Do / did humans supervise the training of the AI on this data? As legal research, writing and advocacy are governed by professional regulation and codes of conduct, it is preferable to use an AI system trained by experts.
Does the product clearly state that it holds copyright in, or a license to use, the data contained in the dataset for the intended uses? Be wary of products that do not abide by copyright law.
In a Firm Context
As with all legal research and writing, the enquiry does not begin until we have:
-comprehensively ascertained the client's facts
-analyzed the facts
-formulated issue questions
For Law School Papers and Essays
Ensure you have clearly framed, limited-scope questions ready prior to using the system.
Generally
1. Do not use products without first having clearly formulated the questions to be explored.
2. When assessing a product, consider how your question is being delivered:
-does the system use sophisticated prompt engineering / survey / interview questions to help you formulate a response that will target relevant data in the system? This is preferred over generalized prompting.
Helpful Guides for Creating Prompts
Harvard Library, "Artificial Intelligence for Research and Scholarship - the CLEAR Framework" (last accessed June 20, 2024), online.
University of Calgary Library, "Artificial Intelligence: Fine Tuning Prompts for AI Tools" (last accessed June 20, 2024), online.
It is important to examine the output(s) provided by the system prior to using them.
Look for such things as:
1. Is the system designed to summarize an area of law, or does it purport to apply the law to the facts of a client's situation?
-be very cautious about products that purport to apply the law to your client's situation. This is your job.
2. a) Who wrote the answer (AI or human)? Is a human supervising outputs?
b) Are you easily able to ascertain this from the FAQs or on the document itself?
3. Were sub-headers used to clearly delineate sections of the response? Many legal principles and tests are divided into parts, so the use of sub-headers can be helpful for readers.
4. a) If the output was written by a human (secondary sources/ commentary), on what date was it written?
4. b) Can you easily ascertain this from the FAQs / Terms of Use / or on the document itself? Be extra cautious if this information cannot be ascertained - further due diligence will be required on your part.
5. It is inadvisable to use AI to generate a complete written document or essay. Tools that assist with summarizing articles or other sources may be appropriate to use in some circumstances.
6. a) Were references provided to substantiate the response? If not, do not use it.
6. b) Do the citations generally conform to the citation standards that you've been taught?
6. c) If references were provided, which of the following were referenced?
-cases
-statutes
-secondary material
7. To test that citations are not fake, enter them into a platform such as CanLII. Is the case:
-findable? If you are unable to find the cases that are referenced, then the output should not be used for any purpose.
-were pinpoint citations provided to pages / paragraphs within the case?
-if so, check the paragraphs and page numbers referenced.
-Do they contain the information stated in the output answer?
-If not, check if the case has any relevance to the topic. If not, be wary of the product.
-If so, choose appropriate pinpoint citations to use that substantiate your assertions.
8. For every case referenced in the answer, note up the case and review its history to ensure that it is still "good law". (In other words, check that each case utilized is the most current summary of the particular area of law / test / principle or rule from the highest level of court relevant to the jurisdiction).
Note: No dataset from our largest content providers (CanLII, Westlaw or Lexis+) can be considered complete. For example, some providers carry content that others do not, and some do not include complete note-up or history information. Accordingly, assume that any AI built into these systems, or AI built by start-up companies with less comprehensive datasets, will likely not include information about case history or judicial consideration in its outputs.
Generative AI
1. a) Review the syllabus carefully and ask your Professor explicitly if AI may be used in drafting any written work.
1. b) If the Professor explicitly allows use of AI to generate text, ask how / if this should be noted in a footnote.
1. c) If the Professor explicitly allows AI to be used and requires notation in a footnote, this is the most comprehensive guidance currently available to us:
COAL-RJAL Editorial Group, Canadian Open Access Legal Citation Guide, 2024 CanLIIDocs 830, <https://canlii.ca/t/7nc6q>, retrieved on 2024-06-20
Extractive AI (AI for Research)
2. If you've used an AI application to extract statutes, cases and commentary:
2. a) A research summary provided by the system should be treated as a generative AI output (follow steps in part 1 above).
2. b) Research results should be examined and tested (see Part D above).
2. c) Generally, research results should be fully cited according to the citation standard that the Professor requires (e.g. the McGill Guide). No further mention of the system used is necessary.
As a student employee, be sure to discuss the appropriate handling of the costs of generative AI with your supervisor.
For example, some systems may charge per transaction to interact with their AI system.