
Artificial Intelligence Legal Resources

This is a short guide to AI-powered resources available at Windsor Law.

Guidance for Law Students on using AI in Legal Research and Writing Applications (Draft)

This is an ongoing project to document best practices for law students using AI in legal research and writing applications.

This document should be read in its entirety, and Parts of it applied as relevant to the situation.

We are currently seeking feedback on this draft. Please send your comments to ademers@uwindsor.ca.

This guide will be updated from time to time and may ultimately be replaced.

A. Jurisdiction

Ensure that the tool is drawing on data from the correct jurisdiction for the task (e.g., Canadian domestic law: federal, provincial, municipal, or Indigenous territorial laws).

If the product or service does not clearly specify that it is built on data from the relevant jurisdiction, do not use it for this purpose.

B. The Data

Recognize that complete datasets for Canadian legal information do not exist.  Large providers such as CanLII, Westlaw and Lexis+ have quite comprehensive collections that can robustly support AI applications. Start-ups are quickly building their collections of Canadian legal data as well.

For these reasons, it is important to ascertain the scope of the dataset upon which an AI solution is built prior to using or acquiring it.

Look for information such as:

1. Does the dataset include:

- consolidated statutes and regulations?
  - if so, how often is the data updated?

- judicial decisions? If so:
  - what court and tribunal content is provided?
  - how far back does the data go (what is the earliest content provided on the system)?
  - how often are the court and tribunal datasets updated?

- secondary material? If so:
  - who prepared the commentary (AI or human)?
  - what are the credentials of the author?
  - what was the publishing or authorship date of the source?
  - how often is this database updated?

If you are unable to ascertain this information about the data you are using, you should make note of this uncertainty.

Do not assume that the dataset is complete; something could be missed.

 

2. Do / did humans supervise the training of the AI on this data?  As legal research, writing and advocacy are governed by professional regulation and codes of conduct, it is preferable to use an AI system trained by experts. 

 
Copyright

Does the product clearly state that it holds copyright in, or a licence to, the data contained in the dataset?  Be wary of products that do not abide by copyright law.

C. Inputs / Prompt Engineering / Formulating Your Question

As with all legal research and writing, the enquiry does not begin until we have:

-comprehensively ascertained the client's facts

-analyzed the facts

-formulated issue questions

1. Do not use products without first having clearly formulated the issue questions and the resulting research questions to be explored.

2. When assessing a product, consider how your question is being delivered:

- does the system use sophisticated prompt engineering (survey or interview-style questions) to help you formulate a query that will target relevant data in the system?  This is preferable to generalized prompting.

D. Output / Answer Provided by the System

 

For Writing

It is important to examine the output(s) provided by the system prior to using them.

Look for such things as:

1. Is the system designed to summarize an area of law, or does it purport to apply the law to the facts of a client's situation?

     -be very cautious about products that purport to apply the law to your client's situation. This is your job.

2. a) Who wrote the answer (AI or human)?  Is a human supervising outputs?

b) Are you easily able to ascertain this from the FAQs or on the document itself?

3. Were sub-headers used to clearly delineate sections of the response?  Many legal principles and tests are divided into parts, so the use of sub-headers can be helpful for readers.

4. a) If the output was written by a human (secondary sources/ commentary), on what date was it written?

4. b) Can you easily ascertain this from the FAQs / Terms of Use / or on the document itself? Be extra cautious if this information cannot be ascertained - further due diligence will be required on your part.

 

For Research

5. a) Were references provided to substantiate the response?  If not, do not use it.

5. b) Do the citations generally conform to the citation standards that you've been taught?

5. c) If references were provided, which of the following were referenced?

-cases

-statutes

-secondary material

6. To test that citations are not fake, enter them into a platform such as CanLII. For each case cited:

- Is the case findable? If you are unable to find the cases that are referenced, the output should not be used for any purpose.

- Were pinpoint citations provided to pages or paragraphs within the case?

  - If so, check the paragraphs and page numbers referenced. Do they contain the information stated in the output answer?

    - If not, check whether the case has any relevance to the topic; if it does not, be wary of the product.

    - If it does, choose appropriate pinpoint citations that substantiate your assertions.
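As a small illustration of the testing step above, a format check can screen out obviously malformed citations before manual verification. The sketch below is a hypothetical helper (the pattern and function name are our own, not part of any product): it only tests whether a string is shaped like a Canadian neutral citation. A string that passes may still refer to a case that does not exist, so looking the case up on CanLII and reading the pinpointed paragraphs remains essential.

```python
import re

# Illustrative first-pass check only (hypothetical helper, not a real tool).
# A well-formed Canadian neutral citation looks like "2019 SCC 10":
# year, court/tribunal identifier, decision number. Passing this check
# does NOT prove the case exists; you must still locate it on CanLII
# and verify the cited paragraphs yourself.
NEUTRAL_CITATION = re.compile(r"^\d{4} [A-Z]{2,8} \d+$")

def is_plausible_neutral_citation(citation: str) -> bool:
    """Return True if the string is shaped like a neutral citation."""
    return bool(NEUTRAL_CITATION.match(citation.strip()))

for c in ["2019 SCC 10", "2023 ONCA 123", "Smith v Jones, 2019"]:
    print(c, "->", is_plausible_neutral_citation(c))
```

A check like this catches only formatting problems; a fabricated citation in perfect form will pass, which is why the manual steps in item 6 cannot be skipped.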

7.  For every case referenced in the answer, note up the case and review its history to ensure that it is still "good law". (In other words, check that each case utilized is the most current summary of the particular area of law / test / principle or rule from the highest level of court relevant to the jurisdiction). 

Note: No dataset from our largest content providers (CanLII, Westlaw or Lexis+) can be considered complete. For example, some providers carry content that others do not, and some do not include complete noting-up or history information. Accordingly, assume that any AI built into these systems, or built by start-up companies with less comprehensive datasets, will likely not include information about case history or judicial consideration in its outputs.

 

E. Risk Management

Generally, for many legal matters, there is no "right answer".  Outcomes are based on a complex and nuanced array of factors.

Accordingly, consider the following:

1. Does the product purport to give you "the right answer" or a single answer? If so, avoid using it.

2. Instead, a high-quality product will clearly state the assumptions, provisos or conditions upon which the output is based, and will notify the user that the response may change depending on changes to the inputs provided.

3. Also, the product should clearly state any provisos or conditions to notify the user of possible errors or omissions.

4. Be aware of the Terms and Conditions of any product used.  The company will almost certainly disclaim liability for damages caused by errors or omissions in the outputs provided.

5. Familiarize yourself with any relevant rules originating from the University (for example, University of Windsor, Senate Bylaw 31, Academic Integrity, last amended 11 November 2022), as well as Windsor Law, Policy Statement on Student Discipline, Policy No. Law-4 (established 2010).

6.  If doing work for a clinic, familiarize yourself with any relevant practice directions originating from the court itself.  A preliminary list of court practice directions on AI can be found here.

7. The Rules of Professional Conduct that govern the profession in your jurisdiction may also have guidance.

F. Privacy

1. Do not provide any personally identifying information about yourself and/or your client to an AI system. Doing so may compromise your practice or your professional reputation, and/or breach your obligations to maintain client confidentiality under the governing rules of professional conduct.

2. Is the AI system using your question, and the generated answer to it, to further train the AI system?  Consider if this may be problematic from a confidentiality, copyright or privacy standpoint.

 

G. Citing AI Usage

Generative AI

1. a) Review the syllabus carefully and ask your Professor explicitly whether AI may be used in drafting any written work.

1. b) If the Professor explicitly allows the use of AI to generate text, ask how (and whether) this should be noted in a footnote.

1. c) If the Professor allows AI use and requires notation in a footnote, then in the footnote for each paragraph so drafted, clearly state:

- that generative AI was used in drafting the sentence or paragraph; and

- the title of the generative AI system that was used; and

- the URL of the generative AI system that was used; and

- the date and time the text was generated; and

- optionally, the questions (prompts) that were asked.
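The elements listed above can be thought of as fields in a template. The sketch below is purely illustrative (the function name, wording, and example system are our own inventions, not a required format): it assembles a disclosure footnote from those elements. The exact wording your Professor requires always governs.

```python
from datetime import datetime

# Hypothetical helper: assembles an AI-disclosure footnote from the
# elements listed above (use of AI, system title, URL, date/time, and
# optionally the prompt). The wording is illustrative only; follow the
# format your Professor specifies.
def ai_disclosure_footnote(system_title: str, url: str,
                           written_at: datetime, prompt: str = "") -> str:
    note = (f"This paragraph was drafted with the assistance of generative AI "
            f"({system_title}, online: {url}), generated on "
            f"{written_at.strftime('%d %B %Y at %H:%M')}.")
    if prompt:  # optional element: the question(s) asked
        note += f' Prompt: "{prompt}"'
    return note

print(ai_disclosure_footnote("ExampleAI", "https://example.com",
                             datetime(2024, 3, 1, 14, 30),
                             "Summarize the test for negligence"))
```

Keeping the prompt optional mirrors the list above, where the questions asked are the only element not strictly required.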

Extractive AI (AI for Research)

2.  If you've used an AI application to extract statutes, cases and commentary:

2. a) A research summary provided by the system should be treated as a generative AI output (follow steps in part 1 above).

2. b) Research results should be examined and tested (see Part D above).

2. c) Generally, research results should be fully cited according to the citation standard that the Professor requires (i.e., the McGill Guide).  No further mention of the system used is necessary.

This guide is protected by a Creative Commons license.