As Canada experiments with Artificial Intelligence (AI) in its attempt to streamline the immigration system, we must ensure that the role of lawyers remains paramount when it comes to data collection and retention for the processing of immigration visa applications. Experimenting with AI to augment or replace human decision-makers in the immigration system can have profound implications for people’s fundamental human rights.
With the use of electronic applications, any information provided by the applicant is stored and retained in their online profile, which can then be used by AI to make decisions that determine the outcome of visa applications. For instance, whenever an applicant begins a new application, they provide all sorts of personal information, ranging from their education to their travel history. Unfortunately, with the advent of AI and without the human element that lawyers provide, this can pose serious challenges for individuals. An AI system on its own does not give individuals the opportunity to correct erroneous entries, which can in turn have significant implications for their eligibility for the relevant visa. More importantly, complete reliance on AI removes the ability to carefully cross-reference data against the applicant’s previous applications, which can lead to misrepresentation findings.
In the recent Federal Court case of Singh v MCI, 2023 FC 677, the court considered how previously stored data about an applicant can affect their future applications. The Federal Court also affirmed that the burden is on the applicant to challenge adverse data, in effect to “prove a negative.” By the same token, the applicant was able to argue that his previous applications and data demonstrated his forthrightness and should not support a misrepresentation finding. When the human element is removed, the ability of AI and similar tools to access an applicant’s prior applications still leaves much to be desired when it comes to making an informed decision. Relying solely on AI for immigration visa applications, with no human element, can lead to high-risk decisions in an already highly discretionary system.