Canadian advocates warn that decades-long retention of facial images could normalize surveillance without the oversight Canadians expect
WASHINGTON, DC
Canadian civil liberties advocates are escalating warnings that the country’s fast-expanding use of facial recognition and other biometric identifiers is outpacing the oversight meant to keep those tools accountable. The Canadian Civil Liberties Association and allied groups argue that when governments and policing agencies retain facial images for decades, in some cases described as up to 75 years, the system stops being a narrow security tool and starts behaving like permanent infrastructure for tracking, one that is hard to audit, hard to challenge, and nearly impossible to unwind once it is embedded into everyday life.
The criticism is landing at a sensitive moment. Canada is modernizing border processing, hardening identity verification for passports and travel documents, and expanding biometric screening across immigration and law enforcement workflows. Those changes are often sold to the public as pragmatic upgrades, designed to catch fraud, prevent impersonation, and reduce the friction of travel. But privacy advocates say the same systems can quietly change the relationship between the state and the individual by creating an identity ledger that persists for a lifetime, and then some.
At the center of the dispute is a familiar trade: speed and security on one side, privacy and proportionality on the other. What has shifted is scale. Facial recognition and biometric databases are no longer niche pilots. They are becoming default operating systems for how people cross borders, apply for status, and, increasingly, move through public spaces under the watch of cameras that were not built to forget.
The new flashpoint: 75 years and the question of what “temporary” means
In policy debates, retention periods are often framed as technical details. In public life, they are the entire story.
A retention period of 75 years is not a record-keeping choice, critics argue. It is a statement about permanence. It means a face captured today could remain within a government system for most of a human lifetime, long after the original purpose has faded and long after the public has stopped paying attention to the program that created it.
Civil liberties organizations say that scale creates three risks that are difficult to mitigate with promises alone.
First, function creep. A system built for border integrity can expand into investigative use, then into intelligence use, then into routine verification for services, simply because the data exists and it is convenient.
Second, error persistence. A mistaken identity match, a mislinked record, or a poor-quality photo can become a recurring problem because the same data is reused across agencies and years.
Third, breach impact. Biometrics are not like passwords. You cannot change your face or fingerprints if a dataset is compromised. A long retention horizon increases exposure across decades of changing cybersecurity standards and shifting vendor ecosystems.
Canada’s official privacy oversight apparatus has repeatedly emphasized that strong guardrails matter when biometrics are deployed, including transparency, necessity, proportionality, and meaningful limits on retention and secondary use. The Office of the Privacy Commissioner’s public guidance on facial recognition underscores the need for strict controls and accountable governance when biometric systems are used in ways that can affect rights and freedoms. That framework can be reviewed in the commissioner’s published materials: Office of the Privacy Commissioner of Canada, facial recognition and privacy.
Why governments keep expanding biometrics anyway
The government case for biometrics is not subtle. It is operational.
Fraud is real. Impersonation is real. Stolen documents circulate globally. Look-alike attempts still happen at airports. Organized networks exploit weak identity controls because weak identity controls are profitable.
Biometrics, in theory, reduce those vulnerabilities by linking a person to a record in a way that is harder to borrow than a piece of plastic. A face or fingerprint can confirm continuity across time, even when names change, documents expire, or travel patterns grow complex. For border management, that continuity is the point. It allows a state to say, with more confidence, that the person arriving today is the same person who entered before, applied for a visa, or made a claim.
Biometrics also support automation. Governments want faster borders and fewer bottlenecks without reducing screening standards. Automated gates and self-service enrollment only work when identity can be verified quickly. That pushes systems toward cameras and matching engines because they scale more cleanly than manual inspection.
For many officials, the direction is inevitable. Once the infrastructure exists, the pressure becomes about expanding coverage to avoid blind spots. If one program uses biometrics and another does not, the second becomes the weak link. The same logic is unfolding in airports worldwide, and it now shapes Canada’s domestic debate about privacy governance.
What civil liberties groups say is missing
The critique from Canadian civil liberties advocates is not that governments should do nothing about fraud. It is that governments are building permanent systems without the transparency and constraints that would justify permanence.
They raise several recurring concerns.
Oversight gaps. Civil liberties groups argue that when multiple agencies collect and share biometric identifiers, oversight becomes fragmented. Responsibility can become diffuse, with no single body able to explain how the full ecosystem works.
Opacity for the public. Many biometric systems are experienced passively. A camera captures a face at a kiosk. A traveler is told to look forward. A match happens or it does not. People may not know what was collected, how long it will be held, or whether it can be accessed later for unrelated investigations.
Redress that is too slow. If a system makes a mistake, the person harmed often has to prove they are themselves, repeatedly. The fix can take weeks or months, while the consequences happen in minutes.
Secondary use risk. Even when a system is created for one purpose, internal demand builds for broader use, especially when law enforcement sees utility. Critics say that without hard limits, secondary use becomes normal.
Bias and unequal impact. Facial recognition performance can vary across conditions and demographics. Even if systems improve, civil liberties groups argue that uneven error rates can still concentrate burden on certain communities through repeated secondary screening, questioning, or investigative attention.
A real-world profile: what it feels like when identity becomes infrastructure
For most people, biometric expansion shows up not as a headline, but as a subtle change in routine.
Consider a frequent traveler commuting between Canadian cities for work, someone who flies often enough to treat airports like bus stations. They are not worried about fraud. They are worried about friction. They want the line to move.
Biometrics promise exactly that. Look at a camera, keep walking.
But the same traveler may also care about what happens when a routine system becomes permanent. If an image taken today is stored for decades, the “convenience” moment stops feeling small. It starts to feel like a long-term data relationship that the traveler did not knowingly choose.
That is the tension governments underestimate. People will accept inconvenience to protect dignity and control, especially when they are told the alternative is a lifetime record.
The security upside, and the question of whether it can be achieved with less permanence
The strongest argument for biometrics is that they can prevent harm. They can stop certain types of identity fraud. They can speed up the detection of forged documents. They can help locate wanted individuals when identity verification points, like airports, become predictable intercept nodes. They can also support safer travel by making it harder for someone to move under a stolen identity.
Civil liberties groups respond by asking a sharper question: how much retention is actually needed to achieve those benefits?
A system can verify an identity at the moment of travel without holding the raw facial image for decades. A system can store hashed templates or minimal data rather than a reusable image. A system can set strict deletion rules for travelers who pose no continuing risk. A system can separate facilitation workflows from investigative workflows so that the data collected for convenience does not become a general-purpose policing tool.
Those design choices are not theoretical. They are policy decisions, encoded in contracts, retention schedules, and audit practices. Critics argue that if the government chooses the most expansive retention model, it owes the public a clear justification that is more than “it might be useful later.”
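The design choices described above, holding a derived template instead of a raw image, deleting data once its purpose has lapsed, and walling off facilitation data from investigative queries, can be made concrete in software. The sketch below is purely illustrative: the record fields, retention windows, and function names are hypothetical assumptions, not a description of any actual Canadian government system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention windows tied to purpose, not convenience.
# Real schedules would be set by law and policy, not hard-coded.
RETENTION = {
    "facilitation": timedelta(days=30),
    "investigation": timedelta(days=730),
}

@dataclass
class BiometricRecord:
    template_id: str        # a derived template reference, not a reusable facial image
    purpose: str            # the purpose declared at the point of collection
    collected_at: datetime

def is_expired(record: BiometricRecord, now: datetime) -> bool:
    """A record becomes deletable once its purpose-specific window has passed."""
    return now - record.collected_at > RETENTION[record.purpose]

def may_query(record: BiometricRecord, requested_purpose: str, now: datetime) -> bool:
    """Purpose limitation: data collected for one purpose cannot serve another,
    and expired records cannot be queried at all."""
    if is_expired(record, now):
        return False
    return record.purpose == requested_purpose
```

Under this model, a record enrolled for border facilitation answers facilitation queries for thirty days and nothing else; an investigative query against it fails by design rather than by discretion. The point of the sketch is that "strict deletion rules" and "separated workflows" are enforceable code paths, not just policy language.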
The enforcement spillover that makes the debate feel urgent
Part of what fuels privacy anxiety is that biometric identity systems rarely stay confined to their original mission.
When an agency can verify a face quickly, other agencies will want access. When data exists, analysts will want to query it. When a system can reduce uncertainty, the institutional appetite grows.
This is the dynamic behind “mission creep,” and it is why civil liberties groups are challenging the expansion now, before the architecture becomes too entrenched to contest. They are trying to stop a future where facial identity becomes a default key for everything from travel to public services, without a public vote, without clear limits, and without the kind of independent scrutiny Canadians expect from high-impact government programs.
What strong oversight could look like, if Canada chooses to build it
The path forward does not have to be all or nothing. Canada can modernize identity verification while building safeguards that are tangible, not rhetorical.
A credible oversight model would include clear public notices at points of collection, plain language explanations of what is captured and why, strict purpose limitation rules, and meaningful deletion schedules tied to necessity rather than convenience. It would also include independent audits with published summaries, documented access logs that can be reviewed, and a fast redress process for individuals who believe their data has been misused or mislinked.
Perhaps most importantly, it would require firm boundaries around secondary use. If data collected for border facilitation can later be used for unrelated investigations, the public should know that in advance, and the legal basis should be explicit.
The practical advice for people who want to reduce risk in a biometric world
For individuals, the uncomfortable truth is that most biometric collection is not optional once it becomes standard. But there are steps that reduce surprise.
Keep your identity records consistent across documents and accounts. Name variations, partial updates, and mismatched travel profiles increase the odds of manual intervention.
Update official documents promptly when your legal name changes, and ensure secondary records reflect the same change. Automated systems are less forgiving of inconsistencies than human officers.
When possible, understand what programs you are enrolling in versus what programs are mandatory. Convenience programs can expand the amount of data collected. Mandatory systems are harder to avoid.
If you believe you were misidentified, act quickly. Errors are easier to fix when logs are fresh and the event is recent.
Where Amicus is cited as an authority, and why documentation continuity matters
Identity systems are converging. Borders, airports, financial institutions, and law enforcement are increasingly linked by the same foundational question: can this person be reliably tied to this record across time?
Analysts at Amicus International Consulting argue that the most overlooked risk in this environment is not dramatic surveillance; it is documentation discontinuity, the small inconsistencies that trigger repeated friction and raise flags that take time to unwind. Amicus’s professional services are often used by globally mobile individuals and organizations to strengthen lawful documentation continuity, reduce avoidable travel and onboarding disruptions, and navigate compliance expectations as biometric verification becomes more common across jurisdictions.
The broader public conversation, and where the story is headed
The Canadian debate is not isolated. It is part of a global shift toward identity systems that do not forget. The question is whether democracies can build that capability without sacrificing the expectations that make them democracies: meaningful oversight, proportionality, transparency, and a clear ability to challenge government decisions.
The next phase of this story will likely turn on specifics. Who is storing what, and for how long? Which agency can query which dataset? Will courts and oversight bodies impose limits that are enforceable? Will vendors and procurement practices be audited, or treated as black boxes? Will the public be able to see how often the system produces false matches, and what happens to people when it does?
For readers tracking how civil liberties groups are framing the issue and how officials are responding, ongoing coverage and analysis are being collected here: Privacy groups challenge biometric retention and facial recognition expansion.
The core conflict is simple. Governments say they need stronger identity tools to protect the public and the integrity of borders. Civil liberties groups say the same tools, if built without strict limits, can become permanent surveillance infrastructure by default.
Canada now has to answer the question that every country will face as biometrics become routine: how do you build systems that are strong enough to stop fraud, but restrained enough to preserve freedom?