Surveillance in 2024 felt less like a single debate and more like a set of simultaneous reckonings. Across courts, city councils, international bodies, and corporate boardrooms, policymakers and practitioners kept returning to the same question: when does the promise of public safety become a permanent erosion of private life? The answer matters because the tools being deployed are not neutral. They reshape patterns of everyday life and distribute risk unevenly.

Two policy moves framed much of the conversation this year. In March the European Parliament approved the Artificial Intelligence Act, a risk-based framework that bans certain high-harm AI uses, such as untargeted scraping of facial images from CCTV footage and other forms of mass biometric processing, while still leaving limited law-enforcement exceptions.

At the international level the Council of Europe opened the Framework Convention on Artificial Intelligence for signature in September. The instrument seeks to lock human rights and rule-of-law norms into AI governance and to signal that states must do more than adopt ad hoc practices. The treaty is not technology-specific, but it is explicitly aimed at preventing AI from becoming a legal grey zone in which mass surveillance escapes scrutiny.

On the enforcement front regulators were active. European data protection authorities continued to hold private firms to account, exemplified by the Dutch regulator's large penalty against Clearview AI for building and operating a biometric database without a lawful basis. That ruling underscored a wider fact: national privacy law can bite when procurement or law-enforcement appetite pushes technology into use without a clear legal and ethical foundation.

Meanwhile in the United States the debate split between municipal action and federal scrutiny. Dozens of U.S. cities and some states had adopted limits or bans on law enforcement use of facial recognition in prior years; yet reporting in 2024 showed police agencies finding workarounds or continuing to test capabilities even where formal bans exist, prompting concern about enforcement and oversight gaps. At the federal level the U.S. Commission on Civil Rights examined how federal agencies use facial recognition and related technologies, raising questions about compliance with civil rights protections and recommending clearer guardrails.

Late in the year legislative proposals appeared as well. A House bill introduced in October proposed to prohibit remote biometric surveillance of body-camera data, reflecting the particular anxiety around combining ubiquitous recording with automated identification. The measure is one of several attempts to channel the debate into statutory rules rather than leaving it to agencies and courts alone.

Those developments map a recurring pattern. When safety arguments are made for surveillance technologies they are often compelling at the level of a single incident. Law enforcement and emergency services legitimately want faster identification of suspects or missing people. The counterweight is systemic: expanded capability tends to normalize continuous identification and tracking. That normalization creates temptation for mission creep and produces outsized harms for groups already subject to heightened policing. The technology’s documented performance disparities also mean that increased use can multiply wrongful stops and arrests for marginalized communities.

So what should practitioners and policymakers actually do? Here are practical steps I would recommend based on what worked and what failed in 2024.

1) Treat procurement as a policy lever. Agencies must condition any purchase on documented civil rights impact assessments, independent testing for accuracy and bias, and contractual requirements for transparency and auditability. Contracts should require logging, retention limits, and third-party access for audits. If a vendor refuses to provide demonstrable evidence on performance and data provenance, the purchase should not proceed. This is an enforceable, concrete change to how procurement officers operate.

2) Require public, mandatory impact assessments before deployment. Whether you call them Data Protection Impact Assessments or Fundamental Rights Impact Assessments, the point is the same. These assessments must be published, they must include test data and methodologies where feasible, and they must offer a realistic mitigation plan. The EU’s AI Act and the Council of Europe framework point in this direction.

3) Keep human review in the loop and make it meaningful. Automated matches should not be treated as arrests or final determinations. Design workflows so that human reviewers have both the information and the incentives to challenge algorithmic outputs rather than rubber-stamp them. Training alone is insufficient without accountability mechanisms that examine decisions ex post.
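One way to make human review structurally mandatory rather than a matter of habit is to make a logged, rationale-bearing review the only path from an automated match to an actionable decision. The sketch below is illustrative, with hypothetical names (`CandidateMatch`, `ReviewedDecision`, `review`), and assumes nothing about any particular agency's systems.

```python
import datetime
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidateMatch:
    subject_id: str
    score: float            # similarity score from the matcher

@dataclass(frozen=True)
class ReviewedDecision:
    match: CandidateMatch
    reviewer: str
    rationale: str          # required free-text justification
    approved: bool
    timestamp: str

AUDIT_LOG: list[ReviewedDecision] = []

def review(match: CandidateMatch, reviewer: str,
           rationale: str, approved: bool) -> ReviewedDecision:
    """The only path from an automated match to an actionable decision.
    A non-empty rationale is mandatory so that ex-post audits can examine
    why a reviewer accepted or rejected the algorithm's output."""
    if not rationale.strip():
        raise ValueError("a written rationale is required for every review")
    decision = ReviewedDecision(
        match, reviewer, rationale, approved,
        datetime.datetime.now(datetime.timezone.utc).isoformat())
    AUDIT_LOG.append(decision)   # immutable record for later oversight
    return decision
```

Because a decision object cannot be constructed without a reviewer and a written rationale, rubber-stamping leaves a trail: an auditor can sample the log and check whether rationales actually engage with the evidence.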

4) Narrow and codify law enforcement exceptions. If law enforcement will use biometric tools then statutes should define the conditions under which they can be used, require judicial oversight or clear internal approvals for sensitive uses, limit data sharing, and mandate independent reporting to civilian oversight bodies. Legislative proposals like the ban on biometric processing of body camera footage illustrate one clear path forward.

5) Support alternatives and defensive tech. Communities will need tools that reduce the harms of pervasive identification. That means funding research into privacy enhancing technologies, supporting open source projects that enable transparency, and creating safe channels for whistleblowing when surveillance systems are misused.

6) Enforce across borders when necessary. The Clearview cases showed that national regulators can and will act against companies that scrape and commercialize biometric data even when vendors argue they operate offshore. Regulators should coordinate on enforcement and clarify extraterritorial reach where rights are implicated.

Finally, the surveillance debate in 2024 taught a practical lesson: rights and safety are not zero-sum if rules, oversight, and technologies are designed to work together. Bans are sometimes necessary. Transparent, regulated use is sometimes appropriate. But for either path to earn public trust, governments and vendors must be explicit about limits, accountable in practice, and responsive when harms appear.

The policy instruments adopted and debated in 2024 give us a toolkit. The test now is institutional. Will procurement officers, legislators, regulators and technologists take the hard steps needed to bind capability to safeguards? If they do not the balance will tilt toward persistent surveillance in ways that will take years to unwind. If they do, we will have a chance to preserve the benefits of modern tools while protecting the core liberties that make societies resilient.