CBP Signs Clearview AI Deal to Use Face Recognition for ‘Tactical Targeting’

United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares photos against billions of images scraped from the internet.

The deal extends access to Clearview tools to Border Patrol’s headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to “disrupt, degrade, and dismantle” people and networks viewed as security threats.

The contract states that Clearview provides access to “60+ billion publicly available images” and will be used for “tactical targeting” and “strategic counter-network analysis,” indicating the service is intended to be embedded in analysts’ day-to-day intelligence work rather than reserved for isolated investigations. CBP says its intelligence units draw from a “variety of sources,” including commercially available tools and publicly available data, to identify people and map their connections for national security and immigration operations.

The agreement anticipates analysts handling sensitive personal data, including biometric identifiers such as face images, and requires nondisclosure agreements for contractors who have access. It does not specify what kinds of photos agents will upload, whether searches may include US citizens, or how long uploaded images or search results will be retained.

The Clearview contract lands as the Department of Homeland Security faces mounting scrutiny over how face recognition is used in federal enforcement operations far beyond the border, including large-scale actions in US cities that have swept up US citizens. Civil liberties groups and lawmakers have questioned whether face-search tools are being deployed as routine intelligence infrastructure, rather than limited investigative aids, and whether safeguards have kept pace with expansion.

Last week, Senator Ed Markey introduced legislation that would bar ICE and CBP from using face recognition technology altogether, citing concerns that biometric surveillance is being embedded without clear limits, transparency, or public consent.

CBP did not immediately respond to questions about how Clearview would be integrated into its systems, what types of images agents are authorized to upload, and whether searches may include US citizens.

Clearview’s business model has drawn scrutiny because it relies on scraping photos from public websites at scale. Those images are converted into biometric templates without the knowledge or consent of the people photographed.
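For readers unfamiliar with the mechanics, the sketch below shows in rough terms what “converting a photo into a biometric template” typically involves: a face image is reduced to a numeric vector that can be compared against millions of others. It uses the open-source face_recognition library and a made-up filename purely for illustration; Clearview’s actual pipeline is proprietary and is not described in the contract.

```python
# Minimal illustration of turning a photo into a face "template" (embedding)
# using the open-source face_recognition library (dlib under the hood).
# This is NOT Clearview's pipeline; it only shows the general idea of
# reducing a face image to a numeric vector that can be compared at scale.
import face_recognition
import numpy as np

# Hypothetical input file; any scraped or uploaded photo would be handled the same way.
image = face_recognition.load_image_file("scraped_photo.jpg")

# Each detected face becomes a 128-dimensional vector (the biometric template).
templates = face_recognition.face_encodings(image)

if templates:
    template = templates[0]
    print(f"Template length: {len(template)}")  # 128 floats

    # Comparison is just a distance between vectors; smaller means more similar.
    # Comparing the face against itself yields a distance of 0.0.
    distance = np.linalg.norm(template - template)
    print(f"Distance to itself: {distance:.3f}")
```

Because the template is just a vector, comparing two faces reduces to measuring the distance between two points, which is what makes searching billions of scraped photos computationally feasible.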

Clearview also appears in DHS’s recently released artificial intelligence inventory, linked to a CBP pilot initiated in October 2025. The inventory entry ties the pilot to CBP’s Traveler Verification System, which conducts face comparisons at ports of entry and other border-related screenings.

CBP states in its public privacy documentation that the Traveler Verification System does not use information from “commercial sources or publicly available data.” It is more likely that, at least at launch, Clearview access would instead be tied to CBP’s Automated Targeting System, which links biometric galleries, watchlists, and enforcement records, including files tied to recent Immigration and Customs Enforcement operations in areas of the US far from any border.

Clearview AI did not immediately respond to a request for comment.

Recent testing by the National Institute of Standards and Technology, which evaluated Clearview AI among other vendors, found that face-search systems can perform well on “high quality visa-like photos,” but falter in less controlled settings. Images captured at border crossings that were “not originally intended for automated face recognition” produced error rates that were “much higher, often in excess of 20 percent, even with the more accurate algorithms,” federal scientists say.

The testing underscores a central limitation of the technology: NIST found that face-search systems cannot reduce false matches without also increasing the risk that the systems fail to recognize the correct person.
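A rough illustration of that trade-off, using synthetic similarity scores rather than NIST’s data: the decision threshold that determines what counts as a “match” can be raised to cut false matches, but only at the cost of rejecting more genuine ones. The score distributions below are invented for the example.

```python
# Sketch of the threshold trade-off NIST describes, using synthetic similarity
# scores (Gaussian distributions chosen for illustration, not real NIST data).
# Raising the decision threshold cuts false matches but misses more true matches.
import numpy as np

rng = np.random.default_rng(0)

# Similarity scores for genuine (same-person) and impostor (different-person) pairs.
genuine = rng.normal(loc=0.75, scale=0.10, size=10_000)
impostor = rng.normal(loc=0.45, scale=0.10, size=10_000)

for threshold in (0.50, 0.60, 0.70):
    false_match_rate = np.mean(impostor >= threshold)    # impostors accepted
    false_non_match_rate = np.mean(genuine < threshold)  # genuine pairs rejected
    print(f"threshold={threshold:.2f}  "
          f"FMR={false_match_rate:.3f}  FNMR={false_non_match_rate:.3f}")
```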

As a result, NIST says agencies may operate the software in an “investigative” setting that returns a ranked list of candidates for human review rather than a single confirmed match. When systems are configured to always return candidates, however, searches for people not already in the database will still generate “matches” for review. In those cases, the results will always be 100 percent wrong.
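The sketch below illustrates that failure mode with made-up embeddings: an “investigative” search configured to always return a ranked top-five list will produce five candidates even for a person who was never enrolled in the gallery, and in that case every candidate is necessarily a false match.

```python
# Sketch of "investigative" search: return the top-k most similar gallery entries
# for every probe, with no minimum score. Embeddings and IDs are synthetic.
import numpy as np

rng = np.random.default_rng(1)

# A toy gallery of 1,000 enrolled templates, normalized to unit length.
gallery = rng.normal(size=(1_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def top_k_candidates(probe, gallery, k=5):
    """Rank gallery entries by cosine similarity and always return k candidates."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe
    ranked = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in ranked]

# A probe for someone who is NOT in the gallery still produces k "matches"
# for human review; by construction, every one of them is a false match.
unenrolled_probe = rng.normal(size=128)
for gallery_id, score in top_k_candidates(unenrolled_probe, gallery):
    print(f"candidate #{gallery_id}: similarity {score:.3f}")
```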

Disclaimer: This story is auto-aggregated by a computer program and has not been created or edited by DOWNTHENEWS. Publisher: wired.com