How the USPS is deploying AI to help track 7.3B packages a year


The US Postal Service runs a massive operation, processing 129 billion pieces of mail a year, including 7.3 billion packages. That’s 20 million packages a day, or 230 per second. So when a package gets lost, there’s a lot of sorting involved in finding it. 
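The per-day and per-second figures follow directly from the annual package total; a quick back-of-envelope check (assuming a 365-day year):

```python
# Sanity-check the volume figures cited above.
packages_per_year = 7.3e9
packages_per_day = packages_per_year / 365          # 20,000,000
packages_per_second = packages_per_day / 86_400     # ~231, i.e. roughly 230

print(round(packages_per_day), round(packages_per_second))
```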

With a new, Nvidia-powered AI program, the USPS has built a way to dramatically reduce the time it takes to find lost packages, down from several days to just two hours. Package sorting is just the beginning: the USPS now has ideas for dozens of applications it could power with its new edge AI deployment, spanning everything from mail sorting to marketing.

“There are not many enterprise-wide AI/ML projects that have been deployed at this scale across the whole enterprise, especially not in the case of government,” Anthony Robbins, VP of Nvidia’s federal government business, told reporters this week. 

The US government has poured billions in recent years into modernizing federal agencies with AI. Yet the Postal Service's deployment stands out for its size and complexity, Nvidia says. The agency operates more than 1,000 mail processing machines across 195 locations.

“The work that’s occurring by building an enterprise-wide AI program at the US Postal Service can be a motivator for the US federal government and, frankly, commercial businesses and enterprises around the globe,” Robbins said. 

The USPS opted in 2019 to use Nvidia GPUs to power its AI edge deployments. The agency now has 13 EGX systems across two data centers that are primarily used for training AI/ML models built for the Edge Compute Infrastructure Program (ECIP), a distributed edge AI system.

The Postal Service also deployed HPE servers to the edge, in its case the 195 mail processing centers. The HPE Apollo 6500 servers are each equipped with four Nvidia V100 GPUs, and each currently processes 20 terabytes of images a day.
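A rough sense of the fleet-wide image volume, under the assumption (not stated in the article) that each of the 195 centers runs one Apollo 6500 server:

```python
# Back-of-envelope aggregate throughput across the edge fleet.
centers = 195
tb_per_server_per_day = 20
fleet_tb_per_day = centers * tb_per_server_per_day  # 3,900 TB, i.e. ~3.9 PB/day

print(fleet_tb_per_day)
```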

After putting the infrastructure in place, the agency deployed Triton Inference Server, Nvidia's open-source model-serving software, to help manage the AI deployment across the distributed enterprise. Triton automates the delivery of different AI models to different systems, which may run different generations of GPUs and CPUs and support different deep-learning frameworks.
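Triton's framework flexibility comes from its model repository layout: each model lives in its own directory with a `config.pbtxt` declaring which backend (TensorRT, TensorFlow, ONNX, PyTorch, and so on) should run it, so one server binary can serve a mixed fleet. A minimal sketch, where the model name, file, and tensor shapes are illustrative rather than taken from the USPS deployment:

```
model_repository/
└── package_detector/
    ├── config.pbtxt
    └── 1/
        └── model.plan        # versioned model artifact, here a TensorRT plan

# config.pbtxt
name: "package_detector"
platform: "tensorrt_plan"     # backend selection; swap for another framework
max_batch_size: 8
input [
  { name: "input",  data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "scores", data_type: TYPE_FP32, dims: [ 1000 ] }
]
```

Because the backend is declared per model rather than per server, the same Triton deployment can hand TensorRT plans to GPU-equipped machines while serving other model formats elsewhere, which is the heterogeneity the article describes.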
