Agents without Training is Like a Port of Entry without Inspections

October 12th, 2023 · 5 minute read

With the debut of OpenAI’s ChatGPT in November 2022, and worldwide consumer expectations of driverless automobiles, incorporating artificial intelligence (AI) is one of the most talked-about subjects in nearly every industry. The customs community is no exception (see related article “AI – It’s Probably Not What You Think”). Anecdotally, one of the questions the public sector asks most often is, “Can (or when will) your AI algorithm replace human agents?”

I submit this is the wrong question to be asking.

Before revealing the more relevant concern, it might be useful to take a brief stroll down memory lane.

The inspiration for the title of this article comes from a 1970s advertising campaign in the U.S. by the Florida Citrus Commission with the tagline: “Breakfast without orange juice is like a day without sunshine.” The implication was that having a more positive daily life required inclusion of a specifically essential ingredient at the start.

The claim of this article is that a similarly essential ingredient is responsible for producing positive inspection outcomes every day.

Therefore, instead of inquiring about just one specific data point for AI implementation, we should be asking the more strategic question, “How much training do agents—both human and artificial—require to be most effective?”

It took nearly seven years, over $11Bn (USD), and billions (perhaps even trillions) of text-based data points to train ChatGPT’s large language model (LLM) to achieve its current level of natural language processing (NLP). Regarding autonomous vehicles, Tesla’s self-reporting suggests it took more than ten million video clips to train the latest version of its full self-driving software (FSD v12). Yet LLMs can still produce incorrect answers, and driverless cars still require continuous operator attention and occasional intervention to remain safe.

In one particularly notable example, New York attorney Steve Schwartz used GPT to help him research a client’s personal injury case. Unfortunately, GPT fed the esquire fabricated case law, which he naively trusted and included in one of his court filings, resulting in sanctions and a fine from the aggrieved judge. More humorously, in April 2022, a police officer in San Francisco, California, attempted to issue a citation to what turned out to be a driverless car after its internal logic caused it to behave erratically at an intersection.

Thanks in part to Mr. Schwartz’s example, while many companies are now finding ways to incorporate the capabilities of NLPs into their daily business operations, there are few if any accounts (GPT or otherwise) of them completely replacing humans. Rather, an NLP may be tasked with composing an initial draft document (e.g., a business plan, resume, legal contract, or computer code) with a human providing the final editing and approval. Conversely, NLPs are being used to proofread and improve a human’s first draft. In both cases, AI is rightly considered solely a complementary technology.

As for driverless automobiles roaming the streets, Waymo’s and Cruise’s operations in San Francisco, California, notwithstanding, most auto industry analysts expect widescale adoption and regulatory approval of such capability to be a decade or more away.

I assert that the simple reason for this lack of trust in AI comes down to training. As cited above, all AI algorithms, whether machine learning- or neural network-based, require training using vast amounts of data. While essential, dataset size alone is insufficient. One of the preeminent edicts taught to every computer coder is “garbage in, garbage out”: a computer’s output is only as good as the input it’s fed. Therefore, in addition to quantity, the quality of the training dataset is even more crucial.
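To make the “garbage in, garbage out” point concrete, here is a minimal sketch (in Python, with field names and thresholds that are my own illustrative assumptions, not taken from any real system) of the kind of quality gate a team might place in front of a training pipeline:

```python
# Hypothetical quality gate: keep only records fit for training.
# Field names and thresholds are illustrative, not from any real system.

VALID_LABELS = {"clear", "contraband", "anomaly"}

def passes_quality_gate(record: dict) -> bool:
    """Reject records that would add noise rather than signal."""
    if record.get("label") not in VALID_LABELS:       # mislabeled or unlabeled
        return False
    if record.get("image") is None:                   # corrupt or missing scan
        return False
    if record.get("annotator_agreement", 0.0) < 0.8:  # humans disagreed on it
        return False
    return True

def build_training_set(raw_records: list[dict]) -> list[dict]:
    """Quantity survives only where quality allows."""
    clean = [r for r in raw_records if passes_quality_gate(r)]
    print(f"kept {len(clean)} of {len(raw_records)} records")
    return clean
```

The point is not the specific checks but the principle: every record that fails the gate is noise the model would otherwise learn from.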

In Tesla’s case, the millions of training videos were collected, curated, and labeled by humans from the billions of miles their customers had driven. But not just any customers. To be included in the pool of acceptable drivers, participants in the initial beta version of the software had to have a “safety score” (i.e., quality) of 90 or above. As the software improved, Tesla wanted to test it against a broader sample of edge cases (rare, complex situations), so it later relaxed the criterion to 80 or above.
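A sketch of that selection logic, assuming invented record fields and an invented changeover date (only the 90 and 80 thresholds come from the account above; Tesla’s actual pipeline is not public), might look like this:

```python
from datetime import date

# Illustrative sketch only; Tesla's actual pipeline is not public.
# The 90 and 80 thresholds come from the article; the change date is invented.
THRESHOLD_RELAXED_ON = date(2022, 1, 1)  # hypothetical

def eligible_contributor(safety_score: float, as_of: date) -> bool:
    """Early beta required a safety score of 90 or above; the bar was
    later lowered to 80 to pull in a broader sample of edge cases."""
    threshold = 90 if as_of < THRESHOLD_RELAXED_ON else 80
    return safety_score >= threshold
```

Under these assumptions, a driver scoring 85 would have been excluded early on but admitted after the criterion was relaxed, which is exactly how the pool broadened to capture more edge cases.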

Think about the self-imposed financial implications of this restriction. The company was willing to forgo $15,000 of (almost pure-profit) revenue per install, for years, just to ensure that the algorithm was properly trained. Not until November 2022, nearly six years after its initial debut, was Tesla sufficiently confident in the quality of the algorithm to eliminate the safety-score mandate. Yet it is still called FSD “Beta” as the company continues testing its quality against a near-infinite sample of both good and bad human reactions.

By now, you should have some sense that AI agents will likely not replace human agents in customs applications for many, many years to come (perhaps a decade or more). Furthermore, having even a remote chance at such cost displacement will require a substantial, ongoing investment in the production of high-quality training data, derived from even higher-quality current execution by human operators.

That then forces us to examine the state of today’s training of human agents, for without examples from a pool of customs officers with high “quality scores” of their own, it will be impossible to produce accredited (i.e., trustworthy) AI agents. This is where the experience of S2 Global’s S2 University (S2U) can provide us with the most appropriate example to emulate on a global scale.

Since 2010, S2U has delivered certified training to over 2,000 agents from public- and private-sector security institutions worldwide. Several elements make their training uniquely compelling:

- Access to hundreds of high-quality training images.
- Curated by humans from a sample database of millions.
- Tailored to a specific customer’s current port of entry (POE) environment and system architecture.
- Taught exclusively by former, highly rated customs agents.
- Using standard instructional systems design (ISD) techniques (e.g., ADDIE).
- With feedback loops from graduates and their employers, ensuring constant improvement.

So, when will AI agents replace human agents in our community? That’s an impossible question to answer.

For now, though, our best foot forward is to first recognize the essential value of quality training for our current human agents, and then to build an appropriate mechanism that feeds that quality execution into customs-specific AI training algorithms.
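As a closing illustration of that mechanism (the names, fields, and 90-point cutoff below are assumptions of mine, echoing the Tesla analogy, not a description of any real S2 Global or customs system), such a pipeline might admit only verified findings from highly rated officers as AI training examples:

```python
from dataclasses import dataclass

@dataclass
class Inspection:
    scan_image_path: str   # X-ray or other scan of the shipment
    officer_score: float   # hypothetical officer "quality score", 0-100
    finding: str           # e.g., "clear" or "contraband"
    verified: bool         # outcome confirmed by physical inspection

# Assumed cutoff, echoing Tesla's safety-score approach.
MIN_OFFICER_SCORE = 90.0

def admissible(ins: Inspection) -> bool:
    """Only verified outcomes from highly rated officers become
    labeled examples for a customs-specific AI model."""
    return ins.verified and ins.officer_score >= MIN_OFFICER_SCORE

def build_dataset(inspections: list[Inspection]) -> list[tuple[str, str]]:
    """Map admissible inspections to (image, label) training pairs."""
    return [(i.scan_image_path, i.finding) for i in inspections if admissible(i)]
```

The design choice mirrors the article’s thesis: the quality of tomorrow’s AI agents is bounded by the quality of today’s human execution.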

Written by
Jeff Goldfinger
October 12th, 2023
