
AI models don't know what to make of our behavior during COVID-19

As the world changes, so do our habits and the data about them. Machine learning models are struggling to keep up.

A vintage toy robot, missing its head, on an old wooden floor. Shutterstock

Artificial intelligence (AI) solutions tend to be trained on large datasets of "standard" behavior. Using these massive and sometimes banal datasets, AI systems tune their predictive algorithms to their subject matter. Think of online shopping recommendations or chatbots, for instance.

But ever since the COVID-19 outbreak hit, AI systems have begun to show signs of confusion and disarray. According to the MIT Technology Review, vendors across the world are watching their automated systems crash under bulk orders or fail to adjust to sudden shifts in consumer behavior, such as demand for phone covers collapsing while orders for toilet paper and hand sanitizer pile up.

Chatbots and other automated response services, meanwhile, are failing to produce correct responses. It turns out these machines still need expert human judgment and intervention to run effectively, especially when the humans they have to contend with are, by regular measures, all acting crazy.

Step aside, bot — Much like the other cracks in the system it has exposed, COVID-19 has laid bare the problem with relying blindly on algorithms to run commerce, marketing, deliveries, and more. The MIT Technology Review reports that algorithms are struggling to detect rapidly depleting inventories, flag fraudulent activity, or parse tone.

According to the report, some companies are finding their AI systems can no longer manage basic tasks like keeping supplies stocked. Others are watching their algorithm-driven advertising models buckle under mass influxes of new subscribers across different websites, while struggling to recommend articles that are timely and appropriate in tone.

Natural-language processing models are also being paused so that mass-marketing emails strike an appropriate tone, or at least one more appropriate than the typical model-generated "OMG"-and-emoji excitement common to bulk emails. Even Amazon is intervening in its behemoth delivery system, tweaking its algorithms to refer consumers to vendors that can still ship their products despite the intense demand many of them are now trying to manage.

New training for new outcomes — One way to overcome these challenges is to diversify the datasets these systems are trained on. Instead of limiting machine learning models to best-case scenarios, they should be trained on disaster scenarios as well.

That would include not only the coronavirus pandemic but also massive stock market crashes, the Great Depression of the 1930s, housing crises, spikes in unemployment and mortality, and other outlier events. Limiting a model to relatively stable conditions is shortsighted, as it restricts the model's ability to predict and respond under extreme volatility.
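
The report doesn't publish anyone's actual pipeline, but as a rough sketch of the idea, the snippet below uses entirely synthetic data, with a made-up "panic" signal standing in for whatever crisis indicator a real system might track, to train one demand forecaster on calm history alone and another on calm plus crisis data, then compares how each copes with a volatile test period.

```python
# A minimal, illustrative sketch: does adding crisis-period data to training
# help a demand model handle a crisis-like test period? All data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def make_period(days, base_demand, panic_level, volatility):
    """Simulated daily sales with a crude 'panic' signal (a stand-in for
    whatever real-world crisis indicator a retailer might feed its model)."""
    day_of_week = rng.integers(0, 7, size=days)
    promo = rng.integers(0, 2, size=days)
    panic = np.full(days, panic_level) + rng.normal(0, 0.1, size=days)
    demand = (base_demand + 5 * promo + 2 * day_of_week
              + 30 * panic + rng.normal(0, volatility, size=days))
    X = np.column_stack([day_of_week, promo, panic])
    return X, demand

# Two years of "calm" history vs. a shorter crisis period with wild swings.
X_calm, y_calm = make_period(700, base_demand=100, panic_level=0.0, volatility=5)
X_crisis, y_crisis = make_period(100, base_demand=100, panic_level=2.0, volatility=20)

# Hold out more crisis-like days to see how each model copes.
X_test, y_test = make_period(60, base_demand=100, panic_level=2.0, volatility=20)

calm_only = GradientBoostingRegressor().fit(X_calm, y_calm)
mixed = GradientBoostingRegressor().fit(
    np.vstack([X_calm, X_crisis]), np.concatenate([y_calm, y_crisis])
)

print("calm-only MAE on crisis data:", mean_absolute_error(y_test, calm_only.predict(X_test)))
print("mixed-data MAE on crisis data:", mean_absolute_error(y_test, mixed.predict(X_test)))
```

The model that never saw a panicked period has no idea what to do with the panic signal; the one trained on both regimes tracks the surge far more closely.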

Mercurial datasets help these models make sense of surges in anomalous purchases, for example, or spot price gouging and other fraudulent behavior. It sounds straightforward, but it's strange that so many companies have overlooked this basic truth: machines need exposure to human complexity to respond to ever-changing landscapes, needs, attitudes, habits, and more.
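
Spotting those anomalous orders is itself a well-worn problem once a model has seen messy enough data. Here is a minimal sketch using made-up order data and scikit-learn's IsolationForest, one common off-the-shelf outlier detector, not necessarily what any of these vendors actually run.

```python
# Illustrative only: flag orders that look like hoarding or price gouging.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Typical orders: small quantities at roughly list price.
normal_orders = np.column_stack([
    rng.poisson(2, size=500) + 1,        # units per order
    rng.normal(10.0, 0.5, size=500),     # unit price in dollars
])

# A handful of suspicious orders: bulk quantities or inflated prices.
suspicious_orders = np.array([
    [80, 10.0],    # someone hoarding hand sanitizer
    [3, 60.0],     # a reseller gouging on price
    [120, 55.0],   # both at once
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_orders)

# predict() returns -1 for outliers, 1 for inliers.
print(detector.predict(suspicious_orders))   # expected: [-1 -1 -1]
print(detector.predict(normal_orders[:5]))   # mostly 1s
```
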

To assume that these models can run without our expertise and intervention is to invite trouble in already troubling times.