A team of researchers from University College Maastricht recently published a study investigating the use of GPT-3 as an email manager. As someone with a mailbox that can only be described as ridiculous, color me intrigued.

The big idea: We spend hours a day reading and responding to emails, so what if an AI could automate both processes?
The Maastricht team explored the idea of releasing GPT-3 into our email systems from a pragmatic point of view. Instead of focusing on exactly how good GPT-3 is at responding to specific emails, the team investigated whether there is any merit to even trying.
In their paper (read here), the potential efficacy of GPT-3 as an email secretary is assessed by examining how useful it is compared to fine-tuned machines, how financially viable it is compared to human workers, and how consequential machine-generated errors are for senders and recipients.
Background: The quest to build a better email client is a never-ending one, but ultimately we’re talking about hiring GPT-3 to respond to incoming emails. According to the researchers:
Our research indicates that there is a market for GPT-3-based email rationalization in different sectors of the economy, of which we are only examining a few. In all sectors, the damage from a small formulation error seems minor because the content usually does not involve large sums of money or human safety.
The authors describe the use cases in the insurance, energy and public administration sectors.
Objections: To begin with, it’s worth pointing out that this is a preprint. Often that means the science is sound but the paper itself hasn’t yet been through peer review. This particular paper is currently a bit of a mess. Three separate sections contain the same information, for example, which makes it difficult to pin down the point of the study.
The point seems to be that GPT-3 could save us time and money if it can be applied to the task of responding to our work email. But that’s a giant “if.”
GPT-3 lives in a black box. You’d have to proofread every email it sends out, because there’s no way to be sure it won’t say something that invites a lawsuit. Beyond the fear that the machine might generate offensive or false text, there’s also the problem of figuring out what use a general-knowledge bot would be for this task.
GPT-3 is trained on the internet, so it can tell you the wingspan of an albatross or who won the 1967 World Series, but it certainly can’t decide whether you want to sign a birthday card for a co-worker or whether you’re interested in appointing a new subcommittee.
The point is, GPT-3 isn’t necessarily any better at handling mundane emails than a simple chatbot trained to pick from pre-generated responses.
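For a sense of what “hiring GPT-3 to respond to incoming emails” would actually involve, here is a minimal sketch, assuming OpenAI’s 2021-era Python client and the base davinci engine. The prompt wording, parameters, and helper function are my own illustrative assumptions, not anything prescribed by the paper, and, per the black-box concern above, every draft it produces would still need human proofreading.

```python
# Minimal sketch (illustrative only, not from the paper) of asking GPT-3
# to draft a reply to an incoming email via OpenAI's 2021-era Completions API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_reply(incoming_email: str) -> str:
    """Ask GPT-3 for a draft reply; a human still has to proofread it."""
    prompt = (
        "Write a short, polite reply to the following email.\n\n"
        f"Email:\n{incoming_email}\n\nReply:"
    )
    response = openai.Completion.create(
        engine="davinci",   # base GPT-3 engine available in early 2021
        prompt=prompt,
        max_tokens=150,     # keep the draft short
        temperature=0.7,
        stop=["\n\n"],      # stop at the end of the drafted reply
    )
    return response.choices[0].text.strip()

print(draft_reply("Hi, can we move Thursday's meeting to 3 pm?"))
```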
Quick take: A little googling tells me that the landline phone was nearly ubiquitous in the United States in 1998. Now, a couple of decades later, only a small fraction of American homes still have one.
I can’t help but wonder whether email will remain the standard for communication much longer, especially when the latest line of innovation involves devising ways to keep us out of our inboxes. Who knows how far away we are from a hypothetical version of OpenAI’s GPT that’s reliable enough to be worthwhile at any commercial level.
The research here is commendable and the paper makes for some interesting reading, but ultimately the usefulness of GPT-3 as an email responder is purely academic. There are better solutions for inbox filtering and auto-response out there than a brute-force text generator.
Published on February 8, 2021 – 20:17 UTC