Talking to your computer has been a dream of futurists and technologists for decades. Looking at the state of the art in 2024, it’s remarkable how far we’ve come. Billions of devices and homes now listen to our requests and do their best to respond to them. But for all that time, money and effort, chatbots of every stripe haven’t taken over the world the way their creators envisioned. They’re amazing. They’re also boring. And it’s worth asking why.
Chatbot is a term that encompasses many systems, from voice assistants to artificial intelligence and everything in between. In the not-so-good old days, talking to your computer meant typing into a window and watching the machine attempt a facsimile of conversation rather than the real thing. ELIZA’s (1964-1967) old trick of repeating users’ input back to them in the form of a question helped sell that performance, and it was still in use as late as 2001’s SmarterChild chatbot. Another branch of this work was digitizing speech with speech-to-text engines, such as Nuance’s frustrating but sometimes wonderful products.
In 2011, the ideas from that early work were quietly combined to create Siri for the iPhone 4S, which built on Nuance’s work. Amazon founder Jeff Bezos saw Siri’s promise early on and launched a massive internal project to build a homegrown competitor. Alexa arrived in 2014, with Cortana and Google Assistant following in later years. Natural language computing was now available on countless smartphones and smart home devices.
Companies are reluctant to talk specifics about the cost of building new projects, but talk is expensive. Forbes reported in 2011 that it cost Apple $200 million to buy the startup behind Siri. In 2018, The Wall Street Journal quoted Dave Limp as saying Amazon’s Alexa team had more than 10,000 employees, and a 2022 Business Insider story suggested the company had lost more than $10 billion on Alexa development. Last year, The Information claimed Apple now spends a million dollars a day on artificial intelligence development.
But what do we actually use this expensive technology for? Turning our smart lights on and off, playing music, answering the doorbell and maybe getting the occasional sports score. In the case of AI, you might be getting poorly summarized web search results (or images of human subjects with too many fingers). You’re certainly not having much in the way of meaningful conversation with, or extracting vital information from, these things, because in almost every case their comprehension breaks down when faced with the nuances of human speech. And this isn’t an isolated problem: In 2021, Bloomberg reported, citing internal Amazon data, that up to a quarter of shoppers stopped using their Alexa unit entirely within the second week of owning it.
An oft-cited goal was to make these platforms conversationally intelligent, able to answer your questions and respond to your commands. But while they do some basic things pretty well, like understanding a request to turn off your lights, everything else isn’t so smooth. Natural language tricks users into thinking the systems are more sophisticated than they actually are, so when it’s time to ask a complex question, you’re more likely to get the first few lines of a Wikipedia page. That erodes your faith in their ability to do anything more than play music or adjust the thermostat.
The hope is that generative AI tied to these natural language interfaces will solve all the problems currently associated with voice. And yes, on the one hand, these systems will be better at pantomiming a real conversation and trying to give you what you want. But, on the other hand, when you actually look at what comes out the other side, it’s often bullshit. These systems gesture toward surface-level interaction but can’t do anything more substantive. Don’t forget the time Sports Illustrated tried to use AI-generated content that boldly claimed volleyball can be “difficult to get into, especially without an actual ball to practice with.” It’s not surprising, either, that many of these systems are propped up by low-paid human labor, as Bloomberg reported last year.
Of course, boosters will suggest that it’s early days and, as OpenAI CEO Sam Altman recently argued, that we need billions of dollars more for chip research and development. But that makes a mockery of the decades of development and billions of dollars already spent getting to where we are today. And it’s not just a matter of cash or chips: Last year, The New York Times reported that AI’s power requirements alone could rise to 134 terawatt hours per year by 2027. Given the pressing need to curb energy consumption and make everything more efficient, that doesn’t bode well for the future of its development, or for the planet.
We’ve had 20 years of development, but chatbots still haven’t caught on the way we were told they would. At first it was because they struggled to understand what we wanted, but even if that problem were solved, would we suddenly embrace them? After all, the underlying issue remains: We simply don’t trust these platforms, both because we doubt their ability to do what we ask and because we doubt the motives of their creators.
One of the most enduring examples of natural language computing in fiction, and one often cited by real-world engineers, is the computer from Star Trek: The Next Generation. But even there, with a voice assistant that seems to possess something close to general intelligence, it isn’t trusted to steer the ship on its own. A crew member still sits at every station, carrying out the captain’s orders and generally running the mission. Even in a future as advanced as it is free from material need, people still crave a sense of control.
To mark Engadget’s 20th anniversary, we’ve been revisiting the products and services that have changed the industry since March 2, 2004.