Failed Expectations with the First Chatbots
This is another detour into NLP history. If you haven't already, check out the first part, which covers the pioneers of modern linguistics.
In the mid-1960s Joseph Weizenbaum came up with a program that appeared to pass the Turing test. Weizenbaum built a conversation simulator, known as ELIZA, using a fairly straightforward approach. The "chatbot" would scan user input for keywords attached to response rules. If no keyword was found, the program would yield either a generic response or rephrase the user's input as a question. ELIZA proved successful and opened up new modes of human-machine interaction.
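To make the approach concrete, here is a minimal Python sketch of ELIZA-style keyword matching. The keywords, templates and fallback responses below are invented for illustration and are far simpler than Weizenbaum's actual DOCTOR script:

```python
import random
import re

# Hypothetical keyword rules in the spirit of ELIZA: each keyword maps
# to response templates; "{0}" is filled with the text captured after
# the keyword.
RULES = {
    "mother": ["Tell me more about your mother.",
               "How do you feel about your family?"],
    "i feel": ["Why do you feel {0}?",
               "Do you often feel {0}?"],
    "i am": ["How long have you been {0}?"],
}

GENERIC = ["Please go on.", "I see.", "Can you elaborate on that?"]


def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for keyword, templates in RULES.items():
        match = re.search(re.escape(keyword) + r"\s*(.*)", text)
        if match:
            return random.choice(templates).format(match.group(1))
    # No keyword matched: fall back to a generic response,
    # or rephrase the user's input as a question.
    if random.random() < 0.5:
        return random.choice(GENERIC)
    return f"Why do you say that {text}?"


print(respond("I feel lonely"))          # e.g. "Why do you feel lonely?"
print(respond("The weather is nice"))    # generic reply or rephrasing
```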
In the 21st century, versions of these programs (now known as “chatterbots”) continue to fool people.
Wikipedia – Turing test: ELIZA and PARRY 🙂
Towards the end of the 1960s Roger Schank introduced a model that added a layer of understanding to computerised language processing. His new model, called Conceptual Dependency theory, separates meaning from the words used to express it and forms a basis for later advancements in natural language processing, such as entity or intent recognition.
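As a rough illustration of what separating meaning from words buys you, the toy Python below maps two differently worded sentences onto the same conceptual structure built around ATRANS, one of Schank's primitive acts (abstract transfer of possession). The parsing rules are made up and handle only these two sentence shapes:

```python
from dataclasses import dataclass

# Schank's idea in miniature: store meaning as a conceptualisation
# built from a primitive act, not as the surface words themselves.
@dataclass
class Conceptualization:
    act: str        # primitive act, e.g. "ATRANS"
    actor: str
    obj: str
    source: str
    recipient: str

def parse(sentence: str) -> Conceptualization:
    # Toy rules for two surface forms of the same underlying meaning;
    # a real Conceptual Dependency parser would be far more general.
    words = sentence.lower().split()
    if "gave" in words:
        # "john gave mary a book"
        return Conceptualization("ATRANS", words[0], words[-1],
                                 words[0], words[2])
    if "received" in words:
        # "mary received a book from john"
        return Conceptualization("ATRANS", words[-1], words[3],
                                 words[-1], words[0])
    raise ValueError("unsupported sentence")

# Different wordings, identical conceptual structure:
print(parse("John gave Mary a book"))
print(parse("Mary received a book from John"))
```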
In 1970 William A. Woods introduced the Augmented Transition Network (ATN), an alternative way of parsing sentences into a machine-readable format. Woods abandoned Chomsky's phrase structure rules (see the previous article) in favour of a graph representation. ATNs allow complex sentences to be processed in a deterministic, recursive manner. The ATN became widely adopted and led to new chatterbot applications.
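The sketch below shows the core idea in Python as a recursive transition network: each network is a small graph whose arcs either consume a word of a given category or push recursively into a sub-network. The grammar, lexicon and state layout are invented for illustration; a real ATN additionally attaches registers and tests to the arcs and can backtrack between alternatives:

```python
# A minimal recursive transition network in the spirit of an ATN.
# Each network maps a state to its outgoing arcs (label, next_state).
# A label that names another network is followed recursively;
# otherwise it is a word category to match against the lexicon.

LEXICON = {
    "the": "det", "a": "det",
    "dog": "noun", "bone": "noun",
    "buried": "verb", "chased": "verb",
}

NETWORKS = {
    "S":  {0: [("NP", 1)], 1: [("VP", 2)], 2: []},
    "NP": {0: [("det", 1)], 1: [("noun", 2)], 2: []},
    "VP": {0: [("verb", 1)], 1: [("NP", 2)], 2: []},
}

FINAL = {"S": 2, "NP": 2, "VP": 2}   # accepting state of each network


def traverse(net: str, words: list[str], pos: int) -> int | None:
    """Return the input position after a successful traversal, or None."""
    state = 0
    while state != FINAL[net]:
        for label, nxt in NETWORKS[net][state]:
            if label in NETWORKS:                      # push to sub-network
                end = traverse(label, words, pos)
                if end is not None:
                    pos, state = end, nxt
                    break
            elif pos < len(words) and LEXICON.get(words[pos]) == label:
                pos, state = pos + 1, nxt              # consume one word
                break
        else:
            return None                                # no arc applies
    return pos


words = "the dog buried a bone".split()
print(traverse("S", words, 0) == len(words))           # True: accepted
```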
Despite all the groundbreaking innovations, many of the expectations turned out to be overly ambitious. Machine translation wasn't practical due to its excessive cost, which resulted in major budget cuts to research programmes. It was only with the rise of computational power in the 1980s that NLP started to see new pragmatic uses. More on that in the next part of this series.