March 24, 2023
RavenPack's CEO and Co-Founder, Armando Gonzalez, dives into how Language AI has reached an incredible turning point, from the famous Turing test to the creation of large language models. He also shares his thoughts on how we can make sure we're paving the way for a bright future.
Armando Gonzalez
CEO & Co-Founder
RavenPack
“ChatGPT, can machines really understand meaning?”
“AI is advancing at a remarkable rate, yet there is still a long way to go before it can possess a human-like understanding of meaning.”
A few decades ago, conversing with a device would have been straight out of a sci-fi movie. Today, it’s nothing to write home about. Advances in computing power and cheap storage have paved the way for data collection at scale. This data is now finding its way into language models built with machine learning, an innovation known as Language AI. And it’s about to disrupt how we work and make decisions even more than the personal computer did over the last few decades.
I realize it is a bold claim. Before I make my case, let’s dive into what Language AI is, and how it has evolved up to this tipping point.
Language AI is essentially about enabling computers to understand and speak human language, with all its subtleties and context dependencies. Think of basic spam filters, the autocorrect feature on your smartphone, or voice-activated assistants like Siri that can play your favorite song on request. And, more recently, ChatGPT, an advanced language model that interacts with users conversationally, answers questions on almost any topic, and has even passed business and law school exams.
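To make the spam-filter example concrete, here is a minimal sketch of how a machine can learn language patterns from labeled examples. It uses scikit-learn, and the messages and labels are invented purely for illustration; a production filter would be trained on millions of real emails.

```python
# A minimal spam filter sketch: the model learns which words are
# associated with spam purely from labeled examples (toy data below).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training messages and labels (1 = spam, 0 = not spam).
messages = [
    "Win a free prize now, click here",
    "Limited offer: claim your reward today",
    "Are we still meeting for lunch tomorrow?",
    "Please review the attached quarterly report",
]
labels = [1, 1, 0, 0]

# Turn each message into word counts, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# The model generalizes to unseen text based on word statistics.
print(model.predict(["Claim your free reward now"]))      # likely [1]
print(model.predict(["Can we reschedule the meeting?"]))  # likely [0]
```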
This seamless experience marks a watershed moment: computers are starting to understand what we mean.
Computer scientists often joke that “life was so much simpler when apples and blackberries were just fruits.” What’s truly remarkable is that today’s computers can also get the joke. They can recognize which apple we’re talking about (the fruit or the brand) by using proximity and similarity to establish context. For example, they look at other words in the text, like pie, tree, and tasty, or earnings, computer, and Tim Cook. And with every processed sentence, machines get better at it. The way they achieve this is through language models based on transformers.
These transformers are not the humanlike robots we see in films, but machine learning models that are revolutionizing how we analyze information at scale. This has implications for how we work, learn, manage our money, and look after our health. Transformers are incredibly complex models that digest vast amounts of input all at once; the largest of them contain hundreds of billions, or even trillions, of parameters.
They can also identify relationships between words and make deductions. This gives machines a “sense” of context and the ability to extract meaning. It is this pivot from words to meaning that makes Language AI so disruptive.
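As a rough illustration of the “proximity and similarity” idea, the sketch below disambiguates “apple” by comparing the surrounding words against two hand-made sense profiles. The word lists are invented for illustration; a real transformer learns far richer contextual representations from billions of sentences rather than relying on fixed keyword sets.

```python
# Toy word-sense disambiguation: pick the sense of "apple" whose
# profile shares the most words with the surrounding context.
# (Hand-made profiles stand in for learned embeddings.)

SENSE_PROFILES = {
    "apple (fruit)":   {"pie", "tree", "tasty", "orchard", "juice"},
    "apple (company)": {"earnings", "computer", "tim", "cook", "iphone"},
}

def disambiguate(sentence: str) -> str:
    context = set(sentence.lower().replace(",", "").split())
    # Score each sense by how many of its profile words appear nearby.
    scores = {sense: len(profile & context)
              for sense, profile in SENSE_PROFILES.items()}
    return max(scores, key=scores.get)

print(disambiguate("Grandma baked an apple pie with fruit from her own tree"))
print(disambiguate("Apple earnings beat estimates, Tim Cook told analysts"))
```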
Language AI is at an exciting turning point, but it hasn’t been a smooth ride.
About 70 years ago, scientists started exploring whether machines could imitate human activities, including “thinking”. In 1950, Alan Turing proposed the now famous “Turing test”, in which a person asks questions via a terminal to both a human and a machine and, based on the answers provided, must guess which terminal is operated by the computer.
Eight years later, John McCarthy released the programming language LISP (short for “list processing”), which became a foundation of early AI research. For the first time, programmers had a language flexible enough to organize and manipulate symbols, the raw material of early language programs.
It was a promising start. What followed, from the 1960s to the 1990s, was a significant disappointment for researchers. Machines could not live up to these expectations: they couldn’t process data fast enough, nor could they communicate the way scientists had envisioned. As a result, much of the funding was halted and research was suspended for years to come.
Only in the past two decades has Language AI seen a spectacular revival. With a new generation of companies truly invested in language understanding, the scientific mood went from disappointment to excitement. Today, thanks to substantial funding and global research, Natural Language Processing is now on track to meet those initial expectations. So what changed?
Seventy years ago, it was thought that machines could learn anything and everything without a proper education.
Since then, we have realized not only that language models need specialized training, but that they perform only as well as the data they are trained on. There is no know-it-all, universal AI of the kind scientists initially imagined; instead, AI comes in the form of specialized models designed for very specific tasks. So researchers are now creating hyperfocused training sets meant to help solve specific, real-world problems.
Let’s say we want Language AI machines to help us forecast inflation more accurately by counting and reading job listings from employers across all industries, in real time. We first need to teach computers about workforce-related concepts such as companies, industries, occupations, and skills, and the many ways they can be interlinked. Once we’ve done this, we can start asking computers for answers: which companies or sectors are hiring faster, and which are signaling a hiring freeze? That information, put in the context of how job growth impacts inflation, can make our economic predictions more accurate and timely.
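As a minimal sketch of the idea, the snippet below counts job postings per sector per month and turns them into a month-over-month hiring-growth signal. The postings, sectors, and dates are made up for illustration; a real pipeline would ingest millions of listings tagged by company, occupation, and date.

```python
# Toy nowcasting signal: count job postings per sector per month and
# compute month-over-month hiring growth (illustrative data only).
from collections import Counter

postings = [
    {"sector": "Retail",   "month": "2023-01"},
    {"sector": "Retail",   "month": "2023-02"},
    {"sector": "Retail",   "month": "2023-02"},
    {"sector": "Software", "month": "2023-01"},
    {"sector": "Software", "month": "2023-01"},
    {"sector": "Software", "month": "2023-02"},
]

counts = Counter((p["sector"], p["month"]) for p in postings)

for sector in sorted({p["sector"] for p in postings}):
    prev, curr = counts[(sector, "2023-01")], counts[(sector, "2023-02")]
    growth = (curr - prev) / prev
    print(f"{sector}: {growth:+.0%} hiring growth")  # e.g. Retail: +100%
```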
This is a huge endeavor. It involves creating ontologies and taxonomies and quilting together knowledge graphs: interlinked descriptions of objects, events, and abstract concepts. All of this is essential for adding context and depth to machine learning techniques, but it takes a lot of resources and can get out of hand. So the next frontier is scalability, and we’re seeing a lot of progress in speeding up training and in making deployments of large language models simple, transparent, and cost effective.
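For readers unfamiliar with the term, a knowledge graph can be as simple as a set of (subject, relation, object) facts. The entities and relations below are invented to echo the workforce example above; production graphs contain millions of curated facts and far richer relation types.

```python
# A tiny knowledge graph as (subject, relation, object) triples.
# Entities and relations are illustrative, not a real ontology.
triples = [
    ("Acme Corp",     "operates_in", "Retail"),
    ("Acme Corp",     "hires_for",   "Store Manager"),
    ("Store Manager", "requires",    "Inventory Management"),
    ("Globex",        "operates_in", "Software"),
    ("Globex",        "hires_for",   "Data Engineer"),
    ("Data Engineer", "requires",    "Python"),
]

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the given (partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

# Which skills does the graph link, indirectly, to the Software sector?
for company, _, _ in query(relation="operates_in", obj="Software"):
    for _, _, occupation in query(subject=company, relation="hires_for"):
        for _, _, skill in query(subject=occupation, relation="requires"):
            print(f"{company} -> {occupation} -> {skill}")
```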
So far, no AI has been able to indisputably pass the Turing test and be mistaken for a human. That’s fine, because I think machines should help us evolve, not outsmart us or be like us. The ultimate goal is to empower everyone (governments, NGOs, companies, and individuals) to leverage the power of AI using big data, because data-driven decisions are wiser and more sustainable.
Here’s where we are today: in a place of incredible promise and momentum, with machine learning driving a paradigm shift in information technology. Just as fitting a CPU inside a personal computer led to the democratization of computing, Language AI will disrupt the way we think, brainstorm, and communicate with others.
To get there, we need to continue building on solid foundations, hyperfocused on using big data to train models together with knowledge graphs of facts and human values. If the wrong foundations are laid, Language AI could lead to unpredictable consequences and stall progress once again.