
AI Unleashed: Ushering in the Era of Unprecedented Productivity

Blog
Edward Philip
Published: December 4, 2024


This blog is the first in a multi-part series exploring various aspects of AI, including its impact on business architecture, the handling of bias, financial inclusivity, the software delivery lifecycle, privacy, ethics, and more. As innovators, builders, and strategic technology advisors in highly complex and regulated industries like healthcare and financial services, we’ll share our thoughts on how AI can be integrated into existing product offerings, whether as capability enhancements or as new delivery channels, and the key considerations we weigh when designing solutions for enterprise settings.

If there’s one thing that’s become clear in the last two years, it’s that AI no longer has a branding problem. The mention of “Artificial Intelligence” no longer sparks the age-old fear of a Skynet-controlled future; instead, it evokes limitless possibilities, unlocking our potential to solve incredibly hard problems with this new technology at our side. And as builders of complex digital solutions, our views and approaches have been evolving as quickly as the technology itself.

The foundation of many of today’s AI models can be traced back to the 2017 paper “Attention Is All You Need”, which introduced the Transformer architecture that underpins most modern models. But it was the launch of ChatGPT in November 2022 that really sparked widespread adoption and popularized AI capabilities, making terms like “LLM” (Large Language Model) and “GenAI” (Generative AI) something we began overhearing in all sorts of public settings.

But let’s make sure we truly understand what these AI systems are capable of today, and where they’re headed (probably within the next year or two). Imagine dedicating your life to reading: every hour, every day, without stopping. At best, you’d manage around 14,600 books over an 80-year lifetime.

Now consider this: current AI models can process 100,000 books in a single month, and future models, likely to be released in the next year, will be able to read 500,000 or even 1 million books per month. At a million books a month, that’s equivalent to what a human could read in over 5,000 years. And the model can index and recall this material on a moment’s notice to answer a query.
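If you want to check that arithmetic yourself, here’s the back-of-the-envelope calculation, sketched in Python. The books-per-month figures are the rough estimates quoted above, not measured benchmarks:

```python
# Back-of-the-envelope math behind the comparison above.
# All numbers are the rough estimates from this post, not measured benchmarks.

HUMAN_LIFETIME_BOOKS = 14_600                        # ~0.5 books/day for 80 years
HUMAN_BOOKS_PER_YEAR = HUMAN_LIFETIME_BOOKS / 80     # ~182.5 books/year

for ai_books_per_month in (100_000, 500_000, 1_000_000):
    years_of_human_reading = ai_books_per_month / HUMAN_BOOKS_PER_YEAR
    print(f"{ai_books_per_month:>9,} books/month ≈ "
          f"{years_of_human_reading:,.0f} years of human reading")

# Prints roughly:
#   100,000 books/month ≈ 548 years of human reading
#   500,000 books/month ≈ 2,740 years of human reading
# 1,000,000 books/month ≈ 5,479 years of human reading
```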

Let that sink in. This scale of knowledge absorption is something entirely new. The “Generative” in Generative AI allows these models to create new content, whether text, images, or video, based on that data. In a sense, they can form an educated opinion grounded in what they’ve been trained on. Put it all together, and AI has the power to synthesize, understand, and generate insights at a level far beyond human capability. These models have high IQ, can be trained to demonstrate high EQ, and are now being measured on a new metric, their Action Quotient (AQ): their ability to take productive actions autonomously.

But in the words of Uncle Ben, with great power comes great responsibility. Many technologists at the forefront of AI are also raising alarm bells about the need for strict controls on the use, development, and application of Artificial Intelligence, and for keeping human agency at the center. Governments around the world have responded to this rapid growth with new regulations and guidelines, such as the EU AI Act and Canada’s “Shared Approach to the Responsible Use of Artificial Intelligence in Government”.

These days, AI is everywhere—and for good reason. The application of AI is transforming entire industries. If you’ve been on the conference circuit lately, you’ve probably noticed that nearly every company is proudly flying an AI banner, showcasing how their products are stepping into the future.  

And honestly, it’s exciting to see. Whether it’s simplifying complex processes, unlocking new efficiencies, or pushing the bounds of artistic creation, AI is no longer just a buzzword; it’s becoming a key ingredient for innovation across the board. Companies aren’t just saying ‘we have AI’; they’re showing how it’s solving real-world problems in ways we couldn’t have imagined a decade ago. Well-funded entrants like Anthropic and Mistral offer LLMs that compete with those of the larger companies, and many others offer AI-driven, value-added services to augment their existing capabilities and create new offerings, something we’ll explore in future blogs in this series.


We’re living in an era of greater access to and easier adoption of AI solutions, not just by bold startups but also by regulated industries. However, several challenges remain that we must stay vigilant about and strive to address wherever possible:

Unpredictability and Hallucinations:

AI has come a long way, and thanks to advances like built-in guardrails and frameworks such as NVIDIA NeMo Guardrails, we’re much better at keeping these systems from ‘going off the rails.’ But let’s be real: most Large Language Models (LLMs) are still a bit of a wild card. Their outputs aren’t always fully predictable or explainable, and that unpredictability doesn’t just affect what they do. It can ripple into areas like performance, reliability, and the overall stability of the system.

This means that while we’ve made great strides, there’s still work to be done in understanding and controlling the broader impacts of these AI solutions. It’s not just about getting the job done—it’s about ensuring the entire system holds up under the weight of unpredictability.
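To make the idea of a guardrail concrete, here’s a minimal sketch of the pattern those frameworks formalize. The `call_llm` function and the validation rules are hypothetical stand-ins; the point is simply to treat the model’s output as untrusted, check it, and retry or fail closed when it doesn’t hold up:

```python
import json
from typing import Callable

def guarded_generate(call_llm: Callable[[str], str], prompt: str,
                     max_attempts: int = 3) -> dict:
    """Call an LLM, but only accept output that passes basic checks.

    `call_llm` is a stand-in for whatever model client you use: it takes a
    prompt and returns raw text. Here we expect JSON with a 'summary' field.
    """
    for attempt in range(max_attempts):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)               # structural check
        except json.JSONDecodeError:
            continue                               # malformed output: retry
        if not isinstance(parsed, dict):
            continue
        summary = parsed.get("summary", "")
        if not summary or len(summary) > 1_000:    # simple content checks
            continue
        return {"ok": True, "summary": summary, "attempts": attempt + 1}
    # Fail closed: surface the failure instead of passing bad output through.
    return {"ok": False, "summary": None, "attempts": max_attempts}
```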

Bias:

AI models are like students—they learn from the data they’re exposed to. And just as a person’s opinions are shaped by their experiences, these models form their ‘perspectives’ based on the datasets they’re trained on. But here’s the catch: if that data carries biases, the models will too.

As someone from a visible minority group, I’ve experienced firsthand how bias can shape perceptions and decisions in the real world. It’s a sobering reminder of the impact unconscious bias can have, whether in humans or in AI. The stakes are even higher with AI, especially as we entrust these systems with more autonomous decision-making and real-world influence. Addressing this issue isn’t just a technical challenge; it’s a moral imperative.

Evolution of UX:

When building AI solutions, we can’t just stick to the old playbook—we need to embrace new and evolving disciplines. Take prompt design, for example. It’s a whole new field that’s quickly becoming essential for shaping effective AI interactions. But this is just the beginning.

As AI continues to change how we ‘work’ with technology, we’ll likely see a transformation in User Experience (UX) and design disciplines too. Traditional approaches to UX will need to adapt, creating experiences that account for the unique, dynamic ways users interact with AI. It’s an exciting challenge: designing not just for clicks and swipes, but for collaboration and conversation with intelligent systems.
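To make the ‘prompt design’ piece concrete, here’s a small, hypothetical example of the kind of structure prompt designers reach for: role, context, constraints, and output format spelled out explicitly rather than left to chance. The claims-triage scenario is invented purely for illustration:

```python
# A hypothetical prompt template: role, context, constraints, and output
# format are made explicit rather than left to chance.
PROMPT_TEMPLATE = """You are a claims-triage assistant for a health insurer.

Context:
{claim_details}

Task: Classify the claim as ROUTINE, REVIEW, or ESCALATE.

Constraints:
- Use only the information provided above.
- If information is missing, answer REVIEW and list what is missing.

Respond as JSON: {{"decision": "...", "reasons": ["..."]}}"""

def build_prompt(claim_details: str) -> str:
    return PROMPT_TEMPLATE.format(claim_details=claim_details)
```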

Quality Assurance:

Quality Assurance (QA) is entering uncharted territory in the age of AI. When the outcomes of a system aren’t completely predictable, traditional QA strategies just don’t cut it anymore. We’re shifting from checking for zero defects to evaluating success as a probability—a percentage of expected outcomes rather than a fixed standard.

But here’s where it gets even more interesting: defining those 'expected outcomes' is no longer straightforward. Validation might mean collaborating with other AI systems to ensure accuracy and reliability. It’s not just about testing what’s right or wrong—it’s about navigating complexity, embracing probabilities, and evolving our approach to keep up with the dynamic nature of AI-driven solutions.
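Here’s one way to picture that shift, as a hypothetical evaluation harness (`run_model` and `meets_expectation` are stand-ins for the system under test and the chosen success criteria): rather than asserting that a single run is exactly right, you run the same case many times and compare the pass rate against a threshold:

```python
from typing import Callable

def evaluate_pass_rate(run_model: Callable[[str], str],
                       meets_expectation: Callable[[str], bool],
                       test_input: str,
                       runs: int = 50,
                       threshold: float = 0.95) -> bool:
    """Probabilistic QA: judge a non-deterministic system by its pass rate.

    `run_model` and `meets_expectation` are stand-ins: the first invokes the
    system under test, the second encodes whatever 'expected outcome' means
    for this case (it could itself be another model acting as a judge).
    """
    passes = sum(meets_expectation(run_model(test_input)) for _ in range(runs))
    pass_rate = passes / runs
    print(f"pass rate: {pass_rate:.0%} over {runs} runs (threshold {threshold:.0%})")
    return pass_rate >= threshold
```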

Human Agency as a Priority:

AI should augment, not replace, human decision-making. Systems must be designed to empower users with clear, explainable outputs and the ability to intervene or override when needed.

Ethical design principles demand that we avoid creating opaque, “black-box” solutions that take control out of human hands.  
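One simple way to bake that principle into a design, sketched here as a hypothetical routing rule rather than a prescribed pattern, is to hold any low-confidence or high-impact AI decision for human review by default:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float   # a calibrated confidence score, however you derive it
    rationale: str      # the explanation surfaced to the user, never hidden

def route_decision(decision: Decision, high_impact: bool,
                   confidence_floor: float = 0.9) -> str:
    """Keep a human in the loop for anything uncertain or consequential."""
    if high_impact or decision.confidence < confidence_floor:
        return f"HOLD for human review: {decision.action} ({decision.rationale})"
    return f"AUTO-APPROVE: {decision.action} ({decision.rationale})"
```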

As an optimist, I truly believe we’ll get most of this right—and where we stumble, we’ll learn quickly. But more importantly, I’m convinced we’re standing at the threshold of the most productive era in human history. Decades from now, future generations will look back at this moment as we do the dawn of computing—when the world began to change in ways no one could have fully imagined.

I can already picture my son, years from now, writing a paper for his technology history class. He’ll describe how these transformative years shaped the world, the economy, and the very fabric of humankind. And honestly? I can’t wait to see how he tells the story.

Don’t miss the next blog in this series, where my colleagues and I dive deeper into our unique perspectives on the ever-evolving world of AI we live in.