Long before the age of large language models (LLMs), humanity learned to use fire. Fire was a big innovation because it transformed the inedible into food. Fire also meant warmth and protection. This abundance allowed humanity to multiply and thrive.
Thousands of years later, we created rudimentary water-moving mechanisms like aqueducts that turned hours of labor into a much shorter walk to fetch water. Plumbing and water treatment followed. Eventually, this evolved into the modern-day phenomenon of turning on the tap to get drinkable water. Today, we don’t think too much about what a marvel it is to have on-demand clean water in our homes. But it’s the culmination of hundreds of individual innovations that provide us convenience and the ability to spend time doing more meaningful things than hauling water to cook for our families.
These stories might seem out of left field, but they’re relevant to today’s artificial intelligence (AI) conversation. Goldman Sachs research analysts estimate tech giants will spend $1 trillion on AI in the coming years, with mixed opinions on whether this investment will pay off. Understanding the foundation of AI advancements, similar to other innovations that have shaped humanity, can help us discern whether we need to invest heavily in advanced AI applications now or adopt a more measured approach.
AI is a vague term that has been attached to various software technologies throughout the decades. When we see a novel software innovation that mimics humanity’s capabilities with an air of magic, we think: Ah, that’s AI. But to truly appreciate AI’s advancements and effectively incorporate this technology in our organizations, it’s important to understand its foundation.
The magic of AI, as we understand it today, dates back to classical machine learning in the mid-20th century. Machine learning uses statistical methods to learn from data and make predictions, enabling inference engines with narrow focuses like identifying similar pictures, filtering spam emails, and detecting financial fraud. But machine learning had limitations: it couldn't learn across different domains, and it required highly structured inputs, which meant hiring people to carefully tag your data so the system could learn properly. Machine learning seemed like magic until we better understood the limits of statistical approaches.
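To make those "statistical methods" concrete, here is a minimal sketch of the kind of narrow classifier that era produced: a toy naive-Bayes-style spam filter. The training data, word-count model, and smoothing are all illustrative assumptions, not any particular production system.

```python
import math
from collections import Counter

# Toy training data: (message, label) pairs -- entirely illustrative.
TRAIN = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team tomorrow", "ham"),
]

def train(examples):
    """Count word frequencies per label -- the statistics behind classic spam filters."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Score a message by summing smoothed per-word log-evidence for each label."""
    vocab = len({w for c in counts.values() for w in c})
    scores = {}
    for label, words in counts.items():
        total = sum(words.values())
        scores[label] = sum(
            math.log((words[w] + 1) / (total + vocab)) for w in text.split()
        )
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "free prize money"))  # classified as spam with this toy data
```

Note how tightly scoped this is: the model knows nothing beyond the words it was shown, which is exactly the cross-domain limitation described above.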
In the 2010s, deep learning advanced machine learning further, leading to the highly parallelizable transformer architecture and, ultimately, LLMs. LLMs dazzled us with their generalized, human-like conversational capabilities. Suddenly, you no longer had to create a specialized model for every single use case: many applications can be served by a general-purpose model, and even more by tailoring an existing model rather than starting from scratch. Beyond generating inferences, we can now extrapolate from a body of content to create new content. We don't yet know how far this will go or which use cases will prove most suitable, but it's an exciting technology that has captured our imagination.
Despite the excitement, the commercialization of deep learning and LLMs is still early. Not all organizations are ready, and putting the most cutting-edge technology into production can be risky, with high costs and uncertain returns. If your organization doesn't have a solid foundation of excellence in deploying simpler solutions, that risk is compounded. Often, better, faster results can be achieved with less risk: why climb to the top of the tree to pluck an apple when many are easily reachable from the ground?
At its core, AI is about creating abundance for humanity by allowing us to accomplish tasks much more quickly and easily, much like fire and modern plumbing did.
It’s easy to feel like you’re falling behind if your organization hasn’t implemented the freshest and flashiest hyped-up AI technology. But I implore you to get grounded and look at the big picture when considering your AI strategy.
Using technologies like the LLMs that power generative AI requires you to get several things right before you can create real value for your users in production. Infrastructure and team costs can be high, and we're all still figuring out how to manage those costs while meeting users' expectations for heavy usage without causing financial strain.
Before you obsess over a specific technology, zoom out and look at the big picture. AI promises to give your users, whether internal or external, leverage: letting them do more than they could before, or giving them back time by making existing tasks faster and easier. It's a virtuous loop in which incremental improvements and tactically deployed innovations add up to more power.
Here are some questions to kickstart your thinking: where do your users lose the most time, which of those tasks follow predictable patterns, and could a simpler workflow improvement deliver the same value?
There is a time and a place for generative AI. But there may be simpler, cheaper, and more powerful improvements than adding “compose with AI” to every textbox in your product’s UI.
Finding the answers to these questions involves deeply understanding your users and their needs, which is always the foundation for building great product experiences.
One almost universal example of a user pain point is scheduling. People need to meet with each other to get things done. To do that, especially in today’s dispersed and highly remote world, we often must go back and forth to agree on a time to meet and then decide on a place (whether in-person or virtually). This tedious task happens at least weekly, if not daily, for many roles. There are patterns in the interaction to get a meeting on the calendar, such as sharing mutual availability. Sometimes, there are different patterns, like when scheduling interviews involving multiple people or arranging a series of back-to-back meetings. Maybe there are different roles with different constraints. However, each pattern can be encoded, and individual steps or the whole process can be streamlined.
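As a sketch of how one such pattern can be encoded, the snippet below intersects two people's free windows to find mutual availability. The calendars, names, and interval representation are hypothetical; real scheduling systems must also handle time zones, recurring events, and buffers.

```python
from datetime import datetime, timedelta

def mutual_availability(cal_a, cal_b):
    """Intersect two sorted lists of free (start, end) windows -- the core
    pattern behind finding a meeting time that works for both people."""
    slots, i, j = [], 0, 0
    while i < len(cal_a) and j < len(cal_b):
        start = max(cal_a[i][0], cal_b[j][0])
        end = min(cal_a[i][1], cal_b[j][1])
        if start < end:
            slots.append((start, end))
        # Advance whichever window ends first.
        if cal_a[i][1] < cal_b[j][1]:
            i += 1
        else:
            j += 1
    return slots

# Hypothetical free windows for two people on the same day.
day = datetime(2024, 6, 3)
alice = [(day + timedelta(hours=9), day + timedelta(hours=11)),
         (day + timedelta(hours=14), day + timedelta(hours=17))]
bob = [(day + timedelta(hours=10), day + timedelta(hours=15))]

print(mutual_availability(alice, bob))  # overlaps: 10:00-11:00 and 14:00-15:00
```

Once mutual availability is computed, the remaining back-and-forth collapses to picking one slot, which is exactly the kind of step-by-step streamlining the paragraph describes.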
Another example most of us have experienced is the pain of expense management. Whether it’s for personal or business finances, manually collecting receipts, categorizing expenses, and generating reports is time-consuming. However, similar to scheduling, common patterns emerge within expense management workflows, which can also be encoded and streamlined, such as recording each transaction’s date, amount, and category.
Simple automation can help categorize expenses based on past patterns and match receipts to corresponding credit card transactions to ensure accuracy. It can also generate detailed expense reports and provide insights into spending habits, helping individuals and organizations manage their finances more efficiently. This automation saves time and reduces the risk of errors, making the whole user experience (UX) much more manageable.
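A sketch of what that simple automation might look like: keyword rules categorize transactions based on past patterns, and receipts are matched to card transactions by amount and date proximity. The rule table, record shapes, and matching window are all assumptions for illustration, not any specific product's logic.

```python
from datetime import date

# Hypothetical category rules learned from past spending patterns.
CATEGORY_RULES = {"uber": "Travel", "starbucks": "Meals", "aws": "Software"}

def categorize(merchant):
    """Assign a category from known merchant keywords; default to 'Uncategorized'."""
    for keyword, category in CATEGORY_RULES.items():
        if keyword in merchant.lower():
            return category
    return "Uncategorized"

def match_receipt(receipt, transactions, window_days=3):
    """Match a receipt to a card transaction with the same amount
    within a few days of the receipt date."""
    candidates = [
        t for t in transactions
        if t["amount"] == receipt["amount"]
        and abs((t["date"] - receipt["date"]).days) <= window_days
    ]
    return candidates[0] if candidates else None

txns = [
    {"merchant": "Uber Trip", "amount": 23.50, "date": date(2024, 6, 3)},
    {"merchant": "Starbucks #42", "amount": 6.75, "date": date(2024, 6, 4)},
]
receipt = {"amount": 6.75, "date": date(2024, 6, 5)}

print(categorize(txns[0]["merchant"]))           # Travel
print(match_receipt(receipt, txns)["merchant"])  # Starbucks #42
```

Nothing here requires an LLM; deterministic rules cover the common cases cheaply, which is the article's point about reaching for the low-hanging apples first.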
Before reaching for a complex, expensive generative AI solution, consider whether you could invest in great UX that streamlines a workflow first. A dozen refinements streamlining workflows shipped with excellent UX can add up to the experience of magic that brings your users back and helps them work better. Now, that’s just like turning on the tap.
Looking to automate scheduling in your application? Connect with an expert to learn how Nylas saves you months of development time. Our calendar API lets you configure scheduling rules and automate booking workflows to make your users more productive. You can also read about the latest enhancements to the Nylas Scheduler, a modular, decomposable, and customizable scheduling solution for users in your app, which is now generally available.
Christine Spang is the Co-Founder & CTO of Nylas, where she leads engineering and technical strategy, with a focus on building, delivering, and scaling Nylas' services worldwide. A Forbes 30 Under 30 honoree, Christine has been recognized for her work with APIs, software development, and the greater technical community, and has spoken at numerous events such as Collision, PyCon, Web Summit, and TechCrunch Disrupt. Christine supports numerous organizations and causes that promote diversity and inclusivity within the technology and software development industry. She holds a degree in computer science from MIT.