
Everything About the Current State of AI (in Normal Terms)

As of December 2024

Two years ago, the release of GPT-3.5 felt like a major leap forward—an AI model that could not only answer questions but generate large chunks of coherent text with surprising clarity. It became a sensation almost overnight, spurring excitement, debate, and a race to see what else AI could accomplish. But as with any new technology, it wasn’t long before people started wanting more.


1. From Longer Context Windows to Smarter Models

Context windows—the amount of text an AI model can keep “in mind” at once—became the next big thing users clamored for. People wanted to feed the AI more information and receive bigger, more nuanced responses, whether that meant analyzing lengthy PDFs or summarizing entire ebooks.

But, of course, once you give people a bigger sandbox, they want better toys, too: smarter models capable of deeper reasoning and more intricate problem-solving. While GPT-3.5 was impressive, users (and developers) saw potential for it to be even more intelligent, handle more complicated tasks, and deliver more reliable insights.


2. Structured Outputs and the Birth of Function Calling

As people realized that AI could generate whole paragraphs of text, the question soon became: “What if AI could produce structured output?” Imagine an AI that returns not just a block of text but well-formatted JSON or CSV data that can be fed directly into a database or an application. That’s where function calling entered the scene. By letting AI models call specific functions—like “translate this text,” “add an event to my calendar,” or “analyze this spreadsheet”—we moved from pure text responses to AI actually doing things.
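To make that concrete, here is a minimal sketch of the pattern in plain Python. The tool name (add_calendar_event), its arguments, and the JSON shape are all hypothetical, and real systems define schemas in whatever format their model provider expects, but the flow is the same: the model emits structured JSON naming a function, and the application parses it and runs the matching code.

    import json

    # Illustrative only: a "tool" the model is told it may call, plus the
    # Python function that actually implements it. Names are hypothetical.
    def add_calendar_event(title: str, date: str) -> str:
        # A real implementation would talk to a calendar service.
        return f"Added '{title}' on {date}"

    TOOLS = {"add_calendar_event": add_calendar_event}

    # Suppose the model, instead of free-form prose, returns structured JSON:
    model_output = (
        '{"function": "add_calendar_event",'
        ' "arguments": {"title": "Dentist", "date": "2024-12-12"}}'
    )

    call = json.loads(model_output)
    func = TOOLS.get(call["function"])
    if func is not None:
        print(func(**call["arguments"]))  # Added 'Dentist' on 2024-12-12

The key shift is that the model’s output is machine-readable, so the application, not a human, acts on it.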

This shift opened the door to agents: mini AI modules or “bots” tasked with specific goals, such as handling emails, browsing the web, or debugging code. Once you start chaining these agents together, you get a system that can orchestrate multiple steps and accomplish tasks that once took entire teams or countless hours of manual work.
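The orchestration itself can be surprisingly simple. The sketch below chains three stub “agents” (hypothetical names; real agents would each call a model or an external service) so that each step consumes the previous step’s output:

    # A minimal sketch of agent chaining: each "agent" is a callable that
    # takes the previous step's output and returns its own. These stubs
    # only illustrate the orchestration pattern.

    def research_agent(topic: str) -> str:
        return f"Notes about {topic}: point A, point B"

    def summarize_agent(notes: str) -> str:
        return f"Summary of ({notes})"

    def email_agent(summary: str) -> str:
        return f"Draft email:\n{summary}"

    def run_pipeline(topic: str) -> str:
        # Chain the agents so each one consumes the previous result.
        result = topic
        for step in (research_agent, summarize_agent, email_agent):
            result = step(result)
        return result

    print(run_pipeline("quarterly sales"))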


3. Early Signals: GPT-3.5 and LLaMA

Around the same time GPT-3.5 was making waves, LLaMA (Meta’s family of openly available AI models) appeared, offering a glimpse at how powerful locally run models could be. Suddenly, the idea of AI being accessible to nearly anyone—without needing massive cloud servers—became more tangible. Developers started experimenting, connecting these models with broader workflows, and discovering new use cases beyond simple text generation.

Here at Levlex, we saw these signs early and began developing a comprehensive AI platform that could run on-device or connect to cloud services. Our vision: leverage structured outputs, function calling, and “agent-based” automation to build something that’s not just a chatbot, but a unified AI system.


4. The Rise of Vertical AI Solutions

In the year that followed, the market became saturated with vertical AI solutions: specialized tools that each target a single niche, such as AI copywriting assistants, coding copilots, meeting-note summarizers, and customer-support bots.

While these narrow-focus tools can be powerful, they often leave users juggling multiple AI subscriptions and apps. Each might handle a single part of a workflow, but none ties everything together in a single, cohesive system.


5. The Future: One Platform to Rule Them All

So, where is AI headed? The signs point to a single platform where you can do it all: automate repetitive tasks, work across your email, documents, and spreadsheets, run models locally or in the cloud, and pick up new capabilities as they emerge.

This future platform must be flexible (so that people can tailor it to their exact needs) and extensible (so developers can add new features or modules whenever a new AI capability emerges). In other words, it’s the next logical step beyond simple chatbots or single-purpose apps.


6. Enter Levlex

Levlex is our answer to this evolving AI landscape. Inspired by the early buzz around GPT-3.5, the push for longer context windows, the emergence of smarter AI functions, and the concept of agent-based automation, we started building Levlex about a year ago. Our goal: a one-stop AI system—not just a chatbot, but an entire environment where you can:

  1. Automate repetitive tasks (workflows, function calling, agent orchestration).
  2. Integrate AI into every corner of your digital world (email, spreadsheets, file management).
  3. Run AI locally or connect to external models (like GPT-4 or LLaMA), depending on your resources and privacy needs.
  4. Extend Levlex with new plugins or specialized features, just like adding apps to a smartphone (a rough sketch of this idea follows below).
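As a rough illustration of that last point, here is what a plugin-style registry could look like in Python. This is not Levlex’s actual API; every name here is hypothetical, and it only shows the general idea of registering new capabilities at runtime:

    # Hypothetical plugin registry, purely to illustrate "add features like
    # apps on a smartphone"; it is not Levlex's actual interface.
    from typing import Callable, Dict

    PLUGINS: Dict[str, Callable[[str], str]] = {}

    def register_plugin(name: str):
        """Decorator that adds a plugin function to the registry."""
        def wrapper(func: Callable[[str], str]) -> Callable[[str], str]:
            PLUGINS[name] = func
            return func
        return wrapper

    @register_plugin("summarize_file")
    def summarize_file(path: str) -> str:
        # A real plugin might read the file and call a local or remote model.
        return f"Summary of {path}"

    # The platform dispatches to whichever plugin the user (or an agent) asks for.
    print(PLUGINS["summarize_file"]("report.pdf"))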

At its heart, Levlex takes the best of what the AI community has pioneered—structured output, function calling, advanced model reasoning—and wraps it in a user-centric, on-device platform designed to grow and adapt.


Conclusion: All Roads Lead to Levlex

We’ve come a long way from the early days of GPT-3.5. The AI field is advancing rapidly, with new tools and breakthroughs emerging almost monthly. If there’s one lesson from this whirlwind, it’s that people don’t just want a chatbot—they want AI that integrates seamlessly into their workflows, automates grunt work, and adapts to every unique use case.

Levlex is built with that future in mind. It’s not just about answering questions—it’s about doing tasks, harnessing agents, leveraging structured outputs, and giving users control over how AI fits into their daily lives. The race is on, and we’re excited to see how users and organizations will push AI to its full potential with a single, customizable platform. If you’re as intrigued by the possibilities as we are, welcome—Levlex is here to help you explore them.