
Application-layer AI companies: the next big thing

The real value of AI is yet to come, created by those who use the technology and apply it to solving real-world problems. This is the essence of application-layer AI companies.

What is an application-layer AI company?

An application-layer AI company is one that leverages generative AI, such as large language models (LLMs), to solve real-world problems.

An example is a company using LLMs to build chatbots that answer support questions quickly and accurately, removing the need for human support agents.
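As a rough illustration (not any specific company's code), such a chatbot can be little more than a grounded prompt behind an ordinary function. This sketch assumes the OpenAI Python SDK; the docs snippet, model name, and helper are placeholders.

```python
# Minimal sketch of an LLM-backed support bot (assumes the OpenAI Python SDK).
# The docs snippet and helper below are illustrative, not production code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DOCS_SNIPPET = """Refunds: customers can request a refund within 30 days.
Shipping: orders ship within 2 business days."""

def answer_support_question(question: str) -> str:
    """Answer a customer question grounded in (hypothetical) product docs."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided docs. "
                                          "If the docs don't cover it, say so."},
            {"role": "user", "content": f"Docs:\n{DOCS_SNIPPET}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_support_question("Can I get a refund after two weeks?"))
```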

How will they “rule the world?”

Chamath Palihapitiya (VC, early exec at Facebook) explained it best: he likened LLMs to refrigerators and application-layer companies to Coca-Cola (source). The creators of the fridge certainly made a good fortune, but the food and beverage giants (Coca-Cola, Nestlé, etc.) used refrigeration technology to build an empire, serving food to billions of people worldwide.

To drive the point home, the global refrigeration market today is worth about $121 billion, compared to the food and beverage industry at roughly $7.2 trillion.

Or take the Internet. Most people don't know who created its core protocols (Vinton Cerf and Robert Kahn did, in 1974), but the internet goliaths of Google, Facebook, and Microsoft are now household names. They used the ability to communicate with any computer in the world to provide immense value in search, social, entertainment, consumer devices, and so much more.

A company’s valuation is ultimately a function of the value it provides to the world. So if AI can truly be used across industries to elevate productivity, the companies that become the biggest will be the ones that bring this technology to the masses.

Is it a bubble?

But hey James, what if AI is a bubble? Doesn't that mean it will crash and all this will go away?

Of course it's a bubble right now; have you seen NVIDIA's stock??

Jokes aside, every world-changing technology was once a bubble (take the dot-com bubble of the late 1990s). A new technology can be, and often is, overvalued at any given point, but if there is genuine underlying utility, its worth will only compound over time, and its market value will eventually converge on what it is truly worth.

People have likened NVIDIA to the businesses that made a fortune selling picks and shovels to prospectors during the California Gold Rush of 1848. NVIDIA is certainly profiting off the craze right now, but the long-term bet is the value created on top of the infrastructure they provide. And unlike the fabled riches most prospectors never found, I believe the value of AI to be highly promising.

Aren’t these companies just GPT wrappers?

So, does that mean you don't create your own models? Doesn't that just make you a GPT wrapper?

Ah, my favorite question. My favorite response has to be by Aravind Srinivas (CEO of Perplexity AI): "OpenAI is an Azure, NVIDIA wrapper. Venture capital is wrapper over people who actually have money" (source).

The question you should be asking instead is: what is the goal of an application-layer AI company? The goal of an infrastructure-layer company (such as OpenAI) is to produce the best model; the goal of an application-layer company is to use models as a means to solve a problem in the world.

In other words, LLMs are simply a tool amongst many others for an application-layer company. Usually, this means the products these companies build are 95% software and 5% AI. Don't get me wrong, that 5% of AI is enough to elevate the value of traditional software multi-fold, enabling these products to write content, understand language, generate images and videos, etc.

At StyleAI, an application-layer company ourselves, the way I like to describe how we leverage LLMs is that we use them in tiny amounts in hundreds, even thousands, of places across our products. Most people picture an AI product as one giant, omnipotent model making every decision, but in reality it's a deliberate harmony of software and AI.
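To illustrate that harmony (a sketch, not StyleAI's actual code), the pattern is usually a small, narrowly scoped LLM helper called from otherwise ordinary software. This assumes the OpenAI Python SDK; the helper and the publishing flow below are hypothetical.

```python
# Illustrative pattern only: one tiny, well-scoped LLM helper reused across
# ordinary software code paths (assumes the OpenAI Python SDK).
from openai import OpenAI

client = OpenAI()

def llm(instruction: str, text: str) -> str:
    """One small, narrowly scoped LLM call; the surrounding logic stays plain software."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content.strip()

# Plain software decides when and where to use AI...
def publish_article(title: str, body: str) -> dict:
    article = {"title": title, "body": body}
    if len(body) > 200:                                               # software rule
        article["summary"] = llm("Summarize in one sentence:", body)  # tiny AI call
    article["slug"] = title.lower().replace(" ", "-")                 # software
    return article
```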

Code changes

To give a concrete example, take our code changes feature, one of the core features in our SEO product, Seona. The technical structure of your website's code is an important ranking factor in search engine algorithms, so we've designed a feature that finds and fixes technical issues on your pages.

Here's how it works. Seona will:

  1. Crawl through your website pages, just like a search engine
  2. Scrape each page and identify any issues with the code (e.g. missing meta tags or improper heading structure)
  3. Generate suggestions, called "code changes," to fix said issues
  4. Apply the changes live to the site

As you can see, this feature completely automates the task of finding and fixing these technical website issues. The important thing to note is that steps 1, 2, and 4 are enabled entirely by software. Step 3, generating the code changes, is the only part made possible by LLMs: the model takes in the current page's context along with a set of best-practice rules and generates improved website code.
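A simplified sketch of that flow might look like the code below. It is not Seona's actual implementation: the issue checks, prompt, and omitted apply step are illustrative stand-ins, and it assumes the requests, BeautifulSoup, and OpenAI Python libraries.

```python
# Simplified sketch of the code-changes flow described above; not Seona's real
# implementation. Assumes requests, BeautifulSoup, and the OpenAI Python SDK.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()

def find_issues(html: str) -> list[str]:
    """Step 2 (pure software): detect technical SEO issues in the page code."""
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    if soup.find("meta", attrs={"name": "description"}) is None:
        issues.append("missing meta description")
    if len(soup.find_all("h1")) != 1:
        issues.append("page should have exactly one <h1>")
    return issues

def generate_code_change(html: str, issue: str) -> str:
    """Step 3 (the LLM part): suggest an HTML fix for one detected issue."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Fix this issue in the page's HTML: {issue}.\n"
                       f"Return only the corrected snippet.\n\nHTML:\n{html[:4000]}",
        }],
    )
    return response.choices[0].message.content

def optimize_page(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text        # step 1: crawl the page
    changes = [generate_code_change(html, issue)     # step 3: LLM suggestions
               for issue in find_issues(html)]       # step 2: software checks
    # Step 4 (software again): applying the changes to the live site is
    # product-specific and omitted here.
    return changes
```

Notice that only the step-3 function touches an LLM; everything around it is plain, deterministic software.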

Should you train your own models?

But why not create your own models? Doesn't that give you more defensibility?

You certainly can, but again, the decision to do so should be a strategic one. Think about the actual benefits of creating or fine-tuning your own model: it can improve accuracy, increase speed, and lower cost. And yes, if you achieve those, you have created more defensibility.

But in the last two years alone, we have seen tremendous strides in general-purpose LLMs (GPT-3 -> GPT-4 -> o1) that advanced on all three fronts. The rate at which these models have improved has been astounding, and it only seems to be accelerating.

I like to joke that by the time you've invested the resources and developed your own model for a specific use case, a newer version of GPT will have come out with better results, and it can be used well beyond your use case.

Do I think this growth rate will last? Personally, no; progress will eventually be constrained by compute and data limitations. But until development shows real signs of slowing down, for our case (and many others) it simply isn't strategic to invest resources in building a custom model. Again, if the benefits don't justify the costs, don't do it. Those resources are better spent on other parts of your product or business, in pursuit of the real goal: creating value by solving a problem.

Conclusion

Solving the world's problems is what technology is all about. And with great technology comes great value.

If you too believe in the future of application-layer AI companies, come check us out at StyleAI.