Cognition points
What are "cognition points," and how are they used to build the next generation of AI-enabled software applications?
What is a cognition point?
Software has already found its way into automating portions of many everyday tasks. Take marketing, for example, and more specifically programmatic advertising:
- Google has built an ad platform (Google Ads) on top of its search engine, which automatically pushes your ad content into other people's searches.
- CRMs and spreadsheets help marketers keep track of their ad performance.
- Website landing pages direct ad visitors to booking services for your business.
In between these software solutions there still lie points that involve human decision-making, places I call cognition points. For instance, coming up with the ideal Google Ads campaign structure requires a marketing expert to plan it out.
A cognition point has two characteristics:
- Requires a high level of thought and reasoning
- Has high leverage over the quality of the task’s outcome
Example
Sticking with Google Ads, one cognition point is planning your campaign structure. To do so, you'll want to determine which services your business offers, research which keywords to target for each service, allocate budgets between your campaigns, and create ad content tailored to your audience. As you can see, how you structure your campaign:
- Requires some "cognition", knowledge about your business, and experience with running Google Ads
- Has a big impact on your overall campaign performance
To date, these cognition points have required a human to step in, due to the complexity of the reasoning involved and how important they are to the final result. That's why entire industries are built on providing "decision-making" as a service, such as professional consulting in marketing, finance, and design.
My bet: automating these cognition points is where LLMs will begin to provide value, making these decisions on their own and eventually making them better than a person would.
LLMs will start by automating the simpler cognition points and gradually work their way up to the more complex ones.
Building for cognition points
Great, so how do I leverage LLMs to replace said cognition points?
The process can be broken down into four steps:
- Build the infrastructure
- Build your baseline model, and supplement it with an expert
- Dissect the theory behind the expert
- Iterate on your model, repeat
Let's walk through each step, using our example cognition point of planning a Google Ads campaign structure.
Step 1: Building the infrastructure
Before we can build our model, we first need to use software to lay down the infrastructure around the cognition point. This gives our model, as well as a user or expert, both the data it needs to make a decision and the means to apply that decision.
For our Google Ads example, this means integrating our application with the Google Ads API, allowing us to create campaigns and ad groups, target keywords, and upload ad copy. We will also collect information about the business we are running ads for by building a database that stores relevant business info, such as which services it offers, who its target audience is, etc. Finally, we'll build a user-facing dashboard, allowing users to edit the campaign structure that is generated.
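To make this concrete, here's a minimal sketch of the kind of data that infrastructure might store. The types and field names (`BusinessProfile`, `CampaignPlan`, etc.) are my own illustrative assumptions, not the actual schema behind the product described here.

```python
from dataclasses import dataclass, field

# NOTE: types and field names below are illustrative assumptions, not a real schema.

@dataclass
class BusinessProfile:
    """Relevant business info collected before planning campaigns."""
    name: str
    services: list[str]        # e.g. ["lawn care", "tree trimming"]
    target_audience: str       # e.g. "homeowners in Austin, TX"
    monthly_budget_usd: float

@dataclass
class AdGroupPlan:
    """One ad group inside a search campaign, targeting a single service."""
    service: str
    keywords: list[str] = field(default_factory=list)
    headlines: list[str] = field(default_factory=list)

@dataclass
class CampaignPlan:
    """The editable campaign structure surfaced in the user-facing dashboard."""
    campaign_name: str
    daily_budget_usd: float
    ad_groups: list[AdGroupPlan] = field(default_factory=list)
```

The important design choice is that the model, the expert, and the dashboard all read and write the same editable structure.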
As you will see in the later steps, investing in this infrastructure allows users and experts to alter the model's output, saving them time while still producing effective results.
Step 2: Baseline model supplemented by an expert
Once the infrastructure is built, it's time to take your first crack at building a model! Apply the 80-20 rule, and use an LLM with a simple prompt as your first version.
Common prompting strategies include:
- Telling the LLM to adopt a persona
- ex. "You are a Google Ads expert, tasked with outputting the ideal ads campaign structure for the given business."
- Adding examples
- ex. "Below are examples of good campaign structures to follow."
Here comes the beauty of building the infrastructure first: even if your model isn't up to par, you can have a subject expert tweak the results. Your model acts as a "first pass," saving the expert time since they only have to make adjustments to its output.
In our Google Ads example, we used OpenAI's o1 model to plan out the campaign structure. In the prompt, we fed in the services the business offers, instructing the model to create a search campaign with an ad group targeting each service. We then used GPT-4o to fill out the target keywords for each ad group.
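A rough sketch of that two-stage setup is below, again assuming the OpenAI Python SDK. The model names follow the ones mentioned above; the prompt wording and function names are illustrative only.

```python
from openai import OpenAI

client = OpenAI()

def plan_campaign_structure(services: list[str]) -> str:
    """Stage 1: a reasoning model drafts one search campaign with an ad group per service."""
    prompt = (
        "Plan a Google Ads search campaign for a business offering these services:\n"
        + "\n".join(f"- {s}" for s in services)
        + "\nCreate one ad group targeting each service."
    )
    response = client.chat.completions.create(
        model="o1",  # reasoning model handles the structural planning step
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def fill_keywords(ad_group_service: str) -> str:
    """Stage 2: a cheaper model fills in target keywords for a single ad group."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"List 10 target keywords for a search ad group about: {ad_group_service}",
        }],
    )
    return response.choices[0].message.content
```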
And voila! Your model, combined with the expert's adjustments, can now make effective decisions about the cognition point you're working to replace.
At first, the subject expert may still end up doing the majority of the thinking and decision-making. That's okay! Over time, they'll do less and less, until eventually the model completely replaces, and even surpasses, the expert's reasoning abilities.
Step 3: Dissect the theory behind the expert
After experimenting with the model-and-expert combination, we'll want to adapt our model to take on more and more of the work. The key here is paying attention to what data the expert takes in and how they use it to reach their conclusion. You'll want to break down their line of thinking and dissect their reasoning into separate steps an LLM can replicate.
This is arguably the most difficult part of the process and will require some trial and error. Oftentimes more types of data will be needed, which means more infrastructure to build back in step 1.
In the case of mapping out our Google Ads campaign, we realized that experts also rely heavily on the Google Keyword Planner for data about keywords (such as cost per click, monthly clicks, etc.) as well as on previous ad campaign performance. We therefore added those integrations to our infrastructure so that our model can take that data into account as well.
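As an illustration of what that decomposition can look like inside the model, here's a sketch where keyword metrics are injected into the prompt before the model chooses keywords. The `fetch_keyword_metrics` helper and its sample return values are hypothetical stand-ins for the Keyword Planner integration, not a real API.

```python
from openai import OpenAI

client = OpenAI()

def fetch_keyword_metrics(service: str) -> list[dict]:
    """Hypothetical helper: in practice this would wrap the Keyword Planner
    integration; hardcoded sample values here only keep the sketch runnable."""
    return [
        {"keyword": f"{service} near me", "cpc": 2.40, "monthly_clicks": 320},
        {"keyword": f"best {service}", "cpc": 1.85, "monthly_clicks": 150},
    ]

def pick_keywords_with_metrics(service: str, monthly_budget_usd: float) -> str:
    """One step broken out of the expert's reasoning: choose keywords using
    real metrics rather than the model's intuition alone."""
    metrics = fetch_keyword_metrics(service)
    metrics_table = "\n".join(
        f"- {m['keyword']}: cpc=${m['cpc']:.2f}, monthly_clicks={m['monthly_clicks']}"
        for m in metrics
    )
    prompt = (
        f"Service: {service}\nMonthly budget: ${monthly_budget_usd}\n"
        f"Keyword data:\n{metrics_table}\n"
        "Pick the keywords with the best balance of volume and cost for this budget, "
        "and briefly justify each choice."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```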
Step 4: Iterate on your model, repeat
This step is pretty self-explanatory: keep improving your model so it mimics the expert's line of thinking more and more closely. That can be as simple as refining the prompt, adding more data inputs to the model, or adding more steps to support the reasoning.
There's also another driving force that will help elevate the automation of cognition points over time: advancements in the LLMs themselves. As better and more powerful general-purpose LLMs are released (e.g., o3 from OpenAI), you can easily slot them in and leverage their reasoning improvements to address your use cases more effectively.
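One design choice that makes this kind of iteration cheaper (my own suggestion, not something prescribed above) is to keep model names and prompt versions in configuration rather than hardcoded in the pipeline, so a newer model can be slotted in and compared without touching the pipeline code. A minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class CognitionPointConfig:
    """Tunable knobs for one automated cognition point (illustrative only)."""
    planning_model: str = "o1"    # swap in a newer reasoning model when released
    keyword_model: str = "gpt-4o"
    prompt_version: str = "v3"    # lets you compare old vs. new prompts against expert edits

# Slotting in a newer model becomes a config change, not a code change:
next_version = CognitionPointConfig(planning_model="o3", prompt_version="v4")
```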
Conclusion
Cognition points are just one of many ways to describe how AI and LLMs can provide value to the world through their reasoning capabilities.
If you're wondering how I'm so familiar with the specific cognition point of running Google Ads, it's because we're working on it and many others at StyleAI, the future "brain" of marketing for businesses.