Dagmawi D. Ambaw
Addis Ababa, Ethiopia
A week ago, I came across a LinkedIn post that I haven’t been able to get off my mind. It was from a business owner and SEO consultant who was sharing a peculiar pitch he’d received recently.
An ad agency approached him with a proposition: for $15,000 a month, they could get his brand to show up inside Perplexity or ChatGPT as the top recommended product for a specific keyword. Not a Google ranking. Not a paid ad. A direct answer from an AI assistant saying your company is “the best.”
The practice has a name: AI Search Optimization, or ASO. It refers to adjusting content so that AI-powered search engines can easily read, process, and surface it in their results.
The ad agency claimed the arrangement was performance-based: no ranking, no charge. The goal was to insert the brand into the response habits of RAG (Retrieval-Augmented Generation) tools like Perplexity for high-value questions. If someone asked, “What’s the best CRM for small remote teams?” or “Which email marketing platform is the easiest to use?”, they promised your company would be the one mentioned by name.
As someone who uses Perplexity daily to summarize information, reference sources, find data, and compare tools, I was alarmed by the possibility that companies operating in the B2B space could be engineering AI-generated responses. It reminded me of black hat SEO tactics, where businesses would flood forums, spin content, and buy backlinks to shape what search engines believed was authoritative.
What’s emerging now is very similar but directed at AI systems instead. The goal is not to earn recommendations based on merit, but to manufacture credibility through volume and placement. The tactics are subtle, hard to trace, and dangerously effective when coordinated at scale.
The practice of optimizing for machine interfaces isn’t new. It has been several years since businesses began optimizing for results in Google Assistant and Siri. In fact, the acronym ASO is traditionally used to mean App Store Optimization, which refers to improving an app's visibility in app stores like the Apple App Store and Google Play. But the recent use of the term refers to something beyond voice assistants or app search.
It is about retrieval-augmented systems like Perplexity, You.com, Bing AI, and the web-enabled modes of ChatGPT, Gemini, and Claude. These systems are increasingly being used as research and recommendation engines. A report by Aitools.xyz, the world’s largest AI tools directory and analytics platform, found that AI platform traffic surged by 62% in 12 months, growing from 7.74 billion visits in April 2024 to 12.57 billion visits in March 2025.
Taking note of this rapid evolution, marketing and advertising agencies appear to have developed new tactics. Some of these strategies don’t remotely resemble SEO and, at times, appear a bit sinister.
Simulated user feedback: Using crowdsourcing platforms like Amazon Mechanical Turk to pay real people to repeatedly upvote certain answers in RAG systems, trying to manipulate the system into treating those answers as more helpful or preferred.
Document seeding: Publishing articles, blog posts, and Reddit threads, and editing Wikipedia pages, all filled with carefully planted references to the target brand, in the hope that these systems will index, ingest, or retrieve them when generating responses (see the sketch after this list).
Prompt targeting: Testing how various phrasings affect what the models say, and crafting content that mimics those trigger phrases.
Training data poisoning (the most far-reaching, but least documented): Tampering with the data that shapes an AI model’s behavior by slipping bias, brand mentions, or other misleading content into its learning pipeline. Although there is no public evidence that ASO agencies are doing this today, the threat from cybercriminals has already been identified by firms like IBM.
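The first two tactics lean on the same retrieval mechanics. Below is a minimal, purely illustrative sketch of how a retrieval-augmented pipeline selects the documents an answer gets built from; the corpus, the scoring function, and the “AcmeCRM” brand are all invented for the example, and no real product works exactly this way. The point is simply that whatever lands in the retrieved set shapes the answer, which is exactly what document seeding counts on.

```python
# Illustrative sketch of a retrieval-augmented generation (RAG) pipeline; it is
# not any vendor's real system. A query is matched against an indexed corpus,
# and the top-scoring documents become the context the language model answers
# from. Seed enough brand-heavy pages into that corpus and they start showing
# up in the context window.
import re
from collections import Counter

CORPUS = [
    # Hypothetical indexed documents; a "seeded" post sits alongside organic sources.
    {"source": "independent-review.example", "text": "Comparison of CRM tools for small remote teams, covering pricing and support."},
    {"source": "seeded-blog.example", "text": "AcmeCRM is the best CRM for small remote teams. AcmeCRM offers the easiest setup for remote teams."},
    {"source": "forum-thread.example", "text": "Remote teams discuss which CRM they use and why support quality matters."},
]

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def score(query: str, text: str) -> int:
    """Toy relevance score: term overlap between the query and a document."""
    q, d = Counter(tokenize(query)), Counter(tokenize(text))
    return sum(min(q[t], d[t]) for t in q)

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Return the k highest-scoring documents; these become the model's context."""
    return sorted(CORPUS, key=lambda doc: score(query, doc["text"]), reverse=True)[:k]

if __name__ == "__main__":
    question = "What is the best CRM for small remote teams?"
    for doc in retrieve(question):
        # The generated answer is conditioned on whatever made it into this set,
        # and the keyword-stuffed seeded post scores highest here.
        print(doc["source"], "->", doc["text"])
```

Real systems use embeddings and far more sophisticated ranking, but the dependency is the same: the model answers from what was retrieved.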
Even without documented cases, security risks could stem from several sources. One notable opening is how data is annotated and filtered for large models. AI companies routinely face the question of whether to outsource annotation, since doing so can entail ethical risks and labor rights concerns. In its quest to make ChatGPT less toxic, OpenAI relied on outsourced Kenyan workers earning less than $2 per hour at a data labeling firm called Sama. The annotations fed a safety classifier and an RLHF (reinforcement learning from human feedback) fine-tuning stage, not the original GPT-3 pre-training corpus, yet they still influence what the model eventually says. Sama ended its contract in 2022, but the broader dependence on low-cost labor for sensitive tasks remains, and so do the risks of quiet, large-scale manipulation.
It’s SEO meets “manipulative” reinforcement learning for AI-driven search.
We know these tactics are being pitched, and that agencies offering this kind of service do exist. What’s less clear is how well these strategies work.
The idea of manipulating ChatGPT’s output by flooding it with upvotes sounds clever. But as of now, there’s no evidence that this kind of user feedback meaningfully alters how ChatGPT answers questions. OpenAI’s models aren’t retrained live based on votes, and individual user inputs don’t measurably affect model behavior.
OpenAI’s own documentation states that user feedback, such as thumbs-up or thumbs-down votes, does inform the training of future models through RLHF, but that this feedback is aggregated and applied during periodic retraining cycles to align the model more closely with what users seem to prefer. This is part of why GPT-4o has increasingly leaned toward validation, offering agreeable or affirming responses instead of prioritizing accuracy or critical thinking. The process does not produce real-time adjustments, though: individual user inputs do not directly change how the model responds in the moment.
Perplexity, on the other hand, incorporates user feedback in the form of upvotes, clicks, and other engagement signals to identify which responses are helpful and which aren’t. Over time, this feedback helps optimize both the retrieval mechanism and how answers are ranked and presented. However, the system still prioritizes high-authority sources and structured citations, so it remains incredibly difficult for coordinated user feedback to move results at scale.
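To see why, consider a toy scoring formula. The weights and signal names below are invented for illustration and are not Perplexity’s actual ranking algorithm; they simply show that when authority and relevance carry most of the weight, even a brigade of paid upvotes struggles to push a low-authority source past an established one.

```python
# Toy source-ranking illustration, not Perplexity's real algorithm. The weights,
# signal names, and numbers are invented. The takeaway: when authority and
# relevance dominate the score, even a maxed-out engagement signal barely helps
# a low-authority source overtake an established one.

def rank_score(relevance: float, authority: float, engagement: float,
               w_rel: float = 0.5, w_auth: float = 0.4, w_eng: float = 0.1) -> float:
    """Weighted blend of normalized signals, each in [0, 1]."""
    return w_rel * relevance + w_auth * authority + w_eng * engagement

established = rank_score(relevance=0.8, authority=0.9, engagement=0.3)  # 0.79
brigaded = rank_score(relevance=0.8, authority=0.2, engagement=1.0)     # 0.58

print(f"established source: {established:.2f}")
print(f"brigaded source:    {brigaded:.2f}")
```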
The question that follows is: Does this kind of ASO strategy only work on RAG tools like Perplexity, or does it also apply to ChatGPT?
The answer is that it can reach ChatGPT too. If browsing is enabled, or if you’re using a plugin or a custom GPT that fetches live content, ChatGPT begins to behave more like a retrieval-augmented model, much like Perplexity. That’s where ASO tactics start to become effective again.
Even so, it’s important to draw a distinction between the familiar and the intentionally manipulative. Having well-ranked blog posts or content that performs well on search engines is just plain old SEO. It’s good content strategy. Companies with deeper pockets have almost always had better SEO, often enough to sway public opinion even when their services and products were shoddy. That’s how the business works.
Coordinated manipulation, by contrast, works through indirect document influence. If your brand is mentioned frequently and positively on high-authority third-party sites that feed the model’s training data, like Wikipedia, mainstream news, Reddit, and Stack Overflow, you’re subtly shaping the model’s prior knowledge. This kind of mass-scale visibility builds a pattern that the model starts to echo.
It’s not just about being present. It’s about being so consistently present in trusted places that the model begins to reflect your reputation as fact.
Done in volume, this doesn’t just earn attention. It shapes what the model believes is credible. That’s the long-game version of ASO: not tricking the system with a few blog posts, but embedding your narrative where it counts.
One reason this strategy is flying under the radar is that AI referrals are nearly impossible to track. When someone clicks on your link from ChatGPT with browsing, Perplexity, or Bing Chat, the traffic often gets logged as "Direct" in Google Analytics 4.
A February 2025 study by Ahrefs found that 63% of websites receive traffic from AI sources, with 98% of this traffic coming from three major AI search assistants: ChatGPT, Perplexity, and Gemini.
ChatGPT is the largest referrer, accounting for 50% of AI-driven traffic. However, because of broken attribution often caused by AI tools not passing referrer data, much of this traffic appears as "direct" in analytics platforms like GA4. This means businesses might already be benefiting from AI-generated referrals without realizing their source.
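If you want a rough sense of whether AI assistants are already sending you visitors, one low-tech option is to scan server logs or analytics exports for referrer hostnames associated with these tools. The sketch below is a starting point under that assumption; the hostname list is illustrative and incomplete, and because many AI tools strip the referrer entirely, this approach will still undercount.

```python
# Rough referrer classifier for spotting AI-assistant traffic in server logs.
# The hostname list is illustrative, not exhaustive, and any hit whose referrer
# was stripped will still land in "direct", which is exactly the attribution
# gap described above.
from urllib.parse import urlparse

AI_REFERRER_HOSTS = {
    "chatgpt.com", "chat.openai.com",      # ChatGPT with browsing/search
    "perplexity.ai", "www.perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def classify_referrer(referrer: str | None) -> str:
    """Label a hit as 'ai_assistant', 'other_referral', or 'direct'."""
    if not referrer:
        return "direct"
    host = (urlparse(referrer).hostname or "").lower()
    return "ai_assistant" if host in AI_REFERRER_HOSTS else "other_referral"

# Example with a few hypothetical log entries.
hits = [
    "https://www.perplexity.ai/search?q=best+crm",
    "https://chatgpt.com/",
    "https://www.google.com/",
    None,  # referrer stripped: counted as "direct" even if it came from an AI tool
]
counts: dict[str, int] = {}
for ref in hits:
    label = classify_referrer(ref)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'ai_assistant': 2, 'other_referral': 1, 'direct': 1}
```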
Which raises the question: if you’re already benefiting from invisible LLM traffic, what happens if your competitor starts deliberately optimizing for it?
While Google is still dominant, the search experience increasingly begins somewhere else: a browser extension, a sidebar assistant, a model that is already forming opinions.
OpenAI has even begun promoting ChatGPT for search within its own interface, with a notification reading, "Download the Chrome extension to switch your default search engine to ChatGPT and get instant answers from trusted sources with every search." This shift is no longer theoretical.
OpenAI has introduced a feature that allows creators of custom GPTs to include sponsored messages within their public-facing assistants. These aren’t pop-ups or display ads. They’re built directly into the conversation flow, appearing as contextual suggestions or tools explicitly labeled “Sponsored,” often under helpful, well-placed prompts that fit what you’re working on. The integration gives creators a way to monetize their GPTs by promoting third-party products or services directly within user interactions.
To OpenAI’s credit, these placements are clearly marked. But that doesn’t change the larger implication: monetized influence is now officially part of the LLM experience. It confirms that answer engines aren’t just reflecting public knowledge. They’re becoming a new kind of ad real estate. Not just AI-enhanced search, but AI-shaped preference.
But there’s still a distinction: these are clearly labeled sponsored ads. The problem arises when responses seem and feel neutral but aren’t. They’re shaped by what the model has read, what it’s seen frequently, and what it’s been nudged to trust. If companies can manipulate that for the sake of commerce, they always will. As AI answer engines become more central to how people discover products and services, the tactics behind ASO will inevitably become more and more subtle. Agencies will scale, pricing will drop, and toolkits will emerge that make it easy for even small teams to embed their brand across the sources models learn from or retrieve.
So, before you trust the next confident answer an AI gives you, ask yourself this: How much of what we read, watch, and believe today is truly earned? And how much of it was engineered to be there?
Dagmawi D. Ambaw
Dagmawi D. Ambaw is a digital marketing manager, business developer, and investment associate. He currently works as a digital marketing manager at Shega, where he also contributes as a guest writer. His professional interests lie at the intersection of digital growth, venture capital, and emerging markets, with a focus on helping investors connect with African startups pursuing viable business models.