{"id":187995,"date":"2025-05-20T10:27:01","date_gmt":"2025-05-20T15:27:01","guid":{"rendered":"https:\/\/ahrefs.com\/blog\/?p=187995"},"modified":"2025-08-26T09:49:07","modified_gmt":"2025-08-26T14:49:07","slug":"black-hat-llmo","status":"publish","type":"post","link":"https:\/\/ahrefs.com\/blog\/black-hat-llmo\/","title":{"rendered":"LLMO Is in Its Black Hat&nbsp;Era"},"content":{"rendered":"<div class=\"intro-txt\">We\u2019ve seen this before. A&nbsp;new technology rises. Visibility becomes a new currency. And people\u2014<em>ahem, <\/em>SEOs<em>\u2014<\/em>rush to game the system.&nbsp;<\/div>\n<p>That\u2019s where we are with optimizing for visibility in LLMs (LLMO), and we need more experts to call out this behavior in our industry, like Lily Ray has done in <a href=\"https:\/\/www.linkedin.com\/posts\/lily-ray-44755615_seo-sge-ai-activity-7327723417573453824-OYga?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACCeNrQBZAak_5kRDCf8MZOzqmX2OTiFvvQ\">this post<\/a>:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"966\" height=\"683\" class=\"wp-image-187996\" src=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/lily-rays-linkedin-post-highlighting-the-quote.png\" alt=\"Lily Ray's LinkedIn post highlighting the quote &quot;'Optimizing the answers' is now the name of the SEO game, and I'm already seeing so many companies and SEOs start to do this in shady and questionable ways.&quot;\" srcset=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/lily-rays-linkedin-post-highlighting-the-quote.png 966w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/lily-rays-linkedin-post-highlighting-the-quote-601x425.png 601w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/lily-rays-linkedin-post-highlighting-the-quote-768x543.png 768w\" sizes=\"auto, (max-width: 966px) 100vw, 966px\"><\/p>\n<p>If you\u2019re tricking, sculpting, or manipulating a large language model to make it notice and mention you more, there\u2019s a big chance it\u2019s black&nbsp;hat.<\/p>\n<p>It\u2019s like 2004 SEO, back when keyword stuffing and link schemes worked a little too&nbsp;well.<\/p>\n<p>But this time, we\u2019re not just reshuffling search results. 
We're shaping the foundation of knowledge that LLMs draw from.

## What "black hat" looks like for LLM optimization

In tech, *black hat* typically refers to tactics that manipulate systems in ways that may work temporarily but go against the spirit of the platform, are unethical, and often backfire when the platform catches up.

Traditionally, [black hat SEO](https://ahrefs.com/seo/glossary/black-hat-seo) has looked like:

- Putting white keyword-spammed text on a white background
- Adding hidden content to your code, visible only to search engines
- Creating private blog networks just for linking to your website
- Improving rankings by purposely harming competitor websites
- And more…

It became a thing because, although spammy, it worked for many websites for over a decade.

Black hat LLMO looks different from this. And a lot of it doesn't *feel* immediately spammy, so it can be hard to spot.

However, black hat LLMO is also based on the intention of unethically manipulating language patterns, LLM training processes, or datasets for selfish gain.

Here's a side-by-side comparison to give you an idea of what black hat LLMO could include.
It's not exhaustive and will likely evolve as LLMs adapt and grow.

**Black Hat LLMO vs Black Hat SEO**

| Tactic | SEO | LLMO |
| --- | --- | --- |
| Private blog networks | Built to pass link equity to target sites. | Built to artificially position a brand as the "best" in its category. |
| Negative SEO | Spammy links are sent to competitors to lower their rankings or penalize their websites. | Downvoting LLM responses with competitor mentions or publishing misleading content about them. |
| Parasite SEO | Using the traffic of high-authority websites to boost your own visibility. | Artificially improving your brand's authority by being added to "best of" lists…that you wrote. |
| Hidden text or links | Added for search engines to boost [keyword density](https://ahrefs.com/seo/glossary/keyword-density) and similar signals. | Added to increase entity frequency or provide "LLM-friendly" phrasing. |
| Keyword stuffing | Squeezing keywords into content and code to boost density. | Overloading content with [entities or NLP terms](https://ahrefs.com/seo/glossary/entity-based-seo) to boost "salience". |
| Automatically-generated content | Using spinners to reword existing content. | Using AI to rephrase or duplicate competitor content. |
| Link building | Buying links to inflate ranking signals. | Buying brand mentions alongside specific keywords or entities. |
| Engagement manipulation | Faking clicks to boost search click-through rate. | Prompting LLMs to favor your brand; spamming RLHF systems with biased feedback. |
| [Spamdexing](https://ahrefs.com/seo/glossary/spamdexing) | Manipulating what gets indexed in search engines. | Manipulating what gets included in LLM training datasets. |
| Link farming | Mass-producing backlinks cheaply. | Mass-producing brand mentions to inflate authority and sentiment signals. |
| Anchor text manipulation | Stuffing exact-match keywords into link anchors. | Controlling sentiment and phrasing around brand mentions to sculpt LLM outputs. |
class=\"column-1\">Anchor text manipulation<\/td><td class=\"column-2\">Stuffing exact-match keywords into link anchors.<\/td><td class=\"column-3\">Controlling sentiment and phrasing around brand mentions to sculpt LLM outputs.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<!-- #tablepress-418 from cache -->\n<p>These tactics boil down to three core behaviors and thought processes that make them \u201cblack hat\u201d.<\/p>\n<h2><a id=\"post-187995-_t0hc9nqbezio\"><\/a><div class=\"post-nav-link clearfix\" id=\"section1\"><a class=\"subhead-anchor\" data-tip=\"tooltip__copielink\" rel=\"#section1\"><svg width=\"19\" height=\"19\" viewBox=\"0 0 14 14\" style><g fill=\"none\" fill-rule=\"evenodd\"><path d=\"M0 0h14v14H0z\" \/><path d=\"M7.45 9.887l-1.62 1.621c-.92.92-2.418.92-3.338 0a2.364 2.364 0 0 1 0-3.339l1.62-1.62-1.273-1.272-1.62 1.62a4.161 4.161 0 1 0 5.885 5.884l1.62-1.62L7.45 9.886zM5.527 5.135L7.17 3.492c.92-.92 2.418-.92 3.339 0 .92.92.92 2.418 0 3.339L8.866 8.473l1.272 1.273 1.644-1.643A4.161 4.161 0 1 0 5.897 2.22L4.254 3.863l1.272 1.272zm-.66 3.998a.749.749 0 0 1 0-1.06l2.208-2.206a.749.749 0 1 1 1.06 1.06L5.928 9.133a.75.75 0 0 1-1.061 0z\" style \/><\/g><\/svg><\/a><div class=\"link-text\"> 1. Manipulating LLM training processes&nbsp;<\/div><\/div><\/h2>\n<p>Language models undergo different training processes. Most of these happen before models are released to the public; however, some training processes are influenced by public users.<\/p>\n<p>One of these is Reinforcement Learning from Human Feedback (RLHF).<\/p>\n<p>It is an artificial intelligence learning method that uses human preferences to reward LLMs when they deliver a good response and penalize them when they provide a bad response.<\/p>\n<p>OpenAI has a great diagram for <a href=\"https:\/\/openai.com\/research\/instruction-following\">explaining how RLHF works for InstructGPT<\/a>:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1952\" height=\"1158\" class=\"wp-image-187997\" src=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/how-reinforcement-learning-from-human-feedback-wor.png\" alt=\"How Reinforcement Learning from Human Feedback works for InstructGPT.\" srcset=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/how-reinforcement-learning-from-human-feedback-wor.png 1952w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/how-reinforcement-learning-from-human-feedback-wor-680x403.png 680w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/how-reinforcement-learning-from-human-feedback-wor-768x456.png 768w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/how-reinforcement-learning-from-human-feedback-wor-1536x911.png 1536w\" sizes=\"auto, (max-width: 1952px) 100vw, 1952px\"><\/p>\n<p>LLMs using RLHF learn from their direct interactions with users\u2026 and you can probably already see where this is going for black hat&nbsp;LLMO.<\/p>\n<p>They can learn&nbsp;from:<\/p>\n<ul>\n<li>The actual conversations they have (including historical conversations)<\/li>\n<li>The thumbs-up\/down ratings that users give for responses<\/li>\n<li>The selection a user makes when the LLM presents multiple options<\/li>\n<li>The user\u2019s account details or other personalized data that the LLM has access to<\/li>\n<\/ul>\n<p>For example, here\u2019s a conversation in ChatGPT that indicates it learned (and subsequently adapted future behavior) based on the direct conversation it had with this&nbsp;user:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1078\" 
height=\"919\" class=\"wp-image-187998\" src=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/screenshot-of-a-chatgpt-conversation-where-it-lear.png\" alt=\"Screenshot of a ChatGPT conversation where it learned personal information about the user.\" srcset=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/screenshot-of-a-chatgpt-conversation-where-it-lear.png 1078w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/screenshot-of-a-chatgpt-conversation-where-it-lear-499x425.png 499w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/screenshot-of-a-chatgpt-conversation-where-it-lear-768x655.png 768w\" sizes=\"auto, (max-width: 1078px) 100vw, 1078px\"><\/p>\n<p>Now, this response has a few problems: the response contradicts itself, the user didn\u2019t mention their name in past conversations, and ChatGPT can\u2019t use reason or judgment to accurately pinpoint where or how it learned the user\u2019s name.<\/p>\n<p>But the fact remains that this LLM learned something it could not have through training data and search alone. It could only learn it from its interaction with this&nbsp;user.<\/p>\n<p>And this is exactly why it\u2019s easy for these signals to be manipulated for selfish gain.<\/p>\n<p>It\u2019s certainly possible that, similarly to how Google uses a \u201c<a href=\"https:\/\/ahrefs.com\/seo\/glossary\/ymyl-pages\">your money, your life<\/a>\u201d classification for content that could cause real harm to searchers, LLMs place more weight on specific topics or types of information.<\/p>\n<p>Unlike traditional Google search, which had a significantly smaller number of ranking factors, LLMs have illions (millions, billions, or trillions) of parameters to tune for various scenarios.<\/p>\n\n<table id=\"tablepress-419\" class=\"tablepress tablepress-id-419 tablepress-responsive tablepress-ahrefs-width-full\">\n<thead>\n<tr class=\"row-1\">\n\t<th class=\"column-1\">Model<\/th><th class=\"column-2\">Parameters<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr class=\"row-2\">\n\t<td class=\"column-1\">GPT-1<\/td><td class=\"column-2\"><a href=\"https:\/\/ipspecialist.net\/the-history-of-chatgpt-from-gpt-1-to-gpt-4\/#:~:text=GPT%2D1%3A%20GPT%2D1%20was%20the%20first%20edition%20of%20the%20GPT%20family%20of%20language%20models%2C%20released%20in%202018.%20It%20had%20117%20million%20parameters%20and%20was%20trained%20on%20a%20large%20corpus%20of%20text%20data.%20Despite%20its%20tiny%20size%20in%20comparison%20to%20later%20versions%2C%20GPT%2D1%20performed%20well%20in%20natural%20language%20processing%20tasks.\">~117 million<\/a><\/td>\n<\/tr>\n<tr class=\"row-3\">\n\t<td class=\"column-1\">GPT-2<\/td><td class=\"column-2\"><a href=\"https:\/\/ipspecialist.net\/the-history-of-chatgpt-from-gpt-1-to-gpt-4\/#:~:text=GPT%2D2%3A%20With%201.5%20billion%20parameters%2C%20GPT%2D2%2C%20which%20was%20launched%20in%202019%2C%20was%20a%20significant%20upgrade%20over%20its%20predecessor.%20GPT%2D2%20could%20generate%20human%2Dlike%20prose%20and%20perform%20natural%20language%20processing%20tasks%20like%20language%20translation%2C%20question%20answering%2C%20and%20text%20completion.\">~1.5 billion<\/a><\/td>\n<\/tr>\n<tr class=\"row-4\">\n\t<td class=\"column-1\">GPT-3<\/td><td class=\"column-2\"><a href=\"https:\/\/ipspecialist.net\/the-history-of-chatgpt-from-gpt-1-to-gpt-4\/#:~:text=GPT%2D3%3A,and%20question%20answering.\">~175 billion<\/a><\/td>\n<\/tr>\n<tr class=\"row-5\">\n\t<td class=\"column-1\">GPT-4<\/td><td class=\"column-2\"><a 
href=\"https:\/\/the-decoder.com\/gpt-4-architecture-datasets-costs-and-more-leaked\/#:~:text=OpenAI%20GPT%2D4%20is%20said%20to%20be%20based%20on%20the%20Mixture%20of%20Experts%20architecture%20and%20has%201.76%20trillion%20parameters.\">~1.76 trillion<\/a><\/td>\n<\/tr>\n<tr class=\"row-6\">\n\t<td class=\"column-1\">GPT-5<\/td><td class=\"column-2\"><a href=\"https:\/\/lifearchitect.ai\/gpt-5\/#:~:text=Sidenote%3A%20An%20order%20of%20magnitude%20bigger%20than%201.76%20trillion%20parameters%20MoE%20is%2017.6%20trillion%20parameters%20MoE%2C%20or%20around%203.5T%20parameters%20dense.\">~17.6 trillion<\/a> (speculated)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<!-- #tablepress-419 from cache -->\n<p>For instance, the above example relates to the user\u2019s privacy, which would have more significance and weight than other topics. That\u2019s likely why the LLM might have made the change immediately.<\/p>\n<p>Thankfully, it\u2019s not this easy to brute force an LLM to learn other things, as the team at <a href=\"https:\/\/www.rebootonline.com\/blog\/rlhf-geo-experiment\/\">Reboot<\/a> discovered when testing for this exact type of RLHF manipulation.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1365\" height=\"377\" class=\"wp-image-187999\" src=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/reboots-black-hat-llmo-experiment-conclusion-high.png\" alt=\"Reboot's black hat LLMO experiment conclusion highlighting &quot;we could not get the AI models that we were testing to give us our desired responses&quot;.\" srcset=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/reboots-black-hat-llmo-experiment-conclusion-high.png 1365w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/reboots-black-hat-llmo-experiment-conclusion-high-680x188.png 680w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/reboots-black-hat-llmo-experiment-conclusion-high-768x212.png 768w\" sizes=\"auto, (max-width: 1365px) 100vw, 1365px\"><\/p>\n<p>As marketers, we are responsible for advising clients on how to show up in new technologies their customers use to search. However, this should not come from manipulating those technologies for selfish gain.<\/p>\n<p>There\u2019s a fine line there that, when crossed, poisons the well for everybody. This leads me to the second core behavior of black hat&nbsp;LLMO\u2026<\/p>\n<h2><a id=\"post-187995-_xz46365mooyi\"><\/a><div class=\"post-nav-link clearfix\" id=\"section1\"><a class=\"subhead-anchor\" data-tip=\"tooltip__copielink\" rel=\"#section1\"><svg width=\"19\" height=\"19\" viewBox=\"0 0 14 14\" style><g fill=\"none\" fill-rule=\"evenodd\"><path d=\"M0 0h14v14H0z\" \/><path d=\"M7.45 9.887l-1.62 1.621c-.92.92-2.418.92-3.338 0a2.364 2.364 0 0 1 0-3.339l1.62-1.62-1.273-1.272-1.62 1.62a4.161 4.161 0 1 0 5.885 5.884l1.62-1.62L7.45 9.886zM5.527 5.135L7.17 3.492c.92-.92 2.418-.92 3.339 0 .92.92.92 2.418 0 3.339L8.866 8.473l1.272 1.273 1.644-1.643A4.161 4.161 0 1 0 5.897 2.22L4.254 3.863l1.272 1.272zm-.66 3.998a.749.749 0 0 1 0-1.06l2.208-2.206a.749.749 0 1 1 1.06 1.06L5.928 9.133a.75.75 0 0 1-1.061 0z\" style \/><\/g><\/svg><\/a><div class=\"link-text\"> 2. Poisoning the datasets LLMs&nbsp;use&nbsp;<\/div><\/div><\/h2>\n<p>Let me shine a light on the word \u201cpoison\u201d for a moment because I am not using it for dramatic effect.<\/p>\n<p>Engineers use this language to describe the manipulation of LLM training datasets as \u201csupply chain poisoning.\u201d<\/p>\n<p>Some SEOs are doing it intentionally. 
Others are just following advice that sounds clever but is dangerously misinformed.

You've probably seen posts or heard suggestions like:

- "You have to get your brand into LLM training data."
- "Use feature engineering to make your raw data more LLM-friendly."
- "Influence the patterns that LLMs learn from to favor your brand."
- "Publish roundup posts naming yourself as the best, so LLMs pick that up."
- "Add semantically rich content linking your brand with high-authority terms."

I asked Brandon Li, a machine learning engineer at Ahrefs, how engineers react to people optimizing specifically for visibility in datasets used by LLMs and search engines. His answer was blunt:

> Please don't do this — it messes up the dataset.
>
> **Brandon Li**, Machine Learning Engineer, Ahrefs

The difference between how SEOs think about it and how engineers think is important. Getting into a training dataset is not like being indexed by Google. It's not something you should be trying to manipulate your way into.

Let's take [schema markup](https://ahrefs.com/blog/schema-markup/) as an example of a dataset search engineers use.

![How schema markup works](https://ahrefs.com/blog/wp-content/uploads/2025/05/how-schema-markup-works.png)

In SEO, it has long been used to enhance how content appears in search and improve click-through rates.

But there's a fine line between *optimizing* and *abusing* schema, especially when it's used to force entity relationships that aren't accurate or deserved.

When schema is misused at scale (whether deliberately or just by unskilled practitioners following bad advice), engineers stop trusting the data source entirely. It becomes messy, unreliable, and unsuitable for training.

If it's done with the intent to manipulate model outputs by corrupting inputs, that's no longer SEO. That's poisoning the supply chain.
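To see that line in concrete terms, here's a minimal sketch contrasting honest and abusive schema markup. The property names are standard schema.org vocabulary, but "ExampleCo" and every claim in it are invented for illustration.

```python
import json

# A simplified illustration of the optimizing-vs-abusing line using
# standard schema.org vocabulary. "ExampleCo" and all claims are made up.

legitimate = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo",
    "url": "https://example.com",
    # Describes verifiable facts about the entity.
    "sameAs": ["https://www.linkedin.com/company/exampleco"],
}

abusive = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo",
    "url": "https://example.com",
    # Forcing entity relationships that aren't accurate or deserved:
    "award": "World's #1 Rated Software",  # unverifiable, self-awarded claim
    "sameAs": ["https://en.wikipedia.org/wiki/Microsoft"],  # false identity link
}

print(json.dumps(legitimate, indent=2))
print(json.dumps(abusive, indent=2))
```

The first block describes verifiable facts; the second tries to claim entity relationships the brand hasn't earned, which is exactly what makes the data source untrustworthy at scale.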
This isn't just an SEO problem.

Engineers see dataset poisoning as a cybersecurity risk, one with real-world consequences.

Take Mithril Security, a company focused on transparency and privacy in AI. Their team ran a test to prove how easily a model could be corrupted using poisoned data. The result was [PoisonGPT](https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/) — a tampered version of GPT-2 that confidently repeated fake news inserted into its training set.

![Mithril Security's test creating PoisonGPT to spread fake news.](https://ahrefs.com/blog/wp-content/uploads/2025/05/mithril-securitys-test-creating-poisongpt-to-spre.jpg)

Their goal wasn't to spread misinformation. It was to demonstrate how little it takes to compromise a model's reliability if the data pipeline is unguarded.

Beyond marketers, the kinds of bad actors who try to manipulate training data include hackers, scammers, fake news distributors, and politically motivated groups aiming to control information or distort conversations.

The more SEOs engage in dataset manipulation, intentionally or not, the more engineers begin to see us as part of that same problem set.

Not as optimizers. But as threats to data integrity.

### Why getting into a dataset is the wrong goal to aim for anyway

Let's talk numbers. When OpenAI [trained GPT-3](https://arxiv.org/pdf/2005.14165), they started with the following datasets:

![List of datasets used to train OpenAI's GPT-3 model.](https://ahrefs.com/blog/wp-content/uploads/2025/05/list-of-datasets-used-to-train-openais-gpt-3-mode.png)

Initially, 45 TB of CommonCrawl data was used (~60% of the total training data). But only 570 GB (about 1.27%) made it into the final training set after a thorough data cleaning process.

What got kept?

- Pages that resembled high-quality reference material (think academic texts, expert-level documentation, books)
- Content that wasn't duplicated across other documents
- A small amount of manually selected, trusted content to improve diversity
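As a quick sanity check on those figures (assuming decimal units, i.e. 1 TB = 1,000 GB):

```python
# Sanity-checking the GPT-3 CommonCrawl numbers from the paper,
# assuming decimal units (1 TB = 1,000 GB).
raw_gb = 45 * 1000   # ~45 TB of raw CommonCrawl plaintext
kept_gb = 570        # what survived filtering and deduplication

survival_rate = kept_gb / raw_gb
print(f"{survival_rate:.2%} of the raw crawl made it in")  # -> 1.27%
```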
While OpenAI hasn't provided transparency for later models, experts like [Dr Alan D. Thompson](https://lifearchitect.ai/whats-in-gpt-5/) have shared some analysis and insights for datasets used to train GPT-5:

![List of datasets Dr Alan D. Thompson suggests may be used to train OpenAI's GPT-5 model.](https://ahrefs.com/blog/wp-content/uploads/2025/05/list-of-data-sets-dr-alan-d-thompson-suggests-may.png)

This list includes data sources that are far more open to manipulation and harder to clean, like Reddit posts, YouTube comments, and Wikipedia content, to name a few.

Datasets will continue to change with new model releases. But we know that datasets the engineers consider higher quality are sampled more frequently during the training process than lower-quality, "noisy" datasets.

Since GPT-3 was trained on only 1.27% of CommonCrawl data, and engineers are becoming more careful about cleaning datasets, it's incredibly difficult to insert your brand into an LLM's training material.

And if that's what you're aiming for, then as an SEO, you're missing the point.

Most LLMs now augment answers with real-time search. In fact, they search more than humans do.

For instance, ChatGPT ran 89 searches in 9 minutes for one of my latest queries:

![Screenshot of a response in a ChatGPT conversation indicating it made 89 searches.](https://ahrefs.com/blog/wp-content/uploads/2025/05/screenshot-of-a-response-in-a-chatgpt-conversation.png)

By comparison, I tracked one of my [search experiences](https://ahrefs.com/blog/search-experience-optimization/) when buying a laser cutter and ran 195 searches in 17+ hours as part of my overall search journey.

LLMs are researching faster, deeper, and wider than any individual user, and often citing more resources than an average searcher would ordinarily click on when simply Googling for an answer.

Showing up in responses by doing good SEO (instead of trying to hack your way into training data) is the better path forward here.

An easy way to benchmark your visibility is in Ahrefs' Web Analytics:

![Ahrefs' Web Analytics allows you to track traffic from LLMs to your website.](https://ahrefs.com/blog/wp-content/uploads/2025/05/ahrefs-web-analytics-allows-you-to-track-traffic.png)
Here you can analyze exactly which LLMs are driving traffic to your site and which pages are showing up in their responses.
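If you want a rough self-serve version of that breakdown from your own server logs, one common approach is classifying hits by referrer hostname. A minimal sketch; the hostname list reflects commonly observed assistant referrers, but treat it as an assumption and verify against your own logs:

```python
from urllib.parse import urlparse

# Hypothetical mapping of referrer hostnames to assistants. These reflect
# commonly seen LLM referrers, not an official or exhaustive list.
LLM_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def classify_referrer(referrer_url: str) -> str:
    """Return the assistant name for a hit's referrer, or 'other'."""
    host = urlparse(referrer_url).hostname or ""
    return LLM_REFERRERS.get(host, "other")

hits = [
    "https://chatgpt.com/",
    "https://www.perplexity.ai/search?q=best+crm",
    "https://www.google.com/",
]
for h in hits:
    print(h, "->", classify_referrer(h))
```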
However, it might be tempting to start optimizing your content with "entity-rich" text or more "LLM-friendly" wording to improve its visibility in LLMs, which takes us to the third pattern of black hat LLMO.

## 3. Sculpting language patterns for selfish gain

The final behavior contributing to black hat LLMO is sculpting language patterns to influence prediction-based LLM responses.

It's similar to what researchers at Harvard call "Strategic Text Sequences" in [this study](https://arxiv.org/pdf/2404.07981): text that's injected onto web pages with the specific aim of influencing more favorable brand or product mentions in LLM responses.

The red text below is an example of this:

![An example from Harvard researchers who injected a strategic text sequence to promote a particular product more in LLM responses.](https://ahrefs.com/blog/wp-content/uploads/2025/05/an-example-from-harvard-researchers-who-injected-a.png)

*The red text is an example of content injected on an e-commerce product page in order to get it showing as the top choice in relevant LLM responses.*

Even though the study focused on inserting machine-generated text strings (not traditional marketing copy or natural language), it still raised ethical concerns about fairness, manipulation, and the need for safeguards, because these engineered patterns exploit the core prediction mechanism of LLMs.

Most of the advice I see from SEOs about getting LLM visibility falls into this category and is represented as a type of [entity SEO](https://ahrefs.com/seo/glossary/entity-based-seo) or [semantic SEO](https://ahrefs.com/blog/semantic-seo/).

Except now, instead of talking about putting keywords in everything, they're talking about putting entities in everything for [topical authority](https://ahrefs.com/blog/topical-authority/).

For example, let's look at the following SEO advice through a critical lens:

![Example of an expert SEO's advice for LLM optimization that skews towards being black hat by manipulating language patterns unnaturally.](https://ahrefs.com/blog/wp-content/uploads/2025/05/example-of-an-expert-seos-advice-for-llm-optimiza.png)

The rewritten sentence has lost its original meaning, does not convey the emotion or fun of the experience, loses the author's opinion, and completely changes the tone, making it sound more promotional.

Worse, it also does not appeal to a human reader.

This style of advice leads to SEOs curating and signposting information for LLMs in the hopes it will be mentioned in responses.
And to a degree, it works.

However, it works (for now) because we are changing the language patterns that LLMs are built to predict. We are making them unnatural on purpose to please ~~an algorithm~~ a model instead of writing for humans… does this feel like SEO déjà vu to you, too?

Other advice that follows this same line of thinking includes:

- **Increasing entity co-occurrences**: Rewriting the content surrounding your brand mentions to include specific topics or entities you want to be strongly connected to.
- **Artificial brand positioning**: Getting your brand featured in more "best of" roundup posts to improve authority (even if you create these posts yourself on your site or as guest posts).
- **Entity-rich Q&A content**: Turning your content into a summarizable Q&A format with many entities added to the response, instead of sharing engaging stories, experiences, or anecdotes.
- **Topical ~~authority~~ saturation**: Publishing an overwhelming amount of content on every possible angle of a topic to dominate entity associations.

These tactics may influence LLMs, but they also risk making your content more robotic, less trustworthy, and ultimately forgettable.

Still, it's worth understanding how LLMs currently perceive your brand, especially if others are shaping that narrative for you.

That's where a tool like Ahrefs' Brand Radar comes in. It helps you see which keywords, features, and topic clusters your brand is associated with in AI responses.

![Ahrefs' Brand Radar can be used to see what topics and entities LLMs associate with your brand.](https://ahrefs.com/blog/wp-content/uploads/2025/05/ahrefs-brand-radar-can-be-used-to-see-what-topics.png)

That kind of insight is less about gaming the system and more about catching blind spots in how machines are already representing you.

If we go down the path of manipulating language patterns, it won't give us the benefits we want, for a few reasons.
class=\"link-text\"> Why gaming the system with black hat LLMO will backfire&nbsp;<\/div><\/div><\/h2>\n<p>Unlike SEO, LLM visibility is not a zero-sum game. It\u2019s not like a tug-of-war where if one brand loses rankings, it\u2019s because another took its&nbsp;place.<\/p>\n<p>We can all become losers in this race if we\u2019re not careful.<\/p>\n<p>LLMs don\u2019t have to mention or link to brands (and they often don\u2019t). This is due to the dominant thought process when it comes to SEO content creation. It goes something like&nbsp;this:<\/p>\n<ol>\n<li>Do keyword research<\/li>\n<li>Reverse engineer top-ranking articles<\/li>\n<li>Pop them into an on-page optimizer<\/li>\n<li>Create similar content, matching the pattern of entities<\/li>\n<li>Publish content that follows the pattern of what\u2019s already ranking<\/li>\n<\/ol>\n<p>What this means, in the grand scheme of things, is that our content becomes ignorable.<\/p>\n<p>Remember the cleaning process that LLM training data goes through? One of the core elements was <a href=\"https:\/\/arxiv.org\/pdf\/2005.14165\">deduplication at a document level<\/a>. This means documents that say the same thing or don\u2019t contribute new, meaningful information get removed from the training data.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1042\" height=\"252\" class=\"wp-image-188009\" src=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/openais-gpt-e-model-had-duplicated-documents-remo.png\" alt=\"OpenAI's GPT-e model had duplicated documents removed from its dataset to prevent redundancy.\" srcset=\"https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/openais-gpt-e-model-had-duplicated-documents-remo.png 1042w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/openais-gpt-e-model-had-duplicated-documents-remo-680x164.png 680w, https:\/\/ahrefs.com\/blog\/wp-content\/uploads\/2025\/05\/openais-gpt-e-model-had-duplicated-documents-remo-768x186.png 768w\" sizes=\"auto, (max-width: 1042px) 100vw, 1042px\"><\/p>\n<p>Another way of looking at this is through the lens of \u201centity saturation\u201d.<\/p>\n<p>In academic qualitative research, entity saturation refers to the point where gathering more data for a particular category of information doesn\u2019t reveal any new insights. Essentially, the researcher has reached a point where they see similar information repeatedly.<\/p>\n<p>That\u2019s when they know their topic has been thoroughly explored and no new patterns are emerging.<\/p>\n<p>Well, guess&nbsp;what?<\/p>\n<p>Our current formula and SEO best practices for creating \u201centity-rich\u201d content leads LLMs to this point of saturation faster, once again making our content ignorable.<\/p>\n<p>It also makes our content summarizable as a meta-analysis. If 100 posts say the same thing about a topic (in terms of the core essence of what they communicate) and it\u2019s fairly generic Wikipedia-style information, none of them will get the citation.<\/p>\n<p>Making our content summarizable doesn\u2019t make getting a mention or citation easier. 
Another way of looking at this is through the lens of "entity saturation".

In academic qualitative research, saturation refers to the point where gathering more data for a particular category of information doesn't reveal any new insights. Essentially, the researcher has reached a point where they see similar information repeatedly.

That's when they know their topic has been thoroughly explored and no new patterns are emerging.

Well, guess what?

Our current formula and SEO best practices for creating "entity-rich" content lead LLMs to this point of saturation faster, once again making our content ignorable.

It also makes our content summarizable as a meta-analysis. If 100 posts say the same thing about a topic (in terms of the core essence of what they communicate) and it's fairly generic, Wikipedia-style information, none of them will get the citation.

Making our content summarizable doesn't make getting a mention or citation easier. And yet, it's one of the most common pieces of advice top SEOs are sharing for getting visibility in LLM responses.

So what can we do instead?

## How to intelligently improve your brand's visibility in LLMs

My colleague Louise has already created an awesome guide on optimizing your brand and content for visibility in LLMs (without resorting to black hat tactics).

**Further reading**

- [LLMO: 10 Ways to Work Your Brand Into AI Answers](https://ahrefs.com/blog/llm-optimization/)

Instead of rehashing the same advice, I wanted to leave you with a framework for making intelligent choices as we move forward and you start to see new theories and fads pop up in LLMO.

And yes, this one is here for dramatic effect, but also because it makes things dead simple, helping you bypass the pitfalls of FOMO along the way.

It comes from the [5 Basic Laws of Human Stupidity](https://bonpote.com/en/the-5-basic-laws-of-human-stupidity/) by Italian economic historian Professor Carlo Maria Cipolla.

Go ahead and snicker, then pay attention.
It's important.

According to Professor Cipolla, intelligence is defined as taking an action that benefits yourself *and* others simultaneously—basically, creating a win-win situation.

It is in direct opposition to stupidity, which is defined as an action that creates losses for both yourself *and* others:

![The five basic laws of human stupidity visualized as a 2x2 matrix.](https://ahrefs.com/blog/wp-content/uploads/2025/05/the-five-basic-laws-of-human-stupidity-visualized.png)

In all cases, black hat practices sit squarely in the bottom-left and bottom-right quadrants.

SEO bandits, as I like to think of them, are the people who used manipulative optimization tactics for selfish reasons (benefits to self)… and proceeded to ruin the internet as a result (losses to others).

Therefore, the rules of SEO and LLMO moving forward are simple:

- Don't be stupid.
- Don't be a bandit.
- Optimize intelligently.

Intelligent optimization comes down to focusing on your brand and ensuring it is accurately represented in LLM responses.

It's about using tools like [AI Content Helper](https://ahrefs.com/ai-content-helper/) that are specifically designed to elevate your topic coverage, instead of focusing on cramming more entities in. (The SEO score only improves as you cover the suggested topics in detail, not when you stuff more words in.)

![Ahrefs' AI Content Helper helps users deepen their topic coverage instead of encouraging black hat practices like keyword or entity stuffing.](https://ahrefs.com/blog/wp-content/uploads/2025/05/ahrefs-ai-content-helper-helps-users-deepen-their.png)

But above all, it's about contributing to a better internet by focusing on the people you want to reach and optimizing for them, not algorithms or language models.

## Final thoughts

LLMO is still in its early days, but the patterns are already familiar — and so are the risks.

We've seen what happens when short-term tactics go unchecked. When SEO became a race to the bottom, we lost trust, quality, and creativity. Let's not do it again with LLMs.

This time, we have a chance to get it right.
That means:

- Don't manipulate prediction patterns; shape your brand's presence instead.
- Don't chase entity saturation, but create content humans want to read.
- Don't write to be summarized; rather, write to impact your audience.

Because if your brand only shows up in LLMs when it's stripped of personality, is that really a win?