
How to Block AI Crawlers from Scraping Your Site with robots.txt

Summary

AI training and answering both require massive amounts of data, and some AI crawlers scrape lightweight servers so aggressively that they can easily push the load into the red. In this post, 博客志 shares some methods for blocking AI crawlers from scraping your site via robots.txt.

Although we have entered 2026, the AI craze shows no sign of fading; it has almost reached the point where everyone is talking about AI and everyone is using AI. But AI training and answering require massive amounts of data, so some AI crawlers have started scraping our humble little sites. For users on lightweight servers in particular this has become a real nuisance, as the crawlers alone can push server load into the red. So here 博客志 shares some methods for blocking AI crawlers from scraping your site via robots.txt.

1. What is robots.txt

robots.txt is a plain-text file placed in the root directory of a website, used to communicate crawling rules to web crawlers and bots. Through it you can specify which content may be crawled and which must not be accessed.

Note that although most well-behaved bots obey the rules in robots.txt, some malicious crawlers simply ignore these restrictions, so you still need to combine it with other protective measures.
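To make the mechanism concrete, here is a minimal sketch of how a compliant crawler evaluates these rules, using Python's standard urllib.robotparser module (the rule set and URLs are made-up examples):

from urllib.robotparser import RobotFileParser

# A sample rule set: deny GPTBot access to the whole site.
robots_txt = """
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler calls can_fetch() before requesting a URL.
print(rp.can_fetch("GPTBot", "https://example.com/archives/1"))        # False: blocked
print(rp.can_fetch("SomeOtherBot", "https://example.com/archives/1"))  # True: no rule applies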

2. How to block a specific crawler

You can use the following directives in robots.txt:

User-agent: [crawler name]
Disallow: /
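For example, to deny GPTBot and ClaudeBot access to the entire site, the file might look like this. Several User-agent lines may share a single rule group, which is also how the consolidated list in the next section works:

User-agent: GPTBot
User-agent: ClaudeBot
Disallow: /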

3. A block list of common AI bots

The following robots.txt example can be used to deny a range of AI crawlers access; all of the User-agent lines below share the single Disallow rule at the end:

User-agent: AddSearchBot
User-agent: AI2Bot
User-agent: Ai2Bot-Dolma
User-agent: aiHitBot
User-agent: AmazonBuyForMe
User-agent: atlassian-bot
User-agent: amazon-kendra-
User-agent: Amazonbot
User-agent: Andibot
User-agent: Anomura
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: Awario
User-agent: bedrockbot
User-agent: bigsur.ai
User-agent: Bravebot
User-agent: Brightbot 1.0
User-agent: BuddyBot
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT Agent
User-agent: ChatGPT-User
User-agent: Claude-SearchBot
User-agent: Claude-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: Cloudflare-AutoRAG
User-agent: CloudVertexBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: Cotoyogi
User-agent: Crawlspace
User-agent: Datenbank Crawler
User-agent: DeepSeekBot
User-agent: Devin
User-agent: Diffbot
User-agent: DuckAssistBot
User-agent: Echobot Bot
User-agent: EchoboxBot
User-agent: FacebookBot
User-agent: facebookexternalhit
User-agent: Factset_spyderbot
User-agent: FirecrawlAgent
User-agent: FriendlyCrawler
User-agent: Gemini-Deep-Research
User-agent: Google-CloudVertexBot
User-agent: Google-Extended
User-agent: Google-Firebase
User-agent: Google-NotebookLM
User-agent: GoogleAgent-Mariner
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iaskspider/2.0
User-agent: IbouBot
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: LinerBot
User-agent: Linguee Bot
User-agent: meta-externalagent
User-agent: Meta-ExternalAgent
User-agent: meta-externalfetcher
User-agent: Meta-ExternalFetcher
User-agent: meta-webindexer
User-agent: MistralAI-User
User-agent: MistralAI-User/1.0
User-agent: MyCentralAIScraperBot
User-agent: netEstate Imprint Crawler
User-agent: NovaAct
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: OpenAI
User-agent: Operator
User-agent: PanguBot
User-agent: Panscient
User-agent: panscient.com
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: PhindBot
User-agent: Poseidon Research Crawler
User-agent: QualifiedBot
User-agent: QuillBot
User-agent: quillbot.com
User-agent: SBIntuitionsBot
User-agent: Scrapy
User-agent: SemrushBot-OCOB
User-agent: SemrushBot-SWA
User-agent: ShapBot
User-agent: Sidetrade indexer bot
User-agent: TerraCotta
User-agent: Thinkbot
User-agent: TikTokSpider
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: WARDBot
User-agent: Webzio-Extended
User-agent: wpbot
User-agent: YaK
User-agent: YandexAdditional
User-agent: YandexAdditionalBot
User-agent: YouBot
Disallow: /

Simply add the above to your robots.txt file. Keep in mind that this list only covers common AI crawlers that are willing to obey robots.txt rules; the ones that refuse to follow the rules can only be blocked with a firewall, as sketched below.
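As one illustration of that fallback, here is a minimal application-layer sketch in Python (standard library only) that rejects requests whose User-Agent matches a blocklist. This is a hypothetical stand-in for a real firewall; in production you would more likely enforce this in the web server, WAF, or CDN, and the blocklist below is just a placeholder sample:

from wsgiref.simple_server import make_server

# Hypothetical blocklist; extend it with names from the list above as needed.
BLOCKED_AGENTS = ("Bytespider", "Scrapy", "img2dataset")

def app(environ, start_response):
    ua = environ.get("HTTP_USER_AGENT", "")
    # Reject any request whose User-Agent contains a blocked token.
    if any(token.lower() in ua.lower() for token in BLOCKED_AGENTS):
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"Forbidden\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello\n"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()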
