###
# postcrossing.com robots.txt file
#
# NOTE: Entries in robots.txt don't inherit from '*' (or at least not all
# bots know how to), hence the repetition
###

User-Agent: *
# only the right user can open it, so stop doing 403s
Disallow: /travelingpostcard/
Disallow: /user/*/traveling
Disallow: /user/*/gallery/popular
Disallow: /user/*/map
Disallow: /pm/send/
Disallow: /user/*/data/sent
Disallow: /user/*/data/received
Disallow: /user/*/data/traveling
# needless load
Disallow: /postcards/
Allow: /
Crawl-delay: 1

#
# Same as above, but we allow social media preview/share bots to access /postcards/
#
User-agent: atproto-fetch
User-agent: Discordbot
User-agent: facebookexternalhit
User-agent: Facebot
User-agent: LinkedInBot
User-agent: OdklBot
User-agent: redditbot
User-agent: Slack-ImgProxy
User-agent: Slackbot
User-agent: Slackbot-LinkExpanding
User-agent: Snap URL Preview Service
User-agent: TelegramBot
User-agent: Twitterbot
User-agent: vkShare
User-agent: WeiboShare
User-agent: Yahoo Link Preview
# only the right user can open it, so stop doing 403s
Disallow: /travelingpostcard/
Disallow: /user/*/traveling
Disallow: /user/*/gallery/popular
Disallow: /user/*/map
Disallow: /pm/send/
Disallow: /user/*/data/sent
Disallow: /user/*/data/received
Disallow: /user/*/data/traveling

#
# Don't need the extra load
#
User-agent: Googlebot-Image
# only the right user can open it, so stop doing 403s
Disallow: /travelingpostcard/
Disallow: /user/*/traveling
Disallow: /user/*/gallery/popular
Disallow: /user/*/map
Disallow: /pm/send/
Disallow: /user/*/data/sent
Disallow: /user/*/data/received
Disallow: /user/*/data/traveling
# extra
Disallow: /postcards/
Disallow: /user/*/gallery
Disallow: /gallery
Disallow: /country/
Allow: /

User-agent: AwarioRssBot
User-agent: AwarioSmartBot
Disallow: /postcards/

#
# AdSense crawler
#
User-agent: Mediapartners-Google
Allow: /

#
# Wayback Machine: don't overdo it
#
User-agent: archive.org_bot
Disallow: /user/
Disallow: /postcards/
Disallow: /gallery
Allow: /

#
# If you don't know how to behave, you are not welcome
#
User-agent: Screaming Frog SEO Spider
Disallow: /

#
# Please respect our Terms of Service: spiders/scrapers are only allowed with explicit permission
#
User-agent: Scrapy
Disallow: /

User-agent: scrapybot
Disallow: /

###############
# Below here is primarily from https://en.wikipedia.org/robots.txt
#
# Some bots are known to be trouble, particularly those designed to copy
# entire sites. Please obey robots.txt.
User-agent: sitecheck.internetseer.com
Disallow: /

User-agent: Zealbot
Disallow: /

User-agent: MSIECrawler
Disallow: /

User-agent: SiteSnagger
Disallow: /

User-agent: WebStripper
Disallow: /

User-agent: WebCopier
Disallow: /

User-agent: Fetch
Disallow: /

User-agent: Offline Explorer
Disallow: /

User-agent: Teleport
Disallow: /

User-agent: TeleportPro
Disallow: /

User-agent: WebZIP
Disallow: /

User-agent: linko
Disallow: /

User-agent: HTTrack
Disallow: /

User-agent: Microsoft.URL.Control
Disallow: /

User-agent: Xenu
Disallow: /

User-agent: larbin
Disallow: /

User-agent: libwww
Disallow: /

User-agent: ZyBORG
Disallow: /

User-agent: Download Ninja
Disallow: /

# Misbehaving, requests much too fast
User-agent: fast
Disallow: /

# Sorry, wget in its recursive mode is a frequent problem.
# Please read the man page and use it properly; there is a
# --wait option you can use to set the delay between hits,
# for instance.
User-agent: wget
Disallow: /

# The 'grub' distributed client has been *very* poorly behaved.
User-agent: grub-client
Disallow: /

# Doesn't follow robots.txt anyway, but...
User-agent: k2spider
Disallow: /

# Hits many times per second, not acceptable
# http://www.nameprotect.com/botinfo.html
User-agent: NPBot
Disallow: /

# A capture bot, downloads gazillions of pages with no public benefit
# http://www.webreaper.net/
User-agent: WebReaper
Disallow: /

###############
# AI bots create needless extra load, so limiting to just basics
# from https://github.com/ai-robots-txt/ai.robots.txt
User-agent: AddSearchBot
User-agent: AI2Bot
User-agent: AI2Bot-DeepResearchEval
User-agent: Ai2Bot-Dolma
User-agent: aiHitBot
User-agent: amazon-kendra
User-agent: Amazonbot
User-agent: AmazonBuyForMe
User-agent: Andibot
User-agent: Anomura
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: atlassian-bot
User-agent: Awario
User-agent: bedrockbot
User-agent: bigsur.ai
User-agent: Bravebot
User-agent: Brightbot 1.0
User-agent: BuddyBot
User-agent: Bytespider
User-agent: CCBot
User-agent: Channel3Bot
User-agent: ChatGLM-Spider
User-agent: ChatGPT Agent
User-agent: ChatGPT-User
User-agent: Claude-SearchBot
User-agent: Claude-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: Cloudflare-AutoRAG
User-agent: CloudVertexBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: Cotoyogi
User-agent: Crawl4AI
User-agent: Crawlspace
User-agent: Datenbank Crawler
User-agent: DeepSeekBot
User-agent: Devin
User-agent: Diffbot
User-agent: DuckAssistBot
User-agent: Echobot Bot
User-agent: EchoboxBot
User-agent: FacebookBot
#User-agent: facebookexternalhit
User-agent: Factset_spyderbot
User-agent: FirecrawlAgent
User-agent: FriendlyCrawler
User-agent: Gemini-Deep-Research
User-agent: Google-CloudVertexBot
User-agent: Google-Extended
User-agent: Google-Firebase
User-agent: Google-NotebookLM
User-agent: GoogleAgent-Mariner
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iAskBot
User-agent: iaskspider
User-agent: iaskspider/2.0
User-agent: IbouBot
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: imageSpider
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: KlaviyoAIBot
User-agent: KunatoCrawler
User-agent: laion-huggingface-processor
User-agent: LAIONDownloader
User-agent: LCC
User-agent: LinerBot
User-agent: Linguee Bot
User-agent: LinkupBot
User-agent: Manus-User
User-agent: meta-externalagent
User-agent: Meta-ExternalAgent
User-agent: meta-externalfetcher
User-agent: Meta-ExternalFetcher
User-agent: meta-webindexer
User-agent: MistralAI-User
User-agent: MistralAI-User/1.0
User-agent: MyCentralAIScraperBot
User-agent: netEstate Imprint Crawler
User-agent: NotebookLM
User-agent: NovaAct
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: OpenAI
User-agent: Operator
User-agent: PanguBot
User-agent: Panscient
User-agent: panscient.com
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: PhindBot
User-agent: Poggio-Citations
User-agent: Poseidon Research Crawler
User-agent: QualifiedBot
User-agent: QuillBot
User-agent: quillbot.com
User-agent: SBIntuitionsBot
#User-agent: Scrapy
User-agent: SemrushBot-OCOB
User-agent: SemrushBot-SWA
User-agent: ShapBot
User-agent: Sidetrade indexer bot
User-agent: Spider
User-agent: TavilyBot
User-agent: TerraCotta
User-agent: Thinkbot
User-agent: TikTokSpider
User-agent: Timpibot
User-agent: TwinAgent
User-agent: VelenPublicWebCrawler
User-agent: WARDBot
User-agent: Webzio-Extended
User-agent: webzio-extended
User-agent: wpbot
User-agent: WRTNBot
User-agent: YaK
User-agent: YandexAdditional
User-agent: YandexAdditionalBot
User-agent: YouBot
User-agent: ZanistaBot
# only the right user can open it, so stop doing 403s
Disallow: /travelingpostcard/
Disallow: /user/*/traveling
Disallow: /user/*/gallery/popular
Disallow: /user/*/map
Disallow: /pm/send/
Disallow: /user/*/data/sent
Disallow: /user/*/data/received
Disallow: /user/*/data/traveling
# needless load
Disallow: /postcards/
Disallow: /user/
Disallow: /gallery