How to Handle Abuse Reports Responsibly

A response workflow for phishing, impersonation, and spam reports.

Written by Mayank Baswal

Founder of is-cool-me · DNS & Platform Infrastructure

Mayank Baswal maintains the is-cool-me platform and writes technical guides focused on DNS configuration, subdomain infrastructure, SSL troubleshooting, deployment workflows, and platform reliability.

Reviewed by is-cool-me Trust & Safety Review

Why Abuse Handling Matters

Every free subdomain service is a potential vector for online abuse. The same infrastructure that lets developers quickly provision subdomains for legitimate projects can be exploited by bad actors to host phishing pages, distribute malware, or conduct spam campaigns. How we handle abuse reports — speed, accuracy, fairness — determines whether our platform remains a trusted part of the internet ecosystem or becomes a haven for malicious activity.

At is-cool-me, we treat abuse handling as a core safety function, not an afterthought. A well-designed abuse response process protects our users, preserves our domain reputation, and meets our legal obligations under frameworks like the DMCA, GDPR, and the EU Digital Services Act. This article explains our abuse reporting process in detail, what reporters and subdomain owners can expect, and how we balance enforcement with due process.

Safety first: If you encounter a subdomain on is-cool-me that you believe is being used for phishing, malware, or other illegal activity, report it immediately via our contact form or Discord server. Do not engage with the content yourself.

Types of Abuse We Handle

Abuse on a subdomain platform takes many forms. We categorize reports into the following types, each with its own triage priority and response workflow:

Phishing & Credential Theft (Critical Priority)

The most severe category. Phishing sites impersonate trusted services — banks, email providers, social media platforms, crypto wallets — to steal login credentials, payment information, or personal data. A phishing page on an is-cool-me subdomain is especially dangerous because the hostname itself looks legitimate. We prioritize these reports for immediate action, often within minutes of confirmation.

Impersonation (High Priority)

Subdomains designed to impersonate a specific person, brand, or organization. This includes subdomain names that mimic trademarks (google-login.is-pro.dev), fake customer support pages, or accounts claiming to be official representatives of a company. Impersonation does not always involve credential theft — it can be used for reputation damage, misinformation, or fraud.

Malware Distribution (Critical Priority)

Subdomains hosting or linking to malicious software, including trojans, ransomware, spyware, exploit kits, or drive-by download scripts. Malware distribution endangers anyone who visits the subdomain and can trigger immediate blocklisting of our entire domain. Reports of malware are escalated to our infrastructure team for immediate isolation.

Spam & SEO Abuse (Medium Priority)

Subdomains used to host spam content, link farms, redirect chains for search engine manipulation, or bulk email campaign landing pages. While less immediately dangerous than phishing or malware, spam degrades the value of our domain namespace and can trigger search engine penalties. We process spam reports within 24 hours.

Copyright Infringement (Standard Priority)

Subdomains hosting content that infringes on someone else's copyright — unauthorized copies of software, media, or written works. We handle copyright complaints under the DMCA safe harbor framework. Reports must include sufficient information (identification of the copyrighted work, the specific URL, and a statement of good faith belief). We respond to valid DMCA notices within 72 hours.

Our Abuse Reporting Process

When a report is submitted, it passes through the following workflow:

Step 1: Report Submission

Reports can be submitted through two channels:

  • Contact form: Fill out the form at /contact/ with the subject line "[Abuse Report]" and include as much detail as possible.
  • Discord: Join our Discord server and post in the #abuse-reports channel.

Reports submitted through either channel enter the same processing pipeline. Anonymous reports are accepted, but reports with contact information allow us to follow up with status updates.

Step 2: Automated Triage

Upon submission, our automated triage system performs initial checks:

  • Extracts the reported subdomain and verifies it belongs to our namespace
  • Checks the subdomain's current DNS resolution and HTTP response status
  • Queries blocklist APIs (Google Safe Browsing, PhishTank, VirusTotal) for existing threat intelligence
  • Assigns a priority score based on report type, content analysis, and blocklist matches

Reports scoring above the critical threshold are escalated to the on-call moderator immediately. Lower-priority reports enter a moderation queue for review within the applicable SLA.
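The triage flow above can be sketched as a scoring function. This is a minimal illustration, not our production pipeline: the base priorities, keyword list, blocklist weighting, and critical threshold below are all hypothetical values chosen for the example.

```python
# Minimal sketch of automated triage scoring (illustrative only).
# Weights, keywords, and the critical threshold are hypothetical.

BASE_PRIORITY = {"phishing": 70, "malware": 70, "impersonation": 50,
                 "spam": 30, "copyright": 20}
PHISHING_KEYWORDS = ("login", "secure", "wallet", "verify", "account")
CRITICAL_THRESHOLD = 80

def in_namespace(hostname: str, apex: str = "is-pro.dev") -> bool:
    """Verify the reported hostname belongs to our namespace."""
    return hostname == apex or hostname.endswith("." + apex)

def triage_score(report_type: str, hostname: str, blocklist_hits: int) -> int:
    """Combine report type, hostname heuristics, and threat intel."""
    score = BASE_PRIORITY.get(report_type, 10)
    label = hostname.split(".")[0]
    if any(kw in label for kw in PHISHING_KEYWORDS):
        score += 15  # suspicious subdomain label, e.g. "secure-login"
    score += 20 * min(blocklist_hits, 2)  # cap the blocklist contribution
    return score

def is_critical(score: int) -> bool:
    """Reports at or above the threshold page the on-call moderator."""
    return score >= CRITICAL_THRESHOLD
```

In this sketch, a phishing report against a subdomain with a suspicious label and one blocklist hit scores well above the critical threshold, while a routine spam report stays in the normal queue.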

Step 3: Manual Review

A human moderator reviews the report, the subdomain content, and any associated account information. Our moderators follow a structured review checklist:

  • Confirm the subdomain exists and resolves
  • Access the subdomain content (without interacting with any forms or downloading files)
  • Compare the content against the reporter's description
  • Check the subdomain owner's history (prior reports, account age, other subdomains)
  • Determine whether the content violates our Terms of Service

Step 4: Action Taken

Based on the review, the moderator takes one of the following actions:

  • No action: Content does not violate our policies. Reporter is notified with an explanation.
  • Warning issued: Minor or ambiguous violation. Subdomain owner is notified and given a timeframe to remediate.
  • Subdomain suspended: DNS resolution is disabled immediately. Subdomain owner is notified with the reason and appeal instructions.
  • Account terminated: For severe or repeat violations, the entire account is disabled and all subdomains are reclaimed.
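The four outcomes above can be modeled as a small decision function. The thresholds here (two prior violations triggering termination, ambiguous first offenses earning a warning) are illustrative assumptions, not our actual enforcement rules.

```python
from enum import Enum

class Action(Enum):
    NO_ACTION = "no_action"
    WARNING = "warning"
    SUSPEND = "suspend"
    TERMINATE = "terminate"

def decide_action(violation: bool, ambiguous: bool,
                  severe: bool, prior_violations: int) -> Action:
    """Map review findings to one of the four enforcement outcomes.
    Thresholds are illustrative, not the actual policy."""
    if not violation:
        return Action.NO_ACTION
    if severe or prior_violations >= 2:
        return Action.TERMINATE          # severe or repeat violations
    if ambiguous and prior_violations == 0:
        return Action.WARNING            # minor/ambiguous first offense
    return Action.SUSPEND                # clear violation: disable DNS
```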

What to Include in an Abuse Report

A well-prepared abuse report accelerates our response time. Please include the following information:

  • Subdomain URL: The full is-cool-me subdomain URL (e.g., https://example.is-pro.dev)
  • Screenshots: Clear screenshots of the offending content, including the URL bar to establish context
  • Description: A brief description of why you believe this is abusive (e.g., "This page appears to be a phishing clone of the Gmail login page")
  • Timestamps: When you discovered the content and, if applicable, when it was reported elsewhere
  • Contact information (optional): Your email address if you want to receive status updates
  • Additional evidence: Any relevant HTTP headers, DNS records, or network analysis that supports your report

Tip: Screenshots are the most valuable evidence you can provide. They capture the content as it appeared at the time of reporting, which is critical if the subdomain owner modifies or removes the content before our moderators review it. Use your browser's built-in screenshot tool or a dedicated capture extension.
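A well-formed report can be validated mechanically before it enters triage. The field names and example payload below are illustrative, not a formal schema we publish.

```python
# Illustrative report validation; field names are hypothetical.
REQUIRED_FIELDS = ("url", "description", "screenshots")

def validate_report(report: dict) -> list[str]:
    """Return a list of problems with a submitted abuse report."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not report.get(field):
            problems.append(f"missing required field: {field}")
    url = report.get("url", "")
    if url and not url.startswith("https://"):
        problems.append("url should be the full https:// subdomain URL")
    return problems

# Example of a well-formed payload (all values invented for illustration)
example = {
    "url": "https://example.is-pro.dev",
    "description": "Phishing clone of the Gmail login page",
    "screenshots": ["capture-with-url-bar.png"],
    "timestamps": {"discovered": "2024-01-01T12:00:00Z"},
    "contact": None,  # optional: email for status updates
}
```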

How We Verify Reports

Verification is the most important step in our process. We need to confirm that the reported content is genuinely abusive and not a false alarm before taking enforcement action. Our verification methodology includes:

  • DNS record inspection: We check whether the subdomain's DNS records point to the claimed hosting provider and whether the records appear legitimate
  • Content analysis: We access the subdomain and analyze the served content. For phishing reports, we check whether the page is replicating a known login interface or brand asset
  • Hosting provider check: If the abuse involves a downstream hosting provider (e.g., the subdomain points to a Vercel project hosting phishing content), we may coordinate with that provider's abuse team
  • Cross-referencing: We check threat intelligence feeds and community reports to see if the same subdomain or content has been reported elsewhere

Response Times: Target SLAs

We commit to the following response time targets for abuse reports:

Abuse Type                    | Initial Response | Resolution Target
Phishing / Credential Theft   | < 1 hour         | < 4 hours
Malware Distribution          | < 1 hour         | < 4 hours
Impersonation                 | < 4 hours        | < 24 hours
Spam / SEO Abuse              | < 24 hours       | < 72 hours
Copyright Infringement (DMCA) | < 72 hours       | < 7 days

These are targets, not guarantees. During weekends, holidays, or periods of high report volume, response times may be longer. We are actively building a larger moderation team to improve coverage.
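The SLA table translates directly into deadline arithmetic. The sketch below encodes those targets in hours; it is a worked example of the table, not a tool we ship.

```python
from datetime import datetime, timedelta, timezone

# Target SLAs from the table above, as (initial, resolution) in hours.
SLA_HOURS = {
    "phishing":      (1, 4),
    "malware":       (1, 4),
    "impersonation": (4, 24),
    "spam":          (24, 72),
    "copyright":     (72, 7 * 24),
}

def sla_deadlines(report_type: str, received_at: datetime):
    """Compute (initial-response deadline, resolution deadline)
    for a report received at the given time."""
    initial_h, resolution_h = SLA_HOURS[report_type]
    return (received_at + timedelta(hours=initial_h),
            received_at + timedelta(hours=resolution_h))
```

For example, a phishing report received at midnight UTC has an initial-response deadline of 01:00 and a resolution target of 04:00 the same day.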

False Positives: Handling Mistaken Reports

False positives — reports that turn out to be unfounded — are an inevitable part of any abuse handling system. We handle them carefully for two reasons: first, to minimize disruption to legitimate subdomain owners, and second, because treating mistaken reporters fairly encourages continued good-faith reporting.

When a report is determined to be a false positive:

  • The reporter is notified with an explanation of why the content was not found to be in violation
  • No action is taken against the subdomain or its owner
  • The subdomain owner is not notified that they were reported (to prevent retaliation)
  • The report data is used to improve our automated triage accuracy

If you receive a false positive and want to understand why your subdomain was flagged, contact us — we are happy to explain our analysis and, if appropriate, whitelist your subdomain from future automated checks.

What Subdomain Owners Can Do

If you own an is-cool-me subdomain, there are proactive steps you can take to avoid being used for abuse and to respond effectively if abuse is reported on your subdomain:

Security Hardening

  • Use a secure hosting provider: Choose a provider with strong abuse response and DDoS protection
  • Enable HTTPS: Most abuse reports involve non-HTTPS content or invalid certificates
  • Monitor your subdomain: Set up periodic checks to verify your subdomain resolves to the expected content
  • Remove unused subdomains: If you no longer need a subdomain, delete the DNS record entirely to prevent repurposing
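The "monitor your subdomain" item can be as simple as comparing a content fingerprint over time. This sketch shows the comparison logic only; in a real monitor the body bytes would come from an HTTPS fetch of your subdomain, which is omitted here to keep the example self-contained.

```python
import hashlib

def content_fingerprint(body: bytes) -> str:
    """Hash the served content so unexpected changes can be detected."""
    return hashlib.sha256(body).hexdigest()

def check_subdomain(expected_fingerprint: str, body: bytes) -> bool:
    """Return True if the subdomain still serves the expected content.
    `body` would normally be fetched from the live subdomain."""
    return content_fingerprint(body) == expected_fingerprint
```

Record the fingerprint after each deliberate deploy; if a periodic check sees a different value you did not cause, investigate immediately, since a hijacked or repurposed subdomain is a common source of abuse reports.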

Responding to a Report

If you receive an abuse notification from us:

  • Read the notification carefully — we include the specific URL and reason for the report
  • If you believe the report is incorrect, respond with evidence explaining why (screenshots, code, documentation)
  • If the report is correct, remediate the issue immediately and notify us when done
  • Do not retaliate against the reporter — focus on fixing the issue

Case Study: Handling a Phishing Report

To illustrate our process, here is an anonymized case study of a real phishing report we handled:

Report Received: A security researcher reported secure-login.is-pro.dev via our contact form, attaching screenshots showing a page that visually replicated the MetaMask wallet login interface. The subdomain had been registered 6 hours prior by a GitHub account created 2 days earlier.

Triage (5 minutes): Our automated system flagged the subdomain name for containing "login" in a context that matched a known phishing pattern. The report priority was elevated to critical based on the screenshots and the new account flag.

Review (10 minutes): The on-call moderator accessed the subdomain and confirmed it was a near-exact clone of the MetaMask interface. The page included JavaScript that captured wallet seed phrases and exfiltrated them to an external server.

Action (immediate): The subdomain was suspended within 15 minutes of the initial report. The owner's account was placed under review, and all their subdomains were temporarily disabled. The phishing URL was reported to Google Safe Browsing.

Follow-up (24 hours): The reporter was notified of the action taken. The account was terminated after review confirmed the malicious intent. The subdomain name was added to our permanent blocklist.

Legal Considerations

Abuse handling operates within a legal framework that varies by jurisdiction. Here are the key legal concepts that shape our policies:

DMCA Safe Harbor (United States)

Under the Digital Millennium Copyright Act, online service providers can qualify for safe harbor from copyright infringement liability if they implement a notice-and-takedown system and respond expeditiously to valid takedown notices. We comply with the DMCA by maintaining a designated copyright agent, accepting DMCA notices, and processing takedowns in accordance with the statute. Counter-notices from subdomain owners who believe their content was removed in error are reviewed and processed as required by law.

GDPR (European Union)

When processing abuse reports, we may handle personal data belonging to both the reporter and the subdomain owner. Under GDPR, we must have a lawful basis for this processing (legitimate interest in platform safety), provide transparency about our data handling, and respect data subject rights including erasure. Abuse-related data is retained only as long as necessary for enforcement and legal compliance.

Safe Harbor for User-Generated Content (EU Digital Services Act)

The EU Digital Services Act (DSA) establishes a framework for intermediary liability and requires platforms to implement notice-and-action mechanisms for illegal content. Our abuse reporting system is designed to meet DSA requirements, including transparency reporting and user redress options.

Cross-Border Cooperation

Abuse does not respect borders. A phishing site hosted via is-cool-me may target users in Europe, be reported by a researcher in Asia, and be hosted on infrastructure in North America. We cooperate with law enforcement and abuse teams worldwide, subject to applicable privacy laws and our terms of service.

Continuous Improvement

Our abuse handling process is never "finished." Every report — whether confirmed or false positive — provides data that helps us improve. We regularly review our triage accuracy, response times, and moderation decisions to identify areas for improvement. Our transparency reports, published periodically, provide metrics on the volume and disposition of abuse reports.

Need hands-on help? See Guides for step-by-step setup playbooks, or join the Discord community.

Deployment scenario from operations

In one operational review, moderation decisions were checked against concrete evidence to separate genuinely unsafe behavior from legitimate project activity.

Platform nuance: Moderation quality improves when reports are specific, time-stamped, and technically reproducible.

Common mistakes

  • Treating policy language as vague guidance instead of enforceable boundaries.
  • Submitting reports without timestamps or reproducible evidence.
  • Assuming moderation outcomes are random when evidence is incomplete.

How to verify it works

  1. Check decision rationale against published policy categories.
  2. Confirm evidence package contains URLs, timing, and impact details.
  3. Use appeal path when factual corrections are needed.

Run these checks before closing an abuse report or confirming an enforcement action to the reporter.