Mining URLs from dark corners of Web Archives for bug hunting/fuzzing/further probing




paramspider allows you to fetch URLs related to any domain, or a list of domains, from the Wayback Machine archives. It filters out "boring" URLs (static assets and the like), allowing you to focus on the ones that matter the most.
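Under the hood, tools in this family typically query the Wayback Machine's public CDX API for every archived URL of a domain and then discard static-asset URLs. A minimal sketch of that idea in Python (the extension list and function names here are illustrative, not paramspider's actual internals):

```python
from urllib.parse import urlparse

# Illustrative "boring" extension list; paramspider's real filter may differ.
BORING_EXTENSIONS = {
    ".css", ".js", ".png", ".jpg", ".jpeg", ".gif", ".svg",
    ".ico", ".woff", ".woff2", ".ttf", ".eot", ".pdf",
}

def cdx_query_url(domain: str) -> str:
    # The Wayback Machine's CDX API returns one archived URL per line.
    return (
        "https://web.archive.org/cdx/search/cdx"
        f"?url={domain}/*&output=txt&fl=original&collapse=urlkey"
    )

def is_interesting(url: str) -> bool:
    # Keep URLs whose path does not end in a static-asset extension.
    path = urlparse(url).path.lower()
    return not any(path.endswith(ext) for ext in BORING_EXTENSIONS)

archived = [
    "https://example.com/search?q=test",
    "https://example.com/static/app.js",
    "https://example.com/item?id=1",
]
print([u for u in archived if is_interesting(u)])
# → ['https://example.com/search?q=test', 'https://example.com/item?id=1']
```

Fetching `cdx_query_url(...)` over HTTP and filtering each line with `is_interesting` gives the basic pipeline; the real tool adds deduplication and the parameter-placeholder step shown in the examples below.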


To install paramspider, follow these steps:

    git clone
    cd paramspider
    pip install .


To use paramspider, run it against a target domain:

    paramspider -d example.com


Here are a few examples of how to use paramspider:

  • Discover URLs for a single domain:

    paramspider -d example.com
  • Discover URLs for multiple domains from a file:

    paramspider -l domains.txt
  • Stream URLs on the terminal:

    paramspider -d example.com -s
  • Set up web request proxy:

    paramspider -d example.com --proxy ''
  • Add a placeholder for URL parameter values (default: "FUZZ"):

    paramspider -d example.com -p '"><h1>reflection</h1>'
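The placeholder behaviour can be pictured as rewriting every query-string value while keeping the parameter names intact. A rough sketch, assuming paramspider does something along these lines (the function name is hypothetical):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def apply_placeholder(url: str, placeholder: str = "FUZZ") -> str:
    # Hypothetical helper: swap each parameter value for the placeholder,
    # preserving the parameter names and their order.
    parts = urlparse(url)
    pairs = parse_qsl(parts.query, keep_blank_values=True)
    fuzzed = [(key, placeholder) for key, _ in pairs]
    return urlunparse(parts._replace(query=urlencode(fuzzed)))

print(apply_placeholder("https://example.com/page?id=42&name=alice"))
# → https://example.com/page?id=FUZZ&name=FUZZ
```

Note that `urlencode` percent-encodes special characters, so an HTML payload passed via `-p` would appear URL-encoded in this sketch; the real tool may substitute it verbatim.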


Contributions are welcome! If you'd like to contribute to paramspider, please follow these steps:

  1. Fork the repository.
  2. Create a new branch.
  3. Make your changes and commit them.
  4. Submit a pull request.
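The steps above map onto a short git session. A sketch using a throwaway local repository (in practice you would clone your fork of paramspider instead):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo                 # stand-in for cloning your fork
git config user.email you@example.com
git config user.name "Your Name"
echo "my fix" > notes.txt
git checkout -q -b my-feature               # step 2: create a new branch
git add notes.txt
git commit -q -m "Describe your change"     # step 3: commit your changes
git branch --show-current                   # prints: my-feature
# step 4: push the branch and open a pull request on GitHub
```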
