Web-Sorrow - Tool For Detecting Misconfigurations and Collecting Server Information


Web-Sorrow is a Perl-based scanner for finding server misconfigurations and collecting server information. Because it is written in Perl, it runs on almost any system where Perl is available. The tool is focused entirely on enumeration and information gathering about the target server; it is not designed to exploit anything or perform harmful attacks, so it is safe to run against web servers.

Here are some of the major features of Web-Sorrow:
  • CMS (Content Management System) detection
  • Port scanning
  • Login page scanning
  • Proxy support
  • Bruteforce (Subdomains, Files, and Directories)
  • Stealth
  • Error bagging
  • Standard set of scans (directory indexing, banner grabbing, language detection, robots.txt, HTTP 200 response testing, thumbs.db scanning, etc.)


perl Wsorrow.pl [HOST OPTIONS] [SCAN(s)] [SCAN SETTING(s)]
    -host [host]     --  Defines the host to scan. Accepts a list separated
                         by semicolons, 1.1.1.* type IP ranges, and domain
                         ranges like www1-10.site.com
    -port [port num] --  Defines port number to use (Default is 80)
    -proxy [ip:port] --  Use an HTTP, HTTPS, or gopher proxy server
    -S          --  Standard set of scans including: aggressive directory indexing,
                    banner grabbing, language detection, robots.txt,
                    HTTP 200 response testing, Apache user enum, SSL cert,
                    mobile page testing, sensitive item scanning,
                    thumbs.db scanning, content negotiation, and non-port-80
                    HTTP port sweeps
    -auth       --  Scan for login pages, admin consoles, and email webapps
    -Cp [dp | jm | wp | all] --  Scan for CMS plugins.
                    dp = Drupal, jm = Joomla, wp = WordPress
    -Fd         --  Scan for common interesting files and dirs (Bruteforce)
    -Sfd        --  Very small files-and-dirs enumeration (to save time)
    -Sd         --  BruteForce Subdomains (host given must be a domain. Not an IP)
    -Ws         --  Scan for Web Services on host such as: cms version info,
                    blogging services, favicon fingerprints, and hosting provider
    -Db         --  BruteForce Directories with the big dirbuster Database
    -Df [option]    --  Scan for default files. Platforms/options: Apache,
                    Frontpage, IIS, Oracle9i, Weblogic, Websphere,
                    MicrosoftCGI, all (enables all)
    -ninja      --  A lightweight, stealthy scan that uses bits and
                    pieces from other scans (it is not recommended to combine
                    it with other scans if you want to stay stealthy; see
                    readme.txt)
    -fuzzsd     --  Fuzz every found file for Source Disclosure
    -e          --  Everything. run all scans
    -intense    --  like -e but no bruteforce
    -I          --  Passively scan responses for interesting strings such as:
                    emails, WordPress dirs, CGI dirs, SSI, Facebook fbids,
                    and much more (results may contain partial HTML)
    -dp         --  Do passive tests on requests: banner grabbing, directory
                    indexing, non-200 HTTP statuses, strings in error pages,
                    passive web services
    -flag [txt] --  Report when this text shows up in responses
    -ua [ua]    --  User-Agent to use. Put it in quotes
                    (default is a Firefox-on-Linux UA)
    -Rua        --  Generate a new random User-Agent per request
    -R       --  Only request HTTP headers via range requests.
                 This is much faster, but some features and capabilities
                 may not work with this option. It is perfect when
                 you only want to know whether something exists or not,
                 as in -auth or -Fd
    -gzip    --  Compress HTTP responses from the host for speed. Some
                 banner grabbing will not work
    -d [dir] --  Only scan within this directory
    -https   --  Use https (ssl) instead of http
    -nr      --  Don't do response analysis (i.e. false positive testing,
                 interesting headers other than banner grabbing). Use -nr
                 if you want your scan to be less verbose
    -Shadow  --  Request pages from Google's cache instead of from the host
                 (mostly useful with -I; otherwise it is unreliable)
    -die     --  Stop scanning host if it appears to be offline
    -reject  --  Treat this http status code as a 404 error
EXAMPLES:
    perl Wsorrow.pl -host scanme.nmap.org -S
    perl Wsorrow.pl -host nyan.cat -Fd -fuzzsd
    perl Wsorrow.pl -host nationalcookieagency.mil -Cp dp,jm -ua "script w/ the munchies"
    perl Wsorrow.pl -host chatrealm.us -d /wordpress -Cp wp
    perl Wsorrow.pl -host -port 8080 -proxy -S -Ws -I


  • The -ninja option doesn't make other scans stealthy; it is itself a scan that uses very few requests.
  • When using -Cp you can scan for a single CMS's plugins or several at once, e.g. -Cp wp,dp or -Cp wp;dp; the separator character doesn't matter.
  • When using -ua, you must use quotes if it contains whitespace.
  • If you use -e together with other scans, those scans will run twice.
  • To log results to a file: perl Wsorrow.pl -host host.com -S -I >logfile.txt
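The redirection in the last tip sends everything to the file, so nothing appears on screen. If you want to watch the scan while also keeping a log, piping through tee is a common alternative (host.com and the log filename below are placeholders):

```shell
# Same scan as the logging tip above, but tee both prints the
# output to the terminal and writes a copy to logfile.txt.
perl Wsorrow.pl -host host.com -S -I | tee logfile.txt
```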
