Link: https://github.com/ffuf/ffuf

FFUF, or Fuzz Faster U Fool, is an easy-to-use web fuzzer written in Go.
Therefore, the only requirement is that you have Go installed on your system.
A good guide to installing Go can be found here: https://golang.org/doc/install
FFUF itself can be installed by cloning the GitHub repository and building it, or simply via the
go get github.com/ffuf/ffuf command.
As with most tools, the available options can be displayed with the -h parameter.

# ffuf -h
Fuzz Faster U Fool - v1.0.2

HTTP OPTIONS:
  -H               Header `"Name: Value"`, separated by colon. Multiple -H flags are accepted.
  -X               HTTP method to use (default: GET)
  -b               Cookie data `"NAME1=VALUE1; NAME2=VALUE2"` for copy as curl functionality.
  -d               POST data
  -r               Follow redirects (default: false)
  -recursion       Scan recursively. Only FUZZ keyword is supported, and URL (-u) has to end in it. (default: false)
  -recursion-depth Maximum recursion depth. (default: 0)
  -replay-proxy    Replay matched requests using this proxy.
  -timeout         HTTP request timeout in seconds. (default: 10)
  -u               Target URL
  -x               HTTP Proxy URL

GENERAL OPTIONS:
  -V               Show version information. (default: false)
  -ac              Automatically calibrate filtering options (default: false)
  -acc             Custom auto-calibration string. Can be used multiple times. Implies -ac
  -c               Colorize output. (default: false)
  -maxtime         Maximum running time in seconds. (default: 0)
  -p               Seconds of `delay` between requests, or a range of random delay. For example "0.1" or "0.1-2.0"
  -s               Do not print additional information (silent mode) (default: false)
  -sa              Stop on all error cases. Implies -sf and -se. (default: false)
  -se              Stop on spurious errors (default: false)
  -sf              Stop when > 95% of responses return 403 Forbidden (default: false)
  -t               Number of concurrent threads. (default: 40)
  -v               Verbose output, printing full URL and redirect location (if any) with the results. (default: false)
  
  MATCHER OPTIONS:
  -mc              Match HTTP status codes, or "all" for everything. (default: 200,204,301,302,307,401,403)
  -ml              Match amount of lines in response
  -mr              Match regexp
  -ms              Match HTTP response size
  -mw              Match amount of words in response

FILTER OPTIONS:
  -fc              Filter HTTP status codes from response. Comma separated list of codes and ranges
  -fl              Filter by amount of lines in response. Comma separated list of line counts and ranges
  -fr              Filter regexp
  -fs              Filter HTTP response size. Comma separated list of sizes and ranges
  -fw              Filter by amount of words in response. Comma separated list of word counts and ranges

INPUT OPTIONS:
  -D               DirSearch wordlist compatibility mode. Used in conjunction with -e flag. (default: false)
  -e               Comma separated list of extensions. Extends FUZZ keyword.
  -ic              Ignore wordlist comments (default: false)
  -input-cmd       Command producing the input. --input-num is required when using this input method. Overrides -w.
  -input-num       Number of inputs to test. Used in conjunction with --input-cmd. (default: 100)
  -mode            Multi-wordlist operation mode. Available modes: clusterbomb, pitchfork (default: clusterbomb)
  -request         File containing the raw http request
  -request-proto   Protocol to use along with raw request (default: https)
  -w               Wordlist file path and (optional) keyword separated by colon. eg. '/path/to/wordlist:KEYWORD'

OUTPUT OPTIONS:
  -debug-log       Write all of the internal logging to the specified file.
  -o               Write output to file
  -od              Directory path to store matched results to.
  -of              Output file format. Available formats: json, ejson, html, md, csv, ecsv (default: json)

[...SNIP...]

As you can see, there are a lot of available parameters, but don't worry, getting started is really easy.
All you need is a target URL, defined with the parameter -u, and a wordlist, defined with the parameter -w.
If you don't have a good directory wordlist already, I can recommend the SecLists discovery wordlists by Daniel Miessler; you can find the GitHub repository at https://github.com/danielmiessler/SecLists. Definitely clone this repository!
To scan a target URL, simply issue a command like the one shown further below.

Please note the FUZZ keyword in all capitals; it tells FFUF where to insert the words from the supplied wordlist.
In other words, if your wordlist only contains the words

info.php
index.html
backup.zip

FFUF will issue the requests below:

https://scansite.local/info.php
https://scansite.local/index.html
https://scansite.local/backup.zip
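The substitution itself can be sketched with a plain shell loop. This is just an analogy of what FFUF does internally, using an illustrative wordlist file and the placeholder host from above:

```shell
# Build a tiny example wordlist (illustrative only).
printf 'info.php\nindex.html\nbackup.zip\n' > /tmp/words.txt

# Each word from the list replaces the FUZZ keyword in the URL template:
while read -r word; do
    echo "https://scansite.local/FUZZ" | sed "s/FUZZ/${word}/"
done < /tmp/words.txt
# → https://scansite.local/info.php
# → https://scansite.local/index.html
# → https://scansite.local/backup.zip
```

FFUF of course sends a real HTTP request for each generated URL instead of just printing it.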

An example scan of this blog would look as follows:

ffuf -u "https://scansite.local/FUZZ" -w /usr/share/wordlists/SecLists/Discovery/Web-Content/raft-medium-directories.txt

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.0.2
________________________________________________

 :: Method           : GET
 :: URL              : https://scansite.local/FUZZ
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 10
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
________________________________________________

components              [Status: 301, Size: 0, Words: 1, Lines: 1]
wp-content              [Status: 301, Size: 0, Words: 1, Lines: 1]
xmlrpc                  [Status: 301, Size: 0, Words: 1, Lines: 1]
media                   [Status: 301, Size: 0, Words: 1, Lines: 1]
bin                     [Status: 301, Size: 0, Words: 1, Lines: 1]
profiles                [Status: 301, Size: 0, Words: 1, Lines: 1]
modules                 [Status: 301, Size: 0, Words: 1, Lines: 1]
wp-includes             [Status: 301, Size: 0, Words: 1, Lines: 1]
administrator           [Status: 301, Size: 0, Words: 1, Lines: 1]
themes                  [Status: 301, Size: 0, Words: 1, Lines: 1]
images                  [Status: 301, Size: 0, Words: 1, Lines: 1]
password                [Status: 301, Size: 0, Words: 1, Lines: 1]
plugins                 [Status: 301, Size: 0, Words: 1, Lines: 1]
js                      [Status: 301, Size: 0, Words: 1, Lines: 1]
reply                   [Status: 301, Size: 0, Words: 1, Lines: 1]
misc                    [Status: 301, Size: 0, Words: 1, Lines: 1]
wp-admin                [Status: 301, Size: 0, Words: 1, Lines: 1]
node                    [Status: 301, Size: 0, Words: 1, Lines: 1]
scripts                 [Status: 301, Size: 0, Words: 1, Lines: 1]
user                    [Status: 301, Size: 0, Words: 1, Lines: 1]
installation            [Status: 301, Size: 0, Words: 1, Lines: 1]
test                    [Status: 301, Size: 0, Words: 1, Lines: 1]
logout                  [Status: 301, Size: 0, Words: 1, Lines: 1]
tmp                     [Status: 301, Size: 0, Words: 1, Lines: 1]
libraries               [Status: 301, Size: 0, Words: 1, Lines: 1]
search                  [Status: 301, Size: 0, Words: 1, Lines: 1]
templates               [Status: 301, Size: 0, Words: 1, Lines: 1]
.
.
.

As you can see from the example output above, every request FFUF made was answered with a 301 redirect.
A likely reason why redirects occur on all requests is that the web server redirects every unknown path to a 404 Not Found page.
Nevertheless, we have a lot of output to look through in order to find valuable targets for further exploitation.

Filters and matchers

To hide these entries, which are unimportant to us, we can use the various filters and matchers the tool provides.
The available filters can be seen in the help menu or on the GitHub page mentioned above.
Here are the currently available filters and matchers from the help menu:

MATCHER OPTIONS:
  -mc              Match HTTP status codes, or "all" for everything. (default: 200,204,301,302,307,401,403)
  -ml              Match amount of lines in response
  -mr              Match regexp
  -ms              Match HTTP response size
  -mw              Match amount of words in response

FILTER OPTIONS:
  -fc              Filter HTTP status codes from response. Comma separated list of codes and ranges
  -fl              Filter by amount of lines in response. Comma separated list of line counts and ranges
  -fr              Filter regexp
  -fs              Filter HTTP response size. Comma separated list of sizes and ranges
  -fw              Filter by amount of words in response. Comma separated list of word counts and ranges

Let me explain a little what filters and matchers are, and how they can help us get rid of the uninteresting parts of a scan and focus on the more valuable findings.

Matchers

A matcher defines a condition that has to be true for the data not to get discarded.
In other words, if you define a matcher such as -mc 200, you tell FFUF to only show you responses with the HTTP status code 200 OK.
You can define multiple status codes by putting them into a comma-separated list, for example -mc 200,500. This will only give us the responses with a 200 OK or a 500 Internal Server Error status code.
FFUF can define matchers on multiple attributes of the response. Currently supported are the number of lines, the response size (in bytes), the HTTP status code, and the number of words in the response. It is even possible to define a regular expression, but more on that in a later post.
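To make the idea concrete, here is a rough shell analogy (not FFUF itself): given some made-up result lines with the status code in the second column, a matcher keeps only the lines whose status code is in the allowed list, much like -mc 200,500 would:

```shell
# Sample scan results (path and status code; illustrative only).
printf 'admin 301\nindex.php 200\nbackup.zip 200\ncrash.php 500\nlogin 302\n' |
    awk '$2 == 200 || $2 == 500'   # matcher: keep only 200 and 500
# → index.php 200
# → backup.zip 200
# → crash.php 500
```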

Filters

Filters are the exact opposite of matchers: the condition of a filter has to be false for the data not to get discarded.
So if we define a filter on the HTTP status code 200 OK, all responses with that status code get discarded, and we only see the responses that don't have a 200 OK status code. Similar to the matchers described above, filters can take multiple values as a comma-separated list to filter out multiple status codes.
The attributes one can define a filter on are the same as those mentioned in the matchers section above.
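Continuing the shell analogy from the matchers section (again, this is not FFUF, just the same made-up sample data), a filter such as -fc 301 inverts the logic and drops the matching lines instead of keeping them:

```shell
# Sample scan results (path and status code; illustrative only).
printf 'admin 301\nindex.php 200\nbackup.zip 200\ncrash.php 500\nlogin 302\n' |
    awk '$2 != 301'   # filter: discard every 301 response
# → index.php 200
# → backup.zip 200
# → crash.php 500
# → login 302
```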

With the basics out of the way, we can now tune the FFUF command to scan the target while removing all the uninteresting noise from the output.
The new command looks as follows:

ffuf -u "https://scansite.local/FUZZ" -w /usr/share/wordlists/SecLists/Discovery/Web-Content/raft-medium-directories.txt -fc 301

The new scan output can be seen below:

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v1.0.2
________________________________________________

 :: Method           : GET
 :: URL              : https://scansite.local/FUZZ
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 10
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403
 :: Filter           : Response status: 301
________________________________________________

index.php               [Status: 200, Size: 101, Words: 5, Lines: 1]
backup.zip              [Status: 200, Size: 6876213, Words: 0, Lines: 1]
auth.php                [Status: 200, Size: 3139, Words: 221, Lines: 80]
/wordpress              [Status: 302, Size: 0, Words: 0, Lines: 0]
/server-status          [Status: 401, Size: 0, Words: 0, Lines: 0]

As you can now see, all the 301 responses are gone, and we are left with a clean result list that shows only the interesting directories and files found.
In my opinion, FFUF is a brilliant tool and is a staple in my bug bounty toolchain.

There will be an additional article on FFUF diving deeper into the possibilities of this amazing tool. Stay tuned!