Attacking Web Applications with Ffuf - Page Fuzzing

Server Returns 404 for Automated Requests but Works Fine in Browser

I’m trying to brute-force file paths with ffuf, but I’ve run into an issue where the server returns a 404 status code for certain files, even though those files exist and can be accessed in the browser.

For example, when using ffuf with the following command:

ffuf -w naix.txt:FUZZ -u http://SERVER:PORT/FUZZ.php -mc all

I get a 404 response for home.php:

home                    [Status: 404, Size: 278, Words: 23, Lines: 10, Duration: 16ms]
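
If it helps, I assume rerunning with -v (verbose output) would print the full URL ffuf actually requested for each result, which should at least confirm the exact path behind this 404:

# -v prints the full URL (and any redirect location) alongside each result
ffuf -w naix.txt:FUZZ -u http://SERVER:PORT/FUZZ.php -mc all -v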

However, when I request home.php in a browser or with curl, I receive a 200 OK response and the page loads as expected:

curl -I http://SERVER:PORT/blog/home.php

Response:

HTTP/1.1 200 OK
Date: Sat, 07 Sep 2024 11:30:38 GMT
Server: Apache/2.4.41 (Ubuntu)
Content-Type: text/html; charset=UTF-8
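
For a side-by-side check, the URL ffuf builds from the template above would be http://SERVER:PORT/home.php (no /blog/ prefix), so I presume the equivalent curl against that exact path would be:

curl -I http://SERVER:PORT/home.php

I’m noting it here mainly so the two requests can be compared like for like.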

It seems like the server is intentionally returning a 404 status for automated requests or when it detects a scanner like ffuf.

Things I’ve tried:

  1. Changing the User-Agent to mimic a browser:

    ffuf -w naix.txt:FUZZ -u http://83.136.255.40:35306/FUZZ.php -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3" -mc all
    

    However, the issue persists.

  2. Verifying the server response with curl (against /blog/home.php, as shown above), which consistently returns 200 OK. The next step I’m considering is comparing the raw requests directly; that idea is sketched after this list.
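
One thing I haven’t tried yet (sketched here on the assumption that a local intercepting proxy such as Burp is listening on 127.0.0.1:8080) is routing ffuf through the proxy so its raw request can be diffed against the browser’s:

# 127.0.0.1:8080 is an assumption (Burp's default listener); adjust as needed
ffuf -w naix.txt:FUZZ -u http://SERVER:PORT/FUZZ.php -mc all -x http://127.0.0.1:8080

If the two requests turn out to be identical apart from headers, that would at least confirm whether header-based detection is what’s happening.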


Why is the server returning 404 for automated requests and 200 for manual (browser or curl) requests? Is there any way to bypass this behavior and get ffuf to recognize the file?