Setting Up OpenResty for Report Routing Load Balancing

This guide helps you configure OpenResty to:

  • Route report rendering requests based on report name and token
  • Load balance login requests across multiple backend servers
  • Inspect request bodies and headers using Lua
  • Centralize header forwarding and response handling

Step 1: Install OpenResty

  • Go to openresty.org
  • Download the Windows version (or Linux/macOS if applicable)
  • Extract it to a folder like: C:\OpenResty\

Step 2: Prepare Your Backend Servers

In this guide, we will be using 3 servers:

  • Server A running on localhost:1730
  • Server B running on localhost:1731
  • Server C running on localhost:1729 (used by the round-robin branch in the config below; if you only run two servers, point /proxy_1729 at one of the others)

Each should be able to handle /rml-engine/api/v1/render and /remote-access/login.
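Before putting OpenResty in front of them, it can save debugging time to confirm each backend answers on its own. A rough check with curl (ports are from this guide; the login route may return 401 or similar without credentials, which still proves the server is up):

```shell
# Expect an HTTP status code from each backend rather than a connection error.
curl -s -o /dev/null -w "1730 -> %{http_code}\n" "http://localhost:1730/remote-access/login"
curl -s -o /dev/null -w "1731 -> %{http_code}\n" "http://localhost:1731/remote-access/login"
```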

Step 3: Create Your OpenResty Config

  1. Replace the contents of conf/nginx.conf with this:
worker_processes 1;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    lua_shared_dict rr_counter 1m;

    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - [$time_local] "$request" '
                    '$status upstream="$upstream_addr" '
                    'args="$args" rml="$rml"';

    access_log logs/access.log main;

    sendfile on;
    keepalive_timeout 65;

    upstream backend_1730 { server 127.0.0.1:1730; }
    upstream backend_1731 { server 127.0.0.1:1731; }
    upstream backend_1729 { server 127.0.0.1:1729; }

    upstream login_backend {
        server 127.0.0.1:1730;
        server 127.0.0.1:1731;
    }

    server {
        listen 8080;
        server_name localhost;

        location /rml-engine/api/v1/render {
            # $rml must be declared with "set" before Lua can assign to it
            set $rml "";

            access_by_lua_block {
                ngx.req.read_body()
                -- get_body_data() returns nil if the body was spooled to a
                -- temp file; raise client_body_buffer_size if that happens
                local body = ngx.req.get_body_data()
                local cjson = require "cjson.safe"
                local json = body and cjson.decode(body) or nil
                local rml = json and json.rml or ""
                ngx.var.rml = rml
            }

            content_by_lua_block {
                local args = ngx.req.get_uri_args()
                local token = args["elx.token"]
                local rml = ngx.var.rml

                local target_location
                if rml == "/ElixirSamples/Report/RML/Map.rml"
                   or rml == "/ElixirSamples/Report/RML/Master-Detail Report.rml" then
                    target_location = "/proxy_1731"
                else
                    local dict = ngx.shared.rr_counter
                    local counter = dict:incr("count", 1)
                    if not counter then
                        dict:set("count", 0)
                        counter = 0
                    end
                    target_location = (counter % 2 == 0) and "/proxy_1730" or "/proxy_1729"
                end

                -- ngx.location.capture has no "headers" option; subrequests
                -- inherit the parent request's headers (Authorization,
                -- Cookie, Content-Type) automatically. Set the usual proxy
                -- headers on the parent request before capturing:
                ngx.req.set_header("X-Real-IP", ngx.var.remote_addr)
                ngx.req.set_header("X-Forwarded-For", ngx.var.proxy_add_x_forwarded_for)
                ngx.req.set_header("X-Forwarded-Proto", ngx.var.scheme)

                local res = ngx.location.capture(target_location, {
                    method = ngx.HTTP_POST,
                    body = ngx.req.get_body_data(),
                    args = args
                })

                ngx.status = res.status
                for k, v in pairs(res.header) do ngx.header[k] = v end
                ngx.print(res.body)
            }
        }

        location /remote-access/login {
            proxy_pass http://login_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Authorization $http_authorization;
            proxy_set_header Content-Type $http_content_type;
        }

        location /proxy_1730 {
            internal;
            proxy_pass http://backend_1730/rml-engine/api/v1/render;
        }

        location /proxy_1731 {
            internal;
            proxy_pass http://backend_1731/rml-engine/api/v1/render;
        }

        location /proxy_1729 {
            internal;
            proxy_pass http://backend_1729/rml-engine/api/v1/render;
        }

        location / {
            proxy_pass http://backend_1730;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

  2. Replace the report names with the ones you use. For example:
if rml == "/Reports/SalesSummary.rml"
   or rml == "/Reports/CustomerInvoice.rml" then
    target_location = "/proxy_1731"

Make sure the report path matches exactly, including folder and filename, or the routing won’t work.

Optional:
If you want certain reports to always go to Server A or Server C, just add more conditions like:

if rml == "/Reports/HRDashboard.rml" then
    target_location = "/proxy_1730"
elseif rml == "/Reports/FinanceOverview.rml" then
    target_location = "/proxy_1729"
end
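If the list of pinned reports keeps growing, the if/elseif chain becomes unwieldy. One alternative is a lookup table; this is a sketch, and the report paths below are placeholders for your own:

```lua
-- Map report paths to internal proxy locations; anything not listed
-- falls through to the round-robin branch (pick_target returns nil).
local routes = {
    ["/Reports/SalesSummary.rml"]    = "/proxy_1731",
    ["/Reports/CustomerInvoice.rml"] = "/proxy_1731",
    ["/Reports/HRDashboard.rml"]     = "/proxy_1730",
    ["/Reports/FinanceOverview.rml"] = "/proxy_1729",
}

local function pick_target(rml)
    return routes[rml]   -- nil means "use round-robin instead"
end
```

In the content_by_lua_block, set target_location from pick_target(rml) and fall back to the round-robin branch when it returns nil.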
Breakdown: What Each Code Block Does

(The annotated config below is a fuller variant of the one in Step 3; it adds gzip compression, an ip_hash server group, and a /service-chooser/ route.)
worker_processes 1;
error_log logs/error.log info;
  • worker_processes 1: Runs NGINX with one worker process. Good enough for local or dev setups.
  • error_log logs/error.log info: Logs errors and important info to logs/error.log.
events {
    worker_connections 1024;
}
  • Allows up to 1024 simultaneous connections per worker. This controls how many clients NGINX can handle at once.
http {
    lua_shared_dict rr_counter 1m;
  • Creates a shared memory space (rr_counter) for Lua to store a counter — used for round-robin logic later.
    include mime.types;
    default_type application/octet-stream;
  • Tells NGINX how to handle different file types.
  • If unknown, it defaults to application/octet-stream (generic binary).
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'header_host="$http_host" header_cookie="$http_cookie" '
                    'upstream="$upstream_addr" request_time=$request_time '
                    'content_type="$http_content_type"';
  • Defines a custom log format that captures IP, request, headers, upstream server, and timing.
  • Useful for debugging and tracing requests.
    sendfile on;
    keepalive_timeout 65;
  • sendfile: Speeds up file transfers.
  • keepalive_timeout: Keeps connections open for 65 seconds to reuse them.
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
  • Enables gzip compression to reduce response size.
  • Targets common content types like JSON, CSS, JS, XML.
    upstream backend_servers {
        ip_hash;
        server 127.0.0.1:1730;
        server 127.0.0.1:1731;
    }
  • Creates a load-balanced group of servers.
  • ip_hash ensures the same client IP always hits the same backend (session stickiness).
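As an aside, and not part of this guide's config: ip_hash is only one of nginx's built-in balancing methods. Weighted round-robin and least_conn are common alternatives, e.g.:

```nginx
    upstream backend_servers {
        least_conn;                      # pick the backend with the fewest active connections
        server 127.0.0.1:1730 weight=2;  # gets roughly twice the traffic of 1731
        server 127.0.0.1:1731;
    }
```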
    upstream backend_1730 { server 127.0.0.1:1730; }
    upstream backend_1731 { server 127.0.0.1:1731; }
    upstream backend_1729 { server 127.0.0.1:1729; }
  • Defines individual upstreams for direct routing.
    server {
        listen 8080;
        server_name localhost;
  • Listens on port 8080 for incoming requests.
        location /service-chooser/ {
            proxy_pass http://backend_servers/service-chooser/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
  • Forwards /service-chooser/ requests to the load-balanced backend group.
  • Passes client info and headers to preserve context.
        location /rml-engine/api/v1/render {
            access_by_lua_block {
                local auth_header = ngx.req.get_headers()["Authorization"]
                ngx.log(ngx.ERR, "Auth header: ", auth_header or "none")
                local cookie = ngx.req.get_headers()["Cookie"]
                ngx.log(ngx.ERR, "Cookie header: ", cookie or "none")
            }
  • Logs the Authorization and Cookie headers for debugging.
            content_by_lua_block {
                ngx.req.read_body()
                local body = ngx.req.get_body_data()
                local cjson = require "cjson.safe"
                local json = cjson.decode(body)
                local rml = json and json.rml
                local args = ngx.req.get_uri_args()
                local token = args["elx.token"]
  • Reads the request body and query string.
  • Extracts the rml file path and elx.token.
                local target_location
                if rml == "/ElixirSamples/Report/RML/Master-Detail Report.rml"
                   or rml == "/ElixirSamples/Report/RML/Map.rml" then
                    target_location = "/proxy_1731"
  • If the report is Map.rml or Master-Detail Report.rml, route to Server B (1731).
                else
                    local dict = ngx.shared.rr_counter
                    local counter, err = dict:incr("count", 1)
                    if not counter then
                        dict:set("count", 0)
                        counter = 0
                    end
                    if counter % 2 == 0 then
                        target_location = "/proxy_1730"
                    else
                        target_location = "/proxy_1729"
                    end
                end
  • Otherwise, use round-robin logic to alternate between Server A (1730) and Server C (1729).
                -- ngx.location.capture has no "headers" option; the
                -- subrequest inherits the parent request's headers, so set
                -- the proxy headers on the parent request first:
                ngx.req.set_header("X-Real-IP", ngx.var.remote_addr)
                ngx.req.set_header("X-Forwarded-For", ngx.var.proxy_add_x_forwarded_for)
                ngx.req.set_header("X-Forwarded-Proto", ngx.var.scheme)
                local res = ngx.location.capture(target_location, {
                    method = ngx.HTTP_POST,
                    body = body,
                    args = args
                })
  • Sends the request to the chosen backend via an internal subrequest.
  • The subrequest inherits the original headers (Authorization, Cookie, Content-Type) and reuses the parsed body.
                ngx.status = res.status
                for k, v in pairs(res.header) do
                    ngx.header[k] = v
                end
                ngx.print(res.body)
            }
        }
  • Returns the backend’s response to the client.
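One caveat worth noting here (this guard is an addition, not part of the config above): copying every response header back verbatim can re-send hop-by-hop headers such as Transfer-Encoding or Connection, which nginx manages itself. A filtered version of the copy loop might look like:

```lua
-- Skip hop-by-hop headers when mirroring the subrequest response.
local skip = { ["transfer-encoding"] = true, ["connection"] = true }
ngx.status = res.status
for k, v in pairs(res.header) do
    if not skip[string.lower(k)] then
        ngx.header[k] = v
    end
end
ngx.print(res.body)
```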
        location /proxy_1730 {
            internal;
            proxy_pass http://backend_1730/rml-engine/api/v1/render;
            proxy_connect_timeout 666s;
            proxy_send_timeout 666s;
            proxy_read_timeout 666s;
        }
  • Internal route to Server A.
  • Long timeouts to support heavy reports.

(Same logic applies to /proxy_1731 and /proxy_1729.)

        location / {
            proxy_pass http://backend_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
  • Forwards all other requests to the load-balanced group.
        access_log logs/host.access.log main;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
  • Logs access using the custom format.
  • Shows a friendly error page if something goes wrong.
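The round-robin counter logic above can be exercised outside OpenResty with a plain-Lua stand-in for the shared dict. The fake dict below is purely an illustration, not part of the config:

```lua
-- Minimal stand-in for ngx.shared.DICT, enough to drive the counter logic:
-- incr fails on a missing key (like the real API without an init argument).
local fake_dict = { store = {} }
function fake_dict:incr(key, step)
    if self.store[key] == nil then return nil, "not found" end
    self.store[key] = self.store[key] + step
    return self.store[key]
end
function fake_dict:set(key, value) self.store[key] = value end

-- Same selection logic as the content_by_lua_block above.
local function pick_backend(dict)
    local counter = dict:incr("count", 1)
    if not counter then
        dict:set("count", 0)
        counter = 0
    end
    return (counter % 2 == 0) and "/proxy_1730" or "/proxy_1729"
end

-- First call initializes the counter to 0, then it alternates.
print(pick_backend(fake_dict))  -- /proxy_1730
print(pick_backend(fake_dict))  -- /proxy_1729
print(pick_backend(fake_dict))  -- /proxy_1730
```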

Step 4: Start OpenResty

  1. Open Command Prompt
  2. Navigate to OpenResty folder:
cd C:\OpenResty\
  3. Start OpenResty:
start nginx

To reload the config after changes, first validate it with nginx -t, then run:

nginx -s reload

Step 5: Test with Curl or Java Client

Curl Examples

curl -s -v "http://localhost:8080/rml-engine/api/v1/render?elx.token=2f9fe6ed-ca0a-487c-b8e0-c3589b1a834f" \
-H "Content-Type: application/json" \
-d "{\"rml\":\"/ElixirSamples/Report/RML/Map.rml\",\"mimeType\":\"application/pdf\"}"

(The backslash continuations are for a Unix-style shell; in Windows Command Prompt, put the command on one line or use ^ instead.)
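The render request above exercises the pinned route (Map.rml goes to 1731). Two more checks are worth running: repeat the render call with a report that is not pinned and watch the upstream= field in logs/access.log alternate between 1730 and 1729, and hit the load-balanced login route. The login payload below is a placeholder; substitute whatever your backends actually expect:

```shell
# Placeholder credentials; the real payload format depends on your backend.
curl -s -v "http://localhost:8080/remote-access/login" \
  -H "Content-Type: application/json" \
  -d "{\"user\":\"demo\",\"password\":\"demo\"}"
```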
