
Proxies vs. Reverse Proxies | Key Differences & Guide to Setting Up Nginx as a Reverse Proxy Server

On the web, there are two main types of computer programs: servers and clients. A server’s job is to provide web content, such as HTML pages, dynamic websites like Facebook, web apps, or APIs used in other services like “Sign In with Google.” Servers handle requests and deliver appropriate responses, forming the backbone of the web. Clients, on the other hand, send requests to servers and receive responses. A client can be a web browser accessing a website, a desktop app like Discord communicating with its servers, or a command-line tool like cURL. Even using Python’s requests library to interact with a REST API involves creating an HTTP client.
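For example, running cURL in a terminal makes it the client: it opens a connection, sends an HTTP request, and prints the server’s response. The -i flag includes the response headers in the output, and example.com is just a placeholder domain:

$ curl -i https://example.com/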

A proxy, in the context of the web, is a computer that acts as a client on your behalf. It receives all the client’s HTTP requests and forwards them to the intended server. Proxies can obfuscate the client’s true origin, filter requests in networks like universities or organizations, and serve other purposes. Multiple clients can use a single proxy. A reverse proxy, however, serves as a gateway for one or more servers, handling incoming HTTP requests. It ensures each request reaches the correct backend server and that the response is forwarded to the appropriate client. Let’s configure Nginx to act as a reverse proxy for two web apps.

Nginx reverse proxy

For an introduction to Nginx configuration, you can refer to this blog post. To begin setting up an Nginx server, we’ll remove the default configuration and start with a custom server directive.

$ sudo rm /etc/nginx/sites-enabled/default
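On Debian-style layouts, a common convention is to keep the new configuration in sites-available and enable it with a symlink into sites-enabled; the file name reverse-proxy.conf below is just an example:

$ sudo touch /etc/nginx/sites-available/reverse-proxy.conf
$ sudo ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf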

Now, let’s add the configuration for a reverse proxy. By default, Nginx listens on port 80 and forwards traffic to the relevant endpoints. For HTTPS setup, complete the initial setup here, then follow the instructions in this blog post.

We have two options for reverse proxy configuration:

  1. Route-based reverse proxy: Proxy https://example.com/app1 to one server and https://example.com/app2 to another.
  2. Domain name-based proxy: Proxy https://site1.example.com to one server and https://site2.example.com to another.

Both configurations are similar from Nginx’s perspective, as long as all domain names point to the IP address Nginx is listening on. You can even combine them to run multiple websites, as described in this blog post.
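As a sketch of the second option, each domain name gets its own server block that matches the hostname and forwards to a different backend. The hostnames and backend ports below are placeholders:

server {
    listen 80;
    listen [::]:80;

    server_name site1.example.com;

    location / {
        proxy_pass http://localhost:5000;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name site2.example.com;

    location / {
        proxy_pass http://localhost:5001;
    }
}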

To create a reverse proxy, a server directive might look like this:

server {
    listen 80;
    listen [::]:80;

    server_name example.com;

    location /app1 {
        proxy_pass http://www.example.com/app1;
    }
}

The proxy_pass directive instructs Nginx to forward requests to a specified URI, retrieve the response, and send it back to the appropriate client.

You can add similar location blocks for other paths until every route you need is covered. After each change, it’s a good idea to validate the configuration and reload Nginx, as shown below.
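A quick way to do that, assuming a systemd-based distribution (and using the example.com/app1 route from above as a placeholder), is:

$ sudo nginx -t
$ sudo systemctl reload nginx
$ curl -i http://example.com/app1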

For instance, if you have a Python app called app1 listening on localhost:5000 and a Node.js app called app2 on localhost:5001, you can forward requests to them using location directives as shown below:

location /app1 {
    proxy_pass http://localhost:5000/app1;
}

location /app2 {
    proxy_pass http://localhost:5001/app2;
}

Make sure each app actually serves its routes under /app1 and /app2, respectively, so the paths Nginx forwards match what the apps expect. (If your apps expect to be served from their root path instead, Nginx can strip the prefix, as sketched below.) This approach mirrors how many web applications are developed and deployed: different teams can work on distinct microservices (/login, /dashboard) using varied languages and frameworks, and Nginx serves as the cohesive element in production. If you want a shared hosting setup where Nginx reverse proxies two entirely different websites, that’s achievable too; simply create a separate server directive for each. For further guidance, refer to this blog post.
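A minimal sketch of that alternative, using the same placeholder ports as above: when proxy_pass ends with a URI (here just /), Nginx replaces the matched location prefix with it, so a request for /app1/users reaches the backend as /users.

location /app1/ {
    proxy_pass http://localhost:5000/;
}

location /app2/ {
    proxy_pass http://localhost:5001/;
}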

Additional configuration options

Implementing a reverse proxy introduces a few considerations. Firstly, Nginx can become a bottleneck, since all traffic flows through it. In practice, however, Nginx is highly efficient and can handle substantial traffic even on modest hardware, and tuning options such as compression and buffering, outlined in the official docs, can further improve responsiveness. A notable edge case involves running analytics, such as Google Analytics, on the backend server to track unique visitors. Because Nginx terminates the client connection and opens its own connection to the backend, the backend sees only Nginx’s IP address as the source of every request, so all visitors appear to come from the same address. To resolve this, Nginx can be configured to pass the original client’s IP to the backend in a request header, restoring accurate visitor counts.

location /some/path/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:5000/some/path;
}
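If the backend also needs the original Host header or the full chain of forwarding addresses, it is common to pass those along as well. This is an optional extension of the block above, not something the setup strictly requires:

location /some/path/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://localhost:5000/some/path;
}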

Conclusion

Congratulations! You’ve grasped the basics of Nginx reverse proxy. Armed with this knowledge, you’re prepared to dive into the official documentation, set up your environment, and troubleshoot performance issues on the fly. You’re well-equipped for the journey ahead!
