
Investigating ArvanCloud Rate Limit and How to Configure It

18 Jul 2019

Rate limiting is an approach for controlling a network’s input and output traffic. For example, rate limiting can be used to specify that a user is only allowed to send 100 requests within a given time window, and to return an error once the number of requests exceeds this limit. Rate limiting is implemented to:

  • Better manage traffic flow

  • Increase security by preventing attacks such as DDoS, brute force, or other destructive application-layer attacks

Rate limiting also helps when a user mistakenly sends a request that forces the server to return a huge amount of data, increasing network overhead. Rate limiting can be used to keep such errors under control.
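As a rough illustration of the idea (not ArvanCloud’s implementation), a fixed-window counter is one of the simplest ways to enforce such a limit. The Python sketch below allows each client at most 100 requests per 60-second window and rejects the rest; the limit, window length, and client-identification details are assumptions made for the example.

    import time
    from collections import defaultdict

    LIMIT = 100   # requests allowed per window
    WINDOW = 60   # window length in seconds

    # client identifier -> [request count, window start time]
    counters = defaultdict(lambda: [0, 0.0])

    def allow_request(client_ip: str) -> bool:
        now = time.monotonic()
        count, start = counters[client_ip]
        if now - start >= WINDOW:
            # The window has expired: start a new one for this client.
            counters[client_ip] = [1, now]
            return True
        if count < LIMIT:
            counters[client_ip][0] += 1
            return True
        # Over the limit: reject, e.g. with HTTP 429 Too Many Requests.
        return False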

Various Rate Limiting Methods

Various methods and parameters can be used to configure rate limiting. Which method to use depends on the objective and on how the intended limitation should be applied. There are three common rate-limiting methods:

  • User Rate Limit: This is the most common method. It limits the number of requests a single user can send; if the user sends more requests than the specified limit (threshold), the extra requests are rejected. This continues until the developer raises the threshold or the time specified for the limitation ends.

  • Geographic Rate Limiting: In this method, settings can be applied to increase a geographic region’s security by limiting that region’s send/receive traffic during a specific time period. For example, if users in a specific region have no activity between 12 AM and 6 AM, the rate limit for that region can be set to the lowest possible value during those hours to reduce the chance of attacks or harmful activity.

  • Server Rate Limiting: If the developer runs several servers to manage different parts of an application, rate limiting can be applied at the server level. This makes it possible to lower the traffic limit on one server while raising it on another. A short sketch of how the three methods differ in practice follows this list.
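The same counting logic shown earlier can serve any of the three methods; what changes is the key that requests are grouped by. The attribute names on the request object below are hypothetical and only illustrate the difference.

    def user_key(request):
        # User Rate Limit: group requests by the client's IP address.
        return request.client_ip

    def geo_key(request):
        # Geographic Rate Limiting: group requests by the client's region.
        return request.country_code

    def server_key(request):
        # Server Rate Limiting: group requests by the server handling them.
        return request.upstream_server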

Parameters for Calculating Requests per Second

The number of requests per second is derived from the number of users (unique IPs, in this case) during an hour and the number of simultaneous users within a specific time window. The number of HTTP requests an application receives per second depends on the following parameters:

  • (W) Total Test Time or “Test Window”

  • (J) Time for a Single Journey or “Journey Window”

  • (Y) Number of Users Active in a “Test Window”

  • (U) Number of Simultaneous Users in a “Journey Window”

  • (S) Number of Steps in a Journey

  • (R) Number of HTTP Requests in Each Journey Step

A “journey” is the sequence of steps an HTTP client takes when interacting with an HTTP server. Each step can produce one or several HTTP requests. For example, to load a webpage, the user sends a separate request to the server for each resource that is not already cached, such as CSS files, pictures, JS, etc.
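The article does not spell out an explicit formula, but under the common load-testing assumption that the Y active users each complete journeys of S steps with R requests per step, spread across the test window W, the average request rate and the concurrency can be estimated roughly as follows (all numbers below are made-up examples):

    W = 3600   # test window: one hour, in seconds
    J = 120    # journey window: one journey takes two minutes
    Y = 6000   # users active during the test window
    S = 5      # steps per journey
    R = 10     # HTTP requests per step

    rps = Y * S * R / W   # average requests per second over the test window
    u = Y * J / W         # approximate simultaneous users in a journey window

    print(f"~{rps:.0f} requests/s with ~{u:.0f} concurrent users")
    # ~83 requests/s with ~200 concurrent users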

Implementing Rate Limit on the Main Server Side

Rate limiting can be implemented on the server side with programming languages, or through caching mechanisms. The following two examples show how to implement rate limiting on Nginx and Apache servers.

  • Nginx

With Nginx, you can use the ngx_http_limit_req_module module on the origin server. For example, the following directives could be added to the Nginx configuration file to apply a rate limit based on the user’s IP.

http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;
    ...
    server {
        ...
        location /promotion/ {
            limit_req zone=one burst=5;
        }
    }
}

The code snippet above specifies that the average number of requests per second per IP should not exceed 2, and that if requests are sent in a burst, at most 5 excess requests are queued; requests beyond that are rejected.
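One way to observe the limit from the client side, assuming the configuration above is live and https://example.com/promotion/ is the protected path (a hypothetical URL), is to fire a quick series of requests and count how many are rejected. Nginx answers rejected requests with 503 by default (adjustable with limit_req_status).

    import requests

    url = "https://example.com/promotion/"  # hypothetical protected path
    codes = [requests.get(url).status_code for _ in range(20)]
    print(codes.count(200), "accepted,", codes.count(503), "rejected")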

  • Apache

Configuring a rate limit in Apache is similar to Nginx. The mod_ratelimit module is used to limit client bandwidth in Apache; the limit is applied to each HTTP response.

<Location "/promotion">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
    SetEnv rate-initial-burst 512
</Location>

The values specified in the code snippet above are in KiB: the bandwidth is capped at 400 KiB/s, and an initial burst of 512 KiB is delivered at full speed before the limit is applied.
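A quick client-side check of this cap, assuming the location above serves a reasonably large file at a hypothetical URL: once the 512 KiB initial burst is consumed, the measured throughput should settle around 400 KiB/s.

    import time
    import requests

    url = "https://example.com/promotion/big-file.bin"  # hypothetical file
    start = time.monotonic()
    total = 0
    with requests.get(url, stream=True) as resp:
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            total += len(chunk)
    elapsed = time.monotonic() - start
    print(f"{total / 1024 / elapsed:.0f} KiB/s average")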

Calculating the Correct Request Limit Value with the ArvanCloud Panel

To obtain a suitable value for the request limit, go to “Content Distribution Network (CDN)”, then “Traffic Analysis”, and set the report period to one month from the menu on the left side of this screen.

Then go to the requests section at the bottom of the page, which shows the total number of requests to your server over the last month.

Divide this value by the number of days in the month (30 or 31), then by the number of hours in a day (24), then by 60 minutes, and finally by 60 seconds to obtain the approximate number of requests per second.

The value to specify in the limit settings should be slightly higher than the value obtained.
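For example, with a hypothetical monthly total of 250 million requests read from the Traffic Analysis page, the calculation looks like this:

    monthly_requests = 250_000_000  # example value read from the panel
    per_second = monthly_requests / 30 / 24 / 60 / 60
    print(f"~{per_second:.0f} requests/s")  # ~96 requests/s

    # Pick a limit slightly above the average, e.g. with some headroom:
    suggested_limit = round(per_second * 1.2)
    print(suggested_limit)  # ~116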

Rate Limiting Settings in the ArvanCloud Panel

In the limiting section of the ArvanCloud panel, you can limit the number of requests per second and the total number of connections from a single IP.

For example, if the request limit is set to 500 requests per second per IP, ArvanCloud responds to the first 500 requests from each IP without delay. Beyond 500 requests per second, an additional 20 percent of the configured value, i.e. 100 requests per second, is placed in a queue and then responded to. Requests in excess of this total (600 per second) are not responded to.
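The behaviour described above can be summarised with a small illustration (not ArvanCloud’s actual implementation): requests up to the limit are served, the next 20 percent are queued, and the rest are dropped.

    LIMIT = 500
    QUEUE = int(LIMIT * 0.2)  # 100 extra requests held in a queue

    def classify(requests_this_second: int) -> dict:
        served = min(requests_this_second, LIMIT)
        queued = min(max(requests_this_second - LIMIT, 0), QUEUE)
        dropped = max(requests_this_second - LIMIT - QUEUE, 0)
        return {"served": served, "queued": queued, "dropped": dropped}

    print(classify(650))  # {'served': 500, 'queued': 100, 'dropped': 50}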

You can also whitelist IPs in the limitation section so that no restrictions are applied to them.
