release: bump version to 0.3.0
- Refactor Redis backend connection handling and pool management
- Update algorithm implementations with improved type annotations
- Enhance config loader validation with stricter Pydantic schemas
- Improve decorator and middleware error handling
- Expand example scripts with better docstrings and usage patterns
- Add new 00_basic_usage.py example for quick start
- Reorganize examples directory structure
- Fix type annotation inconsistencies across core modules
- Update dependencies in pyproject.toml
211
docs/api/algorithms.rst
Normal file
@@ -0,0 +1,211 @@
Algorithms API
==============

Rate limiting algorithms and the factory function to create them.

Algorithm Enum
--------------

.. py:class:: Algorithm

   Enumeration of available rate limiting algorithms.

   .. py:attribute:: TOKEN_BUCKET
      :value: "token_bucket"

      Token bucket algorithm. Allows bursts up to bucket capacity, then refills
      at a steady rate.

   .. py:attribute:: SLIDING_WINDOW
      :value: "sliding_window"

      Sliding window log algorithm. Tracks exact timestamps for precise limiting.
      Higher memory usage.

   .. py:attribute:: FIXED_WINDOW
      :value: "fixed_window"

      Fixed window algorithm. Simple time-based windows. Efficient, but can
      allow bursts at window boundaries.

   .. py:attribute:: LEAKY_BUCKET
      :value: "leaky_bucket"

      Leaky bucket algorithm. Smooths out the request rate for consistent throughput.

   .. py:attribute:: SLIDING_WINDOW_COUNTER
      :value: "sliding_window_counter"

      Sliding window counter algorithm. Balances precision and efficiency.
      This is the default.

**Usage:**

.. code-block:: python

   from fastapi import Request
   from fastapi_traffic import Algorithm, rate_limit

   @rate_limit(100, 60, algorithm=Algorithm.TOKEN_BUCKET)
   async def endpoint(request: Request):
       return {"status": "ok"}

BaseAlgorithm
-------------

.. py:class:: BaseAlgorithm(limit, window_size, backend, *, burst_size=None)

   Abstract base class for rate limiting algorithms.

   :param limit: Maximum requests allowed in the window.
   :type limit: int
   :param window_size: Time window in seconds.
   :type window_size: float
   :param backend: Storage backend for rate limit state.
   :type backend: Backend
   :param burst_size: Maximum burst size. Defaults to ``limit``.
   :type burst_size: int | None

   .. py:method:: check(key)
      :async:

      Check whether a request is allowed and update state.

      :param key: The rate limit key.
      :type key: str
      :returns: Tuple of ``(allowed, RateLimitInfo)``.
      :rtype: tuple[bool, RateLimitInfo]

   .. py:method:: reset(key)
      :async:

      Reset the rate limit state for a key.

      :param key: The rate limit key.
      :type key: str

   .. py:method:: get_state(key)
      :async:

      Get the current state without consuming a token.

      :param key: The rate limit key.
      :type key: str
      :returns: Current rate limit info or ``None``.
      :rtype: RateLimitInfo | None
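The interface above can be mirrored as a small abstract base class. This is an illustrative sketch only, not the library's source; the class name and the simplified return types are assumptions for the example.

.. code-block:: python

   import abc

   class BaseAlgorithmSketch(abc.ABC):
       """Illustrative mirror of the documented interface (not the library source)."""

       def __init__(self, limit: int, window_size: float, backend=None, *,
                    burst_size: int | None = None):
           self.limit = limit
           self.window_size = window_size
           self.backend = backend
           # As documented, burst_size defaults to limit when not given.
           self.burst_size = burst_size if burst_size is not None else limit

       @abc.abstractmethod
       async def check(self, key: str):
           """Return (allowed, info) and update stored state."""

       @abc.abstractmethod
       async def reset(self, key: str) -> None:
           """Clear stored state for the key."""

       @abc.abstractmethod
       async def get_state(self, key: str):
           """Return current info without consuming a token."""

Concrete algorithms then only have to supply the three async methods.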
TokenBucketAlgorithm
--------------------

.. py:class:: TokenBucketAlgorithm(limit, window_size, backend, *, burst_size=None)

   Token bucket algorithm implementation.

   Tokens are added to the bucket at a rate of ``limit / window_size`` per second.
   Each request consumes one token. If no tokens are available, the request is
   rejected.

   The ``burst_size`` parameter controls the maximum bucket capacity, allowing
   short bursts of traffic.

   **State stored:**

   - ``tokens``: Current number of tokens in the bucket
   - ``last_update``: Timestamp of the last update
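The refill-and-consume logic can be sketched in plain Python. This is a standalone illustrative model of the ``tokens`` / ``last_update`` state described above, not the library's actual implementation:

.. code-block:: python

   import time

   class TokenBucket:
       """Illustrative token bucket: refills at limit/window_size tokens per second."""

       def __init__(self, limit: int, window_size: float, burst_size: int | None = None):
           self.rate = limit / window_size              # tokens added per second
           self.capacity = burst_size if burst_size is not None else limit
           self.tokens = float(self.capacity)           # start full: bursts pass immediately
           self.last_update = time.monotonic()

       def check(self) -> bool:
           now = time.monotonic()
           # Refill proportionally to elapsed time, capped at capacity.
           self.tokens = min(self.capacity,
                             self.tokens + (now - self.last_update) * self.rate)
           self.last_update = now
           if self.tokens >= 1:
               self.tokens -= 1                         # consume one token for this request
               return True
           return False

With ``limit=2, window_size=60``, two back-to-back requests drain the bucket and a third is rejected until roughly 30 seconds of refill have accrued.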
SlidingWindowAlgorithm
----------------------

.. py:class:: SlidingWindowAlgorithm(limit, window_size, backend, *, burst_size=None)

   Sliding window log algorithm implementation.

   Stores the timestamp of every request within the window. Provides the most
   accurate rate limiting but uses more memory.

   **State stored:**

   - ``timestamps``: List of request timestamps within the window
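The log-based approach can be sketched as follows; an illustrative standalone model (the explicit ``now`` parameter is for demonstration), not the library's code:

.. code-block:: python

   import time
   from collections import deque

   class SlidingWindowLog:
       """Illustrative sliding window log: one stored timestamp per request."""

       def __init__(self, limit: int, window_size: float):
           self.limit = limit
           self.window_size = window_size
           self.timestamps: deque[float] = deque()

       def check(self, now: float | None = None) -> bool:
           now = time.monotonic() if now is None else now
           # Evict timestamps that have slid out of the window.
           while self.timestamps and self.timestamps[0] <= now - self.window_size:
               self.timestamps.popleft()
           if len(self.timestamps) < self.limit:
               self.timestamps.append(now)
               return True
           return False

Memory grows with ``limit`` (one timestamp per allowed request), which is the cost of the exact accounting.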
FixedWindowAlgorithm
--------------------

.. py:class:: FixedWindowAlgorithm(limit, window_size, backend, *, burst_size=None)

   Fixed window algorithm implementation.

   Divides time into fixed windows and counts requests in each window. Simple
   and efficient, but allows up to 2x the limit at window boundaries.

   **State stored:**

   - ``count``: Number of requests in current window
   - ``window_start``: Start timestamp of current window
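A standalone sketch of the ``count`` / ``window_start`` state, including the boundary effect mentioned above (illustrative only, not the library's implementation):

.. code-block:: python

   class FixedWindow:
       """Illustrative fixed window counter keyed by the window's start time."""

       def __init__(self, limit: int, window_size: float):
           self.limit = limit
           self.window_size = window_size
           self.window_start = 0.0
           self.count = 0

       def check(self, now: float) -> bool:
           window_start = (now // self.window_size) * self.window_size
           if window_start != self.window_start:
               # Entered a new window: reset the counter.
               self.window_start = window_start
               self.count = 0
           if self.count < self.limit:
               self.count += 1
               return True
           return False

With ``limit=2, window_size=10``, requests at t=8, 9, 10, and 10.5 all pass: two at the end of one window and two at the start of the next, i.e. up to 2x the limit across the boundary.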
LeakyBucketAlgorithm
--------------------

.. py:class:: LeakyBucketAlgorithm(limit, window_size, backend, *, burst_size=None)

   Leaky bucket algorithm implementation.

   Requests fill a bucket that "leaks" at a constant rate. Smooths out traffic
   for consistent throughput.

   **State stored:**

   - ``water_level``: Current water level in the bucket
   - ``last_update``: Timestamp of last update
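The ``water_level`` / ``last_update`` state can be modeled like this; an illustrative sketch assuming the bucket drains at ``limit / window_size`` units per second, not the library's code:

.. code-block:: python

   class LeakyBucket:
       """Illustrative leaky bucket: each request adds water, which drains steadily."""

       def __init__(self, limit: int, window_size: float):
           self.capacity = limit
           self.leak_rate = limit / window_size         # units drained per second
           self.water_level = 0.0
           self.last_update = 0.0

       def check(self, now: float) -> bool:
           # Drain based on elapsed time, never below empty.
           elapsed = now - self.last_update
           self.water_level = max(0.0, self.water_level - elapsed * self.leak_rate)
           self.last_update = now
           if self.water_level + 1 <= self.capacity:
               self.water_level += 1                    # this request adds one unit
               return True
           return False

Unlike the token bucket, a full bucket rejects bursts outright, so admitted traffic drains out at a near-constant rate.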
SlidingWindowCounterAlgorithm
-----------------------------

.. py:class:: SlidingWindowCounterAlgorithm(limit, window_size, backend, *, burst_size=None)

   Sliding window counter algorithm implementation.

   Maintains counters for current and previous windows, calculating a weighted
   average based on window progress. Balances precision and memory efficiency.

   **State stored:**

   - ``prev_count``: Count from previous window
   - ``curr_count``: Count in current window
   - ``current_window``: Start timestamp of current window
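The weighted average typically takes the form ``prev_count * (1 - progress) + curr_count``, where ``progress`` is how far into the current window the request falls. A standalone sketch of this estimate over the three documented counters (illustrative only; the stale-window handling here is an assumption, not the library's code):

.. code-block:: python

   class SlidingWindowCounter:
       """Illustrative sliding window counter: weights the previous window's count."""

       def __init__(self, limit: int, window_size: float):
           self.limit = limit
           self.window_size = window_size
           self.current_window = 0.0
           self.prev_count = 0
           self.curr_count = 0

       def check(self, now: float) -> bool:
           window = (now // self.window_size) * self.window_size
           if window != self.current_window:
               # Roll windows; if more than one window has passed, the old count is stale.
               adjacent = (window - self.current_window) == self.window_size
               self.prev_count = self.curr_count if adjacent else 0
               self.curr_count = 0
               self.current_window = window
           progress = (now - self.current_window) / self.window_size   # 0.0 .. 1.0
           estimated = self.prev_count * (1 - progress) + self.curr_count
           if estimated < self.limit:
               self.curr_count += 1
               return True
           return False

Only two integers and one timestamp are stored per key, yet the previous window still dampens bursts at boundaries, which is the precision/memory trade-off the class description refers to.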
get_algorithm
-------------

.. py:function:: get_algorithm(algorithm, limit, window_size, backend, *, burst_size=None)

   Factory function to create algorithm instances.

   :param algorithm: The algorithm type to create.
   :type algorithm: Algorithm
   :param limit: Maximum requests allowed.
   :type limit: int
   :param window_size: Time window in seconds.
   :type window_size: float
   :param backend: Storage backend.
   :type backend: Backend
   :param burst_size: Maximum burst size.
   :type burst_size: int | None
   :returns: An algorithm instance.
   :rtype: BaseAlgorithm

**Usage:**

.. code-block:: python

   from fastapi_traffic.core.algorithms import get_algorithm, Algorithm
   from fastapi_traffic import MemoryBackend

   backend = MemoryBackend()
   algorithm = get_algorithm(
       Algorithm.TOKEN_BUCKET,
       limit=100,
       window_size=60,
       backend=backend,
       burst_size=20,
   )

   allowed, info = await algorithm.check("user:123")