release: bump version to 0.3.0
- Refactor Redis backend connection handling and pool management
- Update algorithm implementations with improved type annotations
- Enhance config loader validation with stricter Pydantic schemas
- Improve decorator and middleware error handling
- Expand example scripts with better docstrings and usage patterns
- Add new 00_basic_usage.py example for quick start
- Reorganize examples directory structure
- Fix type annotation inconsistencies across core modules
- Update dependencies in pyproject.toml
105
docs/getting-started/installation.rst
Normal file
@@ -0,0 +1,105 @@
Installation
============

FastAPI Traffic supports Python 3.10 and above. You can install it using pip, uv, or
any other Python package manager.

Basic Installation
------------------

The basic installation includes the memory backend, which is perfect for development
and single-process applications:

.. tab-set::

   .. tab-item:: pip

      .. code-block:: bash

         pip install git+https://gitlab.com/zanewalker/fastapi-traffic.git

   .. tab-item:: uv

      .. code-block:: bash

         uv add git+https://gitlab.com/zanewalker/fastapi-traffic.git

   .. tab-item:: poetry

      .. code-block:: bash

         poetry add git+https://gitlab.com/zanewalker/fastapi-traffic.git

With Redis Support
------------------

If you're running a distributed system with multiple application instances, you'll
want the Redis backend:

.. tab-set::

   .. tab-item:: pip

      .. code-block:: bash

         pip install "git+https://gitlab.com/zanewalker/fastapi-traffic.git[redis]"

   .. tab-item:: uv

      .. code-block:: bash

         uv add "git+https://gitlab.com/zanewalker/fastapi-traffic.git[redis]"

Everything
----------

Want it all? Install with the ``all`` extra:

.. code-block:: bash

   pip install "git+https://gitlab.com/zanewalker/fastapi-traffic.git[all]"

This includes Redis support and ensures FastAPI is installed as well.

Dependencies
------------

FastAPI Traffic has minimal dependencies:

- **pydantic** (>=2.0) — For configuration validation
- **starlette** (>=0.27.0) — The ASGI framework that FastAPI is built on

Optional dependencies:

- **redis** (>=5.0.0) — Required for the Redis backend
- **fastapi** (>=0.100.0) — While not strictly required (we work with Starlette directly),
  you probably want this
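The extras above map to optional dependency groups. As a rough sketch, a
``pyproject.toml`` declaring them might look like this (hypothetical; the
project's actual file may differ, and the version floors are taken from the
list above):

.. code-block:: toml

   [project]
   dependencies = [
       "pydantic>=2.0",
       "starlette>=0.27.0",
   ]

   [project.optional-dependencies]
   redis = ["redis>=5.0.0"]
   all = ["redis>=5.0.0", "fastapi>=0.100.0"]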
Verifying the Installation
--------------------------

After installation, you can verify everything is working:

.. code-block:: python

   import fastapi_traffic

   print(fastapi_traffic.__version__)
   # Should print: 0.3.0

Or check which backends are available:

.. code-block:: python

   from fastapi_traffic import MemoryBackend, SQLiteBackend

   print("Memory and SQLite backends available!")

   try:
       from fastapi_traffic import RedisBackend
       print("Redis backend available!")
   except ImportError:
       print("Redis backend not installed (install with [redis] extra)")

What's Next?
------------

Head over to the :doc:`quickstart` guide to start rate limiting your endpoints.
220
docs/getting-started/quickstart.rst
Normal file
@@ -0,0 +1,220 @@
Quickstart
==========

Let's get rate limiting working in your FastAPI app. This guide covers the basics —
you'll have something running in under five minutes.

Your First Rate Limit
---------------------

The simplest way to add rate limiting is with the ``@rate_limit`` decorator:

.. code-block:: python

   from fastapi import FastAPI, Request

   from fastapi_traffic import rate_limit

   app = FastAPI()

   @app.get("/api/hello")
   @rate_limit(10, 60)  # 10 requests per 60 seconds
   async def hello(request: Request):
       return {"message": "Hello, World!"}

That's the whole thing. Let's break down what's happening:

1. The decorator takes two arguments: ``limit`` (max requests) and ``window_size`` (in seconds)
2. Each client is identified by their IP address by default
3. When a client exceeds the limit, they get a 429 response with a ``Retry-After`` header

.. note::

   The ``request: Request`` parameter is required. FastAPI Traffic needs access to the
   request to identify the client and track their usage.

Testing It Out
--------------

Fire up your app and hit the endpoint a few times:

.. code-block:: bash

   # Start your app
   uvicorn main:app --reload

   # In another terminal, make some requests
   curl -i http://localhost:8000/api/hello

You'll see headers like these in the response:

.. code-block:: http

   HTTP/1.1 200 OK
   X-RateLimit-Limit: 10
   X-RateLimit-Remaining: 9
   X-RateLimit-Reset: 1709834400

After 10 requests, you'll get:

.. code-block:: http

   HTTP/1.1 429 Too Many Requests
   Retry-After: 45
   X-RateLimit-Limit: 10
   X-RateLimit-Remaining: 0
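On the client side, the ``Retry-After`` header tells you how long to back off
before retrying. Here's a minimal, hypothetical helper for parsing it
(``retry_after_seconds`` is not part of FastAPI Traffic); a real client would
sleep for this many seconds and then retry:

```python
def retry_after_seconds(headers: dict, default: float = 1.0) -> float:
    """Seconds a client should wait before retrying, per Retry-After.

    Falls back to `default` when the header is missing or malformed.
    (Retry-After may also carry an HTTP-date; this sketch only handles
    the delay-seconds form shown in the response above.)
    """
    value = headers.get("Retry-After")
    if value is None:
        return default
    try:
        return max(0.0, float(value))
    except ValueError:
        return default


# With the 429 response above (Retry-After: 45), the client waits 45 seconds:
print(retry_after_seconds({"Retry-After": "45"}))  # 45.0
print(retry_after_seconds({}))                     # 1.0 (default)
```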
Choosing an Algorithm
---------------------

Different situations call for different rate limiting strategies. Here's a quick guide:

.. code-block:: python

   from fastapi_traffic import rate_limit, Algorithm

   # Token Bucket - great for APIs that need burst handling
   # Allows short bursts of traffic, then smooths out
   @app.get("/api/burst-friendly")
   @rate_limit(100, 60, algorithm=Algorithm.TOKEN_BUCKET, burst_size=20)
   async def burst_endpoint(request: Request):
       return {"status": "ok"}

   # Sliding Window - most accurate, but uses more memory
   # Perfect when you need precise rate limiting
   @app.get("/api/precise")
   @rate_limit(100, 60, algorithm=Algorithm.SLIDING_WINDOW)
   async def precise_endpoint(request: Request):
       return {"status": "ok"}

   # Fixed Window - simple and efficient
   # Good for most use cases, slight edge case at window boundaries
   @app.get("/api/simple")
   @rate_limit(100, 60, algorithm=Algorithm.FIXED_WINDOW)
   async def simple_endpoint(request: Request):
       return {"status": "ok"}

See :doc:`/user-guide/algorithms` for a deep dive into each algorithm.
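That window-boundary edge case is worth seeing concretely. A self-contained toy
counter (not the library's implementation) shows how a fixed window can admit up
to twice the limit in a short span straddling a boundary:

```python
import math


class FixedWindowCounter:
    """Toy fixed-window counter: buckets requests by window index."""

    def __init__(self, limit: int, window_size: float) -> None:
        self.limit = limit
        self.window_size = window_size
        self.counts: dict = {}

    def allow(self, now: float) -> bool:
        window = math.floor(now / self.window_size)
        count = self.counts.get(window, 0)
        if count >= self.limit:
            return False
        self.counts[window] = count + 1
        return True


counter = FixedWindowCounter(limit=100, window_size=60)
# 150 attempts just before the boundary (t=59.5s) land in window 0...
late = sum(counter.allow(59.5) for _ in range(150))
# ...and 150 more just after it (t=60.5s) land in window 1:
early = sum(counter.allow(60.5) for _ in range(150))
print(late, early)  # 100 100 -> 200 requests accepted within one second
```

This is exactly the pattern the sliding window algorithm is designed to avoid,
at the cost of tracking more state.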
Rate Limiting by API Key
------------------------

IP-based limiting is fine for public endpoints, but for authenticated APIs you
probably want to limit by API key:

.. code-block:: python

   def get_api_key(request: Request) -> str:
       """Extract API key from header, fall back to IP."""
       api_key = request.headers.get("X-API-Key")
       if api_key:
           return f"key:{api_key}"
       # Fall back to IP for unauthenticated requests
       return request.client.host if request.client else "unknown"

   @app.get("/api/data")
   @rate_limit(1000, 3600, key_extractor=get_api_key)  # 1000/hour per API key
   async def get_data(request: Request):
       return {"data": "sensitive stuff"}

Global Rate Limiting with Middleware
------------------------------------

Sometimes you want a blanket rate limit across your entire API. That's what
middleware is for:

.. code-block:: python

   from fastapi_traffic.middleware import RateLimitMiddleware

   app = FastAPI()

   app.add_middleware(
       RateLimitMiddleware,
       limit=1000,
       window_size=60,
       exempt_paths={"/health", "/docs", "/openapi.json"},
   )

   # All endpoints now have a shared 1000 req/min limit
   @app.get("/api/users")
   async def get_users():
       return {"users": []}

   @app.get("/api/posts")
   async def get_posts():
       return {"posts": []}

Using a Persistent Backend
--------------------------

The default memory backend works great for development, but it doesn't survive
restarts and doesn't work across multiple processes. For production, use SQLite
or Redis:

**SQLite** — Good for single-node deployments:

.. code-block:: python

   from fastapi_traffic import RateLimiter, SQLiteBackend
   from fastapi_traffic.core.limiter import set_limiter

   # Set up persistent storage
   backend = SQLiteBackend("rate_limits.db")
   limiter = RateLimiter(backend)
   set_limiter(limiter)

   @app.on_event("startup")
   async def startup():
       await limiter.initialize()

   @app.on_event("shutdown")
   async def shutdown():
       await limiter.close()

**Redis** — Required for distributed systems:

.. code-block:: python

   from fastapi_traffic import RateLimiter
   from fastapi_traffic.backends.redis import RedisBackend
   from fastapi_traffic.core.limiter import set_limiter

   @app.on_event("startup")
   async def startup():
       backend = await RedisBackend.from_url("redis://localhost:6379/0")
       limiter = RateLimiter(backend)
       set_limiter(limiter)
       await limiter.initialize()
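Note that ``@app.on_event`` is deprecated in recent FastAPI releases in favor of
lifespan handlers. The same Redis wiring can be expressed that way; a sketch,
assuming the ``RedisBackend`` and ``RateLimiter`` APIs shown above:

.. code-block:: python

   from contextlib import asynccontextmanager

   from fastapi import FastAPI

   from fastapi_traffic import RateLimiter
   from fastapi_traffic.backends.redis import RedisBackend
   from fastapi_traffic.core.limiter import set_limiter

   @asynccontextmanager
   async def lifespan(app: FastAPI):
       # Startup: connect the backend and register the limiter
       backend = await RedisBackend.from_url("redis://localhost:6379/0")
       limiter = RateLimiter(backend)
       set_limiter(limiter)
       await limiter.initialize()
       yield
       # Shutdown: release connections
       await limiter.close()

   app = FastAPI(lifespan=lifespan)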
Handling Rate Limit Errors
--------------------------

By default, exceeding the rate limit raises a ``RateLimitExceeded`` exception that
returns a 429 response. You can customize this:

.. code-block:: python

   from fastapi import Request
   from fastapi.responses import JSONResponse

   from fastapi_traffic import RateLimitExceeded

   @app.exception_handler(RateLimitExceeded)
   async def rate_limit_handler(request: Request, exc: RateLimitExceeded):
       return JSONResponse(
           status_code=429,
           content={
               "error": "slow_down",
               "message": "You're making too many requests. Take a breather.",
               "retry_after": exc.retry_after,
           },
       )

What's Next?
------------

You've got the basics down. Here's where to go from here:

- :doc:`/user-guide/algorithms` — Understand when to use each algorithm
- :doc:`/user-guide/backends` — Learn about storage options
- :doc:`/user-guide/key-extractors` — Advanced client identification
- :doc:`/user-guide/configuration` — Load settings from files and environment variables