This snippet demonstrates a robust retry mechanism for HTTP requests using exponential backoff, which is useful for gracefully handling transient network failures or rate limits.
```python
import requests
import time
from requests.exceptions import RequestException


def fetch_with_retry(
    url,
    max_retries=5,
    initial_delay=1,
    backoff_factor=2,
    timeout=10,
    **kwargs
):
    """
    Make an HTTP GET request with exponential backoff retries.

    Args:
        url (str): Target URL
        max_retries (int): Maximum number of attempts before giving up
        initial_delay (float): Initial delay in seconds
        backoff_factor (float): Multiplier applied to the delay after each failure
        timeout (float): Request timeout in seconds
        **kwargs: Additional arguments passed through to requests.get()

    Returns:
        requests.Response or None: Response if successful, None after max retries
    """
    retry_count = 0
    delay = initial_delay
    while retry_count < max_retries:
        try:
            response = requests.get(url, timeout=timeout, **kwargs)
            response.raise_for_status()  # Raises HTTPError for 4XX/5XX responses
            return response
        except RequestException as e:
            print(f"Attempt {retry_count + 1} failed: {e}")
            if retry_count == max_retries - 1:
                break  # Final attempt failed; stop retrying
            print(f"Retrying in {delay:.1f} seconds...")
            time.sleep(delay)
            delay *= backoff_factor
            retry_count += 1
    print(f"Max retries ({max_retries}) exceeded")
    return None


# Example usage
response = fetch_with_retry(
    "https://api.example.com/data",
    headers={"User-Agent": "MyApp/1.0"},
    max_retries=3
)

if response:
    print("Request successful!")
    print(response.json())
else:
    print("Failed to fetch data")
```
This code solves the common problem of transient network failures by catching `requests`' built-in exceptions and waiting progressively longer between attempts instead of failing on the first error.

To use it, call `fetch_with_retry(url)` with your target URL and tune the behavior as needed:

- `max_retries`: How many attempts to make before giving up (default: 5)
- `initial_delay`: Starting wait time (default: 1s)
- `backoff_factor`: Multiplier for delay growth (default: 2x)
- Any additional `requests.get()` parameters (headers, auth, etc.) can be passed as keyword arguments

This pattern is especially valuable for rate-limited APIs and unreliable network conditions.
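To get a feel for how these parameters interact, here is a minimal sketch that prints the delay schedule a given configuration would produce; the specific values below are hypothetical, not defaults from the snippet above:

```python
# Hypothetical tuning values; adjust to your own needs.
initial_delay = 0.5
backoff_factor = 2
max_retries = 4

# Mirror the loop in fetch_with_retry: a delay occurs after each
# failed attempt except the last one.
delay = initial_delay
for attempt in range(1, max_retries):
    print(f"After failed attempt {attempt}: wait {delay:.1f}s")
    delay *= backoff_factor
# Prints waits of 0.5s, 1.0s, and 2.0s before attempts 2-4.
```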
The exponential backoff prevents overwhelming servers during outages while maximizing the chance of eventual success.
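A common refinement, not included in the snippet above, is to add random jitter to each delay so that many clients recovering from the same outage do not retry in lockstep. A minimal sketch of "full jitter" layered on the same schedule:

```python
import random
import time


def sleep_with_jitter(delay):
    # Sleep a random duration in [0, delay] ("full jitter"),
    # spreading retries from many clients across the window
    # instead of having them all fire at the same instant.
    time.sleep(random.uniform(0, delay))


# Drop-in replacement for the time.sleep(delay) call inside
# fetch_with_retry: sleep_with_jitter(delay)
```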