# Asynchronous HTTP Requests with aiohttp

This snippet demonstrates how to make concurrent HTTP requests using `aiohttp`, which is useful for improving performance when fetching data from multiple URLs.

```python
import aiohttp
import asyncio


async def fetch_url(session, url):
    """Fetch a single URL and return its body as text, or None on error."""
    try:
        async with session.get(url) as response:
            return await response.text()
    except Exception as e:
        print(f"Error fetching {url}: {e}")
        return None


async def fetch_multiple_urls(urls):
    """Fetch all URLs concurrently over a single shared session."""
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_url(session, url) for url in urls]
        return await asyncio.gather(*tasks)


async def main():
    urls = [
        "https://api.github.com",
        "https://httpbin.org/get",
        "https://jsonplaceholder.typicode.com/posts/1",
    ]
    results = await fetch_multiple_urls(urls)
    for url, content in zip(urls, results):
        print(f"Fetched from {url}, length: {len(content) if content else 'Error'}")


if __name__ == "__main__":
    asyncio.run(main())
```

### Explanation

1. **Purpose**:
   - The code fetches content from multiple URLs concurrently using `asyncio` and `aiohttp`.
   - It avoids the bottleneck of sequential requests: total execution time is roughly the duration of the slowest request rather than the sum of all of them.

2. **Key Components**:
   - `fetch_url()`: Makes an asynchronous HTTP GET request and returns the response body as text, or `None` if the request fails.
   - `fetch_multiple_urls()`: Creates a single `ClientSession` shared by all requests and runs them concurrently with `asyncio.gather()`.
   - `main()`: Defines the URLs to fetch, calls `fetch_multiple_urls()`, and prints the length of each result.

3. **How to Run**:
   - Install the dependency:
     ```bash
     pip install aiohttp
     ```
   - Save the snippet as `async_http.py` and run:
     ```bash
     python async_http.py
     ```

4. **Use Cases**:
   - Scraping multiple web pages.
   - Aggregating data from several API endpoints.
   - I/O-bound applications where waiting on the network dominates execution time.

This snippet requires Python 3.7+ (for `asyncio.run()`) and leverages modern asynchronous programming for efficiency.
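
When the endpoints return JSON (as in the API-aggregation use case above), it can be more convenient to check the status code and decode the body in one place instead of returning raw text. The sketch below is a hypothetical variant of `fetch_url`, not part of the original snippet; `raise_for_status()` and `response.json()` are standard `aiohttp` calls, and the policy of returning `None` on failure simply mirrors the original.

```python
async def fetch_json(session, url):
    """GET a URL and return its decoded JSON body, or None on any failure."""
    try:
        async with session.get(url) as response:
            response.raise_for_status()   # turn 4xx/5xx responses into exceptions
            return await response.json()  # decode the body as JSON
    except Exception as e:
        print(f"Error fetching {url}: {e}")
        return None
```

A caller would use it exactly like `fetch_url`, for example `tasks = [fetch_json(session, url) for url in urls]` inside `fetch_multiple_urls()`.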
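
For larger URL lists it is usually worth capping how many requests are in flight at once and giving each request a timeout, so one slow or unresponsive server cannot stall the whole batch. The following is a minimal sketch of one way to do that with `asyncio.Semaphore` and `aiohttp.ClientTimeout`; the concurrency limit of 10 and the 10-second timeout are illustrative values, not recommendations from the original snippet.

```python
import aiohttp
import asyncio


async def fetch_url_limited(session, semaphore, url):
    """Fetch one URL while holding a slot in the semaphore."""
    async with semaphore:  # at most `limit` requests run concurrently
        try:
            async with session.get(url) as response:
                return await response.text()
        except Exception as e:
            print(f"Error fetching {url}: {e}")
            return None


async def fetch_with_limit(urls, limit=10):
    """Fetch all URLs concurrently, capped at `limit` simultaneous requests."""
    semaphore = asyncio.Semaphore(limit)
    # ClientTimeout bounds each request; 10 seconds total is an illustrative value.
    timeout = aiohttp.ClientTimeout(total=10)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        tasks = [fetch_url_limited(session, semaphore, url) for url in urls]
        return await asyncio.gather(*tasks)
```

With the timeout set on the session, a hanging request raises a timeout error inside `fetch_url_limited`, which the broad `except` block converts into a `None` result just like any other failure.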