Blazeio vs. FastAPI: A Performance Benchmark

2 points by anonyxbiz a day ago

Blazeio is a custom Python framework that has been benchmarked with ApacheBench, and the results suggest that Blazeio outperforms FastAPI in requests per second, mean time per request, and transfer rate across varying concurrency levels, which supports its description as an ultra-fast asynchronous web framework. Note that these results come from a Colab instance (v2-8 TPU) running both the servers and ApacheBench, all under the same conditions.

Key performance metrics for Blazeio (50,000 completed requests per run, 11-byte document, concurrency levels 10, 100, 1000, and 5000):

● Requests per second: 4274.40 at concurrency 10, dipping slightly to 4097.28 at concurrency 100, recovering to 4228.90 at concurrency 1000, then dropping significantly to 3256.96 at concurrency 5000.

● Time per request (mean): grows with concurrency: 2.340 ms at concurrency 10, 24.406 ms at 100, 236.468 ms at 1000, and 1535.176 ms at 5000.

● Time per request (mean, across all concurrent requests): remains relatively stable: 0.234 ms at concurrency 10, 0.244 ms at 100, 0.236 ms at 1000, and 0.307 ms at 5000.

● Transfer rate: 2721.59 Kbytes/sec at concurrency 10, 2608.82 at 100, 2692.62 at 1000, falling to 2073.77 at 5000.

● Connection times: at concurrency 100, mean times are 0 ms to connect, 24 ms to process, and 24 ms to wait (24 ms total). At concurrency 5000 the variance is larger, with means of 39 ms to connect, 1134 ms to process, and 1134 ms to wait (1173 ms total).

Comparison with FastAPI, benchmarked under similar conditions with the same 11-byte document but on a different port:

● FastAPI's requests per second are much lower: 1583.25 at concurrency 10, 1650.12 at 100, and 1587.77 at 1000. At concurrency 5000, FastAPI did not complete all 50,000 requests, finishing only 45,490.

● FastAPI's mean time per request is 6.316 ms at concurrency 10, 60.602 ms at 100, and 629.814 ms at 1000 — generally slower than Blazeio at the lower concurrency levels.
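For what it's worth, ApacheBench derives both of its "Time per request" lines directly from requests per second and the concurrency level, which is why the "across all concurrent requests" figure stays flat while the mean grows roughly linearly with concurrency. A quick sanity check of the quoted figures (the helper functions below are just illustrative, not part of either framework):

```python
# ab's "Time per request (mean)" is concurrency / (requests per second),
# and its "mean, across all concurrent requests" is simply 1 / rps.

def mean_time_per_request_ms(rps: float, concurrency: int) -> float:
    # ab: Time per request [ms] (mean)
    return concurrency / rps * 1000

def time_across_all_requests_ms(rps: float) -> float:
    # ab: Time per request [ms] (mean, across all concurrent requests)
    return 1 / rps * 1000

# Blazeio at concurrency 10 (4274.40 req/s): reported 2.340 ms and 0.234 ms
print(round(mean_time_per_request_ms(4274.40, 10), 3))    # ~2.34
print(round(time_across_all_requests_ms(4274.40), 3))     # ~0.234

# Blazeio at concurrency 100 (4097.28 req/s): reported 24.406 ms
print(round(mean_time_per_request_ms(4097.28, 100), 3))   # ~24.406
```

So the per-request latency numbers carry no extra information beyond the throughput column; the interesting signals are the throughput drop and the connection-time variance at concurrency 5000.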

odie5533 a day ago

Good that you're trying to make packages, but your post here is unreadable. Perusing the GitHub repo: you should switch from ujson (no longer maintained) to orjson (which is also faster), and switch to pyproject.toml over setup.py (deprecated).
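To make the packaging suggestion concrete, a minimal pyproject.toml sketch that could replace setup.py — the name, version, and dependency list here are placeholders, not taken from the actual repo:

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "blazeio"          # placeholder
version = "0.1.0"         # placeholder
requires-python = ">=3.8"
dependencies = [
    "orjson",  # near drop-in for ujson, but note orjson.dumps() returns bytes
]
```

One caveat on the orjson swap: `orjson.dumps()` returns `bytes` rather than `str`, so any call sites that concatenate the result into a string response need a `.decode()` or a switch to writing bytes directly.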

Also that bit in your code where it calls pip to install dependencies? Yeah, don't do that.

fasthandle a day ago

If going for speed, would something like Go make more sense? Bonus: cross-platform with no dependency-hell jazz.

And if going for async (and speed), Rust? Bonus: Security at core (not that Go's not).