Why fasthttp is 10x faster than net/http
Well, this is a much better implementation for several reasons:
- The worker pool model is a zero-allocation model: the workers are already initialized and ready to serve, whereas in the stdlib implementation each accepted connection is handed to `go c.serve()`, which has to allocate memory for a new goroutine.
- The worker pool model is easier to tune, as you can increase or decrease the buffer size, i.e. the number of work units you are able to accept, versus the fire-and-forget model in the stdlib.
- The worker pool model allows the handlers to stay connected to the server through channel communication: if the server needs to shut down, for example, it can signal the workers far more easily than in the stdlib implementation (the worker-pool sketch at the end of this post shows this).
- The handler function signature is better, as it takes only a context that includes both the request and the writer the handler needs, as shown below. This is HUGELY better than the standard library, where all you get is a request and a response writer… The work in Go 1.7 to include a context within the request is pretty much a hack to give people what they really want (a context) without breaking anyone.
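For contrast, here is a rough side-by-side of the two handler shapes. The fasthttp half uses the public `fasthttp.RequestCtx` / `fasthttp.ListenAndServe` API as I understand it, so read it as a sketch rather than canonical usage; the ports are arbitrary.

```go
// Sketch of the two handler shapes. The stdlib handler receives a separate
// writer and request; the fasthttp-style handler receives a single context
// that carries both sides of the exchange.
package main

import (
	"log"
	"net/http"

	"github.com/valyala/fasthttp"
)

// net/http: two arguments, and (before Go 1.7) no context at all.
func stdHandler(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("hello from net/http"))
}

// fasthttp: one argument that exposes both the request and the response.
func fastHandler(ctx *fasthttp.RequestCtx) {
	ctx.WriteString("hello from fasthttp")
}

func main() {
	go fasthttp.ListenAndServe(":8081", fastHandler)
	http.HandleFunc("/", stdHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```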
Overall it is just better to write a server with a worker pool model for serving requests, as sketched below, rather than spawning a “thread” per request with no way of throttling out of the box.
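To make that concrete, here is a minimal sketch of the pattern, not fasthttp's actual internals: a fixed set of workers pulls accepted connections from a buffered channel, so no goroutine is spawned per connection, the buffer size throttles how much pending work is queued, and closing a quit channel tells every worker to stop. The port, pool size, and buffer size are arbitrary.

```go
// Minimal worker-pool server sketch (illustration only, not fasthttp's code).
package main

import (
	"fmt"
	"log"
	"net"
	"sync"
)

func worker(conns <-chan net.Conn, quit <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case c := <-conns:
			// Serve one connection. A real server would parse the request;
			// here we just write a fixed response and close.
			fmt.Fprint(c, "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
			c.Close()
		case <-quit:
			return // the server asked the pool to shut down
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}

	conns := make(chan net.Conn, 128) // buffer size = how many work units we queue
	quit := make(chan struct{})

	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // workers are created once, up front
		wg.Add(1)
		go worker(conns, quit, &wg)
	}

	for {
		c, err := ln.Accept()
		if err != nil {
			break
		}
		conns <- c // hand the connection to the pool; no `go c.serve()` here
	}

	close(quit)
	wg.Wait()
}
```

The same quit channel that stops the workers is what makes coordinated shutdown easy: the server closes one channel and every worker sees it, instead of having no handle at all on the goroutines it spawned.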