* Initial commit
* Initial commit 2
* minor update
* goimports update
* resolved race condition
* added test for TLSInfo on server side
* Post review updates
* post review changes
* debug
* refactoring and added a third function
* post review changes
* post review updates
* post review commit
* post review update
* Add the initial service config support
* start scWatcher later
* remove timeoutCh
* address the comments
* deal with dial timeout
* defer cancel for the newly created context for correct lifetime management
* fix the defer order
* added the other two missing cancels (the context-cancel pattern is sketched below)
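The cancel handling these commits describe follows the standard Go context pattern. Here is a minimal sketch, assuming a gRPC dial; the target address, timeout, and dial options are illustrative, not taken from the actual diff:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
)

func main() {
	// Derive a context that enforces the dial timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	// Defer the cancel immediately after creation so the context's
	// resources are released on every return path.
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	// Defers run last-in, first-out, so this Close executes before
	// the cancel above; the order the defers are written in matters.
	defer conn.Close()
}
```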
Remove the add and cancel methods of quotaPool. Their use is not required, and
leads to logical races when used concurrently from multiple goroutines. Rename
the reset method to add.
The typical way that a goroutine claims quota is to call the add method and
then to select on the channel returned by the acquire method. If two
goroutines are both trying to claim quota from a single quotaPool, the second
call to the add method can happen before the first attempt to read from the
channel. When that happens the second goroutine to attempt the channel read
will end up waiting for a very long time, in spite of its efforts to prepare
the channel for reading.
The quotaPool will always behave correctly when any positive quota is on the
channel rather than stored in the struct field. In the opposite case, when
positive quota is only in the struct field and not on the channel, users of
the quotaPool can fail to access available quota. Err on the side of storing
any positive quota in the channel.
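A minimal sketch of that invariant, with add and acquire named as in the text above; the struct fields, the locking, and the constructor are assumptions for illustration, not the transport package's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// quotaPool keeps any positive quota on its channel so that a
// goroutine selecting on acquire() always sees it.
type quotaPool struct {
	c chan int

	mu    sync.Mutex
	quota int // pending quota not yet published on the channel
}

// newQuotaPool is a hypothetical constructor: positive initial
// quota goes straight onto the channel.
func newQuotaPool(q int) *quotaPool {
	qp := &quotaPool{c: make(chan int, 1)}
	if q > 0 {
		qp.c <- q
	} else {
		qp.quota = q
	}
	return qp
}

// add folds v into the pool and, whenever the resulting total is
// positive, publishes it on the channel.
func (qp *quotaPool) add(v int) {
	qp.mu.Lock()
	defer qp.mu.Unlock()
	// Fold any value already on the channel back into the total.
	select {
	case n := <-qp.c:
		qp.quota += n
	default:
	}
	qp.quota += v
	if qp.quota <= 0 {
		return
	}
	// Publish the positive total; the channel has capacity 1 and was
	// just drained under the same lock, so the send cannot block.
	select {
	case qp.c <- qp.quota:
		qp.quota = 0
	default:
	}
}

// acquire returns the channel a claimant selects on for quota.
func (qp *quotaPool) acquire() <-chan int {
	return qp.c
}

func main() {
	qp := newQuotaPool(0)
	go qp.add(10)
	fmt.Println("claimed:", <-qp.acquire()) // claimed: 10
}
```

With this shape, a goroutine that loses the race described above still unblocks: the quota it needs is waiting on the channel rather than stranded in the struct field.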
This includes a reproducer for #632, which fails on many runs with this
package at v1.0.4. The frequency of the test failures depends on how stressed
the server is, since it's now effectively checking for weird interleavings of
goroutines. It passes reliably with these changes to the transport package.
The behavior described in #734 (an RPC with a streaming response hangs
unexpectedly) matches what I've seen in my programs, and what I see in the
test case added here. If it's a logical flow control bug, this change may well
fix it.
Updates #632
Updates #734
* add stats.tagger APIs and connection stats.
* fix comments
* use ac.ctx in http2client
* change name and comments
* small fixes in stats_tests
* add a TODO to ConnTagInfo
* rename handle to handleRPC
* modify stats comments
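These hooks surface through the exported stats.Handler interface. Below is a minimal sketch of a handler that tags connections and RPCs and logs the stats events it receives; the statsLogger type and its log output are illustrative, not part of the package:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/stats"
)

// statsLogger is a hypothetical stats.Handler implementation.
type statsLogger struct{}

// TagConn is called once per connection; ConnTagInfo carries the
// local and remote addresses.
func (statsLogger) TagConn(ctx context.Context, info *stats.ConnTagInfo) context.Context {
	log.Printf("new conn: %v -> %v", info.RemoteAddr, info.LocalAddr)
	return ctx
}

func (statsLogger) HandleConn(ctx context.Context, s stats.ConnStats) {
	log.Printf("conn stats: %T", s)
}

// TagRPC is called once per RPC, before HandleRPC sees any events.
func (statsLogger) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context {
	log.Printf("new RPC: %s", info.FullMethodName)
	return ctx
}

func (statsLogger) HandleRPC(ctx context.Context, s stats.RPCStats) {
	log.Printf("RPC stats: %T", s)
}

func main() {
	// Install the handler on a server via the StatsHandler option.
	_ = grpc.NewServer(grpc.StatsHandler(statsLogger{}))
}
```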