247 Commits

SHA1 Message Date
689d061d46 Cleanup usages of resolver.Target's Scheme and Authority (#5761) 2022-11-09 23:06:01 -08:00
36d14dbf66 Fix binary logging bug which logs a server header on a trailers only response (#5763) 2022-11-02 19:46:50 -04:00
5fc798be17 Add binary logger option for client and server (#5675) 2022-10-06 13:36:05 -04:00
12db695f16 grpc: restrict status codes from control plane (gRFC A54) (#5653) 2022-10-04 15:13:23 -07:00
30d54d398f client: fix stream creation issue with transparent retry (#5503) 2022-07-14 16:52:18 -07:00
ea86bf7497 stats: add support for multiple stats handlers in a single client or server (#5347) 2022-06-03 09:15:50 -07:00
799605c228 client: fix potential panic during RPC retries (#5323) 2022-05-04 10:06:12 -07:00
9711b148c4 server: clarify documentation around setting and sending headers and ServerStream errors (#5302) 2022-04-08 13:11:40 -07:00
1ffd63de37 binarylog: generalize binarylog's MethodLogger preparing for new observability features (#5244) 2022-03-21 14:00:02 -07:00
e601f1ae37 fix: validate metadata keys and values (#4886) 2022-02-23 11:15:55 -08:00
61a6a06b88 server: handle context errors returned by service handler (#5156) 2022-01-26 11:02:23 -08:00
d53469981f transport: fix transparent retries when per-RPC credentials are in use (#4785) 2021-09-21 10:39:59 -07:00
d41f21ca05 stats: support stats for all retry attempts; support transparent retry (#4749) 2021-09-14 15:11:42 -07:00
1ddab33869 client: fix detection of whether IO was performed in NewStream (#4611)
For transparent retry.

Also allow non-wait-for-ready RPCs to retry indefinitely on errors that resulted in no I/O; the spec forbade this at one point during development, but it no longer does.
2021-07-23 10:37:18 -07:00
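
As a hedged illustration of the rule this commit describes (the names below are invented for the sketch, not grpc-go's actual internals): a failed attempt qualifies for unlimited transparent retry only when the server provably never saw it.

```go
package main

// attemptResult captures the two facts the transparent-retry decision
// needs (hypothetical type; grpc-go tracks this inside its stream state).
type attemptResult struct {
	performedIO bool // the attempt wrote to or read from the wire
	unprocessed bool // the server signalled (e.g. via GOAWAY) it never processed the RPC
}

// canTransparentlyRetry sketches the policy above: errors with no I/O
// may be retried indefinitely, even for non-wait-for-ready RPCs.
func canTransparentlyRetry(r attemptResult) bool {
	if !r.performedIO {
		return true // nothing reached the server; retrying is always safe
	}
	return r.unprocessed // I/O happened, but the server marked the RPC unprocessed
}
```
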
4faa31f0a5 stats: add stream info inside stats.Begin (#4533) 2021-06-18 13:21:07 -07:00
b6f206b84f grpc: improve docs on StreamDesc (#4397) 2021-05-07 11:17:26 -07:00
d7737376c3 xds: implement fault injection HTTP filter (A33) (#4236) 2021-03-12 08:38:49 -08:00
61f0b5fa7c client: implement proper config selector interceptors (#4235) 2021-03-05 13:31:34 -08:00
60843b1066 xds: add support for HTTP filters (gRFC A39) (#4206) 2021-02-25 14:04:15 -08:00
750abe8f95 resolver: allow config selector to return an RPC error (#4082) 2020-12-08 13:32:37 -08:00
b88744b832 xds: add ConfigSelector to support RouteAction timeouts (#3991) 2020-11-17 13:22:28 -08:00
e6c98a478e stats: include message header in stats.InPayload.WireLength (#3886) 2020-09-25 10:06:54 -07:00
266c7b6f82 xdsrouting: add fake headers (#3748) 2020-07-20 13:40:03 -07:00
506b773066 Implemented component logging (#3617) 2020-06-26 12:04:47 -07:00
3b63c2b110 retry: re-enable retrying on non-IO transport errors (#3691) 2020-06-16 10:03:59 -07:00
eb11ffdf9b retry: prevent per-RPC creds error from being transparently retried (#3677) 2020-06-11 09:18:17 -07:00
9aa97f9cb4 stream: fix calloption.After() race in finish (#3672) 2020-06-10 18:00:24 -07:00
fff75ae40f channelz: log on channelz trace events and trace on channelz relevant logs. (#3329)
2020-02-14 10:11:26 -08:00
6b9bf4296e Revert "profiling: add hooks within grpc (#3159)" (#3378)
This reverts commit 83263d17f75d76339f8e2d3b1d2d8364746349f3.
2020-02-14 07:56:46 -08:00
83263d17f7 profiling: add hooks within grpc (#3159) 2020-02-12 11:10:44 -08:00
8c50fc2565 revert buffer reuse (#3338)
* Revert "stream: fix returnBuffers race during retry (#3293)"

This reverts commit ede71d589cc36a6adff7244ce220516f0b3e446b.

* Revert "codec/proto: reuse of marshal byte buffers (#3167)"

This reverts commit 642675125e198ce612ea9caff4bf75d3a4a45667.
2020-01-27 13:30:41 -08:00
ede71d589c stream: fix returnBuffers race during retry (#3293)
Also release the buffer after Write(), unless the buffer needs to be kept for retries.
2020-01-07 17:17:22 -08:00
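
A minimal sketch of that release rule (hypothetical names, not the actual grpc-go code): the buffer goes back to its pool immediately after Write unless a retry may replay the message later.

```go
package main

type transportWriter interface{ Write(data []byte) error }

type clientStream struct {
	retryEnabled bool // when true, a later attempt may resend this message
}

// sendAndRecycle hands the marshalled buffer to the transport, then
// frees it only if no retry can still need it.
func (cs *clientStream) sendAndRecycle(t transportWriter, data []byte, free func()) error {
	err := t.Write(data)
	if !cs.retryEnabled {
		free() // safe: no retry attempt can reference this buffer now
	}
	return err
}
```
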
642675125e codec/proto: reuse of marshal byte buffers (#3167)
Performance benchmarks can be found below. Obviously, an 8 KiB
request/response is tailored to showcase this improvement, as this is
where codec buffer reuse shines, but I've run other benchmarks too (like
1-byte requests and responses) and there's no discernible impact on
performance.

We do not allow reuse of buffers when stats handlers or binary logging
are turned on, because they may need access to the message data and
payload even after the data has been written to the wire. In such cases,
we never return the buffer to the pool.

A buffer reuse threshold of 1 KiB was determined after several
experiments: returns diminish when buffer reuse is enabled for smaller
messages, and below this size reuse actually hurts performance.

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_40s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_6-reqSize_8192B-respSize_8192B-compressor_off-channelz_false-preloader_false
               Title       Before        After Percentage
            TotalOps       839638       906223     7.93%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op    103788.29     80592.47   -22.35%
           Allocs/op       183.33       189.30     3.27%
             ReqT/op 1375662899.20 1484755763.20     7.93%
            RespT/op 1375662899.20 1484755763.20     7.93%
            50th-Lat    238.746µs    225.019µs    -5.75%
            90th-Lat    514.253µs    456.439µs   -11.24%
            99th-Lat    711.083µs    702.466µs    -1.21%
             Avg-Lat     285.45µs    264.456µs    -7.35%
2019-12-20 09:41:23 -08:00
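
A self-contained sketch of the scheme described above, assuming a sync.Pool and the 1 KiB threshold; the names are illustrative rather than grpc-go's actual internals.

```go
package main

import "sync"

const reuseThreshold = 1 << 10 // 1 KiB: below this, pooling costs more than it saves

var marshalPool = sync.Pool{New: func() any { return new([]byte) }}

// getBuffer returns a buffer of the requested size, pooled only when
// the message is large enough for reuse to pay off.
func getBuffer(size int) *[]byte {
	if size < reuseThreshold {
		b := make([]byte, size)
		return &b
	}
	bp := marshalPool.Get().(*[]byte)
	if cap(*bp) < size {
		*bp = make([]byte, size)
	}
	*bp = (*bp)[:size]
	return bp
}

// putBuffer recycles a buffer, except when stats handlers or binary
// logging may still read the marshalled data after the wire write.
func putBuffer(bp *[]byte, statsOrBinlogActive bool) {
	if statsOrBinlogActive || cap(*bp) < reuseThreshold {
		return
	}
	marshalPool.Put(bp)
}
```
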
663e4ce0c9 client: fix race between client-side stream cancellation and compressed server data arriving (#3054)
`transport/Stream.RecvCompress` returns what the header contains, if present,
or empty string if a context error occurs.  However, it "prefers" the header
data even if there is a context error, to prevent a related race.  What happens
here is:

1. RPC starts.

2. Client cancels RPC.

3. `RecvCompress` tells `ClientStream.Recv` that compression used is "" because
   of the context error.  `as.decomp` is left nil, because there is no
   compressor to look up in the registry.

4. Server's header and first message hit client.

5. Client sees the header and message and allows grpc's stream to see them.
   (We only provide context errors if we need to block.)

6. Client performs a successful `Read` on the stream, receiving the gzipped
   payload, then checks `as.decomp`.

7. We have no decompressor but the payload has a bit set indicating the message
   is compressed, so this is an error.  However, when forming the error string,
   `RecvCompress` now returns "gzip" because it doesn't need to block to get
   this from the now-received header.  This leads to the confusing message
   about how "gzip" is not installed even though it is.

This change makes `waitOnHeader` close the stream when context cancellation happens.
Then `RecvCompress` uses whatever value is present in the stream at that time, which
can no longer change because the stream is closed.  Also, this will be in sync with
the messages on the stream - if there are any messages present, the headers must
have been processed first, and `RecvCompress` will contain the proper value.
2019-10-01 10:47:40 -07:00
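
A simplified sketch of the fix (field and method names assumed): closing the stream inside `waitOnHeader` freezes its state, so `RecvCompress` can no longer disagree with messages that were already delivered.

```go
package main

import "context"

type stream struct {
	ctx        context.Context
	headerChan chan struct{} // closed once the server's header arrives
	closeFn    func(error)   // tears the stream down exactly once
}

// waitOnHeader blocks until the header arrives or the context ends. On
// cancellation it closes the stream first, so later reads of the
// stream's compressor see a value that can no longer change.
func (s *stream) waitOnHeader() {
	select {
	case <-s.ctx.Done():
		s.closeFn(s.ctx.Err())
	case <-s.headerChan:
	}
}
```
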
fde0cae1c4 stream: call stats handler if the attempt failed to get transport (#2962) 2019-08-07 13:22:33 -07:00
1f154c6e18 stream: fix panic caused by failing to get a transport for a retry attempt (#2958) 2019-08-06 15:36:33 -07:00
977142214c client: fix race between transport draining and new RPCs (#2919)
Before these fixes, it was possible to see errors on new RPCs after a
connection began draining, and before establishing a new connection.  There is
an inherent race between choosing a SubConn and attempting to creating a stream
on it.  We should be able to avoid application-visible RPC errors due to this
with transparent retry.  However, several bugs were preventing this from
working correctly:

1. Non-wait-for-ready RPCs were skipping transparent retry, though the retry
design calls for retrying them.

2. The transport closed itself (and would consequently error new RPCs) before
notifying the SubConn that it was draining.

3. The SubConn wasn't synchronously updating itself once it was notified about
the closing or draining state.

4. The SubConn would go into the TRANSIENT_FAILURE state instantaneously,
causing RPCs to fail instead of queue.
2019-07-22 16:07:55 -07:00
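
To illustrate item 4's queue-versus-fail distinction with today's balancer API (this picker is hypothetical, not part of the fix): returning balancer.ErrNoSubConnAvailable makes RPCs queue until a new picker appears, whereas surfacing a connection error fails non-wait-for-ready RPCs immediately.

```go
package main

import "google.golang.org/grpc/balancer"

// drainAwarePicker queues RPCs while its connection is draining or
// reconnecting instead of failing them.
type drainAwarePicker struct {
	ready balancer.SubConn // nil while no healthy connection exists
}

func (p *drainAwarePicker) Pick(info balancer.PickInfo) (balancer.PickResult, error) {
	if p.ready == nil {
		// gRPC blocks the RPC and retries the pick when a new picker
		// is pushed, rather than surfacing an error to the caller.
		return balancer.PickResult{}, balancer.ErrNoSubConnAvailable
	}
	return balancer.PickResult{SubConn: p.ready}, nil
}
```
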
5caf962939 client: addrConn NewStream and health check cleanup (#2848) 2019-06-26 11:15:17 -07:00
8260df7a61 grpc: implementation of PreparedMsg API
2019-04-19 14:08:08 -07:00
d389f9fac6 balancer: add server loads from RPC trailers to DoneInfo (#2641) 2019-04-02 11:15:36 -07:00
9a2caafd93 client: restore remote address in traces (#2718)
The client-side traces were otherwise only showing `RPC: to <nil>`,
which is not helpful.

Also clean up construction of traceInfo and firstLine in a few places.
2019-03-27 09:52:40 -07:00
ed10349f45 stats: add WireLength to stats.InPayload (#2692) (#2711) 2019-03-25 15:42:16 -07:00
9c3a959569 stats: add Trailer to client-side stats.End (#2639)
Currently, it is not possible to access trailers from within a
stats.Handler. The reason is that both stats.Handler and
ClientStream.Trailer require a lock on the ClientStream.

A workaround would be to start a separate goroutine that will call
ClientStream.Trailer asynchronously, but that requires careful
coordination and we can quite easily make the trailer metadata available
to the stats.Handler directly.

Use case: an interceptor that processes trailer metadata for each
streaming RPC after the stream has finished. Note that a
StreamClientInterceptor returns immediately, before the stream has
finished and before the trailer metadata is available.
2019-03-13 10:10:52 -07:00
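
A minimal sketch of that use case using the field this commit adds (stats.End.Trailer); the handler name is invented.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc/stats"
)

// trailerHandler logs trailer metadata once each client RPC finishes.
type trailerHandler struct{}

func (trailerHandler) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context { return ctx }
func (trailerHandler) HandleRPC(_ context.Context, s stats.RPCStats) {
	if end, ok := s.(*stats.End); ok {
		log.Printf("RPC done (err=%v), trailers: %v", end.Error, end.Trailer)
	}
}
func (trailerHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context { return ctx }
func (trailerHandler) HandleConn(context.Context, stats.ConnStats) {}
```

Installed via grpc.WithStatsHandler(trailerHandler{}) on the dial options, this avoids the lock-ordering problem that calling ClientStream.Trailer from a handler would hit.
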
9572bbe0f9 cleanup: remove unused symbols (#2581) 2019-01-17 10:14:45 -08:00
dfd7708d35 cleanup: use time.Until(t) instead of t.Sub(time.Now) (#2571) 2019-01-15 16:09:50 -08:00
9e7c146356 Return nil trailer metadata if the RPC's status is context canceled. (#2554)
* Closes the client transport stream, if context is cancelled while recvBuffer is reading.

* Passes a function pointer to recvBufferReader, instead of a Stream and an http2Client.

* Adds more descriptive error messages.

* If waitOnHeader notices the context cancellation, shouldRetry no longer returns a ContextError. Instead, it returns the error from the last try.

* Makes sure that test gets both statuses at least 5 times.

* Makes cntPermDenied a lambda function.
2019-01-14 10:59:44 -08:00
04ea82009c cleanup: replace "x/net/context" import with "context" (#2439) 2018-11-12 13:30:41 -08:00
a612bb6847 client: block RPCs early until the resolver has returned addresses (#2409)
This allows the initial RPC(s) an opportunity to apply settings from the service config; without this change we would still block, but only after observing the current service config settings.
2018-11-09 13:53:47 -08:00
61c3ec866d docs: clarify SendMsg/CloseSend usage (#2418) 2018-11-01 12:29:53 -06:00
105f61423e health: Client LB channel health checking (#2387) 2018-11-01 10:49:35 -07:00