* fix(grpc): Allow gRPC connections via Unix socket
This commit addresses issue #1832.
The way `NET_PEER_IP` and `NET_PEER_PORT` are retrieved raises a `ValueError`
when gRPC connections are handled via Unix sockets.
```py
# For TCP connections, context.peer() looks like "ipv4:127.0.0.1:57284".
ip, port = (
    context.peer().split(",")[0].split(":", 1)[1].rsplit(":", 1)
)
```
When using an address like `unix:///tmp/grpc.sock`, the value of `context.peer()` is `"unix:"`.
Substituting that value into the expression above:
```py
ip, port = "unix:".split(",")[0].split(":", 1)[1].rsplit(":", 1)
ip, port = ["unix:"][0].split(":", 1)[1].rsplit(":", 1)
ip, port = "unix:".split(":", 1)[1].rsplit(":", 1)
ip, port = ["unix", ""][1].rsplit(":", 1)
ip, port = "".rsplit(":", 1)
ip, port = [""] # ValueError
```
I "addressed" the issue by guarding the retrieval of `net.peer.*` values behind
an `if` statement that checks whether the connection uses a Unix socket.
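A minimal sketch of that guard (the helper name and the exact attribute handling are illustrative, not the exact patch):
```py
from opentelemetry.semconv.trace import SpanAttributes


def _add_peer_attributes(context, attributes):
    # Sketch only: for Unix sockets context.peer() is just "unix:" and the
    # parsing below would raise a ValueError, so skip the net.peer.* values.
    if context.peer().startswith("unix:"):
        return
    ip, port = (
        context.peer().split(",")[0].split(":", 1)[1].rsplit(":", 1)
    )
    attributes[SpanAttributes.NET_PEER_IP] = ip
    attributes[SpanAttributes.NET_PEER_PORT] = port
```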
I extended the `server_interceptor` tests to run against TCP and Unix socket configurations.
---
**Open Questions**
- [ ] The socket tests will fail on Windows. Is there a way to annotate that? (See the sketch after this list.)
- [ ] Are there other span values we should be setting for the unix socket?
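For the Windows question, one common way to annotate such tests is a platform-conditional skip (a sketch; the test class name here is hypothetical and the markers used in this repository may differ):
```py
import os
import unittest


@unittest.skipIf(
    os.name == "nt", "Unix domain sockets are not supported on Windows"
)
class TestServerInterceptorUnixSocket(unittest.TestCase):
    ...
```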
* Update CHANGELOG
* Add placeholder attributes for linter
* fix lint
---------
Co-authored-by: Matt Oberle <mattoberle@users.noreply.github.com>
Co-authored-by: Shalev Roda <65566801+shalevr@users.noreply.github.com>
* Add `http.server.response.size` metric to the ASGI implementation (see the sketch after this change set).
Add new unit tests.
* Update changelog.
* Fix linting by disabling too-many-nested-blocks
* Put new logic in a new method
* Refactor the placement of new logic.
* Fixed the unit tests in FastAPI and Starlette
* Update changelog.
* Fix lint errors.
* Refactor getting content-length header
* Refactor getting content-length header
---------
Co-authored-by: Shalev Roda <65566801+shalevr@users.noreply.github.com>
Co-authored-by: Diego Hurtado <ocelotl@users.noreply.github.com>
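For the response-size metric above, a rough sketch of how a histogram can be recorded from the ASGI `http.response.start` message (function and variable names are illustrative; the actual middleware wiring differs):
```py
from opentelemetry import metrics

meter = metrics.get_meter(__name__)
# Histogram for the size of HTTP server responses, in bytes.
response_size_histogram = meter.create_histogram(
    name="http.server.response.size",
    unit="By",
    description="Measures the size of HTTP response messages.",
)


def _parse_content_length(message):
    # ASGI headers are a list of (bytes, bytes) pairs.
    for name, value in message.get("headers") or []:
        if name.lower() == b"content-length":
            try:
                return int(value)
            except ValueError:
                return None
    return None


def _record_response_size(message, attributes):
    if message.get("type") == "http.response.start":
        size = _parse_content_length(message)
        if size is not None:
            response_size_histogram.record(size, attributes)
```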
* Add support for confluent_kafka up to version 2.1.1
* Include version 2.1.1
* update CHANGELOG.md
* run: 'tox -e generate'
* resolve comments
* Update the upper version bound to 2.2.0
---------
Co-authored-by: Ran Nozik <ran@gethelios.dev>
* Corrected the instrumentation example in urllib3 (see the sketch after this change set)
* Remove changelog entry
---------
Co-authored-by: Shalev Roda <65566801+shalevr@users.noreply.github.com>
Co-authored-by: Diego Hurtado <ocelotl@users.noreply.github.com>
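For reference, typical urllib3 instrumentation usage looks roughly like this (a sketch of common usage, not necessarily the corrected example from the docs):
```py
import urllib3

from opentelemetry.instrumentation.urllib3 import URLLib3Instrumentor

# Instrument all urllib3 requests made in this process.
URLLib3Instrumentor().instrument()

http = urllib3.PoolManager()
response = http.request("GET", "https://example.com/")
```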
* Fix falcon instrumentation's usage of span status to only set the description when the status code is ERROR (see the sketch after this change set)
* Update changelog
* Update CHANGELOG.md
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
* fix lint
* Use fewer variables to satisfy R0914 lint rule
---------
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
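The OpenTelemetry spec reserves the span status description for error statuses, so the guarded pattern looks roughly like this (helper name and error condition are illustrative, not the exact instrumentation code):
```py
from opentelemetry.trace import Span, Status, StatusCode


def _set_span_status(span: Span, http_status: int, reason: str) -> None:
    # Sketch only: attach a description to the span status solely when the
    # status code is ERROR; otherwise leave the status untouched.
    if http_status >= 500:
        span.set_status(
            Status(
                status_code=StatusCode.ERROR,
                description=f"{http_status}: {reason}",
            )
        )
```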
* Refactor CODEOWNERS file
Fixes #1803
* Remove CODEOWNERS
* Refactor component owners configuration
* Refactor CODEOWNERS to select any file but the ones in instrumentation
---------
Co-authored-by: Shalev Roda <65566801+shalevr@users.noreply.github.com>
* Add otelTraceSampled to instrumentation-logging (see the sketch after this change set)
* Updated code with black
* Added to CHANGELOG.md
---------
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
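A rough sketch of how the new field can be used (the format string is user-defined; `otelTraceSampled` is the record attribute this change adds):
```py
import logging

from opentelemetry.instrumentation.logging import LoggingInstrumentor

# Injects otelTraceID, otelSpanID, otelServiceName and otelTraceSampled
# into every LogRecord created after instrumentation.
LoggingInstrumentor().instrument()

logging.basicConfig(
    format=(
        "%(asctime)s %(levelname)s [%(name)s] "
        "[trace_id=%(otelTraceID)s span_id=%(otelSpanID)s "
        "sampled=%(otelTraceSampled)s] %(message)s"
    )
)
logging.getLogger(__name__).warning("hello")
```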
* WIP
* Revert "WIP"
This reverts commit 3ed466348e4c172fd96569a0dcb1b15047760cef.
* Fix expected URL in aiohttp instrumentation test
The underlying cause of the issue is the update of the yarl package from
1.8.2 to 1.9.1. yarl is a dependency of the
opentelemetry-instrumentation-aiohttp package, but that is not where the
issue arises; it arises in aiohttp, which also depends on yarl.
This is why the fix does not touch any relevant part of any
opentelemetry-* code: it is the return value of the aiohttp code that
now yields a different URL.
Fixes #1770
* Allow Kafka producer headers to be dict or list
* Modify the Kafka context getter helper methods to work on both dict and list headers (see the sketch after this change set)
---------
Co-authored-by: Shalev Roda <65566801+shalevr@users.noreply.github.com>
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
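A minimal sketch of a getter that tolerates both header shapes (the function name is illustrative, not the exact helper in the instrumentation):
```py
from typing import List


def _get_header_values(headers, key: str) -> List[bytes]:
    # Kafka producer headers may arrive either as a dict or as a list of
    # (key, value) tuples; handle both shapes when extracting context keys.
    if isinstance(headers, dict):
        value = headers.get(key)
        return [value] if value is not None else []
    return [value for name, value in headers if name == key]
```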