* Run tests on Windows in GitHub Actions
* core SHA update
* format code
* fix ci yaml
* rebase
* lint
* Try without win+py3.6 fix
* Improve test reliability
Update some tests to use more deterministic methods of testing in-memory
spans (see the sketch below). This helps the core repo pass tests after
adding Windows to the CI matrix.
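A minimal sketch of what the deterministic approach can look like (illustrative only, not the repo's actual test code; in older SDK releases the processor class is named SimpleExportSpanProcessor):

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
    InMemorySpanExporter,
)

# Collect spans in memory so assertions run on exported data directly,
# instead of depending on timing or background export threads.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
tracer = provider.get_tracer(__name__)

with tracer.start_as_current_span("work"):
    pass

spans = exporter.get_finished_spans()
assert len(spans) == 1
assert spans[0].name == "work"
```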
* Make propagators conform to spec
* do not modify the passed context or set an invalid span in it when a
propagator fails to extract
* when no context is passed to propagator.extract, default to the root
context so that a new trace is started instead of continuing the currently
active trace when extraction fails
* also fix the ot-trace propagator, which compared int with str trace/span
ids when checking for validity in extract (see the sketch below)
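A hedged sketch of the extract behavior described above (not the repo's code; class names such as NonRecordingSpan vary across opentelemetry-python versions, and the traceparent parsing is deliberately simplified):

```python
from opentelemetry import trace
from opentelemetry.context import Context


def extract(carrier: dict, context: Context = None) -> Context:
    # With no context given, default to the root context so a failed
    # extraction starts a new trace instead of continuing the active one.
    if context is None:
        context = Context()
    header = carrier.get("traceparent")
    if header is None:
        # Extraction failed: return the passed context unmodified rather
        # than setting an invalid span in it.
        return context
    try:
        _version, trace_id, span_id, flags = header.split("-")
        span_context = trace.SpanContext(
            # Parse ids to ints before validity checks; comparing str ids
            # against int constants was the ot-trace bug mentioned above.
            trace_id=int(trace_id, 16),
            span_id=int(span_id, 16),
            is_remote=True,
            trace_flags=trace.TraceFlags(int(flags, 16)),
        )
    except (ValueError, TypeError):
        return context
    if not span_context.is_valid:
        return context
    return trace.set_span_in_context(
        trace.NonRecordingSpan(span_context), context
    )
```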
The datadog exporter sometimes attempts to add a "None" value if the
Datadog origin header doesn't exist.
This does not cause runtime errors in the most recent opentelemetry
release (tracestate protects against an invalid value), but it does produce
warnings:
WARNING opentelemetry.trace.span:span.py:230 Invalid key/value pair (dd_origin, None) found.
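A minimal illustrative guard against this (the function name is hypothetical, not the exporter's actual code; the TraceState constructor shown matches the current API):

```python
from opentelemetry.trace import TraceState


def build_trace_state(dd_origin):
    # Only add dd_origin when the header was actually present; a None
    # value triggers the "Invalid key/value pair" warning quoted above.
    entries = []
    if dd_origin is not None:
        entries.append(("dd_origin", dd_origin))
    return TraceState(entries)
```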
* adding README
adding sample app
adding examples readme
fixing lint errors
linting examples
updating readme tls_config example
excluding examples
adding examples to exclude in all linters
adding isort.cfg skip
changing isort to path
ignoring yml only
adding it to excluded directories in pylintrc
only adding exclude to directory
removing readme.rst and adding explicit file names to ignore
adding the rest of the files
adding readme.rst back
adding to ignore glob instead
reverting back to ignore list
converting README.md to README.rst
* addressing readme comments
* adding link to spec for details on aggregators
* updating readme
* adding python-snappy to setup.cfg
Fixes #196
This marks the test case as flaky, making it run at most 3 times. It is
enough for one of these runs to pass for the test case to be considered
passed and not run again. If 3 consecutive runs of this test case fail,
the test case is considered failed. It has been reported that re-running
this test case usually makes it pass. This approach is preferred over
marking it as xfail(strict=False) because the test usually ends up passing
after another run, so in most cases we can still benefit from running this
test case (if it is actually failing because of a bug, it will be reported
as such after failing 3 times, making the team aware of an actual issue).
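For reference, this is what the marker can look like with the flaky pytest plugin (the test name here is hypothetical):

```python
from flaky import flaky


# Re-run up to 3 times; a single passing run counts as a pass.
@flaky(max_runs=3, min_passes=1)
def test_sometimes_fails_on_ci():
    ...
```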