Prometheus Remote Write Exporter (6/6) (#227)

* adding README

adding sample app

adding examples readme

fixing lint errors

linting examples

updating readme tls_config example

excluding examples

adding examples to exclude in all linters

adding isort.cfg skip

changing isort to path

ignoring yml only

adding it to excluded directories in pylintrc

only adding exclude to directory

removing readme.rst and adding explicit file names to ignore

adding the rest of the files

adding readme.rst back

adding to ignore glob instead

reverting back to ignore list

converting README.md to README.rst

* addressing readme comments

* adding link to spec for details on aggregators

* updating readme

* adding python-snappy to setup.cfg
Azfaar Qureshi
2020-12-22 14:06:22 -05:00
committed by GitHub
parent 65801c31d8
commit f6f5b90aeb
12 changed files with 662 additions and 20 deletions

View File

@@ -7,7 +7,7 @@ extension-pkg-whitelist=
# Add list of files or directories to be excluded. They should be base names, not
# paths.
ignore=CVS,gen
ignore=CVS,gen,Dockerfile,docker-compose.yml,README.md,requirements.txt,cortex-config.yml
# Add files or directories matching the regex patterns to be excluded. The
# regex matches against base names, not paths.

View File

@@ -29,6 +29,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
([#216](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/216))
- `opentelemetry-instrumentation-grpc` Add tests for grpc span attributes, grpc `abort()` conditions
([#236](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/236))
- Add README and example app for Prometheus Remote Write Exporter
([#227](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/227))
### Changed
- `opentelemetry-instrumentation-asgi`, `opentelemetry-instrumentation-wsgi` Return `None` for `CarrierGetter` if key not found
@@ -36,7 +38,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `opentelemetry-instrumentation-grpc` Comply with updated spec, rework tests
([#236](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/236))
- `opentelemetry-instrumentation-asgi`, `opentelemetry-instrumentation-falcon`, `opentelemetry-instrumentation-flask`, `opentelemetry-instrumentation-pyramid`, `opentelemetry-instrumentation-wsgi` Renamed `host.port` attribute to `net.host.port`
([#242](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/242))
- `opentelemetry-instrumentation-flask` Do not emit a warning message for request contexts created with `app.test_request_context`
([#253](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/253))
- `opentelemetry-instrumentation-requests`, `opentelemetry-instrumentation-urllib` Fix span name callback parameters

View File

@@ -1,27 +1,322 @@
OpenTelemetry Prometheus Remote Write Exporter
==============================================
OpenTelemetry Python SDK Prometheus Remote Write Exporter
=========================================================
This library allows exporting metric data to `Prometheus Remote Write Integrated Backends
<https://prometheus.io/docs/operating/integrations/>`_. Latest `types.proto
<https://github.com/prometheus/prometheus/blob/master/prompb/types.proto>` and `remote.proto
<https://github.com/prometheus/prometheus/blob/master/prompb/remote.proto>` Protocol Buffers
used to create WriteRequest objects were taken from Prometheus repository. Development is
currently in progress.
This package contains an exporter to send `OTLP`_ metrics from the
`OpenTelemetry Python SDK`_ directly to a `Prometheus Remote Write integrated backend`_
(such as Cortex or Thanos) without having to run an instance of the
Prometheus server. The latest `types.proto`_ and `remote.proto`_
protocol buffers are used to create the WriteRequest. The image below shows the
two Prometheus exporters in the OpenTelemetry Python SDK.
Pipeline 1 illustrates the setup required for a `Prometheus "pull" exporter`_.
Pipeline 2 illustrates the setup required for the Prometheus Remote
Write exporter.
|Prometheus SDK pipelines|
The Prometheus Remote Write Exporter is a "push" based exporter and only
works with the OpenTelemetry `push controller`_. The controller
periodically collects data and passes it to the exporter. This exporter
then converts the data into `timeseries`_ and sends it to the Remote
Write integrated backend through HTTP POST requests. The metrics
collection datapath is shown below:
See the ``examples`` folder for a demo of this exporter.
Table of Contents
=================
- `Summary`_
- `Table of Contents`_
- `Installation`_
- `Quickstart`_
- `Examples`_
- `Configuring the Exporter`_
- `Securing the Exporter`_
- `Authentication`_
- `TLS`_
- `Supported Aggregators`_
- `Error Handling`_
- `Contributing`_
- `Design Doc`_
Installation
------------
Prerequisites
~~~~~~~~~~~~~
1. Install the snappy C library
**DEB**: ``sudo apt-get install libsnappy-dev``
**RPM**: ``sudo yum install libsnappy-devel``
**OSX/Brew**: ``brew install snappy``
**Windows**: ``pip install python_snappy-0.5-cp36-cp36m-win_amd64.whl``
Exporter
~~~~~~~~
- To install from the latest PyPI release, run
``pip install opentelemetry-exporter-prometheus-remote-write``
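
Note: installing the exporter also pulls in the ``python-snappy``
dependency (declared in its ``setup.cfg``), which is why the snappy C
library from the prerequisites step must be present first.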
Quickstart
----------
.. code:: python

   from opentelemetry import metrics
   from opentelemetry.sdk.metrics import MeterProvider
   from opentelemetry.exporter.prometheus_remote_write import (
       PrometheusRemoteWriteMetricsExporter,
   )

   # Sets the global MeterProvider instance
   metrics.set_meter_provider(MeterProvider())

   # The Meter is responsible for creating and recording metrics.
   # Each meter has a unique name, which we set to the module's name here.
   meter = metrics.get_meter(__name__)

   exporter = PrometheusRemoteWriteMetricsExporter(endpoint="endpoint_here")  # add other params as needed

   metrics.get_meter_provider().start_pipeline(meter, exporter, 5)
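
The final argument to ``start_pipeline`` is the export interval in
seconds, so this pipeline collects and pushes metrics every 5 seconds.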
Examples
--------
This example uses `Docker Compose`_ to set up:
1. A Python program that creates 5 instruments with 5 unique aggregators
and a randomized load generator
2. An instance of `Cortex`_ to receive the metrics data
3. An instance of `Grafana`_ to visualize the exported data
Requirements
~~~~~~~~~~~~
- Have Docker Compose `installed`_
*Users do not need to install Python as the app will be run in a
Docker container*
Instructions
~~~~~~~~~~~~
1. Run ``docker-compose up -d`` in the ``examples/`` directory
The ``-d`` flag causes all services to run in detached mode and frees up
your terminal session. This also means no logs are shown; you can follow
a service's logs manually using
``docker logs ${CONTAINER_ID} --follow``
2. Log into the Grafana instance at http://localhost:3000
- login credentials are ``username: admin`` and ``password: admin``
- There may be an additional prompt to set a new password; this is
optional and can be skipped
3. Navigate to the ``Data Sources`` page
- Look for a gear icon on the left sidebar and select
``Data Sources``
4. Add a new Prometheus Data Source
- Use ``http://cortex:9009/api/prom`` as the URL
- Set the scrape interval to ``2s`` to make updates
appear quickly **(Optional)**
- click ``Save & Test``
5. Go to ``Metrics Explore`` to query metrics
- Look for a compass icon on the left sidebar
- click ``Metrics`` for a dropdown list of all the available metrics
- Adjust time range by clicking the ``Last 6 hours``
button on the upper right side of the graph **(Optional)**
- Set up auto-refresh by selecting an option under the
dropdown next to the refresh button on the upper right side of the
graph **(Optional)**
- Click the refresh button and data should show up on the graph
6. Shutdown the services when finished
- Run ``docker-compose down`` in the examples directory
Configuring the Exporter
------------------------
The exporter can be configured through parameters passed to the
constructor. Here are all the options:
- ``endpoint``: URL where data will be sent **(Required)**
- ``basic_auth``: username and password for authentication
**(Optional)**
- ``headers``: additional headers for remote write request as
determined by the remote write backend's API **(Optional)**
- ``timeout``: timeout for requests to the remote write endpoint in
seconds **(Optional)**
- ``proxies``: dict mapping request proxy protocols to proxy URLs
**(Optional)**
- ``tls_config``: configuration for remote write TLS settings
**(Optional)**
Example with all the configuration options:

.. code:: python

   exporter = PrometheusRemoteWriteMetricsExporter(
       endpoint="http://localhost:9009/api/prom/push",
       timeout=30,
       basic_auth={
           "username": "user",
           "password": "pass123",
       },
       headers={
           "X-Scope-Org-ID": "5",
           "Authorization": "Bearer mytoken123",
       },
       proxies={
           "http": "http://10.10.1.10:3000",
           "https": "http://10.10.1.10:1080",
       },
       tls_config={
           "cert_file": "path/to/file",
           "key_file": "path/to/file",
           "ca_file": "path/to/file",
           "insecure_skip_verify": True,  # for development purposes only
       },
   )
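
The ``proxies`` dictionary uses the same protocol-to-URL mapping as the
underlying ``requests`` library, which the exporter uses for its HTTP
POST requests.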
Securing the Exporter
---------------------
Authentication
~~~~~~~~~~~~~~
The exporter provides two forms of authentication, shown below. Users
can add their own custom authentication by setting the appropriate
values in the ``headers`` dictionary.
1. Basic Authentication: sets an HTTP Authorization header containing a
   base64-encoded username/password pair. See `RFC 7617`_ for more
   information.

.. code:: python

   exporter = PrometheusRemoteWriteMetricsExporter(
       basic_auth={"username": "base64user", "password": "base64pass"}
   )
2. Bearer Token Authentication: this custom configuration can be
   achieved by passing an appropriate ``headers`` dictionary to the
   constructor. See `RFC 6750`_ for more information.

.. code:: python

   exporter = PrometheusRemoteWriteMetricsExporter(
       endpoint="endpoint_here",
       headers={"Authorization": "Bearer mytoken123"},
   )
TLS
~~~
Users can add TLS to the exporter's HTTP Client by providing certificate
and key files in the ``tls_config`` parameter.
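
A minimal sketch (the endpoint and file paths are hypothetical):

.. code:: python

   exporter = PrometheusRemoteWriteMetricsExporter(
       endpoint="https://my-backend.example/api/prom/push",  # hypothetical endpoint
       tls_config={
           "ca_file": "path/to/ca.pem",        # CA bundle used to verify the server
           "cert_file": "path/to/client.pem",  # client certificate (for mutual TLS)
           "key_file": "path/to/client.key",   # client private key (for mutual TLS)
       },
   )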
Supported Aggregators
---------------------
The behaviour of these aggregators is outlined in the `OpenTelemetry Specification <https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/metrics/api.md#aggregations>`_.
All aggregators are converted into the `timeseries`_ data format; however, the method by
which they are converted `differs <https://github.com/open-telemetry/opentelemetry-python-contrib/blob/master/exporter/opentelemetry-exporter-prometheus-remote-write/src/opentelemetry/exporter/prometheus_remote_write/__init__.py#L196>`_ from aggregator to aggregator. A
map of the conversion methods can be found `here <https://github.com/open-telemetry/opentelemetry-python-contrib/blob/master/exporter/opentelemetry-exporter-prometheus-remote-write/src/opentelemetry/exporter/prometheus_remote_write/__init__.py#L75>`_.
+------------------------------+-------------------------------------+------------------------------------------------------------------------------------------------------------+
| **OpenTelemetry Aggregator** | **Equivalent Prometheus Data Type** | **Behaviour** |
+------------------------------+-------------------------------------+------------------------------------------------------------------------------------------------------------+
| Sum | Counter | Metric value can only go up or be reset to 0 |
+------------------------------+-------------------------------------+------------------------------------------------------------------------------------------------------------+
| MinMaxSumCount | Gauge | Metric value can arbitrarily increment or decrement |
+------------------------------+-------------------------------------+------------------------------------------------------------------------------------------------------------+
| Histogram | Histogram | Unlike the Prometheus histogram, the OpenTelemetry Histogram does not provide a sum of all observed values |
+------------------------------+-------------------------------------+------------------------------------------------------------------------------------------------------------+
| LastValue | N/A | Metric only contains the most recently observed value |
+------------------------------+-------------------------------------+------------------------------------------------------------------------------------------------------------+
| ValueObserver | N/A | Similar to MinMaxSumCount but also contains LastValue |
+------------------------------+-------------------------------------+------------------------------------------------------------------------------------------------------------+
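
The SDK selects an aggregator for an instrument through a ``View``. The
sketch below is adapted from the sample app in ``examples/`` and assumes
``meter`` and the ``requests_counter`` instrument already exist; it pins
the counter to ``MinMaxSumCountAggregator``, which the table above maps
to a Prometheus gauge:

.. code:: python

   from opentelemetry.sdk.metrics.export.aggregate import MinMaxSumCountAggregator
   from opentelemetry.sdk.metrics.view import View, ViewConfig

   # Group exported timeseries by the "environment" label only
   min_max_view = View(
       requests_counter,
       MinMaxSumCountAggregator,
       label_keys=["environment"],
       view_config=ViewConfig.LABEL_KEYS,
   )
   meter.register_view(min_max_view)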
Error Handling
--------------
In general, errors are raised to the calling function. The exception is
failed export requests: any error status code is logged as a warning
instead.
This is because the exporter implements no retry logic; data that fails
to export is dropped.
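
As a rough illustration of this policy (a hedged sketch; the helper name
and signature are illustrative, not the exporter's actual internals):

.. code:: python

   import logging

   import requests

   logger = logging.getLogger(__name__)

   def send_write_request(endpoint: str, payload: bytes, timeout: int) -> None:
       # Illustrative only: a non-2xx response is logged as a warning and
       # the batch is dropped, since the exporter performs no retries.
       response = requests.post(endpoint, data=payload, timeout=timeout)
       if response.status_code // 100 != 2:
           logger.warning(
               "export failed with status %d; data dropped",
               response.status_code,
           )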
For example, consider a situation where a user increments a Counter
instrument 5 times and an export happens between each increment. If the
exports happen like so:
::

   SUCCESS FAIL FAIL SUCCESS SUCCESS
   1       2    3    4       5

Then the received data will be:

::

   1 4 5

Contributing
------------

If you would like to learn more about the exporter's structure and
design decisions please view the design document below.

Design Doc
~~~~~~~~~~

`Design Document`_

This document is stored elsewhere as it contains large images which
would significantly increase the size of this repo.
.. _Summary: #opentelemetry-python-sdk-prometheus-remote-write-exporter
.. _Table of Contents: #table-of-contents
.. _Installation: #installation
.. _Quickstart: #quickstart
.. _Examples: #examples
.. _Configuring the Exporter: #configuring-the-exporter
.. _Securing the Exporter: #securing-the-exporter
.. _Authentication: #authentication
.. _TLS: #tls
.. _Supported Aggregators: #supported-aggregators
.. _Error Handling: #error-handling
.. _Contributing: #contributing
.. _Design Doc: #design-doc
.. |Prometheus SDK pipelines| image:: https://user-images.githubusercontent.com/20804975/100285430-e320fd80-2f3e-11eb-8217-a562c559153c.png
.. _RFC 7617: https://tools.ietf.org/html/rfc7617
.. _RFC 6750: https://tools.ietf.org/html/rfc6750
.. _Design Document: https://github.com/open-o11y/docs/blob/master/python-prometheus-remote-write/design-doc.md
.. _OTLP: https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/protocol/otlp.md
.. _OpenTelemetry Python SDK: https://github.com/open-telemetry/opentelemetry-python
.. _Prometheus "pull" exporter: https://github.com/open-telemetry/opentelemetry-python/tree/master/exporter/opentelemetry-exporter-prometheus
.. _Prometheus Remote Write integrated backend: https://prometheus.io/docs/operating/integrations/
.. _types.proto: https://github.com/prometheus/prometheus/blob/master/prompb/types.proto
.. _remote.proto: https://github.com/prometheus/prometheus/blob/master/prompb/remote.proto
.. _push controller: https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py#L22
.. _timeseries: https://prometheus.io/docs/concepts/data_model/
.. _Docker Compose: https://docs.docker.com/compose/
.. _Cortex: https://cortexmetrics.io/
.. _Grafana: https://grafana.com/
.. _installed: https://docs.docker.com/compose/install/

View File

@@ -0,0 +1,8 @@
FROM python:3.7
WORKDIR /code
COPY . .
RUN apt-get update -y && apt-get install libsnappy-dev -y
RUN pip install -e .
RUN pip install -r ./examples/requirements.txt
CMD ["python", "./examples/sampleapp.py"]

View File

@@ -0,0 +1,42 @@
# Prometheus Remote Write Exporter Example
This example uses [Docker Compose](https://docs.docker.com/compose/) to set up:
1. A Python program that creates 5 instruments with 5 unique
aggregators and a randomized load generator
2. An instance of [Cortex](https://cortexmetrics.io/) to receive the metrics
data
3. An instance of [Grafana](https://grafana.com/) to visualize the exported
data
## Requirements
* Have Docker Compose [installed](https://docs.docker.com/compose/install/)
*Users do not need to install Python as the app will be run in a Docker container*
## Instructions
1. Run `docker-compose up -d` in the `examples/` directory
The `-d` flag causes all services to run in detached mode and frees up your
terminal session. This also means no logs are shown; you can follow a service's logs manually using `docker logs ${CONTAINER_ID} --follow`
2. Log into the Grafana instance at [http://localhost:3000](http://localhost:3000)
* login credentials are `username: admin` and `password: admin`
* There may be an additional prompt to set a new password; this is optional and can be skipped
3. Navigate to the `Data Sources` page
* Look for a gear icon on the left sidebar and select `Data Sources`
4. Add a new Prometheus Data Source
* Use `http://cortex:9009/api/prom` as the URL
* (OPTIONAL) Set the scrape interval to `2s` to make updates appear quickly
* click `Save & Test`
5. Go to `Metrics Explore` to query metrics
* Look for a compass icon on the left sidebar
* click `Metrics` for a dropdown list of all the available metrics
* (OPTIONAL) Adjust time range by clicking the `Last 6 hours` button on the upper right side of the graph
* (OPTIONAL) Set up auto-refresh by selecting an option under the dropdown next to the refresh button on the upper right side of the graph
* Click the refresh button and data should show up on the graph
6. Shutdown the services when finished
* Run `docker-compose down` in the examples directory

View File

@@ -0,0 +1,100 @@
# This Cortex Config is copied from the Cortex Project documentation
# Source: https://github.com/cortexproject/cortex/blob/master/docs/configuration/single-process-config.yaml
# Configuration for running Cortex in single-process mode.
# This configuration should not be used in production.
# It is only for getting started and development.
# Disable the requirement that every request to Cortex has a
# X-Scope-OrgID header. `fake` will be substituted in instead.
auth_enabled: false
server:
http_listen_port: 9009
# Configure the server to allow messages up to 100MB.
grpc_server_max_recv_msg_size: 104857600
grpc_server_max_send_msg_size: 104857600
grpc_server_max_concurrent_streams: 1000
distributor:
shard_by_all_labels: true
pool:
health_check_ingesters: true
ingester_client:
grpc_client_config:
# Configure the client to allow messages up to 100MB.
max_recv_msg_size: 104857600
max_send_msg_size: 104857600
use_gzip_compression: true
ingester:
# We want our ingesters to flush chunks at the same time to optimise
# deduplication opportunities.
spread_flushes: true
chunk_age_jitter: 0
walconfig:
wal_enabled: true
recover_from_wal: true
wal_dir: /tmp/cortex/wal
lifecycler:
# The address to advertise for this ingester. Will be autodiscovered by
# looking up address on eth0 or en0; can be specified if this fails.
# address: 127.0.0.1
# We want to start immediately and flush on shutdown.
join_after: 0
min_ready_duration: 0s
final_sleep: 0s
num_tokens: 512
tokens_file_path: /tmp/cortex/wal/tokens
# Use an in memory ring store, so we don't need to launch a Consul.
ring:
kvstore:
store: inmemory
replication_factor: 1
# Use local storage - BoltDB for the index, and the filesystem
# for the chunks.
schema:
configs:
- from: 2019-07-29
store: boltdb
object_store: filesystem
schema: v10
index:
prefix: index_
period: 1w
storage:
boltdb:
directory: /tmp/cortex/index
filesystem:
directory: /tmp/cortex/chunks
delete_store:
store: boltdb
purger:
object_store_type: filesystem
frontend_worker:
# Configure the frontend worker in the querier to match worker count
# to max_concurrent on the queriers.
match_max_concurrent: true
# Configure the ruler to scan the /tmp/cortex/rules directory for prometheus
# rules: https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules
ruler:
enable_api: true
enable_sharding: false
storage:
type: local
local:
directory: /tmp/cortex/rules

View File

@@ -0,0 +1,33 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
version: "3.8"
services:
cortex:
image: quay.io/cortexproject/cortex:v1.5.0
command:
- -config.file=./config/cortex-config.yml
volumes:
- ./cortex-config.yml:/config/cortex-config.yml:ro
ports:
- 9009:9009
grafana:
image: grafana/grafana:latest
ports:
- 3000:3000
sample_app:
build:
context: ../
dockerfile: ./examples/Dockerfile

View File

@@ -0,0 +1,7 @@
psutil
protobuf>=3.13.0
requests>=2.25.0
python-snappy
opentelemetry-api
opentelemetry-sdk
opentelemetry-proto

View File

@@ -0,0 +1,153 @@
import logging
import random
import sys
import time
from logging import INFO
import psutil
from opentelemetry import metrics
from opentelemetry.exporter.prometheus_remote_write import (
PrometheusRemoteWriteMetricsExporter,
)
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export.aggregate import (
HistogramAggregator,
LastValueAggregator,
MinMaxSumCountAggregator,
SumAggregator,
)
from opentelemetry.sdk.metrics.view import View, ViewConfig
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)
metrics.set_meter_provider(MeterProvider())
meter = metrics.get_meter(__name__)
exporter = PrometheusRemoteWriteMetricsExporter(
endpoint="http://cortex:9009/api/prom/push",
headers={"X-Scope-Org-ID": "5"},
)
metrics.get_meter_provider().start_pipeline(meter, exporter, 1)
testing_labels = {"environment": "testing"}
# Callback to gather cpu usage
def get_cpu_usage_callback(observer):
for (number, percent) in enumerate(psutil.cpu_percent(percpu=True)):
labels = {"cpu_number": str(number)}
observer.observe(percent, labels)
# Callback to gather RAM usage
def get_ram_usage_callback(observer):
ram_percent = psutil.virtual_memory().percent
observer.observe(ram_percent, {})
requests_counter = meter.create_counter(
name="requests",
description="number of requests",
unit="1",
value_type=int,
)
request_min_max = meter.create_counter(
name="requests_min_max",
description="min max sum count of requests",
unit="1",
value_type=int,
)
request_last_value = meter.create_counter(
name="requests_last_value",
description="last value number of requests",
unit="1",
value_type=int,
)
requests_size = meter.create_valuerecorder(
name="requests_size",
description="size of requests",
unit="1",
value_type=int,
)
requests_size_histogram = meter.create_valuerecorder(
name="requests_size_histogram",
description="histogram of request_size",
unit="1",
value_type=int,
)
requests_active = meter.create_updowncounter(
name="requests_active",
description="number of active requests",
unit="1",
value_type=int,
)
meter.register_sumobserver(
callback=get_ram_usage_callback,
name="ram_usage",
description="ram usage",
unit="1",
value_type=float,
)
meter.register_valueobserver(
callback=get_cpu_usage_callback,
name="cpu_percent",
description="per-cpu usage",
unit="1",
value_type=float,
)
counter_view1 = View(
requests_counter,
SumAggregator,
label_keys=["environment"],
view_config=ViewConfig.LABEL_KEYS,
)
counter_view2 = View(
request_min_max,
MinMaxSumCountAggregator,
label_keys=["os_type"],
view_config=ViewConfig.LABEL_KEYS,
)
counter_view3 = View(
request_last_value,
LastValueAggregator,
label_keys=["environment"],
view_config=ViewConfig.UNGROUPED,
)
size_view = View(
requests_size_histogram,
HistogramAggregator,
label_keys=["environment"],
aggregator_config={"bounds": [20, 40, 60, 80, 100]},
view_config=ViewConfig.UNGROUPED,
)
meter.register_view(counter_view1)
meter.register_view(counter_view2)
meter.register_view(counter_view3)
meter.register_view(size_view)
# Load generator
num = random.randint(0, 1000)
while True:
# counters
requests_counter.add(num % 131 + 200, testing_labels)
request_min_max.add(num % 181 + 200, testing_labels)
request_last_value.add(num % 101 + 200, testing_labels)
# updown counter
requests_active.add(num % 7231 + 200, testing_labels)
# value observers
requests_size.record(num % 6101 + 100, testing_labels)
requests_size_histogram.record(num % 113, testing_labels)
logger.log(level=INFO, msg="completed metrics collection cycle")
time.sleep(1)
num += 9791

View File

@@ -43,7 +43,7 @@ install_requires =
requests == 2.25.0
opentelemetry-api == 0.17.dev0
opentelemetry-sdk == 0.17.dev0
python-snappy >= 0.5.4
[options.packages.find]
where = src

View File

@@ -17,8 +17,8 @@ import re
from typing import Dict, Sequence
import requests
import snappy
from opentelemetry.exporter.prometheus_remote_write.gen.remote_pb2 import (
WriteRequest,
)

View File

@@ -296,6 +296,8 @@ deps =
httpretty
commands_pre =
sudo apt-get install libsnappy-dev
pip install python-snappy
python -m pip install {toxinidir}/opentelemetry-python-core/opentelemetry-api
python -m pip install {toxinidir}/opentelemetry-python-core/opentelemetry-instrumentation
python -m pip install {toxinidir}/opentelemetry-python-core/opentelemetry-sdk
@@ -356,6 +358,8 @@ changedir =
tests/opentelemetry-docker-tests/tests
commands_pre =
sudo apt-get install libsnappy-dev
pip install python-snappy
pip install -e {toxinidir}/opentelemetry-python-core/opentelemetry-api \
-e {toxinidir}/opentelemetry-python-core/opentelemetry-instrumentation \
-e {toxinidir}/opentelemetry-python-core/opentelemetry-sdk \
@@ -373,8 +377,6 @@ commands_pre =
-e {toxinidir}/instrumentation/opentelemetry-instrumentation-system-metrics \
-e {toxinidir}/opentelemetry-python-core/exporter/opentelemetry-exporter-opencensus \
-e {toxinidir}/exporter/opentelemetry-exporter-prometheus-remote-write
sudo apt-get install libsnappy-dev
pip install python-snappy
docker-compose up -d
python check_availability.py
commands =