* Elasticsearch: Use displayName field for naming
* Change solution to use frame.Name to be backward compatible
* Fix snapshot tests
* Use Time and Value for time and value fields
* Use variables from grafana-plugin-sdk-go for name
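A minimal sketch of the naming approach described in the commits above, assuming the `data` package from grafana-plugin-sdk-go; `newSeriesFrame` is illustrative, not the actual response parser code:
```go
package example

import (
	"time"

	"github.com/grafana/grafana-plugin-sdk-go/data"
)

// newSeriesFrame is an illustrative helper: field names come from the SDK
// constants ("Time" / "Value"), while the series name stays on frame.Name so
// existing consumers of the frame name keep working.
func newSeriesFrame(seriesName string, times []time.Time, values []*float64) *data.Frame {
	return data.NewFrame(seriesName,
		data.NewField(data.TimeSeriesTimeFieldName, nil, times),
		data.NewField(data.TimeSeriesValueFieldName, nil, values),
	)
}
```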
* Elasticsearch: Add processing of logs query to backend
* Add and fix tests
* Add snapshot tests
* Fix test in ES client
* Small updates, remove redundant logic
* Refactor setPreferredVisType to improve readability
* WIP
* WIP
* Refactor
* Add tests
* Cleanup
* Fix whitespace
* Fix test and lint
* In snapshot tests, update counter to be a number
* Add boolean value for snapshot testing
* Update pkg/tsdb/elasticsearch/response_parser.go
Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
* Update pkg/tsdb/elasticsearch/response_parser.go
Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
* Use generics to reuse logic when creating fields (see the sketch below)
* Use nullable fields
* Fix lint
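A minimal sketch of the idea behind these commits, using a generic helper and nullable (pointer) slices; `newNullableField` and its type constraint are illustrative, not the actual implementation:
```go
package example

import "github.com/grafana/grafana-plugin-sdk-go/data"

// newNullableField builds a data frame field from pointer values, so a missing
// sample can be represented as nil instead of a zero value. One generic helper
// replaces several near-identical per-type functions.
func newNullableField[T float64 | string | bool](name string, values []*T) *data.Field {
	return data.NewField(name, nil, values)
}
```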
* WIP (#63272)
wip
* Fix snapshot test after we changed field types to nullable
---------
Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
Use latest github.com/grafana/grafana-plugin-sdk-go which includes changes to the TypeVersion property (always present in JSON).
Also included are sqlutil changes: SQL util - allow using the database scan type for converters
* Update oapi library and thema
* Use fork commit to fix elasticsearch and cloudwatch generators
* Update thema
* Fixes
* Update thema with last fixes
* Sync
* Fix test
* Update thema and schemas
* Update thema
* Elasticsearch: Implement schema for query
* Comment out types I am not sure how to do
* Manually fix typing for PipelineMetricAggregationWithMultipleBucketPaths and BasePipelineMetricAggregation
* Import types to types.ts to have single source of truth
* Cleanup, reorder
* Remove unnecessary Schema.
* Fix test
* Refactor
Look for 'caused_by.reason' in ES error response
When the ES response does not contain `reason`, or `root_cause[0].reason`
is empty, the user has no information about what is going wrong.
An example of the error message after this change:
```
Failed to evaluate queries and expressions: failed to execute query A: Trying to create too many buckets. Must be less than or equal to: [65536] but this number of buckets was exceeded. This limit can be set by changing the [search.max_buckets] cluster level setting.
```
Related to https://github.com/grafana/grafana/issues/61246
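A minimal sketch of the fallback described above; the helper name and map-based decoding are illustrative, only the Elasticsearch field names (`reason`, `root_cause`, `caused_by.reason`) come from the error response format:
```go
// errorReason extracts a human-readable message from an Elasticsearch "error"
// object, falling back to caused_by.reason when the usual fields are empty.
func errorReason(errJSON map[string]interface{}) string {
	// Existing behaviour: prefer root_cause[0].reason when present.
	if rootCause, ok := errJSON["root_cause"].([]interface{}); ok && len(rootCause) > 0 {
		if first, ok := rootCause[0].(map[string]interface{}); ok {
			if reason, _ := first["reason"].(string); reason != "" {
				return reason
			}
		}
	}
	// New fallback: caused_by.reason often carries the actual cause, e.g. the
	// "too many buckets" message shown above.
	if causedBy, ok := errJSON["caused_by"].(map[string]interface{}); ok {
		if reason, _ := causedBy["reason"].(string); reason != "" {
			return reason
		}
	}
	reason, _ := errJSON["reason"].(string)
	return reason
}
```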
* Refactor parse query to functions
* Move parsing to new file
* Create empty result variable and use it when returning early
* Fix linting
* Revert "Create empty result variable and use it when returning early"
This reverts commit 36a503f66e52f8213c673972774329a963a78100.
* Elasticsearch: Fix ordering in raw_document and add logic for raw_data
* Add comments
* Fix raw data request to use correct timefield
* Fix linting
* Add raw data as metric type
* Fix linting
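A minimal sketch of the ordering part, written with plain maps rather than the backend's request builder; `buildSort` is illustrative and the `unmapped_type` hint is optional:
```go
// buildSort returns the sort clause for raw_document / raw_data queries:
// newest-first on the configured time field, with _doc as a tie-breaker.
func buildSort(timeField string) []map[string]interface{} {
	return []map[string]interface{}{
		{timeField: map[string]string{"order": "desc", "unmapped_type": "boolean"}},
		{"_doc": map[string]string{"order": "desc"}},
	}
}
```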
* Elasticsearch: Add defaults for log query
* Add highlight
* Fix lint
* Add snapshot test
* Implement correct query for logs
* Update
* Adjust naming and comments
* Fix lint
* Remove ifs
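A minimal sketch of the highlight section mentioned above, again as plain maps; the tag strings and fragment size are illustrative defaults rather than confirmed values:
```go
// buildHighlight asks Elasticsearch to highlight matches in every field so the
// matching terms can be marked up in the returned log lines.
func buildHighlight() map[string]interface{} {
	return map[string]interface{}{
		"fields":        map[string]interface{}{"*": map[string]interface{}{}},
		"pre_tags":      []string{"@HIGHLIGHT@"},
		"post_tags":     []string{"@/HIGHLIGHT@"},
		"fragment_size": 2147483647,
	}
}
```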
* Hopefully fix lint
* Elasticsearch: Fix removing of empty settings from query in backend implementation
* Update
* Update
* Update pkg/tsdb/elasticsearch/time_series_query.go
Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
* Chore: Update grafana-plugin-sdk-go to v0.142.0
* Update tests and golden files for 207 status code
* Chore: Move update flag definition at the top in response_parser_test.go
* retrigger
Co-authored-by: Will Browne <will.browne@grafana.com>
* make sql engine pick log context for logs
* update tempo to get log context
* update opentsdb to use log context
* update es client to use log context
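A minimal sketch of what "use log context" means here, assuming Grafana's infra/log package and its FromContext helper; the logger name and message are placeholders:
```go
package example

import (
	"context"

	"github.com/grafana/grafana/pkg/infra/log"
)

var logger = log.New("tsdb.elasticsearch")

func execute(ctx context.Context) {
	// FromContext picks up contextual fields (such as the traceID) carried in
	// ctx, so log lines from the client can be correlated with the request.
	ctxLogger := logger.FromContext(ctx)
	ctxLogger.Debug("Sending request to Elasticsearch")
}
```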
* Elasticsearch: Fix calculation of trimEdges
When trimEdges is set to a value greater than 1, we need to drop both the
first and the last sample of the data from the response.
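A minimal sketch of the intended behaviour; `trimDatapoints` is illustrative, not the actual response parser code:
```go
// trimDatapoints drops trimEdges samples from both the start and the end of a
// series, and leaves the data untouched when there are not enough points.
func trimDatapoints[T any](points []T, trimEdges int) []T {
	if trimEdges <= 0 || len(points) <= trimEdges*2 {
		return points
	}
	return points[trimEdges : len(points)-trimEdges]
}
```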
* Elasticsearch: Fix reading trimEdges from the query settings
Currently the trimEdges property in the panel JSON is stored as a string
and not directly as a number.
This caused reading the value to fail in the Go backend, because the
simplejson.Int() method doesn't properly handle this case.
The decoding failure went unnoticed because of the early return, causing
the trimEdges configuration to be ignored.
* Refactor castToInt to also return an error
Add a new test case that sets the `trimEdges` property as a quoted
number.
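A minimal sketch of the castToInt change, assuming the simplejson wrapper used by the Grafana backend; only the function name comes from the commit above:
```go
package example

import (
	"strconv"

	"github.com/grafana/grafana/pkg/components/simplejson"
)

// castToInt accepts the value both as a JSON number and as a quoted number
// such as "3", and now reports failures instead of silently returning early.
func castToInt(j *simplejson.Json) (int, error) {
	// Fast path: the value is already a number.
	if i, err := j.Int(); err == nil {
		return i, nil
	}
	// Fallback: the value is stored as a string.
	s, err := j.String()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(s)
}
```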