* Add scopes to queries in DataSourceWithBackend
* Remove Prometheus-specific solution
* Re-add Prometheus support
* Move scopes reordering to the Loki data source
* Add tests and logQLScope feature flag
* Move featureToggles setup into beforeEach/afterEach
* Remove irrelevant file change
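The commits above describe attaching scopes to queries issued through `DataSourceWithBackend`, gated by the `logQLScope` feature flag. A minimal sketch of the idea follows — it is not Loki's actual implementation; the local `LokiQuery`/`LokiOptions` types, the `scopes` field on the query, and the assumption that `DataQueryRequest` exposes the request's scopes are all illustrative.

```typescript
// A minimal sketch: if the logQLScope toggle is on and the request carries
// scopes, fold them into each query before handing the request to the base
// DataSourceWithBackend implementation.
import { DataQueryRequest, DataQueryResponse } from '@grafana/data';
import { config, DataSourceWithBackend } from '@grafana/runtime';
import { Observable } from 'rxjs';

import { LokiOptions, LokiQuery } from './types'; // assumed local types

export class LokiDatasourceSketch extends DataSourceWithBackend<LokiQuery, LokiOptions> {
  query(request: DataQueryRequest<LokiQuery>): Observable<DataQueryResponse> {
    if (config.featureToggles.logQLScope && request.scopes?.length) {
      request = {
        ...request,
        // Copy the request-level scopes onto every query (illustrative field).
        targets: request.targets.map((target) => ({ ...target, scopes: request.scopes })),
      };
    }
    return super.query(request);
  }
}
```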
* feat: Implement optional URL path sanitization in BackendSrv methods
* add comment
* revert
* remove namespace import from backendsrv
* change method to validatePath, remove query params and fragments
* Move the validatePath call into fetch and make it throw an error instead
* update pluginSettings tests
* prettier
* Update public/app/features/plugins/pluginSettings.ts
Co-authored-by: Hugo Häggmark <hugo.haggmark@gmail.com>
* change name to validatePath
* fix other tests
* rename property in backend_srv tests
* rename to validatePath in backend_srv, add extra tests
* Move path validation into parseUrlFromOptions
* fix
* Add additional check
* Add test
---------
Co-authored-by: joshhunt <josh.hunt@grafana.com>
Co-authored-by: Hugo Häggmark <hugo.haggmark@gmail.com>
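The `validatePath` and `parseUrlFromOptions` names come from the commits above; what the sanitization actually rejects is not spelled out, so the sketch below assumes it guards against path traversal. It mirrors the described behavior — query params and fragments are ignored, and an invalid path throws so `BackendSrv.fetch()` fails fast — but it is not the real backend_srv code.

```typescript
// A minimal sketch, assuming the sanitization targets path traversal.
export function validatePath(url: string): string {
  // Ignore the query string and fragment; only the path matters for the check.
  const path = url.split(/[?#]/)[0];

  // Decode so encoded traversal sequences (e.g. %2e%2e) are caught as well.
  const decoded = decodeURIComponent(path);

  if (decoded.split('/').some((segment) => segment === '..')) {
    throw new Error(`Invalid request path: ${url}`);
  }

  return url;
}
```

Per the commits, calling this from `parseUrlFromOptions` means every request that goes through `fetch()` is validated in one place rather than in each BackendSrv method.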
Adds a new "Allow as recording rules target" toggle to Prometheus datasource configuration that controls whether the datasource can be selected as a target for writing recording rules.
---------
Co-authored-by: ismail simsek <ismailsimsek09@gmail.com>
Co-authored-by: Konrad Lalik <konradlalik@gmail.com>
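A sketch of how such a toggle could be consumed when building the list of recording-rule write targets. The `allowAsRecordingRulesTarget` jsonData field name is hypothetical; the real option lives in the Prometheus datasource configuration described above.

```typescript
// Only Prometheus datasources that allow it are offered as write targets.
import { getDataSourceSrv } from '@grafana/runtime';

export function getRecordingRuleTargetCandidates() {
  return getDataSourceSrv()
    .getList()
    .filter((ds) => {
      if (ds.type !== 'prometheus') {
        return false;
      }
      const jsonData = ds.jsonData as { allowAsRecordingRulesTarget?: boolean };
      // Default to allowed when the option has never been set.
      return jsonData.allowAsRecordingRulesTarget !== false;
    });
}
```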
* chore(packages): remove rollup dts plugin
* build(packages): add rollup copy plugin settings to copy ts declarations to esm and cjs builds
* build(packages): remove copy settings as the result doesn't pass attw cli checks
* build(packages): use single types output in dist/types directory
* ci(packages): update prepare and validate scripts for single type builds
* fix(grafana-schema): copy raw types to dist/esm directory for grafana/scenes support
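A simplified sketch of the setup these build commits describe — not the packages' real rollup config. It assumes declarations are emitted once by `tsc --emitDeclarationOnly --outDir dist/types`, with the raw `.ts` sources copied into `dist/esm` only for `@grafana/schema`; the plugin choices and paths are illustrative.

```typescript
// rollup.config.ts — illustrative only.
import copy from 'rollup-plugin-copy';
import esbuild from 'rollup-plugin-esbuild';
import type { RollupOptions } from 'rollup';

const config: RollupOptions = {
  input: 'src/index.ts',
  output: [
    { dir: 'dist/esm', format: 'es', preserveModules: true },
    { dir: 'dist/cjs', format: 'cjs' },
  ],
  plugins: [
    esbuild(),
    // Declarations live in dist/types (single types output); only the raw
    // sources are copied here so @grafana/scenes can resolve them from esm.
    copy({ targets: [{ src: 'src/**/*.ts', dest: 'dist/esm' }] }),
  ],
};

export default config;
```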
* Alerting: Use default_datasource_uid as the default target for recording rules
* Add tests
---------
Co-authored-by: Konrad Lalik <konradlalik@gmail.com>
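A sketch under assumptions: the config field below is a hypothetical frontend mirror of the `default_datasource_uid` setting named above, used to preselect the write target when a new recording rule is created.

```typescript
import { config } from '@grafana/runtime';

// Prefer the configured default target; otherwise leave the picker empty.
export function getDefaultRecordingRulesTargetUid(): string | undefined {
  const settings = config.unifiedAlerting as { defaultRecordingRulesTargetDatasourceUID?: string };
  return settings?.defaultRecordingRulesTargetDatasourceUID || undefined;
}
```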
* rename /mtfe route to /femt to match project name
* set correct navTree JSON property name
* call GetWebAssets in the request handler to prevent stale assets during development
* Call /bootdata and render grafana
* set nonce on script
* write csp header in index handler
* write report-only csp as well
* debug stuff
* more debug logging
* move importing app into a separate, async-loaded module
* Clean up comments
* make /femt redirect to / in the frontend
* remove console.log
* remove stale commented code
* call __grafana_load_failed if bootstrap fails
* comment for __grafana_boot_data_promise
* remove console.log
* remove blank newline
* codeowners
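A simplified sketch of the bootstrap flow these commits describe, not the actual femt entrypoint. The window property names (`__grafana_boot_data_promise`, `__grafana_load_failed`) and the `/bootdata` request come from the commits; the `./app` module and `initializeApp` are illustrative.

```typescript
// bootstrap.ts — illustrative only.
declare global {
  interface Window {
    __grafana_boot_data_promise?: Promise<unknown>;
    __grafana_load_failed?: () => void;
  }
}

async function bootstrap() {
  // Kick off the /bootdata request immediately and share the promise so the
  // app module can await it without issuing a second request.
  window.__grafana_boot_data_promise = fetch('/bootdata').then((res) => res.json());

  // The app itself lives in a separate, async-loaded module so this bootstrap
  // chunk stays small.
  const { initializeApp } = await import('./app');
  const bootData = await window.__grafana_boot_data_promise;
  initializeApp(bootData);
}

// Let the shell know bootstrap failed so it can show an error state.
bootstrap().catch(() => window.__grafana_load_failed?.());

export {};
```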
* feat(Extensions): expose an observable API for added links and components
* refactor: make `getObservablePluginExtensions()` more RxJS style
* refactor(getPluginExtensions): remove unnecessary types
* fix(getPluginExtensions): remove unused imports
* Apply suggestions from code review
Co-authored-by: Hugo Häggmark <hugo.haggmark@gmail.com>
* refactor(getPluginExtensions): stop using `shareReplay()`
* fix(grafana-runtime/extensions): typo in error messages
---------
Co-authored-by: Hugo Häggmark <hugo.haggmark@gmail.com>
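A sketch of the observable shape only — the registry type and its `observe()` method are illustrative, not the runtime's internals. The point is that consumers subscribe once and receive a new snapshot of added links/components whenever plugins register more, instead of a one-off array.

```typescript
import { map, Observable } from 'rxjs';
import type { PluginExtensionLink } from '@grafana/data';

// Illustrative registry abstraction.
interface AddedLinksRegistry {
  observe(extensionPointId: string): Observable<PluginExtensionLink[]>;
}

export function getObservablePluginExtensions(
  registry: AddedLinksRegistry,
  extensionPointId: string
): Observable<PluginExtensionLink[]> {
  // Each registry change emits a fresh snapshot for this extension point.
  return registry.observe(extensionPointId).pipe(map((links) => links.filter((link) => link != null)));
}
```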
* Add `recordingRulesEnabled` to grafanaBootData
* Check for recording rules being enabled, as well as feature toggle
* Remove unnecessary config line
* Move recording rules check to featureToggles file
* Update NoRulesCTA.tsx
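A small sketch of the combined check described above. The `recordingRulesEnabled` field name comes from the commits, but where it hangs off `config` and the feature toggle name used here are assumptions.

```typescript
import { config } from '@grafana/runtime';

export function grafanaRecordingRulesAvailable(): boolean {
  // Both the feature toggle and the backend setting must be enabled.
  const toggleEnabled = Boolean(config.featureToggles.grafanaManagedRecordingRules);
  const backendEnabled = Boolean(
    (config.bootData.settings as { recordingRulesEnabled?: boolean }).recordingRulesEnabled
  );
  return toggleEnabled && backendEnabled;
}
```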
* Live: allow publishing over Centrifuge subscription
Currently when publishing over a Grafana Live channel,
the data is sent over the HTTP API. This works fine when
there is only a single Grafana instance running, but
when there are multiple instances, the data will only hit
one instance, which is often not desired: sometimes you need
to guarantee that the data appears on the same instance that
the frontend is connected to.
An example of this is in the Grafana LLM app when running the
MCP server. The MCP protocol is stateful; users subscribe to
a channel to get a long-lived stream of server-sent events,
then send subsequent requests to the server to get further
results. If there are multiple Grafana instances running then
the requests are likely to land on an instance other than the
one that the user is connected to.
This commit adds a new option to the `GrafanaLiveSrv` interface
that allows the user to publish data over the Centrifuge
subscription instead of the HTTP API. This is not the default and
should rarely be used, but is required to fulfil certain use cases.
* Address nits from code review
Co-authored-by: kay delaney <45561153+kaydelaney@users.noreply.github.com>
---------
Co-authored-by: kay delaney <45561153+kaydelaney@users.noreply.github.com>
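A sketch of the kind of option this adds to the `GrafanaLiveSrv` interface; the option and field names below are illustrative, not the merged API. It shows why an MCP-style client would opt in: its publishes must land on the same instance that holds its long-lived subscription.

```typescript
import { LiveChannelAddress } from '@grafana/data';

interface LivePublishOptions {
  // When true, publish over the existing Centrifuge subscription so the data
  // is guaranteed to reach the instance this frontend is connected to,
  // instead of going through the HTTP API (the default).
  useSubscription?: boolean;
}

// Illustrative slice of the service interface.
interface GrafanaLiveSrvSketch {
  publish(address: LiveChannelAddress, data: unknown, options?: LivePublishOptions): Promise<unknown>;
}

// Usage: keep the MCP stream and its follow-up requests on the same instance.
export async function sendMcpRequest(live: GrafanaLiveSrvSketch, address: LiveChannelAddress, request: unknown) {
  return live.publish(address, request, { useSubscription: true });
}
```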
Developers using Grafana Live need to know whether a message is too
big to be sent over the Grafana Live websocket. Since this limit
is configurable, it is useful to expose it to the frontend.
This commit adds a new field to the frontend settings,
`liveMessageSizeLimit`, which the frontend can use to access the
limit configured in the backend.
Relates to #99770.
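A sketch of how the frontend could use the new setting. The `liveMessageSizeLimit` field name is taken from the description above, but treat its exact location on the runtime config as an assumption.

```typescript
import { config } from '@grafana/runtime';

// Returns false when a payload would exceed the backend's configured limit.
export function isWithinLiveMessageSizeLimit(payload: unknown): boolean {
  const limit = (config as { liveMessageSizeLimit?: number }).liveMessageSizeLimit;
  if (!limit) {
    return true; // no limit exposed — let the backend decide
  }
  const sizeInBytes = new TextEncoder().encode(JSON.stringify(payload)).length;
  return sizeInBytes <= limit;
}
```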
* feat: component extension point for adaptive telemetry query actions
- only render the first non-null added component, and provide a utility in the added-component infrastructure to support this
---------
Co-authored-by: Levente Balogh <balogh.levente.hu@gmail.com>
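A rough sketch of an extension point that renders at most one added component. The "first non-null" selection shown here is one plausible reading of the change above; the extension point id and the pass-through props are hypothetical, and only `usePluginComponents` is an existing runtime hook.

```tsx
import React from 'react';
import { usePluginComponents } from '@grafana/runtime';

interface Props {
  query: unknown; // illustrative prop forwarded to the added component
}

export function QueryActionExtensionPoint(props: Props) {
  const { components, isLoading } = usePluginComponents<Props>({
    extensionPointId: 'grafana/query-actions/v1', // hypothetical id
  });

  if (isLoading || components.length === 0) {
    return null;
  }

  // Only the first registered, non-null component is rendered; additional
  // registrations are ignored so the query toolbar shows a single action.
  const First = components.find((component) => component != null);
  return First ? <First {...props} /> : null;
}
```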