mirror of
https://github.com/foss42/apidash.git
synced 2025-12-01 10:17:47 +08:00
Merge branch 'foss42:main' into add-feature-onboarding-screen
@@ -78,7 +78,84 @@ git push

## FlatHub (Flatpak)

TODO Instructions

Steps to generate a .flatpak package of API Dash:

1. Clone and build API Dash:

Follow the [How to run API Dash locally](setup_run.md) guide.

Stay in the root folder of the project directory.

2. Install the required packages (Debian/Ubuntu):

```bash
sudo apt install flatpak
flatpak install -y flathub org.flatpak.Builder
flatpak remote-add --if-not-exists --user flathub https://dl.flathub.org/repo/flathub.flatpakrepo
```

*If using another Linux distro, install Flatpak and follow the rest of the steps.*

3. Build the API Dash project:

```bash
flutter build linux --release
```

4. Create the Flatpak manifest file:

```bash
touch apidash-flatpak.yaml
```

In this file, add:

```yaml
app-id: io.github.foss42.apidash
runtime: org.freedesktop.Platform
runtime-version: "23.08"
sdk: org.freedesktop.Sdk

command: /app/bundle/apidash
finish-args:
  - --share=ipc
  - --socket=fallback-x11
  - --socket=wayland
  - --device=dri
  - --socket=pulseaudio
  - --share=network
  - --filesystem=home
modules:
  - name: apidash
    buildsystem: simple
    build-commands:
      - cp -a build/linux/x64/release/bundle /app/bundle
    sources:
      - type: dir
        path: .
```

5. Create the .flatpak file:

```bash
flatpak run org.flatpak.Builder --force-clean --sandbox --user --install --install-deps-from=flathub --ccache --mirror-screenshots-url=https://dl.flathub.org/media/ --repo=repo builddir apidash-flatpak.yaml

flatpak build-bundle repo apidash.flatpak io.github.foss42.apidash
```

The apidash.flatpak file should be in the project root folder.

To test it:

```bash
flatpak install --user apidash.flatpak

flatpak run io.github.foss42.apidash
```

To uninstall it:

```bash
flatpak uninstall io.github.foss42.apidash
```


## Homebrew

@@ -0,0 +1,54 @@

# AI API Eval Framework For Multimodal Generative AI

## Personal Information
- **Full Name:** Nideesh Bharath Kumar
- **University Name:** Rutgers University–New Brunswick
- **Program Enrolled In:** B.S. Computer Science, Artificial Intelligence Track
- **Year:** Junior Year (Third Year)
- **Expected Graduation Date:** May 2026

## About Me
I’m **Nideesh Bharath Kumar**, a junior (third year) at Rutgers University–New Brunswick pursuing a **B.S. in Computer Science on the Artificial Intelligence Track**. I have a strong foundation in full-stack development and AI engineering, with project and internship experience in technologies like **Dart/Flutter, LangChain, RAG, Vector Databases, AWS, Docker, Kubernetes, PostgreSQL, FastAPI, and OAuth** that aid in developing scalable, AI-powered systems. I have interned at **Manomay Tech, IDEA, and Newark Science and Sustainability**, building scalable systems and managing AI systems, and completed fellowships with **Google** and **CodePath** that developed my technical skills. I’ve also won hackathon awards, achieving **Overall Best Project in the CS Base Climate Hackathon for a Flutter-based project** and **Best Use of Terraform in the HackRU Hackathon for a Computer Vision Smart Shopping Cart**. I’m passionate about building distributed, scalable systems and AI technologies, and API Dash is an amazing tool that facilitates the process of building these solutions through easy visualization and testing of APIs; I believe my skills in **AI development** and experience with **Dart/Flutter** and **APIs** put me in a position to contribute effectively to this project.

## Project Details
**Project Title:** AI API Eval Framework For Multimodal Generative AI

**Description:**
This project will develop a **Dart-centered evaluation framework** designed to simplify the testing of generative AI models across **multiple modalities (text, image, code)**. It will integrate existing evaluation toolkits: **llm-harness** for text, **torch-fidelity** and **CLIP** for images, and **HumanEval/MBPP** with **CodeBLEU** for code. The framework will provide a unified config layer that supports standard and custom benchmark datasets and evaluation metrics, exposed through a **user-friendly interface in API Dash** where the user selects the model type, manages datasets (local or downloadable), and picks evaluation metrics (standard toolkit or custom script). On top of this, **real-time visual analytics** will visualize metric progress, and evaluations will run with **parallelized batch processing**.

**Related Issue:** [#618](https://github.com/foss42/apidash/issues/618)

**Key Features:**
1) Unified Evaluation Configuration:
- A YAML config file will serve as the abstraction layer, generated from the user's selection of model type, dataset, and evaluation metrics. The config routes the job to either llm-harness, torch-fidelity and CLIP, or HumanEval and MBPP with CodeBLEU. Additionally, custom evaluation scripts and datasets can be attached to this config file and interpreted by the system.
- This abstraction layer ensures that however the specifications of an eval job differ, everything is redirected to the correct resources while still providing a centralized layer for creating the job. Furthermore, these config files can be stored in history for re-running the same jobs later.

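As a sketch, a generated config of this kind might look like the following (the field names here are illustrative assumptions, not a finalized schema):

```yaml
# Hypothetical unified eval config (illustrative field names)
model_type: text              # text | image | code
toolkit: llm-harness          # chosen based on model_type
dataset:
  source: local               # local | download
  path: ./datasets/qa_benchmark.jsonl
metrics:
  - bleu
  - rouge
custom_eval_script: null      # optional path to a user-supplied script
batch:
  parallel: true
  workers: 4
```

A stored copy of this file is all that is needed to re-run the same job later, which is what makes the history feature above straightforward.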
2) Intuitive User Interface
- When starting an evaluation, users select the model type (text, image, or code) from a drop-down menu. The system provides a list of standard datasets and use cases. The user can select one of these datasets or attach a custom one; if the dataset is not already in the local workspace, it can be attached via the file explorer or downloaded from the web. Similarly, the user can select standard evaluation metrics from a list or attach a custom script.

3) Standard Evaluation Pipelines
- The standard evaluation pipelines cover text, image, and code generation.
- For text generation, llm-harness will be used with custom datasets and tasks to measure Precision, Recall, F1 Score, BLEU, ROUGE, and Perplexity. Custom datasets and evaluation scores can be integrated by interfacing with the llm-harness custom test config file.
- For image generation, torch-fidelity can calculate Fréchet Inception Distance and Inception Score by comparing against a reference image database. For text-to-image generation, CLIP scores can be used to measure the alignment between the prompt and the generated image. Custom datasets and evaluation scores can be integrated through a custom interface written in Dart.
- For code generation, benchmarks like HumanEval and MBPP can check functional correctness, and CodeBLEU can check code quality. Custom integration will work the same way as for image generation, with a Dart interface for functional test databases and evaluation metrics.

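To make the text metrics above concrete, here is a minimal token-overlap sketch of Precision, Recall, and F1 (a simplified illustration only; in the proposed pipeline these would come from llm-harness, not hand-rolled code):

```python
from collections import Counter

def token_prf(reference: str, candidate: str) -> dict:
    """Token-overlap precision/recall/F1 between a reference and a candidate string."""
    ref_counts = Counter(reference.split())
    cand_counts = Counter(candidate.split())
    # Tokens shared between the two strings, with counts clipped to the minimum.
    overlap = sum((ref_counts & cand_counts).values())
    precision = overlap / max(sum(cand_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

scores = token_prf("the cat sat on the mat", "the cat sat on a mat")
```

BLEU, ROUGE, and Perplexity follow the same pattern (a score per reference/candidate pair), which is why a single metrics interface can cover all of them.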
4) Batch Evaluations
- Parallel processing will be supported by running the tests asynchronously, with a progress bar in API Dash monitoring the number of processed rows.

5) Visualizations of Results
- Visualizations will be provided while the tests are running, giving live feedback on model performance, along with a summary set of visualizations after all evals have run.
- Bar Graphs: Displayed on a 0–100% accuracy range for a quick performance comparison across all tested models.
- Line Charts: Displayed to show performance trends over time, comparing model performance across batches as well as between models.
- Tables: Detailed summary statistics of scores for each model across different benchmarks and datasets.
- Box Plots: The distribution of scores per batch, highlighting outliers and variance, with side-by-side comparisons between models.

6) Offline and Online Support
- Offline: Offline models are supported by pointing to the script the model uses to run and to locally stored datasets.
- Online: Online models can be connected for eval through an API endpoint, and datasets can be downloaded given a link.

**Architecture:**
1) UI Interface: Built with Dart/Flutter
2) Configuration Manager: Built with Dart, uses YAML for the config file
3) Dataset Manager: Built with Dart, REST APIs for accessing endpoints
4) Evaluation Manager: Built with a Dart–Python layer to manage connections between the evaluators and API Dash
5) Batch Processing: Built with Dart async requests
6) Visualization and Results: Built with Dart/Flutter, using packages like fl_chart and syncfusion_flutter_charts
129
doc/proposals/2025/gsoc/idea_april_lin_api_explorer.md
Normal file
@@ -0,0 +1,129 @@

### Initial Idea Submission

**Full Name:** April Lin
**University Name:** University of Illinois Urbana-Champaign
**Program (Degree & Major/Minor):** Master's in Electrical and Computer Engineering
**Year:** First year
**Expected Graduation Date:** 2026

**Project Title:** API Explorer
**Relevant Issues:** [https://github.com/foss42/apidash/issues/619](https://github.com/foss42/apidash/issues/619)

**Idea Description:**

I have divided the design of the API Explorer into three major steps:

1. **Designing the UI**
2. **Designing the API template model**
3. **Using AI tools to automatically extract API information from a given website**

---

## 1. UI Design (User Journey)

In this step, I designed two interfaces for the API Explorer: the first is the main API Explorer interface, and the second is a detailed interface for each API template.

### API Explorer


1. **Accessing the API Explorer**
   - In the left-hand sidebar, users will find an “API Explorer” icon.
   - Clicking this icon reveals the main API template search interface on the right side of the screen.

2. **Browsing API Templates**
   - At the top of the main area, there is a search bar that supports fuzzy matching by API name.
   - Directly beneath the search bar are category filters (e.g., AI, Finance, Web3, Social Media).
   - Users can click “More” to view an expanded list of all available categories.
   - The page displays each template in a **card layout**, showing the API’s name, a short description, and (optionally) an image or icon.

### API Templates


1. **Selecting a Template**
   - When a user clicks on a card (for example, **OpenAI**), they navigate to a dedicated page for that API template.
   - This page lists all the available API endpoints or methods in a collapsible/expandable format (e.g., “API name 2,” “API name 3,” etc.).
   - Each listed endpoint describes what it does—users can select which methods they want to explore or import into their workspace.

2. **Exploring an API Method**
   - Within this detailed view, users see request details such as **HTTP method**, **path**, **headers**, **body**, and **sample response**.
   - If the user wants to try out an endpoint, they can import it into their API collections by clicking **Import**.
   - Each method will include all the fields parsed through the automated process. For the detailed API field design, please refer to **Step Two**.

---

## 2. Updated Table Design

Below is the model design for the API Explorer.

### **Base Table: `api_templates`**
- **Purpose:**
  Stores the common properties for all API templates, regardless of their type.

- **Key Fields:**
  - **id**: Primary key (integer or UUID) for unique identification.
  - **name**: The API name (e.g., “OpenAI”).
  - **api_type**: Enumerated string indicating the API type (e.g., `restful`, `graphql`, `soap`, `grpc`, `sse`, `websocket`).
  - **base_url**: The base URL or service address (applicable for HTTP-based APIs and used as host:port for gRPC).
  - **image**: A text or string field that references an image (URL or path) representing the API’s logo or icon.
  - **category**: A field (array or string) used for search and classification (e.g., "finance", "ai", "devtool").
  - **description**: Textual description of the API’s purpose and functionality.

### **RESTful & GraphQL Methods Table: `api_methods`**
- **Purpose:**
  Manages detailed configurations for individual API requests/methods, specifically tailored for RESTful and GraphQL APIs.

- **Key Fields:**
  - **id**: Primary key (UUID).
  - **template_id**: Foreign key linking back to `api_templates`.
  - **method_type**: The HTTP method (e.g., `GET`, `POST`, `PUT`, `DELETE`) or the operation type (`query`, `mutation` for GraphQL).
  - **method_name**: A human-readable name for the method (e.g., “Get User List,” “Create Order”).
  - **url_path**: The relative path appended to the `base_url` (for RESTful APIs).
  - **description**: Detailed explanation of the method’s functionality.
  - **headers**: A JSON field storing default header configurations (e.g., `Content-Type`, `Authorization`).
  - **authentication**: A JSON field for storing default authentication details (e.g., Bearer Token, Basic Auth).
  - **query_params**: A JSON field for any default query parameters (optional, typically for RESTful requests).
  - **body**: A JSON field containing the default request payload, including required fields and default values.
  - **sample_response**: A JSON field providing an example of the expected response for testing/validation.
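The two tables above can be sketched as in-memory records (an illustration only — the actual storage layer, relational vs. NoSQL, is an open question raised later in this proposal, and these class/field names simply mirror the design above):

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class ApiTemplate:
    """One row of api_templates: properties common to every API template."""
    name: str
    api_type: str                 # "restful", "graphql", "soap", "grpc", "sse", "websocket"
    base_url: str
    description: str = ""
    image: Optional[str] = None   # URL or path to a logo/icon
    category: list = field(default_factory=list)
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class ApiMethod:
    """One row of api_methods: a single RESTful/GraphQL request configuration."""
    template_id: str              # foreign key -> ApiTemplate.id
    method_type: str              # "GET", "POST", ... or "query"/"mutation"
    method_name: str
    url_path: str = ""            # appended to the template's base_url
    headers: dict = field(default_factory=dict)
    body: dict = field(default_factory=dict)

# Example: an OpenAI template with one hypothetical method attached.
openai_tpl = ApiTemplate(name="OpenAI", api_type="restful",
                         base_url="https://api.openai.com/v1", category=["ai"])
chat = ApiMethod(template_id=openai_tpl.id, method_type="POST",
                 method_name="Create Chat Completion", url_path="/chat/completions")
```

The foreign-key link (`template_id`) is what lets one template card fan out into many importable methods on the detail page.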
---

## 3. Automated Extraction (Parser Design)

I think there are two ways to design the automated pipeline: the first is to use AI tools for automated parsing, and the second is to employ a rule-based approach.

### **AI-Based Parser**
- For each parser type (OpenAPI, HTML, Markdown), design a dedicated prompt agent to parse the API methods.
- The prompt includes model fields (matching the data structures from [Step Two](#2-updated-table-design)) and the required API category, along with the API URL to be parsed.
- The AI model is instructed to output the parsed result in **JSON format**, aligned with the schema defined in `api_templates` and `api_methods`.

### **Non-AI (Rule-Based) Parser**
- **OpenAPI**: Use existing libraries (e.g., Swagger/OpenAPI parser libraries) to read and interpret JSON or YAML specs.
- **HTML**: Perform DOM-based parsing or use regex patterns to identify endpoints, parameter names, and descriptions.
- **Markdown**: Utilize Markdown parsers (e.g., remark, markdown-it) to convert the text into a syntax tree and extract relevant sections.
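As a sketch of the rule-based OpenAPI path above (illustrative only — a real parser would go through a spec-validating library, and the record keys here just mirror the `api_methods` design from Step Two):

```python
def extract_methods(spec: dict) -> list:
    """Flatten an already-loaded OpenAPI 'paths' object into api_methods-style records."""
    methods = []
    for path, operations in spec.get("paths", {}).items():
        for verb, op in operations.items():
            methods.append({
                "method_type": verb.upper(),          # e.g. "GET", "POST"
                "url_path": path,                     # relative to base_url
                "method_name": op.get("summary", ""),
                "description": op.get("description", ""),
            })
    return methods

# Minimal spec fragment with two operations on one path.
spec = {
    "paths": {
        "/users": {
            "get": {"summary": "Get User List"},
            "post": {"summary": "Create User"},
        }
    }
}
records = extract_methods(spec)
```

Because both the AI-based and rule-based parsers target the same record shape, the import step downstream does not need to know which parser produced a template.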

## Questions

1. **Database Selection**
   - Which type of database should be used for storing API templates and methods? Are there any specific constraints or preferences (e.g., relational vs. NoSQL, performance requirements, ease of integration) we should consider?

2. **Priority of Automated Parsing**
   - What is the preferred approach for automated parsing of OpenAPI/HTML files? Would an AI-based parsing solution be acceptable, or should we prioritize rule-based methods for reliability and simplicity?

3. **UI Interaction Flow**
   - Can I add a dedicated “API Explorer” menu to the left navigation bar?

53
doc/proposals/2025/gsoc/idea_harsh_panchal_AI_API_EVAL.md
Normal file
@@ -0,0 +1,53 @@

# Initial Idea Submission

**Full Name:** Harsh Panchal
**Email:** [harsh.panchal.0910@gmail.com](mailto:harsh.panchal.0910@gmail.com)
● [**GitHub**](https://github.com/GANGSTER0910)
● [**Website**](https://harshpanchal0910.netlify.app/)
● [**LinkedIn**](https://www.linkedin.com/in/harsh-panchal-902636255)

**University Name:** Ahmedabad University, Ahmedabad
**Program:** BTech in Computer Science and Engineering
**Year:** Junior, 3rd Year
**Expected Graduation Date:** May 2026
**Location:** Gujarat, India
**Timezone:** Kolkata, India, UTC+5:30

## **Project Title: AI API Eval Framework**

## **Relevant Issues: [#618](https://github.com/foss42/apidash/issues/618)**

## **Idea Description**
The goal of this project is to create an AI API Evaluation Framework that provides an end-to-end solution to compare AI models on different kinds of data, i.e., text, images, and videos. The overall strategy is to use benchmark models to compare AI outputs with benchmark predictions. Metrics like BLEU, ROUGE, FID, and SSIM can also be utilized by users to perform an objective performance evaluation of models.

For the best user experience in both offline and online modes, the platform will provide an adaptive assessment framework where users can specify their own assessment criteria, giving flexibility in dealing with various use cases. There will be a model version control feature that enables users to compare various versions of a model and monitor performance over time. In offline mode, evaluations will be supported using LoRA models, which reduce resource consumption and give outputs without compromising accuracy. The system will use explainability integration with SHAP and LIME to demonstrate how input features influence model decisions.

The visualization dashboard, built using Flutter, will include real-time charts, error analysis, and result summarization, making it easy to analyze model performance. Whether offline with cached models or online with API endpoints, the framework will offer end-to-end testing.

With its rank-based framework, model explainability, and evaluatable configuration, this effort will be a powerful resource for researchers, developers, and organizations to make data-driven decisions on AI model selection and deployment.

## Unique Features
1) Benchmark-Based Ranking:
   - Compare and rank model results against pre-trained benchmark models.
   - Determine how well outputs resemble perfect predictions.
2) Advanced Evaluation Metrics:
   - Facilitate metrics such as BLEU, ROUGE, FID, SSIM, and PSNR for extensive analysis.
   - Allow users to define custom metrics.
3) Model Version Control:
   - Compare various versions of AI models.
   - Monitor improvement in performance over time with side-by-side comparison.
4) Explainability Integration:
   - Employ SHAP and LIME to explain model decisions.
   - Provide clear explanations of why some outputs rank higher.
5) Custom Evaluation Criteria:
   - Allow users to input custom evaluation criteria for domain-specific tasks.
6) Offline Mode with LoRA Models:
   - Storage and execution efficiency with low-rank adaptation models.
   - Conduct offline evaluations with minimal hardware demands.
7) Real-Time Visualization:
   - Visualize evaluation results using interactive charts via Flutter.
   - Monitor performance trends and detect weak spots visually.

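As an illustration of one of the simpler image metrics listed above, PSNR reduces to a mean-squared-error computation (a toy grayscale sketch, not the framework's implementation — production code would use an image library):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized grayscale images (lists of rows)."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    if mse == 0:
        return float("inf")  # identical images have unbounded PSNR
    return 10 * math.log10(max_val ** 2 / mse)

reference = [[100, 110], [120, 130]]
noisy = [[101, 109], [121, 129]]  # each pixel off by 1, so MSE = 1
value = psnr(reference, noisy)
```

Higher is better: values in the 40+ dB range indicate the outputs are nearly indistinguishable from the reference.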
BIN
doc/proposals/2025/gsoc/images/API_Explorer_Main.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 64 KiB
BIN
doc/proposals/2025/gsoc/images/API_Explorer_Template.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 43 KiB
@@ -27,22 +27,9 @@ class EnvCellField extends StatelessWidget {
style: kCodeStyle.copyWith(
color: clrScheme.onSurface,
),
decoration: InputDecoration(
hintStyle: kCodeStyle.copyWith(
color: clrScheme.outlineVariant,
),
decoration: getTextFieldInputDecoration(
clrScheme,
hintText: hintText,
contentPadding: const EdgeInsets.only(bottom: 12),
focusedBorder: UnderlineInputBorder(
borderSide: BorderSide(
color: clrScheme.outlineVariant,
),
),
enabledBorder: UnderlineInputBorder(
borderSide: BorderSide(
color: clrScheme.surfaceContainerHighest,
),
),
),
onChanged: onChanged,
);
@@ -5,6 +5,7 @@ import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:data_table_2/data_table_2.dart';
import 'package:apidash/providers/providers.dart';
import 'package:apidash/screens/common_widgets/common_widgets.dart';
import 'package:apidash/widgets/widgets.dart';
import 'package:apidash/utils/utils.dart';
import 'package:apidash/consts.dart';
@@ -82,7 +83,7 @@ class _FormDataBodyState extends ConsumerState<FormDataWidget> {
key: ValueKey("$selectedId-$index-form-row-$seed"),
cells: <DataCell>[
DataCell(
CellField(
EnvCellField(
keyId: "$selectedId-$index-form-k-$seed",
initialValue: formRows[index].name,
hintText: kHintAddFieldName,
@@ -138,7 +139,7 @@ class _FormDataBodyState extends ConsumerState<FormDataWidget> {
},
initialValue: formRows[index].value,
)
: CellField(
: EnvCellField(
keyId: "$selectedId-$index-form-v-$seed",
initialValue: formRows[index].value,
hintText: kHintAddValue,
@@ -37,59 +37,60 @@ List<EnvironmentVariableModel> getEnvironmentSecrets(
}

String? substituteVariables(
String? input,
Map<String?, List<EnvironmentVariableModel>> envMap,
String? activeEnvironmentId) {
String? input,
Map<String, String> envVarMap,
) {
if (input == null) return null;

final Map<String, String> combinedMap = {};
final activeEnv = envMap[activeEnvironmentId] ?? [];
final globalEnv = envMap[kGlobalEnvironmentId] ?? [];

for (var variable in globalEnv) {
combinedMap[variable.key] = variable.value;
}
for (var variable in activeEnv) {
combinedMap[variable.key] = variable.value;
if (envVarMap.keys.isEmpty) {
return input;
}
final regex = RegExp("{{(${envVarMap.keys.join('|')})}}");

String result = input.replaceAllMapped(kEnvVarRegEx, (match) {
String result = input.replaceAllMapped(regex, (match) {
final key = match.group(1)?.trim() ?? '';
return combinedMap[key] ?? '';
return envVarMap[key] ?? '{{$key}}';
});

return result;
}

HttpRequestModel substituteHttpRequestModel(
HttpRequestModel httpRequestModel,
Map<String?, List<EnvironmentVariableModel>> envMap,
String? activeEnvironmentId) {
HttpRequestModel httpRequestModel,
Map<String?, List<EnvironmentVariableModel>> envMap,
String? activeEnvironmentId,
) {
final Map<String, String> combinedEnvVarMap = {};
final activeEnv = envMap[activeEnvironmentId] ?? [];
final globalEnv = envMap[kGlobalEnvironmentId] ?? [];

for (var variable in globalEnv) {
combinedEnvVarMap[variable.key] = variable.value;
}
for (var variable in activeEnv) {
combinedEnvVarMap[variable.key] = variable.value;
}

var newRequestModel = httpRequestModel.copyWith(
url: substituteVariables(
httpRequestModel.url,
envMap,
activeEnvironmentId,
)!,
url: substituteVariables(httpRequestModel.url, combinedEnvVarMap)!,
headers: httpRequestModel.headers?.map((header) {
return header.copyWith(
name:
substituteVariables(header.name, envMap, activeEnvironmentId) ?? "",
value: substituteVariables(header.value, envMap, activeEnvironmentId),
name: substituteVariables(header.name, combinedEnvVarMap) ?? "",
value: substituteVariables(header.value, combinedEnvVarMap),
);
}).toList(),
params: httpRequestModel.params?.map((param) {
return param.copyWith(
name:
substituteVariables(param.name, envMap, activeEnvironmentId) ?? "",
value: substituteVariables(param.value, envMap, activeEnvironmentId),
name: substituteVariables(param.name, combinedEnvVarMap) ?? "",
value: substituteVariables(param.value, combinedEnvVarMap),
);
}).toList(),
body: substituteVariables(
httpRequestModel.body,
envMap,
activeEnvironmentId,
),
formData: httpRequestModel.formData?.map((formData) {
return formData.copyWith(
name: substituteVariables(formData.name, combinedEnvVarMap) ?? "",
value: substituteVariables(formData.value, combinedEnvVarMap) ?? "",
);
}).toList(),
body: substituteVariables(httpRequestModel.body, combinedEnvVarMap),
);
return newRequestModel;
}
@@ -0,0 +1,38 @@
import 'package:flutter/material.dart';
import '../tokens/tokens.dart';

InputDecoration getTextFieldInputDecoration(
  ColorScheme clrScheme, {
  Color? fillColor,
  String? hintText,
  TextStyle? hintTextStyle,
  double? hintTextFontSize,
  Color? hintTextColor,
  EdgeInsetsGeometry? contentPadding,
  Color? focussedBorderColor,
  Color? enabledBorderColor,
  bool? isDense,
}) {
  return InputDecoration(
    filled: true,
    fillColor: fillColor ?? clrScheme.surfaceContainerLowest,
    hintStyle: hintTextStyle ??
        kCodeStyle.copyWith(
          fontSize: hintTextFontSize,
          color: hintTextColor ?? clrScheme.outlineVariant,
        ),
    hintText: hintText,
    contentPadding: contentPadding ?? kP10,
    focusedBorder: OutlineInputBorder(
      borderSide: BorderSide(
        color: focussedBorderColor ?? clrScheme.outline,
      ),
    ),
    enabledBorder: OutlineInputBorder(
      borderSide: BorderSide(
        color: enabledBorderColor ?? clrScheme.surfaceContainerHighest,
      ),
    ),
    isDense: isDense,
  );
}
@@ -1,5 +1,6 @@
import 'package:flutter/material.dart';
import '../tokens/tokens.dart';
import 'decoration_input_textfield.dart';

class ADOutlinedTextField extends StatelessWidget {
const ADOutlinedTextField({
@@ -65,26 +66,16 @@ class ADOutlinedTextField extends StatelessWidget {
fontSize: textFontSize,
color: textColor ?? clrScheme.onSurface,
),
decoration: InputDecoration(
filled: true,
fillColor: fillColor ?? clrScheme.surfaceContainerLowest,
hintStyle: hintTextStyle ??
kCodeStyle.copyWith(
fontSize: hintTextFontSize,
color: hintTextColor ?? clrScheme.outlineVariant,
),
decoration: getTextFieldInputDecoration(
clrScheme,
fillColor: fillColor,
hintText: hintText,
contentPadding: contentPadding ?? kP10,
focusedBorder: OutlineInputBorder(
borderSide: BorderSide(
color: focussedBorderColor ?? clrScheme.outline,
),
),
enabledBorder: OutlineInputBorder(
borderSide: BorderSide(
color: enabledBorderColor ?? clrScheme.surfaceContainerHighest,
),
),
hintTextStyle: hintTextStyle,
hintTextFontSize: hintTextFontSize,
hintTextColor: hintTextColor,
contentPadding: contentPadding,
focussedBorderColor: focussedBorderColor,
enabledBorderColor: enabledBorderColor,
isDense: isDense,
),
onChanged: onChanged,
@@ -2,6 +2,7 @@ export 'button_filled.dart';
export 'button_icon.dart';
export 'button_text.dart';
export 'checkbox.dart';
export 'decoration_input_textfield.dart';
export 'dropdown.dart';
export 'popup_menu.dart';
export 'snackbar.dart';
@@ -48,10 +48,13 @@ const globalVars = [
EnvironmentVariableModel(key: "num", value: "5670000"),
EnvironmentVariableModel(key: "token", value: "token"),
];
final globalVarsMap = {for (var item in globalVars) item.key: item.value};
const activeEnvVars = [
EnvironmentVariableModel(key: "url", value: "api.apidash.dev"),
EnvironmentVariableModel(key: "num", value: "8940000"),
];
final activeEnvVarsMap = {for (var item in activeEnvVars) item.key: item.value};
final combinedEnvVarsMap = mergeMaps(globalVarsMap, activeEnvVarsMap);

void main() {
group("Testing getEnvironmentTitle function", () {
@@ -125,66 +128,45 @@ void main() {
group("Testing substituteVariables function", () {
test("Testing substituteVariables with null", () {
String? input;
Map<String?, List<EnvironmentVariableModel>> envMap = {};
String? activeEnvironmentId;
expect(substituteVariables(input, envMap, activeEnvironmentId), null);
Map<String, String> envMap = {};
expect(substituteVariables(input, envMap), null);
});

test("Testing substituteVariables with empty input", () {
String input = "";
Map<String?, List<EnvironmentVariableModel>> envMap = {};
String? activeEnvironmentId;
expect(substituteVariables(input, envMap, activeEnvironmentId), "");
Map<String, String> envMap = {};
expect(substituteVariables(input, envMap), "");
});

test("Testing substituteVariables with empty envMap", () {
String input = "{{url}}/humanize/social?num={{num}}";
Map<String?, List<EnvironmentVariableModel>> envMap = {};
String? activeEnvironmentId;
String expected = "/humanize/social?num=";
expect(substituteVariables(input, envMap, activeEnvironmentId), expected);
Map<String, String> envMap = {};
String expected = "{{url}}/humanize/social?num={{num}}";
expect(substituteVariables(input, envMap), expected);
});

test("Testing substituteVariables with empty activeEnvironmentId", () {
String input = "{{url}}/humanize/social?num={{num}}";
Map<String?, List<EnvironmentVariableModel>> envMap = {
kGlobalEnvironmentId: globalVars,
};
String expected = "api.foss42.com/humanize/social?num=5670000";
expect(substituteVariables(input, envMap, null), expected);
expect(substituteVariables(input, globalVarsMap), expected);
});

test("Testing substituteVariables with non-empty activeEnvironmentId", () {
String input = "{{url}}/humanize/social?num={{num}}";
Map<String?, List<EnvironmentVariableModel>> envMap = {
kGlobalEnvironmentId: globalVars,
"activeEnvId": activeEnvVars,
};
String? activeEnvId = "activeEnvId";
String expected = "api.apidash.dev/humanize/social?num=8940000";
expect(substituteVariables(input, envMap, activeEnvId), expected);
expect(substituteVariables(input, combinedEnvVarsMap), expected);
});

test("Testing substituteVariables with incorrect paranthesis", () {
String input = "{{url}}}/humanize/social?num={{num}}";
Map<String?, List<EnvironmentVariableModel>> envMap = {
kGlobalEnvironmentId: globalVars,
"activeEnvId": activeEnvVars,
};
String? activeEnvId = "activeEnvId";
String expected = "api.apidash.dev}/humanize/social?num=8940000";
expect(substituteVariables(input, envMap, activeEnvId), expected);
expect(substituteVariables(input, combinedEnvVarsMap), expected);
});

test("Testing substituteVariables function with unavailable variables", () {
String input = "{{url1}}/humanize/social?num={{num}}";
Map<String?, List<EnvironmentVariableModel>> envMap = {
kGlobalEnvironmentId: globalVars,
"activeEnvId": activeEnvVars,
};
String? activeEnvironmentId = "activeEnvId";
String expected = "/humanize/social?num=8940000";
expect(substituteVariables(input, envMap, activeEnvironmentId), expected);
String expected = "{{url1}}/humanize/social?num=8940000";
expect(substituteVariables(input, combinedEnvVarsMap), expected);
});
});
@@ -251,9 +233,9 @@ void main() {
};
String? activeEnvironmentId = "activeEnvId";
const expected = HttpRequestModel(
url: "/humanize/social",
url: "{{url1}}/humanize/social",
headers: [
NameValueModel(name: "Authorization", value: "Bearer "),
NameValueModel(name: "Authorization", value: "Bearer {{token1}}"),
],
params: [
NameValueModel(name: "num", value: "8940000"),