diff --git a/README.md b/README.md
index 00a72df8..b47acb31 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,19 @@
 # API Dash ⚡️
 
-[![Discord Server Invite](https://img.shields.io/badge/DISCORD-JOIN%20SERVER-5663F7?style=for-the-badge&logo=discord&logoColor=white)](https://bit.ly/heyfoss)
+[![Discord Server Invite](https://img.shields.io/badge/DISCORD-JOIN%20SERVER-5663F7?style=for-the-badge&logo=discord&logoColor=white)](https://discord.com/invite/bBeSdtJ6Ue)
+
+### 🚨🚨 API Dash is participating in GSoC 2025! Check out the details below:
+
+GSoC
+
+| | Link |
+|--|--|
+| Learn about GSoC | [Link](https://summerofcode.withgoogle.com) |
+| API Dash GSoC Page | [Link](https://summerofcode.withgoogle.com/programs/2025/organizations/api-dash) |
+| Project Ideas List | [Link](https://github.com/foss42/apidash/discussions/565) |
+| Application Guide | [Link](https://github.com/foss42/apidash/discussions/564) |
+| Discord Channel | [Link](https://discord.com/invite/bBeSdtJ6Ue) |
+
 ### Please support this initiative by giving this project a Star ⭐️
 
@@ -277,4 +290,4 @@ You can contribute to API Dash in any or all of the following ways:
 
 ## Need Any Help?
 
-In case you need any help with API Dash or are encountering any issue while running the tool, please feel free to drop by our [Discord server](https://bit.ly/heyfoss) and we can have a chat in the **#foss-apidash** channel.
+In case you need any help with API Dash or are encountering any issue while running the tool, please feel free to drop by our [Discord server](https://discord.com/invite/bBeSdtJ6Ue) and we can have a chat in the **#foss-apidash** channel.
diff --git a/doc/dev_guide/packaging.md b/doc/dev_guide/packaging.md
index 59fc1fcf..8b637167 100644
--- a/doc/dev_guide/packaging.md
+++ b/doc/dev_guide/packaging.md
@@ -78,7 +78,84 @@ git push
 
 ## FlatHub (Flatpak)
 
-TODO Instructions
+Steps to generate a .flatpak package of API Dash:
+
+1. Clone and build API Dash:
+
+Follow the [How to run API Dash locally](setup_run.md) guide.
+
+Stay in the root folder of the project directory.
+
+2. Install the required packages (Debian/Ubuntu):
+
+```bash
+sudo apt install flatpak
+flatpak install -y flathub org.flatpak.Builder
+flatpak remote-add --if-not-exists --user flathub https://dl.flathub.org/repo/flathub.flatpakrepo
+```
+
+*If you are using another Linux distro, install Flatpak using your distro's package manager and then follow the rest of the steps.*
+
+3. Build the API Dash project:
+
+```bash
+flutter build linux --release
+```
+
+4. Create the Flatpak manifest file:
+
+```bash
+touch apidash-flatpak.yaml
+```
+
+In this file, add:
+
+```yaml
+app-id: io.github.foss42.apidash
+runtime: org.freedesktop.Platform
+runtime-version: "23.08"
+sdk: org.freedesktop.Sdk
+
+command: /app/bundle/apidash
+finish-args:
+  - --share=ipc
+  - --socket=fallback-x11
+  - --socket=wayland
+  - --device=dri
+  - --socket=pulseaudio
+  - --share=network
+  - --filesystem=home
+modules:
+  - name: apidash
+    buildsystem: simple
+    build-commands:
+      - cp -a build/linux/x64/release/bundle /app/bundle
+    sources:
+      - type: dir
+        path: .
+```
+
+5. Create the .flatpak file:
+
+```bash
+flatpak run org.flatpak.Builder --force-clean --sandbox --user --install --install-deps-from=flathub --ccache --mirror-screenshots-url=https://dl.flathub.org/media/ --repo=repo builddir apidash-flatpak.yaml
+
+flatpak build-bundle repo apidash.flatpak io.github.foss42.apidash
+```
+
+The `apidash.flatpak` file should now be in the project root folder.
+ +To test it: + +```bash +flatpak install --user apidash.flatpak + +flatpak run io.github.foss42.apidash +``` +To uninstall it: + +```bash +flatpak uninstall io.github.foss42.apidash +``` ## Homebrew diff --git a/doc/proposals/2025/gsoc/application_debasmibasu_aiagentforapitesting.md b/doc/proposals/2025/gsoc/application_debasmibasu_aiagentforapitesting.md new file mode 100644 index 00000000..5c779ac2 --- /dev/null +++ b/doc/proposals/2025/gsoc/application_debasmibasu_aiagentforapitesting.md @@ -0,0 +1,83 @@ +# AI-Powered API Testing and Tool Integration + +## Personal Information + +- **Full Name:** Debasmi Basu +- **Email:** [basudebasmi2006@gmail.com](mailto:basudebasmi2006@gmail.com) +- **Phone:** +91 7439640610 +- **Discord Handle:** debasmibasu +- **Home Page:** [Portfolio](https://debasmi.github.io/portfolio/portfolio.html) +- **GitHub Profile:** [Debasmi](https://github.com/debasmi) +- **Socials:** + - [LinkedIn](https://www.linkedin.com/in/debasmi-basu-513726288/) +- **Time Zone:** Indian Standard Time +- **Resume:** [Google Drive Link](https://drive.google.com/file/d/1o5JxOwneK-jv2GxnKTrzk__n7UbSKTPt/view?usp=sharing) + +## University Info + +- **University Name:** Cluster Innovation Centre, University of Delhi +- **Program:** B.Tech. in Information Technology and Mathematical Innovations +- **Year:** 2023 - Present +- **Expected Graduation Date:** 2027 + +## Motivation & Past Experience + +### Project of Pride: Image Encryption using Quantum Computing Algorithms + +This project represents my most significant achievement in the field of quantum computing and cybersecurity. I developed a **quantum image encryption algorithm** using **Qiskit**, leveraging quantum superposition and entanglement to enhance security. By implementing the **NEQR model**, I ensured **100% accuracy in encryption**, preventing any data loss. Additionally, I designed **advanced quantum circuit techniques** to reduce potential decryption vulnerabilities, pushing the boundaries of modern encryption methods. + +This project is my pride because it merges **cutting-edge quantum computing** with **practical data security applications**, demonstrating the **real-world potential of quantum algorithms in cryptography**. It reflects my deep technical expertise in **Qiskit, Python, and quantum circuits**, as well as my passion for exploring **future-proof encryption solutions**. + +### Challenges that Motivate Me + +I am driven by challenges that push the boundaries of **emerging technologies, security, and web development**. The intersection of **AI, cybersecurity, web applications, and quantum computing** excites me because of its potential to redefine **secure digital interactions**. My passion lies in building **robust, AI-powered automation systems** that enhance **security, efficiency, and accessibility** in real-world applications. Additionally, I enjoy working on **scalable web solutions**, ensuring that modern applications remain secure and user-friendly. + +### Availability for GSoC + +- **Will work full-time on GSoC.** +- I will also dedicate time to exploring **LLM-based security frameworks**, improving **web API integration**, and enhancing my expertise in **AI-driven automation**. + +### Regular Sync-Ups + +- **Yes.** I am committed to maintaining **regular sync-ups** with mentors to ensure steady project progress and discuss improvements in API security and automation. 
+ +### Interest in API Dash + +- The potential to integrate **AI-powered automation** for API testing aligns perfectly with my expertise in **web development, backend integration, and security automation**. +- I see a great opportunity in **enhancing API security validation** using AI-driven techniques, ensuring robust **schema validation and intelligent error detection**. + +### Areas for Improvement + +- API Dash can expand **real-time collaborative testing features**, allowing teams to test and debug APIs more efficiently. +- Enhancing **security automation** by integrating **AI-powered API monitoring** would significantly improve API Dash’s effectiveness. + +--- + +## Project Proposal + +### **Title** + +AI-Powered API Testing and Tool Integration + +### **Abstract** + +API testing often requires **manual test case creation and validation**, making it inefficient. Additionally, **converting APIs into structured definitions for AI integration** is a complex task. This project aims to **automate test generation, response validation, and structured API conversion** using **LLMs and AI agents.** The system will provide **automated debugging insights** and integrate seamlessly with **pydantic-ai** and **langgraph.** A **benchmarking dataset** will also be created to evaluate various LLMs for API testing tasks. + +### **Weekly Timeline** + +| Week | Focus | Key Deliverables & Achievements | +|---------------|--------------------------------|------------------------------------------------------------------------| +| **Week 1-2** | Research & Architecture | Study existing API testing tools, research AI automation methods, explore web-based API testing interfaces, and define the project architecture. Expected Outcome: Clear technical roadmap for implementation. | +| **Week 3-4** | API Specification Parsing | Develop a parser to extract API endpoints, request methods, authentication requirements, and response formats from OpenAPI specs, Postman collections, and raw API logs. Expected Outcome: Functional API parser capable of structured data extraction and visualization. | +| **Week 5-6** | AI-Based Test Case Generation | Implement an AI model that analyzes API specifications and generates valid test cases, including edge cases and error scenarios. Expected Outcome: Automated test case generation covering standard, edge, and security cases, integrated into a web-based UI. | +| **Week 7-8** | Response Validation & Debugging | Develop an AI-powered validation mechanism that checks API responses against expected schemas and detects inconsistencies. Implement logging and debugging tools within a web dashboard to provide insights into API failures. Expected Outcome: AI-driven validation tool with intelligent debugging support. | +| **Week 9-10** | Structured API Conversion | Design a system that converts APIs into structured tool definitions compatible with pydantic-ai and langgraph, ensuring seamless AI agent integration. Expected Outcome: Automated conversion of API specs into structured tool definitions, with visual representation in a web-based interface. | +| **Week 11-12**| Benchmarking & Evaluation | Create a dataset and evaluation framework to benchmark different LLMs for API testing performance. Conduct performance testing on generated test cases and validation mechanisms. Expected Outcome: A benchmarking dataset and comparative analysis of LLMs in API testing tasks, integrated into a web-based reporting system. 
|
+| **Final Week**| Testing & Documentation | Perform comprehensive end-to-end testing, finalize documentation, create usage guides, and submit the final project report. Expected Outcome: Fully tested, documented, and ready-to-use AI-powered API testing framework, with a web-based dashboard for interaction and reporting. |
+
+---
+
+## Conclusion
+
+This project will significantly **enhance API testing automation** by leveraging **AI-driven test generation, web-based API analysis, and structured tool conversion**. The benchmarking dataset will provide **a standard evaluation framework** for API testing LLMs, ensuring **optimal model selection for API validation**. The resulting **AI-powered API testing framework** will improve **efficiency, accuracy, security, and scalability**, making API Dash a more powerful tool for developers.
+
diff --git a/doc/proposals/2025/gsoc/idea_Nideesh_Bharath_Kumar_AI_API_Eval.md b/doc/proposals/2025/gsoc/idea_Nideesh_Bharath_Kumar_AI_API_Eval.md
new file mode 100644
index 00000000..34c63dd8
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_Nideesh_Bharath_Kumar_AI_API_Eval.md
@@ -0,0 +1,54 @@
+# AI API Eval Framework For Multimodal Generative AI
+
+## Personal Information
+- **Full Name:** Nideesh Bharath Kumar
+- **University Name:** Rutgers University–New Brunswick
+- **Program Enrolled In:** B.S. Computer Science, Artificial Intelligence Track
+- **Year:** Junior Year (Third Year)
+- **Expected Graduation Date:** May 2026
+
+## About Me
+I’m **Nideesh Bharath Kumar**, a junior (third year) at Rutgers University–New Brunswick pursuing a **B.S. in Computer Science on the Artificial Intelligence Track**. I have a strong foundation in full stack development and AI engineering: I have project and internship experience with technologies like **Dart/Flutter, LangChain, RAG, Vector Databases, AWS, Docker, Kubernetes, PostgreSQL, FastAPI, OAuth,** and other technologies that aid in developing scalable, AI-powered systems. I have interned at **Manomay Tech, IDEA, and Newark Science and Sustainability,** building scalable systems and managing AI systems, and I completed fellowships with **Google** and **Codepath**, developing my technical skills. I’ve also won awards in hackathons, achieving **Overall Best Project in the CS Base Climate Hackathon for a Flutter-based project** and **Best Use of Terraform in the HackRU Hackathon for a Computer Vision Smart Shopping Cart**. I’m passionate about building distributed, scalable systems and AI technologies, and API Dash is an amazing tool that can facilitate the process of building these solutions through easy visualization and testing of APIs; I believe my skills in **AI development** and experience with **Dart/Flutter** and **APIs** put me in a position to effectively contribute to this project.
+
+## Project Details
+**Project Title:** AI API Eval Framework For Multimodal Generative AI
+**Description:**
+This project is to develop a **Dart-centered evaluation framework** designed to simplify the testing of generative AI models across **multiple types (text, image, code)**. This will be done by integrating evaluation toolkits: **llm-harness** for text, **torch-fidelity** and **CLIP** for images, and **HumanEval/MBPP** with **CodeBLEU** for code. This project will provide a unified config layer which can support standard and custom benchmark datasets and evaluation metrics; a sketch of what such a config could look like is shown below.
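+
+All field names in this hypothetical config (`model_type`, `dataset`, `metrics`, and so on) are illustrative assumptions, not a finalized schema:
+
+```yaml
+# Hypothetical unified eval config; field names are illustrative.
+model_type: text                 # text | image | code
+model:
+  endpoint: https://example.com/v1/generate   # online model, or a local script path
+dataset:
+  source: local                  # local | download
+  path: datasets/qa_test.jsonl
+metrics:                         # standard toolkit metrics or a custom script
+  - bleu
+  - rouge
+custom_metric_script: null       # optional path to a user-supplied evaluator
+batch:
+  parallelism: 4
+```
+
+Based on `model_type`, the framework would route the job to llm-harness, torch-fidelity/CLIP, or HumanEval/MBPP with CodeBLEU, as described below.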
+The framework will be surfaced through a **user-friendly interface in API Dash** which allows the user to select the model type, manage datasets (local or downloadable), and choose evaluation metrics (standard toolkit or custom script). On top of this, **real-time visual analytics** will be provided to visualize metric progress, along with **parallelized batch processing** of the evaluation.
+
+**Related Issue:** [#618](https://github.com/foss42/apidash/issues/618)
+
+**Key Features:**
+1) Unified Evaluation Configuration:
+  - A config file in YAML will serve as the abstraction layer, which will be generated by the user's selection of model type, dataset, and evaluation metrics. This will redirect the config to either use llm-harness, torch-fidelity and CLIP, or HumanEval and MBPP with CodeBLEU. Additionally, custom evaluation scripts and datasets can be attached to this config file, which can be interpreted by the system.
+  - This abstraction layer ensures that even when these specifications differ between eval jobs, each job is redirected to the correct resources while still providing a centralized layer for creating the job. Furthermore, these config files can be stored in history for running the same jobs later.
+
+2) Intuitive User Interface
+  - When starting an evaluation, users can select the model type (text, image, or code) through a drop-down menu. The system will provide a list of standard datasets and use cases. The user can select these datasets, or attach a custom one. If the user does not have the dataset locally in the workspace, they can attach it using the file explorer or download it from the web. Furthermore, the user can select standard evaluation metrics from a list or attach a custom script.
+
+3) Standard Evaluation Pipelines
+  - The standard evaluation pipelines include text, image, and code generation.
+  - For text generation, llm-harness will be used with custom datasets and tasks to measure Precision, Recall, F1 Score, BLEU, ROUGE, and Perplexity. Custom integration of datasets and evaluation scores can be done by interfacing with the llm-harness custom test config file.
+  - For image generation, torch-fidelity can be used to calculate Fréchet Inception Distance and Inception Score by comparing against a reference image database. For text-to-image generation, CLIP scores can be used to measure the alignment between the prompt and the generated image. Custom integration of datasets and evaluation scores can be done through a custom interface created using Dart.
+  - For code generation, tests like HumanEval and MBPP can be used for functional correctness, and CodeBLEU can be used for code quality checking. Custom integration will be done the same way as image generation, with a custom interface created using Dart for functional test databases and evaluation metrics.
+
+4) Batch Evaluations
+  - Parallel processing will be supported through async runs of the tests, with a progress bar in API Dash monitoring the number of processed rows (see the sketch below).
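+
+To illustrate the batch-processing approach, here is a minimal Dart sketch of async worker runs with a progress callback, assuming a single-isolate pool of async workers; `runSingleEval`, `runBatch`, and the row shape are illustrative placeholders, not existing API Dash code:
+
+```dart
+// Minimal sketch: parallelized batch evaluation with progress reporting.
+import 'dart:async';
+
+Future<double> runSingleEval(Map<String, dynamic> row) async {
+  // Placeholder: call the model / evaluator for one dataset row.
+  await Future.delayed(const Duration(milliseconds: 10));
+  return 1.0;
+}
+
+Future<List<double>> runBatch(
+  List<Map<String, dynamic>> rows, {
+  int concurrency = 4,
+  void Function(int done, int total)? onProgress,
+}) async {
+  final results = List<double>.filled(rows.length, 0);
+  var done = 0;
+  var next = 0;
+
+  Future<void> worker() async {
+    while (next < rows.length) {
+      final i = next++; // claim the next row
+      results[i] = await runSingleEval(rows[i]);
+      done++;
+      onProgress?.call(done, rows.length); // drives the progress bar
+    }
+  }
+
+  // Run a fixed pool of async workers concurrently.
+  await Future.wait([for (var w = 0; w < concurrency; w++) worker()]);
+  return results;
+}
+
+Future<void> main() async {
+  final rows = List.generate(20, (i) => {'id': i});
+  await runBatch(rows, onProgress: (d, t) => print('processed $d/$t rows'));
+}
+```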
+
+5) Visualizations of Results
+  - Visualizations of results will be provided while the tests are running, giving live feedback on model performance, as well as a general summary of visualizations after all evals have been run.
+  - Bar Graphs: These will be displayed on a 0 to 100% accuracy scale to give a quick performance comparison across all tested models.
+  - Line Charts: These will show model performance trends over time, comparing performance across different batches as well as between models.
+  - Tables: These will provide detailed summary statistics about scores for each model across different benchmarks and datasets.
+  - Box Plots: These will show the distribution of scores per batch, highlighting outliers and variance, while also allowing side-by-side comparisons between models.
+
+6) Offline and Online Support
+  - Offline: Offline models will be supported by pointing to the script used to run the model and to locally stored datasets.
+  - Online: These models can be connected for eval through an API endpoint, and datasets can be downloaded via a link.
+
+**Architecture:**
+1) UI Interface: Built with Dart/Flutter
+2) Configuration Manager: Built with Dart, uses YAML for the config file
+3) Dataset Manager: Built with Dart, REST APIs for accessing endpoints
+4) Evaluation Manager: Built with a Dart-Python layer to manage connections between evaluators and API Dash
+5) Batch Processing: Built with Dart async requests
+6) Visualization and Results: Built with Dart/Flutter, using packages like fl_chart and syncfusion_flutter_charts
diff --git a/doc/proposals/2025/gsoc/idea_NingWei_AI UI Designer for APIs.md b/doc/proposals/2025/gsoc/idea_NingWei_AI UI Designer for APIs.md
new file mode 100644
index 00000000..b25d35cd
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_NingWei_AI UI Designer for APIs.md
@@ -0,0 +1,129 @@
+# GSoC 2025 Proposal: AI UI Designer for APIs
+
+## About
+
+**Full Name**: Ning Wei
+**Contact Info**: Allenwei0503@gmail.com
+**Discord Handle**: @allen_wn
+**GitHub Profile**: [https://github.com/AllenWn](https://github.com/AllenWn)
+**LinkedIn**: [https://www.linkedin.com/in/ning-wei-allen0503](https://www.linkedin.com/in/ning-wei-allen0503)
+**Time Zone**: UTC+8
+**Resume**: https://drive.google.com/file/d/1Zvf1IhKju3rFfnDsBW1WmV40lz0ZMNrD/view?usp=sharing
+
+## University Info
+
+**University**: University of Illinois at Urbana-Champaign
+**Program**: B.S. in Computer Engineering
+**Year**: 2nd year undergraduate
+**Expected Graduation**: May 2027
+
+---
+
+## Motivation & Past Experience
+
+1. **Have you worked on or contributed to a FOSS project before?**
+Not yet officially, but I’ve been actively exploring open source projects like API Dash and contributing via discussion and design planning. I am currently studying the API Dash repository and developer guide to prepare for my first PR.
+
+2. **What is your one project/achievement that you are most proud of? Why?**
+I'm proud of building an AI-assisted email management app using Flutter and Go, which automatically categorized and responded to emails using the ChatGPT API. It gave me end-to-end experience in integrating APIs, generating dynamic UIs, and designing developer-friendly tools.
+
+3. **What kind of problems or challenges motivate you the most to solve them?**
+I enjoy solving problems that eliminate repetitive work for developers and improve workflow productivity — especially through automation and AI integration.
+
+4. **Will you be working on GSoC full-time?**
+Yes. I will be dedicating full-time to this project during the summer.
+
+5. **Do you mind regularly syncing up with the project mentors?**
+Not at all — I look forward to regular syncs and feedback to align with the project vision.
+
+6. **What interests you the most about API Dash?**
+API Dash is focused on improving the developer experience around APIs, which is something I care deeply about. I love the vision of combining UI tools with AI assistance in a privacy-first, extensible way.
+
+7. **Can you mention some areas where the project can be improved?**
+- More intelligent code generation from API response types
+- Drag-and-drop UI workflow
+- Visual previews and theming customization
+- Integration with modern LLMs for field-level naming and layout suggestions
+
+---
+
+## Project Proposal Information
+
+### Proposal Title
+
+AI UI Designer for APIs
+
+### Relevant Issues
+
+[#617](https://github.com/foss42/apidash/issues/617)
+
+### Abstract
+
+This project aims to develop an AI-powered assistant within API Dash that automatically generates dynamic user interfaces (UI) based on API responses (JSON/XML). The goal is to allow developers to instantly visualize, customize, and export usable Flutter UI code from raw API data. The generated UI should adapt to the structure of the API response and be interactive, with features like sorting, filtering, and layout tweaking. This tool will streamline frontend prototyping and improve developer productivity.
+
+---
+
+### Detailed Description
+
+The AI UI Designer will be a new feature integrated into the API Dash interface, triggered by a button after an API response is received. It will analyze the data and suggest corresponding UI layouts using Dart/Flutter widgets such as `DataTable`, `Card`, or `Form`.
+
+#### Step 1: Parse API Response Structure
+
+- Focus initially on JSON (XML can be added later)
+- Build a recursive parser to convert the API response into a schema-like tree
+- Extract field types, array/object structure, nesting depth
+- Identify patterns (e.g., timestamps, prices, lists)
+
+#### Step 2: Design AI Agent Logic
+
+- Use a rule-based system to map schema to UI components (a minimal sketch of this mapping follows this list)
+  - List of objects → Table
+  - Simple object → Card/Form
+  - Number over time → Line Chart (optional)
+- Integrate LLM backend (e.g., Ollama, GPT API) to enhance:
+  - Field labeling
+  - Layout suggestion
+  - Component naming
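+
+As a concrete illustration of the rule-based mapping, here is a minimal Dart sketch. `JsonSchemaNode`, `UiSuggestion`, and `suggestWidget` are hypothetical names used for illustration, not existing API Dash types:
+
+```dart
+/// A simplified schema node, as produced by the Step 1 parser.
+class JsonSchemaNode {
+  final String type; // 'object', 'array', 'number', 'string', ...
+  final Map<String, JsonSchemaNode> fields;
+  final JsonSchemaNode? items; // element schema when type == 'array'
+  JsonSchemaNode(this.type, {this.fields = const {}, this.items});
+}
+
+/// The widget kind the designer would suggest for a node.
+enum UiSuggestion { dataTable, cardForm, lineChart, textField }
+
+UiSuggestion suggestWidget(JsonSchemaNode node) {
+  // List of objects -> table.
+  if (node.type == 'array' && node.items?.type == 'object') {
+    return UiSuggestion.dataTable;
+  }
+  // Array of numbers (e.g., a value over time) -> optional line chart.
+  if (node.type == 'array' && node.items?.type == 'number') {
+    return UiSuggestion.lineChart;
+  }
+  // Simple object -> card/form.
+  if (node.type == 'object') {
+    return UiSuggestion.cardForm;
+  }
+  // Scalars -> a single text field.
+  return UiSuggestion.textField;
+}
+
+void main() {
+  final users = JsonSchemaNode('array',
+      items: JsonSchemaNode('object', fields: {
+        'id': JsonSchemaNode('number'),
+        'name': JsonSchemaNode('string'),
+      }));
+  print(suggestWidget(users)); // UiSuggestion.dataTable
+}
+```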
+
+#### Step 3: Generate UI in Flutter
+
+- Dynamically generate:
+  - `DataTable`, `Card`, `TextField`, `Dropdown`, etc.
+  - Optional chart widgets (e.g., `fl_chart`)
+- Support:
+  - Layout rearrangement (form-based or drag-drop)
+  - Field visibility toggles
+  - Previewing final UI
+
+#### Step 4: Export UI Code
+
+- Export generated layout as Dart code
+- Allow download or copy-to-clipboard
+- Support JSON config export (optional for renderer-based architecture)
+
+#### Step 5: Integrate into API Dash
+
+- Add AI UI Designer button in the API response view
+- Launch UI editing pane inside app
+- Ensure local-only, privacy-friendly execution
+- Write tests, docs, and polish UX
+
+---
+
+## Weekly Timeline (Tentative)
+
+| Week | Milestone |
+|------|-----------|
+| Community Bonding | Join Discord, interact with mentors, finalize approach, get feedback |
+| Week 1–2 | Build and test JSON parser → generate basic schema |
+| Week 3–4 | Implement rule-based UI mapper; generate simple widgets |
+| Week 5–6 | Integrate initial Flutter component generator; allow basic UI previews |
+| Week 7 | Midterm Evaluation |
+| Week 8–9 | Add customization options (visibility, layout) |
+| Week 10 | Integrate AI backend (e.g., Ollama/GPT) for suggestions |
+| Week 11–12 | Add export functions (code, JSON config) |
+| Week 13 | Final polish, tests, docs |
+| Week 14 | Final Evaluation, feedback, and delivery |
+
+---
+
+Thanks again for your time and guidance. I’ve already started studying the API Dash codebase and developer guide, and I’d love your feedback on this plan — does it align with your vision?
+If selected, I’m excited to implement this project. If this idea is already taken, I’m open to switching to another API Dash project that fits my background.
diff --git a/doc/proposals/2025/gsoc/idea_april_lin_api_explorer.md b/doc/proposals/2025/gsoc/idea_april_lin_api_explorer.md
new file mode 100644
index 00000000..2fc205e2
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_april_lin_api_explorer.md
@@ -0,0 +1,129 @@
+### Initial Idea Submission
+
+**Full Name:** April Lin
+**University Name:** University of Illinois Urbana-Champaign
+**Program (Degree & Major/Minor):** Master's in Electrical and Computer Engineering
+**Year:** First year
+**Expected Graduation Date:** 2026
+
+**Project Title:** API Explorer
+**Relevant Issues:** [https://github.com/foss42/apidash/issues/619](https://github.com/foss42/apidash/issues/619)
+
+**Idea Description:**
+
+I have divided the design of the API Explorer into three major steps:
+
+1. **Designing the UI**
+2. **Designing the API template model**
+3. **Using AI tools to automatically extract API information from a given website**
+
+---
+
+## 1. UI Design (User Journey)
+
+In this step, I primarily designed two interfaces for the API Explorer: the first is the main API Explorer interface, and the second is a detailed interface for each API template.
+
+### API Explorer
+![API Explorer main interface](images/API_Explorer_Main.png)
+1. **Accessing the API Explorer**
+   - In the left-hand sidebar, users will find an “API Explorer” icon.
+   - Clicking this icon reveals the main API template search interface on the right side of the screen.
+
+2. **Browsing API Templates**
+   - At the top of the main area, there is a search bar that supports fuzzy matching by API name.
+   - Directly beneath the search bar are category filters (e.g., AI, Finance, Web3, Social Media).
+   - Users can click “More” to view an expanded list of all available categories.
+   - The page displays each template in a **card layout**, showing the API’s name, a short description, and (optionally) an image or icon.
+ +### API Templates +![api explorer](images/API_Explorer_Template.png) +1. **Selecting a Template** + - When a user clicks on a card (for example, **OpenAI**), they navigate to a dedicated page for that API template. + - This page lists all the available API endpoints or methods in a collapsible/expandable format (e.g., “API name 2,” “API name 3,” etc.). + - Each listed endpoint describes what it does—users can select which methods they want to explore or import into their workspace. + +2. **Exploring an API Method** + - Within this detailed view, users see request details such as **HTTP method**, **path**, **headers**, **body**, and **sample response**. + - If the user wants to try out an endpoint, they can import it into their API collections by clicking **import**. + - Each method will include all the fields parsed through the automated process. For the detailed API field design, please refer to **Step Two**. + +--- + +## 2. Updated Table Design + +Below is the model design for the API explorer. + +### **Base Table: `api_templates`** +- **Purpose:** + Stores the common properties for all API templates, regardless of their type. + +- **Key Fields:** + - **id**: + - Primary key (integer or UUID) for unique identification. + - **name**: + - The API name (e.g., “OpenAI”). + - **api_type**: + - Enumerated string indicating the API type (e.g., `restful`, `graphql`, `soap`, `grpc`, `sse`, `websocket`). + - **base_url**: + - The base URL or service address (applicable for HTTP-based APIs and used as host:port for gRPC). + - **image**: + - A text or string field that references an image (URL or path) representing the API’s logo or icon. + - **category**: + - A field (array or string) used for search and classification (e.g., "finance", "ai", "devtool"). + - **description**: + - Textual description of the API’s purpose and functionality. + +### **RESTful & GraphQL Methods Table: `api_methods`** +- **Purpose:** + Manages detailed configurations for individual API requests/methods, specifically tailored for RESTful and GraphQL APIs. + +- **Key Fields:** + - **id**: + - Primary key (UUID). + - **template_id**: + - Foreign key linking back to `api_templates`. + - **method_type**: + - The HTTP method (e.g., `GET`, `POST`, `PUT`, `DELETE`) or the operation type (`query`, `mutation` for GraphQL). + - **method_name**: + - A human-readable name for the method (e.g., “Get User List,” “Create Order”). + - **url_path**: + - The relative path appended to the `base_url` (for RESTful APIs). + - **description**: + - Detailed explanation of the method’s functionality. + - **headers**: + - A JSON field storing default header configurations (e.g., `Content-Type`, `Authorization`). + - **authentication**: + - A JSON field for storing default authentication details (e.g., Bearer Token, Basic Auth). + - **query_params**: + - A JSON field for any default query parameters (optional, typically for RESTful requests). + - **body**: + - A JSON field containing the default request payload, including required fields and default values. + - **sample_response**: + - A JSON field providing an example of the expected response for testing/validation. + +--- + +## 3. Automated Extraction (Parser Design) + +I think there are two ways to design the automated pipeline: the first is to use AI tools for automated parsing, and the second is to employ a rule-based approach. + +### **AI-Based Parser** +- For each parser type (OpenAPI, HTML, Markdown), design a dedicated prompt agent to parse the API methods. 
+- The prompt includes model fields (matching the data structures from [Step Two](#2-updated-table-design)) and the required API category, along with the API URL to be parsed.
+- The AI model is instructed to output the parsed result in **JSON format**, aligned with the schema defined in `api_templates` and `api_methods`.
+
+### **Non-AI (Rule-Based) Parser**
+- **OpenAPI**: Use existing libraries (e.g., Swagger/OpenAPI parser libraries) to read and interpret JSON or YAML specs.
+- **HTML**: Perform DOM-based parsing or use regex patterns to identify endpoints, parameter names, and descriptions.
+- **Markdown**: Utilize Markdown parsers (e.g., remark, markdown-it) to convert the text into a syntax tree and extract relevant sections.
+
+## Questions
+
+1. **Database Selection**
+   - Which type of database should be used for storing API templates and methods? Are there any specific constraints or preferences (e.g., relational vs. NoSQL, performance requirements, ease of integration) we should consider?
+
+2. **Priority of Automated Parsing**
+   - What is the preferred approach for automated parsing of OpenAPI/HTML files? Would an AI-based parsing solution be acceptable, or should we prioritize rule-based methods for reliability and simplicity?
+
+3. **UI Interaction Flow**
+   - Can I add a dedicated “API Explorer” menu item in the left navigation bar?
diff --git a/doc/proposals/2025/gsoc/idea_balasubramaniam_api_explorer.md b/doc/proposals/2025/gsoc/idea_balasubramaniam_api_explorer.md
new file mode 100644
index 00000000..ee5b39ec
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_balasubramaniam_api_explorer.md
@@ -0,0 +1,101 @@
+# **Initial Idea Submission: API Explorer**
+
+**Full Name:** BALASUBRAMANIAM L
+**University Name:** Saveetha Engineering College
+**Program (Degree & Major/Minor):** Bachelor of Technology in Machine Learning
+**Year:** First year
+**Expected Graduation Date:** 2028
+
+**Project Title:** API Explorer
+**Relevant Issues:** [https://github.com/foss42/apidash/issues/619](https://github.com/foss42/apidash/issues/619)
+
+## **Project Overview**
+
+Our goal is to enhance API Dash by adding an API Explorer feature. This feature allows users to discover, browse, search, and import pre-configured API endpoints for testing and exploration. All API templates will be maintained in YAML, JSON, HTML, and Markdown formats within a dedicated folder in the existing Apidash GitHub repository.
+
+In the initial phase, contributors can manually add new API definition files (YAML, JSON, HTML, and MD) to the repo, run a local Dart script to process them into structured JSON format, and then commit and push the updated files. A Dart cron job will periodically check for new or modified API files and process them automatically. In the future, we plan to automate this process fully with GitHub Actions.
+
+---
+
+### **Key Concepts**
+
+- **File Addition:**
+  Contributors add new API files (YAML, JSON, HTML, or MD) to a designated folder (`/apis/`) in the Apidash repository.
+
+- **Local Processing:**
+  A local Dart script (e.g., `process_apis.dart`) runs to:
+  - Read the files.
+  - Parse and extract essential API details (title, description, endpoints, etc.).
+  - Auto-generate sample payloads when examples are missing.
+  - Convert and save the processed data as JSON files in `/api_templates/` (see the sketch after this list).
+
+- **Automated Fetching & Processing with Dart Cron Job:**
+  - A Dart cron-like package will schedule the script to fetch and process **new and updated** API files **weekly or on demand**.
+  - This reduces the need for constant manual execution and ensures templates stay up to date.
+
+- **Version Control:**
+  Contributors open a PR on GitHub with both the raw YAML files and the generated JSON files.
+
+- **Offline Caching with Hive:**
+  - The Flutter app (API Explorer) will fetch JSON templates and store them using **Hive**.
+  - This ensures **fast loading and offline access**.
+
+- **Fetching Updates via GitHub Releases (ZIP files):**
+  - Instead of fetching updates via the GitHub API (which has rate limits), we can leverage **GitHub Releases**.
+  - A new release will be created weekly or when at least 10 updates are made.
+  - The Flutter app will download and extract the latest ZIP release instead of making multiple API calls.
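+
+To make the processing step concrete, here is a minimal Dart sketch of what `process_apis.dart` could look like for YAML inputs. The directory names match the proposal; the extracted fields and output shape are illustrative assumptions, HTML/MD parsing is omitted, and the sketch assumes the `yaml` package is added to `pubspec.yaml`:
+
+```dart
+// Minimal sketch of process_apis.dart for YAML inputs only.
+import 'dart:convert';
+import 'dart:io';
+import 'package:yaml/yaml.dart'; // assumes `yaml` in pubspec.yaml
+
+void main() {
+  final inputDir = Directory('apis');
+  final outputDir = Directory('api_templates')..createSync(recursive: true);
+
+  for (final entity in inputDir.listSync()) {
+    if (entity is! File || !entity.path.endsWith('.yaml')) continue;
+
+    final doc = loadYaml(entity.readAsStringSync());
+    // Extract the essential fields; fall back to defaults when missing.
+    // The output shape here is illustrative, not a finalized template schema.
+    final template = {
+      'title': doc['title'] ?? 'Untitled API',
+      'description': doc['description'] ?? '',
+      'endpoints': doc['endpoints'] ?? [],
+    };
+
+    final name = entity.uri.pathSegments.last.replaceAll('.yaml', '.json');
+    File('${outputDir.path}/$name').writeAsStringSync(
+        const JsonEncoder.withIndent('  ').convert(template));
+    print('Processed ${entity.path} -> ${outputDir.path}/$name');
+  }
+}
+```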
+
+---
+
+### **Step-by-Step Workflow**
+
+1. **Adding API Files:**
+   - A contributor creates or updates an API file (e.g., `weather.yaml`) in the `/apis/` folder.
+
+2. **Running the Local Processing Script (Manually):**
+   - A Dart script (`process_apis.dart`) is executed locally:
+     `dart run process_apis.dart`
+   - The script:
+     - Reads YAML files from `/apis/`.
+     - Identifies the file format (YAML, JSON, HTML, MD).
+     - Parses the content accordingly.
+     - Extracts essential API details (title, description, endpoints, etc.).
+     - Generates structured JSON templates in `/api_templates/`.
+
+3. **Review, Commit, and PR Submission:**
+   - Contributors review the generated JSON files.
+   - They commit both the raw API definition files and the generated JSON files.
+   - They submit a **Pull Request (PR)** for review.
+
+4. **Offline Storage with Hive (Flutter Frontend):**
+   - The Flutter app fetches JSON templates and stores them in Hive.
+   - This ensures users can access API templates even when offline.
+
+5. **Fetching Updates via GitHub Releases:**
+   - A new **GitHub Release** (ZIP) will be created weekly or when at least 10 updates are made.
+   - The Flutter app will **download and extract** the latest ZIP instead of making multiple API calls.
+   - This approach avoids GitHub API rate limits and ensures a smooth user experience.
+
+![API Explorer workflow](images/API_EXPLORER_WORKFLOW.png)
+
+---
+
+## **Future Automation with GitHub Actions**
+
+In the future, we can fully automate this process:
+
+- A GitHub Action will trigger on updates to `/apis/`.
+- It will run the Dart processing script automatically.
+- The action will commit the updated JSON templates back to the repository.
+- A GitHub Release will be generated periodically to bundle processed files for easier access.
+- This ensures **continuous and consistent updates** without manual intervention.
+
+---
+
+## **Conclusion**
+
+This approach provides a simple and controlled method for processing API definitions. The use of a **Dart cron job** reduces manual effort by fetching and processing updates on a scheduled basis, while **Hive storage** ensures fast offline access in the Flutter app. Using **GitHub Releases (ZIP)** allows fetching updates efficiently without hitting rate limits. Once validated, we can transition to **GitHub Actions** for complete automation. This approach aligns well with our project goals and scalability needs.
+
+**I look forward to your feedback and suggestions on this approach. Thank you!**
diff --git a/doc/proposals/2025/gsoc/idea_harsh_panchal_AI_API_EVAL.md b/doc/proposals/2025/gsoc/idea_harsh_panchal_AI_API_EVAL.md
new file mode 100644
index 00000000..df60ddf8
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_harsh_panchal_AI_API_EVAL.md
@@ -0,0 +1,53 @@
+# Initial Idea Submission
+
+**Full Name:** Harsh Panchal
+**Email:** [harsh.panchal.0910@gmail.com](mailto:harsh.panchal.0910@gmail.com)
+● [**GitHub**](https://github.com/GANGSTER0910)
+● [**Website**](https://harshpanchal0910.netlify.app/)
+● [**LinkedIn**](https://www.linkedin.com/in/harsh-panchal-902636255)
+
+**University Name:** Ahmedabad University, Ahmedabad
+**Program:** BTech in Computer Science and Engineering
+**Year:** Junior, 3rd Year
+**Expected Graduation Date:** May 2026
+**Location:** Gujarat, India.
+**Timezone:** Kolkata, India (UTC+5:30)
+
+## **Project Title: AI API Eval Framework**
+
+## **Relevant Issues: [#618](https://github.com/foss42/apidash/issues/618)**
+
+## **Idea Description**
+The goal of this project is to create an AI API Evaluation Framework that provides an end-to-end solution to compare AI models on different kinds of data, i.e., text, images, and videos. The overall strategy is to use benchmark models to compare AI outputs with benchmark predictions. Metrics like BLEU, ROUGE, FID, and SSIM can also be utilized by the users to perform an objective performance evaluation of models.
+
+For the best user experience in both offline and online mode, the platform will provide an adaptive assessment framework where users can specify their own assessment criteria for flexibility in dealing with various use cases. There will be a modern version-control feature which will enable users to compare various versions of a model and monitor performance over time. For the offline mode, evaluations will be supported using LoRA models, which reduce resource consumption and give outputs without compromising accuracy. The system will use Explainability Integration with SHAP and LIME to demonstrate how input features influence model decisions.
+
+The visualization dashboard, built using Flutter, will include real-time charts, error analysis, and result summarization, making it easy to analyze model performance. Offline with cached models or online with API endpoints, the framework will offer end-to-end testing.
+
+With its rank-based framework, model explainability, and customizable evaluation criteria, this effort will be a powerful resource for researchers, developers, and organizations to make data-driven decisions on AI model selection and deployment.
+
+## Unique Features
+1) Benchmark-Based Ranking:
+Compare and rank model results against pre-trained benchmark models.
+Determine how well outputs resemble perfect predictions.
+2) Advanced Evaluation Metrics:
+Facilitate metrics such as BLEU, ROUGE, FID, SSIM, and PSNR for extensive analysis.
+Allow users to define custom metrics.
+3) Model Version Control:
+Compare various versions of AI models.
+Monitor improvement in performance over time with side-by-side comparison.
+4) Explainability Integration:
+Employ SHAP and LIME to explain model decisions.
+Provide clear explanations of why some outputs rank higher.
+5) Custom Evaluation Criteria:
+Allow users to input custom evaluation criteria for domain-specific tasks.
+6) Offline Mode with LoRA Models:
+Storage and execution efficiency with low-rank adaptation models.
+Conduct offline evaluations with minimal hardware demands.
+7) Real-Time Visualization:
+Visualize evaluation results using interactive charts via Flutter.
+Monitor performance trends and detect weak spots visually.
diff --git a/doc/proposals/2025/gsoc/images/API_EXPLORER_WORKFLOW.png b/doc/proposals/2025/gsoc/images/API_EXPLORER_WORKFLOW.png
new file mode 100644
index 00000000..d6175e21
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/API_EXPLORER_WORKFLOW.png differ
diff --git a/doc/proposals/2025/gsoc/images/API_Explorer_Main.png b/doc/proposals/2025/gsoc/images/API_Explorer_Main.png
new file mode 100644
index 00000000..c802a50f
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/API_Explorer_Main.png differ
diff --git a/doc/proposals/2025/gsoc/images/API_Explorer_Template.png b/doc/proposals/2025/gsoc/images/API_Explorer_Template.png
new file mode 100644
index 00000000..4c967b73
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/API_Explorer_Template.png differ
diff --git a/doc/user_guide/instructions_to_run_generated_code.md b/doc/user_guide/instructions_to_run_generated_code.md
index 981506d2..41085943 100644
--- a/doc/user_guide/instructions_to_run_generated_code.md
+++ b/doc/user_guide/instructions_to_run_generated_code.md
@@ -945,11 +945,129 @@ TODO
 
 ## Rust (reqwest)
 
-TODO
+### 1. Download and Install Rust
+
+#### **Windows**
+1. Download and install `rustup` from the [Rustup Official Site](https://rustup.rs).
+2. Run the installer (`rustup-init.exe`) and follow the instructions.
+3. Restart your terminal (Command Prompt or PowerShell).
+4. Verify the installation:
+   ```sh
+   rustc --version
+   cargo --version
+   ```
+
+#### **macOS/Linux**
+1. Run the following in your terminal:
+   ```sh
+   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+   ```
+   then follow the on-screen instructions.
+
+2. Restart the terminal and verify:
+   ```sh
+   rustc --version
+   cargo --version
+   ```
+
+> Note: If you prefer not to use rustup for some reason, please see the [Other Rust Installation Methods](https://forge.rust-lang.org/infra/other-installation-methods.html) page for more options.
+
+### 2. Set Up a New Rust Project
+1. Open a terminal and create a new Rust project:
+   ```sh
+   cargo new reqwest-demo
+   ```
+2. Navigate into the project directory:
+   ```sh
+   cd reqwest-demo
+   ```
+
+   or open this project directory in your preferred code editor.
+
+### 3. Add Necessary Dependencies
+Run the following commands to add the dependencies:
+```sh
+cargo add reqwest --features blocking,json
+cargo add tokio --features full
+```
+- `blocking`: Enables synchronous requests.
+- `json`: Allows JSON parsing.
+- `tokio`: Needed for asynchronous execution.
+
+Run the following command to fetch and compile the dependencies:
+```sh
+cargo build
+```
+
+### 4. Execute the code
+1. Copy the generated code from API Dash.
+2. Paste the code into your project's `src/main.rs` file.
+
+Run the generated code:
+```sh
+cargo run
+```
 
 ## Rust (ureq)
 
-TODO
+### 1. Download and Install Rust
+
+#### **Windows**
+1. Download and install `rustup` from the [Rustup Official Site](https://rustup.rs).
+2. Run the installer (`rustup-init.exe`) and follow the instructions.
+3. Restart your terminal (Command Prompt or PowerShell).
+4. Verify the installation:
+   ```sh
+   rustc --version
+   cargo --version
+   ```
+
+#### **macOS/Linux**
+1. Run the following in your terminal:
+   ```sh
+   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+   ```
+   then follow the on-screen instructions.
+
+2. Restart the terminal and verify:
+   ```sh
+   rustc --version
+   cargo --version
+   ```
+
+> Note: If you prefer not to use rustup for some reason, please see the [Other Rust Installation Methods](https://forge.rust-lang.org/infra/other-installation-methods.html) page for more options.
+
+### 2. Set Up a New Rust Project
+1. Open a terminal and create a new Rust project:
+   ```sh
+   cargo new ureq-demo
+   ```
+2. Navigate into the project directory:
+   ```sh
+   cd ureq-demo
+   ```
+
+   or open this project directory in your preferred code editor.
+
+### 3. Add `ureq` Dependency
+
+Run the following command to add the dependency:
+```sh
+cargo add ureq
+```
+Run the following command to fetch and compile the dependency:
+```sh
+cargo build
+```
+
+### 4. Execute the code
+1. Copy the generated code from API Dash.
+2. Paste the code into your project's `src/main.rs` file.
+
+Run the generated code:
+```sh
+cargo run
+```
 
 ## Rust (Actix Client)
diff --git a/lib/screens/common_widgets/envfield_cell.dart b/lib/screens/common_widgets/envfield_cell.dart
index 7aceae1b..b99f4a7e 100644
--- a/lib/screens/common_widgets/envfield_cell.dart
+++ b/lib/screens/common_widgets/envfield_cell.dart
@@ -27,22 +27,9 @@ class EnvCellField extends StatelessWidget {
       style: kCodeStyle.copyWith(
         color: clrScheme.onSurface,
       ),
-      decoration: InputDecoration(
-        hintStyle: kCodeStyle.copyWith(
-          color: clrScheme.outlineVariant,
-        ),
+      decoration: getTextFieldInputDecoration(
+        clrScheme,
         hintText: hintText,
-        contentPadding: const EdgeInsets.only(bottom: 12),
-        focusedBorder: UnderlineInputBorder(
-          borderSide: BorderSide(
-            color: clrScheme.outlineVariant,
-          ),
-        ),
-        enabledBorder: UnderlineInputBorder(
-          borderSide: BorderSide(
-            color: clrScheme.surfaceContainerHighest,
-          ),
-        ),
       ),
       onChanged: onChanged,
     );
diff --git a/lib/screens/home_page/editor_pane/details_card/request_pane/request_form_data.dart b/lib/screens/home_page/editor_pane/details_card/request_pane/request_form_data.dart
index 6b0a48c2..56d3d138 100644
--- a/lib/screens/home_page/editor_pane/details_card/request_pane/request_form_data.dart
+++ b/lib/screens/home_page/editor_pane/details_card/request_pane/request_form_data.dart
@@ -5,6 +5,7 @@ import 'package:flutter/material.dart';
 import 'package:flutter_riverpod/flutter_riverpod.dart';
 import 'package:data_table_2/data_table_2.dart';
 import 'package:apidash/providers/providers.dart';
+import 'package:apidash/screens/common_widgets/common_widgets.dart';
 import 'package:apidash/widgets/widgets.dart';
 import 'package:apidash/utils/utils.dart';
 import 'package:apidash/consts.dart';
@@ -82,7 +83,7 @@ class _FormDataBodyState extends ConsumerState<FormDataBody> {
             key: ValueKey("$selectedId-$index-form-row-$seed"),
             cells: [
               DataCell(
-                CellField(
+                EnvCellField(
                   keyId: "$selectedId-$index-form-k-$seed",
                   initialValue: formRows[index].name,
                   hintText: kHintAddFieldName,
@@ -138,7 +139,7 @@ class _FormDataBodyState extends ConsumerState<FormDataBody> {
                         },
                         initialValue: formRows[index].value,
                       )
-                    : CellField(
+                    : EnvCellField(
                         keyId: "$selectedId-$index-form-v-$seed",
                         initialValue: formRows[index].value,
                         hintText: kHintAddValue,
diff --git a/lib/utils/envvar_utils.dart b/lib/utils/envvar_utils.dart
index 230cf535..c94f6eeb 100644
--- a/lib/utils/envvar_utils.dart
+++ b/lib/utils/envvar_utils.dart
@@ -37,59 +37,60 @@ List<EnvironmentVariableModel> getEnvironmentSecrets(
 }
 
 String? substituteVariables(
-    String? input,
-    Map<String, List<EnvironmentVariableModel>> envMap,
-    String? activeEnvironmentId) {
+  String? input,
+  Map<String, String> envVarMap,
+) {
   if (input == null) return null;
-
-  final Map<String, String> combinedMap = {};
-  final activeEnv = envMap[activeEnvironmentId] ?? [];
-  final globalEnv = envMap[kGlobalEnvironmentId] ?? [];
-
-  for (var variable in globalEnv) {
-    combinedMap[variable.key] = variable.value;
-  }
-  for (var variable in activeEnv) {
-    combinedMap[variable.key] = variable.value;
+  if (envVarMap.keys.isEmpty) {
+    return input;
   }
+  final regex = RegExp("{{(${envVarMap.keys.join('|')})}}");
 
-  String result = input.replaceAllMapped(kEnvVarRegEx, (match) {
+  String result = input.replaceAllMapped(regex, (match) {
     final key = match.group(1)?.trim() ?? '';
-    return combinedMap[key] ?? '';
+    return envVarMap[key] ?? '{{$key}}';
   });
 
   return result;
 }
 
 HttpRequestModel substituteHttpRequestModel(
-    HttpRequestModel httpRequestModel,
-    Map<String, List<EnvironmentVariableModel>> envMap,
-    String? activeEnvironmentId) {
+  HttpRequestModel httpRequestModel,
+  Map<String, List<EnvironmentVariableModel>> envMap,
+  String? activeEnvironmentId,
+) {
+  final Map<String, String> combinedEnvVarMap = {};
+  final activeEnv = envMap[activeEnvironmentId] ?? [];
+  final globalEnv = envMap[kGlobalEnvironmentId] ?? [];
+
+  for (var variable in globalEnv) {
+    combinedEnvVarMap[variable.key] = variable.value;
+  }
+  for (var variable in activeEnv) {
+    combinedEnvVarMap[variable.key] = variable.value;
+  }
+
   var newRequestModel = httpRequestModel.copyWith(
-    url: substituteVariables(
-      httpRequestModel.url,
-      envMap,
-      activeEnvironmentId,
-    )!,
+    url: substituteVariables(httpRequestModel.url, combinedEnvVarMap)!,
     headers: httpRequestModel.headers?.map((header) {
       return header.copyWith(
-        name:
-            substituteVariables(header.name, envMap, activeEnvironmentId) ?? "",
-        value: substituteVariables(header.value, envMap, activeEnvironmentId),
+        name: substituteVariables(header.name, combinedEnvVarMap) ?? "",
+        value: substituteVariables(header.value, combinedEnvVarMap),
       );
     }).toList(),
     params: httpRequestModel.params?.map((param) {
       return param.copyWith(
-        name:
-            substituteVariables(param.name, envMap, activeEnvironmentId) ?? "",
-        value: substituteVariables(param.value, envMap, activeEnvironmentId),
+        name: substituteVariables(param.name, combinedEnvVarMap) ?? "",
+        value: substituteVariables(param.value, combinedEnvVarMap),
       );
     }).toList(),
-    body: substituteVariables(
-      httpRequestModel.body,
-      envMap,
-      activeEnvironmentId,
-    ),
+    formData: httpRequestModel.formData?.map((formData) {
+      return formData.copyWith(
+        name: substituteVariables(formData.name, combinedEnvVarMap) ?? "",
+        value: substituteVariables(formData.value, combinedEnvVarMap) ?? "",
+      );
+    }).toList(),
+    body: substituteVariables(httpRequestModel.body, combinedEnvVarMap),
   );
   return newRequestModel;
 }
diff --git a/packages/apidash_design_system/lib/widgets/decoration_input_textfield.dart b/packages/apidash_design_system/lib/widgets/decoration_input_textfield.dart
new file mode 100644
index 00000000..0fe86fcf
--- /dev/null
+++ b/packages/apidash_design_system/lib/widgets/decoration_input_textfield.dart
@@ -0,0 +1,38 @@
+import 'package:flutter/material.dart';
+import '../tokens/tokens.dart';
+
+InputDecoration getTextFieldInputDecoration(
+  ColorScheme clrScheme, {
+  Color? fillColor,
+  String? hintText,
+  TextStyle? hintTextStyle,
+  double? hintTextFontSize,
+  Color? hintTextColor,
+  EdgeInsetsGeometry? contentPadding,
+  Color? focussedBorderColor,
+  Color? enabledBorderColor,
+  bool? isDense,
+}) {
+  return InputDecoration(
+    filled: true,
+    fillColor: fillColor ?? clrScheme.surfaceContainerLowest,
+    hintStyle: hintTextStyle ??
+        kCodeStyle.copyWith(
+          fontSize: hintTextFontSize,
+          color: hintTextColor ?? clrScheme.outlineVariant,
+        ),
+    hintText: hintText,
+    contentPadding: contentPadding ?? kP10,
+    focusedBorder: OutlineInputBorder(
+      borderSide: BorderSide(
+        color: focussedBorderColor ?? clrScheme.outline,
+      ),
+    ),
+    enabledBorder: OutlineInputBorder(
+      borderSide: BorderSide(
+        color: enabledBorderColor ?? clrScheme.surfaceContainerHighest,
+      ),
+    ),
+    isDense: isDense,
+  );
+}
diff --git a/packages/apidash_design_system/lib/widgets/textfield_outlined.dart b/packages/apidash_design_system/lib/widgets/textfield_outlined.dart
index a347633d..b9b2588d 100644
--- a/packages/apidash_design_system/lib/widgets/textfield_outlined.dart
+++ b/packages/apidash_design_system/lib/widgets/textfield_outlined.dart
@@ -1,5 +1,6 @@
 import 'package:flutter/material.dart';
 import '../tokens/tokens.dart';
+import 'decoration_input_textfield.dart';
 
 class ADOutlinedTextField extends StatelessWidget {
   const ADOutlinedTextField({
@@ -65,26 +66,16 @@ class ADOutlinedTextField extends StatelessWidget {
         fontSize: textFontSize,
         color: textColor ?? clrScheme.onSurface,
       ),
-      decoration: InputDecoration(
-        filled: true,
-        fillColor: fillColor ?? clrScheme.surfaceContainerLowest,
-        hintStyle: hintTextStyle ??
-            kCodeStyle.copyWith(
-              fontSize: hintTextFontSize,
-              color: hintTextColor ?? clrScheme.outlineVariant,
-            ),
+      decoration: getTextFieldInputDecoration(
+        clrScheme,
+        fillColor: fillColor,
         hintText: hintText,
-        contentPadding: contentPadding ?? kP10,
-        focusedBorder: OutlineInputBorder(
-          borderSide: BorderSide(
-            color: focussedBorderColor ?? clrScheme.outline,
-          ),
-        ),
-        enabledBorder: OutlineInputBorder(
-          borderSide: BorderSide(
-            color: enabledBorderColor ?? clrScheme.surfaceContainerHighest,
-          ),
-        ),
+        hintTextStyle: hintTextStyle,
+        hintTextFontSize: hintTextFontSize,
+        hintTextColor: hintTextColor,
+        contentPadding: contentPadding,
+        focussedBorderColor: focussedBorderColor,
+        enabledBorderColor: enabledBorderColor,
         isDense: isDense,
       ),
       onChanged: onChanged,
diff --git a/packages/apidash_design_system/lib/widgets/widgets.dart b/packages/apidash_design_system/lib/widgets/widgets.dart
index 955babce..1ebd0836 100644
--- a/packages/apidash_design_system/lib/widgets/widgets.dart
+++ b/packages/apidash_design_system/lib/widgets/widgets.dart
@@ -2,6 +2,7 @@ export 'button_filled.dart';
 export 'button_icon.dart';
 export 'button_text.dart';
 export 'checkbox.dart';
+export 'decoration_input_textfield.dart';
 export 'dropdown.dart';
 export 'popup_menu.dart';
 export 'snackbar.dart';
diff --git a/test/utils/envvar_utils_test.dart b/test/utils/envvar_utils_test.dart
index 6e2dfa55..48fc7735 100644
--- a/test/utils/envvar_utils_test.dart
+++ b/test/utils/envvar_utils_test.dart
@@ -48,10 +48,13 @@ const globalVars = [
   EnvironmentVariableModel(key: "num", value: "5670000"),
   EnvironmentVariableModel(key: "token", value: "token"),
 ];
+final globalVarsMap = {for (var item in globalVars) item.key: item.value};
 const activeEnvVars = [
   EnvironmentVariableModel(key: "url", value: "api.apidash.dev"),
   EnvironmentVariableModel(key: "num", value: "8940000"),
 ];
+final activeEnvVarsMap = {for (var item in activeEnvVars) item.key: item.value};
+final combinedEnvVarsMap = mergeMaps(globalVarsMap, activeEnvVarsMap);
 
 void main() {
   group("Testing getEnvironmentTitle function", () {
@@ -125,66 +128,45 @@ void main() {
   group("Testing substituteVariables function", () {
     test("Testing substituteVariables with null", () {
       String? input;
-      Map<String, List<EnvironmentVariableModel>> envMap = {};
-      String? activeEnvironmentId;
-      expect(substituteVariables(input, envMap, activeEnvironmentId), null);
+      Map<String, String> envMap = {};
+      expect(substituteVariables(input, envMap), null);
     });
 
     test("Testing substituteVariables with empty input", () {
       String input = "";
-      Map<String, List<EnvironmentVariableModel>> envMap = {};
-      String? activeEnvironmentId;
-      expect(substituteVariables(input, envMap, activeEnvironmentId), "");
+      Map<String, String> envMap = {};
+      expect(substituteVariables(input, envMap), "");
     });
 
     test("Testing substituteVariables with empty envMap", () {
       String input = "{{url}}/humanize/social?num={{num}}";
-      Map<String, List<EnvironmentVariableModel>> envMap = {};
-      String? activeEnvironmentId;
-      String expected = "/humanize/social?num=";
-      expect(substituteVariables(input, envMap, activeEnvironmentId), expected);
+      Map<String, String> envMap = {};
+      String expected = "{{url}}/humanize/social?num={{num}}";
+      expect(substituteVariables(input, envMap), expected);
     });
 
     test("Testing substituteVariables with empty activeEnvironmentId", () {
      String input = "{{url}}/humanize/social?num={{num}}";
-      Map<String, List<EnvironmentVariableModel>> envMap = {
-        kGlobalEnvironmentId: globalVars,
-      };
       String expected = "api.foss42.com/humanize/social?num=5670000";
-      expect(substituteVariables(input, envMap, null), expected);
+      expect(substituteVariables(input, globalVarsMap), expected);
     });
 
    test("Testing substituteVariables with non-empty activeEnvironmentId", () {
      String input = "{{url}}/humanize/social?num={{num}}";
-      Map<String, List<EnvironmentVariableModel>> envMap = {
-        kGlobalEnvironmentId: globalVars,
-        "activeEnvId": activeEnvVars,
-      };
-      String? activeEnvId = "activeEnvId";
       String expected = "api.apidash.dev/humanize/social?num=8940000";
-      expect(substituteVariables(input, envMap, activeEnvId), expected);
+      expect(substituteVariables(input, combinedEnvVarsMap), expected);
    });
 
    test("Testing substituteVariables with incorrect paranthesis", () {
      String input = "{{url}}}/humanize/social?num={{num}}";
-      Map<String, List<EnvironmentVariableModel>> envMap = {
-        kGlobalEnvironmentId: globalVars,
-        "activeEnvId": activeEnvVars,
-      };
-      String? activeEnvId = "activeEnvId";
       String expected = "api.apidash.dev}/humanize/social?num=8940000";
-      expect(substituteVariables(input, envMap, activeEnvId), expected);
+      expect(substituteVariables(input, combinedEnvVarsMap), expected);
    });
 
    test("Testing substituteVariables function with unavailable variables", () {
      String input = "{{url1}}/humanize/social?num={{num}}";
-      Map<String, List<EnvironmentVariableModel>> envMap = {
-        kGlobalEnvironmentId: globalVars,
-        "activeEnvId": activeEnvVars,
-      };
-      String? activeEnvironmentId = "activeEnvId";
-      String expected = "/humanize/social?num=8940000";
-      expect(substituteVariables(input, envMap, activeEnvironmentId), expected);
+      String expected = "{{url1}}/humanize/social?num=8940000";
+      expect(substituteVariables(input, combinedEnvVarsMap), expected);
    });
   });
 
@@ -251,9 +233,9 @@ void main() {
     };
     String? activeEnvironmentId = "activeEnvId";
     const expected = HttpRequestModel(
-      url: "/humanize/social",
+      url: "{{url1}}/humanize/social",
       headers: [
-        NameValueModel(name: "Authorization", value: "Bearer "),
+        NameValueModel(name: "Authorization", value: "Bearer {{token1}}"),
       ],
       params: [
         NameValueModel(name: "num", value: "8940000"),