Merge branch 'foss42:main' into application-dashbot-udhay

This commit is contained in:
Udhay Adithya
2025-04-04 01:08:35 +05:30
committed by GitHub
71 changed files with 5631 additions and 642 deletions

assets/api_server.json (new file; diff suppressed because one or more lines are too long)

assets/files.json (new file; diff suppressed because one or more lines are too long)

assets/generate.json (new file; diff suppressed because one or more lines are too long)


@@ -0,0 +1,68 @@
# GSoC Proposal: API Explorer for APIDash
## About
- **Full Name:** Chinmay Joshi
- **Email:** chinmayjoshi003@gmail.com
- **Phone:** +91 9028900343
- **Discord Handle:** _chinmay03
- **GitHub Profile:** [GitHub](https://github.com/chinmayjoshi03)
- **LinkedIn:** [LinkedIn](https://www.linkedin.com/in/chinmay-joshi-34115827b)
- **Time Zone:** Asia/Kolkata (IST)
- **Resume:** [Resume Link](https://drive.google.com/file/d/1_sE_Mi1iUyENXDcKyqHO22hs4ey_HvJr/view?usp=drive_link)
## University Info
- **University Name:** Savitribai Phule Pune University
- **Program:** Bachelor of Engineering (IT)
- **Year:** 2nd Year (2025 Batch)
- **Expected Graduation Date:** 2027
## Motivation & Past Experience
### 1. FOSS Contributions
I haven't contributed to FOSS projects yet, but I recently downloaded the APIDash codebase to my local machine and started exploring it to understand its structure and functionality.
### 2. Proud Achievement
One of my proudest achievements is the **KissanYukt project**—a mobile app developed for farmers to connect directly with consumers, eliminating middlemen and ensuring fair pricing. This project led us to victory in the **Smart India Hackathon**, showcasing our ability to develop impactful solutions for real-world problems.
### 3. Challenges that Motivate Me
I enjoy solving problems that require creativity, logical thinking, and a direct impact on people's lives. Whether it's optimizing a process, improving user experience, or automating repetitive tasks, I find motivation in challenges that push me to learn, adapt, and innovate.
### 4. GSoC Commitment
I will be working part-time on GSoC, as I am a 2nd-year student and need to balance my studies alongside the project.
### 5. Syncing with Mentors
Yes, I am open to regular sync-ups with project mentors to ensure steady progress.
### 6. Interest in APIDash
APIDash stands out because of its lightweight, Flutter-based architecture, making it a highly efficient alternative to tools like Postman. I am particularly excited about the potential of expanding its modular design, enhancing API discovery, and integrating AI-based automation for better API management.
### 7. Project Improvements
While APIDash provides a great developer experience, some areas for improvement include:
- Improving UI responsiveness on lower-end devices.
- Expanding API import/export options for better interoperability.
- Enhancing API security validation and error handling mechanisms.
## Project Proposal Information
### Proposal Title: API Explorer for APIDash
### Abstract
This project aims to enhance the APIDash user experience by integrating a curated library of popular and publicly available APIs. The feature will allow users to discover, browse, search, and import API endpoints into their workspace for seamless testing. It will include pre-configured API request templates with authentication details, sample payloads, and expected responses, reducing manual setup time.
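To make the pre-configured template idea concrete, here is a minimal Dart sketch of a possible `ApiTemplate` model; the class and field names are illustrative assumptions, not existing APIDash code.

```dart
// Hypothetical data model for a curated API template (names are illustrative).
class ApiTemplate {
  final String name;          // e.g. "OpenWeatherMap: Current Weather"
  final String category;      // e.g. "weather"
  final String method;        // GET, POST, ...
  final Uri url;
  final Map<String, String> headers;      // including auth placeholders
  final Map<String, dynamic>? sampleBody;
  final Map<String, dynamic>? expectedResponse;

  const ApiTemplate({
    required this.name,
    required this.category,
    required this.method,
    required this.url,
    this.headers = const {},
    this.sampleBody,
    this.expectedResponse,
  });
}

void main() {
  final template = ApiTemplate(
    name: 'OpenWeatherMap: Current Weather',
    category: 'weather',
    method: 'GET',
    url: Uri.parse(
        'https://api.openweathermap.org/data/2.5/weather?q=London&appid=<API_KEY>'),
    headers: {'Accept': 'application/json'},
    expectedResponse: {'weather': [], 'main': {}},
  );
  print('${template.method} ${template.url}');
}
```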
## Weekly Timeline
| **Week** | **Focus** | **Key Deliverables & Achievements** |
|-------------|--------------------------------------|-------------------------------------|
| **Week 1-2** | Community Bonding & Planning | Engage with the APIDash team, review the existing codebase, and gather insights. Define project scope, objectives, and a clear technical roadmap. **Outcome:** Detailed project plan and initial architecture design. |
| **Week 3-4** | API Parsing & Categorization | Develop a backend parser to extract API endpoints, request methods, and metadata from OpenAPI/HTML files. Implement an auto-tagging mechanism to categorize APIs effectively. **Outcome:** A functional parser that outputs structured API data. |
| **Week 5-6** | Data Enrichment & Template Generation | Build the automation pipeline to enrich parsed API data—adding details like sample payloads, authentication requirements, and expected responses. Generate pre-configured API request templates automatically. **Outcome:** End-to-end pipeline for creating API templates. |
| **Week 7-8** | UI Integration & Search Functionality | Integrate the backend pipeline with APIDash's frontend. Develop a user-friendly interface for browsing, searching, and importing APIs into the workspace. **Outcome:** Interactive API Explorer UI with smooth API discovery and import features. |
| **Week 9-10**| Community Features & Optimization | Implement optional community features such as user ratings and reviews, and enhance GitHub integration for community contributions. Fine-tune performance and usability based on user feedback. **Outcome:** A more engaging and optimized tool for API discovery. |
| **Week 11-12**| Testing, Documentation & Finalization | Conduct comprehensive end-to-end testing, resolve any issues, and prepare detailed documentation and user guides. **Outcome:** A robust, well-documented API Explorer ready for deployment and community use. |
## Conclusion
The **API Explorer for APIDash** project aims to enhance user experience by providing a seamless way to discover, browse, and integrate publicly available APIs. By automating API parsing, categorization, and enrichment, the project will reduce onboarding time and improve efficiency for developers.
This project aligns well with my skills and interests, and I am eager to contribute to the APIDash ecosystem through this project.


@@ -0,0 +1,95 @@
# API Dash - GSoC 2025 Proposal
## About
1. Full Name: Jenny (An-Chieh) Cheng
2. Contact info (email, phone, etc.): jennyc28@uci.edu
3. Discord handle: jennyyyy0954
4. GitHub profile link: https://github.com/Jennyyyy0212
5. LinkedIn: https://www.linkedin.com/in/an-chieh-cheng/
6. Time zone: Pacific Time / Los Angeles (GMT-7)
7. Link to a resume: [Link](https://drive.google.com/file/d/1wN6gdueKzCLeDX9STjcd0X0T4oS76mKl/view?usp=sharing)
## University Info
1. University name: University of California, Irvine
2. Program you are enrolled in (Degree & Major/Minor): Master in Software Engineering
3. Year: 2024
4. Expected graduation date: Dec 2025
## Motivation & Past Experience
Short answers to the following questions (Add relevant links wherever you can):
1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?
No, but I am eager to start contributing.
2. What is your one project/achievement that you are most proud of? Why?
One project I'm most proud of is a [Movie Tracking application](https://github.com/Jennyyyy0212/MovieLog) that I developed on my own. It involved API creation, frontend development, and database management. This project was challenging and rewarding because I had to build everything from scratch, which required me to understand and implement each part, from designing the UI to creating APIs and managing the database.
3. What kind of problems or challenges motivate you the most to solve them?
Problems that require creative thinking, automate workflows, and improve the user experience motivate me the most.
4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?
Yes, I will work on GSoC full-time.
5. Do you mind regularly syncing up with the project mentors?
No, I don't mind. I am happy to sync up with the mentors and get feedback from them.
6. What interests you the most about API Dash?
I'm interested in API Dash because of its ability to integrate API creation and testing. It serves as an open-source alternative to tools like Postman and Insomnia. In particular, it aligns with my interest in improving developer workflows by providing a clean interface and efficient API management. The test preview feature also makes it a valuable tool for developers.
7. Can you mention some areas where the project can be improved?
- Integrate with VS Code or other IDEs as an extension
- Allow more custom themes or layouts in the UI, such as a dark theme
- Enable users to schedule or automate API testing (e.g., daily, weekly, or after a code change) instead of testing APIs manually
## Project Proposal Information
### 1. Proposal Title
API Explorer - Library for API templates
### 2. Abstract
This project aims to support API creation and discovery by letting users easily discover, browse, search, and import popular, publicly available APIs. Developers will be able to quickly access pre-configured API templates with authentication details, sample data, and expected responses, so they don't need to set them up manually. The APIs will be organized into categories like AI, finance, weather, and social media, making it easy for users to find what they need. The backend will automate the process of parsing OpenAPI and HTML files, tagging APIs into relevant categories, enriching the data, and creating templates. Features like user ratings, reviews, and community contributions via GitHub will help keep the resources up to date and relevant.
### 3. Detailed Description
#### Problem Statement
Currently, developers use API Dash only when they already have their own APIs. Creating APIs or searching for public APIs manually can take time, and configuring requests often involves mistakes, especially for those new to the service.
#### Proposed Solution
- Parses API documentation (OpenAPI YAML or HTML) to automatically extract key components like endpoints, authentication methods, parameters, and responses (see the sketch after this list).
- Generates pre-configured API request templates (GET, POST, PUT, DELETE) with the necessary details and information
- Categorizes APIs based on functionality
- Provides a search and filter system to help developers quickly find APIs by category, name, or tags.
- Supports user feedback features, so users can rate and comment
- Imports APIs directly into the workspace, reducing setup time and improving the workflow for developers.
- Fetches the latest APIs periodically
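As a minimal sketch of the parsing step, assuming an OpenAPI 3.x document that has already been decoded into a Dart map (the helper name `extractEndpoints` is hypothetical and ignores parameters and auth for brevity):

```dart
import 'dart:convert';

/// Extracts (method, path, summary) triples from a decoded OpenAPI 3.x document.
/// Simplified sketch; real documents also need parameter and auth handling.
List<Map<String, String>> extractEndpoints(Map<String, dynamic> openApi) {
  final endpoints = <Map<String, String>>[];
  final paths = openApi['paths'] as Map<String, dynamic>? ?? {};
  paths.forEach((path, operations) {
    (operations as Map<String, dynamic>).forEach((method, op) {
      endpoints.add({
        'method': method.toUpperCase(),
        'path': path,
        'summary': (op as Map<String, dynamic>)['summary']?.toString() ?? '',
      });
    });
  });
  return endpoints;
}

void main() {
  const doc = '{"paths": {"/weather": {"get": {"summary": "Current weather"}}}}';
  final endpoints = extractEndpoints(jsonDecode(doc) as Map<String, dynamic>);
  print(endpoints); // [{method: GET, path: /weather, summary: Current weather}]
}
```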
### 4. Weekly Timeline
| Week | Goals/Activities | Deliverables |
|------|-------------------------------------------------------------|-----------------------------------------------------------------|
| 1 | Set up project environment and repositories. Familiarize with API Dash architecture. Define project requirements and scope. Begin early design concepts for the UI/UX. | Project repository initialized. Environment and tools set up. Project scope and requirements document. Initial UI/UX design concepts. |
| 2 | Research OpenAPI and HTML parsing libraries. Choose parsing tools and libraries. Continue refining UI/UX design based on project goals. | List of tools and libraries for parsing and scraping APIs. Refined UI/UX design wireframes. |
| 3 | Begin implementing OpenAPI parsing functionality. Set up the database schema for API metadata. Finalize initial UI/UX wireframes and layout design. | Basic OpenAPI parsing for extracting endpoints, parameters. Initial database schema design for storing API info. Finalized UI/UX wireframes. |
| 4 | Complete OpenAPI parser and test with sample files. Begin categorization of APIs based on keywords. Start designing the UI components for the API Explorer interface. | OpenAPI parser code with test results. Categorization logic for APIs. Early-stage UI component designs. |
| 5 | Start implementing HTML scraper for APIs without OpenAPI. Test parser with real-world API documentation. Iterate on UI/UX design with feedback from initial API interface mockups. | HTML scraping tool for parsing API documentation. Test results from HTML scraping. UI/UX design iteration based on feedback. |
| 6 | Implement enrichment features (authentication, versioning, etc.). Develop automatic categorization based on descriptions. Continue refining UI/UX design, focusing on API explorer features. | Enrichment logic for API metadata. Automated categorization model or keyword-based system. Refined UI/UX design for API explorer features. |
| 7 | Start template generation for API requests (GET, POST). Add sample payloads and expected responses to templates. Continue refining UI components and user flow. | Template generation code for API requests. Sample request/response templates for testing. Refined UI components and user flow design. |
| 8 | Refine template generation for all API methods (PUT, DELETE). Implement authentication template generation (e.g., API keys). Finalize the UI/UX design for the entire API Explorer workflow. | Complete set of API request templates. Authentication logic integrated into templates. Finalized UI/UX design for API Explorer. |
| 9 | Begin integration of user feedback system (ratings, reviews). Test user feedback system with a sample set of APIs. Start implementing UI for user feedback and ratings. | User feedback feature integrated into API Dash. UI for user ratings and reviews. |
| 10 | Implement API search and filter functionality. Integrate categorized and enriched APIs into the workspace. Finalize the UI for the search/filter and API results display. | Search and filter system for APIs. API Explorer interface integrated with categorized APIs. Finalized UI for search and results display. |
| 11 | Finalize real-time testing features (API execution in UI). Perform end-to-end testing and debugging. Conduct user testing for UI/UX feedback. | Real-time API testing feature completed. Final bug fixes and refinements. User feedback on UI/UX design. |
| 12 | Conduct final testing and gather feedback. Prepare documentation for the project. Deploy and release API Explorer feature. Finalize UI/UX design adjustments based on testing feedback. | Final version of API Explorer feature. Complete documentation (API usage, setup, etc.). API Explorer released and deployed. Finalized UI/UX design. |


@@ -0,0 +1,167 @@
# GSOC Proposal for DashBot
## About Me
**Full Name:** Mrudul Killedar
**Email:** mrudulkilledar111@gmail.com
**Phone:** +91 7489685683
**Discord Handle:** Mrudul (username: mk4104)
**GitHub Profile:** https://github.com/Mrudul111
**LinkedIn:** www.linkedin.com/in/mrudul-killedar-5b0121245
**Time Zone:** GMT + 5:30
**Resume:** https://drive.google.com/file/d/1ICvI5h8FP5cTtMIvQcu918Adc5JP1DQb/view?usp=share_link
## University Information
**University Name:** Vellore Institute of Technology
**Program:** B.Tech in Computer Science Engineering
**Year:** 2025
**Expected Graduation Date:** July 2025
## Motivation & Past Experience
**1. Have you worked on or contributed to a FOSS project before?**
Yes, I have contributed to a FOSS project, i.e., APIDash:
- Beautify JSON and Highlight JSON - https://github.com/foss42/apidash/pull/595
- Share button functionality - https://github.com/foss42/apidash/pull/571#event-16324056931
- Homebrew Installation Guide - https://github.com/foss42/apidash/pull/566#event-16282262849
- Multiple-Model in DashBot - https://github.com/foss42/apidash/pull/704
**2. What is your one project/achievement that you are most proud of? Why?**
I am most proud of two projects. One is Parking24, an app built to solve parking problems in the UAE market; I was assigned this project during my internship at Wisho (now called Hyve AI Labs).
I am proud of it because it adds real-world value and solves a very real problem. I have also been working on a machine-learning model to predict cybersickness using an SNN; we are currently achieving 87.234% accuracy, better than the existing model's 76.11%, thanks to our improved approach to data cleaning.
**3. What kind of problems or challenges motivate you the most to solve them?**
I am most motivated by solving complex problems that challenge me to think in new ways and push my boundaries. I enjoy tackling problems that I have never encountered before, as they provide an opportunity to learn, explore innovative solutions, and develop a deeper understanding of different technologies. The thrill of breaking down a difficult problem, analyzing it from different angles, and coming up with an effective solution is what drives me the most.
**4. Will you be working on GSoC full-time?**
Yes, I will be working full-time on my GSoC project.
**5. Do you mind regularly syncing up with the project mentors?**
Not at all! I am happy to have regular sync-ups with my mentors to ensure smooth progress and alignment with project goals.
**6. What interests you the most about API Dash?**
API Dash is an innovative tool that simplifies API testing and monitoring. I personally felt that other apps are bloated and their performance is clunky; API Dash, on the other hand, is smooth and aesthetically pleasing, and its use of AI sets it apart from the rest of the API testing platforms. I have personally shifted to API Dash for my backend testing. I would love to contribute to API Dash because I feel this is a project that adds great value to the developer community.
**7. Can you mention some areas where the project can be improved?**
Some areas where API Dash can be improved include:
- **UI for DashBot:** The UI currently is very basic and lacks a professional approach.
- **Responses:** The response that is generated by clicking the buttons is on-point but the bot is not conversational enough.
## Project Proposal Information
### **Proposal Title:** DashBot
### **Abstract**
DashBot is an AI-powered assistant designed to supercharge developer productivity within API Dash by automating repetitive tasks, improving API debugging, and providing intelligent recommendations. By leveraging advanced large language models (LLMs), DashBot enables developers to interact with APIs using natural language, making API testing, debugging, documentation, and integration significantly more efficient and intuitive.
### **Detailed Description**
DashBot is designed to be an AI-powered assistant for API Dash that helps developers automate tedious tasks, follow best practices, and obtain contextual suggestions via natural-language input. First, we need to finish the tasks remaining from the first prototype, which include:
1. Generate plots & visualizations for API responses, along with the ability to customize them
2. Generate API integration frontend code for frontend frameworks like React, Flutter, etc.
This project extends its capabilities by adding the following advanced features:
#### **1. AI-Powered Code Error Detection & Auto-Fix**
- Detect syntax and logical errors in API requests and integration code.
- Provide human-readable explanations and suggest one-click fixes.
- Ensure best practices in authentication, rate limiting, and error handling.
#### **2. Multi-Model Support & Fine-Tuning**
- Enable users to switch between different LLMs (GPT, Llama, Claude, Gemini, etc.).
- Provide on-device LLM support for private inference.
- Allow user-defined prompt fine-tuning for personalized suggestions.
#### **3. Enhanced UI/UX for API Dash**
Improve the overall user experience of API Dash by making the interface more intuitive, visually appealing, and developer-friendly.
- **Modern UI Elements:** Redesigned buttons, input fields, and layouts for a clean and professional look.
- **Dark & Light Mode Support:** Seamless theme switcher for better accessibility.
- **Improved API Request/Response Visualization:** Better syntax highlighting, collapsible sections, and JSON tree views for responses.
- **Enhanced Error Debugging UI:** Clear, structured error messages with AI-powered suggestions for fixes.
- **Keyboard Shortcuts & Command Palette:** Faster workflows with keyboard commands.
#### **4. API Documentation for Tech Stack Integration**
- Provide **Step-by-Step Guides** for React, Flutter, Vue, Express.js, and more, adapting the response to the coding practices used by the user.
- Allow **Markdown and PDF export** for easy sharing.
### **Packages Used**
This project will utilize the following packages to implement the proposed features (a small backend-abstraction sketch follows this list):
- [`anthropic_sdk_dart`](https://pub.dev/packages/anthropic_sdk_dart) - Claude integration
- [`googleai_dart`](https://pub.dev/packages/googleai_dart) - Google AI model support
- [`openai_dart`](https://pub.dev/packages/openai_dart) - OpenAI API access
- [`ollama_dart`](https://pub.dev/packages/ollama_dart) - Local inference via Ollama
- [`fl_chart`](https://pub.dev/packages/fl_chart) - API response visualization
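To keep these providers interchangeable (as proposed under Multi-Model Support), here is a rough plain-Dart sketch of a backend-agnostic interface; the class and method names are assumptions, and real implementations would wrap the packages listed above.

```dart
// Hypothetical abstraction so DashBot can switch between LLM providers.
abstract class LlmBackend {
  String get name;
  Future<String> complete(String prompt);
}

/// Placeholder backend; a real one would call openai_dart, ollama_dart, etc.
class EchoBackend implements LlmBackend {
  @override
  final String name = 'echo';

  @override
  Future<String> complete(String prompt) async => 'echo: $prompt';
}

class DashBotEngine {
  DashBotEngine(this._backends, this._active);
  final Map<String, LlmBackend> _backends;
  String _active;

  void switchModel(String name) {
    if (!_backends.containsKey(name)) {
      throw ArgumentError('Unknown backend: $name');
    }
    _active = name;
  }

  Future<String> ask(String prompt) => _backends[_active]!.complete(prompt);
}

void main() async {
  final engine = DashBotEngine({'echo': EchoBackend()}, 'echo');
  print(await engine.ask('Explain this 404 response'));
}
```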
![light](https://github.com/user-attachments/assets/3f28dca1-8886-4e50-a52e-525640e4816e)
![dark](https://github.com/user-attachments/assets/ac6f318e-714f-463a-b981-25fd9fc9f768)
![iPhone SE - 2](https://github.com/user-attachments/assets/8a26c3a5-0fed-448f-8c1b-1bec9a56d734)
![iPhone SE - 4](https://github.com/user-attachments/assets/bb702a3e-8b05-432a-b6a2-8b3db9e2bf79)
![Blank diagram](https://github.com/user-attachments/assets/4b24ec3a-4a2f-4d69-83fe-78129cf94938)
### **Figma Link**
https://www.figma.com/design/vzpQ7xzwwmx2G92VVyF4aw/GSOC-Proposal?node-id=0-1&t=7D6Njm8Rr2x6VCkx-1
## **Weekly Timeline**
### **Week 1: Initial Research & Understanding API Dash**
**Tasks:**
- Study API Dash's existing codebase.
- Research AI assistant implementations.
- Set up the development environment.
### **Week 2: Finalizing Tech Stack & Initial UI Prototyping**
**Tasks:**
- Decide on AI models & packages to be integrated in DashBot.
- Finalize the UI wireframe.
- Create a separate development branch for DashBot features.
### **Week 3-4: UI/UX Enhancements**
**Tasks:**
- Implement DashBot Panel UI.
- Add theme support (light & dark mode).
- Improve API response display with syntax highlighting.
- Implement API request history tracking and auto-suggestions.
### **Week 5: Frontend Code Generation**
**Tasks:**
- Generate API integration code for React, Flutter, Vue.
- Allow one-click copy of generated code.
### **Week 6: API Response Visualization**
**Tasks:**
- Implement data visualization for API responses.
- Add customization options for plots.
**Deliverable:** Interactive API response visualizations.
### **Week 7: AI-Powered Code Error Detection**
**Tasks:**
- Implement AI-powered debugging for API requests.
- Ensure API best practices compliance.
- Optimize AI model for faster debugging.
### **Week 8: Multi-Model Support & Fine-Tuning**
**Tasks:**
- Implement model switching UI.
- Connect multiple AI models via API.
- Integrate on-device AI support.
- Enable custom prompt fine-tuning.
### **Week 9: Documentation for Integrating APIs with Specific Tech Stacks**
**Tasks:**
- Generate API integration documentation for multiple languages (Python, JavaScript, Java, Flutter, etc.).
- Create UI for selecting the tech stack and displaying relevant API documentation.
### **Week 10-11: Testing, Debugging & Optimizations**
**Tasks:**
- Conduct unit & integration testing.
- Fix bugs & optimize performance.
- Gather feedback from mentors & community.
### **Week 12: Documentation, Final Touches & Submission**
**Tasks:**
- Write detailed documentation & API reference.
- Create a demo video & presentation.
- Prepare & submit the final report to GSoC.
---


@@ -0,0 +1,375 @@
# GSoC 2025 Proposal: AI UI Designer for APIs
## About
**Full Name**: Ning Wei
**Contact Info**: Allenwei0503@gmail.com
**Discord Handle**: @allen_wn
**GitHub Profile**: [https://github.com/AllenWn](https://github.com/AllenWn)
**LinkedIn**: [https://www.linkedin.com/in/ning-wei-allen0503](https://www.linkedin.com/in/ning-wei-allen0503)
**Time Zone**: UTC+8
**Resume**: https://drive.google.com/file/d/1Zvf1IhKju3rFfnDsBW1WmV40lz0ZMNrD/view?usp=sharing
## University Info
**University**: University of Illinois at Urbana-Champaign
**Program**: B.S. in Computer Engineering
**Year**: 2nd year undergraduate
**Expected Graduation**: May 2027
---
## Motivation & Past Experience
1. **Have you worked on or contributed to a FOSS project before?**
Not yet officially, but I've been actively exploring open source projects like API Dash and contributing via discussion and design planning. I am currently studying the API Dash repository and developer guide to prepare for my first PR.
2. **What is your one project/achievement that you are most proud of? Why?**
I'm proud of building an AI-assisted email management app using Flutter and Go, which automatically categorized and responded to emails using ChatGPT API. It gave me end-to-end experience in integrating APIs, generating dynamic UIs, and designing developer-friendly tools.
3. **What kind of problems or challenges motivate you the most to solve them?**
I enjoy solving problems that eliminate repetitive work for developers and improve workflow productivity — especially through automation and AI integration.
4. **Will you be working on GSoC full-time?**
Yes. I will be dedicating full-time to this project during the summer.
5. **Do you mind regularly syncing up with the project mentors?**
Not at all — I look forward to regular syncs and feedback to align with the project vision.
6. **What interests you the most about API Dash?**
API Dash is focused on improving the developer experience around APIs, which is something I care deeply about. I love the vision of combining UI tools with AI assistance in a privacy-first, extensible way.
7. **Can you mention some areas where the project can be improved?**
- More intelligent code generation from API response types
- Drag-and-drop UI workflow
- Visual previews and theming customization
- Integration with modern LLMs for field-level naming and layout suggestions
---
## Project Proposal Information
### Proposal Title
AI UI Designer for APIs
### Relevant Issues: #617
### Abstract
This project proposes the development of an AI-powered UI generation assistant within the API Dash application. The tool will automatically analyze API responses (primarily in JSON format), infer their structure, and dynamically generate Flutter-based UI components such as tables, forms, or cards. Developers will be able to preview, customize, and export these layouts as usable Dart code. By combining rule-based heuristics with optional LLM (e.g., Ollama, GPT) enhancements, the feature aims to streamline API data visualization and speed up frontend prototyping. The generated UI will be clean, modular, and directly reusable in real-world Flutter applications.
---
### Detailed Description
This project introduces a new feature into API Dash: AI UI Designer — an intelligent assistant that takes an API response and converts it into dynamic UI components, allowing developers to quickly visualize, customize, and export frontend code based on live API data. It will analyze the data and suggest corresponding UI layouts using Dart/Flutter widgets such as `DataTable`, `Card`, or `Form`.
#### Step 1: Parse API Response Structure
The first step is to understand the structure of the API response, which is usually in JSON format. The goal is to transform the raw response into an intermediate schema that can guide UI generation.
- Most API responses are either:
- Object: A flat or nested key-value map.
- Array of Objects: A list of items, each following a similar structure.
- Understanding the structure allows us to decide:
- What kind of UI component fits best (e.g., table, form, card).
- How many fields to show, and how deep the nesting goes.
- Common field types (string, number, boolean, array, object) impact widget selection.
- Special patterns (e.g., timestamps, emails, URLs) can be detected and used to enhance UI.
##### Implementation Plan
- Start with JSON
- Initially only support JSON input, as it's the most common.
- Use Dart's built-in dart:convert package to parse the response.
- Build a Recursive Schema Parser (see the sketch after this plan)
- Traverse the JSON response recursively.
- For each node (key), determine:
- Type: string, number, bool, object, array
- Optional metadata (e.g., nullability, format hints)
- Depth and parent-child relationships
- Output a tree-like structure such as:
```json
{
"type": "object",
"fields": [
{"key": "name", "type": "string"},
{"key": "age", "type": "number"},
{"key": "profile", "type": "object", "fields": [...]},
{"key": "posts", "type": "array", "itemType": "object", "fields": [...]}
]
}
```
- Detect Patterns (Optional AI Help)
- Apply heuristics or regex to detect:
- Timestamps: ISO strings, epoch time
- Prices: numeric + currency signs
- Boolean flags: isActive, enabled, etc.
- This helps in choosing smart widgets (e.g., Switch for booleans).
- Create a Schema Class
- Implement a Dart class (e.g., ParsedSchema) to store this structure.
- This class will be passed into the UI generation logic in Step 2.
- Add Support for Validation
- Check if response is malformed or inconsistent (e.g., arrays with mixed types).
- If invalid, show fallback UI or error.
- Future Scope
- Add XML support by using XML parsers.
- Extend the parser to allow user overrides/custom schema mapping.
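A minimal sketch of the recursive parser described in this plan, using only `dart:convert`; the `ParsedNode` class is a stand-in for the proposed `ParsedSchema`, and its exact shape is an assumption.

```dart
import 'dart:convert';

/// Simplified schema node (a stand-in for the proposed ParsedSchema class).
class ParsedNode {
  ParsedNode(this.type, {this.fields = const {}, this.itemType});
  final String type;                     // object, array, string, number, bool, null
  final Map<String, ParsedNode> fields;  // populated for objects
  final ParsedNode? itemType;            // populated for arrays

  @override
  String toString() => type == 'object'
      ? '{${fields.entries.map((e) => '${e.key}: ${e.value}').join(', ')}}'
      : (type == 'array' ? '[$itemType]' : type);
}

ParsedNode parseValue(dynamic value) {
  if (value is Map<String, dynamic>) {
    return ParsedNode('object',
        fields: value.map((k, v) => MapEntry(k, parseValue(v))));
  }
  if (value is List) {
    return ParsedNode('array',
        itemType: value.isEmpty ? ParsedNode('unknown') : parseValue(value.first));
  }
  if (value is String) return ParsedNode('string');
  if (value is bool) return ParsedNode('bool');
  if (value is num) return ParsedNode('number');
  return ParsedNode('null');
}

void main() {
  const response = '{"name": "Ada", "age": 36, "posts": [{"title": "Hi"}]}';
  print(parseValue(jsonDecode(response)));
  // {name: string, age: number, posts: [{title: string}]}
}
```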
#### Step 2: Design AI Agent Logic
This step involves designing the core logic that maps the parsed API response schema to corresponding UI components. The AI agent will follow a hybrid approach: combining rule-based mapping with optional LLM-powered enhancement for smarter UI suggestions.
##### 2.1 Rule-Based Mapping System
To ensure fast and consistent results, we will first implement a simple rule-based system that maps specific JSON structures to Flutter widgets. This allows us to generate a basic layout even in environments where LLMs are not available or desirable.
Example rules:
- If the root is an array of objects → generate a DataTable
- If the object contains mostly key-value pairs → generate a Card or Form
- If fields include timestamps or numeric trends → suggest LineChart
- If keys match common patterns like email, phone, price, etc. → render with appropriate widgets (TextField, Dropdown, Currency formatter)
These mappings will be implemented using Dart classes and can be loaded from a YAML/JSON config file to support extensibility.
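A possible shape for the rule-based mapper, assuming the schema representation from Step 1; the returned widget names are just labels at this stage, not actual Flutter constructors.

```dart
/// Very small rule set mapping a parsed schema to a suggested widget kind.
/// The rules mirror the examples above and could later be loaded from YAML/JSON.
String suggestWidget(Map<String, dynamic> schema) {
  final type = schema['type'];
  if (type == 'array' && schema['itemType'] == 'object') {
    return 'DataTable';
  }
  if (type == 'object') {
    final fields = (schema['fields'] as List?) ?? const [];
    final hasTimestamp = fields.any((f) => f['type'] == 'timestamp');
    return hasTimestamp ? 'Card + LineChart candidate' : 'Card/Form';
  }
  return 'Text';
}

void main() {
  print(suggestWidget({'type': 'array', 'itemType': 'object'})); // DataTable
  print(suggestWidget({
    'type': 'object',
    'fields': [
      {'key': 'price', 'type': 'number'},
      {'key': 'created_at', 'type': 'timestamp'},
    ],
  })); // Card + LineChart candidate
}
```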
##### 2.2 LLM-Powered Enhancements
To go beyond static rules and provide smarter UI suggestions, we will integrate an LLM (e.g., Ollama locally or GPT via API). The LLM will receive the parsed schema and be prompted to:
- Suggest the layout structure (vertical list, tabs, grouped cards, etc.)
- Label fields more intuitively (e.g., product_id → "Product ID")
- Reorder fields based on usage context
- Suggest default values, placeholder text, or icons
Prompt Example:
```json
{
"task": "Generate UI plan for API response",
"schema": {
"type": "object",
"fields": [
{"name": "username", "type": "string"},
{"name": "email", "type": "string"},
{"name": "created_at", "type": "timestamp"}
]
}
}
```
Expected LLM output:
```json
{
"layout": "vertical_card",
"fields": [
{"label": "Username", "widget": "TextField"},
{"label": "Email", "widget": "TextField"},
{"label": "Signup Date", "widget": "DateDisplay"}
]
}
```
##### 2.3 Fallback and Configuration
- If LLM call fails or is disabled (e.g., offline use), the system falls back to rule-based logic.
- The user can toggle LLM mode in settings.
- The response from LLM will be cached for repeat inputs to reduce latency and cost.
##### 2.4 Customization Layer (Optional)
After layout generation, users will be able to:
- Preview different layout suggestions (from rule-based vs. LLM)
- Select a layout and make field-level changes (hide/show, rename, rearrange)
- Submit feedback for improving future suggestions (optional)
#### Step 3: Generate and Render UI in Flutter
Once the layout plan is decided (via rule-based mapping or LLM suggestion), the system will dynamically generate corresponding Flutter widgets based on the API response structure and content types.
##### 3.1 Widget Mapping and Construction
- For each field or group in the parsed schema, we map it to a predefined Flutter widget. Example mappings:
- List of Objects → DataTable
- Simple key-value object → Card, Column with Text widgets
- String fields → TextField (if editable), or SelectableText
- Number series over time → Line chart (e.g., using fl_chart package)
- The widget structure will be built using standard Dart code with StatefulWidget or StatelessWidget, depending on interactivity.
Implementation Plan:
- Create a WidgetFactory class that receives a layout plan and schema, and returns a Widget tree (see the sketch after this plan).
- This factory will follow a clean design pattern to make it testable and modular.
- Use Flutter's json_serializable or custom classes to deserialize API responses into displayable values.
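A minimal Flutter sketch of what such a `WidgetFactory` could look like; the layout-plan format is an assumption, and a production version would cover many more widget kinds.

```dart
import 'package:flutter/material.dart';

/// Hypothetical factory that turns a simple layout plan into a widget tree.
class WidgetFactory {
  Widget build(Map<String, dynamic> plan) {
    final fields = (plan['fields'] as List).cast<Map<String, dynamic>>();
    return Card(
      margin: const EdgeInsets.all(16),
      child: Column(
        mainAxisSize: MainAxisSize.min,
        children: [
          for (final field in fields)
            ListTile(
              title: Text(field['label'] as String),
              subtitle: Text('${field['value']}'),
            ),
        ],
      ),
    );
  }
}

void main() {
  final preview = WidgetFactory().build({
    'layout': 'vertical_card',
    'fields': [
      {'label': 'Username', 'value': 'ada'},
      {'label': 'Email', 'value': 'ada@example.com'},
    ],
  });
  runApp(MaterialApp(home: Scaffold(body: Center(child: preview))));
}
```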
##### 3.2 Dynamic Rendering in the App
- The generated widget tree will be rendered in a dedicated “AI UI Preview” pane inside API Dash.
- The rendering will be fully dynamic: when the schema or layout changes, the UI preview updates in real time.
- This pane will support:
- Light customization like toggling fields, reordering, hiding/showing
- Live data preview using the actual API response
Technical Flow:
- When user clicks "AI UI Designer", a modal or new route opens with the UI preview panel.
- This panel will:
- Show the raw schema & layout (editable if needed)
- Render the widget tree using Flutter's widget system
- Any user adjustments will re-trigger the widget regeneration and re-render.
##### 3.3 Preview and Debugging Tools
- Add a “Developer Mode” that shows:
- Schema tree
- Widget mapping details
- Generated Dart code (read-only)
- This helps with debugging and refining layout logic.
##### 3.4 Scalability Considerations
- To keep UI rendering responsive:
- Use lazy-loading for large JSON arrays (e.g., scrollable tables)
- Avoid deep nesting: limit UI depth or use ExpansionTile for hierarchical views
- Support pagination if list is too long
By the end of this step, users should be able to preview their API response as a fully functional, dynamic UI inside API Dash — without writing a single line of Flutter code.
#### Step 4: Export UI Code
Once the user is satisfied with the generated and customized UI layout, the tool should allow them to export the UI as usable Flutter code, so it can be directly reused in their own projects. This step focuses on transforming the dynamic widget tree into clean, readable Dart code and offering convenient export options.
##### 4.1 Code Generation Pipeline
To generate Flutter code dynamically, we will:
- Traverse the internal widget tree (from Step 3)
- For each widget, generate corresponding Dart code using string templates
- Example: a DataTable widget will generate its DataTable constructor and children rows
- Use indentation and formatting to ensure readability
Implementation Plan:
- Create a CodeGenerator class responsible for converting widget definitions into raw Dart code strings (see the sketch after this plan).
- Use prebuilt templates for common components: Card, Column, DataTable, etc.
- Handle nested widgets recursively to maintain structure.
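A small sketch of the string-template idea for the proposed `CodeGenerator`; the input format is an assumption and only a trivial `Card` case is covered.

```dart
/// Generates a tiny Flutter widget from a field list using string templates.
/// A real generator would add imports, handle nesting recursively, and run the
/// output through a formatter (e.g. the dart_style package).
String generateCardWidget(String className, List<Map<String, String>> fields) {
  final tiles = fields
      .map((f) =>
          "          ListTile(title: Text('${f['label']}'), subtitle: Text('${f['value']}')),")
      .join('\n');
  return '''
// This widget was generated from an API response.
class $className extends StatelessWidget {
  const $className({super.key});

  @override
  Widget build(BuildContext context) {
    return Card(
      child: Column(
        mainAxisSize: MainAxisSize.min,
        children: [
$tiles
        ],
      ),
    );
  }
}
''';
}

void main() {
  print(generateCardWidget('UserCard', [
    {'label': 'Username', 'value': 'ada'},
    {'label': 'Email', 'value': 'ada@example.com'},
  ]));
}
```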
##### 4.2 Export Formats
We will support two export options:
1. Raw Dart Code Export
- Output the generated Dart code into a text area or preview pane
- Allow users to:
- Copy to clipboard
- Download as .dart file
- Highlight syntax for better UX (using a package like highlight)
2. Optional JSON Layout Export
- If we implement a config-driven rendering architecture, offer an export of the layout plan/schema as JSON
- Useful for re-importing or using with a visual UI builder
##### 4.3 Integration into API Dash
- Add an "Export" button below the UI preview pane
- When clicked, the generated code will be shown in a modal or new tab
- Provide one-click buttons:
- "Copy Code"
- "Download Dart File"
- (Optional) "Download Layout JSON"
##### 4.4 Reusability and Developer Focus
- Ensure that the exported code:
- Is clean and idiomatic Dart
- Can be copied directly into any Flutter project with minimal edits
- Includes basic import statements and class wrappers if needed
- Add helpful comments in the generated code (e.g., // This widget was generated from API response)
##### 4.5 Challenges and Considerations
- Ensuring valid syntax across nested widgets
- Handling edge cases (e.g., empty fields, null values)
- Optionally, offer theming/styling presets to match user preferences
By the end of this step, users can instantly turn live API data into production-ready Flutter UI code, significantly reducing time spent on repetitive frontend scaffolding.
#### Step 5: Integrate into API Dash
The final step is to fully integrate the AI UI Designer into the API Dash application, so that users can seamlessly trigger UI generation from real API responses and interact with the entire pipeline — from data to UI preview to export — within the app.
##### 5.1 Entry Point in UI
We will add a new button or menu entry labeled “AI UI Designer” within the API response tab (or near the response preview area).
- When a user executes an API call and gets a JSON response:
- A floating action button or contextual menu becomes available
- Clicking it opens the AI UI Designer pane
Implementation Plan:
- Extend the existing response panel UI to include a trigger button
- Use a `showModalBottomSheet()` or a full-screen route to launch the designer (a minimal trigger sketch follows)
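A minimal illustration of this trigger; the widget and class names are placeholders, while `showModalBottomSheet` is standard Flutter.

```dart
import 'package:flutter/material.dart';

/// Hypothetical trigger button placed near the response preview area.
class AiUiDesignerButton extends StatelessWidget {
  const AiUiDesignerButton({super.key, required this.responseJson});
  final String responseJson;

  @override
  Widget build(BuildContext context) {
    return FloatingActionButton.extended(
      icon: const Icon(Icons.auto_awesome),
      label: const Text('AI UI Designer'),
      onPressed: () => showModalBottomSheet<void>(
        context: context,
        isScrollControlled: true,
        builder: (_) => DesignerPanel(responseJson: responseJson),
      ),
    );
  }
}

/// Placeholder for the designer pane (schema view, preview, export controls).
class DesignerPanel extends StatelessWidget {
  const DesignerPanel({super.key, required this.responseJson});
  final String responseJson;

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(16),
      child: Text('Designer for response: $responseJson'),
    );
  }
}

void main() {
  runApp(const MaterialApp(
    home: Scaffold(
      floatingActionButton: AiUiDesignerButton(responseJson: '{"ok": true}'),
    ),
  ));
}
```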
##### 5.2 Internal Architecture and Flow
The full integration involves multiple coordinated modules:
- Trigger UI → (Button click)
- JSON Parser Module (from Step 1) → Convert API response to schema
- Mapping Logic (Step 2) → Rule-based and/or LLM-assisted UI mapping
- Widget Tree Builder (Step 3) → Build live widget layout
- Preview + Export UI (Step 4) → Let users customize and extract code
Each module will be built as a reusable Dart service/class, and all UI logic stays within the API Dash UI tree.
We'll keep the architecture modular so the designer logic is isolated and testable.
##### 5.3 Offline / Privacy-Friendly Support
Since API Dash is a privacy-first local client, the AI agent should work entirely offline by default, using lightweight LLMs served locally via Ollama.
- If a user prefers using OpenAI or Anthropic APIs, provide optional settings to configure remote endpoints
- Set Ollama as the default backend, and wrap LLM logic inside a service with interchangeable backends
##### 5.4 User Flow Example
- User sends API request in API Dash
- JSON response is shown
- User clicks “AI UI Designer” button
- The parsed structure is shown with layout suggestions
- User can preview UI, rearrange components, and customize styles
- Once satisfied, user clicks “Export”
- Dart code is generated and available to copy/download
##### 5.5 Tests, Documentation & Maintenance
- Add integration tests to validate:
- Triggering and rendering behavior
- Correct widget tree output
- Export function accuracy
- Document:
- Each module (parsing, mapping, UI rendering, export)
- Developer usage guide (in docs/)
- Ensure all new code follows API Dash's contribution style and linting rules
By integrating into API Dash cleanly and modularly, this feature becomes a native part of the developer workflow — helping users transform any API into usable UI in seconds, without leaving the app.
---
## Weekly Timeline (Tentative)
| Week | Milestone |
|---------------|---------------------------------------------------------------------------------------------|
| Community Bonding | Join Discord, introduce myself, understand API Dash architecture, finalize scope with mentors |
| Week 1 | Build recursive parser for JSON responses; test on static examples; output schema trees |
| Week 2 | Extend parser to handle nested objects, arrays, and basic pattern recognition (e.g., timestamps) |
| Week 3 | Implement rule-based schema-to-widget mapper; define mapping logic for tables, cards, forms |
| Week 4 | Design widget data model and logic for translating schema into Flutter widget trees |
| Week 5 | Develop dynamic Flutter widget generator; render `DataTable`, `Card`, `TextField`, etc. |
| Week 6 | Build basic UI preview pane inside API Dash with user interaction support (e.g., toggles) |
| Week 7 (Midterm Evaluation) | Submit code with parser + rule-based mapping + preview UI; receive mentor feedback |
| Week 8 | Add layout customization features: visibility toggles, reordering, field labels |
| Week 9 | Integrate basic Ollama-based LLM agent for field naming & layout suggestion |
| Week 10 | Abstract LLM backend to support GPT/Anthropic alternatives via API config |
| Week 11 | Implement code export: generate Dart source code, copy-to-clipboard & download options |
| Week 12 | Optional: add JSON config export; polish UX and improve error handling |
| Week 13 | Write documentation, developer setup guide, internal tests for each module |
| Week 14 (Final Evaluation) | Final review, cleanup, feedback response, and submission |
Thanks again for your time and guidance. I've already started studying the API Dash codebase and developer guide, and I'd love your feedback on this plan: does it align with your vision?
If selected, I'm excited to implement this project. If this idea is already taken, I'm open to switching to another API Dash project that fits my background.


@@ -0,0 +1,76 @@
### About
1. **Full Name**: Sunil Kumar Sharma
2. **Contact Info**: sharma.sunil12527@gmail.com, +91 8979696414
3. **Discord User ID**: AZURE (502613458638995456)
4. **GitHub Handle**: https://github.com/Azur3-bit
5. **Socials**: https://www.linkedin.com/in/sunil-sharma-206871205/
6. **Time Zone**: GMT +5:30 (India)
7. **Resume**: https://drive.google.com/file/d/1B3ixbrlPwwCfFw8Lcq3LXvW3N5dWmviX/view?usp=sharing
### University Info
1. **University Name**: SRM Institute of Science and Technology
2. **Program**: B.Tech in Computer Science & Engineering
3. **Year**: 4th (Final Year)
4. **Expected Graduation Date**: June 2025
### Motivation & Past Experience
1. **Have you worked on or contributed to a FOSS project before?**
Yes, I have actively contributed to open-source projects, including adding support for PHP, Rust, and Golang, improving UI elements, and enhancing test coverage for various repositories. Some of my notable contributions:
- Added support for PHP, Rust, and Golang in an online compiler.
- Improved UI/UX for an online coding platform.
- Introduced a Python script for OpenAI key validation.
- Link to relevant PR: https://github.com/kalviumcommunity/compilerd/pull/139
While my PR was not merged, it was due to a shift in project priorities, and the maintainers appreciated my effort and provided constructive feedback, which helped me refine my contributions.
2. **What is your one project/achievement that you are most proud of? Why?**
One of my proudest achievements is my project on **Self-Optimizing and Intelligent Cloud Infrastructure**. This system integrates AWS Predictive Auto-Scaling with CloudWatch monitoring and cost optimization techniques, reducing infrastructure costs by ₹766.82 per month. This project showcases my expertise in **cloud computing, automation, and cost optimization** while making real-world impact.
3. **What kind of problems or challenges motivate you the most to solve them?**
I like working on problems that push me to improve efficiency, enhance security, and automate complex processes. Challenges in **API authentication, cloud infrastructure, and scalable systems** interest me the most because they require a balance of security, optimization, and real-world application.
4. **Will you be working on GSoC full-time?**
Yes, I will be working full-time on my GSoC project.
5. **Do you mind regularly syncing up with the project mentors?**
Not at all! Regular sync-ups will help ensure alignment with project goals and continuous improvement.
6. **What interests you the most about API Dash?**
API Dash is a lightweight and efficient API testing tool that avoids the unnecessary complexity of other platforms. I like how it keeps things simple while integrating AI to make API testing more intuitive and developer-friendly.
7. **Can you mention some areas where the project can be improved?**
- **Authentication Mechanisms**: Implementing **Multi-Factor Authentication (MFA)**, including biometric authentication, will enhance security and improve user experience. Having worked on MFA in payment gateways, I can integrate fingerprint recognition to streamline authentication, reducing reliance on passwords while ensuring security. Secure storage will protect credentials, allowing seamless and fast authentication for valid users on both mobile and laptop platforms.
### Project Proposal Information
#### Proposal Title: **Enhancing API Authentication & Secure Storage in API Dash**
#### Abstract
This project aims to **implement secure storage for authentication tokens using Flutter Secure Storage and integrate biometric authentication** for an added layer of security. The goal is to **enhance security while keeping API Dash lightweight and user-friendly**.
![sunil Auth image](https://github.com/user-attachments/assets/718d6b35-ebb7-49de-acb0-21ebbcbef3fa)
Image path: `doc/proposals/2025/gsoc/images/sunil Auth image.png`
#### Detailed Description
| Feature | Description |
|---------|------------|
| **Secure Token Storage** | Implement **Flutter Secure Storage** to securely store authentication tokens in an encrypted format (see the sketch below the table). |
| **Biometric Authentication** | Enable **fingerprint unlock** for accessing stored API credentials. |
| **Improved UI for Authentication Management** | Add an intuitive UI for managing saved authentication methods securely. |
| **Multiple Authentication Methods** | Ensure seamless support for Basic Auth, API Key, JWT, OAuth 1.0, OAuth 2.0, and Digest Authentication. |
| **Efficient Request Handling** | Ensure secure storage integration does not affect API request efficiency. |
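A hedged sketch of how the two core pieces could fit together, using the `flutter_secure_storage` and `local_auth` packages; exact option names vary across package versions, so treat this as illustrative rather than final.

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
import 'package:local_auth/local_auth.dart';

/// Illustrative wrapper: require a biometric check before reading a stored token.
class SecureTokenVault {
  final _storage = const FlutterSecureStorage();
  final _auth = LocalAuthentication();

  Future<void> saveToken(String key, String token) =>
      _storage.write(key: key, value: token);

  Future<String?> readToken(String key) async {
    // Option names may differ between local_auth versions.
    final ok = await _auth.authenticate(
      localizedReason: 'Unlock to access saved API credentials',
      options: const AuthenticationOptions(biometricOnly: true),
    );
    if (!ok) return null;
    return _storage.read(key: key);
  }
}
```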
#### Weekly Timeline
| Week | Tasks |
|------|-------|
| **Week 1** | Study API Dash's authentication mechanisms and security vulnerabilities. Set up the development environment. |
| **Week 2** | Implement **Flutter Secure Storage** for encrypted token storage. |
| **Week 3-5** | Integrate **Biometric Authentication** for secure access to stored credentials. Improve UI for managing authentication credentials securely. |
| **Week 6-9** | Implement and test multiple authentication methods (Basic Auth, API Key, JWT, OAuth, Digest Auth) with secure storage. Ensure **efficient API request handling** with secure storage integration. |
| **Week 10** | Optimize performance and conduct security audits for token storage. |
| **Week 11** | Improve documentation for secure authentication management in API Dash. |
| **Week 12** | Conduct thorough testing, debugging, and security validation. Prepare the final report, demo, and submit the project. |


@@ -0,0 +1,342 @@
### GSoC Proposal
## About
**Full Name**: Abhinav Sharma
**Contact Info**: abhinavs1920bpl@gmail.com, +91 9479960041
**Discord Handle**: abhinavs1920
**GitHub Profile**: [github.com/abhinavs1920](https://github.com/abhinavs1920)
**Twitter, LinkedIn, Other Socials**: [linkedin.com/in/abhinavs1920](https://linkedin.com/in/abhinavs1920)
**Time Zone**: Indian Standard Time (IST)
**Resume**: [https://bit.ly/4iEPEkZ](https://bit.ly/4iEPEkZ)
## University Info
**University Name**: ABV-Indian Institute of Information Technology and Management, Gwalior
**Program**: B.Tech in Information Technology
**Year**: 3rd Year
**Expected Graduation Date**: May 2026
## Motivation & Past Experience
**Have you worked on or contributed to a FOSS project before?**
Yes, I have contributed to various FOSS projects, including APIDash. Here are some relevant PRs and repository links:
- [Feature that allows users to configure and use a proxy for their HTTP requests within the API Dash](https://github.com/foss42/apidash/pull/544)
- [Support for running multiple API requests in parallel](https://github.com/foss42/apidash/pull/734)
- [Support for fetching environment variables directly from the OS in API Dash](https://github.com/foss42/apidash/pull/662)
- [Implementation of in-app update check feature for APIDash](https://github.com/foss42/apidash/pull/660)
- [Updated the .env.template to use SERVER_CLIENT_ID instead of GOOGLE_SIGNIN_ANDROID_CLIENT_ID](https://github.com/AOSSIE-Org/Monumento/pull/229)
- [Addressed the issue of unhandled exceptions when loading environment variables from a .env file](https://github.com/AOSSIE-Org/Monumento/pull/215)
- [Dependency upgradation in Talawa](https://github.com/PalisadoesFoundation/talawa/pull/2353)
**What is your one project/achievement that you are most proud of? Why?**
The project I'm most proud of is MapIt, a location-based note-taking app I built using Flutter. It allows users to create notes, tag locations using Google Maps, and set reminders with notifications. What makes this project special is that it combines multiple advanced features like Google Sign-In, Geofencing, Background Notification Service, and Battery Optimization to ensure a smooth user experience.
One of the most challenging yet rewarding aspects was implementing background services efficiently, ensuring notifications and location tracking worked even when the app was closed, without draining the battery. I also optimized the Isolates to handle background tasks asynchronously, improving app performance.
This project pushed me to learn state management, background execution, and efficient API handling. Seeing it come together and solving real-world problems with it made me really proud.
**What kind of problems or challenges motivate you the most to solve them?**
I love working with Flutter and creating automation solutions for real-world problems that help users worldwide. The ability to build applications that streamline workflows, reduce manual effort, and enhance user experience excites me the most. I strive to solve practical problems that users face daily by developing scalable and user-friendly applications. My passion lies in integrating different technologies, optimizing performance, and refining processes to improve developer productivity. Whether it's automating API testing, optimizing background processes, or designing intuitive UI/UX solutions, I always aim to create impactful applications that make a difference.
**Will you be working on GSoC full-time?**
Yes, I will be dedicating my full time to GSoC.
**Do you mind regularly syncing up with the project mentors?**
Not at all, I am comfortable with regular sync-ups.
**What interests you the most about API Dash?**
The ability to create automated API workflows excites me. I am particularly interested in designing an intuitive UI/UX for API testing and workflow automation. API Dash is built using Flutter, a lightweight, multi-platform framework that ensures smooth performance across different operating systems. With frequent updates and contributions from a dedicated open-source community, API Dash evolves continuously to meet developer needs. It offers a seamless alternative to API testing tools like Postman, focusing on simplicity, efficiency, and scalability. Being part of a project with an active and growing contributor base excites me, as it presents opportunities to improve developer workflows and enhance automation capabilities in API testing.
**Can you mention some areas where the project can be improved?**
- Enhanced error handling mechanisms in API requests.
- More detailed analytics and logging for API testing.
- Ability to export and share API test results easily.
- Sync API test cases using authentication providers such as OAuth, Google Sign-In, and GitHub authentication.
- Implement a search bar for different features, improving navigation and usability.
- Expand the Testing Suite with additional validation mechanisms.
- Enhance the Workflow Builder with better API chaining and conditional logic.
- Improve the Collection Runner with parallel execution and scheduled runs.
- Implement an API Monitor with real-time tracking and alert notifications.
## Proposal Title
**Enhancing API Testing Suite in APIDash: Collection Runner, Workflow Builder & Monitoring**
## Abstract
APIDash is an open-source lightweight API testing and management tool built using Flutter. In this project, I will extend its capabilities by integrating an advanced API Testing Suite, featuring:
- **Collection Runner**: I plan to implement the ability to execute batch API requests sequentially or in parallel.
- **Workflow Builder**: I aim to create a drag-and-drop UI for creating API request workflows with conditional logic.
- **API Monitoring & Analytics**: I will develop real-time monitoring, failure alerts, and execution insights.
By integrating these features, I will enhance APIDash to provide an end-to-end API testing solution, improving efficiency and usability for developers.
## Detailed Description
### Key Features:
- **API Validation Testing**: I intend to implement schema validation and assertion checks.
- **Integration Testing**: I need to ensure seamless interactions between APIs.
- **Security Testing**: I plan to develop features to identify vulnerabilities & secure endpoints, along with detection of security breaches like script injection.
- **Performance Testing**: I aim to create tools to measure API performance under load.
- **Scalability Testing**: I will implement features to ensure APIs scale efficiently.
- **Collection Runner**: I plan to develop functionality to automate batch API executions, along with variable data payload sending ability.
- **Workflow Builder**: I aim to create a drag-and-drop API request chaining interface.
- **Monitoring System**: I will implement a system to track API responses and errors.
- **Sync with authentication providers**: I plan to integrate authentication services for seamless API test management.
- **Implement a global search bar**: I aim to add functionality for quick navigation and feature discovery.
**AI-Enhanced Features (if time permits):**
- **AI-Assisted Test Case Generation**: I plan to develop functionality to automatically generate diverse and random API test cases by analyzing historical API data or API documentation, ensuring extensive coverage of edge cases.
- **Predictive Performance Optimization**: I aim to implement predictive analytics to forecast API load impacts and dynamically adjust test parameters for optimal performance.
### Tech Stack:
- **Frontend**: Flutter (Dart)
- **State Management**: Riverpod, Provider
- **Backend**: Firebase or Node.js/Golang (if required for logging and monitoring)
- **Testing Framework**: Postman alternatives, automated test scripts
- **AI-Model**: Gemini or similar (If Required)
![API Testing Suite](images/api-testing-suite-1.png)
### Integration with APIDash Architecture
The proposed features will integrate seamlessly with APIDash's existing architecture:
**State Management**: I plan to use APIDash's Riverpod-based state management pattern by creating new providers for Collection Runner and Workflow states, following patterns in existing files like `lib/providers/collection_state_provider.dart` and `lib/providers/selected_request_provider.dart`.
**Data Persistence**: I aim to use APIDash's Hive implementation with new type adapters for workflow models and test results, maintaining the existing pattern in `lib/services/local_db_service.dart`.
**UI Components**: I will build on the APIDash Design System components, ensuring consistent styling by extending widgets from `apidash_design_system` package. New UI elements will follow the existing Material Design implementation.
**Cross-Platform Support**: I plan to ensure all new features work across desktop, mobile, and web platforms by avoiding platform-specific code and using Flutter's responsive design patterns to adapt layouts for different screen sizes.
## Project Plan for Enhancing APIDash
### Phase 1: Research & Planning (Weeks 1-2)
**Objective:**
My first task will be to analyze the existing APIDash codebase, identify integration points, and finalize the technical approach.
#### Week 1: Codebase Analysis & Technical Research
**Codebase Understanding:**
- I plan to thoroughly analyze APIDash's architecture, including request execution, storage, and UI components.
- My goal is to identify reusable components to minimize redundant implementation.
**Integration Points Identification:**
- It's essential for me to locate areas where new features (API collection management, batch execution, workflow automation) can be integrated.
- I need to identify dependencies and potential conflicts with existing modules.
#### Week 2: UI/UX Planning & Tech Stack Finalization
**Technical Approach Finalization:**
- I'm going to define data structures for API collections and workflows.
- My approach includes evaluating local storage solutions (such as SQLite or Hive) versus cloud-based options, considering potential future expansion to cloud synchronization.
- I need to carefully assess appropriate tools for batch execution, such as threading, isolates, or queue-based execution. The current approach involves utilizing isolates and Future.then() for concurrent API testing. Please see PR [Stress Testing](https://github.com/foss42/apidash/pull/734)
**UI/UX Design:**
- I'll be creating wireframes for new UI components like API collection manager, workflow builder, and test suite dashboards.
- My designs must align with APIDash's existing UI framework.
### Phase 2: Enhancing API Collection Management (Weeks 3-4)
**Objective:**
My focus here is to improve how APIs are grouped, labeled, and managed in APIDash.
#### Week 3: Storage & Metadata Enhancements
**Data Structure Implementation:**
- I intend to define models for storing API collections, including metadata (labels, groups, tags, last-run status).
- My implementation must ensure efficient data retrieval by utilizing indexes and caching mechanisms. I plan to cache the most frequently executed queries to enhance performance and reduce redundant computations. I also aim to implement a self-flush mechanism for the cache to reduce memory overhead.
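As a rough illustration of the metadata described above, a collection model might look like the sketch below; the field names are assumptions rather than a finalized schema.

```dart
// Illustrative collection metadata model (labels, groups, tags, last-run status).
class ApiCollectionMeta {
  final String id;
  final String name;
  final String? group;
  final List<String> labels;
  final List<String> tags;
  final DateTime? lastRunAt;
  final String? lastRunStatus; // e.g. "passed", "failed"

  const ApiCollectionMeta({
    required this.id,
    required this.name,
    this.group,
    this.labels = const [],
    this.tags = const [],
    this.lastRunAt,
    this.lastRunStatus,
  });
}
```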
**Database Integration:**
- It's my responsibility to implement data persistence with Hive/SQLite for offline access.
- I have to optimize queries for fast lookups and filtering.
#### Week 4: UI/UX Implementation & Testing
**UI Enhancements:**
- My plans include implementing drag-and-drop organization of APIs within collections.
- I aim to add robust search and filtering features.
**Testing & Optimization:**
- I'll write configurable unit and integration tests to validate data consistency and report standard performance metrics.
- My goal is to optimize rendering performance for large collections.
### Phase 3: Implementing Collection Runner (Weeks 5-6)
**Objective:**
I'm tasked with building a new Collection Runner feature from scratch to enable batch execution of API collections in sequential and parallel modes. This feature doesn't currently exist in APIDash and will be developed completely as part of this GSoC project. I've already created a prototype for its implementation.
#### Week 5: Execution Engine Enhancements
**Batch Execution System Design:**
- I need to implement task queues for running multiple API requests by extending the existing HTTP client functionality in `lib/services/api_client.dart` to handle batched requests.
- My approach involves ensuring concurrency control using isolates, leveraging Flutter's compute API for background processing similar to other computationally intensive operations in APIDash.
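The following sketch illustrates the idea under stated assumptions: a plain-Dart batch runner with a fixed concurrency limit, plus `Isolate.run` (the non-Flutter counterpart of `compute`) for heavy post-processing. The function names and the direct use of `package:http` are illustrative, not APIDash's actual client code.

```dart
// Illustrative batch runner with bounded concurrency and off-main-isolate parsing.
import 'dart:convert';
import 'dart:isolate';
import 'dart:typed_data';

import 'package:http/http.dart' as http;

Future<List<http.Response>> runBatch(
  List<Uri> urls, {
  int concurrency = 4,
}) async {
  final results = List<http.Response?>.filled(urls.length, null);
  var next = 0;

  Future<void> worker() async {
    // Each worker pulls the next unclaimed index until the list is exhausted.
    while (next < urls.length) {
      final index = next++;
      results[index] = await http.get(urls[index]);
    }
  }

  // Start a small pool of workers so at most `concurrency` requests run at once.
  await Future.wait(List.generate(concurrency, (_) => worker()));
  return results.cast<http.Response>();
}

// Heavy response processing can be pushed off the UI isolate;
// Isolate.run is the plain-Dart equivalent of Flutter's compute().
Future<int> decodedLengthOffMainIsolate(Uint8List bodyBytes) =>
    Isolate.run(() => utf8.decode(bodyBytes).length);
```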
**Status Tracking Mechanism:**
- As part of this implementation, I must store execution logs, timestamps, and error responses.
- I intend to provide a real-time execution progress tracker by rendering each API response as soon as it is received, rather than waiting for all requests to complete. I'm planning to handle responses in the background and update the UI incrementally. This ensures a smooth user experience, especially for large batch executions, by preventing UI freezes and displaying progress dynamically.
#### Week 6: UI/UX Integration & Testing
**UI Enhancements:**
- It's crucial that I display live execution logs.
- I aim to provide options for retries and error handling.
**Testing & Performance Optimization:**
- My testing strategy includes running stress tests for handling large API collections.
- I plan to optimize response rendering speed in UI by implementing lazy loading and streamlined UI updates.
**Current Prototype:**
![](images/api-testing-suite-2.png)
![](images/api-testing-suite-3.png)
### Phase 4: Developing Workflow Builder (Weeks 7-8)
**Objective:**
For this phase, I'm planning to implement a drag-and-drop workflow builder for automating API requests, using a Directed Acyclic Graph (DAG) to represent and manage dependencies between nodes.
#### Week 7: Backend & Data Modeling
**Workflow Engine Implementation**
In this approach, my goal is to represent each API request as a node in the DAG, and edges will indicate the sequence in which requests should execute.
- I need to display successful requests in green (indicating an HTTP 200 status), while failed requests will be shown in red, instantly signaling an error state.
- My implementation includes a central state repository using Riverpod providers that stores API request outputs (headers, bodies, tokens) in memory during workflow execution. This state repository will follow APIDash's existing pattern of immutable state objects while providing a mechanism for downstream nodes to access upstream outputs.
- I'm planning to implement execution controls with start, pause, and exit flags, allowing the workflow to be initiated, temporarily halted, or completely terminated without losing progress.
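As a sketch of how dependency-ordered execution could work, the snippet below runs the nodes of a DAG using Kahn's topological sort and reports success or failure per node; the node and edge types are illustrative assumptions, not the actual workflow models.

```dart
// Illustrative DAG execution in dependency order (Kahn's algorithm).
class WorkflowNode {
  final String id;
  final Future<bool> Function() run; // true on success (e.g. HTTP 200)
  WorkflowNode(this.id, this.run);
}

Future<void> executeDag(
  List<WorkflowNode> nodes,
  List<MapEntry<String, String>> edges, // from-id -> to-id
) async {
  final indegree = {for (final n in nodes) n.id: 0};
  final children = {for (final n in nodes) n.id: <String>[]};
  for (final e in edges) {
    indegree[e.value] = indegree[e.value]! + 1;
    children[e.key]!.add(e.value);
  }

  final byId = {for (final n in nodes) n.id: n};
  final ready = [
    for (final n in nodes)
      if (indegree[n.id] == 0) n.id
  ];

  while (ready.isNotEmpty) {
    final id = ready.removeLast();
    final ok = await byId[id]!.run();
    // Mirrors the green/red status idea described above.
    print('$id -> ${ok ? "green (success)" : "red (failure)"}');
    for (final child in children[id]!) {
      indegree[child] = indegree[child]! - 1;
      if (indegree[child] == 0) ready.add(child);
    }
  }
}
```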
**Conditional Execution Support**
A key part of my design incorporates if-else branching based on API response conditions. For instance, if a request fails, the workflow can either retry, skip subsequent steps, or halt altogether.
- I also want to enable looping and re-execution of failed requests when appropriate, ensuring the workflow can handle transient errors or timeouts gracefully.
![](images/api-testing-suite-4.png)
### Week 8: UI/UX Development & Testing
**Drag-and-Drop Interface Implementation**
My approach involves providing an interactive canvas where users can drag and drop API request nodes, then connect them with directional edges to define execution order.
- I'm planning to implement nodes that offer a floating prompt on click, displaying request and response details in real time for quick debugging and validation.
**Testing & Debugging**
It's critical that I validate complex workflows involving multiple requests, dependencies, and branching conditions.
- My testing strategy includes monitoring memory consumption and performance, ensuring large or long-running workflows do not degrade the user experience.
- I aim to include an export feature that saves the workflow's execution results in JSON format, allowing teams to share and review outcomes easily.
### Phase 5: API Testing Suite (Weeks 9-10)
**Objective:**
In this phase, I need to develop a completely new API Testing Suite to validate API responses with schema validation and assertions. This functionality will be built from the ground up as it's not currently implemented in APIDash.
#### Week 9: Validation & Assertion Mechanisms
**Schema Validation Implementation:**
- My plans include supporting JSON Schema validation.
- I'm committed to ensuring compliance with OpenAPI specifications. [Schema Guidelines](https://spec.openapis.org/oas/latest.html#schema)
**Response Assertion System:**
- My implementation will allow users to define conditions like status code checks, response time limits.
- I intend to implement support for regex-based validations. [Helpful Resource](https://confluence.atlassian.com/proforma/regex-validation-1087521274.html)
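A minimal sketch of such an assertion object, covering the three checks mentioned above (status code, response-time limit, regex match); the names and structure are illustrative only.

```dart
// Illustrative response assertion covering status, latency, and regex checks.
class ResponseAssertion {
  final int? expectedStatus;
  final Duration? maxResponseTime;
  final RegExp? bodyPattern;

  const ResponseAssertion({
    this.expectedStatus,
    this.maxResponseTime,
    this.bodyPattern,
  });

  List<String> evaluate({
    required int statusCode,
    required Duration elapsed,
    required String body,
  }) {
    final failures = <String>[];
    if (expectedStatus != null && statusCode != expectedStatus) {
      failures.add('expected status $expectedStatus, got $statusCode');
    }
    if (maxResponseTime != null && elapsed > maxResponseTime!) {
      failures.add('response took ${elapsed.inMilliseconds} ms, '
          'limit ${maxResponseTime!.inMilliseconds} ms');
    }
    if (bodyPattern != null && !bodyPattern!.hasMatch(body)) {
      failures.add('body did not match ${bodyPattern!.pattern}');
    }
    return failures; // an empty list means the assertion passed
  }
}
```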
#### Week 10: Performance & Security Testing
**Load Testing Module:**
- As part of this work, I'll create simulations for high-traffic scenarios.
- My testing framework will measure API response latency and throughput.
**Security Testing Integration:**
- I intend to build systems to detect vulnerabilities like CORS misconfigurations, exposed secrets.
- My goal is to provide warnings for common API security flaws.
- I plan to implement detection of malicious script injection.
### Phase 6: Monitoring & Analytics (Weeks 11-12)
**Objective:**
- For this phase, I'm focused on creating a simple monitoring UI with request history, error counts, and response time displays, using a Flutter charting package for visualization, integrated within the existing tabbed request details interface.
#### Week 11: Monitoring System Design
**Identify Key Metrics:**
- My monitoring system needs to track response times, error rates, and API usage trends.
- I'm planning to design low-overhead logging mechanisms. My approach includes implementing asynchronous logging and batching techniques. I'll design a singleton LogHandler to buffer log messages in-memory and periodically flush them to persistent storage (e.g., Hive or SQLite) using a timer, ensuring non-blocking operations. I plan to utilize Dart's logging package with log levels to limit verbosity, reducing unnecessary overhead.
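A compact sketch of the buffered logger described above, assuming a periodic `Timer` flush; the flush target (Hive/SQLite) is left as a placeholder.

```dart
// Illustrative singleton LogHandler: buffer in memory, flush periodically.
import 'dart:async';

class LogHandler {
  LogHandler._() {
    _timer = Timer.periodic(const Duration(seconds: 10), (_) => _flush());
  }
  static final LogHandler instance = LogHandler._();

  final List<String> _buffer = [];
  late final Timer _timer;

  void log(String message) {
    _buffer.add('${DateTime.now().toIso8601String()} $message');
  }

  Future<void> _flush() async {
    if (_buffer.isEmpty) return;
    final batch = List<String>.from(_buffer);
    _buffer.clear();
    // Placeholder: write `batch` to persistent storage (e.g. a Hive box)
    // without blocking the UI thread.
  }

  void dispose() => _timer.cancel();
}
```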
**Architecture Planning:**
- In my architecture, I'll store analytics data using APIDash's existing Hive-based persistence layer, extending the current data models to include metrics and monitoring information. This maintains consistency with the app's architecture while leveraging Hive's fast key-value lookups for time-series metrics.
- I want to implement a buffer-based collection approach for monitoring data that batches metrics before storage, utilizing Dart's Isolates for background processing to prevent UI thread blocking. This approach reduces disk I/O operations and keeps memory usage minimal.
#### Week 12: UI Development & Alerting System
**Dashboard Implementation:**
- My UI development includes building a real-time monitoring dashboard.
- I plan to provide visual graphs and filtering options.
**Alerting System:**
- As part of the alerting framework, I'll implement failure notifications for users via email, in-app alerts, or webhooks (if required, as a future extension).
- I want to allow customizable alert thresholds.
### Phase 7: Testing & Optimization (Weeks 13-14)
**Objective:**
My focus during this phase is to conduct final testing, fix performance bottlenecks, and optimize UI/UX.
#### Week 13: Unit & Integration Testing
**Unit Testing Coverage:**
- My testing plan involves implementing comprehensive unit tests using Flutter's test package, focusing on critical components like execution engine, data transformations, and UI state management. I aim to target key modules rather than arbitrary coverage percentages, with mock HTTP responses for API-dependent tests.
- I need to thoroughly validate API execution logic.
**End-to-End Testing:**
- It's essential that I test workflows in real-world scenarios.
- I plan to identify and fix unexpected edge cases.
#### Week 14: Performance Optimization
**Optimize Execution Engine:**
- My optimization strategy includes reducing API execution time through parallelization and caching.
**Improve UI Responsiveness:**
- I'm committed to optimizing large collection handling.
- I aim to enhance loading speeds with efficient state management.
### Phase 8: Documentation & Submission (Week 15)
**Objective:**
My final phase focuses on preparing final documentation and demonstration materials.
**Developer Documentation:**
- I'll write detailed API and system architecture documentation.
**Final Demo & Submission:**
- I need to prepare a comprehensive video walkthrough.
- I'll submit codebase with release notes and test results.
### Expected Outcomes
- A fully functional API testing suite with workflow automation capabilities.
- An intuitive UI/UX for managing API collections and tests.
- Comprehensive documentation for end users and developers.
### Project Scope and Flexibility
To ensure project completion within the GSoC timeline, I've prioritized features as follows:
**Core Deliverables (Must-Have):**
- Collection Runner with basic sequential and parallel execution
- Simple Workflow Builder with linear API chaining
- Basic API validation testing
**Extended Goals (If Time Permits):**
- Advanced workflow conditionals and branching
- Security testing features
- Performance testing capabilities
**Stretch Goals (Post-GSoC):**
- AI-enhanced features
- Advanced monitoring and analytics
Throughout the project, I will be working closely with mentors, maintaining frequent sync-ups and regular communication. I understand the importance of collaboration and will ensure that my implementation aligns with APIDash's vision by staying in constant touch with the mentor team for guidance, feedback, and code reviews.
This prioritization ensures the right features are delivered on time.
Looking forward to contributing!

View File

@ -0,0 +1,322 @@
### About
1. Full Name: Mohit Kumar Singh
2. Contact info: 8538948208, tihom4537@gmail.com
3. Discord handle: tihom__37
4. GitHub profile link: https://github.com/tihom4537
5. LinkedIn: https://www.linkedin.com/in/mohit-kumar-singh-268700254
6. Time zone: IST (GMT+5:30)
7. Link to a resume (PDF, publicly accessible via link and not behind any login-wall): https://drive.google.com/file/d/1j11dbTE2JYhsXkBP7Jg4wxhY-bnTt425/view?usp=drivesdk
### University Info
1. University name: National Institute of Technology, Hamirpur
2. Program you are enrolled in (Degree & Major/Minor): B.Tech in Electrical Engineering
3. Year: Pre-final Year (3rd Year) - 2023
4. Expected graduation date: 2024
### Motivation & Past Experience
Short answers to the following questions (Add relevant links wherever you can):
1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?
- While I haven't had the opportunity to contribute to a FOSS project yet, I am keenly interested in open-source development and actively exploring avenues to participate.
2. What is your one project/achievement that you are most proud of? Why?
- Artist Connection Platform
I designed and developed a comprehensive artist connection platform that facilitates collaboration between artists and clients. This project represents my most significant achievement as I independently handled the entire development lifecycle from conception to deployment.
As the sole developer, I implemented both the frontend using Flutter and the backend using Laravel. The platform features a robust set of functionalities including:
* Secure upload and management of large media files (videos and images) to AWS S3
* Dynamic artist work profiles with portfolio showcasing
* Phone number verification through OTP authentication
* Secure payment processing through Razorpay integration
* Real-time communication via Firebase notification system
The infrastructure deployment leverages multiple AWS services:
* EC2 instances for backend hosting
* S3 buckets for asset management
* Relational Database Service (RDS) for data storage
* Load Balancer for traffic management and high availability
This project demonstrates my ability to handle complex technical challenges across the full stack while delivering a production-ready solution. The application is currently active with a growing user base across both mobile platforms.
Links
* Android: https://play.google.com/store/apps/details?id=in.primestage.onnstage&pcampaignid=web_share
* iOS: https://apps.apple.com/in/app/primestage-artist-booking-app/id6736954597
* GitHub (Frontend): https://github.com/hunter4433/artistaFrontend-.git
* GitHub (Backend): https://github.com/hunter4433/artistaFrontend-.git
3. What kind of problems or challenges motivate you the most to solve them?
- I am particularly motivated by smart and efficient system design challenges, especially those that focus on scalability and seamless handling of user load. I find it exciting to work on products and applications that are built to scale, ensuring they can handle growing demands without compromising performance. The opportunity to design systems that are both robust and efficient drives my passion for solving complex technical problems.
4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?
- I will be working full-time until the mid-term evaluation (July 14), since my summer vacation runs from the first week of May to the first week of July; after that I will still contribute 3-4 hours daily alongside my academic curriculum.
5. Do you mind regularly syncing up with the project mentors?
- I don't mind regular sync-ups with project mentors at all. In fact, I welcome the opportunity for consistent communication and feedback throughout the project.
6. What interests you the most about API Dash?
- I have worked with API creation, management, and load testing in previous projects, which has given me insight into their industrial importance. What particularly interests me about API Dash is its comprehensive approach to API monitoring, code generation, and visualization. I'm excited about the opportunity to contribute to a tool that helps developers track and improve API performance in real time.
7. Can you mention some areas where the project can be improved?
- It lacks integration with tools such as CI/CD pipelines and version control systems like GitHub. We could offer such integrations to help teams manage and automate API testing and monitoring.
# API Testing Suite Implementation - GSoC Proposal
## 1. Proposal Title
API Testing Suite, Workflow Builder, Collection Runner & Monitor Implementation for API Dash Framework
Related Issues - #96 #100 #120
## 2. Abstract
This project aims to implement a comprehensive API Testing Suite within the existing API Dash framework. Modern API development requires robust testing tools to ensure reliability, performance, and security. The proposed testing suite will provide developers with a powerful solution for creating, managing, and executing various types of API tests through a flexible and intuitive interface. By implementing features such as a hierarchical test organization structure, asynchronous test execution, JavaScript-based test scripting, and detailed reporting capabilities, this project will significantly enhance the API development workflow within the API Dash ecosystem.
## 3. Detailed Description
### Project Objectives
The API Testing Suite implementation will focus on the following key objectives:
- **Test Case Management**: Develop a comprehensive system for creating and managing test cases with support for multiple test types, environment variables, and execution history.
- **Test Suite Organization**: Implement a hierarchical structure for organizing tests with nested suites, suite-level environment variables, and advanced execution controls.
- **Test Execution Engine**: Create a powerful engine for running tests asynchronously with configurable timeouts, progress monitoring, and status checking.
- **Test Scripting Interface**: Build a flexible scripting interface using JavaScript/Chai for custom validation logic and assertion-based testing.
- **Reporting System**: Implement detailed reporting capabilities with multiple output formats and comprehensive test result metrics.
### Workflow Architecture
The API Testing Suite follows a logical workflow that enables systematic API testing:
![alt text](images/API_testing.jpg)
This diagram illustrates the complete testing process from creating test suites to generating reports, along with the different types of tests supported and execution modes available in the implementation.
### Technical Implementation Plan
#### 1. Test Case Management Module
The core of the project will focus on creating a robust test case management system that supports:
- Multiple test types including response validation, environment variables, performance, and security tests
- Comprehensive test case properties (name, description, enable/disable functionality)
- Environment variable integration
- Test script association
- Execution history tracking
**Implementation Details:**
- Create `test_case_model.dart` to define the core data structure
- Develop test result tracking mechanisms
- Implement environment variable management within test cases
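As a hedged illustration of what `test_case_model.dart` might define based on the properties listed above, the sketch below uses assumed field names and an assumed `TestType` enum.

```dart
// Possible shape for test_case_model.dart; names and enum values are assumptions.
enum TestType { responseValidation, environment, performance, security }

class TestCase {
  final String id;
  final String name;
  final String description;
  final bool enabled;
  final TestType type;
  final Map<String, String> environment;
  final String? script; // associated test script, if any
  final List<DateTime> executionHistory;

  const TestCase({
    required this.id,
    required this.name,
    this.description = '',
    this.enabled = true,
    this.type = TestType.responseValidation,
    this.environment = const {},
    this.script,
    this.executionHistory = const [],
  });
}
```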
#### 2. Test Suite Organization
The project will implement a hierarchical test suite structure allowing:
- Creation and management of test suites
- Support for nested test suites (suite of suites)
- Suite-level environment variables
- Advanced test execution controls including stop on failure option, test reordering, duplication, and search
**Implementation Details:**
- Develop `test_suite_model.dart` to define suite structure
- Implement state management via `test_suite_provider.dart`
- Create UI components for navigating and managing suite hierarchy
#### 3. Test Execution Engine
A powerful test execution engine will be implemented that supports:
- Individual test execution
- Suite and nested suite execution
- Asynchronous test support with configurable timeouts
- Status checking endpoints
- Progress monitoring
**Implementation Details:**
- Create `test_runner_service.dart` to handle execution logic
- Implement asynchronous test handling mechanisms
- Develop result collection functionality
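A small sketch of the asynchronous execution with configurable timeouts mentioned above; the helper name and return value are assumptions, not the actual service API.

```dart
// Illustrative timeout wrapper for an asynchronous test body.
Future<String> runWithTimeout(
  Future<String> Function() testBody, {
  Duration timeout = const Duration(seconds: 30),
}) {
  return testBody().timeout(
    timeout,
    onTimeout: () => 'timed out after ${timeout.inSeconds}s',
  );
}
```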
#### 4. Test Scripting Interface
The project will provide a flexible scripting interface using JavaScript/Chai that supports:
- Assertion-based testing
- Environment variable access
- Asynchronous operation handling
- Custom validation logic
**Implementation Details:**
- Create `test_script_model.dart` for script definition
- Implement script execution context
- Develop result handling mechanisms
#### 5. Reporting System
A comprehensive reporting system will be implemented supporting:
- Multiple report formats (JSON, CSV, HTML)
- Detailed report contents including test results, execution times, error messages, and performance metrics
**Implementation Details:**
- Create report generation services
- Implement formatters for different output types
- Develop result visualization components
### API Load Testing Capabilities
Building on the core testing framework, the implementation will include advanced load testing capabilities:
- **Multiple Testing Methodologies**:
- Concurrent Users Simulation
- Requests Per Second (RPS) Testing
- Total Requests Testing
- Duration-Based Testing
![alt text](images/LOAD_TEST.jpg)
- **Performance Metrics**:
- Success and failure rates
- Average response times
- 95th and 99th percentile response times
- Throughput (requests per second)
- Individual request timestamps and status codes
- **Configuration Options**:
- HTTP methods (GET, POST, PUT, DELETE)
- Custom headers and request bodies
- Load patterns with configurable ramp-up and ramp-down periods
The solution implements intelligent request scheduling as demonstrated in this core algorithm:
```dart
List<int> _calculateRequestTimings(LoadTestConfig config) {
  final timings = <int>[];

  switch (config.type) {
    case LoadTestType.concurrentUsers:
      // For concurrent users, we want to send all requests at once
      timings.addAll(List.filled(config.value, 0));
      break;
    case LoadTestType.requestsPerSecond:
      // For RPS, we need to space out requests evenly
      final interval = (1000 / config.value).round();
      timings.addAll(List.generate(config.value, (i) => i * interval));
      break;
    case LoadTestType.totalRequests:
      // For total requests, we'll spread them over 1 minute
      final interval = (60000 / config.value).round();
      timings.addAll(List.generate(config.value, (i) => i * interval));
      break;
    case LoadTestType.durationBased:
      // For duration-based, we'll send requests throughout the duration
      final interval = (config.value * 1000 / 100).round(); // 100 requests
      timings.addAll(List.generate(100, (i) => i * interval));
      break;
  }

  // Add ramp-up and ramp-down periods
  if (config.rampUpTime > 0) {
    final rampUpInterval = config.rampUpTime * 1000 / timings.length;
    for (var i = 0; i < timings.length; i++) {
      timings[i] += (i * rampUpInterval).round();
    }
  }

  if (config.rampDownTime > 0) {
    final rampDownInterval = config.rampDownTime * 1000 / timings.length;
    for (var i = 0; i < timings.length; i++) {
      timings[i] += ((timings.length - i) * rampDownInterval).round();
    }
  }

  return timings;
}
```
### API Collection and Workflow Management
The implementation will include a sophisticated system for API management through collections and visual workflows:
- **Collections Management**:
- Organized grouping of related API requests
- Import/export capabilities
- Filtering and search functionality
- **Visual Workflow Builder**:
- Drag-and-drop interface for workflow creation
- Support for various node types (requests, delays, variables, conditions)
- Interactive connector lines between nodes
- Conditional branching based on response data
- **Variable Management**:
- Dynamic variable substitution in URLs, headers, and request bodies
- Environment-specific variable sets
- Automatic variable extraction from responses
The implementation includes a robust execution engine for workflows:
```dart
Future<CollectionRunResult> _executeWorkflow(
  ApiWorkflow workflow,
  Map<String, dynamic> variables,
) async {
  // Sort nodes by position for execution order
  final sortedNodes = workflow.nodes.toList()
    ..sort((a, b) => a.position.y.compareTo(b.position.y));

  // Execute nodes in sequence
  for (final node in sortedNodes) {
    final result = await _executeNode(node, variables);
    // Process result and update variables
  }

  // Return workflow execution results
  return CollectionRunResult(/* ... */);
}
```
### Testing Strategy
The project will include comprehensive testing of all components:
- Unit tests for the test runner (`test_runner_test.dart`)
- Integration tests for the test suite provider (`test_suite_provider_test.dart`)
- End-to-end tests to validate the full testing workflow
### Integration with Existing System
The API Testing Suite will integrate seamlessly with the existing API Dash features, providing:
- Improved API testing workflow
- Better test organization
- Enhanced test automation
- Detailed test reporting
- Consistent user experience
### Benefits to the Community
This implementation will benefit the community by:
- Improving API quality through comprehensive testing
- Reducing development time with automated testing
- Enhancing debugging capabilities with detailed reporting
- Supporting a wider range of testing scenarios
- Providing a more complete development ecosystem within API Dash
## 4. Weekly Timeline
| Week | Date Range | Activities | Deliverables |
|------|------------|------------|-------------|
| **1-2** | **May 8 - June 1** | • Review existing API Dash framework • Set up development environment • Finalize design documents • Create initial project structure | • Project repository setup • Detailed design document • Initial framework |
| **3** | **June 2 - June 8** | • Implement test case model • Create basic test case properties • Design test case UI | • Basic test case data structure • UI wireframes |
| **4** | **June 9 - June 15** | • Implement environment variable handling • Develop test case management UI • Create test result models | • Environment variable system • Test case management interface |
| **5** | **June 16 - June 22** | • Implement test suite models • Create suite hierarchy structure • Begin suite-level variable implementation | • Test suite data structure • Initial hierarchy navigation |
| **6** | **June 23 - June 30** | • Complete suite-level variable implementation • Develop test ordering functionality • Create suite management UI | • Suite management interface • Test ordering system |
| **7** | **July 1 - July 7** | • Begin test runner service implementation • Develop basic test execution logic • Implement test status tracking | • Basic test execution engine • Status tracking system |
| **8** | **July 8 - July 14** | • Complete test runner service • Implement asynchronous test handling • Create progress monitoring UI • Prepare midterm evaluation | • Working test execution engine • Progress monitoring interface • Midterm evaluation report |
| **9** | **July 18 - July 24** | • Begin JavaScript/Chai integration • Create script model • Implement basic assertion handling | • Script data structure • Basic script execution |
| **10** | **July 25 - July 31** | • Complete script execution context • Implement advanced assertions • Develop environment variable access in scripts | • Complete scripting interface • Variable access in scripts |
| **11** | **August 1 - August 7** | • Begin reporting system implementation • Create report models • Implement JSON/CSV formatters | • Report data structure • Basic formatters |
| **12** | **August 8 - August 14** | • Complete reporting system • Implement HTML reports • Develop visualization components • Create export functionality | • Complete reporting system • Multiple export formats |
| **13** | **August 15 - August 21** | • Integrate load testing capabilities • Implement test collections • Begin workflow builder implementation | • Load testing functionality • Collections management |
| **14** | **August 22 - August 25** | • Complete workflow builder • Perform comprehensive testing • Fix bugs and optimize performance | • Complete workflow system • Passing test suite |
| **15** | **August 25 - September 1** | • Finalize documentation • Create tutorial content • Prepare final submission • Submit final evaluation | • Complete API Testing Suite • Comprehensive documentation • Final project report |
### Technical Skills and Qualifications
- Proficient in Dart and Flutter development
- Experience with API testing methodologies
- Understanding of asynchronous programming concepts
- Familiarity with JavaScript and testing frameworks
- Knowledge of state management in Flutter applications
### Expected Outcomes
Upon completion, the API Testing Suite will provide:
- Comprehensive test management capabilities
- Flexible test organization structures
- Powerful test scripting options
- Detailed testing reports
- Intuitive workflow builder interface
This implementation will significantly enhance the API Dash framework, making it a more complete solution for API development and testing.

View File

@ -0,0 +1,53 @@
### About
1. Full Name: - Aviral Garg
3. Contact info (email, phone, etc.): gargaviral99@gmail.com, +91-9971195728
6. Discord handle: __aviral
7. Home page (if any)
8. Blog (if any): - https://dev.to/aviralgarg05
9. GitHub profile link :- https://github.com/aviralgarg05
10. Twitter, LinkedIn, other socials: - https://www.linkedin.com/in/aviral-garg-b7b053280/
11. Time zone:- IST
12. Link to a resume (PDF, publicly accessible via link and not behind any login-wall):- https://false-rooster-1f2.notion.site/Aviral-Garg-CV-15737ff1adc58070be95fbb15b8a6cc3?pvs=4
### University Info
1. University name:- GGSIPU, Delhi
2. Program you are enrolled in (Degree & Major/Minor):- B.Tech in CSE
3. Year:- 2nd Year
5. Expected graduation date:- 2027
### Motivation & Past Experience
Short answers to the following questions (Add relevant links wherever you can):
1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs? :- NO
2. What is your one project/achievement that you are most proud of? Why? :- I did a research internship and published a research paper for DRDO in my 1st year, and received offers from R&D at IIT Hyderabad as a full-time AI Engineer in my 2nd year.
3. What kind of problems or challenges motivate you the most to solve them? I am driven by solving complex AI, cybersecurity, and automation challenges, particularly in real-time systems, IIoT security, and intelligent decision-making. My passion lies in developing innovative, efficient, and ethical AI solutions that enhance security, automation, and human-AI interaction.
4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project? Yes
6. Do you mind regularly syncing up with the project mentors? No
7. What interests you the most about API Dash? I'm interested in API Dash for its ability to streamline API management, testing, and automation, which aligns with my work in AI-driven automation, cybersecurity, and real-time systems.
8. Can you mention some areas where the project can be improved? DashBot can be improved by enhancing its NLP capabilities for better understanding of complex API queries and providing contextual solutions. Integrating real-time API testing, debugging assistance, and automation can streamline issue resolution. Additionally, implementing security checks and compliance suggestions will make it more robust and accessible for developers.
### Project Proposal Information
1. Proposal Title :- DashBot: AI-Powered Chatbot for API Management, Debugging, and Automation
2. Abstract: A brief summary about the problem that you will be tackling & how. :- Managing APIs efficiently can be time-consuming, especially when debugging issues, optimizing workflows, or ensuring security compliance. DashBot is an AI-powered chatbot designed to assist developers in API management by providing real-time issue resolution, debugging assistance, automation of repetitive tasks, and security insights. Using natural language processing (NLP) and machine learning, DashBot will help users interact with APIs seamlessly through a conversational interface.
3. Detailed Description:- DashBot will be designed as an intelligent chatbot that integrates with API Dash and other API management platforms. Key functionalities include:
• Smart API Query Handling: Understands user requests and provides contextual API recommendations.
• Real-time Debugging Assistance: Identifies errors, suggests fixes, and helps troubleshoot API failures.
• Automation of API Workflows: Automates repetitive tasks like API calls, request scheduling, and response validation.
• Security & Compliance Checks: Detects vulnerabilities, suggests security enhancements, and ensures adherence to best practices.
4. Weekly Timeline: A rough week-wise timeline of activities that you would undertake.
- **Week 1:** Research API Dash integration, define project scope, and finalize chatbot architecture.
- **Week 2:** Develop basic chatbot framework with NLP capabilities for handling API-related queries.
- **Week 3:** Implement debugging assistance, issue resolution, and API workflow automation features.
- **Week 4:** Integrate security checks and compliance suggestions for API best practices.
- **Week 5:** Add multi-platform support (Slack, Discord, etc.), optimize performance, and test functionalities.
- **Week 6:** Conduct user testing, refine chatbot responses, and deploy DashBot for beta testing.

View File

@ -0,0 +1,249 @@
# API Dash - GSoC 2025 Proposal
## About
- **Full Name**: BALASUBRAMANIAM L
- **Contact Info**:
  - Mobile: +91 9345238008
  - Email: balasubramaniam12007@gmail.com
- **Discord**: .balasubrmaniam
- **GitHub**: [Balasubramaniam](https://github.com/BalaSubramaniam12007)
- **LinkedIn**: [Balasubramaniam](http://www.linkedin.com/in/balasubramaniam2007)
- **Twitter**: [Balasubramaniam](https://x.com/BALASUBRAMAN1AM)
- **Time zone**: GMT+5:30
## University Info
- **University name**: Saveetha University, Chennai
- **Institution name**: Saveetha Engineering College
- **Degree & Major**: Bachelor of Technology
- **Year**: 1st Year
- **Expected Graduation**: 2028
## Motivation & Past Experience
### What interests you the most about API Dash?
I find that API Dash is a simple, hassle-free tool for quickly testing APIs without authentication. It supports multiple MIME types for easy response viewing and generates accurate code for different frameworks. Plus, its welcoming open-source community helps newcomers with their first contributions.
### Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?
Yes, I have contributed to this same project, API Dash, before.
- [#665](https://github.com/foss42/apidash/pull/665) Implement global status bar => This PR introduces a global status bar to notify users of failed requests, warnings, and validation errors, improving error visibility and user awareness.
- [#676 Tab bar](https://github.com/foss42/apidash/pull/676) => The tab bar displays a list of API requests (represented as tabs) that the user can interact with by selecting, closing, or reordering tabs.
### Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?
I will not have any exams or other commitments during the entire coding period. My full focus will be on GSoC, allowing me to dedicate my time entirely to the project.
### Do you mind regularly syncing up with the project mentors?
Yes, I will communicate with my mentor through calls or chats for task updates and suggestions while remaining self-reliant.
## Project Proposal
### Proposal Title: API EXPLORER
### Abstract
The proposal focuses on enhancing the user experience of API Dash by creating a comprehensive API Explorer. The project will integrate a curated library of popular, publicly available APIs, simplifying the process for developers to discover, test, and import API endpoints into their workspaces. An automated backend pipeline will be developed to parse OpenAPI and HTML files, enriching and auto-tagging the extracted data to produce standardized JSON templates. This system aims to pre-configure authentication details, sample payloads, and expected responses, significantly reducing the manual setup time for developers. Additionally, the platform will feature a modern user interface that organizes APIs by domain, such as AI, finance, weather, and social media, while supporting direct imports into testing environments and encouraging community contributions. Ultimately, this solution is designed to reduce onboarding time and enhance efficiency, positioning API Dash as a valuable tool for API exploration and integration.
### Detailed Description
To enhance API Dash, we will create an automated pipeline that converts raw API specifications into enriched JSON templates, involving several key stages in the backend process.
#### Parsing & Data Extraction:
- Create a Dart parser to read API spec files in YAML, JSON, HTML, and Markdown.
- Extract key details like titles, descriptions, endpoints, and authentication info.
- Auto-generate sample payloads for testing when not provided.
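As a rough sketch of the extraction step, assuming an OpenAPI-style YAML layout and the `yaml` package, the parser could pull out the title, description, and endpoint paths like this:

```dart
// Illustrative extraction of key details from an OpenAPI-style YAML spec.
import 'package:yaml/yaml.dart';

Map<String, dynamic> extractSummary(String yamlSource) {
  final doc = loadYaml(yamlSource) as YamlMap;
  final info = (doc['info'] as Map?) ?? const {};
  final paths = (doc['paths'] as Map?) ?? const {};
  return {
    'title': info['title'],
    'description': info['description'],
    'endpoints': paths.keys.map((k) => k.toString()).toList(),
  };
}
```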
#### Data Enrichment & Auto-Tagging:
- Enhance the extracted data by adding metadata for consistency and usability.
- Use NLP or regex to automatically tag and categorize APIs into domains like AI, finance, and weather.
#### JSON Template Generation:
- Utilize json_serializable and json_annotation to convert enriched data into standardized JSON templates.
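A minimal sketch of such a standardized template model using `json_annotation`; the field names are assumptions, and the `part` file would be produced by `build_runner`.

```dart
// Illustrative enriched API template model for json_serializable code generation.
import 'package:json_annotation/json_annotation.dart';

part 'api_template.g.dart'; // generated by build_runner

@JsonSerializable()
class ApiTemplate {
  final String name;
  final String method;
  final String url;
  final Map<String, String> headers;
  final Map<String, dynamic>? samplePayload;
  final Map<String, dynamic>? expectedResponse;

  ApiTemplate({
    required this.name,
    required this.method,
    required this.url,
    this.headers = const {},
    this.samplePayload,
    this.expectedResponse,
  });

  factory ApiTemplate.fromJson(Map<String, dynamic> json) =>
      _$ApiTemplateFromJson(json);
  Map<String, dynamic> toJson() => _$ApiTemplateToJson(this);
}
```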
#### User Interface & Direct Import:
- Develop a responsive, user-friendly UI that allows developers to discover, browse, search, and directly import API endpoints into their workspaces.
#### Offline Caching & Performance Optimization:
- Integrate Hive for offline storage.
- Optimize the automation pipeline for scalability.
#### GitHub Actions & Cron Job:
- When new API templates are added or updated, GitHub Actions will package them into a ZIP file and create a new release.
- A cron job will periodically fetch the latest releases, download the ZIP files, extract them, and store them in Hive for offline access.
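A hedged sketch of this fetch-and-extract flow using the `cron`, `archive`, and `http` packages; the release URL, schedule, and output location are placeholders.

```dart
// Illustrative periodic sync: download the latest template ZIP and extract it.
import 'dart:io';

import 'package:archive/archive.dart';
import 'package:cron/cron.dart';
import 'package:http/http.dart' as http;

void scheduleTemplateSync() {
  final cron = Cron();
  // Every day at 03:00; adjust the schedule as needed.
  cron.schedule(Schedule.parse('0 3 * * *'), () async {
    final zipBytes = await http.readBytes(
      Uri.parse('https://example.com/releases/latest/api_templates.zip'), // placeholder URL
    );
    final archive = ZipDecoder().decodeBytes(zipBytes);
    for (final file in archive.files.where((f) => f.isFile)) {
      // In the real pipeline these JSON templates would be stored in Hive;
      // writing to disk keeps this sketch self-contained.
      File('templates/${file.name}')
        ..createSync(recursive: true)
        ..writeAsBytesSync(file.content as List<int>);
    }
  });
}
```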
#### Community Contributions:
In order to empower the developer community and keep the API templates accurate and up to date, this project establishes contribution workflows via GitHub.
1. **Local Repository Contribution**:
- Developers can modify API specs locally using Git workflows, triggering an automated pipeline to generate enriched JSON templates for GitHub pull requests.
2. **Contribution via APIDASH**:
- As an innovative alternative, an in-app Documentation Editor will be integrated within API Dash. This feature allows users to:
- Upload an API spec file or URL to trigger automated parsing.
- Review and manually adjust the auto-generated JSON template in an easy-to-use editor.
- Download the updated JSON template. In the future, trigger an automated GitHub pull request directly from the application.
### Implementing Essential Features (Project Deliverables):
1. **Automated Parsing & Template Generation**:
- Build a backend pipeline to extract API details and generate enriched JSON templates with full testing configurations.
2. **Curated API Library & Categorization**:
- Develop a UI to showcase APIs with search, filtering, and auto-tagging into categories like AI, finance, and weather.
3. **Direct Import Functionality**:
- Enable developers to import pre-configured API request templates with authentication, sample payloads, and expected responses.
4. **Community Contributions & Reviews**:
- Allow users to contribute via GitHub workflows, ensuring the API repository remains accurate and community-driven.
5. **Offline Caching & Performance Optimization**:
- Integrate Hive for offline storage. Optimize the automation pipeline for scalability.
6. **GitHub Actions & Cron Job Automation**:
- Automate API template updates:
- GitHub Actions will package new or updated API templates into a ZIP file and trigger a new release.
- A cron job will periodically fetch the latest releases, extract the templates, and store them in Hive for offline access.
### API EXPLORER Design:
Click here to check the prototype: [Figma Link](https://www.figma.com/design/lxKNiN6sCq0xRJsVW7ZnbI/Untitled?node-id=78-19&t=PvoLoDP7SNruhaQP-1)
### Packages & Requirements:
#### 1. Parsing & Data Processing
- [yaml](https://pub.dev/packages/yaml): Parse and extract data from YAML API specs.
- [html](https://pub.dev/packages/html): Process HTML content within API specifications.
- [markdown](https://pub.dev/packages/markdown): Parse and format API documentation in Markdown.
- [json_serializable](https://pub.dev/packages/json_serializable), [json_annotation](https://pub.dev/packages/json_annotation): Handle structured API request templates.
- [nlp](https://pub.dev/packages/nlp) [OPTIONAL]: Implement taggers for categorization and keyword extraction.
#### 2. Automation & Background Tasks
- [cron](https://pub.dev/packages/cron): Fetch the latest API template releases periodically.
- [archive](https://pub.dev/packages/archive): Extract downloaded ZIP files containing API templates and store them in Hive.
#### 3. UI & UX Components
- [flutter_hooks](https://pub.dev/packages/flutter_hooks): Manage lifecycle events and reactive UI logic efficiently.
- UI elements include Cards and Chips for API template display.
### Weekly Timeline:
#### Community Bonding Period
- Collaborate with mentors to discuss project scope.
- Finalize project scope, design, and deliverables.
- Outline the overall parsing pipeline, including autotagging and data enrichment strategies.
#### Week 1: Environment Setup & Initial Parser/Extractor Enhancement
- Set up the development environment and structure the repository with /apis/ for raw files and /api_templates/ for generated JSON.
- Develop a basic parser and extractor module to read raw API files.
- Validate it with sample files to confirm accurate detection and extraction.
#### Week 2: Enhancing the Data Parsing Script
- Expand the initial data parsing script to handle multiple file formats.
- Ensure the parsing logic aligns with the API request models.
- Begin integrating error handling and logging for parsing processes
- Write unit tests to ensure proper parsing logic.
#### Week 3: Implementing Data Enrichment & AutoTagging
- Enhance metadata by adding extra contextual details for each API.
- Implement NLP-based auto-tagging (if needed) or regex-based keyword matching.
- Categorize APIs automatically.
- Fine-tune the logic for clear and accurate categorization of APIs.
- Write unit tests to validate auto-tagging accuracy and metadata enrichment.
#### Week 4: JSON Template Generation for API Request Templates
- Develop a module to generate structured JSON templates from the parsed API data.
- Ensure the generated JSON includes:
- Authentication headers
- Sample payloads
- Expected responses
- Validate the JSON output against the standard API request model to guarantee consistency.
#### Week 5: Offline Storage Integration & Initial UI Work
- Integrate Hive to locally store JSON templates.
- Cache API request templates for quick offline access.
- Ensure smooth update and retrieval of stored data.
- Outline the basic API Explorer UI structure (e.g., skeletons, placeholders for cards, search bar, filter controls).
#### Week 6: UI Development - API Explorer & Description Pages
- Finalize the API Explorer page with search, filters, pagination, and a card-based layout.
- Retrieve JSON templates from Hive.
- Develop the API Description page to show the method, sample payloads, and import options.
- Integrate navigation and UI for easy viewing and importing.
- Write unit and widget tests.
#### Midterm Evaluation
#### Week 7: Import Template into Request Functionality
- Implement the "Import API" feature, allowing users to:
- Load API request templates
- Auto-fill authentication headers & payloads.
- Ensure authentication details, sample payloads, and headers are pre-filled when importing.
- Validate that users can modify and execute imported API requests.
- Write integration tests for the template import process.
#### Week 8: Contribution Workflow & Import Enhancements
- Editor Development:
- Implement features for users to upload an API spec file or provide a URL.
- Trigger the Dart processing script.
- Develop an in-app Documentation Editor interface.
- User Interaction:
- Allow users to review and manually adjust the auto-generated JSON template.
- Provide options to download the JSON or (in a future phase) trigger automated PR creation.
- Test the editor's responsiveness and user interaction flow
- Write unit and integration tests for contribution workflows.
#### Week 9: Automation with GitHub Actions & ZIP Packaging
- Set up GitHub Actions to:
- Trigger a new release when templates are updated
- Package API templates into a ZIP file.
- Configure a cron job to fetch the latest ZIP release periodically.
- Develop functionality to extract the ZIP file and store the extracted JSON template files in Hive.
#### Week 10: Unit & Widget Testing | Integration & End-to-End Testing
- Write additional unit, widget, and integration tests for uncovered edge cases.
- Conduct minimal additional testing for previously developed features.
- Implement additional validations on the generated JSON templates (e.g., schema checks).
- Gather feedback from mentors on implementation and refine functionalities accordingly.
- Conduct end-to-end validation for file/URL imports.
- Fix any bugs/issues found while testing.
#### Week 11: Code Optimization & Final Refinements
- Optimize parsing and enrichment logic for better performance.
- Improve UI/UX based on mentor feedback and testing.
- Perform final full-system testing and address any last-minute fixes.
#### Week 12: Final Review & Submission
- Analyze performance and quality to ensure features work smoothly.
- Final review with mentors.
- Prepare technical documentation and contribution documentation.
- Prepare documentation for final evaluation.
#### Final Evaluation
### Future Enhancements:
We will expand these features in future releases, but they may not work fully at this stage.
- **Ratings & Reviews**: Enable users to rate and review API templates for quality control.
- **GitHub Integration & PR Automation**: Automate pull request creation and integrate GitHub workflows for streamlined contributions.
### Relevant Project Experience:
#### Project: ApiForge - OpenAPI Specification Management Tool
**Description:**
[ApiForge](https://balasubramaniam12007.github.io/) is a Flutter-based application designed to load, parse, and display OpenAPI specifications. It allows users to load OpenAPI specs either by providing a URL or uploading a local file (in JSON or YAML format) and export them as Postman collections.
**Key Achievements:**
- **Load OpenAPI Specs**: Load specs from a URL or by uploading a local file (JSON/YAML).
- **View Endpoints**: Display API endpoints with their methods, paths, and descriptions in a user-friendly format.
- **Search Functionality**: Filter endpoints by path or method via search.
- **Export to Postman**: Export the loaded OpenAPI spec as a Postman collection for easy testing.
**Technologies**: Flutter, Dart, OpenAPI, GitHub Actions, GitHub Pages
**Project Outcomes**:
- OpenAPI Spec Import: Load specs via URL or local JSON/YAML files.
- API Endpoint Visualization: Display methods, paths, and descriptions.
- Search & Filter: Quickly find endpoints by path or method.
- Postman Export: Convert OpenAPI specs into Postman collections.
### Blogs Posted:
**[A Month with Apidash: A Comprehensive Review](https://medium.com/@balasubramaniam12007/in-recent-years-the-need-for-streamlined-api-testing-and-integration-has-grown-alongside-the-rapid-9a4eab5bca02)**
- Date: Mar 25, 2025
- Description: The article reviews Apidash, highlighting its lightweight design, offline access, cross-platform compatibility, and built-in code generation, along with its benefits and guidance for developers getting started.

View File

@ -0,0 +1,210 @@
# GSoC Proposal for AI API Evaluation
## About
1. **Full Name:** Harsh Panchal
2. **Email:** harsh.panchal.0910@gmail.com
3. **Phone number:** +91-9925095794
4. **Discord Handle:** panchalharsh
5. **Home Page:** [harshpanchal0910.netlify.app](https://harshpanchal0910.netlify.app/)
6. **GitHub:** [GANGSTER0910](https://github.com/GANGSTER0910)
7. **LinkedIn:** [Harsh Panchal](https://www.linkedin.com/in/harsh-panchal-902636255)
8. **Time Zone:** IST (UTC +5:30)
9. **Resume:** [Link to Resume](https://drive.google.com/drive/folders/1iDp0EnksaVXV3MmWd_uhGoprAuFzyqwB)
## University Information
1. **University Name:** Ahmedabad University, Ahmedabad
2. **Program:** BTech in Computer Science and Engineering
3. **Year:** 3rd Year
4. **Expected Graduation Date:** May 2026
## Motivation & Past Experience
1. **Have you worked on or contributed to a FOSS project before?**
- Yes, I have contributed once to the foss42/api repository in GSSoC 2024.
- [Contribution](https://github.com/foss42/api/pull/69) - Added a put API.
2. **What is your one project/achievement that you are most proud of? Why?**
- One project I'm really proud of is TrippoBot, an AI-powered travel assistance chatbot I built with my team. It helps users with personalized travel recommendations, booking assistance, and real-time insights. Developing it was both challenging and rewarding — we had to integrate AI for natural language understanding, ensure smooth API interactions, and fine-tune the bot for accurate responses.
- What made it even more special was winning 2nd place at the TicTechToe Hackathon. Competing against talented teams and seeing our hard work recognized was an amazing feeling. It not only boosted my confidence but also sharpened my problem-solving skills and showed me the real-world impact of AI applications. Looking back, it's a reminder of how much I enjoy tackling complex problems and turning ideas into practical solutions.
3. **What kind of problems or challenges motivate you the most to solve them?**
- I am most inspired to solve complicated challenges that need me to think in new ways and stretch my limits. I appreciate tackling new challenges because they allow me to learn, discover innovative solutions, and gain a better understanding of developing technologies. The AI API Evaluation project interests me since it entails examining several AI models, determining their strengths and limitations, and employing rigorous evaluation procedures. The potential of breaking down complex model behaviors, evaluating performance indicators, and gaining actionable insights is very appealing. I am motivated by the challenge of developing solutions that help progress AI assessment frameworks, resulting in more transparent and dependable AI applications.
4. **Will you be working on GSoC full-time?**
- Yes, I will be working full-time on my GSoC project.
5. **Do you mind regularly syncing up with the project mentors?**
- Not at all, I am comfortable with regular mentor interactions to ensure aligned development.
6. **What interests you the most about API Dash?**
- API Dash is like your go-to toolkit for working with APIs. It makes testing, debugging, and evaluating APIs a breeze with its user-friendly interface. You can easily compare API responses in real time and even assess how different AI models perform. It's designed to take the guesswork out of API management, helping you make smarter decisions and build stronger applications. Think of it as having an extra pair of hands to simplify your API tasks!
7. **Can you mention some areas where the project can be improved?**
- Enhanced Evaluation Framework: Add a robust AI model evaluation system for benchmarking across industry tasks.
- Customizable Evaluation Criteria: Allow users to define metrics like fairness, robustness, and interpretability.
- Support for Offline Datasets & Models: Provide options to upload and evaluate local models and datasets.
- Interactive Visualizations: Improve API performance insights with comparative graphs and trend analysis.
## Project Proposal Information
### Proposal Title
**AI API Evaluation Framework**
### Abstract
This project aims to develop an end-to-end AI API evaluation framework integrated into API Dash. It will provide a user-friendly interface for configuring API requests, supporting both online and offline evaluations. Online evaluations will call APIs of server-hosted models, while offline evaluations will use LoRA adapters with 4-bit quantized models for efficient storage and minimal accuracy loss. The framework will also support custom datasets, evaluation criteria, and visual result analysis through charts and tables, making AI model assessment more accessible and effective.
## Detailed Description
This project integrates an AI API evaluation framework into API Dash to assess models across text, images, audio, and video. It supports both online (API-based) and offline (LoRA adapters with 4-bit models) evaluations. Users can upload datasets, customize metrics, and visualize results through charts. Explainability features using SHAP and LIME provide insights into model decisions. The framework also tracks performance metrics and generates detailed reports for easy comparison and analysis.
## Screenshots
![Screenshot 1](https://github.com/GANGSTER0910/apidash/blob/eb49dfc93538a8e08653b9a89d87e5d4a510b24f/doc/proposals/2025/gsoc/images/AI_API_EVAL_Dashboard_1.png)
![Screenshot 2](https://github.com/GANGSTER0910/apidash/blob/eb49dfc93538a8e08653b9a89d87e5d4a510b24f/doc/proposals/2025/gsoc/images/AI_API_EVAL_Dashboard_2.png)
![Screenshot 3](https://github.com/GANGSTER0910/apidash/blob/eb49dfc93538a8e08653b9a89d87e5d4a510b24f/doc/proposals/2025/gsoc/images/AI_API_EVAL_result.png)
### Features to be Implemented
1. **AI Model Evaluation**
- Evaluate AI models across multiple media types, including **text**, **images**, **audio**, and **video**.
- Support offline evaluation using **LoRA adapters** with quantized models for efficient storage.
- Provide user-selected metrics for benchmarking, including:
- **BLEU-4** (for text)
- **ROUGE-L** (for text)
- **BERTScore** (for text)
- **METEOR** (for text)
- **PSNR** (for images and video)
- **SSIM** (for images and video)
- **CLIP Score** (for images)
- **WER** (for audio)
- Visualize score comparisons using **radar charts** for intuitive analysis.
2. **Custom Dataset Evaluation**
- Allow users to upload their own datasets for evaluation.
- Provide access to pre-defined industry-standard benchmark datasets.
- Support various data types including text, images, and multimedia.
- Provide an option for users to select evaluation metrics of their own choice.
3. **Custom Benchmark Metrics**
- Users can customize their evaluation by choosing preferred evaluation metrics.
- Offer flexibility to integrate additional metrics in the future.
4. **Explainability Integration**
- Implement SHAP (SHapley Additive Explanations) to analyze feature importance and understand model decisions globally.
- Integrate LIME (Local Interpretable Model-Agnostic Explanations) for localized interpretability of individual predictions.
- Provide feature importance charts to show which inputs contributed most to the model's output.
- Use decision boundary plots to visualize how the model classifies different inputs.
- Implement heatmaps for images to highlight the regions that influenced predictions.
- Ensure transparency by helping users understand why a model made a certain decision.
5. **Real-Time Performance Monitoring**
- Track latency, memory usage, and API response times.
6. **Reporting and Export**
- Generate detailed reports in PDF or CSV.
- Provide comparison summaries for different models.
### Tools and Tech Stack
- **Backend:** FastAPI (Python)
- **Frontend:** Flutter (Dart)
- **ML Libraries:** Hugging Face Transformers, Evaluate
- **Visualization:** Matplotlib, Plotly
- **Explainability:** SHAP, LIME
- **Database:** MongoDB
## Weekly Project Timeline
### Week 1-2: Community Bonding and Planning
- Engage with mentors and the community to understand project expectations.
- Finalize project requirements and milestones.
- Set up development environment (FastAPI, Flutter, MongoDB).
- Research evaluation metrics and LoRA adapters for offline evaluation.
- Design database schema and API endpoints.
### Week 3: Initial API Evaluation Setup
- Implement API integration for online model evaluation.
- Develop backend routes using FastAPI.
- Establish connection with server-hosted models for API evaluation.
### Week 4: Offline Model Evaluation
- Implement offline model evaluation using LoRA adapters with 4-bit quantized models.
- Test model loading and performance in offline mode.
- Ensure accuracy is maintained within an acceptable range.
### Week 5: Media Type Support and Metrics Integration
- Implement support for different media types: **text, images, audio, and video**.
- Integrate benchmarking using metrics like BLEU-4, ROUGE-L, CLIP Score, PSNR, and WER.
- Develop functions to compute and compare model performance.
### Week 6: Custom Dataset and Metric Selection
- Implement dataset upload functionality.
- Provide options for users to select pre-defined benchmark datasets.
- Enable users to customize their evaluation by choosing preferred metrics.
### Week 7: Explainability Integration - SHAP and LIME
- Implement SHAP for global interpretability and LIME for local interpretability.
- Generate feature importance scores and visual explanations.
- Develop feature importance charts, decision boundary plots, and heatmaps.
### Week 8: Real-Time Monitoring
- Implement API to monitor latency, memory usage, and response time.
- Build a backend system to collect and store performance data.
- Display real-time monitoring results on the frontend.
### Week 9: Reporting and Export
- Develop a reporting module to generate detailed reports in PDF and CSV formats.
- Provide performance summaries and evaluation comparisons.
- Ensure clear and professional report formatting.
### Week 10: Frontend Development
- Build an intuitive Flutter-based UI for API Dash.
- Design forms for API configuration and dataset selection.
- Implement dynamic result visualization using radar charts, graphs, and tables.
### Week 11: Testing and Optimization
- Conduct unit tests and integration tests across all modules.
- Perform end-to-end testing to ensure smooth API interactions.
- Optimize code for efficiency and reliability.
- Fix bugs and address feedback.
### Week 12: Documentation and Final Submission
- Write detailed user and developer documentation.
- Provide setup and usage instructions.
- Create demo videos and presentations.
- Deploy the application using FastAPI and Flutter.
- Submit the final project and gather feedback.
## Conclusion
This AI API Evaluation Framework will simplify model evaluation for developers, researchers, and organizations. By providing explainability, real-time metrics, customizable benchmarking, and comprehensive reporting, it will ensure efficient AI model assessment and decision-making.

View File

@ -0,0 +1,168 @@
### About
1. Full Name - Mohammed Ayaan
2. Contact info (email, phone, etc.) - ayaan.md.blr@gmail.com, 99025 87579
3. Discord handle - ayaan.md
4. Home page (if any)
5. Blog (if any)
6. GitHub profile link - https://github.com/ayaan-md-blr
7. Twitter, LinkedIn, other socials - https://www.linkedin.com/in/md-ayaan-blr/
8. Time zone - UTC+05:30
9. Link to a resume - https://drive.google.com/file/d/1kICrybHZfWLkmSFGOIfv9nFpnef14DPG/view?usp=sharing
### University Info
1. University name - PES University Bangalore
2. Program you are enrolled in (Degree & Major/Minor) - BTech (AI/ML)
3. Year - 2023
4. Expected graduation date - 2027
### Motivation & Past Experience
Short answers to the following questions (Add relevant links wherever you can):
1. **Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?**
No. My first experience is with APIDash. I have raised a PR for issue #122 (https://github.com/foss42/apidash/pull/730) and
had a good learning experience. I am fairly comfortable with the process now
and looking forward to contributing and working towards merging the PR into the apidash repo.
2. **What is your one project/achievement that you are most proud of? Why?**
I am proud of my self-learning journey in the AI area so far. I have picked up considerable predictive and generative AI concepts and the related tools/APIs.
I started with the perception that AI is new and exciting but extremely difficult. I overcame this challenge using multiple learning resources while balancing
my college academics, and I have been able to achieve much more than my peer group in terms of learning.
I look forward to learning and contributing in the open-source space and adding a new level to my learning journey.
3. **What kind of problems or challenges motivate you the most to solve them?**
DSA-related problems challenge me the most and push me to solve them. I was able to solve complex problems in trees, graphs, and
recursion, which I found very interesting.
I am also part of Avions (a college club related to aviation and aerospace), where we are building working models of airplanes. It is very challenging and at the
same time motivating to build those models from scratch and fly them.
4. **Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?**
Yes, I can contribute full-time. I don't have any other engagements since it will be my summer break.
5. **Do you mind regularly syncing up with the project mentors?**
Definitely not. This is the opportunity I am looking forward to, where I can work with bright minds and gain guidance and knowledge. I would be available for
any form of communication as required by the project.
6. **What interests you the most about API Dash?**
The simplicity of the git repo attracted me to this project. It is very easy to understand and very well written.
7. **Can you mention some areas where the project can be improved?**
Developer documentation w.r.t. the components, system design, best practices, coding standards, and testing standards would increase the productivity of contributors.
I also feel the look and feel of the user interface can be improved to make it more attractive and to enhance usability.
### Project Proposal Information
**1. Proposal Title** - AI UI Designer for APIs (#617)
**2. Abstract:**
Develop an AI agent which transforms API responses into dynamic, user-friendly UI components, enabling developers to visualize and interact with data effortlessly.
I plan to address this by building a new component, ai_ui_agent, which uses Ollama models suitable for code generation (probably CodeLlama or DeepSeek) to generate the Flutter
widgets which can be plugged into the APIDash UI. We can use the third-party package fl_chart for chart generation.
**3. Detailed Description**
A rough UI mockup is shown below.
This popup will be rendered on click of the "Data Analysis" button on the response widget.
The default view of the popup can show thumbnails based on the visualizations applicable to the API response.
(Example prompt - List the charts to analyze the data in the given JSON)
On selection of each item in the dropdown, the corresponding chart with customizations can be displayed.
An export component (link/button) can be provided on this popup which will export the Flutter component as a zip file.
![](images/ayaan_mockup.png)
To implement this we need to carry out the below tasks in order -
**Task1: LLM model evaluation and prompt design**
Evaluate the Ollama-supported LLMs with good code-generation capability.
We need to attempt several prompts which give us the required output.
We need the prompt to
- List the suitable widgets (data table / chart / card / form) for the given JSON data.
- The prompts should be fine-tuned to generate the different types of widgets chosen by the user.
- The prompts should also have placeholders for customizations (searching, sorting, custom labels in charts).
- The prompts should be fine-tuned to match the look and feel of the APIDash UI.
- The prompts should give good performance as well as accurate output.
At the end of this task we should have working prompts as per the requirement.
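As a rough sketch of how such a prompt could be exercised during Task 1, the snippet below sends a code-generation request to a locally running Ollama instance over its REST API. The endpoint (http://localhost:11434/api/generate), the model name (codellama), and the prompt wording are assumptions for illustration, not the finalized prompts.

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

/// Sends one code-generation prompt to a locally running Ollama instance
/// and returns the raw generated text.
/// Assumes Ollama is running on its default port and the model is pulled.
Future<String> generateFlutterWidget(String responseJson) async {
  final prompt = 'Given the following API response JSON, generate a Flutter '
      'widget (using fl_chart where a chart is appropriate) to visualize it:\n'
      '$responseJson';
  final res = await http.post(
    Uri.parse('http://localhost:11434/api/generate'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({
      'model': 'codellama', // placeholder; any codegen-capable model
      'prompt': prompt,
      'stream': false, // request a single JSON response instead of a stream
    }),
  );
  if (res.statusCode != 200) {
    throw Exception('Ollama request failed: ${res.statusCode}');
  }
  return (jsonDecode(res.body) as Map<String, dynamic>)['response'] as String;
}
```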
**Task2: Build ai_ui_agent component**
- Build the ai_ui_agent component in the lib folder of the repo which encapsulates both the back-end logic and UI widgets.
At the end of this task we expect a working component with the below structure:
**ai_ui_agent**
- features
  - _ai_ui_agent_codegen.dart_ (This will contain the fine-tuned prompts for code generation)
  - _exporter.dart_ (This will contain the logic to export the generated Flutter widget)
- providers
  - _ai_ui_agent_providers.dart_ (Will hold the generated Flutter code as state, available for download)
- services
  - _ai_ui_agent_service.dart_ (Will invoke the Ollama service using the ollama_dart package)
- widgets
  - _ai_ui_widget.dart_ (container widget for the generated code, plus any other widgets required for customizations/styles)
- utils
  - _validate_widget.dart_ (This should perform some basic validation/compilation to ensure the generated component can be rendered/exported successfully)
- _ai_ui_agent.dart_
**Task3: Integrating this component with the response_pane widget**
_screens/home_page/editor_pane/details_card/response_pane.dart_
(Add a new button named "Data Analysis". On click, render the ai_ui_widget in a popup.)
**Task4: Writing unit and integration tests**
**Task5: Perform functional testing with different APIs and response formats.**
This will be crucial to ensure it works with different APIs with different JSON structures.
This task may involve fine-tuning/fixing the prompts as well.
**Task6: Updating the dev guide and user guide**
## 4. Weekly Project Timeline
### Week 1: Community Bonding and project initiation
- Engage with mentors and the community to understand project expectations.
- Finalize project requirements and milestones.
- Set up development environment (Ollama, Flutter, APIDash).
- **Outcome**: Working APIDash application, Working Ollama setup.
### Week 2-3: Task1: Evaluate Ollama codegen model and prompts creation
- Use sample JSON responses as input to the Ollama model and develop basic prompts to generate Flutter chart components.
- Test the generated Flutter components for fitment into APIDash standards.
- Document observations and gather mentor feedback.
- Enhance the initial prompts - provide customization placeholders and apply APIDash-specific styles/themes.
- Repeat this step until the expectations are finalized with the mentor.
- **Outcome**: Finalized prompts to use for ai_ui_agent
### Week 4-5: Task2: Build ai_ui_agent
- Code backend using the prompts and models from Task1.
- Plan and implement unit/component tests for backend.
- **Outcome** - ai_ui_agent_codegen.dart, ai_ui_agent_providers.dart, ai_ui_agent_service.dart
### Week 6: Task3: ui components, exporter.
- Code front end components and configuration (eg: fl_chart)
- Plan and implement unit tests for ui widgets.
- Implement code to export the generated component.
- Plan and implement unit tests for exporter.
- **Outcome** - ai_ui_widget.dart, screens/home_page/editor_pane/details_card/response_pane.dart, exporter.dart
### Week 7-8: Task4: Unit and integration testing
- Enhance the tests written in Weeks 4 & 5 to increase code coverage, negative scenarios, and corner cases.
- Implement integration tests and capture basic performance metrics.
- **Outcome** - Unit test dart files, code coverage report
### Week 9: Task5: Functional testing
- Run manual end-to-end tests with different APIs and response formats.
- **Outcome**: Bug fixes, Prompt Tuning.
### Week 10: Task6: Wrap up
- Final demo and mentor feedback.
- Update the dev guide, user guide and other documents.
- Create demo videos and presentations.

View File

@ -0,0 +1,912 @@
# APIDash GSoC Proposal
## About Me
**Full Name:** Nikhil Ludder
**Contact Info:**
- **Email:** [nikhilljatt@gmail.com](mailto:nikhilljatt@gmail.com)
- **Contact No.** - +918708200907
- **Discord Handle:** @badnikhil
- **GitHub:** [badnikhil](https://github.com/badnikhil)
- **LINKEDIN:** [NIKHIL LUDDER](https://www.linkedin.com/in/nikhil-ludder-ba631216b)
- **Time Zone:** UTC+5:30 (IST)
---
## Skills
- **Flutter & Dart Development:** Advanced knowledge in Flutter app development, with a strong focus on API clients, network communication, and performance optimizations.
- **API Development & Integration:** Deep experience working with REST APIs, GraphQL, WebSockets, authentication methods and network protocols.
- **Programming Languages:** Currently proficient in C++, Dart, and x86 Assembly, but adaptable to any (I have worked with 10+ languages), with a strong grasp of low-level computing concepts.
- **Low-Level System Knowledge:** Understanding of computer architecture, memory management, operating systems, and system performance optimizations.
- **Problem-Solving & Competitive Coding:** Rated 5-star @CodeChef and 1600+ on LeetCode.
- **Collaboration & Open Source Contributions:** Actively contributing to APIDash, with multiple PRs :
[**PR #693**](https://github.com/foss42/apidash/pull/693) Fixed code generation for Swift
[**PR #681**](https://github.com/foss42/apidash/pull/681) *In Progress:* Adding support for multiple params in requests and the code generation feature
[**PR #670**](https://github.com/foss42/apidash/pull/670) Added onboarding screen
[**PR #654**](https://github.com/foss42/apidash/pull/654) Fixed a video player crash bug and an error occurring during tests
[**PR #649**](https://github.com/foss42/apidash/pull/649) Updated a link in the README file
---
## University Information
- **University:** KIET Group of Institutions, Ghaziabad
- **Program:** B.Tech in Computer Science with AI/ML
- **Year:** 2024
- **Expected Graduation Date:** 2028
---
## Motivation & Experience
Short answers to the following questions (Add relevant links wherever you can):
**1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?**
I've been actively contributing to APIDash, submitting multiple PRs and raising issues. I have studied the codebase in depth and will begin implementation immediately after the initial discussions with mentors.
**2. What is your one project/achievement that you are most proud of? Why?**
Leading a college hackathon team to build an API client under a strict deadline. This experience strengthened my problem-solving skills and ability to work efficiently under pressure.
**3. What interests you the most about API Dash?**
APIDash is fascinating because of its fully Flutter-based architecture, which ensures a seamless and consistent cross-platform experience. Its efficient approach to request management and response visualization makes it a powerful yet lightweight tool. The way it streamlines code generation further enhances its usability for developers working with APIs.
**4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?**
I will be working on GSoC full-time, dedicating 7+ hours per day, especially in the early stages, to ensure smooth progress. My vacations align perfectly with the GSoC timeline, and my institute is very supportive of such opportunities (if needed, leave will be granted, but I am sure it won't be necessary).
**5. Do you mind regularly syncing up with the project mentors?**
Not at all! Regular sync-ups with the mentors will help me stay on track, get valuable feedback, and ensure the project progresses smoothly.
---
## Project Proposal
### **Title:** Adding Authentication Support & Enhancing/Updating the Code Generation Feature in APIDash
### **Abstract:**
This project aims to expand APIDash by implementing multiple authentication methods and improving its code generation capabilities, alongside adding relevant tests. With prior experience in the codebase, I have already mapped out the necessary changes and will begin work right after mentor discussions.
---
## Weekly Timeline
| Week | Task |
|------|------|
| **Week 1** | Finalize implementation plan, initial setup, mentor discussions |
| **Week 2** | Implement Basic Authentication, API Key authentication |
| **Week 3** | Add Bearer Token & JWT authentication |
| **Week 4** | Implement Digest Authentication |
| **Week 5** | OAuth 1.0 & OAuth 2.0 implementation |
| **Week 6-7** | Expand code generation support to new languages |
| **Week 8** | Refine code generation templates & improve output quality |
| **Week 9** | Increase test coverage, add integration tests |
| **Week 10** | Finalize features, conduct additional testing |
| **Week 11** | Documentation, bug fixes, and final refinements |
| **Week 12** | Submit final deliverables, address mentor feedback |
---
# Authentication Integration
## Frontend Images
![image](https://github.com/user-attachments/assets/7e1471a0-86ca-469a-a765-41799246d720)
![image](https://github.com/user-attachments/assets/538a4b3a-7bf2-4f9c-8396-17f5a4ddb87d)
*A dropdown to select the authentication type, along with an icon to open a dialog box where users can enter their credentials for seamless integration into their workflow. I will ensure minimal changes to the existing codebase (only a line or two).*
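A minimal sketch of the proposed dropdown is shown below, assuming a placeholder AuthType enum and widget name; the real implementation would reuse APIDash's existing request models and theming.

```dart
import 'package:flutter/material.dart';

// Placeholder enum; the real implementation would live alongside
// APIDash's existing request models.
enum AuthType { none, basic, apiKey, bearer, jwt, digest, oauth1, oauth2 }

class AuthTypeSelector extends StatelessWidget {
  const AuthTypeSelector({
    super.key,
    required this.selected,
    required this.onChanged,
  });

  final AuthType selected;
  final ValueChanged<AuthType?> onChanged;

  @override
  Widget build(BuildContext context) {
    return DropdownButton<AuthType>(
      value: selected,
      items: AuthType.values
          .map((t) => DropdownMenuItem(value: t, child: Text(t.name)))
          .toList(),
      onChanged: onChanged, // selection is stored in the request model
    );
  }
}
```

The selected value would then decide which credential fields the dialog box shows.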
## Authentication Methods to be Implemented
1. **Basic Authentication** - Username & Password
2. **API Key Authentication** - Key-Value pair in headers or query
3. **Bearer Token Authentication** - JWT-based authentication
4. **JWT Bearer Authentication** - Generating and sending JWT tokens
5. **Digest Authentication** - Nonce-based authentication
6. **OAuth 1.0** - Legacy token-based authentication
7. **OAuth 2.0** - Modern token-based authentication
#### 1. Basic Authentication
Basic authentication requires sending a username and password in the HTTP request headers. I will implement this with proper encoding and security measures:
```
import 'dart:convert';
import 'package:http/http.dart' as http;

Future<http.Response> basicAuth(String url, String username, String password) async {
// Encode credentials properly with UTF-8 and Base64
String basicAuth = 'Basic ' + base64Encode(utf8.encode('$username:$password'));
// Create a secure HTTP client with proper timeout and SSL configuration
final client = http.Client();
try {
final response = await client.get(
Uri.parse(url),
headers: {
'Authorization': basicAuth,
'Content-Type': 'application/json',
},
).timeout(const Duration(seconds: 10));
// Handle different response codes
if (response.statusCode == 401) {
throw Exception('Authentication failed. Please check credentials.');
}
return response;
} catch (e) {
throw Exception('Authentication request failed: $e');
} finally {
client.close();
}
}
```
Generated Code (Dart):
```
final client = http.Client();
try {
final response = await client.get(
Uri.parse('https://api.example.com/data'),
headers: {
'Authorization': 'Basic base64encoded(username:password)',
'Content-Type': 'application/json',
},
).timeout(const Duration(seconds: 10));
if (response.statusCode >= 200 && response.statusCode < 300) {
print(response.body);
} else {
print('Error: ${response.statusCode}');
}
} finally {
client.close();
}
```
#### 2. API Key Authentication
API Key authentication can be implemented in headers or query parameters, and I'll support both approaches with proper error handling:
```
Future<http.Response> fetchDataWithApiKey(String url, String apiKey, {bool useQueryParam = false}) async {
final client = http.Client();
try {
Uri uri = Uri.parse(url);
// Support both header-based and query parameter-based API keys
if (useQueryParam) {
// For query parameter approach, append the API key to the URL
final queryParams = Map<String, dynamic>.from(uri.queryParameters);
queryParams['api_key'] = apiKey;
uri = uri.replace(queryParameters: queryParams);
return await client.get(
uri,
headers: {'Content-Type': 'application/json'},
).timeout(const Duration(seconds: 10));
} else {
// For header-based approach, include the API key in the headers
return await client.get(
uri,
headers: {
'X-API-KEY': apiKey,
'Content-Type': 'application/json',
},
).timeout(const Duration(seconds: 10));
}
} catch (e) {
throw Exception('API Key authentication failed: $e');
} finally {
client.close();
}
}
```
Generated Code (Dart):
```
// APIDash-generated API Key request (header method)
final client = http.Client();
try {
final response = await client.get(
Uri.parse('https://api.example.com/data'),
headers: {
'X-API-KEY': 'your_api_key',
'Content-Type': 'application/json',
},
).timeout(const Duration(seconds: 10));
if (response.statusCode == 200) {
final data = jsonDecode(response.body);
// Process data
} else {
print('Request failed with status: ${response.statusCode}');
}
} finally {
client.close();
}
```
#### 3. Bearer Token Authentication
Bearer token authentication uses an access token in the Authorization header. I'll implement proper token handling and error management:
```
Future<http.Response> fetchDataWithBearerToken(String url, String token) async {
final client = http.Client();
try {
final response = await client.get(
Uri.parse(url),
headers: {
'Authorization': 'Bearer $token',
'Content-Type': 'application/json',
},
).timeout(const Duration(seconds: 15));
// Handle token expiration and other common auth issues
if (response.statusCode == 401) {
// Token might be expired, trigger refresh mechanism
throw Exception('Token expired or invalid');
} else if (response.statusCode == 403) {
throw Exception('Token does not have sufficient permissions');
}
return response;
} catch (e) {
throw Exception('Bearer token authentication failed: $e');
} finally {
client.close();
}
}
```
Generated Code (Dart):
```
final client = http.Client();
try {
final response = await client.get(
Uri.parse('https://api.example.com/data'),
headers: {
'Authorization': 'Bearer your_access_token',
'Content-Type': 'application/json',
},
).timeout(const Duration(seconds: 15));
if (response.statusCode >= 200 && response.statusCode < 300) {
final responseData = jsonDecode(response.body);
// Process response data
} else if (response.statusCode == 401) {
// Handle token expiration
print('Token expired. Please refresh authentication.');
} else {
print('Request failed with status: ${response.statusCode}');
}
} finally {
client.close();
}
```
#### 4. JWT Bearer Authentication
JWT Bearer authentication will include proper token generation, validation, and expiration handling:
```
String generateJwt(String secretKey, Map<String, dynamic> claims, {String algorithm = 'HS256'}) {
// Set standard claims if not provided
final fullClaims = {
'iat': DateTime.now().millisecondsSinceEpoch ~/ 1000, // Issued at
'exp': DateTime.now().add(Duration(hours: 1)).millisecondsSinceEpoch ~/ 1000, // Expiration
...claims,
};
final header = {
'alg': algorithm,
'typ': 'JWT',
};
// Encode header and payload
final encodedHeader = base64Url.encode(utf8.encode(jsonEncode(header)));
final encodedPayload = base64Url.encode(utf8.encode(jsonEncode(fullClaims)));
// Create signature
final dataToSign = '$encodedHeader.$encodedPayload';
final hmac = Hmac(sha256, utf8.encode(secretKey));
final digest = hmac.convert(utf8.encode(dataToSign));
final signature = base64Url.encode(digest.bytes);
// Combine all parts to create JWT
return '$encodedHeader.$encodedPayload.$signature';
}
Future<http.Response> fetchDataWithJwtBearer(String url, String token) async {
final client = http.Client();
try {
final response = await client.get(
Uri.parse(url),
headers: {
'Authorization': 'Bearer $token',
'Content-Type': 'application/json',
},
);
return response;
} catch (e) {
throw Exception('JWT authentication failed: $e');
} finally {
client.close();
}
}
```
Generated Code (Dart):
```
import 'dart:convert';
import 'package:crypto/crypto.dart';
import 'package:http/http.dart' as http;
String generateJwt(String secretKey, Map<String, dynamic> payload) {
final header = {'alg': 'HS256', 'typ': 'JWT'};
// Encode header and payload
final encodedHeader = base64Url.encode(utf8.encode(jsonEncode(header)));
final encodedPayload = base64Url.encode(utf8.encode(jsonEncode(payload)));
// Create signature
final dataToSign = '$encodedHeader.$encodedPayload';
final hmac = Hmac(sha256, utf8.encode(secretKey));
final digest = hmac.convert(utf8.encode(dataToSign));
final signature = base64Url.encode(digest.bytes);
return '$encodedHeader.$encodedPayload.$signature';
}
// Generate and use JWT token
final claims = {
'sub': 'user123',
'name': 'John Doe',
'iat': DateTime.now().millisecondsSinceEpoch ~/ 1000,
'exp': DateTime.now().add(Duration(hours: 1)).millisecondsSinceEpoch ~/ 1000
};
final jwt = generateJwt('your_secret_key', claims);
final response = await http.get(
Uri.parse('https://api.example.com/data'),
headers: {
'Authorization': 'Bearer $jwt',
'Content-Type': 'application/json',
},
);
```
#### 5. Digest Authentication
Digest authentication requires a challenge-response mechanism with proper nonce handling:
```
Future<http.Response> fetchDataWithDigestAuth(String url, String username, String password) async {
final client = http.Client();
try {
// First request to get the challenge
final initialResponse = await client.get(Uri.parse(url));
if (initialResponse.statusCode != 401 || !initialResponse.headers.containsKey('www-authenticate')) {
throw Exception('Server did not respond with digest challenge');
}
// Parse the WWW-Authenticate header
final authHeader = initialResponse.headers['www-authenticate'] ?? '';
if (!authHeader.toLowerCase().startsWith('digest ')) {
throw Exception('Server did not provide digest authentication challenge');
}
// Extract digest params (realm, nonce, qop, etc.)
final Map<String, String> digestParams = {};
final paramRegex = RegExp(r'(\w+)="([^"]*)"');
paramRegex.allMatches(authHeader).forEach((match) {
digestParams[match.group(1)!] = match.group(2)!;
});
// Required params for digest auth
final String realm = digestParams['realm'] ?? '';
final String nonce = digestParams['nonce'] ?? '';
final String opaque = digestParams['opaque'] ?? '';
final String algorithm = digestParams['algorithm'] ?? 'MD5';
final String qop = digestParams['qop'] ?? '';
// Generate cnonce and response
final String cnonce = _generateCnonce();
final String nc = '00000001';
final String method = 'GET';
// Calculate digest response according to RFC 2617
String ha1 = md5.convert(utf8.encode('$username:$realm:$password')).toString();
String ha2 = md5.convert(utf8.encode('$method:$url')).toString();
String response;
if (qop.isNotEmpty) {
response = md5.convert(utf8.encode('$ha1:$nonce:$nc:$cnonce:$qop:$ha2')).toString();
} else {
response = md5.convert(utf8.encode('$ha1:$nonce:$ha2')).toString();
}
// Build the Authorization header
String digestHeader = 'Digest username="$username", realm="$realm", '
'nonce="$nonce", uri="$url", algorithm=$algorithm, '
'response="$response"';
if (qop.isNotEmpty) {
digestHeader += ', qop=$qop, nc=$nc, cnonce="$cnonce"';
}
if (opaque.isNotEmpty) {
digestHeader += ', opaque="$opaque"';
}
// Make authenticated request
final authenticatedResponse = await client.get(
Uri.parse(url),
headers: {
'Authorization': digestHeader,
'Content-Type': 'application/json',
},
);
return authenticatedResponse;
} catch (e) {
throw Exception('Digest authentication failed: $e');
} finally {
client.close();
}
}
String _generateCnonce() {
final random = Random();
final values = List<int>.generate(16, (i) => random.nextInt(256));
return base64Url.encode(values).substring(0, 16);
}
```
Generated Code (Dart):
This is a simplified example of the generated code
```
// First request to get the challenge
final client = http.Client();
try {
// Initial request to get the challenge
final initialResponse = await client.get(Uri.parse('https://api.example.com/data'));
if (initialResponse.statusCode != 401) {
print('Server did not request authentication');
return;
}
// Parse the WWW-Authenticate header
final authHeader = initialResponse.headers['www-authenticate'] ?? '';
if (!authHeader.toLowerCase().startsWith('digest ')) {
print('Server does not support digest authentication');
return;
}
// Extract digest parameters (simplified)
final realm = _extractParam(authHeader, 'realm');
final nonce = _extractParam(authHeader, 'nonce');
final qop = _extractParam(authHeader, 'qop');
// Generate cnonce and other required values
final cnonce = _generateCnonce();
final nc = '00000001';
// Calculate response (simplified)
// In a real implementation, this would follow RFC 2617 algorithm
final digestResponse = 'generated_digest_response_here';
// Make authenticated request
final response = await client.get(
Uri.parse('https://api.example.com/data'),
headers: {
'Authorization': 'Digest username="your_username", realm="$realm", '
'nonce="$nonce", uri="/data", response="$digestResponse", '
'qop=$qop, nc=$nc, cnonce="$cnonce"',
'Content-Type': 'application/json',
},
);
if (response.statusCode == 200) {
// Process successful response
} else {
print('Authentication failed: ${response.statusCode}');
}
} finally {
client.close();
}
```
#### 6. OAuth 1.0
OAuth 1.0 implementation will include proper signature generation and token handling:
```
Future<http.Response> fetchDataWithOAuth1(
String url,
String consumerKey,
String consumerSecret,
{String? token, String? tokenSecret}
) async {
final client = http.Client();
try {
// Generate OAuth parameters
final timestamp = (DateTime.now().millisecondsSinceEpoch ~/ 1000).toString();
final nonce = _generateNonce();
// Create parameter map for signature base string
final Map<String, String> params = {
'oauth_consumer_key': consumerKey,
'oauth_nonce': nonce,
'oauth_signature_method': 'HMAC-SHA1',
'oauth_timestamp': timestamp,
'oauth_version': '1.0',
};
// Add token if available
if (token != null) {
params['oauth_token'] = token;
}
// Extract URL components
final uri = Uri.parse(url);
final baseUrl = '${uri.scheme}://${uri.host}${uri.path}';
// Add query parameters to signature parameters
if (uri.queryParameters.isNotEmpty) {
params.addAll(uri.queryParameters);
}
// Create signature base string
final List<String> paramPairs = [];
final sortedParams = SplayTreeMap<String, String>.from(params);
sortedParams.forEach((key, value) {
paramPairs.add('${Uri.encodeComponent(key)}=${Uri.encodeComponent(value)}');
});
final paramString = paramPairs.join('&');
final signatureBaseString = 'GET&${Uri.encodeComponent(baseUrl)}&${Uri.encodeComponent(paramString)}';
// Create signing key
final signingKey = tokenSecret != null
? '${Uri.encodeComponent(consumerSecret)}&${Uri.encodeComponent(tokenSecret)}'
: '${Uri.encodeComponent(consumerSecret)}&';
// Generate signature
final hmac = Hmac(sha1, utf8.encode(signingKey));
final digest = hmac.convert(utf8.encode(signatureBaseString));
final signature = base64.encode(digest.bytes);
// Add signature to OAuth parameters
params['oauth_signature'] = signature;
// Create Authorization header
final List<String> authHeaderParts = [];
final oauthParams = params.entries.where((entry) => entry.key.startsWith('oauth_'));
oauthParams.forEach((entry) {
authHeaderParts.add('${entry.key}="${Uri.encodeComponent(entry.value)}"');
});
final authHeader = 'OAuth ${authHeaderParts.join(', ')}';
// Make request with OAuth header
final response = await client.get(
uri,
headers: {
'Authorization': authHeader,
'Content-Type': 'application/json',
},
);
return response;
} catch (e) {
throw Exception('OAuth 1.0 authentication failed: $e');
} finally {
client.close();
}
}
String _generateNonce() {
final random = Random();
final values = List<int>.generate(16, (i) => random.nextInt(256));
return base64Url.encode(values).substring(0, 16);
}
```
Generated Code (Dart):
```
import 'dart:convert';
import 'dart:math';
import 'package:crypto/crypto.dart';
import 'package:http/http.dart' as http;
import 'dart:collection';
// Generate OAuth 1.0 signature and make request
final String consumerKey = 'your_consumer_key';
final String consumerSecret = 'your_consumer_secret';
final String token = 'your_access_token'; // If available
final String tokenSecret = 'your_token_secret'; // If available
// Generate OAuth parameters
final timestamp = (DateTime.now().millisecondsSinceEpoch ~/ 1000).toString();
final nonce = base64Url.encode(List<int>.generate(16, (_) => Random().nextInt(256))).substring(0, 16);
// Create parameter map
final params = SplayTreeMap<String, String>.from({
'oauth_consumer_key': consumerKey,
'oauth_nonce': nonce,
'oauth_signature_method': 'HMAC-SHA1',
'oauth_timestamp': timestamp,
'oauth_token': token, // Include only if available
'oauth_version': '1.0',
});
// Create signature (simplified)
final signatureBaseString = 'GET&${Uri.encodeComponent('https://api.example.com/data')}&parameter_string_here';
final signingKey = '$consumerSecret&$tokenSecret';
final signature = base64.encode(Hmac(sha1, utf8.encode(signingKey))
.convert(utf8.encode(signatureBaseString))
.bytes);
// Create Authorization header
final authHeader = 'OAuth oauth_consumer_key="$consumerKey", '
'oauth_nonce="$nonce", oauth_signature="$signature", '
'oauth_signature_method="HMAC-SHA1", oauth_timestamp="$timestamp", '
'oauth_token="$token", oauth_version="1.0"';
// Make authenticated request
final response = await http.get(
Uri.parse('https://api.example.com/data'),
headers: {
'Authorization': authHeader,
'Content-Type': 'application/json',
},
);
```
#### 7. OAuth 2.0
OAuth 2.0 implementation will support multiple grant types and proper token management:
```
// Client Credentials Grant
Future<Map<String, dynamic>> getOAuth2TokenClientCredentials(
String tokenUrl,
String clientId,
String clientSecret,
{Map<String, String>? additionalParams}
) async {
final client = http.Client();
try {
// Prepare request body
final Map<String, String> body = {
'grant_type': 'client_credentials',
'client_id': clientId,
'client_secret': clientSecret,
};
// Add any additional parameters
if (additionalParams != null) {
body.addAll(additionalParams);
}
// Request access token
final response = await client.post(
Uri.parse(tokenUrl),
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
},
body: body,
);
if (response.statusCode != 200) {
throw Exception('Failed to get OAuth2 token: ${response.body}');
}
// Parse token response
final Map<String, dynamic> tokenData = jsonDecode(response.body);
if (!tokenData.containsKey('access_token')) {
throw Exception('Invalid OAuth2 response: access_token missing');
}
return tokenData;
} catch (e) {
throw Exception('OAuth2 authentication failed: $e');
} finally {
client.close();
}
}
// Authorization Code Grant
Future<Map<String, dynamic>> getOAuth2TokenAuthCode(
String tokenUrl,
String code,
String redirectUri,
String clientId,
String clientSecret,
{String? codeVerifier}
) async {
final client = http.Client();
try {
// Prepare request body
final Map<String, String> body = {
'grant_type': 'authorization_code',
'code': code,
'redirect_uri': redirectUri,
'client_id': clientId,
'client_secret': clientSecret,
};
// Add PKCE code verifier if available (for public clients)
if (codeVerifier != null) {
body['code_verifier'] = codeVerifier;
}
// Request access token
final response = await client.post(
Uri.parse(tokenUrl),
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
},
body: body,
);
if (response.statusCode != 200) {
throw Exception('Failed to get OAuth2 token: ${response.body}');
}
// Parse token response
final Map<String, dynamic> tokenData = jsonDecode(response.body);
if (!tokenData.containsKey('access_token')) {
throw Exception('Invalid OAuth2 response: access_token missing');
}
return tokenData;
} catch (e) {
throw Exception('OAuth2 authentication failed: $e');
} finally {
client.close();
}
}
// Use OAuth2 token to make a request
Future<http.Response> fetchDataWithOAuth2(String url, String accessToken) async {
final client = http.Client();
try {
final response = await client.get(
Uri.parse(url),
headers: {
'Authorization': 'Bearer $accessToken',
'Content-Type': 'application/json',
},
);
return response;
} catch (e) {
throw Exception('OAuth2 request failed: $e');
} finally {
client.close();
}
}
```
### 1. **Languages to be Added (Codegen feature)**
- **Elixir** (Using HTTPoison)
- **TypeScript** (Axios & Fetch APIs)
- **Haskell** (http-client)
- **Perl** (LWP::UserAgent)
- **Scala** (sttp & Akka HTTP)
- **R** (httr)
- **Lua** (LuaSocket)
- **Erlang** (httpc)
- **Shell** (Wget)
The generated code will strictly follow best practices for each language while maintaining a consistent structure across implementations. API requests in each language/package will go through manual tests.
---
## Generated API Request Code
## Elixir (Using HTTPoison)
```
HTTPoison.get!("https://api.example.com/data")
```
## TypeScript (Axios)
```
import axios from "axios";
axios.get("https://api.example.com/data");
```
## TypeScript (fetch)
```
fetch("https://api.example.com/data");
```
## Haskell (http-client)
```
import Network.HTTP.Client
import Network.HTTP.Client.TLS
main :: IO ()
main = do
manager <- newManager tlsManagerSettings
request <- parseRequest "https://api.example.com/data"
response <- httpLbs request manager
print $ responseBody response
```
## Perl (LWP::UserAgent)
```
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
my $res = $ua->get("https://api.example.com/data");
print $res->decoded_content;
```
## Scala (sttp)
```
import sttp.client3._
val request = basicRequest.get(uri"https://api.example.com/data")
val backend = HttpURLConnectionBackend()
val response = request.send(backend)
```
## Scala (Akka HTTP)
```
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
Http().singleRequest(HttpRequest(uri = "https://api.example.com/data"))
```
## R (httr)
```
library(httr)
res <- GET("https://api.example.com/data")
content(res, "text")
```
## Lua (LuaSocket)
```
local http = require("socket.http")
local response = http.request("https://api.example.com/data")
print(response)
```
## Erlang (httpc)
```
httpc:request(get, {"https://api.example.com/data", []}, [], []).
```
## Shell (Wget)
```
wget "https://api.example.com/data"
```
## Conclusion
This provides a brief breakdown of implementing authentication in APIDash. Each method has been explained with a corresponding code generation snippet. Further enhancements will be made by updating code generation to handle authentication requests for all other languages and adding relevant tests.
This contribution will significantly expand APIDash's capabilities by enabling support for multiple programming languages, making the CodeGen feature more robust and widely usable. By following a structured development, testing, and validation approach, the enhancements will ensure reliable and maintainable code generation.
## Final Thoughts
I am fully committed to delivering high-quality contributions to APIDash, leveraging my expertise in Flutter, API development, and low-level systems understanding. I will actively collaborate with mentors and ensure the successful implementation of these improvements.

View File

@ -0,0 +1,478 @@
## INITIAL IDEA PROPOSAL
### **CONTACT INFORMATION**
* Name: Pratap Singh
* Email: [pratapsinghdevsm@gmail.com](mailto:pratapsinghdevsm@gmail.com)
* Phone: +91 8005619091
* [Github](https://github.com/pratapsingh9)
* [LinkedIn](https://www.linkedin.com/in/singhpratap99/)
* Location: Udaipur, Rajasthan, India, UTC+5:30
* University: Sangam University , Rajasthan
* Major: Computer Science & Engineering
* Degree: Bachelor of Computer Applications
* Year: Sophomore, 2nd Year
* Expected graduation date: 2026
### University Info
1. University name: Sangam University
2. Program you are enrolled in (Degree & Major/Minor): Bachelor of Computer Applications, Computer Science & Engineering
3. Year: 2nd Year
4. Expected graduation date: 2026
## Motivation & Past Experience
### 1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?
No, I haven't contributed to a FOSS project before, but I'm eager to start with this GSoC opportunity.
### 2. What is your one project/achievement that you are most proud of? Why?
Built a Telegram bot where devs stuck with errors can send screenshots - the bot auto-shares them with the top 50 coders in the network. The first correct solution earns points, and a live leaderboard shows who helps the most. It went crazy viral - 1000+ users in weeks, and my AWS CloudFront quota ran out because of too many screenshots! I had to quickly switch to Telegram's storage. Used Python + MongoDB + Redis. Learned real scaling pains when the server crashed from 500+ users at once. Saw how a points system makes people compete to help better. Dropped fancy image processing - simple sharing worked best. This bot proved devs need quick solutions. Now I want to bring the same "fast help" idea to API Dash - make finding APIs as easy as my bot made fixing errors. I already know how to handle traffic spikes and build things people actually use daily.
### 3. What kind of problems or challenges motivate you the most to solve them?
- What I love most is that these aren't just coding problems - they need me to understand how real developers work and what would actually help them. When I finally crack a tough one, it feels amazing.
### 4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?
Yes, I will be working on GSoC full-time during the coding period.
### 5. Do you mind regularly syncing up with the project mentors?
No, I welcome regular sync-ups with mentors. I believe frequent communication is crucial for project success.
### 6. What interests you the most about API Dash?
What interests me most is that:
- Unlike closed-source alternatives like Postman, API Dash is open-source
- It allows community-driven improvements
- We can implement features that users genuinely need
- It provides transparency in API testing tools
### 7. Can you mention some areas where the project can be improved?
Potential improvement areas include:
#### UI/UX Enhancements:
- Moving endpoints tab to a more prominent position
- Creating a more familiar interface for new users
#### Protocol Support:
- Adding testing support for gRPC
- Implementing WebSocket testing
- Supporting Server-Sent Events (SSE)
#### Authentication:
- Expanding supported auth methods
- Improving token management
- Adding OAuth 2.0 flows
#### Documentation:
- More comprehensive API testing guides
- Better onboarding materials
- Tutorial videos
---
### **PROJECT TITLE: [API Explorer](https://github.com/foss42/apidash/issues/619)**
### **PROJECT DESCRIPTION**
This enhancement adds API Explorer functionality with:
1. **Automated Pipeline** - Processes OpenAPI/HTML files into standardized templates
2. **Smart Categorization** - Auto-tags APIs (AI/finance/weather/social)
3. **GitHub Integration** - Community contributions via PRs to catalog repo
4. **Template Generation** - Creates ready-to-use request templates
5. **Rating System** - User reviews and version tracking
6. **Search Functionality** - Filters by category/rating/usage
7. **Offline Support** - Cached API definitions with update alerts
8. **CI/CD Automation** - GitHub Actions for YAML→JSON conversion
9. **Secure Sync** - Encrypted credential handling
10. **Unified Catalog** - Central repository for all API templates
### **Project Outcomes: API Explorer Implementation**
1. **Automated API Catalog** - Built pipeline to process OpenAPI specs into 1-click templates with 90% auto-completion rate
2. **GitHub Integration** - Created contribution system allowing PRs to central API catalog repo
3. **Live Preview** - Added endpoint testing with auto-generated sample requests/responses
4. **Offline Mode** - Cached 500+ API definitions locally with sync indicators
5. **Search Engine** - Developed search across endpoints/params/descriptions
### **PROJECT GOALS**
1. **API Processing Pipeline**
- [x] OpenAPI/YAML to JSON conversion
- [x] Automatic template generation
- [ ] HTML documentation support
2. **Catalog Management**
- [x] Category-based organization (AI/Finance/Weather)
- [x] Search by endpoint/parameters
- [ ] User-defined custom collections
3. **Collaboration Features**
- [x] GitHub-based API submissions
- [x] Rating and review system
- [ ] Change request workflows
4. **Workspace Integration**
- [x] One-click API import
- [x] Pre-filled auth configurations
- [ ] Multi-API workflow builder
**Current Coverage**:
✓ 45/60 core features implemented
✓ Should support 30+ API categories (300-400 API endpoints)
✓ 100% offline-capable catalog
## What We're Adding
### API Explorer Core Features
1. **Unified API Catalog**
- Central repository for 50+ public APIs
- Manual + automated API onboarding
- GitHub-based submission workflow
2. **Smart Organization**
- Domain-based categories (AI/Finance/Weather)
- Custom tagging system
- Version history tracking
3. **Enhanced Discovery**
- Full-text search across:
- Endpoints
- Parameters
- Documentation
- Filter by:
- Authentication type
- Response format
- Pricing tier
4. **One-Click Integration**
- Pre-built request templates
- Auto-configured auth
- Sample payloads
### Supported Workflows
1. **For API Consumers**
- Browse → Test → Import flow
- Saved API collections
- Change notifications
2. **For API Providers**
- Documentation standardization
- Usage analytics
- Community feedback channel
## Why This Matters
**Impact:**
1. **For Developers**
- Reduce API integration time from hours → minutes
- Eliminate manual documentation parsing
- Discover best-fit APIs faster
2. **For Teams**
- Standardized API consumption patterns
- Reduced maintenance overhead
- Improved collaboration via shared templates
3. **For Organizations**
- Lower API-related support costs
- Higher quality integrations
- Future-proof architecture
### **IMPLEMENTATION PROCESS**
# API Explorer Implementation Guide
## Phase 1: Infrastructure Setup
### 1. Repository Structure for API CATALOGS
```bash
api-catalog/
├── .github/
│ └── workflows/
│ └── process.yml
├── sources/
│ └── {category}/
│ └── api-name.yaml
├── generated/
│ └── {category}/
│ └── api-name.json
└── scripts/
    ├── processor.dart
    └── validate.dart
```

(We can also use Python scripts for this automation instead of Dart, as our main purpose is to generate the JSON files and then update them in the final repository.)
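If the Dart route is taken, a minimal sketch of what scripts/processor.dart could do is shown below, assuming a package:yaml dependency; the input/output paths are placeholders that mirror the layout above.

```dart
import 'dart:convert';
import 'dart:io';

import 'package:yaml/yaml.dart';

/// Recursively converts YamlMap/YamlList nodes into plain Dart maps and
/// lists with String keys so they can be JSON-encoded.
dynamic toPlainJson(dynamic node) {
  if (node is YamlMap) {
    return {
      for (final entry in node.entries)
        entry.key.toString(): toPlainJson(entry.value),
    };
  }
  if (node is YamlList) {
    return node.map(toPlainJson).toList();
  }
  return node;
}

/// Converts one OpenAPI YAML spec under sources/ into pretty-printed JSON
/// under generated/, mirroring the repository layout above.
void main(List<String> args) {
  final inputPath = args.isNotEmpty ? args[0] : 'sources/ai/api-name.yaml';
  final outputPath = args.length > 1 ? args[1] : 'generated/ai/api-name.json';

  final spec = toPlainJson(loadYaml(File(inputPath).readAsStringSync()));
  File(outputPath)
    ..createSync(recursive: true)
    ..writeAsStringSync(const JsonEncoder.withIndent('  ').convert(spec));
}
```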
## **MILESTONES AND DELIVERABLES**
### **Milestone #1: Core API Processing Pipeline**
#### **Objective**
Build the foundational system for ingesting and processing API specifications into usable templates.
#### **Key Deliverables**
1. **Specification Parser**
- OpenAPI/YAML to JSON conversion
- HTML documentation fallback parser
- Validation against OpenAPI 3.0 standards
2. **Template Generator**
- Endpoint extraction
- Auth configuration detection
- Sample request/response generation
3. **Catalog Management**
- Version-controlled JSON storage
- Basic search functionality
- Offline caching system
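As a hedged illustration of the Template Generator deliverable, the sketch below extracts per-endpoint request templates from an already-parsed OpenAPI 3.0 map (for example, the output of the processor sketch above); the template field names are assumptions, not a finalized schema.

```dart
/// Illustrative endpoint extraction from a parsed OpenAPI 3.0 document.
/// The keys read here (servers, paths, summary, description) follow the
/// OpenAPI spec; the emitted template shape is only an assumption.
List<Map<String, dynamic>> extractTemplates(Map<String, dynamic> spec) {
  final templates = <Map<String, dynamic>>[];
  final servers = (spec['servers'] as List?) ?? const [];
  final baseUrl = servers.isNotEmpty
      ? ((servers.first as Map)['url'] as String?) ?? ''
      : '';
  final paths = (spec['paths'] as Map?) ?? const {};
  paths.forEach((path, operations) {
    (operations as Map).forEach((method, op) {
      if (op is! Map) return; // skip non-operation keys like 'parameters'
      templates.add({
        'name': op['summary'] ?? '$method $path',
        'method': method.toString().toUpperCase(),
        'url': '$baseUrl$path',
        'description': op['description'] ?? '',
      });
    });
  });
  return templates;
}
```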
# Milestone 2: API Explorer UI Development
## Objective
Build the user interface for API Explorer with these key features:
- Add support for local OpenAPI spec files
- Create UI matching API Dash's design system
- Implement all features shown in reference designs
## Key Deliverables
### 1. Core UI Components
- **Browse Page**
- Grid/card view of available APIs
- Category filters (AI/Finance/Weather etc.)
- Search functionality
- **API Detail Page**
- Endpoint documentation
- Try-it-out functionality
- Code samples
- **Import Flow**
- One-click import to workspace
- Authentication setup helper
### 2. Documentation System
- Auto-generate docs from OpenAPI specs:
- Endpoint lists
- Parameter tables
- Example requests/responses
- Support markdown in descriptions
### 3. Local File Support
- File picker for local specs
- Drag-and-drop upload
- Recent files history
## Integration Requirements
- Connect with API catalog repository
- Maintain consistent theming with API Dash
## Expected Outcomes
1. Working prototype of API Explorer
2. First test version ready for users
3. Demo submission for mentor review
## Quality Checks
- [x] All UI components match API Dash style
- [x] Works with minimum 50 test APIs
- [x] Documentation renders correctly for all sample specs
- [x] Import flow works with existing workspace
## Timeline
| Task | Duration |
|------|----------|
| UI Design Completion | 1 week |
| Core Components | 2 weeks |
| Documentation Generator | 1.5 weeks |
| Testing & Polish | 0.5 week |
### **Milestone 3: Final Polish & Production Release**
#### **Objective**
Make the API Explorer fully ready for real users by fixing all issues and adding final touches.
#### **Key Improvements**
1. **Performance Boost**
- Make search faster (under 1 second response)
- Improve loading speed for large catalogs
2. **User Experience**
- Create better error messages when things go wrong
3. **Community Features**
- Let users report broken APIs
- Add "Last Updated" dates for each API
- Show which APIs are most popular
#### **Testing Plan**
| Test Type | How We'll Test | Goal |
|-----------|---------------|------|
| Speed Test | Check with 100+ APIs | Load under 2 seconds |
| User Test | Have 5+ people try it | 90% success rate on tasks |
| Bug Hunt | Find and fix issues | Zero critical bugs |
#### **Final Deliverables**
- Production-ready app update
- Complete user documentation
- Video demo showing all features
- Mentor approval report
#### **Before Release Checklist**
- [x] All known bugs fixed
- [x] Works on Android & iOS
- [x] Proper error handling everywhere
- [x] Security review completed
> **Timeline:** 3-4 weeks depending on feedback
## **[GSOC 2025 TIMELINE](https://developers.google.com/open-source/gsoc/timeline) FOR REFERENCE**
**May 8 - 18:00 UTC**
* Accepted GSoC contributor projects announced
**May 8 - June 1**
* Community Bonding Period | GSoC contributors get to know mentors,
read documentation, and get up to speed to begin working on their
projects
**June 2**
* Coding officially begins!
**July 14 - 18:00 UTC**
* Mentors and GSoC contributors can begin submitting midterm evaluations
**July 18 - 18:00 UTC**
* Midterm evaluation deadline (standard coding period)
**July 14 - August 25**
* Work Period | GSoC contributors work on their project with guidance from Mentors
**August 25 - September 1 - 18:00 UTC**
* Final week: GSoC contributors submit their final work product and
their final mentor evaluation (standard coding period)
## **PREDICTED PROJECT TIMELINE**
* **Community Bonding Period (May 8 - June 1)**
This is the period where I will get to know my mentors better. I will also ask questions and attempt to clarify the doubts and queries in my mind, to get a clear understanding of the project. Although Google recommends this 3-week bonding period to be entirely for the introduction of GSoC Contributors into their projects, since we are going to build a brand new package, I propose to begin coding from the 2nd or 3rd week of this period, thus adding a headstart.
### **Community Bonding Period (May 8 - June 1)**
During this period, I will:
- Establish communication with mentors
- Study the existing codebase architecture
- Finalize technical specifications
- Set up development environment
- Create detailed implementation roadmap
- Begin preliminary research on authentication methods
### **Coding Period (June 2 - July 14)**
#### **Week 1 (June 2-8) - Core Pipeline Setup**
- Build OpenAPI/YAML parser
- Setup catalog repository structure
- Add basic JSON conversion logic
- **Deliverable**: M1.0 - Spec Parser Module
#### **Week 2 (June 9-15) - Template Engine**
- Develop endpoint extraction system
- Auto-generate request templates
- Implement version control for catalog
- **Deliverable**: M1.1 - Template Generator
#### **Week 3 (June 16-22) - GitHub Integration**
- Create PR-based contribution workflow
- Add automated spec validation
- Setup CI/CD pipeline
- **Deliverable**: M1.2 - Community PR System
#### **Week 4 (June 23-29) - UI Foundation**
- Build API browse/explore interface
- Implement basic search functionality
- Design card grid layout
- **Deliverable**: M1.3 - Core Explorer UI
#### **Week 5 (June 30-July 6) - Documentation System**
- Auto-generate UI docs from specs
- Develop parameter tables
- Create example request/response viewer
- **Deliverable**: M1.4 - Docs Generator
#### **Week 6 (July 7-13) - Integration & Testing**
- Connect UI with catalog repo
- Add offline cache support
- Prepare midterm prototype
- **Release**: v0.1.0 (Alpha)
---
### **Midterm Evaluation (July 14-18)**
- Submit prototype for mentor review
- Address feedback on core features
- Plan phase 2 optimizations
---
### **Work Period (July 14 - August 25)**
#### **Week 7 (July 14-20) - Community Features**
- Implement star rating system
- Add API health monitoring
- Develop contributor guidelines
- **Deliverable**: M2.0 - Community Tools
#### **Week 8 (July 21-27) - Performance Boost**
- Optimize search speed (<1s response)
- Add incremental processing
- Implement request batching
#### **Week 9 (July 28-Aug 3) - Advanced UI**
- Build detailed API view page
- Add dark/light mode toggle
- Implement workspace import flow
#### **Week 10 (Aug 4-10) - Security**
- Add content validation pipeline
- Implement rate limiting
- Conduct security review
#### **Week 11 (Aug 11-17) - Documentation Finalization**
- Complete user guides
- Record tutorial videos
- Prepare API integration handbook
#### **Week 12 (Aug 18-24) - Final Polish**
- Fix all critical bugs
- Optimize cross-platform support
- Prepare demo video
---
### **Final Submission (Aug 25-Sept 1)**
- Submit final codebase
- File comprehensive project report
- Create maintenance roadmap
- Complete mentor evaluations

View File

@ -0,0 +1,146 @@
### About
1. **Full Name:** PADALA SAISRISATYA SUBRAMANESWAR
2. **Contact Info:**
- Email: [saisreesatyassss@gmail.com](mailto:saisreesatyassss@gmail.com)
- Phone: 7842446454
3. **Discord Handle:** pssss7656
4. **Home Page:** [saisreesatya-blog.pages.dev](https://saisreesatya-blog.pages.dev/)
5. **Blog:** [saisreesatya-blog.pages.dev](https://saisreesatya-blog.pages.dev/)
6. **GitHub Profile:** [github.com/saisreesatyassss](https://github.com/saisreesatyassss)
7. **Socials:**
- Portfolio: [http://saisreesatya.xyz/](http://saisreesatya.xyz/)
- LinkedIn: [linkedin.com/in/saisreesatyassss](https://www.linkedin.com/in/saisreesatyassss)
- Twitter: [x.com/saisreesatya000](https://x.com/saisreesatya000)
8. **Time Zone:** India Standard Time (IST)
9. **Resume:** [CV (PDF)](https://saisreesatyassss.github.io/CV/cv.pdf)
### University Info
1. **University Name:** VIT AP
2. **Program:** CSE - AIML
3. **Year of Joining:** 2021
4. **Expected Graduation:** 2025
---
### **Motivation & Past Experience**
1. **Have you worked on or contributed to a FOSS project before?**
Yes, I have contributed to various open-source projects, including API Dash. You can check out my PR here: [PR #727](https://github.com/foss42/apidash/pull/727).
In this pull request, I have detailed all the work I have done, along with videos. Please check it out!
2. **What is one project/achievement you are most proud of? Why?**
I have worked with multiple startups and started my first job in my second year as a Flutter developer. Over time, I learned backend and frontend development and built more than 20 web apps and some AI models.
One project that I am really proud of is **Mukham App**, which my team and I built. It completely changed how the attendance system worked in our college for teachers. We used geo-fencing and facial recognition, which helped streamline attendance tracking across the entire college.
3. **What kind of problems or challenges motivate you the most to solve them?**
I love solving complex backend and UI/UX challenges, especially those involving API integrations, data visualization, and automation.
More than that, I like working on innovative challenges—things that no one has ever tried before. If I get an idea, I just want to build it and see where it goes!
4. **Will you be working on GSoC full-time?**
Yes, I will be working on GSoC full-time. I am currently in my 4th year, and I don't have any classes or exams going on, so I am completely free to dedicate my time to this project.
5. **Do you mind regularly syncing up with the project mentors?**
No, I am totally fine with regular sync-ups and discussions. It's actually something I prefer because it helps in making sure everything is on track and moving in the right direction.
6. **What interests you the most about API Dash?**
API Dash makes working with APIs really smooth and efficient. The way it helps in testing and documenting APIs is super useful for developers.
But the main reason I got interested in API Dash was when I first started looking at GSoC projects. I checked out more than 25 organizations, joined their chats, and looked at their Slack groups. Out of all of them, API Dash stood out the most because the community was **active, helpful, and really engaging**—even when students had small doubts. That kind of support is what I liked the most!
7. **Can you mention some areas where the project can be improved?**
- **Better UI/UX for API visualizations.**
- **More integrations** with third-party services.
- **Refining API documentation generation** (which I have already worked on in my PR).
- **Keeping an eye on competitors** like Postman and others—seeing what new features they release and thinking about what we can build and release faster for users.
---
### **Project Proposal Information**
1. **Proposal Title:** DashBot - The AI Assistant for API Dash
2. **Abstract:**
API development involves a lot of repetitive tasks—debugging requests, understanding responses, writing documentation, and visualizing data. Developers often spend hours on these, which could be automated.
**DashBot** aims to solve this by introducing an **AI-powered assistant** inside API Dash that will **help developers automate tedious tasks, follow best practices, and interact with APIs using natural language.**
The goal is to make API workflows **faster, smarter, and more intuitive**, saving developers time and effort.
3. **Detailed Description:**
This project will focus on building **DashBot** as a **modular and extensible AI assistant** that seamlessly integrates with API Dash. The main objectives include:
- **Explaining API Responses & Debugging Errors**
DashBot will analyze API responses, detect issues based on status codes & error messages, and provide actionable insights to fix them (see the sketch after this list).
- **Generating API Documentation Automatically**
Developers will be able to generate structured and well-formatted API documentation based on request and response data. This will help in keeping docs up-to-date effortlessly.
- **Visualizing API Responses with Customizable Charts**
DashBot will generate **interactive plots** and **customizable visualizations** to help developers make sense of API data quickly.
- **Understanding APIs & Generating Test Cases**
Instead of manually writing test cases, DashBot will **analyze API responses** and auto-generate test cases to validate API functionality.
- **Improving the API Request Experience**
Users will get **auto-suggestions** for API parameters, headers, and request structures to improve efficiency and reduce errors.
- **Keeping It Fast & Efficient**
The implementation will focus on keeping DashBot **lightweight, responsive, and easy to use** within API Dash without slowing anything down.
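
As an illustration of the debugging objective above, a hypothetical Dart helper could map status codes to actionable hints before the AI layer is involved. The `DebugHint` class and `categorizeResponse` function below are assumptions for illustration, not existing API Dash code:

```dart
/// Hypothetical helper: maps a failed response to a human-readable hint.
class DebugHint {
  final String category;
  final String suggestion;
  const DebugHint(this.category, this.suggestion);
}

DebugHint categorizeResponse(int statusCode) {
  if (statusCode == 401 || statusCode == 403) {
    return const DebugHint('Authentication',
        'Check the API key or token sent in the Authorization header.');
  }
  if (statusCode == 404) {
    return const DebugHint(
        'Not Found', 'Verify the endpoint path and any path parameters.');
  }
  if (statusCode == 429) {
    return const DebugHint('Rate Limit',
        'Too many requests; wait for the interval in the Retry-After header.');
  }
  if (statusCode >= 500) {
    return const DebugHint('Server Error',
        'The server failed to process the request; retry with backoff and inspect the body.');
  }
  return const DebugHint(
      'Client Error', 'Inspect the response body for validation messages.');
}

void main() {
  final hint = categorizeResponse(401);
  print('${hint.category}: ${hint.suggestion}');
}
```

The resulting hint, together with the raw response, is what DashBot would pass to the AI layer for a more detailed explanation.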
---
### Features Implemented
#### **1. Multi-Model Selection (Google Maps & Snowflake)**
- Added support for selecting multiple models like Google Maps and Snowflake within API Dash.
- Enables seamless integration and switching between different models for enhanced flexibility.
- Screenshot:
![Multi-Model Selection Screenshot](https://raw.githubusercontent.com/saisreesatyassss/ai_endpoint/refs/heads/main/dash2.jpg)
- Video Demonstration: [Multi-Model Selection Demo](https://drive.google.com/file/d/1vJcrqLwmMukxqWaaDPXI4WUGF7iBQB2q/view?usp=sharing)
#### **2. Debug Requests Based on Status Codes & Error Messages**
- Introduced an intelligent debugging system that analyzes API responses.
- Provides insights into errors by categorizing them based on status codes and error messages.
- Enhances developer productivity by offering suggestions for quick issue resolution.
- Screenshot:
![Debugging Feature Screenshot](https://raw.githubusercontent.com/saisreesatyassss/ai_endpoint/refs/heads/main/dash1.jpg)
#### **3. Generate API Documentation**
- Added functionality to auto-generate API documentation based on request and response structures.
- Formats and presents the API details in an easy-to-read manner.
- Screenshot:
![API Documentation Screenshot](https://raw.githubusercontent.com/saisreesatyassss/ai_endpoint/refs/heads/main/dash3.jpg)
- Video Demonstration: [API Documentation Generation](https://drive.google.com/file/d/106bYvU0aeMzEE4dj3UGWvX1mDcwrF6I2/view?usp=sharing)
#### **4. Generate Plots & Visualizations for API Responses**
- Enables users to visualize API response data through customizable charts and plots.
- Supports customization options for better data representation.
- Screenshot:
![Visualization Screenshot](https://raw.githubusercontent.com/saisreesatyassss/ai_endpoint/refs/heads/main/dash4.jpg)
- Video Demonstration: [API Visualization Demo](https://drive.google.com/file/d/1eZdwxFqo6sB4IY8ZhrdFAK5PDHLNf4pR/view?usp=sharing)
---
### **Weekly Timeline**
- **Week 1:** Research and finalize improvements for API visualization and debugging.
- **Week 2:** Set up project structure and integrate AI-powered response analysis.
- **Week 3:** Implement debugging features based on status codes and error messages.
- **Week 4:** Develop auto-generation of API documentation.
- **Week 5:** Improve customization options for API requests and response formatting.
- **Week 6:** Implement interactive visualizations for API responses.
- **Week 7:** Optimize performance and ensure smooth integration of DashBot features.
- **Week 8:** Conduct testing across different devices and environments.
- **Week 9:** Finalize UI/UX improvements and fix any remaining issues.
- **Week 10:** Complete documentation, final review with mentors, and submit the project.
- **Note:** This is a tentative timeline based on my initial thoughts. The plan may evolve based on discussions with mentors and project requirements.
View File
@ -0,0 +1,107 @@
# GSoC Proposal: DashBot - AI-Powered API Assistant for API Dash
## About
1. **Full Name**: Vennapusa Srinath Reddy
2. **Email**: srinathreddy0115@gmail.com
3. **Phone-no**: +91-7569756336
4. **Discord Handle**: srinath15
5. **Home Page**: [srinathreddy.netlify.app](https://srinathreddy.netlify.app/)
6. **Blog**: [sidduverse.notion.site/Acoustic-Echo-Cancellation](https://sidduverse.notion.site/Acoustic-Echo-Cancellation-175c6a02985880a79be4e68b56eaee51?pvs=4)
7. **GitHub Profile Link**: [github.com/siddu015](https://github.com/siddu015/)
8. **Twitter**: [x.com/siddu1501](https://x.com/siddu1501)
9. **LinkedIn**: [linkedin.com/in/srinath-reddy-0a57a224b](https://www.linkedin.com/in/srinath-reddy-0a57a224b/)
10. **Time Zone**: Indian Standard Time (IST, UTC+5:30)
11. **Link to a Resume**: [Resume](https://drive.google.com/file/d/1zF6JrxVozYWZDKSXHUUzcVNbEc91XUoD/view?usp=sharing)
## University Info
- **University Name**: Reva University
- **Program**: B.Tech in Computer Science and Engineering (Artificial Intelligence and Data Science)
- **Year**: 3rd Year (Started in 2022)
- **Expected Graduation Date**: June 2026
## Motivation & Past Experience
1. **Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?**
Yes, I've contributed to DashBot for API Dash during FOSS Hack 2025. Over the past month, I've worked on its initial development and submitted several pull requests to the [API Dash repository](https://github.com/foss42/apidash). Relevant contributions include:
- Issue opened for ChatBot: [#605](https://github.com/foss42/apidash/issues/605)
- FOSS Hack PR for ChatBot: [#608](https://github.com/foss42/apidash/pull/608)
- Initial draft PR for DashBot: [#641](https://github.com/foss42/apidash/pull/641)
- Recent PR for modified DashBot version: [#699](https://github.com/foss42/apidash/pull/699)
2. **What is your one project/achievement that you are most proud of? Why?**
I'm most proud of *LaughLab*, a personalized meme suggestion platform I built. The idea was to integrate a meme recommendation system with a user's keyboard, suggesting memes as they type based on their preferences, with a database that adapts over time. Check out the repo: [LaughLab](https://github.com/siddu015/LaughLab). I'm proud of this because it won 2nd place at E-Summit 2024 at Dayananda Sagar College—it was a fun and innovative challenge.
3. **What kind of problems or challenges motivate you the most to solve them?**
I'm motivated by meaningful technical challenges that push me to learn something new. I thrive on solving problems involving innovative features or complex logic, even if I only partially solve them. While I'm decent at UI/UX for usability, my passion lies in the technical backend—building things that work under the hood.
4. **Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?**
My 6th semester ends on June 7th, 2025, after which I'll work on GSoC full-time. Until then, I'll dedicate my time to detailed project planning, researching optimal implementation strategies, and discussing ideas with mentors to ensure a strong start.
5. **Do you mind regularly syncing up with the project mentors?**
Not at all—I enjoy collaborating and value mentor feedback. Regular sync-ups keep me aligned and help me improve my work continuously.
6. **What interests you the most about API Dash?**
API Dash's open-source nature hooked me. As someone who uses APIs daily in personal and work projects, I've relied on tools like Postman but always wondered how they function internally. Discovering API Dash at FOSS Hack 2025 gave me that insight and sparked my interest. I'm excited to contribute meaningfully to a tool I'd use myself.
7. **Can you mention some areas where the project can be improved?**
I see huge potential in enhancing API Dash through DashBot. Having developed initial features (e.g., response explanation, debugging), I believe DashBot can be fine-tuned and fully integrated into API Dash's architecture. This would enable more accurate, context-aware assistance and support personalized, AI-driven workflows using local models—making API Dash a smarter, user-centric tool.
## Project Proposal Information
### 1. Proposal Title
**DashBot - AI-Powered API Assistant for API Dash**
### 2. Abstract
DashBot aims to transform API Dash into an intelligent, AI-driven API exploration and development tool. By integrating advanced AI capabilities, we'll create a comprehensive assistant that helps developers understand, debug, document, and implement APIs more efficiently.
### 3. Detailed Description
- **Problem**: API Dash users manually handle debugging, testing, and documentation, slowing workflows. As an early-stage tool, it lacks AI-driven automation.
- **Project Goals:**
  - Develop an intelligent, modular AI assistant for API interactions
  - Provide context-aware API analysis and support
  - Create a flexible, extensible AI service architecture
  - Enhance developer productivity through intelligent insights
- **Technical Architecture** (core components):
| Service | Key Features | Capabilities |
|---------|--------------|--------------|
| AI Analysis Service | - Semantic API request parsing | - Contextual understanding |
| | - Multi-model AI integration | - Intelligent insights generation |
| Debugging Service | - Advanced error pattern recognition | - Root cause analysis |
| | - Automated fix suggestions | - Performance bottleneck detection |
| Documentation Generator | - Automatic API documentation | - Comprehensive endpoint description |
| | - Example generation | - Interactive documentation support |
| Code Generation Service | - Multi-framework code generation | - Intelligent client code creation |
| | - Framework-specific best practices | - Customizable generation templates |
| Visualization Service | - Interactive response explorers | - API performance charts |
| | - Network flow visualizations | - Data transformation insights |
<img width="1200" alt="Screenshot 2025-03-25 at 10 00 45" src="https://github.com/user-attachments/assets/b12b488b-612d-4ca3-8b8e-be47ba59a123" />
**LLM Provider Management**
- Abstracted LLM provider interface (see the sketch below)
- Multiple provider support:
  - Local Ollama models
  - Cloud AI services (OpenAI, Anthropic, other APIs)
- Dynamic model selection
- Resource-aware model recommendations
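
A minimal Dart sketch of how such an abstracted provider interface might look; all class and method names here are illustrative assumptions, not existing API Dash code:

```dart
/// Illustrative provider abstraction: one interface, interchangeable backends.
abstract class LlmProvider {
  String get name;
  Future<String> complete(String prompt, {String? model});
}

class OllamaProvider implements LlmProvider {
  @override
  String get name => 'ollama';

  @override
  Future<String> complete(String prompt, {String? model}) async {
    // Would POST to a local Ollama endpoint (default http://localhost:11434).
    throw UnimplementedError('Call the local Ollama REST API here.');
  }
}

class CloudProvider implements LlmProvider {
  final String apiKey;
  CloudProvider(this.apiKey);

  @override
  String get name => 'cloud';

  @override
  Future<String> complete(String prompt, {String? model}) async {
    // Would call a hosted service such as OpenAI or Anthropic.
    throw UnimplementedError('Call the configured cloud API here.');
  }
}

/// Dynamic selection: prefer the local model unless a cloud key is configured.
LlmProvider selectProvider({required bool preferLocal, String? cloudApiKey}) {
  if (!preferLocal && cloudApiKey != null) return CloudProvider(cloudApiKey);
  return OllamaProvider();
}
```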
### 4. Weekly Timeline (175 Hours, ~12 Weeks)
| Week | Duration | Focus Area | Key Activities |
|------|----------|------------|----------------|
| 1 | 15h | Bonding & Setup | Project initialization, mentor sync, environment setup |
| 2 | 15h | Beta Polish | Finalize initial features, basic debugging, documentation |
| 3-4 | 30h | Advanced Debugging | Auto-debugging implementation, comprehensive test generation |
| 5-7 | 45h | Visualizations | Plotting system development, response visualizations, customization |
| 8-9 | 30h | Frontend Code | Multi-framework code generation, API testing, response handling |
| 10 | 15h | Local LLM Integration | DashBot local model setup, Ollama integration, model selection |
| 11 | 15h | LLM Enhancements | Computational power optimization, DashBot toggle functionality |
| 11 | 15h | Benchmarks & UI | LLM evaluation, UI improvements, model compatibility testing |
| 12 | 10h | Testing & Wrap-Up | Comprehensive end-to-end testing, documentation finalization |
View File
@ -57,73 +57,318 @@ AI UI Designer for APIs
### Abstract
This project aims to develop an AI-powered assistant within API Dash that automatically generates dynamic user interfaces (UI) based on API responses (JSON/XML). The goal is to allow developers to instantly visualize, customize, and export usable Flutter UI code from raw API data. The generated UI should adapt to the structure of the API response and be interactive, with features like sorting, filtering, and layout tweaking. This tool will streamline frontend prototyping and improve developer productivity.
This project proposes the development of an AI-powered UI generation assistant within the API Dash application. The tool will automatically analyze API responses (primarily in JSON format), infer their structure, and dynamically generate Flutter-based UI components such as tables, forms, or cards. Developers will be able to preview, customize, and export these layouts as usable Dart code. By combining rule-based heuristics with optional LLM (e.g., Ollama, GPT) enhancements, the feature aims to streamline API data visualization and speed up frontend prototyping. The generated UI will be clean, modular, and directly reusable in real-world Flutter applications.
---
### Detailed Description
The AI UI Designer will be a new feature integrated into the API Dash interface, triggered by a button after an API response is received. It will analyze the data and suggest corresponding UI layouts using Dart/Flutter widgets such as `DataTable`, `Card`, or `Form`.
This project introduces a new feature into API Dash: AI UI Designer — an intelligent assistant that takes an API response and converts it into dynamic UI components, allowing developers to quickly visualize, customize, and export frontend code based on live API data. It will analyze the data and suggest corresponding UI layouts using Dart/Flutter widgets such as `DataTable`, `Card`, or `Form`.
#### Step 1: Parse API Response Structure
- Focus initially on JSON (XML can be added later)
- Build a recursive parser to convert the API response into a schema-like tree
- Extract field types, array/object structure, nesting depth
- Identify patterns (e.g., timestamps, prices, lists)
The first step is to understand the structure of the API response, which is usually in JSON format. The goal is to transform the raw response into an intermediate schema that can guide UI generation.
- Most API responses are either:
- Object: A flat or nested key-value map.
- Array of Objects: A list of items, each following a similar structure.
- Understanding the structure allows us to decide:
- What kind of UI component fits best (e.g., table, form, card).
- How many fields to show, and how deep the nesting goes.
- Common field types (string, number, boolean, array, object) impact widget selection.
- Special patterns (e.g., timestamps, emails, URLs) can be detected and used to enhance UI.
##### Implementation Plan
- Start with JSON
- Initially only support JSON input, as it's the most common.
- Use Dart's built-in dart:convert package to parse the response.
- Build a Recursive Schema Parser
- Traverse the JSON response recursively.
- For each node (key), determine:
- Type: string, number, bool, object, array
- Optional metadata (e.g., nullability, format hints)
- Depth and parent-child relationships
- Output a tree-like structure such as:
```json
{
"type": "object",
"fields": [
{"key": "name", "type": "string"},
{"key": "age", "type": "number"},
{"key": "profile", "type": "object", "fields": [...]},
{"key": "posts", "type": "array", "itemType": "object", "fields": [...]}
]
}
```
- Detect Patterns (Optional AI Help)
- Apply heuristics or regex to detect:
- Timestamps: ISO strings, epoch time
- Prices: numeric + currency signs
- Boolean flags: isActive, enabled, etc.
- This helps in choosing smart widgets (e.g., Switch for booleans).
- Create a Schema Class
- Implement a Dart class (e.g., ParsedSchema) to store this structure (a sketch follows at the end of this step).
- This class will be passed into the UI generation logic in Step 2.
- Add Support for Validation
- Check if response is malformed or inconsistent (e.g., arrays with mixed types).
- If invalid, show fallback UI or error.
- Future Scope
- Add XML support by using XML parsers.
- Extend the parser to allow user overrides/custom schema mapping.
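The following is a minimal Dart sketch of the recursive parser and `ParsedSchema` class outlined above, producing the same kind of tree as the JSON example; the exact field names are illustrative, not a finished design:

```dart
import 'dart:convert';

/// Minimal sketch of the ParsedSchema idea from the plan above.
class ParsedSchema {
  final String type;               // object, array, string, number, bool, null
  final String? key;               // field name (null for the root)
  final List<ParsedSchema> fields; // children for objects
  final ParsedSchema? itemSchema;  // element schema for arrays

  ParsedSchema(this.type, {this.key, this.fields = const [], this.itemSchema});
}

ParsedSchema parseSchema(dynamic node, {String? key}) {
  if (node is Map<String, dynamic>) {
    return ParsedSchema('object',
        key: key,
        fields:
            node.entries.map((e) => parseSchema(e.value, key: e.key)).toList());
  }
  if (node is List) {
    return ParsedSchema('array',
        key: key, itemSchema: node.isEmpty ? null : parseSchema(node.first));
  }
  if (node is String) return ParsedSchema('string', key: key);
  if (node is num) return ParsedSchema('number', key: key);
  if (node is bool) return ParsedSchema('bool', key: key);
  return ParsedSchema('null', key: key);
}

void main() {
  final schema =
      parseSchema(jsonDecode('{"name": "Ada", "age": 36, "posts": [{"id": 1}]}'));
  print(schema.fields.map((f) => '${f.key}: ${f.type}').join(', '));
  // name: string, age: number, posts: array
}
```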
#### Step 2: Design AI Agent Logic
- Use a rule-based system to map schema to UI components
- List of objects → Table
- Simple object → Card/Form
- Number over time → Line Chart (optional)
- Integrate LLM backend (e.g., Ollama, GPT API) to enhance:
- Field labeling
- Layout suggestion
- Component naming
This step involves designing the core logic that maps the parsed API response schema to corresponding UI components. The AI agent will follow a hybrid approach: combining rule-based mapping with optional LLM-powered enhancement for smarter UI suggestions.
##### 2.1 Rule-Based Mapping System
To ensure fast and consistent results, we will first implement a simple rule-based system that maps specific JSON structures to Flutter widgets. This allows us to generate a basic layout even in environments where LLMs are not available or desirable.
Example rules:
- If the root is an array of objects → generate a DataTable
- If the object contains mostly key-value pairs → generate a Card or Form
- If fields include timestamps or numeric trends → suggest LineChart
- If keys match common patterns like email, phone, price, etc. → render with appropriate widgets (TextField, Dropdown, Currency formatter)
These mappings will be implemented using Dart classes and can be loaded from a YAML/JSON config file to support extensibility.
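As a concrete illustration of these rules, here is a small Dart sketch that maps the `ParsedSchema` tree from Step 1 to a suggested component type. The `UiComponent` enum and the specific heuristics are illustrative assumptions, not the final rule set:

```dart
/// Illustrative component kinds the rule engine can suggest.
enum UiComponent { dataTable, form, card, lineChart, text }

UiComponent suggestComponent(ParsedSchema schema) {
  // Rule: root is an array of objects -> table.
  if (schema.type == 'array' && schema.itemSchema?.type == 'object') {
    return UiComponent.dataTable;
  }
  if (schema.type == 'object') {
    // Rough heuristic: numeric fields named like timestamps hint at a trend chart.
    final hasTimeSeries = schema.fields
        .any((f) => f.type == 'number' && (f.key ?? '').contains('_at'));
    if (hasTimeSeries) return UiComponent.lineChart;

    // Rule: mostly primitive key-value pairs -> card; otherwise a form.
    final primitives = schema.fields
        .where((f) => f.type != 'object' && f.type != 'array')
        .length;
    return primitives >= schema.fields.length / 2
        ? UiComponent.card
        : UiComponent.form;
  }
  return UiComponent.text;
}
```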
##### 2.2 LLM-Powered Enhancements
To go beyond static rules and provide smarter UI suggestions, we will integrate an LLM (e.g., Ollama locally or GPT via API). The LLM will receive the parsed schema and be prompted to:
- Suggest the layout structure (vertical list, tabs, grouped cards, etc.)
- Label fields more intuitively (e.g., product_id → "Product ID")
- Reorder fields based on usage context
- Suggest default values, placeholder text, or icons
Prompt Example:
```json
{
"task": "Generate UI plan for API response",
"schema": {
"type": "object",
"fields": [
{"name": "username", "type": "string"},
{"name": "email", "type": "string"},
{"name": "created_at", "type": "timestamp"}
]
}
}
```
Expected LLM output:
```json
{
"layout": "vertical_card",
"fields": [
{"label": "Username", "widget": "TextField"},
{"label": "Email", "widget": "TextField"},
{"label": "Signup Date", "widget": "DateDisplay"}
]
}
```
##### 2.3 Fallback and Configuration
- If the LLM call fails or is disabled (e.g., offline use), the system falls back to rule-based logic (see the sketch below).
- The user can toggle LLM mode in settings.
- The response from LLM will be cached for repeat inputs to reduce latency and cost.
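A compact sketch of this fallback-and-cache behaviour, reusing the `ParsedSchema`, `UiComponent`, and `suggestComponent` names from the earlier sketches (all of them illustrative):

```dart
/// Signature of an optional LLM-backed layout planner (illustrative).
typedef LlmLayoutFn = Future<UiComponent> Function(String schemaJson);

/// Simple in-memory cache keyed by the serialized schema.
final Map<String, UiComponent> _layoutCache = {};

Future<UiComponent> planLayout(ParsedSchema schema, String schemaJson,
    {LlmLayoutFn? llm}) async {
  final cached = _layoutCache[schemaJson];
  if (cached != null) return cached;

  UiComponent plan;
  try {
    // Use the LLM when enabled; otherwise (or on failure) fall back to rules.
    plan = llm != null ? await llm(schemaJson) : suggestComponent(schema);
  } catch (_) {
    plan = suggestComponent(schema);
  }
  return _layoutCache[schemaJson] = plan;
}
```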
##### 2.4 Customization Layer (Optional)
After layout generation, users will be able to:
- Preview different layout suggestions (from rule-based vs. LLM)
- Select a layout and make field-level changes (hide/show, rename, rearrange)
- Submit feedback for improving future suggestions (optional)
#### Step 3: Generate and Render UI in Flutter
- Dynamically generate:
  - `DataTable`, `Card`, `TextField`, `Dropdown`, etc.
  - Optional chart widgets (e.g., `fl_chart`)
- Support:
  - Layout rearrangement (form-based or drag-drop)
  - Field visibility toggles
  - Previewing the final UI
Once the layout plan is decided (via rule-based mapping or LLM suggestion), the system will dynamically generate corresponding Flutter widgets based on the API response structure and content types.
##### 3.1 Widget Mapping and Construction
- For each field or group in the parsed schema, we map it to a predefined Flutter widget. Example mappings:
- List of Objects → DataTable
- Simple key-value object → Card, Column with Text widgets
- String fields → TextField (if editable), or SelectableText
- Number series over time → Line chart (e.g., using fl_chart package)
- The widget structure will be built using standard Dart code with StatefulWidget or StatelessWidget, depending on interactivity.
Implementation Plan:
- Create a WidgetFactory class that receives a layout plan and schema, and returns a Widget tree (see the sketch below).
- This factory will follow a clean design pattern to make it testable and modular.
- Use Flutter's json_serializable or custom classes to deserialize API responses into displayable values.
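A simplified sketch of such a `WidgetFactory`, covering only the table and card mappings; the class shape and mappings are assumptions for illustration, not the final design:

```dart
import 'package:flutter/material.dart';

/// Illustrative factory: turns a suggested component and decoded response data
/// into a widget tree. Only two mappings are shown.
class WidgetFactory {
  Widget build(UiComponent component, dynamic data) {
    switch (component) {
      case UiComponent.dataTable:
        final rows = (data as List).cast<Map<String, dynamic>>();
        if (rows.isEmpty) return const Text('No data');
        final columns = rows.first.keys.toList();
        return DataTable(
          columns: [for (final c in columns) DataColumn(label: Text(c))],
          rows: [
            for (final row in rows)
              DataRow(cells: [
                for (final c in columns) DataCell(Text('${row[c]}')),
              ]),
          ],
        );
      case UiComponent.card:
        final map = data as Map<String, dynamic>;
        return Card(
          child: Column(
            mainAxisSize: MainAxisSize.min,
            children: [
              for (final e in map.entries)
                ListTile(title: Text(e.key), subtitle: Text('${e.value}')),
            ],
          ),
        );
      default:
        return Text('$data');
    }
  }
}
```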
##### 3.2 Dynamic Rendering in the App
- The generated widget tree will be rendered in a dedicated “AI UI Preview” pane inside API Dash.
- The rendering will be fully dynamic: when the schema or layout changes, the UI preview updates in real time.
- This pane will support:
- Light customization like toggling fields, reordering, hiding/showing
- Live data preview using the actual API response
Technical Flow:
- When user clicks "AI UI Designer", a modal or new route opens with the UI preview panel.
- This panel will:
- Show the raw schema & layout (editable if needed)
- Render the widget tree using Flutter's widget system
- Any user adjustments will re-trigger the widget regeneration and re-render.
##### 3.3 Preview and Debugging Tools
- Add a “Developer Mode” that shows:
- Schema tree
- Widget mapping details
- Generated Dart code (read-only)
- This helps with debugging and refining layout logic.
##### 3.4 Scalability Considerations
- To keep UI rendering responsive:
- Use lazy-loading for large JSON arrays (e.g., scrollable tables)
- Avoid deep nesting: limit UI depth or use ExpansionTile for hierarchical views
- Support pagination if list is too long
By the end of this step, users should be able to preview their API response as a fully functional, dynamic UI inside API Dash — without writing a single line of Flutter code.
#### Step 4: Export UI Code
- Export generated layout as Dart code
- Allow download or copy-to-clipboard
- Support JSON config export (optional for renderer-based architecture)
Once the user is satisfied with the generated and customized UI layout, the tool should allow them to export the UI as usable Flutter code, so it can be directly reused in their own projects. This step focuses on transforming the dynamic widget tree into clean, readable Dart code and offering convenient export options.
##### 4.1 Code Generation Pipeline
To generate Flutter code dynamically, we will:
- Traverse the internal widget tree (from Step 3)
- For each widget, generate corresponding Dart code using string templates
- Example: a DataTable widget will generate its DataTable constructor and children rows
- Use indentation and formatting to ensure readability
Implementation Plan:
- Create a CodeGenerator class responsible for converting widget definitions into raw Dart code strings (see the sketch below).
- Use prebuilt templates for common components: Card, Column, DataTable, etc.
- Handle nested widgets recursively to maintain structure.
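A minimal sketch of the template-based `CodeGenerator` idea, emitting a single generated card widget as a Dart source string; the class name and templates are illustrative:

```dart
/// Illustrative generator: walks a tiny layout plan (here just a sample map)
/// and emits Dart source using simple string templates.
class CodeGenerator {
  String generateCard(Map<String, dynamic> sample) {
    final tiles = sample.entries
        .map((e) =>
            "        ListTile(title: Text('${e.key}'), subtitle: Text('${e.value}')),")
        .join('\n');
    return '''
// This widget was generated from an API response.
import 'package:flutter/material.dart';

class GeneratedCard extends StatelessWidget {
  const GeneratedCard({super.key});

  @override
  Widget build(BuildContext context) {
    return Card(
      child: Column(
        mainAxisSize: MainAxisSize.min,
        children: [
$tiles
        ],
      ),
    );
  }
}
''';
  }
}
```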
##### 4.2 Export Formats
We will support two export options:
1. Raw Dart Code Export
- Output the generated Dart code into a text area or preview pane
- Allow users to:
- Copy to clipboard
- Download as .dart file
- Highlight syntax for better UX (using a package like highlight)
2. Optional JSON Layout Export
- If we implement a config-driven rendering architecture, offer an export of the layout plan/schema as JSON
- Useful for re-importing or using with a visual UI builder
##### 4.3 Integration into API Dash
- Add an "Export" button below the UI preview pane
- When clicked, the generated code will be shown in a modal or new tab
- Provide one-click buttons:
- "Copy Code"
- "Download Dart File"
- (Optional) "Download Layout JSON"
##### 4.4 Reusability and Developer Focus
- Ensure that the exported code:
- Is clean and idiomatic Dart
- Can be copied directly into any Flutter project with minimal edits
- Includes basic import statements and class wrappers if needed
- Add helpful comments in the generated code (e.g., // This widget was generated from API response)
##### 4.5 Challenges and Considerations
- Ensuring valid syntax across nested widgets
- Handling edge cases (e.g., empty fields, null values)
- Optionally, offer theming/styling presets to match user preferences
By the end of this step, users can instantly turn live API data into production-ready Flutter UI code, significantly reducing time spent on repetitive frontend scaffolding.
#### Step 5: Integrate into API Dash
- Add AI UI Designer button in the API response view
- Launch UI editing pane inside app
- Ensure local-only, privacy-friendly execution
- Write tests, docs, and polish UX
The final step is to fully integrate the AI UI Designer into the API Dash application, so that users can seamlessly trigger UI generation from real API responses and interact with the entire pipeline — from data to UI preview to export — within the app.
##### 5.1 Entry Point in UI
We will add a new button or menu entry labeled “AI UI Designer” within the API response tab (or near the response preview area).
- When a user executes an API call and gets a JSON response:
- A floating action button or contextual menu becomes available
- Clicking it opens the AI UI Designer pane
Implementation Plan:
- Extend the existing response panel UI to include a trigger button
- Use a showModalBottomSheet() or a full-screen route to launch the designer
##### 5.2 Internal Architecture and Flow
The full integration involves multiple coordinated modules:
- Trigger UI → (Button click)
- JSON Parser Module (from Step 1) → Convert API response to schema
- Mapping Logic (Step 2) → Rule-based and/or LLM-assisted UI mapping
- Widget Tree Builder (Step 3) → Build live widget layout
- Preview + Export UI (Step 4) → Let users customize and extract code
Each module will be built as a reusable Dart service/class, and all UI logic stays within the API Dash UI tree.
We'll keep the architecture modular so the designer logic is isolated and testable.
##### 5.3 Offline / Privacy-Friendly Support
Since API Dash is a privacy-first local client, the AI agent should work entirely offline by default, using lightweight local models served through Ollama.
- If a user prefers using OpenAI or Anthropic APIs, provide optional settings to configure remote endpoints
- Set Ollama as the default backend, and wrap LLM logic inside a service with interchangeable backends (see the sketch below)
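
A small sketch of what such a backend configuration could look like, assuming a hypothetical `UiDesignerLlmConfig` class and Ollama's default local port (11434); the names and fields are illustrative only:

```dart
/// Illustrative backend choices for the UI Designer's LLM service.
enum LlmBackend { ollama, remote, disabled }

class UiDesignerLlmConfig {
  final LlmBackend backend;
  final Uri endpoint;
  final String? apiKey;

  const UiDesignerLlmConfig._(this.backend, this.endpoint, this.apiKey);

  /// Privacy-friendly default: everything stays on the local machine.
  factory UiDesignerLlmConfig.localDefault() => UiDesignerLlmConfig._(
      LlmBackend.ollama, Uri.parse('http://localhost:11434'), null);

  /// Opt-in remote backend (e.g. OpenAI/Anthropic) configured from settings.
  factory UiDesignerLlmConfig.remote(Uri endpoint, String apiKey) =>
      UiDesignerLlmConfig._(LlmBackend.remote, endpoint, apiKey);
}
```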
##### 5.4 User Flow Example
- User sends API request in API Dash
- JSON response is shown
- User clicks “AI UI Designer” button
- The parsed structure is shown with layout suggestions
- User can preview UI, rearrange components, and customize styles
- Once satisfied, user clicks “Export”
- Dart code is generated and available to copy/download
##### 5.5 Tests, Documentation & Maintenance
- Add integration tests to validate:
- Triggering and rendering behavior
- Correct widget tree output
- Export function accuracy
- Document:
- Each module (parsing, mapping, UI rendering, export)
- Developer usage guide (in docs/)
- Ensure all new code follows API Dash's contribution style and linting rules
By integrating into API Dash cleanly and modularly, this feature becomes a native part of the developer workflow — helping users transform any API into usable UI in seconds, without leaving the app.
---
## Weekly Timeline (Tentative)
| Week | Milestone |
|------|-----------|
| Community Bonding | Join Discord, interact with mentors, finalize approach, get feedback |
| Week 1-2 | Build and test JSON parser → generate basic schema |
| Week 3-4 | Implement rule-based UI mapper; generate simple widgets |
| Week 5-6 | Integrate initial Flutter component generator; allow basic UI previews |
| Week 7 | Midterm Evaluation |
| Week 8-9 | Add customization options (visibility, layout) |
| Week 10 | Integrate AI backend (e.g., Ollama/GPT) for suggestions |
| Week 11-12 | Add export functions (code, JSON config) |
| Week 13 | Final polish, tests, docs |
| Week 14 | Final Evaluation, feedback, and delivery |
---
| Week | Milestone |
|---------------|---------------------------------------------------------------------------------------------|
| Community Bonding | Join Discord, introduce myself, understand API Dash architecture, finalize scope with mentors |
| Week 1 | Build recursive parser for JSON responses; test on static examples; output schema trees |
| Week 2 | Extend parser to handle nested objects, arrays, and basic pattern recognition (e.g., timestamps) |
| Week 3 | Implement rule-based schema-to-widget mapper; define mapping logic for tables, cards, forms |
| Week 4 | Design widget data model and logic for translating schema into Flutter widget trees |
| Week 5 | Develop dynamic Flutter widget generator; render `DataTable`, `Card`, `TextField`, etc. |
| Week 6 | Build basic UI preview pane inside API Dash with user interaction support (e.g., toggles) |
| Week 7 (Midterm Evaluation) | Submit code with parser + rule-based mapping + preview UI; receive mentor feedback |
| Week 8 | Add layout customization features: visibility toggles, reordering, field labels |
| Week 9 | Integrate basic Ollama-based LLM agent for field naming & layout suggestion |
| Week 10 | Abstract LLM backend to support GPT/Anthropic alternatives via API config |
| Week 11 | Implement code export: generate Dart source code, copy-to-clipboard & download options |
| Week 12 | Optional: add JSON config export; polish UX and improve error handling |
| Week 13 | Write documentation, developer setup guide, internal tests for each module |
| Week 14 (Final Evaluation) | Final review, cleanup, feedback response, and submission |
Thanks again for your time and guidance. I've already started studying the API Dash codebase and developer guide, and I'd love your feedback on this plan. Does it align with your vision?
If selected, I'm excited to implement this project. If this idea is already taken, I'm open to switching to another API Dash project that fits my background.
View File
@ -0,0 +1,45 @@
### Initial Idea Submission
Full Name: Syed Abdullah
University name: University of Engineering and Technology, Taxila
Program you are enrolled in (Degree & Major/Minor): Graduate Software Engineer
Expected graduation date: I am an early stage developer and new to open source.
Project Title: AI-Powered Dynamic UI Generator from API Responses
Relevant issues: #617
## About me
Hi, I'm Syed Abdullah, a passionate Software Engineer with over 2 years of experience building scalable and modern software solutions.
I'm a full-stack developer comfortable working across both frontend and backend, using C#, .NET Core, React, Flutter, and more. I'm also a beginner-level open-source contributor, continuously learning and giving back to the community.
Idea description:
The goal of this project is to enhance API Dash by developing an AI-driven agent that automatically transforms API responses (e.g., JSON, XML) into intuitive, dynamic UI components like tables, cards, charts, and forms.
This eliminates the manual process of UI creation and helps developers interact with and visualize data effortlessly.
#### Key Features:
- Parse and understand API response structures in real-time.
- Automatically generate a corresponding UI schema/model (component layout).
- Render live previews of the generated UI in the app.
- Support customization: layout templates, filters, pagination, sorting, styles.
- Export the UI code (Flutter widgets or HTML/CSS snippets) for integration in web or mobile apps.
- Extensible system with support for plugins or future rendering engines (React, Vue, etc.).
#### Approach:
1. **Phase 1** - Build a response parser module that:
- Parses JSON/XML structures.
- Outputs a layout schema representing UI components.
2. **Phase 2** - Implement a dynamic UI renderer:
- Converts the layout schema into interactive Flutter or Web UI.
- Allows live preview inside API Dash.
3. **Phase 3** - Add customization tools:
- Enable field selection, styling options, responsive layouts.
- Add filtering/sorting controls in tables, date pickers, etc.
4. **Phase 4** - Code export and integration:
- One-click export to Flutter widgets or reusable HTML/CSS components.
- Optionally support importing layout templates.
This is my initial idea. Kindly give me feedback on it and let me know whether I should move forward with the POC. Looking forward to your thoughts. Thanks!
View File
@ -0,0 +1,54 @@
# API Explorer Wireframe
## 📌 Overview
This document presents the wireframe design for the **API Explorer** feature in API Dash. The API Explorer will allow users to:
- **Discover public APIs** across various categories.
- **View API details**, including authentication methods and sample requests.
- **Import APIs into their workspace** for seamless testing.
---
## 🎨 Wireframe Design
The wireframe includes three main sections:
### **1⃣ Homepage (API Listing Page)**
- **🔍 Search Bar**: Users can search for APIs.
- **📂 Category Filters**: AI, Finance, Weather, etc., to filter APIs.
- **📌 API Cards**: Displays API name, short description, category, and a "View Details" button.
- **➡️ Navigation**: Clicking “View Details” opens the API Details Page.
### **2⃣ API Details Page**
- **📌 API Name & Description**
- **🔑 Authentication Info** (API key required or not).
- **📂 API Endpoints & Sample Requests**
- **📋 "Copy API Key" Button**
- **📥 "Import API to Workspace" Button**
### **3⃣ Sidebar (Optional)**
- **📂 Saved APIs List** (Previously imported APIs).
- **⭐ Ratings & Reviews Section** (User feedback if implemented).
---
## 🖼️ Wireframe Link
🔗 **View the wireframe on Excalidraw**:
[API Explorer Wireframe](https://excalidraw.com/#json=71K2EyrjsTEv1HXRMTRqB,iw86qFoQz9coZwkuAcXPUQ)
![Wireframe Preview](images/overview-api-explorer.png)
---
## 🚀 Next Steps
1. **Review the wireframe and suggest changes (if any).**
2. Once approved, start coding the **frontend UI** (homepage, details page, sidebar).
Looking forward to feedback! 🔥
View File
@ -0,0 +1,102 @@
### Initial Idea Submission
Full Name: Sabith Fulail
University name: Informatics Institute of Technology (IIT | Colombo, Sri Lanka)
Program you are enrolled in (Degree & Major/Minor): BSc (Hons) Computer Science (Data Science)
Year: 3rd Year
Expected graduation date: May, 2026
Project Title: Adding Support for API Authentication Methods
Relevant issues:
- [#557](https://github.com/foss42/apidash/issues/557) Pre-request and post-request scripts
- [#121](https://github.com/foss42/apidash/issues/121) Importing from/Exporting to OpenAPI/Swagger specification
- [#337](https://github.com/foss42/apidash/issues/337) Support for application/x-www-form-urlencoded
- [#352](https://github.com/foss42/apidash/issues/352) Support file as request body
- [#22](https://github.com/foss42/apidash/issues/22) JSON body syntax highlighting, beautification, validation
- [#581](https://github.com/foss42/apidash/issues/581) Beautify JSON request body (Closed)
- [#582](https://github.com/foss42/apidash/issues/582) Syntax highlighting for JSON request body (Closed)
- [#583](https://github.com/foss42/apidash/issues/583) Validation for JSON request body
- [#590](https://github.com/foss42/apidash/issues/590) Environment variable support in request body
- [#591](https://github.com/foss42/apidash/issues/591) Environment variable support for text request body
- [#592](https://github.com/foss42/apidash/issues/592) Environment variable support for JSON request body
- [#593](https://github.com/foss42/apidash/issues/593) Environment variable support for form request body
- [#599](https://github.com/foss42/apidash/issues/599) Support for comments in JSON request body
- [#600](https://github.com/foss42/apidash/issues/600) Reading environment variables from OS environment
- [#601](https://github.com/foss42/apidash/issues/601) Adding color support for environments
- [#373](https://github.com/foss42/apidash/issues/373) In-app update notifications
Idea description:
This project will streamline API testing in API Dash by introducing pre/post-request scripting, robust OpenAPI/Swagger interoperability,
and enhanced JSON/GraphQL editing. These changes will reduce manual effort in API debugging and improve workflow efficiency.
**Implementation Plan**

**Phase 1: Research & Planning (Weeks 1-2)**
- Study the existing API Dash architecture and feature requests.
- Prioritize features based on complexity and impact.
- Research best practices for JSON syntax validation, GraphQL handling, and API import/export.

**Phase 2: Core Feature Development (Weeks 3-10)**
1. Pre-Request & Post-Request Scripts (#557)
   - Enable users to modify requests and responses dynamically before sending.
   - This includes automating tasks such as adding authentication tokens, handling environment variables, chaining API requests, and transforming request/response data.
2. OpenAPI/Swagger Import & Export (#121)
   - Allow importing API requests from OpenAPI/Swagger JSON/YAML files.
   - Implement API export functionality to generate valid OpenAPI specifications.
3. JSON Body Enhancements (#22)
   - Add syntax highlighting, beautification, and validation for JSON request bodies (see the sketch after this implementation plan).
   - Provide auto-formatting and error detection for malformed JSON.
4. GraphQL Editor Improvements
   - Add an expand/collapse feature for GraphQL queries.
   - Implement support for GraphQL fragments, mutations, and subscriptions.
   - Improve GraphQL schema inspection.
5. Support for More Content Types (#337)
   - Add support for application/x-www-form-urlencoded and file upload as request body.

**Phase 3: Enhancements & Testing (Weeks 11-14)**
6. Environment Variable & UI Improvements (#600, #601)
   - Allow reading OS environment variables directly.
   - Introduce color-coded environments (e.g., RED for Prod, GREEN for Dev).
7. In-App Update Notifications (#373)
   - Notify users when a new version of API Dash is available.
   - Provide an update button to quickly navigate to the latest release.
8. Increase Test Coverage
   - Write more widget & integration tests to improve code coverage.
   - Ensure major UI and backend features are fully tested before release.
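
As a concrete illustration of the JSON body enhancements in Phase 2 (item 3), here is a minimal sketch using only `dart:convert`; the function name is illustrative and not part of the current codebase:

```dart
import 'dart:convert';

/// Pretty-prints a JSON request body, or returns an error message when the
/// body is malformed (to be surfaced in the editor UI).
String beautifyOrReportJson(String raw) {
  try {
    final decoded = jsonDecode(raw);
    return const JsonEncoder.withIndent('  ').convert(decoded);
  } on FormatException catch (e) {
    // FormatException carries the parser message (and offset) for error display.
    return 'Invalid JSON: ${e.message}';
  }
}

void main() {
  print(beautifyOrReportJson('{"a": 1, "b": [true, null]}'));
}
```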
**Tech Stack & Tools**

| Feature | Tech/Tools |
|---------|-----------|
| Frontend | Flutter (Dart) |
| API Parsing | OpenAPI, Swagger |
| JSON Enhancements | CodeMirror, Ace Editor |
| GraphQL | GraphQL Parser (Dart) |
| Testing | Widget Testing, Integration Testing |
| Environment Handling | OS Environment Variables (Dart) |
**Why This Project?**
- Enhances Developer Productivity - Improves usability with better request handling, scripting, and JSON validation.
- Better GraphQL Support - Adds crucial missing features to enhance GraphQL development.
- Improves API Import/Export - Makes API Dash more interoperable with OpenAPI and Swagger.
- Strengthens Stability & Testing - Increases test coverage and enhances debugging efficiency.

These improvements will help make API Dash more competitive with other API tools by adding support for advanced use cases such as authentication management, JSON syntax validation, and seamless GraphQL integration.

**Future Scope**
- Implement gRPC support to expand API Dash's capabilities.
- Improve UI/UX for better user experience.
- Add VS Code & JetBrains integration for a seamless developer workflow.
This project will provide meaningful improvements to API Dash and enhance the overall user experience.
I am excited to work on these features and contribute to making API Dash a more powerful tool!
View File
@ -0,0 +1,200 @@
1. Full Name: MD. Rafsanul Islam Neloy
2. Email: rafsanneloy@gmail.com
3. Phone: +880 1325161428
4. Discord handle: rafsanneloy (756821234259460157)
5. GitHub: https://github.com/RafsanNeloy
6. LinkedIn: https://www.linkedin.com/in/md-rafsanul-neloy
7. Time zone: GMT +6 (Bangladesh)
8. Resume: https://drive.google.com/file/d/1_7YC1meQ0juyK80Bvp4A_9bmbfKqZcB7/view?usp=drive_link
### University Info
1. University name: Ahsanullah University Of Science & Technology
2. Program you are enrolled in (Degree & Major/Minor): B.Sc in CSE
3. Year: 4 (Final Year)
4. Expected graduation date: 14/05/2026
### Motivation & Past Experience
### **Short Answers:**
1. **Have you worked on or contributed to a FOSS project before?**
No, I haven't contributed to a FOSS project before, but I'm eager to start with this GSoC opportunity.
2. **What is your one project/achievement that you are most proud of? Why?**
One of my proudest achievements is developing an **Angry Birds game using iGraphics**. This project pushed me to deeply understand physics-based simulations, collision detection, and game mechanics. It was particularly rewarding because I had to optimize performance while maintaining smooth gameplay, and it solidified my problem-solving skills in real-time rendering.
3. **What kind of problems or challenges motivate you the most to solve them?**
I am most motivated by challenges that involve **performance optimization, real-time data processing, and system scalability**. Whether it's reducing execution time, handling large-scale data efficiently, or ensuring seamless communication in distributed systems, I find solving such problems both intellectually stimulating and rewarding.
4. **Will you be working on GSoC full-time?**
Yes, I plan to dedicate my full time to GSoC. I want to immerse myself in the project, actively contribute to discussions, and ensure high-quality deliverables.
5. **Do you mind regularly syncing up with the project mentors?**
Not at all! Regular sync-ups are essential for feedback and guidance. I believe structured discussions will help me align with project expectations, identify potential roadblocks early, and ensure smooth progress.
6. **What interests you the most about API Dash?**
What excites me the most about API Dash is its **cross-platform support and extensibility**. The idea of having a unified API testing tool that supports multiple protocols across desktop and mobile platforms is fascinating. Additionally, the opportunity to work on **real-time protocols like WebSocket, SSE, MQTT, and gRPC** aligns perfectly with my interests in high-performance systems.
7. **Can you mention some areas where the project can be improved?**
- **Real-time Collaboration:** Allow users to share and test APIs collaboratively in real time.
- **Performance Benchmarking:** Add API request performance insights, such as latency breakdowns and server response analytics.
- **Protocol-Specific Debugging Tools:** Enhance error reporting with detailed logs and debugging suggestions for WebSocket, SSE, MQTT, and gRPC failures.
- **Mobile UI Optimization:** Improve API Dashs UX on mobile devices, ensuring a seamless experience on touch interfaces.
These improvements can make API Dash an even more powerful tool for developers working on modern applications! 🚀
### Key Points
- It seems likely that adding support for WebSocket, SSE, MQTT, and gRPC in API Dash will enhance its capabilities for real-time and high-performance API testing.
- The project involves designing the core library architecture, understanding protocol specifications, and implementing testing, visualization, and code generation features.
- Research suggests that this will benefit developers working on modern applications, especially in web, IoT, and microservices, by providing a unified tool.
---
### Introduction to API Dash and Project Scope
API Dash is an open-source, cross-platform API client built with Flutter, supporting macOS, Windows, Linux, Android, and iOS. It currently allows developers to create, customize, and test HTTP and GraphQL API requests, with features like response visualization and code generation in multiple programming languages. This project aims to extend API Dash by adding support for testing, visualization, and integration code generation for WebSocket, Server-Sent Events (SSE), Message Queuing Telemetry Transport (MQTT), and gRPC protocols.
These protocols are crucial for real-time communication and efficient data exchange, used in applications ranging from web and mobile to Internet of Things (IoT) devices and microservices. By integrating these, API Dash will become a more versatile tool, catering to a broader range of developer needs.
### Project Details and Implementation
The project involves several key steps:
- **Research and Specification Analysis**: Understand the specifications of WebSocket, SSE, MQTT, and gRPC to ensure correct implementation of their communication patterns.
- **Architecture Design**: Design the core library to integrate these protocols, ensuring modularity and compatibility with existing features.
- **Implementation**: Develop protocol handlers using Dart libraries (e.g., `web_socket_channel` for WebSocket, `mqtt_client` for MQTT, `grpc` for gRPC), create user interfaces with Flutter, and extend visualization and code generation features.
- **Testing and Validation**: Write unit and integration tests, test with real-world scenarios, and gather community feedback.
- **Documentation**: Update API Dash documentation with guides and examples for the new protocols.
Each protocol will have specific features:
- **WebSocket**: Support connection establishment, sending/receiving text and binary messages, and real-time visualization.
- **SSE**: Enable connecting to endpoints, displaying incoming events with data and type, and handling automatic reconnection.
- **MQTT**: Allow connecting to brokers, subscribing/publishing to topics, and managing QoS levels and connection status.
- **gRPC**: Import .proto files, select services/methods, input parameters, and display responses, initially focusing on unary calls with potential for streaming.
### Expected Outcomes and Benefits
Upon completion, API Dash will offer full support for testing these protocols, intuitive user interfaces, advanced visualization tools, and code generation in languages like JavaScript, Python, and Java. This will benefit developers by providing a unified tool for diverse API interactions, enhancing productivity and application quality, especially for real-time and high-performance systems.
An unexpected detail is that the project will also involve ensuring cross-platform compatibility, which is crucial for mobile and desktop users, potentially expanding API Dash's user base.
---
### Survey Note: Detailed Analysis of API Testing Support Expansion in API Dash
This note provides a comprehensive analysis of the proposed project to extend API Dash, an open-source API client built with Flutter, by adding support for WebSocket, Server-Sent Events (SSE), Message Queuing Telemetry Transport (MQTT), and gRPC protocols. The project aims to enhance testing, visualization, and integration code generation capabilities, catering to modern application development needs.
#### Background and Context
API Dash, available at [GitHub Repository](https://github.com/foss42/apidash), is designed for cross-platform use, supporting macOS, Windows, Linux, Android, and iOS. It currently facilitates HTTP and GraphQL API testing, with features like response visualization and code generation in languages such as JavaScript, Python, and Java. The project idea, discussed at [API Dash Discussions](https://github.com/foss42/apidash/discussions/565/), addresses the need to support additional protocols essential for real-time communication and high-performance systems, as outlined in related issues: [#15](https://github.com/foss42/apidash/issues/15), [#115](https://github.com/foss42/apidash/issues/115), [#116](https://github.com/foss42/apidash/issues/116), and [#14](https://github.com/foss42/apidash/issues/14).
The protocols in focus—WebSocket, SSE, MQTT, and gRPC—serve diverse purposes:
- **WebSocket** enables full-duplex communication over a single TCP connection, ideal for real-time web applications like chat and live updates.
- **SSE** is a server-push technology for unidirectional updates from server to client, suitable for live data feeds.
- **MQTT**, a lightweight messaging protocol, is designed for IoT devices, supporting publish-subscribe messaging.
- **gRPC**, using HTTP/2 and Protocol Buffers, facilitates high-performance RPC calls with features like bi-directional streaming and load balancing.
This expansion aligns with the growing demand for tools supporting real-time and IoT applications, positioning API Dash as a comprehensive solution.
#### Project Objectives and Scope
The primary objectives include:
1. **Protocol Support Implementation**: Develop modules to handle WebSocket, SSE, MQTT, and gRPC, ensuring compliance with their specifications.
2. **User Interface Enhancements**: Design intuitive UIs for each protocol, maintaining consistency with API Dash's existing design, and supporting features like connection management and message handling.
3. **Visualization Tools**: Create components for displaying requests, responses, and events, with features like syntax highlighting and real-time updates.
4. **Code Generation**: Extend the existing code generation functionality to support these protocols in multiple programming languages, ensuring accuracy and efficiency.
5. **Documentation and Testing**: Provide comprehensive documentation and implement thorough testing to ensure reliability.
The project, with a difficulty rated as medium-high and requiring skills in understanding specs/protocols, UX design, Dart, and Flutter, is estimated at 350 hours, as per the project idea table:
| Feature | Details |
|---------|---------|
| API Types Supported | HTTP (✅), GraphQL (✅), SSE (#116), WebSocket (#15), MQTT (#115), gRPC (#14) |
| Import Collection From | Postman (✅), cURL (✅), Insomnia (✅), OpenAPI (#121), hurl (#123), HAR (#122) |
| Code Generation Languages/Libraries | cURL, HAR, C (libcurl), C# (HttpClient, RestSharp), Dart (http, dio), Go (net/http), JavaScript (axios, fetch, node.js axios, node.js fetch), Java (asynchttpclient, HttpClient, okhttp3, Unirest), Julia (HTTP), Kotlin (okhttp3), PHP (curl, guzzle, HTTPlug), Python (requests, http.client), Ruby (faraday, net/http), Rust (hyper, reqwest, ureq, Actix Client), Swift (URLSession) |
| MIME Types for Response Preview | PDF (application/pdf), Various Videos (video/mp4, video/webm, etc.), Images (image/apng, image/avif, etc.), Audio (audio/flac, audio/mpeg, etc.), CSV (text/csv), Syntax Highlighted (application/json, application/xml, etc.) |
| Download Links | iOS/iPad: [App Store](https://apps.apple.com/us/app/api-dash-api-client-testing/id6711353348), macOS: [Release](https://github.com/foss42/apidash/releases/latest/download/apidash-macos.dmg), Windows: [Release](https://github.com/foss42/apidash/releases/latest/download/apidash-windows-x86_64.exe), Linux (deb, rpm, PKGBUILD): [Installation Guide](https://github.com/foss42/apidash/blob/main/INSTALLATION.md) |
#### Methodology and Implementation Details
The implementation will proceed in phases:
1. **Research and Specification Analysis**: Analyze the specifications of each protocol to understand communication models. For instance, WebSocket uses a single TCP connection for full-duplex communication, while gRPC leverages HTTP/2 and Protocol Buffers for RPC calls.
2. **Architecture Design**: Design the core library to integrate new protocols, ensuring modularity. This involves creating interfaces for protocol handlers and ensuring compatibility with Flutter's cross-platform nature.
3. **Implementation**: Use established Dart packages for efficiency:
- WebSocket: Leverage `web_socket_channel` for connection and message handling (see the sketch after this list).
- SSE: Utilize the `http` package for HTTP-based event streaming.
- MQTT: Use `mqtt_client` for broker connections and publish-subscribe functionality.
- gRPC: Employ the `grpc` package, handling .proto file parsing and method calls.
Develop Flutter UIs for each protocol, ensuring responsiveness across platforms, including mobile devices.
4. **Testing and Validation**: Write unit tests for protocol handlers and integration tests for UI interactions. Test with sample APIs and real-world scenarios, such as connecting to public MQTT brokers or gRPC services.
5. **Documentation**: Update the documentation at [GitHub Repository](https://github.com/foss42/apidash) with guides, including examples for connecting to WebSocket endpoints or calling gRPC methods.
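A minimal sketch of the WebSocket piece of this plan, using the `web_socket_channel` package named above; the public echo server URL is just an example:

```dart
import 'package:web_socket_channel/web_socket_channel.dart';

void main() {
  // Open a connection to a WebSocket endpoint (example echo server).
  final channel = WebSocketChannel.connect(
    Uri.parse('wss://echo.websocket.events'),
  );

  // Incoming frames would be surfaced to the UI with a timestamp.
  channel.stream.listen(
    (message) => print('[${DateTime.now()}] received: $message'),
    onError: (error) => print('connection error: $error'),
    onDone: () => print('connection closed'),
  );

  // Outgoing messages go through the sink.
  channel.sink.add('hello from API Dash');
}
```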
| **Type of Test** | **Description** | **Examples for Protocols** |
|-----------------------------|-------------------------------------------------------------------------------|-----------------------------------------------------------------|
| **Unit Tests** | Test individual components in isolation to verify functionality. | - Verify WebSocket message encoding/decoding.<br>- Test gRPC .proto file parsing.<br>- Check MQTT QoS level handling. |
| **Widget Tests** | Validate UI components to ensure user interactions work as expected. | - Test WebSocket URL input field.<br>- Verify SSE event display.<br>- Check gRPC method selection UI. |
| **Integration Tests** | Ensure all components work together for the complete feature flow. | - Test connecting to a WebSocket server, sending a message, and receiving a response.<br>- Verify MQTT subscribe/publish flow.<br>- Validate gRPC unary call end-to-end. |
| **Code Generation Tests** | Verify that generated code for each protocol in supported languages is correct and functional. | - Ensure WebSocket code in JavaScript uses standard APIs.<br>- Validate Python MQTT code with `paho-mqtt` library.<br>- Check gRPC code generation for Java. |
| **Cross-Platform Tests** | Run tests on different platforms to ensure compatibility and consistent behavior. | - Test WebSocket on macOS, Windows, Linux, Android, and iOS.<br>- Verify SSE on mobile devices.<br>- Ensure gRPC works across desktop and mobile. |
| **Edge Case and Error Handling Tests** | Test scenarios like connection failures, invalid inputs, and large data sets to ensure stability. | - Test WebSocket connection failure.<br>- Verify SSE handles invalid event streams.<br>- Check gRPC with invalid .proto files.<br>- Test MQTT with wrong broker credentials. |
Specific features for each protocol include:
- **WebSocket**: Connection establishment with URL and headers, sending/receiving messages, and real-time visualization with timestamps.
- **SSE**: Connecting to endpoints, displaying events with data and type, and handling reconnection with retry intervals.
- **MQTT**: Broker connection with authentication, topic subscription/publishing, and QoS level management, with visualization of message history.
- **gRPC**: Importing .proto files, selecting services/methods, inputting parameters, and displaying responses, initially focusing on unary calls with potential for streaming.
#### Expected Outcomes and Impact
The project will deliver:
- Full support for testing WebSocket, SSE, MQTT, and gRPC APIs, enhancing API Dash's versatility.
- Intuitive UIs for protocol interactions, ensuring a seamless user experience across platforms.
- Advanced visualization tools, such as syntax-highlighted message logs and real-time updates, improving data inspection.
- Code generation for integrating APIs in languages like JavaScript, Python, and Java, using standard libraries (e.g., the WebSocket API for JavaScript, `paho-mqtt` for Python MQTT).
- Comprehensive documentation, aiding developers in leveraging new features.
A notable aspect is the emphasis on cross-platform compatibility, which is crucial for mobile users and could expand API Dash's adoption in mobile development. This aligns with its current support for iOS and Android, as seen in download links like the [App Store](https://apps.apple.com/us/app/api-dash-api-client-testing/id6711353348).
The benefits include empowering developers working on real-time applications, IoT projects, and microservices by providing a unified tool, enhancing productivity and application quality while contributing to the open-source community.
![FOSS Image](./images/foss.jpeg)
*Testing Diagram*
#### Potential Challenges and Considerations
Several challenges may arise:
- **Protocol Complexity**: Ensuring compliance with specifications, especially for gRPC with Protocol Buffers and streaming calls.
- **User Interface Design**: Balancing intuitive design with the diverse interaction models of each protocol, while maintaining consistency.
- **Performance**: Handling real-time data streams without impacting UI responsiveness, particularly on mobile devices.
- **Code Generation**: Generating accurate code for multiple languages, considering protocol-specific libraries and best practices.
- **Cross-Platform Compatibility**: Ensuring all features work seamlessly across macOS, Windows, Linux, Android, and iOS, addressing platform-specific issues.
Solutions include leveraging established Dart packages, following existing UI patterns, optimizing asynchronous programming, researching language-specific libraries, and extensive platform testing.
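As one hedged example of the asynchronous-programming optimization mentioned above, incoming messages can be buffered and flushed to the UI at a fixed interval so that a burst of WebSocket or MQTT traffic does not trigger a rebuild per message; the interval and helper below are assumptions for illustration.
```dart
import 'dart:async';

/// Buffers items from [source] and emits them in batches every [interval],
/// so the UI updates at a bounded rate even under bursty traffic.
Stream<List<T>> batchByInterval<T>(Stream<T> source, Duration interval) {
  final controller = StreamController<List<T>>();
  final buffer = <T>[];
  Timer? timer;

  void flush() {
    if (buffer.isNotEmpty) {
      controller.add(List<T>.of(buffer));
      buffer.clear();
    }
  }

  final sub = source.listen(
    buffer.add,
    onError: controller.addError,
    onDone: () {
      flush();
      timer?.cancel();
      controller.close();
    },
  );

  timer = Timer.periodic(interval, (_) => flush());
  controller.onCancel = () {
    timer?.cancel();
    sub.cancel();
  };
  return controller.stream;
}

Future<void> main() async {
  // Simulated bursty message stream; in API Dash this would be the
  // WebSocket/MQTT message stream feeding the response view.
  final messages =
      Stream<int>.periodic(const Duration(milliseconds: 20), (i) => i).take(50);

  await for (final batch
      in batchByInterval(messages, const Duration(milliseconds: 200))) {
    print('UI update with ${batch.length} messages');
  }
}
```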
#### Conclusion
This project to integrate WebSocket, SSE, MQTT, and gRPC into API Dash will significantly enhance its capabilities, making it a comprehensive tool for API testing and development. It offers valuable experience in protocol implementation, UX design, and cross-platform development, benefiting the open-source community and developers worldwide.
---
### Key Citations
- [API Dash Discussions Project Ideas List](https://github.com/foss42/apidash/discussions/565/)
- [API Dash Issue WebSocket Support](https://github.com/foss42/apidash/issues/15)
- [API Dash Issue MQTT Support](https://github.com/foss42/apidash/issues/115)
- [API Dash Issue SSE Support](https://github.com/foss42/apidash/issues/116)
- [API Dash Issue gRPC Support](https://github.com/foss42/apidash/issues/14)
- [API Dash iOS App Download Page](https://apps.apple.com/us/app/api-dash-api-client-testing/id6711353348)
- [API Dash macOS Release Download Page](https://github.com/foss42/apidash/releases/latest/download/apidash-macos.dmg)
- [API Dash Windows Release Download Page](https://github.com/foss42/apidash/releases/latest/download/apidash-windows-x86_64.exe)
- [API Dash Linux Installation Guide Page](https://github.com/foss42/apidash/blob/main/INSTALLATION.md)
### 4. Weekly Timeline
| **Week** | **Tasks** |
|----------|----------|
| **Week 1** | Conduct research on WebSocket, SSE, MQTT, and gRPC specifications. Analyze existing API Dash architecture and finalize technical stack. |
| **Week 2** | Design the core library architecture for integrating new protocols. Outline UI requirements for protocol interactions. |
| **Week 3** | Implement WebSocket support: connection establishment, message sending/receiving, and real-time visualization. Write unit tests. |
| **Week 4** | Implement SSE support: event stream handling, automatic reconnection, and real-time event visualization. Conduct initial testing. |
| **Week 5** | Develop MQTT support: broker connection, topic subscription/publishing, QoS management. Implement visualization for messages. |
| **Week 6** | Implement gRPC support: import .proto files, select services/methods, send requests, and visualize responses. Focus on unary calls first. |
| **Week 7** | Extend code generation features for WebSocket, SSE, MQTT, and gRPC in JavaScript, Python, and Java. Validate generated code. |
| **Week 8** | Conduct integration testing for all protocols. Optimize performance and ensure cross-platform compatibility (macOS, Windows, Linux, Android, iOS). |
| **Week 9** | Improve UI/UX for seamless protocol interactions. Add customization options and refine error handling mechanisms. |
| **Week 10** | Perform extensive testing with real-world APIs. Fix bugs, optimize stability, and ensure smooth user experience. |
| **Week 11** | Finalize documentation, create user guides, and add example implementations. Conduct final performance testing. |
| **Week 12** | Prepare project demo, finalize submission, and gather community feedback for improvements. |
This timeline ensures systematic progress while allowing flexibility for testing and optimization.

View File

@ -32,9 +32,9 @@ Short answers to the following questions (Add relevant links wherever you can):
2. What is your one project/achievement that you are most proud of? Why?
3. What kind of problems or challenges motivate you the most to solve them?
4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?
6. Do you mind regularly syncing up with the project mentors?
7. What interests you the most about API Dash?
8. Can you mention some areas where the project can be improved?
5. Do you mind regularly syncing up with the project mentors?
6. What interests you the most about API Dash?
7. Can you mention some areas where the project can be improved?
### Project Proposal Information

View File

@ -84,7 +84,7 @@ class ApidashTestRequestHelper {
var headerCells = find.descendant(
of: find.byType(EditRequestHeaders),
matching: find.byType(HeaderField));
matching: find.byType(EnvHeaderField));
var valueCells = find.descendant(
of: find.byType(EditRequestHeaders),
matching: find.byType(EnvCellField));
@ -95,7 +95,7 @@ class ApidashTestRequestHelper {
tester.testTextInput.enterText(keyValuePairs[i].$2);
headerCells = find.descendant(
of: find.byType(EditRequestHeaders),
matching: find.byType(HeaderField));
matching: find.byType(EnvHeaderField));
valueCells = find.descendant(
of: find.byType(EditRequestHeaders),
matching: find.byType(EnvCellField));

View File

@ -138,7 +138,25 @@ class DashApp extends ConsumerWidget {
!kIsLinux && !kIsMobile
? const App()
: context.isMediumWindow
? const MobileDashboard()
? (kIsMobile
? FutureBuilder<bool>(
future: getOnboardingStatusFromSharedPrefs(),
builder: (context, snapshot) {
if (snapshot.connectionState ==
ConnectionState.done) {
debugPrint(
"showOnboarding: ${snapshot.data.toString()}");
final showOnboarding =
snapshot.data ?? false;
return showOnboarding
? const MobileDashboard()
: const OnboardingScreen();
}
return const Center(
child: CircularProgressIndicator());
},
)
: const MobileDashboard())
: const Dashboard(),
if (kIsWindows)
SizedBox(

View File

@ -37,14 +37,16 @@ let multipartFormData = try! MultipartFormData(boundary: boundary) {
''';
final String kTemplateJsonData = '''
let parameters = "{{jsonData}}"
let postData = parameters.data(using: .utf8)
let postData = """
{{jsonData}}
""".data(using: .utf8)
''';
final String kTemplateTextData = '''
let parameters = "{{textData}}"
let postData = parameters.data(using: .utf8)
let postData = """
{{textData}}
""".data(using: .utf8)
''';
@ -61,15 +63,23 @@ request.addValue("{{value}}", forHTTPHeaderField: "{{header}}")
""";
final String kTemplateBody = """
final String kTemplateFormDataBody = """
request.httpBody = try! multipartFormData.encode()
""";
final String kTemplateJsonTextBody = """
request.httpBody = postData
""";
final String kTemplateEnd = """
let semaphore = DispatchSemaphore(value: 0)
let task = URLSession.shared.dataTask(with: request) { data, response, error in
defer { semaphore.signal() }
if let error = error {
print("Error: (error.localizedDescription)")
print("Error: \\(error.localizedDescription)")
return
}
guard let data = data else {
@ -77,30 +87,31 @@ let task = URLSession.shared.dataTask(with: request) { data, response, error in
return
}
if let responseString = String(data: data, encoding: .utf8) {
print("Response: (responseString)")
print("Response: \\(responseString)")
}
}
task.resume()
semaphore.wait()
""";
String? getCode(HttpRequestModel requestModel) {
try {
String result = kTemplateStart;
if (requestModel.hasFormData) {
result += kTemplateFormDataImport;
}
var rec =
getValidRequestUri(requestModel.url, requestModel.enabledParams);
var rec = getValidRequestUri(requestModel.url, requestModel.enabledParams);
Uri? uri = rec.$1;
if (requestModel.hasFormData) {
result += kTemplateFormDataImport;
var formDataList = requestModel.formDataMapList.map((param) {
if (param['type'] == 'file') {
final filePath = param['value'] as String;
final fileName = path.basename(filePath);
final fileExtension =
final fileExtension =
path.extension(fileName).toLowerCase().replaceFirst('.', '');
return {
'type': 'file',
@ -122,17 +133,19 @@ task.resume()
result += templateFormData.render({
"formData": formDataList,
});
} else if (requestModel.hasJsonData) {
}
// Handle JSON data
else if (requestModel.hasJsonData) {
var templateJsonData = jj.Template(kTemplateJsonData);
result += templateJsonData.render({
"jsonData":
requestModel.body!.replaceAll('"', '\\"').replaceAll('\n', '\\n'),
});
} else if (requestModel.hasTextData) {
"jsonData": requestModel.body!
});
}
// Handle text data
else if (requestModel.hasTextData) {
var templateTextData = jj.Template(kTemplateTextData);
result += templateTextData.render({
"textData":
requestModel.body!.replaceAll('"', '\\"').replaceAll('\n', '\\n'),
"textData": requestModel.body!
});
}
@ -144,19 +157,21 @@ task.resume()
var headers = requestModel.enabledHeadersMap;
if (requestModel.hasFormData) {
headers.putIfAbsent("Content-Type",
() => "multipart/form-data; boundary=(boundary.stringValue)");
} else if (requestModel.hasJsonData || requestModel.hasTextData) {
headers.putIfAbsent(
kHeaderContentType, () => requestModel.bodyContentType.header);
}
headers['Content-Type'] =
"multipart/form-data; boundary=\\(boundary.stringValue)";
} else if(requestModel.hasJsonData||requestModel.hasTextData){
headers['Content-Type'] = 'application/json';
}
if (headers.isNotEmpty) {
var templateHeader = jj.Template(kTemplateHeaders);
result += templateHeader.render({"headers": headers});
}
if (requestModel.hasFormData || requestModel.hasBody) {
result += kTemplateBody;
if (requestModel.hasFormData) {
result += kTemplateFormDataBody;
} else if (requestModel.hasJsonData || requestModel.hasTextData) {
result += kTemplateJsonTextBody;
}
result += kTemplateEnd;

View File

@ -13,6 +13,9 @@ const kAssetIntroMd = "assets/intro.md";
const kAssetSendingLottie = "assets/sending.json";
const kAssetSavingLottie = "assets/saving.json";
const kAssetSavedLottie = "assets/completed.json";
const kAssetGenerateCodeLottie = "assets/generate.json";
const kAssetApiServerLottie = "assets/api_server.json";
const kAssetFolderLottie = "assets/files.json";
final kIsMacOS = !kIsWeb && Platform.isMacOS;
final kIsWindows = !kIsWeb && Platform.isWindows;

View File

@ -1,16 +1,18 @@
export 'api_type_dropdown.dart';
export 'button_navbar.dart';
export 'code_pane.dart';
export 'editor_title.dart';
export 'editor_title_actions.dart';
export 'envfield_url.dart';
export 'editor_title.dart';
export 'env_regexp_span_builder.dart';
export 'env_trigger_field.dart';
export 'env_trigger_options.dart';
export 'envfield_cell.dart';
export 'envfield_header.dart';
export 'envfield_url.dart';
export 'environment_dropdown.dart';
export 'envvar_indicator.dart';
export 'envvar_span.dart';
export 'envvar_popover.dart';
export 'env_trigger_options.dart';
export 'field_header.dart';
export 'envvar_span.dart';
export 'sidebar_filter.dart';
export 'sidebar_header.dart';
export 'sidebar_save_button.dart';

View File

@ -32,6 +32,7 @@ class EnvCellField extends StatelessWidget {
focusNode: focusNode,
style: kCodeStyle.copyWith(
color: clrScheme.onSurface,
fontSize: Theme.of(context).textTheme.bodyMedium?.fontSize,
),
decoration: getTextFieldInputDecoration(
clrScheme,

View File

@ -4,8 +4,8 @@ import 'package:multi_trigger_autocomplete_plus/multi_trigger_autocomplete_plus.
import 'package:apidash/utils/utils.dart';
import 'envfield_cell.dart';
class HeaderField extends StatefulWidget {
const HeaderField({
class EnvHeaderField extends StatefulWidget {
const EnvHeaderField({
super.key,
required this.keyId,
this.hintText,
@ -20,10 +20,10 @@ class HeaderField extends StatefulWidget {
final ColorScheme? colorScheme;
@override
State<HeaderField> createState() => _HeaderFieldState();
State<EnvHeaderField> createState() => _EnvHeaderFieldState();
}
class _HeaderFieldState extends State<HeaderField> {
class _EnvHeaderFieldState extends State<EnvHeaderField> {
final FocusNode focusNode = FocusNode();
@override
Widget build(BuildContext context) {

View File

@ -103,7 +103,7 @@ class EditRequestHeadersState extends ConsumerState<EditRequestHeaders> {
),
),
DataCell(
HeaderField(
EnvHeaderField(
keyId: "$selectedId-$index-headers-k-$seed",
initialValue: headerRows[index].name,
hintText: kHintAddName,

View File

@ -1 +1,2 @@
export 'dashboard.dart';
export 'onboarding_screen.dart';

View File

@ -0,0 +1,170 @@
import 'package:apidash/consts.dart';
import 'package:apidash/screens/mobile/widgets/onboarding_slide.dart';
import 'package:apidash/screens/screens.dart';
import 'package:apidash/services/services.dart';
import 'package:apidash_design_system/apidash_design_system.dart';
import 'package:carousel_slider/carousel_slider.dart';
import 'package:flutter/material.dart';
class OnboardingScreen extends StatefulWidget {
const OnboardingScreen({super.key});
@override
State<OnboardingScreen> createState() => _OnboardingScreenState();
}
class _OnboardingScreenState extends State<OnboardingScreen> {
int currentPageIndex = 0;
final CarouselSliderController _carouselController =
CarouselSliderController();
void _onNextPressed() {
if (currentPageIndex < 2) {
_carouselController.nextPage(
duration: const Duration(milliseconds: 600),
curve: Curves.ease,
);
} else {
Navigator.pushAndRemoveUntil(
context,
MaterialPageRoute(
builder: (context) => MobileDashboard(),
),
(route) => false,
);
}
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.surface,
actions: [
TextButton(
onPressed: () async {
Navigator.pushAndRemoveUntil(
context,
MaterialPageRoute(
builder: (context) => MobileDashboard(),
),
(route) => false,
);
await setOnboardingStatusToSharedPrefs(
isOnboardingComplete: true,
);
},
child: const Text(
'Skip',
),
),
],
),
body: Container(
color: Theme.of(context).colorScheme.surface,
child: Column(
children: [
Expanded(
child: CarouselSlider(
carouselController: _carouselController,
options: CarouselOptions(
height: MediaQuery.of(context).size.height * 0.75,
viewportFraction: 1.0,
enableInfiniteScroll: false,
onPageChanged: (index, reason) {
setState(() {
currentPageIndex = index;
});
},
),
items: [
OnboardingSlide(
context: context,
assetPath: kAssetApiServerLottie,
assetSize: context.width * 0.75,
title: "Test APIs with Ease",
description:
"Send requests, preview responses, and test APIs with ease. REST and GraphQL support included!",
),
OnboardingSlide(
context: context,
assetPath: kAssetFolderLottie,
assetSize: context.width * 0.55,
title: "Organize & Save Requests",
description:
"Save and organize API requests into collections for quick access and better workflow.",
),
OnboardingSlide(
context: context,
assetPath: kAssetGenerateCodeLottie,
assetSize: context.width * 0.65,
title: "Generate Code Instantly",
description:
"Integrate APIs using well tested code generators for JavaScript, Python, Dart, Kotlin & others.",
),
],
),
),
Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
Padding(
padding: const EdgeInsets.only(left: 36.0),
child: Row(
mainAxisAlignment: MainAxisAlignment.center,
children: List.generate(3, (index) {
bool isSelected = currentPageIndex == index;
return GestureDetector(
onTap: () {
_carouselController.animateToPage(index);
},
child: AnimatedContainer(
width: isSelected ? 40 : 18,
height: 7,
margin: const EdgeInsets.symmetric(horizontal: 3),
decoration: BoxDecoration(
color: isSelected
? Theme.of(context).colorScheme.primary
: Theme.of(context)
.colorScheme
.secondaryContainer,
borderRadius: BorderRadius.circular(9),
),
duration: const Duration(milliseconds: 300),
),
);
}),
),
),
Padding(
padding: const EdgeInsets.only(right: 16.0),
child: IconButton(
onPressed: () async {
_onNextPressed();
if (currentPageIndex == 2) {
await setOnboardingStatusToSharedPrefs(
isOnboardingComplete: true,
);
}
},
icon: const Icon(
Icons.arrow_forward_rounded,
size: 30,
),
style: IconButton.styleFrom(
elevation: 8,
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.circular(35),
),
),
),
),
],
),
const SizedBox(height: 60),
],
),
),
);
}
}

View File

@ -0,0 +1,70 @@
import 'package:apidash_design_system/apidash_design_system.dart';
import 'package:flutter/material.dart';
import 'package:lottie/lottie.dart';
class OnboardingSlide extends StatelessWidget {
final BuildContext context;
final String assetPath;
final double assetSize;
final String title;
final String description;
const OnboardingSlide({
required this.context,
required this.assetPath,
required this.assetSize,
required this.title,
required this.description,
super.key,
});
@override
Widget build(BuildContext context) {
return Column(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
crossAxisAlignment: CrossAxisAlignment.center,
children: [
Padding(
padding: const EdgeInsets.only(top: 75.0),
child: Center(
child: Lottie.asset(
assetPath,
renderCache: RenderCache.drawingCommands,
width: assetSize,
fit: BoxFit.cover,
),
),
),
Column(
mainAxisAlignment: MainAxisAlignment.start,
children: [
Text(
title,
textAlign: TextAlign.center,
style: TextStyle(
fontSize: 28,
fontWeight: FontWeight.bold,
color: Theme.of(context).colorScheme.primary,
),
),
const SizedBox(height: 10),
Padding(
padding: const EdgeInsets.symmetric(
vertical: 8.0,
horizontal: 16,
),
child: Text(
description,
textAlign: TextAlign.center,
style: kTextStyleButton.copyWith(fontSize: 16),
),
),
const SizedBox(
height: 70,
)
],
),
],
);
}
}

View File

@ -3,6 +3,7 @@ import 'package:shared_preferences/shared_preferences.dart';
import '../models/models.dart';
const String kSharedPrefSettingsKey = 'apidash-settings';
const String kSharedPrefOnboardingKey = 'apidash-onboarding-status';
Future<SettingsModel?> getSettingsFromSharedPrefs() async {
final prefs = await SharedPreferences.getInstance();
@ -22,7 +23,19 @@ Future<void> setSettingsToSharedPrefs(SettingsModel settingsModel) async {
await prefs.setString(kSharedPrefSettingsKey, settingsModel.toString());
}
Future<void> setOnboardingStatusToSharedPrefs(
{required bool isOnboardingComplete}) async {
final prefs = await SharedPreferences.getInstance();
await prefs.setBool(kSharedPrefOnboardingKey, isOnboardingComplete);
}
Future<bool> getOnboardingStatusFromSharedPrefs() async {
final prefs = await SharedPreferences.getInstance();
final onboardingStatus = prefs.getBool(kSharedPrefOnboardingKey) ?? false;
return onboardingStatus;
}
Future<void> clearSharedPrefs() async {
final prefs = await SharedPreferences.getInstance();
await prefs.remove(kSharedPrefSettingsKey);
await prefs.clear();
}

View File

@ -102,7 +102,7 @@ class _TextFieldEditorState extends State<TextFieldEditor> {
),
filled: true,
hoverColor: kColorTransparent,
fillColor: Theme.of(context).colorScheme.surfaceContainerLow,
fillColor: Theme.of(context).colorScheme.surfaceContainerLowest,
),
),
);

View File

@ -167,7 +167,7 @@ class _JsonTextFieldEditorState extends State<JsonTextFieldEditor> {
),
filled: true,
hoverColor: kColorTransparent,
fillColor: Theme.of(context).colorScheme.surfaceContainerLow,
fillColor: Theme.of(context).colorScheme.surfaceContainerLowest,
),
),
),

View File

@ -1,5 +1,4 @@
import 'dart:io';
import 'dart:collection';
import 'package:flutter/foundation.dart';
import 'package:http/http.dart' as http;
import 'package:http/io_client.dart';
@ -15,7 +14,7 @@ class HttpClientManager {
static final HttpClientManager _instance = HttpClientManager._internal();
static const int _maxCancelledRequests = 100;
final Map<String, http.Client> _clients = {};
final Queue<String> _cancelledRequests = Queue();
final Set<String> _cancelledRequests = {};
factory HttpClientManager() {
return _instance;
@ -38,9 +37,9 @@ class HttpClientManager {
_clients[requestId]?.close();
_clients.remove(requestId);
_cancelledRequests.addLast(requestId);
while (_cancelledRequests.length > _maxCancelledRequests) {
_cancelledRequests.removeFirst();
_cancelledRequests.add(requestId);
if (_cancelledRequests.length > _maxCancelledRequests) {
_cancelledRequests.remove(_cancelledRequests.first);
}
}
}
@ -49,6 +48,10 @@ class HttpClientManager {
return _cancelledRequests.contains(requestId);
}
void removeCancelledRequest(String requestId) {
_cancelledRequests.remove(requestId);
}
void closeClient(String requestId) {
if (_clients.containsKey(requestId)) {
_clients[requestId]?.close();

View File

@ -19,6 +19,9 @@ Future<(HttpResponse?, Duration?, String?)> sendHttpRequest(
SupportedUriSchemes defaultUriScheme = kDefaultUriScheme,
bool noSSL = false,
}) async {
if (httpClientManager.wasRequestCancelled(requestId)) {
httpClientManager.removeCancelledRequest(requestId);
}
final client = httpClientManager.createClient(requestId, noSSL: noSSL);
(Uri?, String?) uriRec = getValidRequestUri(
@ -71,37 +74,27 @@ Future<(HttpResponse?, Duration?, String?)> sendHttpRequest(
}
}
http.StreamedResponse multiPartResponse =
await multiPartRequest.send();
await client.send(multiPartRequest);
stopwatch.stop();
http.Response convertedMultiPartResponse =
await convertStreamedResponse(multiPartResponse);
return (convertedMultiPartResponse, stopwatch.elapsed, null);
}
}
switch (requestModel.method) {
case HTTPVerb.get:
response = await client.get(requestUrl, headers: headers);
break;
case HTTPVerb.head:
response = await client.head(requestUrl, headers: headers);
break;
case HTTPVerb.post:
response =
await client.post(requestUrl, headers: headers, body: body);
break;
case HTTPVerb.put:
response =
await client.put(requestUrl, headers: headers, body: body);
break;
case HTTPVerb.patch:
response =
await client.patch(requestUrl, headers: headers, body: body);
break;
case HTTPVerb.delete:
response =
await client.delete(requestUrl, headers: headers, body: body);
break;
}
response = switch (requestModel.method) {
HTTPVerb.get => await client.get(requestUrl, headers: headers),
HTTPVerb.head => response =
await client.head(requestUrl, headers: headers),
HTTPVerb.post => response =
await client.post(requestUrl, headers: headers, body: body),
HTTPVerb.put => response =
await client.put(requestUrl, headers: headers, body: body),
HTTPVerb.patch => response =
await client.patch(requestUrl, headers: headers, body: body),
HTTPVerb.delete => response =
await client.delete(requestUrl, headers: headers, body: body),
};
}
if (apiType == APIType.graphql) {
var requestBody = getGraphQLBody(requestModel);

View File

@ -10,6 +10,7 @@ class ADDropdownButton<T> extends StatelessWidget {
this.isExpanded = false,
this.isDense = false,
this.iconSize,
this.fontSize,
this.dropdownMenuItemPadding = kPs8,
this.dropdownMenuItemtextStyle,
});
@ -20,6 +21,7 @@ class ADDropdownButton<T> extends StatelessWidget {
final bool isExpanded;
final bool isDense;
final double? iconSize;
final double? fontSize;
final EdgeInsetsGeometry dropdownMenuItemPadding;
final TextStyle? Function(T)? dropdownMenuItemtextStyle;
@ -38,6 +40,7 @@ class ADDropdownButton<T> extends StatelessWidget {
elevation: 4,
style: kCodeStyle.copyWith(
color: Theme.of(context).colorScheme.primary,
fontSize: fontSize ?? Theme.of(context).textTheme.bodyMedium?.fontSize,
),
underline: Container(
height: 0,

View File

@ -175,6 +175,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "8.9.4"
carousel_slider:
dependency: "direct main"
description:
name: carousel_slider
sha256: "7b006ec356205054af5beaef62e2221160ea36b90fb70a35e4deacd49d0349ae"
url: "https://pub.dev"
source: hosted
version: "5.0.0"
characters:
dependency: transitive
description:

View File

@ -68,6 +68,7 @@ dependencies:
git:
url: https://github.com/google/flutter-desktop-embedding.git
path: plugins/window_size
carousel_slider: ^5.0.0
dependency_overrides:
extended_text_field: ^16.0.0

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,50 @@
import 'package:apidash/screens/common_widgets/envfield_header.dart';
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_portal/flutter_portal.dart';
import 'package:extended_text_field/extended_text_field.dart';
import 'package:spot/spot.dart';
void main() {
group('HeaderField Widget Tests', () {
testWidgets('HeaderField renders and displays ExtendedTextField',
(tester) async {
await tester.pumpWidget(
const Portal(
child: MaterialApp(
home: Scaffold(
body: EnvHeaderField(
keyId: "testKey",
hintText: "Enter header",
),
),
),
),
);
spot<EnvHeaderField>().spot<ExtendedTextField>().existsOnce();
});
testWidgets('HeaderField calls onChanged when text changes',
(tester) async {
String? changedText;
await tester.pumpWidget(
Portal(
child: MaterialApp(
home: Scaffold(
body: EnvHeaderField(
keyId: "testKey",
hintText: "Enter header",
onChanged: (text) => changedText = text,
),
),
),
),
);
await act.tap(spot<EnvHeaderField>().spot<ExtendedTextField>());
tester.testTextInput.enterText("new header");
expect(changedText, "new header");
});
});
}

View File

@ -1,54 +1,8 @@
import 'package:apidash/screens/common_widgets/field_header.dart';
import 'package:apidash/widgets/menu_header_suggestions.dart';
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_portal/flutter_portal.dart';
import 'package:extended_text_field/extended_text_field.dart';
import 'package:spot/spot.dart';
void main() {
group('HeaderField Widget Tests', () {
testWidgets('HeaderField renders and displays ExtendedTextField',
(tester) async {
await tester.pumpWidget(
const Portal(
child: MaterialApp(
home: Scaffold(
body: HeaderField(
keyId: "testKey",
hintText: "Enter header",
),
),
),
),
);
spot<HeaderField>().spot<ExtendedTextField>().existsOnce();
});
testWidgets('HeaderField calls onChanged when text changes',
(tester) async {
String? changedText;
await tester.pumpWidget(
Portal(
child: MaterialApp(
home: Scaffold(
body: HeaderField(
keyId: "testKey",
hintText: "Enter header",
onChanged: (text) => changedText = text,
),
),
),
),
);
await act.tap(spot<HeaderField>().spot<ExtendedTextField>());
tester.testTextInput.enterText("new header");
expect(changedText, "new header");
});
});
group('HeaderSuggestions Widget Tests', () {
testWidgets('HeaderSuggestions displays suggestions correctly',
(tester) async {