### About

1. **Full Name**: Sunil Kumar Sharma
2. **Contact Info**: sharma.sunil12527@gmail.com, +91 8979696414
3. **Discord User ID**: AZURE (502613458638995456)
4. **GitHub Handle**: https://github.com/Azur3-bit
5. **Socials**: https://www.linkedin.com/in/sunil-sharma-206871205/
6. **Time Zone**: GMT+5:30 (India)
7. **Resume**: https://drive.google.com/file/d/1B3ixbrlPwwCfFw8Lcq3LXvW3N5dWmviX/view?usp=sharing

### University Info

1. **University Name**: SRM Institute of Science and Technology
2. **Program**: B.Tech in Computer Science & Engineering
3. **Year**: 4th (Final Year)
4. **Expected Graduation Date**: June 2025

### Motivation & Past Experience

1. **Have you worked on or contributed to a FOSS project before?**

Yes, I have actively contributed to open-source projects, including adding support for PHP, Rust, and Golang, improving UI elements, and enhancing test coverage for various repositories. Some of my notable contributions:

- Added support for PHP, Rust, and Golang in an online compiler.
- Improved the UI/UX of an online coding platform.
- Introduced a Python script for OpenAI key validation.
- Relevant PR: https://github.com/kalviumcommunity/compilerd/pull/139

While this PR was not merged due to a shift in project priorities, the maintainers appreciated my effort and provided constructive feedback, which helped me refine my contributions.

2. **What is your one project/achievement that you are most proud of? Why?**

One of my proudest achievements is my project on **Self-Optimizing and Intelligent Cloud Infrastructure**. The system integrates AWS Predictive Auto-Scaling with CloudWatch monitoring and cost-optimization techniques, reducing infrastructure costs by ₹766.82 per month. This project showcases my expertise in **cloud computing, automation, and cost optimization** while making a real-world impact.

3. **What kind of problems or challenges motivate you the most to solve them?**

I like working on problems that push me to improve efficiency, enhance security, and automate complex processes. Challenges in **API authentication, cloud infrastructure, and scalable systems** interest me the most because they require a balance of security, optimization, and real-world application.

4. **Will you be working on GSoC full-time?**

Yes, I will be working full-time on my GSoC project.

5. **Do you mind regularly syncing up with the project mentors?**

Not at all! Regular sync-ups will help ensure alignment with project goals and continuous improvement.

6. **What interests you the most about API Dash?**

API Dash is a lightweight and efficient API testing tool that avoids the unnecessary complexity of other platforms. I like how it keeps things simple while integrating AI to make API testing more intuitive and developer-friendly.

7. **Can you mention some areas where the project can be improved?**

- **Authentication Mechanisms**: Implementing **Multi-Factor Authentication (MFA)**, including biometric authentication, would enhance security and improve the user experience. Having worked on MFA in payment gateways, I can integrate fingerprint recognition to streamline authentication, reducing reliance on passwords without compromising security. Secure storage will protect credentials, enabling seamless and fast authentication for valid users on both mobile and desktop platforms.

### Project Proposal Information

#### Proposal Title: **Enhancing API Authentication & Secure Storage in API Dash**

#### Abstract

This project aims to **implement secure storage for authentication tokens using Flutter Secure Storage and integrate biometric authentication** for an added layer of security. The goal is to **enhance security while keeping API Dash lightweight and user-friendly**.

![Proposed authentication flow](<images/sunil Auth image.png>)

#### Detailed Description

| Feature | Description |
|---------|-------------|
| **Secure Token Storage** | Implement **Flutter Secure Storage** to store authentication tokens in an encrypted format. |
| **Biometric Authentication** | Enable **fingerprint unlock** for accessing stored API credentials. |
| **Improved UI for Authentication Management** | Add an intuitive UI for securely managing saved authentication methods. |
| **Multiple Authentication Methods** | Ensure seamless support for Basic Auth, API Key, JWT, OAuth 1.0, OAuth 2.0, and Digest Authentication. |
| **Efficient Request Handling** | Ensure that secure storage integration does not affect API request efficiency. |

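The table above combines two pieces: encrypted token storage and a biometric gate in front of it. A minimal sketch of how they could fit together, assuming the `flutter_secure_storage` and `local_auth` packages; the `SecureTokenVault` class and the key-naming scheme are illustrative, not part of API Dash:

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
import 'package:local_auth/local_auth.dart';

/// Gates access to stored API tokens behind a biometric check.
class SecureTokenVault {
  final _storage = const FlutterSecureStorage();
  final _auth = LocalAuthentication();

  /// Store the token in platform-encrypted storage (Keychain/Keystore).
  Future<void> saveToken(String name, String token) =>
      _storage.write(key: 'token_$name', value: token);

  /// Require a fingerprint/face check before releasing the token.
  Future<String?> readToken(String name) async {
    final ok = await _auth.authenticate(
      localizedReason: 'Unlock your saved API credentials',
    );
    return ok ? _storage.read(key: 'token_$name') : null;
  }
}
```

On platforms without biometric hardware, `local_auth` can fall back to the device PIN, so the same gate works on both mobile and desktop targets.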
#### Weekly Timeline

| Week | Tasks |
|------|-------|
| **Week 1** | Study API Dash's authentication mechanisms and security vulnerabilities. Set up the development environment. |
| **Week 2** | Implement **Flutter Secure Storage** for encrypted token storage. |
| **Weeks 3-5** | Integrate **biometric authentication** for secure access to stored credentials. Improve the UI for securely managing authentication credentials. |
| **Weeks 6-9** | Implement and test multiple authentication methods (Basic Auth, API Key, JWT, OAuth, Digest Auth) with secure storage. Ensure **efficient API request handling** with secure storage integration. |
| **Week 10** | Optimize performance and conduct security audits of token storage. |
| **Week 11** | Improve documentation for secure authentication management in API Dash. |
| **Week 12** | Conduct thorough testing, debugging, and security validation. Prepare the final report and demo, and submit the project. |
---
### About

1. Full Name: Mohit Kumar Singh
2. Contact info: 8538948208, tihom4537@gmail.com
3. Discord handle: tihom__37
4. GitHub profile link: https://github.com/tihom4537
5. LinkedIn: https://www.linkedin.com/in/mohit-kumar-singh-268700254
6. Time zone: IST (GMT+5:30)
7. Link to a resume (PDF, publicly accessible via link and not behind any login-wall): https://drive.google.com/file/d/1j11dbTE2JYhsXkBP7Jg4wxhY-bnTt425/view?usp=drivesdk

### University Info

1. University name: National Institute Of Technology, Hamirpur
2. Program you are enrolled in (Degree & Major/Minor): B.Tech in Electrical Engineering
3. Year: Pre-final year (3rd year), 2023
4. Expected graduation date: 2024

### Motivation & Past Experience

Short answers to the following questions (add relevant links wherever you can):

1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?

   While I haven't yet had the opportunity to contribute to a FOSS project, I am keenly interested in open-source development and am actively exploring avenues to participate.

2. What is your one project/achievement that you are most proud of? Why?

   **Artist Connection Platform**

   I designed and developed a comprehensive artist connection platform that facilitates collaboration between artists and clients. This project is my most significant achievement because I independently handled the entire development lifecycle, from conception to deployment.

   As the sole developer, I implemented the frontend in Flutter and the backend in Laravel. The platform's core features include:

   * Secure upload and management of large media files (videos and images) to AWS S3
   * Dynamic artist work profiles with portfolio showcasing
   * Phone number verification through OTP authentication
   * Secure payment processing through Razorpay integration
   * Real-time communication via Firebase notifications

   The deployment leverages multiple AWS services:

   * EC2 instances for backend hosting
   * S3 buckets for asset management
   * Relational Database Service (RDS) for data storage
   * A load balancer for traffic management and high availability

   This project demonstrates my ability to handle complex technical challenges across the full stack while delivering a production-ready solution. The application is live with a growing user base on both mobile platforms.

   Links:

   * Android: https://play.google.com/store/apps/details?id=in.primestage.onnstage&pcampaignid=web_share
   * iOS: https://apps.apple.com/in/app/primestage-artist-booking-app/id6736954597
   * GitHub (Frontend): https://github.com/hunter4433/artistaFrontend-.git
   * GitHub (Backend): https://github.com/hunter4433/artistaFrontend-.git

3. What kind of problems or challenges motivate you the most to solve them?

   I am particularly motivated by smart, efficient system-design challenges, especially those focused on scalability and seamless handling of user load. I find it exciting to work on products that are built to scale, ensuring they can handle growing demand without compromising performance. The opportunity to design systems that are both robust and efficient drives my passion for solving complex technical problems.

4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?

   I will be working full-time until the midterm evaluation (July 14), as my summer vacation runs from the first week of May to the first week of July. After that, I will still contribute 3-4 hours daily alongside my academic curriculum.

5. Do you mind regularly syncing up with the project mentors?

   I don't mind regular sync-ups with project mentors at all. In fact, I welcome the opportunity for consistent communication and feedback throughout the project.

6. What interests you the most about API Dash?

   I have worked with API creation, management, and load testing in previous projects, which has given me insight into their industrial importance. What particularly interests me about API Dash is its comprehensive approach to API monitoring, code generation, and visualization. I'm excited about the opportunity to contribute to a tool that helps developers track and improve API performance in real time.

7. Can you mention some areas where the project can be improved?

   It lacks integration with tools such as CI/CD pipelines and version-control platforms like GitHub. We could offer such integrations to help teams manage and automate API testing and monitoring.

# API Testing Suite Implementation - GSoC Proposal

## 1. Proposal Title

API Testing Suite, Workflow Builder, Collection Runner & Monitor Implementation for the API Dash Framework

Related issues: #96, #100, #120

## 2. Abstract

This project aims to implement a comprehensive API Testing Suite within the existing API Dash framework. Modern API development requires robust testing tools to ensure reliability, performance, and security. The proposed suite will give developers a powerful solution for creating, managing, and executing various types of API tests through a flexible and intuitive interface. By implementing a hierarchical test organization structure, asynchronous test execution, JavaScript-based test scripting, and detailed reporting, this project will significantly enhance the API development workflow within the API Dash ecosystem.

## 3. Detailed Description

### Project Objectives

The API Testing Suite implementation will focus on the following key objectives:

- **Test Case Management**: Develop a comprehensive system for creating and managing test cases, with support for multiple test types, environment variables, and execution history.
- **Test Suite Organization**: Implement a hierarchical structure for organizing tests, with nested suites, suite-level environment variables, and advanced execution controls.
- **Test Execution Engine**: Create a powerful engine for running tests asynchronously, with configurable timeouts, progress monitoring, and status checking.
- **Test Scripting Interface**: Build a flexible scripting interface using JavaScript/Chai for custom validation logic and assertion-based testing.
- **Reporting System**: Implement detailed reporting with multiple output formats and comprehensive test result metrics.

### Workflow Architecture

The API Testing Suite follows a logical workflow that enables systematic API testing:

![API Testing Suite Workflow](images/workflow_architecture.png)

This diagram illustrates the complete testing process, from creating test suites to generating reports, along with the test types and execution modes supported by the implementation.

### Technical Implementation Plan

#### 1. Test Case Management Module

The core of the project will be a robust test case management system that supports:

- Multiple test types, including response validation, environment variable, performance, and security tests
- Comprehensive test case properties (name, description, enable/disable functionality)
- Environment variable integration
- Test script association
- Execution history tracking

**Implementation Details:**

- Create `test_case_model.dart` to define the core data structure
- Develop test result tracking mechanisms
- Implement environment variable management within test cases

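The core data structure in `test_case_model.dart` could look roughly like the sketch below; the `TestType` enum and all field names are illustrative assumptions, not the final design:

```dart
// Hypothetical sketch of test_case_model.dart; names are illustrative.
enum TestType { responseValidation, environmentVariable, performance, security }

class TestCaseModel {
  final String id;
  final String name;
  final String? description;
  final TestType type;
  final bool enabled; // enable/disable functionality
  final Map<String, String> environmentVariables;
  final String? script; // associated JavaScript/Chai test script
  final List<DateTime> executionHistory;

  const TestCaseModel({
    required this.id,
    required this.name,
    this.description,
    this.type = TestType.responseValidation,
    this.enabled = true,
    this.environmentVariables = const {},
    this.script,
    this.executionHistory = const [],
  });
}
```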
#### 2. Test Suite Organization

The project will implement a hierarchical test suite structure allowing:

- Creation and management of test suites
- Support for nested test suites (suites of suites)
- Suite-level environment variables
- Advanced execution controls, including a stop-on-failure option, test reordering, duplication, and search

**Implementation Details:**

- Develop `test_suite_model.dart` to define the suite structure
- Implement state management via `test_suite_provider.dart`
- Create UI components for navigating and managing the suite hierarchy

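The nested "suite of suites" structure can be captured with a recursive model; a sketch in which the class, fields, and traversal helper are assumptions:

```dart
// Illustrative sketch of a nested test-suite structure.
class TestSuite {
  final String name;
  final bool stopOnFailure;
  final Map<String, String> suiteVariables; // suite-level environment variables
  final List<String> testCaseIds;
  final List<TestSuite> children; // nested suites ("suite of suites")

  const TestSuite({
    required this.name,
    this.stopOnFailure = false,
    this.suiteVariables = const {},
    this.testCaseIds = const [],
    this.children = const [],
  });

  /// Collect all test-case ids in this suite and its nested suites,
  /// depth-first, preserving declared order.
  List<String> allTestCaseIds() => [
        ...testCaseIds,
        for (final child in children) ...child.allTestCaseIds(),
      ];
}
```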
#### 3. Test Execution Engine

A powerful test execution engine will be implemented that supports:

- Individual test execution
- Suite and nested-suite execution
- Asynchronous tests with configurable timeouts
- Status-checking endpoints
- Progress monitoring

**Implementation Details:**

- Create `test_runner_service.dart` to handle execution logic
- Implement asynchronous test handling mechanisms
- Develop result collection functionality

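Asynchronous execution with configurable timeouts can lean on Dart's built-in `Future.timeout`; a minimal sketch, where the `TestOutcome` enum and the runner signature are illustrative assumptions:

```dart
import 'dart:async';

enum TestOutcome { passed, failed, timedOut }

/// Run a single async test, mapping timeouts to a distinct outcome
/// instead of letting them crash the whole suite run.
Future<TestOutcome> runWithTimeout(
  Future<bool> Function() test, {
  Duration timeout = const Duration(seconds: 30),
}) async {
  try {
    final passed = await test().timeout(timeout);
    return passed ? TestOutcome.passed : TestOutcome.failed;
  } on TimeoutException {
    return TestOutcome.timedOut;
  }
}
```

Per-test timeouts like this keep one slow endpoint from stalling an entire suite, which matters once suites nest and run sequentially.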
#### 4. Test Scripting Interface

The project will provide a flexible scripting interface using JavaScript/Chai that supports:

- Assertion-based testing
- Environment variable access
- Asynchronous operation handling
- Custom validation logic

**Implementation Details:**

- Create `test_script_model.dart` for script definition
- Implement the script execution context
- Develop result handling mechanisms

#### 5. Reporting System

A comprehensive reporting system will be implemented, supporting:

- Multiple report formats (JSON, CSV, HTML)
- Detailed report contents, including test results, execution times, error messages, and performance metrics

**Implementation Details:**

- Create report generation services
- Implement formatters for different output types
- Develop result visualization components

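The JSON and CSV formatters could share a single report-entry type; a sketch with illustrative names (the real report model would carry more fields, such as error messages):

```dart
import 'dart:convert';

class TestReportEntry {
  final String name;
  final bool passed;
  final int durationMs;
  const TestReportEntry(this.name, this.passed, this.durationMs);
}

/// Serialize entries as a JSON array, one object per test.
String toJsonReport(List<TestReportEntry> entries) => jsonEncode([
      for (final e in entries)
        {'name': e.name, 'passed': e.passed, 'durationMs': e.durationMs},
    ]);

/// Serialize entries as CSV with a header row.
String toCsvReport(List<TestReportEntry> entries) => [
      'name,passed,durationMs',
      for (final e in entries) '${e.name},${e.passed},${e.durationMs}',
    ].join('\n');
```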
### API Load Testing Capabilities

Building on the core testing framework, the implementation will include advanced load-testing capabilities:

- **Multiple Testing Methodologies**:
  - Concurrent Users Simulation
  - Requests Per Second (RPS) Testing
  - Total Requests Testing
  - Duration-Based Testing

![Load Testing Implementation](images/load_testing.png)

- **Performance Metrics**:
  - Success and failure rates
  - Average response times
  - 95th and 99th percentile response times
  - Throughput (requests per second)
  - Individual request timestamps and status codes

- **Configuration Options**:
  - HTTP methods (GET, POST, PUT, DELETE)
  - Custom headers and request bodies
  - Load patterns with configurable ramp-up and ramp-down periods

The solution implements intelligent request scheduling, as demonstrated in this core algorithm:

```dart
List<int> _calculateRequestTimings(LoadTestConfig config) {
  final timings = <int>[];
  switch (config.type) {
    case LoadTestType.concurrentUsers:
      // For concurrent users, send all requests at once.
      timings.addAll(List.filled(config.value, 0));
      break;
    case LoadTestType.requestsPerSecond:
      // For RPS, space out requests evenly.
      final interval = (1000 / config.value).round();
      timings.addAll(List.generate(config.value, (i) => i * interval));
      break;
    case LoadTestType.totalRequests:
      // For total requests, spread them over 1 minute.
      final interval = (60000 / config.value).round();
      timings.addAll(List.generate(config.value, (i) => i * interval));
      break;
    case LoadTestType.durationBased:
      // For duration-based tests, send 100 requests across the duration.
      final interval = (config.value * 1000 / 100).round();
      timings.addAll(List.generate(100, (i) => i * interval));
      break;
  }

  // Add ramp-up and ramp-down periods.
  if (config.rampUpTime > 0) {
    final rampUpInterval = config.rampUpTime * 1000 / timings.length;
    for (var i = 0; i < timings.length; i++) {
      timings[i] += (i * rampUpInterval).round();
    }
  }
  if (config.rampDownTime > 0) {
    final rampDownInterval = config.rampDownTime * 1000 / timings.length;
    for (var i = 0; i < timings.length; i++) {
      timings[i] += ((timings.length - i) * rampDownInterval).round();
    }
  }
  return timings;
}
```

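The 95th/99th-percentile metrics listed earlier can be computed from the collected response times. A sketch using the nearest-rank method, which is one common choice and an assumption here, not necessarily what the final implementation will use:

```dart
/// Nearest-rank percentile of collected response times (p in (0, 100]).
int percentile(List<int> responseTimesMs, double p) {
  assert(responseTimesMs.isNotEmpty && p > 0 && p <= 100);
  final sorted = [...responseTimesMs]..sort();
  // Nearest rank: ceil(p/100 * n), converted to a 0-based index.
  final rank = (p / 100 * sorted.length).ceil();
  return sorted[rank - 1];
}
```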
### API Collection and Workflow Management

The implementation will include a sophisticated system for API management through collections and visual workflows:

- **Collections Management**:
  - Organized grouping of related API requests
  - Import/export capabilities
  - Filtering and search functionality

- **Visual Workflow Builder**:
  - Drag-and-drop interface for workflow creation
  - Support for various node types (requests, delays, variables, conditions)
  - Interactive connector lines between nodes
  - Conditional branching based on response data

- **Variable Management**:
  - Dynamic variable substitution in URLs, headers, and request bodies
  - Environment-specific variable sets
  - Automatic variable extraction from responses

The implementation includes a robust execution engine for workflows:

```dart
Future<CollectionRunResult> _executeWorkflow(
  ApiWorkflow workflow,
  Map<String, dynamic> variables,
) async {
  // Sort nodes by position to determine execution order.
  final sortedNodes = workflow.nodes.toList()
    ..sort((a, b) => a.position.y.compareTo(b.position.y));

  // Execute nodes in sequence.
  for (final node in sortedNodes) {
    final result = await _executeNode(node, variables);
    // Process the result and update variables.
  }

  // Return workflow execution results.
  return CollectionRunResult(/* ... */);
}
```

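The dynamic variable substitution mentioned above can be implemented as a small template pass over URLs, headers, and bodies; a sketch assuming a `{{variable}}` syntax borrowed from common API tools (the syntax and function name are assumptions):

```dart
/// Replace {{name}} placeholders with values from the variable map,
/// leaving unknown placeholders untouched.
String substituteVariables(String template, Map<String, String> variables) =>
    template.replaceAllMapped(
      RegExp(r'\{\{(\w+)\}\}'),
      (m) => variables[m[1]] ?? m[0]!,
    );
```

Leaving unresolved placeholders intact (rather than substituting an empty string) makes missing variables visible in request previews instead of silently producing malformed URLs.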
### Testing Strategy

The project will include comprehensive testing of all components:

- Unit tests for the test runner (`test_runner_test.dart`)
- Integration tests for the test suite provider (`test_suite_provider_test.dart`)
- End-to-end tests to validate the full testing workflow

### Integration with Existing System

The API Testing Suite will integrate seamlessly with existing API Dash features, providing:

- An improved API testing workflow
- Better test organization
- Enhanced test automation
- Detailed test reporting
- A consistent user experience

### Benefits to the Community

This implementation will benefit the community by:

- Improving API quality through comprehensive testing
- Reducing development time with automated testing
- Enhancing debugging capabilities with detailed reporting
- Supporting a wider range of testing scenarios
- Providing a more complete development ecosystem within API Dash

## 4. Weekly Timeline

| Week | Date Range | Activities | Deliverables |
|------|------------|------------|--------------|
| **1-2** | **May 8 - June 1** | • Review existing API Dash framework • Set up development environment • Finalize design documents • Create initial project structure | • Project repository setup • Detailed design document • Initial framework |
| **3** | **June 2 - June 8** | • Implement test case model • Create basic test case properties • Design test case UI | • Basic test case data structure • UI wireframes |
| **4** | **June 9 - June 15** | • Implement environment variable handling • Develop test case management UI • Create test result models | • Environment variable system • Test case management interface |
| **5** | **June 16 - June 22** | • Implement test suite models • Create suite hierarchy structure • Begin suite-level variable implementation | • Test suite data structure • Initial hierarchy navigation |
| **6** | **June 23 - June 30** | • Complete suite-level variable implementation • Develop test ordering functionality • Create suite management UI | • Suite management interface • Test ordering system |
| **7** | **July 1 - July 7** | • Begin test runner service implementation • Develop basic test execution logic • Implement test status tracking | • Basic test execution engine • Status tracking system |
| **8** | **July 8 - July 14** | • Complete test runner service • Implement asynchronous test handling • Create progress monitoring UI • Prepare midterm evaluation | • Working test execution engine • Progress monitoring interface • Midterm evaluation report |
| **9** | **July 18 - July 24** | • Begin JavaScript/Chai integration • Create script model • Implement basic assertion handling | • Script data structure • Basic script execution |
| **10** | **July 25 - July 31** | • Complete script execution context • Implement advanced assertions • Develop environment variable access in scripts | • Complete scripting interface • Variable access in scripts |
| **11** | **August 1 - August 7** | • Begin reporting system implementation • Create report models • Implement JSON/CSV formatters | • Report data structure • Basic formatters |
| **12** | **August 8 - August 14** | • Complete reporting system • Implement HTML reports • Develop visualization components • Create export functionality | • Complete reporting system • Multiple export formats |
| **13** | **August 15 - August 21** | • Integrate load testing capabilities • Implement test collections • Begin workflow builder implementation | • Load testing functionality • Collections management |
| **14** | **August 22 - August 25** | • Complete workflow builder • Perform comprehensive testing • Fix bugs and optimize performance | • Complete workflow system • Passing test suite |
| **15** | **August 26 - September 1** | • Finalize documentation • Create tutorial content • Prepare final submission • Submit final evaluation | • Complete API Testing Suite • Comprehensive documentation • Final project report |

### Technical Skills and Qualifications

- Proficient in Dart and Flutter development
- Experience with API testing methodologies
- Understanding of asynchronous programming concepts
- Familiarity with JavaScript and testing frameworks
- Knowledge of state management in Flutter applications

### Expected Outcomes

Upon completion, the API Testing Suite will provide:

- Comprehensive test management capabilities
- Flexible test organization structures
- Powerful test scripting options
- Detailed testing reports
- An intuitive workflow builder interface

This implementation will significantly enhance the API Dash framework, making it a more complete solution for API development and testing.
---
### About

1. Full Name: Aviral Garg
2. Contact info (email, phone, etc.): gargaviral99@gmail.com, +91-9971195728
3. Discord handle: __aviral
4. Home page (if any):
5. Blog (if any): https://dev.to/aviralgarg05
6. GitHub profile link: https://github.com/aviralgarg05
7. Twitter, LinkedIn, other socials: https://www.linkedin.com/in/aviral-garg-b7b053280/
8. Time zone: IST
9. Link to a resume (PDF, publicly accessible via link and not behind any login-wall): https://false-rooster-1f2.notion.site/Aviral-Garg-CV-15737ff1adc58070be95fbb15b8a6cc3?pvs=4

### University Info

1. University name: GGSIPU, Delhi
2. Program you are enrolled in (Degree & Major/Minor): B.Tech in CSE
3. Year: 2nd Year
4. Expected graduation date: 2027

### Motivation & Past Experience

Short answers to the following questions (add relevant links wherever you can):

1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?

   No.

2. What is your one project/achievement that you are most proud of? Why?

   In my first year, I interned as a researcher and published a research paper for DRDO. In my second year, I received an offer from R&D at IIT Hyderabad as a full-time AI Engineer.

3. What kind of problems or challenges motivate you the most to solve them?

   I am driven by solving complex AI, cybersecurity, and automation challenges, particularly in real-time systems, IIoT security, and intelligent decision-making. My passion lies in developing innovative, efficient, and ethical AI solutions that enhance security, automation, and human-AI interaction.

4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?

   Yes.

5. Do you mind regularly syncing up with the project mentors?

   No.

6. What interests you the most about API Dash?

   I'm interested in API Dash for its ability to streamline API management, testing, and automation, which aligns with my work in AI-driven automation, cybersecurity, and real-time systems.

7. Can you mention some areas where the project can be improved?

   DashBot can be improved by enhancing its NLP capabilities for better understanding of complex API queries and providing contextual solutions. Integrating real-time API testing, debugging assistance, and automation can streamline issue resolution. Additionally, implementing security checks and compliance suggestions will make it more robust and accessible for developers.

### Project Proposal Information

1. Proposal Title: DashBot: AI-Powered Chatbot for API Management, Debugging, and Automation

2. Abstract: Managing APIs efficiently can be time-consuming, especially when debugging issues, optimizing workflows, or ensuring security compliance. DashBot is an AI-powered chatbot designed to assist developers with API management by providing real-time issue resolution, debugging assistance, automation of repetitive tasks, and security insights. Using natural language processing (NLP) and machine learning, DashBot will help users interact with APIs seamlessly through a conversational interface.

3. Detailed Description: DashBot will be designed as an intelligent chatbot that integrates with API Dash and other API management platforms. Key functionalities include:

   - Smart API Query Handling: understands user requests and provides contextual API recommendations.
   - Real-Time Debugging Assistance: identifies errors, suggests fixes, and helps troubleshoot API failures.
   - Automation of API Workflows: automates repetitive tasks like API calls, request scheduling, and response validation.
   - Security & Compliance Checks: detects vulnerabilities, suggests security enhancements, and ensures adherence to best practices.

4. Weekly Timeline:

   - Week 1: Research API Dash integration, define the project scope, and finalize the chatbot architecture.
   - Week 2: Develop the basic chatbot framework with NLP capabilities for handling API-related queries.
   - Week 3: Implement debugging assistance, issue resolution, and API workflow automation features.
   - Week 4: Integrate security checks and compliance suggestions for API best practices.
   - Week 5: Add multi-platform support (Slack, Discord, etc.), optimize performance, and test functionality.
   - Week 6: Conduct user testing, refine chatbot responses, and deploy DashBot for beta testing.

---

<!-- doc/proposals/2025/gsoc/application_harsh_panchal_AI_API_EVAL.md -->
# GSoC Proposal for AI API Evaluation

## About

1. **Full Name:** Harsh Panchal
2. **Email:** harsh.panchal.0910@gmail.com
3. **Phone number:** +91-9925095794
4. **Discord Handle:** panchalharsh
5. **Home Page:** [harshpanchal0910.netlify.app](https://harshpanchal0910.netlify.app/)
6. **GitHub:** [GANGSTER0910](https://github.com/GANGSTER0910)
7. **LinkedIn:** [Harsh Panchal](https://www.linkedin.com/in/harsh-panchal-902636255)
8. **Time Zone:** IST (UTC+5:30)
9. **Resume:** [Link to Resume](https://drive.google.com/drive/folders/1iDp0EnksaVXV3MmWd_uhGoprAuFzyqwB)

## University Information

1. **University Name:** Ahmedabad University, Ahmedabad
2. **Program:** BTech in Computer Science and Engineering
3. **Year:** 3rd Year
4. **Expected Graduation Date:** May 2026

## Motivation & Past Experience

1. **Have you worked on or contributed to a FOSS project before?**
   - Yes, I contributed to foss42/api during GSSoC 2024.
   - [Contribution](https://github.com/foss42/api/pull/69) - Added a PUT API.

2. **What is your one project/achievement that you are most proud of? Why?**
   - One project I'm really proud of is TrippoBot, an AI-powered travel-assistance chatbot I built with my team. It helps users with personalized travel recommendations, booking assistance, and real-time insights. Developing it was both challenging and rewarding: we had to integrate AI for natural language understanding, ensure smooth API interactions, and fine-tune the bot for accurate responses.
   - What made it even more special was winning 2nd place at the TicTechToe Hackathon. Competing against talented teams and seeing our hard work recognized was an amazing feeling. It boosted my confidence, sharpened my problem-solving skills, and showed me the real-world impact of AI applications. Looking back, it's a reminder of how much I enjoy tackling complex problems and turning ideas into practical solutions.

3. **What kind of problems or challenges motivate you the most to solve them?**
   - I am most inspired by complicated challenges that require me to think in new ways and stretch my limits. I enjoy tackling new problems because they let me learn, discover innovative solutions, and gain a better understanding of developing technologies. The AI API Evaluation project interests me because it entails examining several AI models, determining their strengths and limitations, and employing rigorous evaluation procedures. The prospect of breaking down complex model behaviors, evaluating performance indicators, and deriving actionable insights is very appealing. I am motivated by the challenge of building solutions that advance AI assessment frameworks, resulting in more transparent and dependable AI applications.

4. **Will you be working on GSoC full-time?**
   - Yes, I will be working full-time on my GSoC project.

5. **Do you mind regularly syncing up with the project mentors?**
   - Not at all; I am comfortable with regular mentor interactions to ensure aligned development.

6. **What interests you the most about API Dash?**
   - API Dash is like a go-to toolkit for working with APIs. It makes testing, debugging, and evaluating APIs easy with its user-friendly interface. You can compare API responses in real time and even assess how different AI models perform. It's designed to take the guesswork out of API management, helping you make smarter decisions and build stronger applications. Think of it as an extra pair of hands to simplify your API tasks!
|
||||
|
||||
7. **Can you mention some areas where the project can be improved?**
|
||||
- Enhanced Evaluation Framework – Add a robust AI model evaluation system for benchmarking across industry tasks.
|
||||
|
||||
- Customizable Evaluation Criteria – Allow users to define metrics like fairness, robustness, and interpretability.
|
||||
|
||||
- Support for Offline Datasets & Models – Provide options to upload and evaluate local models and datasets.
|
||||
|
||||
- Interactive Visualizations – Improve API performance insights with comparative graphs and trend analysis.
|
||||
|
||||
|
||||
## Project Proposal Information
|
||||
|
||||
### Proposal Title
|
||||
|
||||
**AI API Evaluation Framework**
|
||||
|
||||
### Abstract
|
||||
|
||||
This project aims to develop an end-to-end AI API evaluation framework integrated into API Dash. It will provide a user-friendly interface for configuring API requests, supporting both online and offline evaluations. Online evaluations will call APIs of server-hosted models, while offline evaluations will use LoRA adapters with 4-bit quantized models for efficient storage and minimal accuracy loss. The framework will also support custom datasets, evaluation criteria, and visual result analysis through charts and tables, making AI model assessment more accessible and effective.
|
||||
|
||||
|
||||
## Detailed Description
|
||||
This project integrates an AI API evaluation framework into API Dash to assess models across text, images, audio, and video. It supports both online (API-based) and offline (LoRA adapters with 4-bit models) evaluations. Users can upload datasets, customize metrics, and visualize results through charts. Explainability features using SHAP and LIME provide insights into model decisions. The framework also tracks performance metrics and generates detailed reports for easy comparison and analysis.
|
||||
|
||||
## Screenshots
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
|
||||
|
||||
### Features to be Implemented
|
||||
|
||||
1. **AI Model Evaluation**
|
||||
- Evaluate AI models across multiple media types, including **text**, **images**, **audio**, and **video**.
|
||||
- Support offline evaluation using **LoRA adapters** with quantized models for efficient storage.
|
||||
- Provide user-selected metrics for benchmarking, including:
|
||||
- **BLEU-4** (for text)
|
||||
- **ROUGE-L** (for text)
|
||||
- **BERTScore** (for text)
|
||||
- **METEOR** (for text)
|
||||
- **PSNR** (for images and video)
|
||||
- **SSIM** (for images and video)
|
||||
- **CLIP Score** (for images)
|
||||
- **WER** (for audio)
|
||||
- Visualize score comparisons using **radar charts** for intuitive analysis.
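Among the metrics above, WER is simple enough to illustrate concretely: it is the word-level edit distance between a reference transcript and a hypothesis, normalized by the reference length. A minimal Python sketch (for illustration only, not the framework's actual implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion over six words
```

In practice the framework would use a library implementation (e.g., Hugging Face Evaluate), but the same normalization applies.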
|
||||
|
||||
2. **Custom Dataset Evaluation**
|
||||
- Allow users to upload their own datasets for evaluation.
|
||||
- Provide access to pre-defined industry-standard benchmark datasets.
|
||||
- Support various data types including text, images, and multimedia.
|
||||
- Provide an option for users to select evaluation metrics of their own choice.
|
||||
|
||||
3. **Custom Benchmark Metrics**
|
||||
- Users can customize their evaluation by choosing preferred evaluation metrics.
|
||||
- Offer flexibility to integrate additional metrics in the future.
|
||||
|
||||
4. **Explainability Integration**
|
||||
- Implement SHAP (SHapley Additive Explanations) to analyze feature importance and understand model decisions globally.
|
||||
|
||||
- Integrate LIME (Local Interpretable Model-Agnostic Explanations) for localized interpretability of individual predictions.
|
||||
|
||||
- Provide feature importance charts to show which inputs contributed most to the model's output.
|
||||
|
||||
- Use decision boundary plots to visualize how the model classifies different inputs.
|
||||
|
||||
- Implement heatmaps for images to highlight the regions that influenced predictions.
|
||||
|
||||
- Ensure transparency by helping users understand why a model made a certain decision.
|
||||
|
||||
5. **Real-Time Performance Monitoring**
|
||||
- Track latency, memory usage, and API response times.
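As a sketch of how such tracking could work (hypothetical helper names, not the actual API Dash code), a decorator can record per-call latency around each model API invocation:

```python
import time
from functools import wraps

def track_latency(history):
    """Decorator that records each call's wall-clock latency (seconds) into `history`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                history.append(time.perf_counter() - start)
        return wrapper
    return decorator

latencies = []

@track_latency(latencies)
def call_model_api(prompt):
    time.sleep(0.01)  # stand-in for a real model API call
    return f"response to {prompt!r}"

call_model_api("hello")
print(len(latencies))
```

The same pattern extends to memory sampling and response-time percentiles on the backend.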
|
||||
|
||||
6. **Reporting and Export**
|
||||
- Generate detailed reports in PDF or CSV.
|
||||
- Provide comparison summaries for different models.
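The CSV export path can be sketched as follows (the result format shown here is an assumption for illustration):

```python
import csv
import io

def export_comparison_csv(results):
    """Write per-model metric scores to CSV; one row per model, one column per metric."""
    metrics = sorted({m for r in results for m in r["scores"]})
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["model"] + metrics)
    for r in results:
        writer.writerow([r["model"]] + [r["scores"].get(m, "") for m in metrics])
    return buf.getvalue()

report = export_comparison_csv([
    {"model": "model-a", "scores": {"BLEU-4": 0.31, "ROUGE-L": 0.52}},
    {"model": "model-b", "scores": {"BLEU-4": 0.28, "ROUGE-L": 0.55}},
])
print(report.splitlines()[0])  # header row
```

PDF generation would follow the same data shape, rendered through a reporting library instead of `csv`.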
|
||||
|
||||
### Tools and Tech Stack
|
||||
- **Backend:** FastAPI (Python)
|
||||
- **Frontend:** Flutter (Dart)
|
||||
- **ML Libraries:** Hugging Face Transformers, Evaluate
|
||||
- **Visualization:** Matplotlib, Plotly
|
||||
- **Explainability:** SHAP, LIME
|
||||
- **Database:** MongoDB
|
||||
|
||||
|
||||
## Week Project Timeline
|
||||
|
||||
### Week 1-2: Community Bonding and Planning
|
||||
- Engage with mentors and the community to understand project expectations.
|
||||
- Finalize project requirements and milestones.
|
||||
- Set up development environment (FastAPI, Flutter, MongoDB).
|
||||
- Research evaluation metrics and LoRA adapters for offline evaluation.
|
||||
- Design database schema and API endpoints.
|
||||
|
||||
|
||||
### Week 3: Initial API Evaluation Setup
|
||||
- Implement API integration for online model evaluation.
|
||||
- Develop backend routes using FastAPI.
|
||||
- Establish connection with server-hosted models for API evaluation.
|
||||
|
||||
|
||||
### Week 4: Offline Model Evaluation
|
||||
- Implement offline model evaluation using LoRA adapters with 4-bit quantized models.
|
||||
- Test model loading and performance in offline mode.
|
||||
- Ensure accuracy is maintained within an acceptable range.
|
||||
|
||||
|
||||
### Week 5: Media Type Support and Metrics Integration
|
||||
- Implement support for different media types: **text, images, audio, and video**.
|
||||
- Integrate benchmarking using metrics like BLEU-4, ROUGE-L, CLIP Score, PSNR, and WER.
|
||||
- Develop functions to compute and compare model performance.
|
||||
|
||||
|
||||
### Week 6: Custom Dataset and Metric Selection
|
||||
- Implement dataset upload functionality.
|
||||
- Provide options for users to select pre-defined benchmark datasets.
|
||||
- Enable users to customize their evaluation by choosing preferred metrics.
|
||||
|
||||
|
||||
### Week 7: Explainability Integration - SHAP and LIME
|
||||
- Implement SHAP for global interpretability and LIME for local interpretability.
|
||||
- Generate feature importance scores and visual explanations.
|
||||
- Develop feature importance charts, decision boundary plots, and heatmaps.
|
||||
|
||||
|
||||
### Week 8: Real-Time Monitoring
|
||||
- Implement API to monitor latency, memory usage, and response time.
|
||||
- Build a backend system to collect and store performance data.
|
||||
- Display real-time monitoring results on the frontend.
|
||||
|
||||
|
||||
### Week 9: Reporting and Export
|
||||
- Develop a reporting module to generate detailed reports in PDF and CSV formats.
|
||||
- Provide performance summaries and evaluation comparisons.
|
||||
- Ensure clear and professional report formatting.
|
||||
|
||||
|
||||
### Week 10: Frontend Development
|
||||
- Build an intuitive Flutter-based UI for API Dash.
|
||||
- Design forms for API configuration and dataset selection.
|
||||
- Implement dynamic result visualization using radar charts, graphs, and tables.
|
||||
|
||||
|
||||
### Week 11: Testing and Optimization
|
||||
- Conduct unit tests and integration tests across all modules.
|
||||
- Perform end-to-end testing to ensure smooth API interactions.
|
||||
- Optimize code for efficiency and reliability.
|
||||
- Fix bugs and address feedback.
|
||||
|
||||
|
||||
### Week 12: Documentation and Final Submission
|
||||
- Write detailed user and developer documentation.
|
||||
- Provide setup and usage instructions.
|
||||
- Create demo videos and presentations.
|
||||
- Deploy the application using FastAPI and Flutter.
|
||||
- Submit the final project and gather feedback.
|
||||
|
||||
|
||||
## Conclusion
|
||||
This AI API Evaluation Framework will simplify model evaluation for developers, researchers, and organizations. By providing explainability, real-time metrics, customizable benchmarking, and comprehensive reporting, it will ensure efficient AI model assessment and decision-making.
|
||||
|
|
||||
### About
|
||||
|
||||
1. Full Name - Mohammed Ayaan
|
||||
2. Contact info (email, phone, etc.) - ayaan.md.blr@gmail.com, 99025 87579
|
||||
3. Discord handle - ayaan.md
|
||||
4. Home page (if any)
|
||||
5. Blog (if any)
|
||||
6. GitHub profile link - https://github.com/ayaan-md-blr
|
||||
7. Twitter, LinkedIn, other socials - https://www.linkedin.com/in/md-ayaan-blr/
|
||||
8. Time zone - UTC+05:30
|
||||
9. Link to a resume - https://drive.google.com/file/d/1kICrybHZfWLkmSFGOIfv9nFpnef14DPG/view?usp=sharing
|
||||
|
||||
### University Info
|
||||
|
||||
1. University name - PES University Bangalore
|
||||
2. Program you are enrolled in (Degree & Major/Minor) - BTech (AI/ML)
|
||||
3. Year - 2023
|
||||
4. Expected graduation date - 2027
|
||||
|
||||
### Motivation & Past Experience
|
||||
|
||||
Short answers to the following questions (Add relevant links wherever you can):
|
||||
|
||||
1. **Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?**
|
||||
|
||||
No. My first FOSS contribution experience is with apidash. I have raised a PR for issue #122 (https://github.com/foss42/apidash/pull/730) and
|
||||
it was a good learning experience. I am fairly comfortable with the process now
|
||||
and look forward to contributing and working towards merging the PR in the apidash repo.
|
||||
|
||||
2. **What is your one project/achievement that you are most proud of? Why?**
|
||||
|
||||
I am proud of my self-learning journey in the AI area so far. I am equipped with considerable predictive and generative AI concepts and related tools/apis.
|
||||
I started with the perception that AI is new, exciting but extremely difficult. I overcame this challenge using multiple learning resources and balancing with
|
||||
my college academics and have been able to achieve much more than my peer group in terms of learning.
|
||||
Looking forward to learning and contributing to the open source space and add a new level to my learning journey.
|
||||
|
||||
3. **What kind of problems or challenges motivate you the most to solve them?**
|
||||
|
||||
DSA related problems challenged me the most which also pushed me to solve them. I was able to solve complex problems in trees, graphs,
|
||||
recursion which I found very interesting.
|
||||
I am also part of the avions (college club related to aviation and aerospace) where we are building working models of airplanes. It is very challenging and at the
|
||||
same time motivating to make those models from scratch and fly them.
|
||||
|
||||
4. **Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?**
|
||||
|
||||
Yes, I can contribute full-time. I don't have any other engagements since it will be my summer break.
|
||||
|
||||
5. **Do you mind regularly syncing up with the project mentors?**
|
||||
|
||||
Definitely not. This is the opportunity I am looking forward to, where I can work with bright minds and gain guidance and knowledge. I would be available for
|
||||
any form of communication as required by the project.
|
||||
|
||||
6. **What interests you the most about API Dash?**
|
||||
|
||||
The simplicity of the Git repo attracted me to this project. It is very easy to understand and very well written.
|
||||
|
||||
7. **Can you mention some areas where the project can be improved?**
|
||||
|
||||
Developer documentation w.r.t. the components, system design, best practices, coding standards, and testing standards will increase the productivity of contributors.
|
||||
I also feel the look and feel of the user interface can be improved, both to make it more attractive and to enhance usability.
|
||||
|
||||
### Project Proposal Information
|
||||
|
||||
**1. Proposal Title** - AI UI Designer for APIs (#617)
|
||||
|
||||
**2. Abstract:**
|
||||
Develop an AI Agent which transforms API responses into dynamic, user-friendly UI components, enabling developers to visualize and interact with data effortlessly.
|
||||
I plan to address this by building a new component, ai_ui_agent, which uses Ollama models suitable for codegen (likely CodeLlama or DeepSeek) to generate the Flutter
|
||||
widgets which can be plugged into the API Dash UI. We can use the third-party package fl_chart for chart generation.
|
||||
|
||||
**3. Detailed Description**
|
||||
A rough ui mockup can be as below.
|
||||
This popup will be rendered on click of the "data analysis" button on the response widget.
|
||||
The default view of the popup can have thumbnails based on the visualizations applicable to the API response.
|
||||
(Example prompt - List the charts to analyze the data in the given json)
|
||||
On selection of each item in the drop-down, the corresponding chart with customizations can be displayed.
|
||||
An export link/button can be provided on this popup which will export the Flutter component as a zip file.
|
||||

|
||||
|
||||
To implement this we need to carry out the below tasks in order -
|
||||
**Task1: LLM model evaluation and prompt design**
|
||||
Evaluate the Ollama supported LLMs with good code generation capability.
|
||||
We need to attempt several prompts which give us the output as required.
|
||||
We need the prompt to
|
||||
|
||||
- List the suitable widgets (data table/ chart/ card/ form) for the given json data.
|
||||
- The prompts should be fine tuned to generate different types of widgets as chosen by user.
|
||||
- The prompts should also have placeholders for customizations (Searching, sorting, custom labels in charts)
|
||||
- The prompts should be fine tuned to provide the look and feel of the apidash ui.
|
||||
- The prompts should give good performance as well as provide accuracy of output.
|
||||
At the end of this task we should have working prompts as per the requirement.
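As an illustration of what such a prompt might look like (the template below is a hypothetical starting point, not a finalized prompt):

```python
# Hypothetical prompt template for the widget-listing step; all names are illustrative.
WIDGET_LIST_PROMPT = """You are a Flutter UI assistant.
Given the JSON API response below, list the widgets (DataTable, Chart, Card, Form)
best suited to visualize it, one per line, most suitable first.

JSON response:
{json_body}
"""

def build_widget_list_prompt(json_body: str) -> str:
    """Fill the template with the raw API response body."""
    return WIDGET_LIST_PROMPT.format(json_body=json_body)

prompt = build_widget_list_prompt('[{"day": "Mon", "sales": 120}]')
print(prompt)
```

Further placeholders (chosen widget type, customization options, API Dash theme hints) would be added as the prompts are refined in this task.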
|
||||
|
||||
**Task2: Build ai_ui_agent component**
|
||||
|
||||
- Build the ai_ui_agent component in the lib folder of the repo which encapsulates both the back-end logic and the UI widgets.
|
||||
At the end of this task we expect a working component with the below structure:
|
||||
**ai_ui_agent**
- features
  - _ai_ui_agent_codegen.dart_ (contains the fine-tuned prompts for code generation)
  - _exporter.dart_ (contains the logic to export the generated Flutter widget)
- providers
  - _ai_ui_agent_providers.dart_ (holds the generated Flutter code as state, available for download)
- services
  - _ai_ui_agent_service.dart_ (invokes the Ollama service using the ollama_dart package)
- widgets
  - _ai_ui_widget.dart_ (container widget for the generated code)
  - any other widgets required for customizations/styles
- utils
  - _validate_widget.dart_ (performs basic validation/compilation to ensure the generated component can be rendered and exported successfully)
- _ai_ui_agent.dart_
|
||||
**Task3: Integrating this component with the response_pane widget**
|
||||
_screens/home_page/editor_pane/details_card/response_pane.dart_
|
||||
(Add a new button named "Data Analysis"; on click, render the ai_ui_widget in a popup.)
|
||||
**Task4: Writing unit and integration tests**
|
||||
**Task5: Perform functional testing with different APIs and response formats.**
|
||||
This will be crucial to ensure it works with different APIs and different JSON structures.
|
||||
This task may involve fine-tuning/fixing the prompts as well.
|
||||
**Task6: Updating the dev guide and user guide**
|
||||
|
||||
## 4. Week Project Timeline
|
||||
|
||||
### Week 1: Community Bonding and project initiation
|
||||
|
||||
- Engage with mentors and the community to understand project expectations.
|
||||
- Finalize project requirements and milestones.
|
||||
- Set up development environment (Ollama, Flutter, APIDash).
|
||||
- **Outcome**: Working APIDash application, Working Ollama setup.
|
||||
|
||||
### Week 2-3: Task1: Evaluate Ollama codegen model and prompts creation
|
||||
|
||||
- Use sample JSON responses as input to the Ollama model and develop basic prompts to generate Flutter chart components.
|
||||
- Test the generated Flutter components for fit with API Dash standards.
|
||||
- Document observations and gather mentor feedback.
|
||||
- Enhance the initial prompts: add customization placeholders and apply API Dash-specific styles/themes.
|
||||
- Repeat this step and finalize the expectations with the mentor.
|
||||
- **Outcome**: Finalized prompts to use for ai_ui_agent
|
||||
|
||||
### Week 4-5: Task2: Build ai_ui_agent
|
||||
|
||||
- Code backend using the prompts and models from Task1.
|
||||
- Plan and implement unit/component tests for backend.
|
||||
- **Outcome** - ai_ui_agent_codegen.dart, ai_ui_agent_providers.dart, ai_ui_agent_service.dart
|
||||
|
||||
### Week 6: Task3: ui components, exporter.
|
||||
|
||||
- Code front end components and configuration (eg: fl_chart)
|
||||
- Plan and implement unit tests for ui widgets.
|
||||
- Implement code to export the generated component.
|
||||
- Plan and implement unit tests for exporter.
|
||||
- **Outcome** - ai_ui_widget.dart, screens/home_page/editor_pane/details_card/response_pane.dart, exporter.dart
|
||||
|
||||
### Week 7-8: Task4: Unit and integration testing
|
||||
|
||||
- Enhance the tests written in Weeks 4-6 to increase code coverage and to cover negative scenarios and corner cases.
|
||||
- Implement integration tests and capture basic performance metrics.
|
||||
- **Outcome** - Unit test dart files, code coverage report
|
||||
|
||||
### Week 9: Task5: Functional testing
|
||||
|
||||
- Run manual end-to-end tests with different APIs and response formats.
|
||||
- **Outcome**: Bug fixes, Prompt Tuning.
|
||||
|
||||
### Week 10: Task6: Wrap up
|
||||
|
||||
- Final demo and mentor feedback.
|
||||
- Update the dev guide, user guide and other documents.
|
||||
- Create demo videos and presentations.
|
doc/proposals/2025/gsoc/application_srinath_dashbot.md
|
||||
# GSoC Proposal: DashBot - AI-Powered API Assistant for API Dash
|
||||
|
||||
|
||||
## About
|
||||
|
||||
1. **Full Name**: Vennapusa Srinath Reddy
|
||||
2. **Email**: srinathreddy0115@gmail.com
|
||||
3. **Phone-no**: +91-7569756336
|
||||
4. **Discord Handle**: srinath15
|
||||
5. **Home Page**: [srinathreddy.netlify.app](https://srinathreddy.netlify.app/)
|
||||
6. **Blog**: [sidduverse.notion.site/Acoustic-Echo-Cancellation](https://sidduverse.notion.site/Acoustic-Echo-Cancellation-175c6a02985880a79be4e68b56eaee51?pvs=4)
|
||||
7. **GitHub Profile Link**: [github.com/siddu015](https://github.com/siddu015/)
|
||||
8. **Twitter**: [x.com/siddu1501](https://x.com/siddu1501)
|
||||
9. **LinkedIn**: [linkedin.com/in/srinath-reddy-0a57a224b](https://www.linkedin.com/in/srinath-reddy-0a57a224b/)
|
||||
10. **Time Zone**: Indian Standard Time (IST, UTC+5:30)
|
||||
11. **Link to a Resume**: [Resume](https://drive.google.com/file/d/1zF6JrxVozYWZDKSXHUUzcVNbEc91XUoD/view?usp=sharing)
|
||||
|
||||
## University Info
|
||||
- **University Name**: Reva University
|
||||
- **Program**: B.Tech in Computer Science and Engineering (Artificial Intelligence and Data Science)
|
||||
- **Year**: 3rd Year (Started in 2022)
|
||||
- **Expected Graduation Date**: June 2026
|
||||
|
||||
## Motivation & Past Experience
|
||||
|
||||
1. **Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?**
|
||||
Yes, I've contributed to DashBot for API Dash during FOSS Hack 2025. Over the past month, I've worked on its initial development and submitted several pull requests to the [API Dash repository](https://github.com/foss42/apidash). Relevant contributions include:
|
||||
- Issue opened for ChatBot: [#605](https://github.com/foss42/apidash/issues/605)
|
||||
- FOSS Hack PR for ChatBot: [#608](https://github.com/foss42/apidash/pull/608)
|
||||
- Initial draft PR for DashBot: [#641](https://github.com/foss42/apidash/pull/641)
|
||||
- Recent PR for modified DashBot version: [#699](https://github.com/foss42/apidash/pull/699)
|
||||
|
||||
2. **What is your one project/achievement that you are most proud of? Why?**
|
||||
I'm most proud of *LaughLab*, a personalized meme suggestion platform I built. The idea was to integrate a meme recommendation system with a user's keyboard, suggesting memes as they type based on their preferences, with a database that adapts over time. Check out the repo: [LaughLab](https://github.com/siddu015/LaughLab). I'm proud of this because it won 2nd place at E-Summit 2024 at Dayananda Sagar College—it was a fun and innovative challenge.
|
||||
|
||||
3. **What kind of problems or challenges motivate you the most to solve them?**
|
||||
I'm motivated by meaningful technical challenges that push me to learn something new. I thrive on solving problems involving innovative features or complex logic, even if I only partially solve them. While I'm decent at UI/UX for usability, my passion lies in the technical backend—building things that work under the hood.
|
||||
|
||||
4. **Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?**
|
||||
My 6th semester ends on June 7th, 2025, after which I'll work on GSoC full-time. Until then, I'll dedicate my time to detailed project planning, researching optimal implementation strategies, and discussing ideas with mentors to ensure a strong start.
|
||||
|
||||
5. **Do you mind regularly syncing up with the project mentors?**
|
||||
Not at all—I enjoy collaborating and value mentor feedback. Regular sync-ups keep me aligned and help me improve my work continuously.
|
||||
|
||||
6. **What interests you the most about API Dash?**
|
||||
API Dash's open-source nature hooked me. As someone who uses APIs daily in personal and work projects, I've relied on tools like Postman but always wondered how they function internally. Discovering API Dash at FOSS Hack 2025 gave me that insight and sparked my interest. I'm excited to contribute meaningfully to a tool I'd use myself.
|
||||
|
||||
7. **Can you mention some areas where the project can be improved?**
|
||||
I see huge potential in enhancing API Dash through DashBot. Having developed initial features (e.g., response explanation, debugging), I believe DashBot can be fine-tuned and fully integrated into API Dash's architecture. This would enable more accurate, context-aware assistance and support personalized, AI-driven workflows using local models—making API Dash a smarter, user-centric tool.
|
||||
|
||||
## Project Proposal Information
|
||||
|
||||
### 1. Proposal Title
|
||||
**DashBot - AI-Powered API Assistant for API Dash**
|
||||
|
||||
### 2. Abstract
|
||||
DashBot aims to transform API Dash into an intelligent, AI-driven API exploration and development tool. By integrating advanced AI capabilities, we'll create a comprehensive assistant that helps developers understand, debug, document, and implement APIs more efficiently.
|
||||
|
||||
### 3. Detailed Description
|
||||
- **Problem**: API Dash users manually handle debugging, testing, and documentation, slowing workflows. As an early-stage tool, it lacks AI-driven automation.
|
||||
- **Project Goals**:
|
||||
  - Develop an intelligent, modular AI assistant for API interactions
|
||||
  - Provide context-aware API analysis and support
|
||||
  - Create a flexible, extensible AI service architecture
|
||||
  - Enhance developer productivity through intelligent insights
|
||||
|
||||
|
||||
|
||||
- **Technical Architecture**
|
||||
Core Components
|
||||
|
||||
| Service | Key Features | Capabilities |
|
||||
|---------|--------------|--------------|
|
||||
| AI Analysis Service | - Semantic API request parsing | - Contextual understanding |
|
||||
| | - Multi-model AI integration | - Intelligent insights generation |
|
||||
| Debugging Service | - Advanced error pattern recognition | - Root cause analysis |
|
||||
| | - Automated fix suggestions | - Performance bottleneck detection |
|
||||
| Documentation Generator | - Automatic API documentation | - Comprehensive endpoint description |
|
||||
| | - Example generation | - Interactive documentation support |
|
||||
| Code Generation Service | - Multi-framework code generation | - Intelligent client code creation |
|
||||
| | - Framework-specific best practices | - Customizable generation templates |
|
||||
| Visualization Service | - Interactive response explorers | - API performance charts |
|
||||
| | - Network flow visualizations | - Data transformation insights |
|
||||
|
||||
<img width="1200" alt="Screenshot 2025-03-25 at 10 00 45" src="https://github.com/user-attachments/assets/b12b488b-612d-4ca3-8b8e-be47ba59a123" />
|
||||
|
||||
**LLM Provider Management**
|
||||
- Abstracted LLM provider interface
|
||||
- Multiple provider support
|
||||
- Local Ollama models
|
||||
- Cloud AI services (OpenAI, Anthropic, other APIs)
|
||||
- Dynamic model selection
|
||||
- Resource-aware model recommendations
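The abstracted provider interface described above could be sketched as follows (Python for illustration; the names are hypothetical and the real implementation would live in the Flutter codebase):

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Abstracted LLM provider interface; concrete providers wrap Ollama or cloud APIs."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(LLMProvider):
    """Stand-in provider for testing; a real one would call a model endpoint."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def pick_provider(local_available: bool, providers: dict) -> LLMProvider:
    # Dynamic selection: prefer a local model when resources allow, else fall back to cloud.
    return providers["local"] if local_available else providers["cloud"]

providers = {"local": EchoProvider(), "cloud": EchoProvider()}
bot = pick_provider(True, providers)
print(bot.complete("ping"))
```

Swapping providers then requires no change to the services that consume the interface.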
|
||||
|
||||
### 4. Weekly Timeline (175 Hours, ~12 Weeks)
|
||||
|
||||
| Week | Duration | Focus Area | Key Activities |
|
||||
|------|----------|------------|----------------|
|
||||
| 1 | 15h | Bonding & Setup | Project initialization, mentor sync, environment setup |
|
||||
| 2 | 15h | Beta Polish | Finalize initial features, basic debugging, documentation |
|
||||
| 3-4 | 30h | Advanced Debugging | Auto-debugging implementation, comprehensive test generation |
|
||||
| 5-7 | 45h | Visualizations | Plotting system development, response visualizations, customization |
|
||||
| 8-9 | 30h | Frontend Code | Multi-framework code generation, API testing, response handling |
|
||||
| 10 | 15h | Local LLM Integration | DashBot local model setup, Ollama integration, model selection |
|
||||
| 11 | 15h | LLM Enhancements | Computational power optimization, DashBot toggle functionality |
|
||||
| 11 | 15h | Benchmarks & UI | LLM evaluation, UI improvements, model compatibility testing |
|
||||
| 12 | 10h | Testing & Wrap-Up | Comprehensive end-to-end testing, documentation finalization |
|
AI UI Designer for APIs
|
||||
|
||||
### Abstract
|
||||
|
||||
This project proposes the development of an AI-powered UI generation assistant within the API Dash application. The tool will automatically analyze API responses (primarily in JSON format), infer their structure, and dynamically generate Flutter-based UI components such as tables, forms, or cards. Developers will be able to preview, customize, and export these layouts as usable Dart code. By combining rule-based heuristics with optional LLM (e.g., Ollama, GPT) enhancements, the feature aims to streamline API data visualization and speed up frontend prototyping. The generated UI will be clean, modular, and directly reusable in real-world Flutter applications.
|
||||
|
||||
---
|
||||
|
||||
### Detailed Description
|
||||
|
||||
This project introduces a new feature into API Dash: AI UI Designer — an intelligent assistant that takes an API response and converts it into dynamic UI components, allowing developers to quickly visualize, customize, and export frontend code based on live API data. It will analyze the data and suggest corresponding UI layouts using Dart/Flutter widgets such as `DataTable`, `Card`, or `Form`.
|
||||
|
||||
#### Step 1: Parse API Response Structure
|
||||
|
||||
The first step is to understand the structure of the API response, which is usually in JSON format. The goal is to transform the raw response into an intermediate schema that can guide UI generation.
|
||||
|
||||
- Most API responses are either:
|
||||
- Object: A flat or nested key-value map.
|
||||
- Array of Objects: A list of items, each following a similar structure.
|
||||
- Understanding the structure allows us to decide:
|
||||
- What kind of UI component fits best (e.g., table, form, card).
|
||||
- How many fields to show, and how deep the nesting goes.
|
||||
- Common field types (string, number, boolean, array, object) impact widget selection.
|
||||
- Special patterns (e.g., timestamps, emails, URLs) can be detected and used to enhance UI.
|
||||
##### Implementation Plan:

- Start with JSON
  - Initially support only JSON input, as it's the most common format.
  - Use Dart's built-in `dart:convert` package to parse the response.
- Build a Recursive Schema Parser
  - Traverse the JSON response recursively.
  - For each node (key), determine:
    - Type: string, number, bool, object, array
    - Optional metadata (e.g., nullability, format hints)
    - Depth and parent-child relationships
  - Output a tree-like structure such as:
```json
{
  "type": "object",
  "fields": [
    {"key": "name", "type": "string"},
    {"key": "age", "type": "number"},
    {"key": "profile", "type": "object", "fields": [...]},
    {"key": "posts", "type": "array", "itemType": "object", "fields": [...]}
  ]
}
```
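The recursive traversal that produces such a tree can be sketched as follows. This is an illustrative Python sketch (the actual implementation would be a Dart class built on `dart:convert`); the function name and output keys are assumptions, not final API:

```python
# Sketch of the recursive schema parser. The bool check must come before the
# number check, since bool is a subtype of int in Python (and a distinct case
# in Dart as well).

def parse_schema(value, depth=0):
    """Convert a decoded JSON value into a schema-like tree."""
    if isinstance(value, dict):
        return {
            "type": "object",
            "depth": depth,
            "fields": [
                {"key": k, **parse_schema(v, depth + 1)} for k, v in value.items()
            ],
        }
    if isinstance(value, list):
        # Assume homogeneous arrays; mixed types would trigger the validation fallback.
        item = parse_schema(value[0], depth + 1) if value else {"type": "unknown"}
        return {"type": "array", "depth": depth, "itemType": item["type"], "item": item}
    if isinstance(value, bool):
        return {"type": "bool", "depth": depth}
    if isinstance(value, (int, float)):
        return {"type": "number", "depth": depth}
    if value is None:
        return {"type": "null", "depth": depth}
    return {"type": "string", "depth": depth}

schema = parse_schema({"name": "Ada", "age": 36, "posts": [{"title": "Hi"}]})
```

Tracking `depth` at parse time makes it cheap later to cap UI nesting or switch to collapsible views for deeply nested responses.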
- Detect Patterns (Optional AI Help)
  - Apply heuristics or regex to detect:
    - Timestamps: ISO strings, epoch time
    - Prices: numeric values plus currency signs
    - Boolean flags: `isActive`, `enabled`, etc.
  - This helps in choosing smart widgets (e.g., a `Switch` for booleans).
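A minimal sketch of such heuristics, in Python for illustration (the regexes, hint names, and key-hint list are assumptions and would be tuned in the real Dart implementation):

```python
import re

# Hypothetical format-hint detection for schema fields.
PATTERNS = {
    "timestamp": re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}"),  # ISO 8601
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "url": re.compile(r"^https?://"),
}

BOOL_KEY_HINTS = ("is_", "has_", "enabled", "active")

def detect_hint(key, value):
    """Return a format hint for a field, or None if nothing matches."""
    if isinstance(value, str):
        for hint, pattern in PATTERNS.items():
            if pattern.match(value):
                return hint
    if isinstance(value, bool) or any(h in key.lower() for h in BOOL_KEY_HINTS):
        return "flag"
    return None
```

These hints would be stored as the "optional metadata" on each schema node and consulted during widget selection.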
- Create a Schema Class
  - Implement a Dart class (e.g., `ParsedSchema`) to store this structure.
  - This class will be passed into the UI generation logic in Step 2.
- Add Support for Validation
  - Check whether the response is malformed or inconsistent (e.g., arrays with mixed types).
  - If invalid, show a fallback UI or an error.
- Future Scope
  - Add XML support using XML parsers.
  - Extend the parser to allow user overrides/custom schema mapping.
#### Step 2: Design AI Agent Logic

- Use a rule-based system to map schema to UI components:
  - List of objects → Table
  - Simple object → Card/Form
  - Number over time → Line Chart (optional)
- Integrate an LLM backend (e.g., Ollama, GPT API) to enhance:
  - Field labeling
  - Layout suggestion
  - Component naming

This step involves designing the core logic that maps the parsed API response schema to corresponding UI components. The AI agent will follow a hybrid approach: combining rule-based mapping with optional LLM-powered enhancement for smarter UI suggestions.
##### 2.1 Rule-Based Mapping System

To ensure fast and consistent results, we will first implement a simple rule-based system that maps specific JSON structures to Flutter widgets. This allows us to generate a basic layout even in environments where LLMs are not available or desirable. The generator (detailed in Step 3) will dynamically produce `DataTable`, `Card`, `TextField`, `Dropdown`, and optional chart widgets (e.g., `fl_chart`), and will support layout rearrangement (form-based or drag-and-drop), field visibility toggles, and previewing the final UI.

Example rules:

- If the root is an array of objects → generate a `DataTable`
- If the object contains mostly key-value pairs → generate a `Card` or `Form`
- If fields include timestamps or numeric trends → suggest a `LineChart`
- If keys match common patterns like email, phone, price, etc. → render with appropriate widgets (`TextField`, `Dropdown`, currency formatter)

These mappings will be implemented using Dart classes and can be loaded from a YAML/JSON config file to support extensibility.
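The example rules above can be sketched as a small dispatch function. This is an illustrative Python sketch of the rule table (the widget names are real Flutter widgets, but the function shape and rule set are assumptions; in practice the rules would live in the YAML/JSON config):

```python
# Sketch of the rule-based schema-to-widget mapping.

def map_to_widget(schema):
    """Pick a top-level widget for a parsed schema node."""
    if schema["type"] == "array" and schema.get("itemType") == "object":
        return "DataTable"       # list of objects -> table
    if schema["type"] == "array":
        return "ListView"        # list of scalars -> simple scrolling list
    if schema["type"] == "object":
        return "Card"            # key-value map -> card/form
    if schema["type"] == "bool":
        return "Switch"
    return "SelectableText"      # default for strings, numbers, unknowns
```

Because the rules are pure data lookups, this mapper is deterministic, trivially testable, and works fully offline.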
##### 2.2 LLM-Powered Enhancements

To go beyond static rules and provide smarter UI suggestions, we will integrate an LLM (e.g., Ollama locally or GPT via API). The LLM will receive the parsed schema and be prompted to:

- Suggest the layout structure (vertical list, tabs, grouped cards, etc.)
- Label fields more intuitively (e.g., `product_id` → "Product ID")
- Reorder fields based on usage context
- Suggest default values, placeholder text, or icons
Prompt Example:

```json
{
  "task": "Generate UI plan for API response",
  "schema": {
    "type": "object",
    "fields": [
      {"name": "username", "type": "string"},
      {"name": "email", "type": "string"},
      {"name": "created_at", "type": "timestamp"}
    ]
  }
}
```
Expected LLM output:

```json
{
  "layout": "vertical_card",
  "fields": [
    {"label": "Username", "widget": "TextField"},
    {"label": "Email", "widget": "TextField"},
    {"label": "Signup Date", "widget": "DateDisplay"}
  ]
}
```
##### 2.3 Fallback and Configuration

- If the LLM call fails or is disabled (e.g., offline use), the system falls back to rule-based logic.
- The user can toggle LLM mode in settings.
- The LLM's response will be cached for repeat inputs to reduce latency and cost.
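The fallback-plus-cache flow can be sketched as below. This is an illustrative Python sketch; `call_llm` is a hypothetical stand-in for the Ollama/GPT client, and the cache key derivation is an assumption:

```python
import hashlib
import json

# Sketch of the LLM-with-fallback flow: cache hit -> LLM (if enabled) -> rules.
_cache = {}

def suggest_layout(schema, llm_enabled=True, call_llm=None):
    # Key the cache on the normalized schema so repeat inputs skip the LLM.
    key = hashlib.sha256(json.dumps(schema, sort_keys=True).encode()).hexdigest()
    if key in _cache:
        return _cache[key]
    plan = None
    if llm_enabled and call_llm is not None:
        try:
            plan = call_llm(schema)      # may fail offline or time out
        except Exception:
            plan = None
    if plan is None:
        # Rule-based fallback keeps the feature usable without any LLM.
        plan = {"layout": "vertical_card", "source": "rules"}
    _cache[key] = plan
    return plan
```

Caching on the schema rather than the raw response means two responses with identical shape but different values share one cached plan.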
##### 2.4 Customization Layer (Optional)

After layout generation, users will be able to:

- Preview different layout suggestions (from rule-based vs. LLM)
- Select a layout and make field-level changes (hide/show, rename, rearrange)
- Submit feedback for improving future suggestions (optional)
#### Step 3: Generate and Render UI in Flutter

Once the layout plan is decided (via rule-based mapping or LLM suggestion), the system will dynamically generate corresponding Flutter widgets based on the API response structure and content types.
##### 3.1 Widget Mapping and Construction

- For each field or group in the parsed schema, we map it to a predefined Flutter widget. Example mappings:
  - List of objects → `DataTable`
  - Simple key-value object → `Card`, or `Column` with `Text` widgets
  - String fields → `TextField` (if editable) or `SelectableText`
  - Number series over time → line chart (e.g., using the `fl_chart` package)
- The widget structure will be built using standard Dart code with `StatefulWidget` or `StatelessWidget`, depending on interactivity.

Implementation Plan:

- Create a `WidgetFactory` class that receives a layout plan and schema, and returns a `Widget` tree.
- This factory will follow a clean design pattern to make it testable and modular.
- Use Flutter's `json_serializable` or custom classes to deserialize API responses into displayable values.
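The `WidgetFactory` idea can be sketched as follows. This Python sketch emits plain dicts so the tree structure is visible; the real Dart class would return actual Flutter `Widget` objects, and the field names in the plan are assumptions:

```python
# Sketch of a widget factory: layout plan + response data -> widget tree.

class WidgetFactory:
    def build(self, plan, data):
        """Combine a layout plan with live response data into a widget tree."""
        children = []
        for field in plan["fields"]:
            children.append({
                "widget": field.get("widget", "Text"),       # default widget
                "label": field.get("label", field["key"]),   # fall back to raw key
                "value": data.get(field["key"]),             # live value from response
            })
        return {"widget": plan.get("layout", "Column"), "children": children}
```

Keeping the factory a pure function of (plan, data) makes re-rendering on any user adjustment a simple rebuild, which matches Flutter's declarative model.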
##### 3.2 Dynamic Rendering in the App

- The generated widget tree will be rendered in a dedicated "AI UI Preview" pane inside API Dash.
- The rendering will be fully dynamic: when the schema or layout changes, the UI preview updates in real time.
- This pane will support:
  - Light customization, like toggling fields, reordering, and hiding/showing
  - Live data preview using the actual API response

Technical Flow:

- When the user clicks "AI UI Designer", a modal or new route opens with the UI preview panel.
- This panel will:
  - Show the raw schema & layout (editable if needed)
  - Render the widget tree using Flutter's widget system
- Any user adjustments will re-trigger widget regeneration and re-rendering.
##### 3.3 Preview and Debugging Tools

- Add a "Developer Mode" that shows:
  - The schema tree
  - Widget mapping details
  - Generated Dart code (read-only)
- This helps with debugging and refining layout logic.
##### 3.4 Scalability Considerations

- To keep UI rendering responsive:
  - Use lazy loading for large JSON arrays (e.g., scrollable tables)
  - Avoid deep nesting: limit UI depth or use `ExpansionTile` for hierarchical views
  - Support pagination if a list is too long

By the end of this step, users should be able to preview their API response as a fully functional, dynamic UI inside API Dash — without writing a single line of Flutter code.
#### Step 4: Export UI Code

- Export the generated layout as Dart code
- Allow download or copy-to-clipboard
- Support JSON config export (optional, for a renderer-based architecture)

Once the user is satisfied with the generated and customized UI layout, the tool should allow them to export the UI as usable Flutter code, so it can be directly reused in their own projects. This step focuses on transforming the dynamic widget tree into clean, readable Dart code and offering convenient export options.
##### 4.1 Code Generation Pipeline

To generate Flutter code dynamically, we will:

- Traverse the internal widget tree (from Step 3)
- For each widget, generate corresponding Dart code using string templates
  - Example: a `DataTable` widget will generate its `DataTable` constructor and children rows
- Use indentation and formatting to ensure readability

Implementation Plan:

- Create a `CodeGenerator` class responsible for converting widget definitions into raw Dart code strings.
- Use prebuilt templates for common components: `Card`, `Column`, `DataTable`, etc.
- Handle nested widgets recursively to maintain structure.
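The recursive template expansion can be sketched as below. This is an illustrative Python sketch; the template strings are simplified assumptions, not the final Dart output, and the real `CodeGenerator` would be a Dart class:

```python
# Sketch of recursive code generation: each widget node is rendered through a
# string template, with indentation tracking nesting depth.

def generate_dart(node, indent=0):
    pad = "  " * indent
    if node["widget"] == "Card":
        # Single-child container: recurse into the child, inline its code.
        child = generate_dart(node["children"][0], indent + 1)
        return pad + "Card(child: " + child.strip() + ")"
    if node["widget"] == "Column":
        # Multi-child container: recurse into each child, one per line.
        children = ",\n".join(generate_dart(c, indent + 1) for c in node["children"])
        return pad + "Column(children: [\n" + children + "\n" + pad + "])"
    # Leaf widget.
    return pad + "Text('" + node["label"] + "')"

code = generate_dart(
    {"widget": "Column", "children": [
        {"widget": "Text", "label": "Username"},
        {"widget": "Text", "label": "Email"},
    ]}
)
```

The recursion mirrors the widget tree exactly, so valid nesting in the tree yields syntactically balanced constructor calls in the output.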
##### 4.2 Export Formats

We will support two export options:

1. Raw Dart Code Export
   - Output the generated Dart code into a text area or preview pane
   - Allow users to:
     - Copy to clipboard
     - Download as a `.dart` file
   - Highlight syntax for better UX (using a package like `highlight`)

2. Optional JSON Layout Export
   - If we implement a config-driven rendering architecture, offer an export of the layout plan/schema as JSON
   - Useful for re-importing or for use with a visual UI builder
##### 4.3 Integration into API Dash

- Add an "Export" button below the UI preview pane
- When clicked, the generated code will be shown in a modal or new tab
- Provide one-click buttons:
  - "Copy Code"
  - "Download Dart File"
  - (Optional) "Download Layout JSON"
##### 4.4 Reusability and Developer Focus

- Ensure that the exported code:
  - Is clean and idiomatic Dart
  - Can be copied directly into any Flutter project with minimal edits
  - Includes basic import statements and class wrappers if needed
- Add helpful comments in the generated code (e.g., `// This widget was generated from API response`)
##### 4.5 Challenges and Considerations

- Ensuring valid syntax across nested widgets
- Handling edge cases (e.g., empty fields, null values)
- Optionally, offering theming/styling presets to match user preferences

By the end of this step, users can instantly turn live API data into production-ready Flutter UI code, significantly reducing time spent on repetitive frontend scaffolding.
#### Step 5: Integrate into API Dash

- Add an AI UI Designer button in the API response view
- Launch the UI editing pane inside the app
- Ensure local-only, privacy-friendly execution
- Write tests, docs, and polish UX

The final step is to fully integrate the AI UI Designer into the API Dash application, so that users can seamlessly trigger UI generation from real API responses and interact with the entire pipeline — from data to UI preview to export — within the app.
##### 5.1 Entry Point in UI

We will add a new button or menu entry labeled "AI UI Designer" within the API response tab (or near the response preview area).

- When a user executes an API call and gets a JSON response:
  - A floating action button or contextual menu becomes available
  - Clicking it opens the AI UI Designer pane

Implementation Plan:

- Extend the existing response panel UI to include a trigger button
- Use `showModalBottomSheet()` or a full-screen route to launch the designer
##### 5.2 Internal Architecture and Flow

The full integration involves multiple coordinated modules:

- Trigger UI → (button click)
- JSON Parser Module (from Step 1) → convert the API response to a schema
- Mapping Logic (Step 2) → rule-based and/or LLM-assisted UI mapping
- Widget Tree Builder (Step 3) → build the live widget layout
- Preview + Export UI (Step 4) → let users customize and extract code

Each module will be built as a reusable Dart service/class, and all UI logic stays within the API Dash UI tree. We'll keep the architecture modular so the designer logic is isolated and testable.
##### 5.3 Offline / Privacy-Friendly Support

Since API Dash is a privacy-first local client, the AI agent should work entirely offline by default, using lightweight LLMs served locally via Ollama.

- If a user prefers using OpenAI or Anthropic APIs, provide optional settings to configure remote endpoints
- Set Ollama as the default backend, and wrap the LLM logic inside a service with interchangeable backends
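The interchangeable-backend service can be sketched as a small interface. This is an illustrative Python sketch (in the real project this would be a Dart abstract class); the class and method names are assumptions, and the `complete` bodies are placeholders rather than real client calls:

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Common interface so the designer never depends on a specific provider."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OllamaBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # Placeholder: a real implementation would call the local Ollama server.
        return f"[local response to: {prompt}]"

class RemoteBackend(LLMBackend):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def complete(self, prompt: str) -> str:
        # Placeholder: a real implementation would POST to the remote API.
        return f"[{self.endpoint} response to: {prompt}]"

def make_backend(settings: dict) -> LLMBackend:
    # Ollama is the privacy-friendly default; remote APIs are strictly opt-in.
    if settings.get("remote_endpoint"):
        return RemoteBackend(settings["remote_endpoint"])
    return OllamaBackend()
```

Because callers only see `LLMBackend`, swapping providers (or disabling LLMs entirely) is a configuration change rather than a code change.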
##### 5.4 User Flow Example

- User sends an API request in API Dash
- The JSON response is shown
- User clicks the "AI UI Designer" button
- The parsed structure is shown with layout suggestions
- User can preview the UI, rearrange components, and customize styles
- Once satisfied, the user clicks "Export"
- Dart code is generated and available to copy/download
##### 5.5 Tests, Documentation & Maintenance

- Add integration tests to validate:
  - Triggering and rendering behavior
  - Correct widget tree output
  - Export function accuracy
- Document:
  - Each module (parsing, mapping, UI rendering, export)
  - Developer usage guide (in `docs/`)
- Ensure all new code follows API Dash's contribution style and linting rules

By integrating into API Dash cleanly and modularly, this feature becomes a native part of the developer workflow — helping users transform any API into usable UI in seconds, without leaving the app.

---
## Weekly Timeline (Tentative)

| Week | Milestone |
|------|-----------|
| Community Bonding | Join Discord, interact with mentors, finalize approach, get feedback |
| Week 1–2 | Build and test JSON parser → generate basic schema |
| Week 3–4 | Implement rule-based UI mapper; generate simple widgets |
| Week 5–6 | Integrate initial Flutter component generator; allow basic UI previews |
| Week 7 | Midterm Evaluation |
| Week 8–9 | Add customization options (visibility, layout) |
| Week 10 | Integrate AI backend (e.g., Ollama/GPT) for suggestions |
| Week 11–12 | Add export functions (code, JSON config) |
| Week 13 | Final polish, tests, docs |
| Week 14 | Final Evaluation, feedback, and delivery |

---
| Week | Milestone |
|------|-----------|
| Community Bonding | Join Discord, introduce myself, understand API Dash architecture, finalize scope with mentors |
| Week 1 | Build recursive parser for JSON responses; test on static examples; output schema trees |
| Week 2 | Extend parser to handle nested objects, arrays, and basic pattern recognition (e.g., timestamps) |
| Week 3 | Implement rule-based schema-to-widget mapper; define mapping logic for tables, cards, forms |
| Week 4 | Design widget data model and logic for translating schema into Flutter widget trees |
| Week 5 | Develop dynamic Flutter widget generator; render `DataTable`, `Card`, `TextField`, etc. |
| Week 6 | Build basic UI preview pane inside API Dash with user interaction support (e.g., toggles) |
| Week 7 (Midterm Evaluation) | Submit code with parser + rule-based mapping + preview UI; receive mentor feedback |
| Week 8 | Add layout customization features: visibility toggles, reordering, field labels |
| Week 9 | Integrate basic Ollama-based LLM agent for field naming & layout suggestion |
| Week 10 | Abstract LLM backend to support GPT/Anthropic alternatives via API config |
| Week 11 | Implement code export: generate Dart source code, copy-to-clipboard & download options |
| Week 12 | Optional: add JSON config export; polish UX and improve error handling |
| Week 13 | Write documentation, developer setup guide, internal tests for each module |
| Week 14 (Final Evaluation) | Final review, cleanup, feedback response, and submission |

Thanks again for your time and guidance. I've already started studying the API Dash codebase and developer guide, and I'd love your feedback on this plan — does it align with your vision?

If selected, I'm excited to implement this project. If this idea is already taken, I'm open to switching to another API Dash project that fits my background.
---

*File: doc/proposals/2025/gsoc/idea_SyedAbdullah_ai-ui-designer.md*
### Initial Idea Submission

Full Name: Syed Abdullah

University name: University of Engineering and Technology, Taxila

Program you are enrolled in (Degree & Major/Minor): Graduate Software Engineer

Expected graduation date: I am an early-stage developer and new to open source.

Project Title: AI-Powered Dynamic UI Generator from API Responses

Relevant issues: #617

#### About me

Hi, I'm Syed Abdullah, a passionate software engineer with over 2 years of experience building scalable, modern software solutions. I'm a full-stack developer comfortable working across both frontend and backend, using C#, .NET Core, React, Flutter, and more. I'm also a beginner-level open-source contributor, continuously learning and giving back to the community.
Idea description:

The goal of this project is to enhance API Dash by developing an AI-driven agent that automatically transforms API responses (e.g., JSON, XML) into intuitive, dynamic UI components like tables, cards, charts, and forms. This eliminates the manual process of UI creation and helps developers interact with and visualize data effortlessly.

#### Key Features:

- Parse and understand API response structures in real time.
- Automatically generate a corresponding UI schema/model (component layout).
- Render live previews of the generated UI in the app.
- Support customization: layout templates, filters, pagination, sorting, styles.
- Export the UI code (Flutter widgets or HTML/CSS snippets) for integration in web or mobile apps.
- Extensible system with support for plugins or future rendering engines (React, Vue, etc.).
#### Approach:

1. **Phase 1** – Build a response parser module that:
   - Parses JSON/XML structures.
   - Outputs a layout schema representing UI components.

2. **Phase 2** – Implement a dynamic UI renderer:
   - Converts the layout schema into interactive Flutter or web UI.
   - Allows live preview inside API Dash.

3. **Phase 3** – Add customization tools:
   - Enable field selection, styling options, responsive layouts.
   - Add filtering/sorting controls in tables, date pickers, etc.

4. **Phase 4** – Code export and integration:
   - One-click export to Flutter widgets or reusable HTML/CSS components.
   - Optionally support importing layout templates.

This is my initial idea; kindly give me feedback and let me know whether I should move forward with the PoC. Looking forward to your thoughts. Thanks!
---
# API Explorer Wireframe

## 📌 Overview

This document presents the wireframe design for the **API Explorer** feature in API Dash. The API Explorer will allow users to:

- **Discover public APIs** across various categories.
- **View API details**, including authentication methods and sample requests.
- **Import APIs into their workspace** for seamless testing.

---
## 🎨 Wireframe Design

The wireframe includes three main sections:

### **1️⃣ Homepage (API Listing Page)**

- **🔍 Search Bar**: Users can search for APIs.
- **📂 Category Filters**: AI, Finance, Weather, etc., to filter APIs.
- **📌 API Cards**: Displays API name, short description, category, and a "View Details" button.
- **➡️ Navigation**: Clicking "View Details" opens the API Details Page.
### **2️⃣ API Details Page**

- **📌 API Name & Description**
- **🔑 Authentication Info** (API key required or not).
- **📂 API Endpoints & Sample Requests**
- **📋 "Copy API Key" Button**
- **📥 "Import API to Workspace" Button**

### **3️⃣ Sidebar (Optional)**

- **📂 Saved APIs List** (previously imported APIs).
- **⭐ Ratings & Reviews Section** (user feedback, if implemented).

---
## 🖼️ Wireframe Link

🔗 **View the wireframe on Excalidraw**:
[API Explorer Wireframe](https://excalidraw.com/#json=71K2EyrjsTEv1HXRMTRqB,iw86qFoQz9coZwkuAcXPUQ)



---

## 🚀 Next Steps

1. **Review the wireframe and suggest changes (if any).**
2. Once approved, start coding the **frontend UI** (homepage, details page, sidebar).

Looking forward to feedback! 🔥
---
### Initial Idea Submission

Full Name: Sabith Fulail

University name: Informatics Institute of Technology (IIT | Colombo, Sri Lanka)

Program you are enrolled in (Degree & Major/Minor): BSc (Hons) Computer Science (Data Science)

Year: 3rd Year

Expected graduation date: May 2026
Project Title: Adding Support for API Authentication Methods

Relevant issues:

- [#557](https://github.com/foss42/apidash/issues/557) – Pre-request and post-request scripts
- [#121](https://github.com/foss42/apidash/issues/121) – Importing from/Exporting to OpenAPI/Swagger specification
- [#337](https://github.com/foss42/apidash/issues/337) – Support for application/x-www-form-urlencoded
- [#352](https://github.com/foss42/apidash/issues/352) – Support file as request body
- [#22](https://github.com/foss42/apidash/issues/22) – JSON body syntax highlighting, beautification, validation
- [#581](https://github.com/foss42/apidash/issues/581) – Beautify JSON request body (Closed)
- [#582](https://github.com/foss42/apidash/issues/582) – Syntax highlighting for JSON request body (Closed)
- [#583](https://github.com/foss42/apidash/issues/583) – Validation for JSON request body
- [#590](https://github.com/foss42/apidash/issues/590) – Environment variable support in request body
- [#591](https://github.com/foss42/apidash/issues/591) – Environment variable support for text request body
- [#592](https://github.com/foss42/apidash/issues/592) – Environment variable support for JSON request body
- [#593](https://github.com/foss42/apidash/issues/593) – Environment variable support for form request body
- [#599](https://github.com/foss42/apidash/issues/599) – Support for comments in JSON request body
- [#600](https://github.com/foss42/apidash/issues/600) – Reading environment variables from OS environment
- [#601](https://github.com/foss42/apidash/issues/601) – Adding color support for environments
- [#373](https://github.com/foss42/apidash/issues/373) – In-app update notifications
Idea description:

This project will streamline API testing in API Dash by introducing pre/post-request scripting, robust OpenAPI/Swagger interoperability, and enhanced JSON/GraphQL editing. These changes will reduce manual effort in API debugging and improve workflow efficiency.

Implementation Plan

Phase 1: Research & Planning (Weeks 1–2)

- Study the existing API Dash architecture and feature requests.
- Prioritize features based on complexity and impact.
- Research best practices for JSON syntax validation, GraphQL handling, and API import/export.
Phase 2: Core Feature Development (Weeks 3–10)

1. Pre-Request & Post-Request Scripts (#557)
   - Enable users to modify requests and responses dynamically before sending. This includes automating tasks such as adding authentication tokens, handling environment variables, chaining API requests, and transforming request/response data.

2. OpenAPI/Swagger Import & Export (#121)
   - Allow importing API requests from OpenAPI/Swagger JSON/YAML files.
   - Implement API export functionality to generate valid OpenAPI specifications.
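The core of OpenAPI import is a walk over the spec's `paths` object, emitting one request entry per HTTP method. A rough Python sketch under stated assumptions (the `servers`/`paths`/`summary` fields follow the OpenAPI 3 spec, but the output shape is an assumption about API Dash's internal request model):

```python
# Sketch: OpenAPI 3 spec (already decoded from JSON/YAML) -> request entries.

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def import_openapi(spec):
    base = spec.get("servers", [{}])[0].get("url", "")  # first server, if any
    requests = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if method.lower() not in HTTP_METHODS:
                continue  # skip non-operation keys like "parameters"
            requests.append({
                "name": op.get("summary", f"{method.upper()} {path}"),
                "method": method.upper(),
                "url": base + path,
            })
    return requests

reqs = import_openapi({
    "servers": [{"url": "https://api.example.com"}],
    "paths": {"/users": {"get": {"summary": "List users"}}},
})
```

Export would invert this walk: traverse saved requests and emit a `paths` object, which is why keeping the internal model close to the spec's vocabulary pays off.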
3. JSON Body Enhancements (#22)
   - Add syntax highlighting, beautification, and validation for JSON request bodies.
   - Provide auto-formatting and error detection for malformed JSON.

4. GraphQL Editor Improvements
   - Add an expand/collapse feature for GraphQL queries.
   - Implement support for GraphQL fragments, mutations, and subscriptions.
   - Improve GraphQL schema inspection.

5. Support for More Content Types (#337)
   - Add support for `application/x-www-form-urlencoded` and file upload as request body.
Phase 3: Enhancements & Testing (Weeks 11–14)

6. Environment Variable & UI Improvements (#600, #601)
   - Allow reading OS environment variables directly.
   - Introduce color-coded environments (e.g., red for Prod, green for Dev).

7. In-App Update Notifications (#373)
   - Notify users when a new version of API Dash is available.
   - Provide an update button to quickly navigate to the latest release.

8. Increase Test Coverage
   - Write more widget & integration tests to improve code coverage.
   - Ensure major UI and backend features are fully tested before release.
Tech Stack & Tools

| Feature | Tech/Tools |
|---------|-----------|
| Frontend | Flutter (Dart) |
| API Parsing | OpenAPI, Swagger |
| JSON Enhancements | CodeMirror, Ace Editor |
| GraphQL | GraphQL Parser (Dart) |
| Testing | Widget Testing, Integration Testing |
| Environment Handling | OS Environment Variables (Dart) |
Why This Project?

- Enhances Developer Productivity – improves usability with better request handling, scripting, and JSON validation.
- Better GraphQL Support – adds crucial missing features to enhance GraphQL development.
- Improves API Import/Export – makes API Dash more interoperable with OpenAPI and Swagger.
- Strengthens Stability & Testing – increases test coverage and enhances debugging efficiency.

These improvements will help make API Dash more competitive with other API tools by adding support for advanced use cases such as authentication management, JSON syntax validation, and seamless GraphQL integration.
Future Scope

- Implement gRPC support to expand API Dash's capabilities.
- Improve UI/UX for a better user experience.
- Add VS Code & JetBrains integration for a seamless developer workflow.

This project will provide meaningful improvements to API Dash and enhance the overall user experience. I am excited to work on these features and contribute to making API Dash a more powerful tool!
---
1. Full Name: MD. Rafsanul Islam Neloy
2. Email: rafsanneloy@gmail.com
3. Phone: +880 1325161428
4. Discord handle: rafsanneloy (756821234259460157)
5. GitHub: https://github.com/RafsanNeloy
6. LinkedIn: https://www.linkedin.com/in/md-rafsanul-neloy
7. Time zone: GMT +6 (Bangladesh)
8. Resume: https://drive.google.com/file/d/1_7YC1meQ0juyK80Bvp4A_9bmbfKqZcB7/view?usp=drive_link

### University Info

1. University name: Ahsanullah University Of Science & Technology
2. Program you are enrolled in (Degree & Major/Minor): B.Sc in CSE
3. Year: 4th (Final Year)
4. Expected graduation date: 14/05/2026
### Motivation & Past Experience

### **Short Answers:**

1. **Have you worked on or contributed to a FOSS project before?**
   No, I haven't contributed to a FOSS project before, but I'm eager to start with this GSoC opportunity.

2. **What is your one project/achievement that you are most proud of? Why?**
   One of my proudest achievements is developing an **Angry Birds game using iGraphics**. This project pushed me to deeply understand physics-based simulations, collision detection, and game mechanics. It was particularly rewarding because I had to optimize performance while maintaining smooth gameplay, and it solidified my problem-solving skills in real-time rendering.

3. **What kind of problems or challenges motivate you the most to solve them?**
   I am most motivated by challenges that involve **performance optimization, real-time data processing, and system scalability**. Whether it's reducing execution time, handling large-scale data efficiently, or ensuring seamless communication in distributed systems, I find solving such problems both intellectually stimulating and rewarding.

4. **Will you be working on GSoC full-time?**
   Yes, I plan to dedicate my full time to GSoC. I want to immerse myself in the project, actively contribute to discussions, and ensure high-quality deliverables.

5. **Do you mind regularly syncing up with the project mentors?**
   Not at all! Regular sync-ups are essential for feedback and guidance. I believe structured discussions will help me align with project expectations, identify potential roadblocks early, and ensure smooth progress.

6. **What interests you the most about API Dash?**
   What excites me the most about API Dash is its **cross-platform support and extensibility**. The idea of having a unified API testing tool that supports multiple protocols across desktop and mobile platforms is fascinating. Additionally, the opportunity to work on **real-time protocols like WebSocket, SSE, MQTT, and gRPC** aligns perfectly with my interests in high-performance systems.

7. **Can you mention some areas where the project can be improved?**
   - **Real-time Collaboration:** Allow users to share and test APIs collaboratively in real time.
   - **Performance Benchmarking:** Add API request performance insights, such as latency breakdowns and server response analytics.
   - **Protocol-Specific Debugging Tools:** Enhance error reporting with detailed logs and debugging suggestions for WebSocket, SSE, MQTT, and gRPC failures.
   - **Mobile UI Optimization:** Improve API Dash's UX on mobile devices, ensuring a seamless experience on touch interfaces.

These improvements can make API Dash an even more powerful tool for developers working on modern applications! 🚀
### Key Points
|
||||
- It seems likely that adding support for WebSocket, SSE, MQTT, and gRPC in API Dash will enhance its capabilities for real-time and high-performance API testing.
|
||||
- The project involves designing the core library architecture, understanding protocol specifications, and implementing testing, visualization, and code generation features.
|
||||
- Research suggests that this will benefit developers working on modern applications, especially in web, IoT, and microservices, by providing a unified tool.
|
||||
|
||||
---
|
||||
|
||||
### Introduction to API Dash and Project Scope

API Dash is an open-source, cross-platform API client built with Flutter, supporting macOS, Windows, Linux, Android, and iOS. It currently allows developers to create, customize, and test HTTP and GraphQL API requests, with features like response visualization and code generation in multiple programming languages. This project aims to extend API Dash by adding support for testing, visualization, and integration code generation for WebSocket, Server-Sent Events (SSE), Message Queuing Telemetry Transport (MQTT), and gRPC protocols.

These protocols are crucial for real-time communication and efficient data exchange, used in applications ranging from web and mobile to Internet of Things (IoT) devices and microservices. By integrating them, API Dash will become a more versatile tool, catering to a broader range of developer needs.

### Project Details and Implementation

The project involves several key steps:

- **Research and Specification Analysis**: Understand the specifications of WebSocket, SSE, MQTT, and gRPC to ensure correct implementation of their communication patterns.
- **Architecture Design**: Design the core library to integrate these protocols, ensuring modularity and compatibility with existing features.
- **Implementation**: Develop protocol handlers using Dart libraries (e.g., `web_socket_channel` for WebSocket, `mqtt_client` for MQTT, `grpc` for gRPC), create user interfaces with Flutter, and extend visualization and code generation features.
- **Testing and Validation**: Write unit and integration tests, test with real-world scenarios, and gather community feedback.
- **Documentation**: Update API Dash documentation with guides and examples for the new protocols.

Each protocol will have specific features:

- **WebSocket**: Support connection establishment, sending/receiving text and binary messages, and real-time visualization.
- **SSE**: Enable connecting to endpoints, displaying incoming events with data and type, and handling automatic reconnection.
- **MQTT**: Allow connecting to brokers, subscribing/publishing to topics, and managing QoS levels and connection status.
- **gRPC**: Import .proto files, select services/methods, input parameters, and display responses, initially focusing on unary calls with potential for streaming.
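To make the SSE behavior above concrete, here is an illustrative Python sketch (not the Dart implementation API Dash would use) of the event-stream wire format the handler must parse. It covers only the `event:`, `data:`, and comment fields of the spec; `id:` and `retry:` are omitted for brevity:

```python
def parse_sse_stream(raw: str):
    """Parse a raw SSE stream into (event_type, data) tuples.

    Simplified rules: fields are `name: value` lines, events are
    separated by blank lines, multiple `data:` lines are joined with
    newlines, and the default event type is "message".
    """
    events = []
    event_type, data_lines = "message", []
    for line in raw.splitlines() + [""]:  # trailing "" flushes the last event
        if line == "":
            if data_lines:  # dispatch only if some data was accumulated
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
        elif line.startswith(":"):
            continue  # comment line, often used as a keep-alive
        elif line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
    return events

stream = "event: update\ndata: 42\n\ndata: hello\ndata: world\n\n"
print(parse_sse_stream(stream))
# → [('update', '42'), ('message', 'hello\nworld')]
```

The real handler would additionally track `id:` for resumption and honor `retry:` intervals, as noted in the SSE feature list above.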

### Expected Outcomes and Benefits

Upon completion, API Dash will offer full support for testing these protocols, intuitive user interfaces, advanced visualization tools, and code generation in languages like JavaScript, Python, and Java. This will benefit developers by providing a unified tool for diverse API interactions, enhancing productivity and application quality, especially for real-time and high-performance systems.

The project will also ensure cross-platform compatibility, which is crucial for mobile and desktop users and can expand API Dash's user base.

---
### Survey Note: Detailed Analysis of API Testing Support Expansion in API Dash

This note provides a comprehensive analysis of the proposed project to extend API Dash, an open-source API client built with Flutter, by adding support for WebSocket, Server-Sent Events (SSE), Message Queuing Telemetry Transport (MQTT), and gRPC protocols. The project aims to enhance testing, visualization, and integration code generation capabilities, catering to modern application development needs.

#### Background and Context

API Dash, available at the [GitHub Repository](https://github.com/foss42/apidash), is designed for cross-platform use, supporting macOS, Windows, Linux, Android, and iOS. It currently facilitates HTTP and GraphQL API testing, with features like response visualization and code generation in languages such as JavaScript, Python, and Java. The project idea, discussed at [API Dash Discussions](https://github.com/foss42/apidash/discussions/565/), addresses the need to support additional protocols essential for real-time communication and high-performance systems, as outlined in related issues: [#15](https://github.com/foss42/apidash/issues/15), [#115](https://github.com/foss42/apidash/issues/115), [#116](https://github.com/foss42/apidash/issues/116), and [#14](https://github.com/foss42/apidash/issues/14).

The protocols in focus—WebSocket, SSE, MQTT, and gRPC—serve diverse purposes:

- **WebSocket** enables full-duplex communication over a single TCP connection, ideal for real-time web applications like chat and live updates.
- **SSE** is a server-push technology for unidirectional updates from server to client, suitable for live data feeds.
- **MQTT**, a lightweight messaging protocol, is designed for IoT devices, supporting publish-subscribe messaging.
- **gRPC**, using HTTP/2 and Protocol Buffers, facilitates high-performance RPC calls with features like bi-directional streaming and load balancing.
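As one example of the per-protocol details a correct implementation must honor, RFC 6455 requires every client-to-server WebSocket frame to be XOR-masked with a fresh 4-byte key. A minimal, illustrative Python sketch (libraries such as `web_socket_channel` handle this internally):

```python
def mask_payload(payload: bytes, masking_key: bytes) -> bytes:
    """XOR-mask a WebSocket payload with a 4-byte key (RFC 6455, section 5.3).

    The same function also unmasks, since XOR is its own inverse.
    """
    return bytes(b ^ masking_key[i % 4] for i, b in enumerate(payload))

key = bytes([0x12, 0x34, 0x56, 0x78])  # a real client draws a fresh random key per frame
masked = mask_payload(b"hello", key)
assert mask_payload(masked, key) == b"hello"  # unmasking round-trips
```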

This expansion aligns with the growing demand for tools supporting real-time and IoT applications, positioning API Dash as a comprehensive solution.

#### Project Objectives and Scope

The primary objectives include:

1. **Protocol Support Implementation**: Develop modules to handle WebSocket, SSE, MQTT, and gRPC, ensuring compliance with their specifications.
2. **User Interface Enhancements**: Design intuitive UIs for each protocol, maintaining consistency with API Dash's existing design and supporting features like connection management and message handling.
3. **Visualization Tools**: Create components for displaying requests, responses, and events, with features like syntax highlighting and real-time updates.
4. **Code Generation**: Extend the existing code generation functionality to support these protocols in multiple programming languages, ensuring accuracy and efficiency.
5. **Documentation and Testing**: Provide comprehensive documentation and implement thorough testing to ensure reliability.

The project is rated medium-high in difficulty, requires skills in understanding specs/protocols, UX design, Dart, and Flutter, and is estimated at 350 hours, as per the project idea table:

| Feature | Details |
|---------|---------|
| API Types Supported | HTTP (✅), GraphQL (✅), SSE (#116), WebSocket (#15), MQTT (#115), gRPC (#14) |
| Import Collection From | Postman (✅), cURL (✅), Insomnia (✅), OpenAPI (#121), hurl (#123), HAR (#122) |
| Code Generation Languages/Libraries | cURL, HAR, C (libcurl), C# (HttpClient, RestSharp), Dart (http, dio), Go (net/http), JavaScript (axios, fetch, node.js axios, node.js fetch), Java (asynchttpclient, HttpClient, okhttp3, Unirest), Julia (HTTP), Kotlin (okhttp3), PHP (curl, guzzle, HTTPlug), Python (requests, http.client), Ruby (faraday, net/http), Rust (hyper, reqwest, ureq, Actix Client), Swift (URLSession) |
| MIME Types for Response Preview | PDF (application/pdf), Various Videos (video/mp4, video/webm, etc.), Images (image/apng, image/avif, etc.), Audio (audio/flac, audio/mpeg, etc.), CSV (text/csv), Syntax Highlighted (application/json, application/xml, etc.) |
| Download Links | iOS/iPad: [App Store](https://apps.apple.com/us/app/api-dash-api-client-testing/id6711353348), macOS: [Release](https://github.com/foss42/apidash/releases/latest/download/apidash-macos.dmg), Windows: [Release](https://github.com/foss42/apidash/releases/latest/download/apidash-windows-x86_64.exe), Linux (deb, rpm, PKGBUILD): [Installation Guide](https://github.com/foss42/apidash/blob/main/INSTALLATION.md) |

#### Methodology and Implementation Details

The implementation will proceed in phases:

1. **Research and Specification Analysis**: Analyze the specifications of each protocol to understand communication models. For instance, WebSocket uses a single TCP connection for full-duplex communication, while gRPC leverages HTTP/2 and Protocol Buffers for RPC calls.
2. **Architecture Design**: Design the core library to integrate the new protocols, ensuring modularity. This involves creating interfaces for protocol handlers and ensuring compatibility with Flutter's cross-platform nature.
3. **Implementation**: Use established Dart packages for efficiency:
   - WebSocket: Leverage `web_socket_channel` for connection and message handling.
   - SSE: Utilize the `http` package for HTTP-based event streaming.
   - MQTT: Use `mqtt_client` for broker connections and publish-subscribe functionality.
   - gRPC: Employ the `grpc` package, handling .proto file parsing and method calls.

   Develop Flutter UIs for each protocol, ensuring responsiveness across platforms, including mobile devices.
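As a taste of the wire-level details these packages encapsulate, here is a hedged Python sketch of MQTT's "Remaining Length" variable-length integer encoding from the MQTT 3.1.1 specification (the project itself would rely on `mqtt_client` in Dart rather than hand-rolling this):

```python
def encode_remaining_length(n: int) -> bytes:
    """Encode an MQTT 'Remaining Length' as a variable-length integer.

    Each byte carries 7 bits of the value; the high bit signals that
    another byte follows (MQTT 3.1.1, section 2.2.3). Values up to
    268,435,455 fit in at most four bytes.
    """
    if not 0 <= n <= 268_435_455:
        raise ValueError("remaining length out of range")
    out = bytearray()
    while True:
        n, digit = divmod(n, 128)
        out.append(digit | 0x80 if n > 0 else digit)
        if n == 0:
            return bytes(out)

assert encode_remaining_length(0) == b"\x00"
assert encode_remaining_length(321) == b"\xc1\x02"  # worked example from the spec
```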
4. **Testing and Validation**: Write unit tests for protocol handlers and integration tests for UI interactions. Test with sample APIs and real-world scenarios, such as connecting to public MQTT brokers or gRPC services.
5. **Documentation**: Update the documentation at the [GitHub Repository](https://github.com/foss42/apidash) with guides, including examples for connecting to WebSocket endpoints or calling gRPC methods.

| **Type of Test** | **Description** | **Examples for Protocols** |
|------------------|-----------------|----------------------------|
| **Unit Tests** | Test individual components in isolation to verify functionality. | - Verify WebSocket message encoding/decoding.<br>- Test gRPC .proto file parsing.<br>- Check MQTT QoS level handling. |
| **Widget Tests** | Validate UI components to ensure user interactions work as expected. | - Test WebSocket URL input field.<br>- Verify SSE event display.<br>- Check gRPC method selection UI. |
| **Integration Tests** | Ensure all components work together for the complete feature flow. | - Test connecting to a WebSocket server, sending a message, and receiving a response.<br>- Verify MQTT subscribe/publish flow.<br>- Validate gRPC unary call end-to-end. |
| **Code Generation Tests** | Verify that generated code for each protocol in supported languages is correct and functional. | - Ensure WebSocket code in JavaScript uses standard APIs.<br>- Validate Python MQTT code with the `paho-mqtt` library.<br>- Check gRPC code generation for Java. |
| **Cross-Platform Tests** | Run tests on different platforms to ensure compatibility and consistent behavior. | - Test WebSocket on macOS, Windows, Linux, Android, and iOS.<br>- Verify SSE on mobile devices.<br>- Ensure gRPC works across desktop and mobile. |
| **Edge Case and Error Handling Tests** | Test scenarios like connection failures, invalid inputs, and large data sets to ensure stability. | - Test WebSocket connection failure.<br>- Verify SSE handles invalid event streams.<br>- Check gRPC with invalid .proto files.<br>- Test MQTT with wrong broker credentials. |
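To make the unit-test row concrete, here is an illustrative Python sketch (API Dash's actual tests are written in Dart) of the kind of pure, protocol-level logic worth unit-testing, using MQTT topic-filter matching as the example:

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Check an MQTT topic against a subscription filter.

    Supports '+' (matches exactly one level) and '#' (matches the rest
    of the topic), per MQTT 3.1.1 topic semantics. '$'-prefixed system
    topics and malformed filters are not handled in this sketch.
    """
    f_levels, t_levels = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return True  # matches this level and everything below
        if i >= len(t_levels) or (f != "+" and f != t_levels[i]):
            return False
    return len(f_levels) == len(t_levels)

# The kind of assertions such a unit test would make:
assert topic_matches("sensor/+/temp", "sensor/kitchen/temp")
assert topic_matches("sensor/#", "sensor/kitchen/temp/raw")
assert not topic_matches("sensor/+/temp", "sensor/kitchen/humidity")
```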

Specific features for each protocol include:

- **WebSocket**: Connection establishment with URL and headers, sending/receiving messages, and real-time visualization with timestamps.
- **SSE**: Connecting to endpoints, displaying events with data and type, and handling reconnection with retry intervals.
- **MQTT**: Broker connection with authentication, topic subscription/publishing, and QoS level management, with visualization of message history.
- **gRPC**: Importing .proto files, selecting services/methods, inputting parameters, and displaying responses, initially focusing on unary calls with potential for streaming.
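To illustrate what unary gRPC calls involve at the wire level, gRPC wraps each serialized protobuf message in a 5-byte length-prefixed frame before sending it over HTTP/2. A hedged Python sketch (the Dart `grpc` package handles this internally):

```python
import struct

def frame_grpc_message(message: bytes, compressed: bool = False) -> bytes:
    """Wrap a serialized protobuf message in gRPC's 5-byte frame header.

    The header is a 1-byte compressed flag followed by a 4-byte
    big-endian message length (the gRPC-over-HTTP/2 wire format).
    """
    return struct.pack(">BI", 1 if compressed else 0, len(message)) + message

frame = frame_grpc_message(b"\x0a\x03abc")  # a 5-byte serialized message
assert frame[:5] == b"\x00\x00\x00\x00\x05"  # flag 0, length 5
```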

#### Expected Outcomes and Impact

The project will deliver:

- Full support for testing WebSocket, SSE, MQTT, and gRPC APIs, enhancing API Dash's versatility.
- Intuitive UIs for protocol interactions, ensuring a seamless user experience across platforms.
- Advanced visualization tools, such as syntax-highlighted message logs and real-time updates, improving data inspection.
- Code generation for integrating APIs in languages like JavaScript, Python, and Java, using standard libraries (e.g., the WebSocket API for JavaScript, `paho-mqtt` for Python MQTT).
- Comprehensive documentation, aiding developers in leveraging the new features.
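API Dash's Dart code generators render Jinja-style templates (`jj.Template`); the same approach can be sketched in Python with `string.Template`. The template text and endpoint below are hypothetical, purely to illustrate how a request model could be turned into a JavaScript WebSocket snippet:

```python
from string import Template

# Hypothetical template mirroring how a generator might render a
# JavaScript WebSocket snippet from a request model.
JS_WEBSOCKET_TEMPLATE = Template("""\
const ws = new WebSocket("$url");
ws.onopen = () => ws.send($message);
ws.onmessage = (e) => console.log(e.data);
""")

snippet = JS_WEBSOCKET_TEMPLATE.substitute(
    url="wss://echo.example.com/socket",  # placeholder endpoint
    message='"ping"',
)
assert 'new WebSocket("wss://echo.example.com/socket")' in snippet
```

Per-language escaping of the request body (quotes, newlines) is the tricky part such generators must test carefully.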

API Dash's focus on cross-platform compatibility, crucial for mobile users, can also expand its adoption in mobile development. This aligns with its current support for iOS and Android, as seen in download links like the [App Store](https://apps.apple.com/us/app/api-dash-api-client-testing/id6711353348).

The benefits include empowering developers working on real-time applications, IoT projects, and microservices by providing a unified tool. This will enhance productivity and application quality, contributing to the open-source community.



*Testing Diagram*

#### Potential Challenges and Considerations

Several challenges may arise:

- **Protocol Complexity**: Ensuring compliance with specifications, especially for gRPC with Protocol Buffers and streaming calls.
- **User Interface Design**: Balancing intuitive design with the diverse interaction models of each protocol, while maintaining consistency.
- **Performance**: Handling real-time data streams without impacting UI responsiveness, particularly on mobile devices.
- **Code Generation**: Generating accurate code for multiple languages, considering protocol-specific libraries and best practices.
- **Cross-Platform Compatibility**: Ensuring all features work seamlessly across macOS, Windows, Linux, Android, and iOS, addressing platform-specific issues.

Solutions include leveraging established Dart packages, following existing UI patterns, optimizing asynchronous programming, researching language-specific libraries, and extensive platform testing.

#### Conclusion

This project to integrate WebSocket, SSE, MQTT, and gRPC into API Dash will significantly enhance its capabilities, making it a comprehensive tool for API testing and development. It offers valuable experience in protocol implementation, UX design, and cross-platform development, benefiting the open-source community and developers worldwide.

---

### Key Citations

- [API Dash Discussions Project Ideas List](https://github.com/foss42/apidash/discussions/565/)
- [API Dash Issue WebSocket Support](https://github.com/foss42/apidash/issues/15)
- [API Dash Issue MQTT Support](https://github.com/foss42/apidash/issues/115)
- [API Dash Issue SSE Support](https://github.com/foss42/apidash/issues/116)
- [API Dash Issue gRPC Support](https://github.com/foss42/apidash/issues/14)
- [API Dash iOS App Download Page](https://apps.apple.com/us/app/api-dash-api-client-testing/id6711353348)
- [API Dash macOS Release Download Page](https://github.com/foss42/apidash/releases/latest/download/apidash-macos.dmg)
- [API Dash Windows Release Download Page](https://github.com/foss42/apidash/releases/latest/download/apidash-windows-x86_64.exe)
- [API Dash Linux Installation Guide Page](https://github.com/foss42/apidash/blob/main/INSTALLATION.md)
### Weekly Timeline

| **Week** | **Tasks** |
|----------|-----------|
| **Week 1** | Conduct research on WebSocket, SSE, MQTT, and gRPC specifications. Analyze the existing API Dash architecture and finalize the technical stack. |
| **Week 2** | Design the core library architecture for integrating new protocols. Outline UI requirements for protocol interactions. |
| **Week 3** | Implement WebSocket support: connection establishment, message sending/receiving, and real-time visualization. Write unit tests. |
| **Week 4** | Implement SSE support: event stream handling, automatic reconnection, and real-time event visualization. Conduct initial testing. |
| **Week 5** | Develop MQTT support: broker connection, topic subscription/publishing, QoS management. Implement visualization for messages. |
| **Week 6** | Implement gRPC support: import .proto files, select services/methods, send requests, and visualize responses. Focus on unary calls first. |
| **Week 7** | Extend code generation features for WebSocket, SSE, MQTT, and gRPC in JavaScript, Python, and Java. Validate generated code. |
| **Week 8** | Conduct integration testing for all protocols. Optimize performance and ensure cross-platform compatibility (macOS, Windows, Linux, Android, iOS). |
| **Week 9** | Improve UI/UX for seamless protocol interactions. Add customization options and refine error handling mechanisms. |
| **Week 10** | Perform extensive testing with real-world APIs. Fix bugs, optimize stability, and ensure a smooth user experience. |
| **Week 11** | Finalize documentation, create user guides, and add example implementations. Conduct final performance testing. |
| **Week 12** | Prepare the project demo, finalize the submission, and gather community feedback for improvements. |

This timeline ensures systematic progress while allowing flexibility for testing and optimization.
@@ -84,7 +84,7 @@ class ApidashTestRequestHelper {

     var headerCells = find.descendant(
         of: find.byType(EditRequestHeaders),
-        matching: find.byType(HeaderField));
+        matching: find.byType(EnvHeaderField));
     var valueCells = find.descendant(
         of: find.byType(EditRequestHeaders),
         matching: find.byType(EnvCellField));
@@ -95,7 +95,7 @@ class ApidashTestRequestHelper {
     tester.testTextInput.enterText(keyValuePairs[i].$2);
     headerCells = find.descendant(
         of: find.byType(EditRequestHeaders),
-        matching: find.byType(HeaderField));
+        matching: find.byType(EnvHeaderField));
     valueCells = find.descendant(
         of: find.byType(EditRequestHeaders),
         matching: find.byType(EnvCellField));
@@ -37,14 +37,16 @@ let multipartFormData = try! MultipartFormData(boundary: boundary) {
 ''';

 final String kTemplateJsonData = '''
-let parameters = "{{jsonData}}"
-let postData = parameters.data(using: .utf8)
+let postData = """
+{{jsonData}}
+""".data(using: .utf8)

 ''';

 final String kTemplateTextData = '''
-let parameters = "{{textData}}"
-let postData = parameters.data(using: .utf8)
+let postData = """
+{{textData}}
+""".data(using: .utf8)

 ''';
@@ -61,15 +63,23 @@ request.addValue("{{value}}", forHTTPHeaderField: "{{header}}")

 """;

-final String kTemplateBody = """
+final String kTemplateFormDataBody = """
 request.httpBody = try! multipartFormData.encode()
 """;

+final String kTemplateJsonTextBody = """
+request.httpBody = postData
+
+""";
+
 final String kTemplateEnd = """
 let semaphore = DispatchSemaphore(value: 0)

 let task = URLSession.shared.dataTask(with: request) { data, response, error in
   defer { semaphore.signal() }

   if let error = error {
-    print("Error: (error.localizedDescription)")
+    print("Error: \\(error.localizedDescription)")
     return
   }
   guard let data = data else {
@@ -77,30 +87,31 @@ let task = URLSession.shared.dataTask(with: request) { data, response, error in
     return
   }
   if let responseString = String(data: data, encoding: .utf8) {
-    print("Response: (responseString)")
+    print("Response: \\(responseString)")
   }
 }

 task.resume()

 semaphore.wait()
 """;


 String? getCode(HttpRequestModel requestModel) {
   try {
     String result = kTemplateStart;

-    if (requestModel.hasFormData) {
-      result += kTemplateFormDataImport;
-    }
-
-    var rec =
-        getValidRequestUri(requestModel.url, requestModel.enabledParams);
+    var rec = getValidRequestUri(requestModel.url, requestModel.enabledParams);
     Uri? uri = rec.$1;

+    if (requestModel.hasFormData) {
+      result += kTemplateFormDataImport;
+
       var formDataList = requestModel.formDataMapList.map((param) {
         if (param['type'] == 'file') {
           final filePath = param['value'] as String;
           final fileName = path.basename(filePath);
           final fileExtension =
               path.extension(fileName).toLowerCase().replaceFirst('.', '');
           return {
             'type': 'file',
@@ -122,17 +133,19 @@ task.resume()
       result += templateFormData.render({
         "formData": formDataList,
       });
-    } else if (requestModel.hasJsonData) {
+    }
+    // Handle JSON data
+    else if (requestModel.hasJsonData) {
       var templateJsonData = jj.Template(kTemplateJsonData);
       result += templateJsonData.render({
-        "jsonData":
-            requestModel.body!.replaceAll('"', '\\"').replaceAll('\n', '\\n'),
+        "jsonData": requestModel.body!
       });
-    } else if (requestModel.hasTextData) {
+    }
+    // Handle text data
+    else if (requestModel.hasTextData) {
       var templateTextData = jj.Template(kTemplateTextData);
       result += templateTextData.render({
-        "textData":
-            requestModel.body!.replaceAll('"', '\\"').replaceAll('\n', '\\n'),
+        "textData": requestModel.body!
       });
     }
@@ -144,19 +157,21 @@ task.resume()

     var headers = requestModel.enabledHeadersMap;
     if (requestModel.hasFormData) {
-      headers.putIfAbsent("Content-Type",
-          () => "multipart/form-data; boundary=(boundary.stringValue)");
-    } else if (requestModel.hasJsonData || requestModel.hasTextData) {
-      headers.putIfAbsent(
-          kHeaderContentType, () => requestModel.bodyContentType.header);
-    }
+      headers['Content-Type'] =
+          "multipart/form-data; boundary=\\(boundary.stringValue)";
+    } else if (requestModel.hasJsonData || requestModel.hasTextData) {
+      headers['Content-Type'] = 'application/json';
+    }

     if (headers.isNotEmpty) {
       var templateHeader = jj.Template(kTemplateHeaders);
       result += templateHeader.render({"headers": headers});
     }

-    if (requestModel.hasFormData || requestModel.hasBody) {
-      result += kTemplateBody;
+    if (requestModel.hasFormData) {
+      result += kTemplateFormDataBody;
+    } else if (requestModel.hasJsonData || requestModel.hasTextData) {
+      result += kTemplateJsonTextBody;
     }

     result += kTemplateEnd;
@ -1,16 +1,18 @@
|
||||
export 'api_type_dropdown.dart';
|
||||
export 'button_navbar.dart';
|
||||
export 'code_pane.dart';
|
||||
export 'editor_title.dart';
|
||||
export 'editor_title_actions.dart';
|
||||
export 'envfield_url.dart';
|
||||
export 'editor_title.dart';
|
||||
export 'env_regexp_span_builder.dart';
|
||||
export 'env_trigger_field.dart';
|
||||
export 'env_trigger_options.dart';
|
||||
export 'envfield_cell.dart';
|
||||
export 'envfield_header.dart';
|
||||
export 'envfield_url.dart';
|
||||
export 'environment_dropdown.dart';
|
||||
export 'envvar_indicator.dart';
|
||||
export 'envvar_span.dart';
|
||||
export 'envvar_popover.dart';
|
||||
export 'env_trigger_options.dart';
|
||||
export 'field_header.dart';
|
||||
export 'envvar_span.dart';
|
||||
export 'sidebar_filter.dart';
|
||||
export 'sidebar_header.dart';
|
||||
export 'sidebar_save_button.dart';
|
||||
|
@@ -32,6 +32,7 @@ class EnvCellField extends StatelessWidget {
       focusNode: focusNode,
       style: kCodeStyle.copyWith(
         color: clrScheme.onSurface,
+        fontSize: Theme.of(context).textTheme.bodyMedium?.fontSize,
       ),
       decoration: getTextFieldInputDecoration(
         clrScheme,
@ -4,8 +4,8 @@ import 'package:multi_trigger_autocomplete_plus/multi_trigger_autocomplete_plus.
|
||||
import 'package:apidash/utils/utils.dart';
|
||||
import 'envfield_cell.dart';
|
||||
|
||||
class HeaderField extends StatefulWidget {
|
||||
const HeaderField({
|
||||
class EnvHeaderField extends StatefulWidget {
|
||||
const EnvHeaderField({
|
||||
super.key,
|
||||
required this.keyId,
|
||||
this.hintText,
|
||||
@ -20,10 +20,10 @@ class HeaderField extends StatefulWidget {
|
||||
final ColorScheme? colorScheme;
|
||||
|
||||
@override
|
||||
State<HeaderField> createState() => _HeaderFieldState();
|
||||
State<EnvHeaderField> createState() => _EnvHeaderFieldState();
|
||||
}
|
||||
|
||||
class _HeaderFieldState extends State<HeaderField> {
|
||||
class _EnvHeaderFieldState extends State<EnvHeaderField> {
|
||||
final FocusNode focusNode = FocusNode();
|
||||
@override
|
||||
Widget build(BuildContext context) {
|
@@ -103,7 +103,7 @@ class EditRequestHeadersState extends ConsumerState<EditRequestHeaders> {
           ),
         ),
         DataCell(
-          HeaderField(
+          EnvHeaderField(
             keyId: "$selectedId-$index-headers-k-$seed",
             initialValue: headerRows[index].name,
             hintText: kHintAddName,
@@ -102,7 +102,7 @@ class _TextFieldEditorState extends State<TextFieldEditor> {
           ),
           filled: true,
           hoverColor: kColorTransparent,
-          fillColor: Theme.of(context).colorScheme.surfaceContainerLow,
+          fillColor: Theme.of(context).colorScheme.surfaceContainerLowest,
         ),
       ),
     );
@@ -167,7 +167,7 @@ class _JsonTextFieldEditorState extends State<JsonTextFieldEditor> {
           ),
           filled: true,
           hoverColor: kColorTransparent,
-          fillColor: Theme.of(context).colorScheme.surfaceContainerLow,
+          fillColor: Theme.of(context).colorScheme.surfaceContainerLowest,
         ),
       ),
     ),
@@ -1,5 +1,4 @@
 import 'dart:io';
-import 'dart:collection';
 import 'package:flutter/foundation.dart';
 import 'package:http/http.dart' as http;
 import 'package:http/io_client.dart';
@@ -15,7 +14,7 @@ class HttpClientManager {
   static final HttpClientManager _instance = HttpClientManager._internal();
   static const int _maxCancelledRequests = 100;
   final Map<String, http.Client> _clients = {};
-  final Queue<String> _cancelledRequests = Queue();
+  final Set<String> _cancelledRequests = {};

   factory HttpClientManager() {
     return _instance;
@@ -38,9 +37,9 @@ class HttpClientManager {
       _clients[requestId]?.close();
       _clients.remove(requestId);

-      _cancelledRequests.addLast(requestId);
-      while (_cancelledRequests.length > _maxCancelledRequests) {
-        _cancelledRequests.removeFirst();
+      _cancelledRequests.add(requestId);
+      if (_cancelledRequests.length > _maxCancelledRequests) {
+        _cancelledRequests.remove(_cancelledRequests.first);
       }
     }
   }
@@ -49,6 +48,10 @@ class HttpClientManager {
     return _cancelledRequests.contains(requestId);
   }

+  void removeCancelledRequest(String requestId) {
+    _cancelledRequests.remove(requestId);
+  }
+
   void closeClient(String requestId) {
     if (_clients.containsKey(requestId)) {
       _clients[requestId]?.close();
@@ -19,6 +19,9 @@ Future<(HttpResponse?, Duration?, String?)> sendHttpRequest(
   SupportedUriSchemes defaultUriScheme = kDefaultUriScheme,
   bool noSSL = false,
 }) async {
+  if (httpClientManager.wasRequestCancelled(requestId)) {
+    httpClientManager.removeCancelledRequest(requestId);
+  }
   final client = httpClientManager.createClient(requestId, noSSL: noSSL);

   (Uri?, String?) uriRec = getValidRequestUri(
@@ -71,37 +74,27 @@ Future<(HttpResponse?, Duration?, String?)> sendHttpRequest(
             }
           }
           http.StreamedResponse multiPartResponse =
-              await multiPartRequest.send();
+              await client.send(multiPartRequest);

           stopwatch.stop();
           http.Response convertedMultiPartResponse =
               await convertStreamedResponse(multiPartResponse);
           return (convertedMultiPartResponse, stopwatch.elapsed, null);
         }
       }
-      switch (requestModel.method) {
-        case HTTPVerb.get:
-          response = await client.get(requestUrl, headers: headers);
-          break;
-        case HTTPVerb.head:
-          response = await client.head(requestUrl, headers: headers);
-          break;
-        case HTTPVerb.post:
-          response =
-              await client.post(requestUrl, headers: headers, body: body);
-          break;
-        case HTTPVerb.put:
-          response =
-              await client.put(requestUrl, headers: headers, body: body);
-          break;
-        case HTTPVerb.patch:
-          response =
-              await client.patch(requestUrl, headers: headers, body: body);
-          break;
-        case HTTPVerb.delete:
-          response =
-              await client.delete(requestUrl, headers: headers, body: body);
-          break;
-      }
+      response = switch (requestModel.method) {
+        HTTPVerb.get => await client.get(requestUrl, headers: headers),
+        HTTPVerb.head => await client.head(requestUrl, headers: headers),
+        HTTPVerb.post =>
+          await client.post(requestUrl, headers: headers, body: body),
+        HTTPVerb.put =>
+          await client.put(requestUrl, headers: headers, body: body),
+        HTTPVerb.patch =>
+          await client.patch(requestUrl, headers: headers, body: body),
+        HTTPVerb.delete =>
+          await client.delete(requestUrl, headers: headers, body: body),
+      };
     }
     if (apiType == APIType.graphql) {
       var requestBody = getGraphQLBody(requestModel);
@@ -10,6 +10,7 @@ class ADDropdownButton<T> extends StatelessWidget {
     this.isExpanded = false,
     this.isDense = false,
     this.iconSize,
+    this.fontSize,
     this.dropdownMenuItemPadding = kPs8,
     this.dropdownMenuItemtextStyle,
   });
@@ -20,6 +21,7 @@ class ADDropdownButton<T> extends StatelessWidget {
   final bool isExpanded;
   final bool isDense;
   final double? iconSize;
+  final double? fontSize;
   final EdgeInsetsGeometry dropdownMenuItemPadding;
   final TextStyle? Function(T)? dropdownMenuItemtextStyle;

@@ -38,6 +40,7 @@ class ADDropdownButton<T> extends StatelessWidget {
       elevation: 4,
       style: kCodeStyle.copyWith(
         color: Theme.of(context).colorScheme.primary,
+        fontSize: fontSize ?? Theme.of(context).textTheme.bodyMedium?.fontSize,
       ),
       underline: Container(
         height: 0,
test/screens/common_widgets/envfield_header_test.dart (new file, 50 lines)
@ -0,0 +1,50 @@
import 'package:apidash/screens/common_widgets/envfield_header.dart';
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_portal/flutter_portal.dart';
import 'package:extended_text_field/extended_text_field.dart';
import 'package:spot/spot.dart';

void main() {
  group('EnvHeaderField Widget Tests', () {
    testWidgets('EnvHeaderField renders and displays ExtendedTextField',
        (tester) async {
      await tester.pumpWidget(
        const Portal(
          child: MaterialApp(
            home: Scaffold(
              body: EnvHeaderField(
                keyId: "testKey",
                hintText: "Enter header",
              ),
            ),
          ),
        ),
      );

      spot<EnvHeaderField>().spot<ExtendedTextField>().existsOnce();
    });

    testWidgets('EnvHeaderField calls onChanged when text changes',
        (tester) async {
      String? changedText;
      await tester.pumpWidget(
        Portal(
          child: MaterialApp(
            home: Scaffold(
              body: EnvHeaderField(
                keyId: "testKey",
                hintText: "Enter header",
                onChanged: (text) => changedText = text,
              ),
            ),
          ),
        ),
      );

      await act.tap(spot<EnvHeaderField>().spot<ExtendedTextField>());
      tester.testTextInput.enterText("new header");
      expect(changedText, "new header");
    });
  });
}
@ -1,54 +1,8 @@
import 'package:apidash/screens/common_widgets/field_header.dart';
import 'package:apidash/widgets/menu_header_suggestions.dart';
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_portal/flutter_portal.dart';
import 'package:extended_text_field/extended_text_field.dart';
import 'package:spot/spot.dart';

void main() {
  group('HeaderField Widget Tests', () {
    testWidgets('HeaderField renders and displays ExtendedTextField',
        (tester) async {
      await tester.pumpWidget(
        const Portal(
          child: MaterialApp(
            home: Scaffold(
              body: HeaderField(
                keyId: "testKey",
                hintText: "Enter header",
              ),
            ),
          ),
        ),
      );

      spot<HeaderField>().spot<ExtendedTextField>().existsOnce();
    });

    testWidgets('HeaderField calls onChanged when text changes',
        (tester) async {
      String? changedText;
      await tester.pumpWidget(
        Portal(
          child: MaterialApp(
            home: Scaffold(
              body: HeaderField(
                keyId: "testKey",
                hintText: "Enter header",
                onChanged: (text) => changedText = text,
              ),
            ),
          ),
        ),
      );

      await act.tap(spot<HeaderField>().spot<ExtendedTextField>());
      tester.testTextInput.enterText("new header");
      expect(changedText, "new header");
    });
  });

  group('HeaderSuggestions Widget Tests', () {
    testWidgets('HeaderSuggestions displays suggestions correctly',
        (tester) async {